# On the value of the fifth maximal projection constant
Beata Derȩgowska¹, Matthew Fickus², Simon Foucart³, Barbara Lewandowska⁴

¹ Institute of Mathematics, Pedagogical University of Krakow, Podchorazych 2, Krakow, 30-084, Poland
² Department of Mathematics and Statistics, Air Force Institute of Technology, Wright-Patterson AFB, OH 45433, USA
³ Department of Mathematics and Institute of Data Science, Texas A&M University, College Station, TX 77843, USA
⁴ Faculty of Mathematics and Computer Science, Jagiellonian University, Lojasiewicza 6, Krakow, 30-048, Poland

B.D. is partially supported by National Science Center (NCN) grant no. 2021/05/X/ST1/01212; for the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission. S.F. is partially supported by grants from the NSF (DMS-2053172) and from the ONR (N00014-20-1-2787). The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.
###### Abstract
Let $\lambda(m)$ denote the maximal absolute projection constant over real
$m$-dimensional subspaces. This quantity is extremely hard to determine
exactly, as testified by the fact that the only known value of $\lambda(m)$
for $m>1$ is $\lambda(2)=4/3$. There is also numerical evidence indicating
that $\lambda(3)=(1+\sqrt{5})/2$. In this paper, relying on a new construction
of certain mutually unbiased equiangular tight frames, we show that
$\lambda(5)\geq 5(11+6\sqrt{5})/59\approx 2.06919$. This value coincides with
the numerical estimation of $\lambda(5)$ obtained by B. L. Chalmers, thus
reinforcing the belief that this is the exact value of $\lambda(5)$.
###### keywords:
maximal absolute projection constant , maximal relative projection constant ,
equiangular tight frames , real mutually unbiased equiangular tight frames
###### MSC:
41A65 , 41A44 , 46B20 , 15A42 , 42C15
Journal: Journal of Functional Analysis
## 1 Introduction
Let $X$ be a real Banach space and $Y\subset X$ be a finite-dimensional
subspace. Let $\mathcal{P}(X,Y)$ denote the set of all linear and continuous
projections from $X$ onto $Y$, recalling that an operator $P\colon
X\rightarrow Y$ is called a projection onto $Y$ if $P|_{Y}={\rm Id}_{Y}.$ We
define the relative projection constant of $Y$ by
$\lambda(Y,X):=\inf\{\|P\|:\;P\in\mathcal{P}(X,Y)\}$
and the absolute projection constant of $Y$ by
$\lambda(Y):=\sup\{\lambda(Y,X):Y\subset X\}.$ (1)
The literature also deals with the maximal absolute projection constant, which
is defined by
$\lambda(m):=\sup\{\lambda(Y):\;\dim(Y)=m\}.$
By the Kadec–Snobar theorem (see [17]), we have $\lambda(m)\leq\sqrt{m}$.
Moreover, it has been shown in [18] that this estimate is asymptotically the
best possible. However, the determination of the constant $\lambda(m)$ seems
to be difficult: apart from $\lambda(1)=1$, the only known value of
$\lambda(m)$ is $\lambda(2)=4/3$ — this is the Grünbaum conjecture, formulated in
[14] and proved in [6]. Numerical computations presented in [13] indicate that
$\lambda(3)$ should equal $(1+\sqrt{5})/2$ — this was stated, with an
erroneous proof, in [19]. Other numerical experiments conducted by B. L.
Chalmers (and unfortunately unpublished) suggest that $\lambda(5)\approx
2.06919$. In this article, we show that
$\lambda(5)\geq 5(11+6\sqrt{5})/59\approx 2.06919.$
Viewed in isolation, this could seem anecdotal. However, several sources of
evidence hint that this is the actual value of $\lambda(5)$. This comes as a
surprise, because it was increasingly believed that obtaining exact formulas for $\lambda(m)$ was an unreasonable quest. Now there is hope that this quest can be fulfilled after all.
To establish the announced lower bound, we make a detour via maximal relative
projection constants. Recent results concerning maximal relative and absolute
projection constants can be found in [1, 2, 4, 13, 21]. Here, we only give the
definition of the maximal relative projection constant for $n\geq m$ as
$\lambda(m,n):=\sup\{\lambda(Y,l_{\infty}^{(n)}):\;\dim(Y)=m\textrm{ and }Y\subset l_{\infty}^{(n)}\}.$
This is motivated by the fact that, in the expression (1) of $\lambda(m)$, it
suffices to take the supremum over finite-dimensional $l_{\infty}$ superspaces
(see e.g. [22, III.B.5]), so that the nondecreasing sequence
$(\lambda(m,n))_{n\geq m}$ converges to $\lambda(m)$. In reality, there even
is an $N\in\mathbb{N}$ such that $\lambda(m,n)=\lambda(m)$ for all $n\geq N$
(see [1, Theorem 1.4]). Our estimation of $\lambda(m,n)$ will rely on the
following result proved in [5].
###### Theorem 1.1
For integers $n\geq m$, one has
$\lambda(m,n)=\max\bigg\{\sum_{i,j=1}^{n}t_{i}t_{j}|U^{\top}U|_{ij}:t\in\mathbb{R}^{n},\;\|t\|_{2}=1,\;U\in\mathbb{R}^{m\times n},\;UU^{\top}={\rm I}_{m}\bigg\}.$
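To make this maximization concrete, here is a minimal Python sketch (with illustrative sizes $m=5$ and $n=16$; the variable names are ours) that evaluates the objective at a random feasible pair $(t,U)$; by Theorem 1.1, any such evaluation is a lower bound on $\lambda(m,n)$.

```python
# Sketch: evaluate the objective of Theorem 1.1 at a random feasible (t, U).
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 16  # illustrative sizes

# U with orthonormal rows (U U^T = I_m), obtained via a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
U = Q.T

# Unit-norm vector t.
t = rng.standard_normal(n)
t /= np.linalg.norm(t)

assert np.allclose(U @ U.T, np.eye(m))
lower_bound = t @ np.abs(U.T @ U) @ t  # any feasible pair lower-bounds lambda(m, n)
```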
Although this theorem provides an essential tool for estimating the maximal
relative projection constants, computing their exact values remains a
challenging problem, carried out in just a few cases (see e.g. [1, 5, 13]).
One particular situation where an explicit formula is available involves
equiangular tight frames. Let us recall that a system of unit (i.e.,
$l_{2}$-normalized) vectors $(v_{1},\dots,v_{n})$ in $\mathbb{R}^{m}$ is
called equiangular if there is a constant $c\geq 0$ such that
$|\langle v_{i},v_{j}\rangle|=c\qquad\textrm{ for all
}i,j\in\\{1,\dots,n\\},\;i\neq j.$
It is called a tight frame if
$VV^{\top}=\frac{n}{m}{\rm I}_{m},$
where $V$ is the matrix with columns $v_{1},\dots,v_{n}$. The system
$(v_{1},\dots,v_{n})$ of unit vectors is called an equiangular tight frame if
it is both equiangular and a tight frame. For an equiangular tight frame of
$n$ unit vectors in $\mathbb{R}^{m}$, it is well known (see e.g. [12, Theorem
5.7]) that
$|\langle v_{i},v_{j}\rangle|=\sqrt{\frac{n-m}{m(n-1)}}\qquad\textrm{ for all
}i,j\in\\{1,\dots,n\\},\;i\neq j.$
The above-mentioned explicit formula is presented as part of the result below.
Built from Theorems 1 and 2 of [20], it appeared in a slightly different form
as Theorem 5 in [13]. A new self-contained proof is included later as an
appendix.
###### Theorem 1.2
For integers $n\geq m$, the maximal relative projection constant
$\lambda(m,n)$ is upper bounded by
$\delta_{m,n}:=\frac{m}{n}\left(1+\sqrt{\frac{(n-1)(n-m)}{m}}\right).$
Moreover, the equality $\lambda(m,n)=\delta_{m,n}$ occurs if and only if there
is an equiangular tight frame for $\mathbb{R}^{m}$ consisting of $n$ unit
vectors.
###### Remark 1.1
We note in passing that $\delta_{m,n}<\sqrt{m}$ for $n\geq m>1$ (thus providing another justification for the Kadec–Snobar estimate). This is seen by applying the Cauchy–Schwarz inequality to the noncollinear vectors $[1,\sqrt{n-1}]$ and $[1,\sqrt{(n-m)/m}]$ in
$\displaystyle\delta_{m,n}$
$\displaystyle=\frac{m}{n}\bigg{(}1+\sqrt{n-1}\sqrt{\frac{n-m}{m}}\bigg{)}<\frac{m}{n}\sqrt{1+n-1}\sqrt{1+\frac{n-m}{m}}=\sqrt{m}.$
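As a quick numerical illustration (a sketch; the function name is ours), the formula for $\delta_{m,n}$ and the bound of Remark 1.1 can be checked directly:

```python
# Sketch: delta_{m,n} from Theorem 1.2 and the bound delta_{m,n} < sqrt(m).
from math import sqrt

def delta(m: int, n: int) -> float:
    return (m / n) * (1 + sqrt((n - 1) * (n - m) / m))

for m in [2, 3, 5, 21]:
    for n in range(m, 200):
        assert delta(m, n) < sqrt(m)

# For instance, delta(5, 16) ~ 2.1077, an upper bound on lambda(5, 16).
```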
In the rest of this paper, we present new explicit lower bounds for
$\lambda(m,n)$ under the condition that certain mutually unbiased equiangular
tight frames for $\mathbb{R}^{m}$ exist (see Theorem 2.3). We then provide a
construction of an infinite family of such mutually unbiased equiangular tight
frames (see Theorem 3.4). Finally, combining these two ingredients, we
highlight the resulting estimation of $\lambda(5,16)$ to arrive at the
promised lower bound for $\lambda(5)$, conjectured to be its true value.
## 2 The Lower Bound
Before stating the main result, we start with an observation about mutually
unbiased equiangular tight frames, formally defined below.
###### Definition 2.1
Two equiangular tight frames $(v_{1},\dots,v_{k})$ and $(w_{1},\dots,w_{l})$
for $\mathbb{R}^{m}$ are mutually unbiased if there exists $c\in\mathbb{R}$
such that
$|\langle v_{i},w_{j}\rangle|=c\qquad\mbox{for all }i\in\\{1,\dots,k\\}\mbox{
and }j\in\\{1,\dots,l\\}.$
This definition generalizes a concept introduced in [9] so as to permit the
case $k\neq l$. We point out that the scalar $c$ is uniquely determined, as
also noted in [3].
###### Lemma 2.1
The constant $c$ appearing in the definition of mutually unbiased equiangular
tight frames for $\mathbb{R}^{m}$ necessarily satisfies
$c=\frac{1}{\sqrt{m}}.$
Proof. Let $(v_{1},\dots,v_{k})$ and $(w_{1},\dots,w_{l})$ be mutually
unbiased equiangular tight frames for $\mathbb{R}^{m}$ and let
$V\in\mathbb{R}^{m\times k}$ be the matrix with columns $v_{1},\dots,v_{k}$.
For any $j\in\\{1,\dots,l\\}$, because the two frames are mutually unbiased,
we have
$\|V^{\top}w_{j}\|_{2}^{2}=\sum_{i=1}^{k}|\langle
v_{i},w_{j}\rangle|^{2}=\sum_{i=1}^{k}c^{2}=kc^{2}.$
Since $(v_{1},\dots,v_{k})$ is a tight frame for $\mathbb{R}^{m}$, we also
have $VV^{\top}=(k/m){\rm I}_{m}$, and so
$\|V^{\top}w_{j}\|_{2}^{2}=\langle V^{\top}w_{j},V^{\top}w_{j}\rangle=\langle
w_{j},VV^{\top}w_{j}\rangle=\Big{\langle}w_{j},\dfrac{k}{m}w_{j}\Big{\rangle}=\dfrac{k}{m}\|w_{j}\|_{2}^{2}=\dfrac{k}{m}.$
It follows that $kc^{2}=k/m$, and hence $c=1/{\sqrt{m}}$, as claimed.
We now present the main theorem of this section, whose statement involves the
quantity $\delta_{m,n}$ introduced in Theorem 1.2.
###### Theorem 2.3
If mutually unbiased equiangular tight frames $(v_{1},\dots,v_{k})$ and
$(w_{1},\dots,w_{l})$ for $\mathbb{R}^{m}$ exist, then the maximal relative
projection constant $\lambda(m,k+l)$ is bounded below as
$\lambda(m,k+l)\geq\frac{m-\delta_{m,k}\delta_{m,l}}{2\sqrt{m}-\delta_{m,k}-\delta_{m,l}}.$
Proof. Let $V\in\mathbb{R}^{m\times k}$ be the matrix with columns
$v_{1},\dots,v_{k}$ and $W\in\mathbb{R}^{m\times l}$ the matrix with columns
$w_{1},\dots,w_{l}$. For any $\theta\in[0,\pi/2]$, let us consider the vector
$t_{\theta}\in\mathbb{R}^{k+l}$ and the matrix
$U_{\theta}\in\mathbb{R}^{m\times(k+l)}$ defined, in block notation, by
$t_{\theta}:=\begin{bmatrix}\cos\theta\dfrac{1}{\sqrt{k}}\mathbb{1}_{k}\\\
\hline\cr\sin\theta\dfrac{1}{\sqrt{l}}\mathbb{1}_{l}\end{bmatrix}\qquad\mbox{and}\qquad
U_{\theta}:=\begin{bmatrix}\;\cos\theta\sqrt{\dfrac{m}{k}}V&\vline&\sin\theta\sqrt{\dfrac{m}{l}}W\;\end{bmatrix},$
(2)
where $\mathbb{1}_{n}$ denotes the $n$-dimensional vector with all entries
equal to $1$. We observe that $\|t_{\theta}\|_{2}=1$, that
$U_{\theta}{U_{\theta}}^{\top}=\cos^{2}\theta\frac{m}{k}VV^{\top}+\sin^{2}\theta\frac{m}{l}WW^{\top}=\cos^{2}\theta\,{\rm I}_{m}+\sin^{2}\theta\,{\rm I}_{m}={\rm I}_{m},$
and that
${U_{\theta}}^{\top}U_{\theta}=\begin{bmatrix}\cos^{2}\theta\dfrac{m}{k}V^{\top}V&\vline&\cos\theta\sin\theta\dfrac{m}{\sqrt{kl}}V^{\top}W\\\
\hline\cr\cos\theta\sin\theta\dfrac{m}{\sqrt{kl}}W^{\top}V&\vline&\sin^{2}\theta\dfrac{m}{l}W^{\top}W\end{bmatrix}.$
(3)
Therefore, according to the expression of $\lambda(m,n)$ from Theorem 1.1, we
can make use of the tight frame and unbiasedness properties of $V$ and $W$ to
obtain, with the shorthand notation $\phi_{m,n}:=\sqrt{(n-m)/(m(n-1))}$,
$\displaystyle\lambda(m,k+l)$
$\displaystyle\geq\sum_{i,j=1}^{k+l}(t_{\theta})_{i}(t_{\theta})_{j}|{U_{\theta}}^{\top}U_{\theta}|_{i,j}$
$\displaystyle=\cos^{2}\theta\frac{1}{k}\times\cos^{2}\theta\frac{m}{k}\times
k+\cos^{2}\theta\frac{1}{k}\times\cos^{2}\theta\frac{m}{k}\phi_{m,k}\times
k(k-1)$
$\displaystyle+\sin^{2}\theta\frac{1}{l}\times\sin^{2}\theta\frac{m}{l}\times
l+\sin^{2}\theta\frac{1}{l}\times\sin^{2}\theta\frac{m}{l}\phi_{m,l}\times
l(l-1)$
$\displaystyle+2\times\cos\theta\sin\theta\frac{1}{\sqrt{kl}}\times\cos\theta\sin\theta\frac{m}{\sqrt{kl}}\frac{1}{\sqrt{m}}\times
kl$
$\displaystyle=\cos^{4}\theta\bigg{(}\frac{m}{k}+\frac{m}{k}(k-1)\phi_{m,k}\bigg{)}+\sin^{4}\theta\bigg{(}\frac{m}{l}+\frac{m}{l}(l-1)\phi_{m,l}\bigg{)}$
$\displaystyle+2\cos^{2}\theta\sin^{2}\theta\sqrt{m}$
$\displaystyle=\bigg{(}\frac{1+\cos(2\theta)}{2}\bigg{)}^{2}\delta_{m,k}+\bigg{(}\frac{1-\cos(2\theta)}{2}\bigg{)}^{2}\delta_{m,l}+\big{(}\sin(2\theta)\big{)}^{2}\frac{\sqrt{m}}{2}.$
Since this is valid for any $\theta\in[0,\pi/2]$, after setting
$x:=\cos(2\theta)$, we arrive at
$\displaystyle\lambda(m,k+l)$
$\displaystyle\geq\max_{x\in[-1,1]}\left(\frac{\delta_{m,k}(1+2x+x^{2})}{4}+\frac{\delta_{m,l}(1-2x+x^{2})}{4}+\frac{\sqrt{m}}{2}(1-x^{2})\right)$
$\displaystyle=\frac{1}{4}\max_{x\in[-1,1]}\left(ax^{2}+2bx+c\right),$
where $a:=\delta_{m,k}+\delta_{m,l}-2\sqrt{m}$,
$b:=\delta_{m,k}-\delta_{m,l}$, and $c:=\delta_{m,k}+\delta_{m,l}+2\sqrt{m}$.
Taking momentarily for granted that $a<0$ and that $x_{*}:=-b/a\in[-1,1]$, we
deduce that
$\displaystyle\lambda(m,k+l)$
$\displaystyle\geq\frac{1}{4}\left(ax_{*}^{2}+2bx_{*}+c\right)=\frac{1}{4}\left(-\frac{b^{2}}{a}+c\right)=\frac{1}{4}\frac{b^{2}-ac}{-a}$
$\displaystyle=\frac{1}{4}\frac{(\delta_{m,k}-\delta_{m,l})^{2}+(2\sqrt{m}-\delta_{m,k}-\delta_{m,l})(2\sqrt{m}+\delta_{m,k}+\delta_{m,l})}{2\sqrt{m}-\delta_{m,k}-\delta_{m,l}}$
$\displaystyle=\frac{1}{4}\frac{4m-4\delta_{m,k}\delta_{m,l}}{2\sqrt{m}-\delta_{m,k}-\delta_{m,l}},$
which is the announced lower bound. It now remains to notice that $a<0$ and
that $-b/a\in[-1,1]$, but both follow from the general observation that
$\delta_{m,n}<\sqrt{m}$ for $n\geq m>1$, see Remark 1.1.
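As a sanity check of this bound (a minimal sketch; the helper names are ours), one can verify numerically that, for $(m,k,l)=(5,6,10)$, it evaluates to $5(11+6\sqrt{5})/59\approx 2.06919$, the value announced in the introduction:

```python
# Sketch: the lower bound of Theorem 2.3 for (m, k, l) = (5, 6, 10).
from math import sqrt, isclose

def delta(m: int, n: int) -> float:
    return (m / n) * (1 + sqrt((n - 1) * (n - m) / m))

def gamma(m: int, k: int, l: int) -> float:
    return (m - delta(m, k) * delta(m, l)) / (2 * sqrt(m) - delta(m, k) - delta(m, l))

assert isclose(gamma(5, 6, 10), 5 * (11 + 6 * sqrt(5)) / 59)  # ~ 2.06919
```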
Before uncovering a family of mutually unbiased equiangular tight frames in
the next section, we emphasize here two noteworthy properties relating the
vector $t_{\theta}$ and the matrix $U_{\theta}$ that appeared in the above
proof.
###### Proposition 2.1
Let $\gamma_{m,k,l}$ be the lower bound for $\lambda(m,k+l)$ from Theorem 2.3
and let $\theta\in[0,\pi/2]$ be the angle used in its proof, i.e.,
$\gamma_{m,k,l}=\frac{m-\delta_{m,k}\delta_{m,l}}{2\sqrt{m}-\delta_{m,k}-\delta_{m,l}}\qquad\mbox{and}\qquad\cos(2\theta)=\frac{\delta_{m,k}-\delta_{m,l}}{2\sqrt{m}-\delta_{m,k}-\delta_{m,l}}.$
Then, with
$t_{\theta}\in\mathbb{R}^{k+l},U_{\theta}\in\mathbb{R}^{m\times(k+l)}$ defined
as in (2) and with $T_{\theta}:={\rm diag}[t_{\theta}]$, one has
$\displaystyle|U_{\theta}^{\top}U_{\theta}|\,t_{\theta}$
$\displaystyle=\gamma_{m,k,l}\,t_{\theta},$ (4) $\displaystyle T_{\theta}{\rm
sgn}(U_{\theta}^{\top}U_{\theta})T_{\theta}\,U_{\theta}^{\top}$
$\displaystyle=\frac{\gamma_{m,k,l}}{m}\,U_{\theta}^{\top}.$ (5)
Proof. When establishing both (4) and (5), it will be useful to keep in mind
that $\delta_{m,n}$ is tied to $\phi_{m,n}=\sqrt{(n-m)/(m(n-1))}$ via
$\delta_{m,n}=\frac{m}{n}\bigg{(}1+(n-1)\phi_{m,n}\bigg{)}=\frac{m}{n}\bigg{(}1+\frac{n-m}{m}\frac{1}{\phi_{m,n}}\bigg{)}.$
Starting with the justification of (4), we notice that, since the matrix
$V^{\top}V$ has diagonal entries equal to $1$ and off-diagonal entries equal
to $\phi_{m,k}$ in absolute value, we have
$|V^{\top}V|=(1-\phi_{m,k}){\rm I}_{k}+\phi_{m,k}\mathbb{1}_{k,k},$
where $\mathbb{1}_{n,n^{\prime}}$ denotes the $n\times n^{\prime}$ matrix with
all entries equal to $1$. It follows that
$|V^{\top}V|\mathbb{1}_{k}=(1-\phi_{m,k})\mathbb{1}_{k}+k\phi_{m,k}\mathbb{1}_{k}=(1+(k-1)\phi_{m,k})\mathbb{1}_{k}=\frac{k}{m}\delta_{m,k}\mathbb{1}_{k}.$
Likewise, we can obtain
$|W^{\top}W|\mathbb{1}_{l}=\frac{l}{m}\delta_{m,l}\mathbb{1}_{l}.$
Moreover, since the matrices $V^{\top}W$ and $W^{\top}V$ have entries all
equal to $1/\sqrt{m}$ in absolute value, we have
$|V^{\top}W|=(1/\sqrt{m})\mathbb{1}_{k,l}$ and
$|W^{\top}V|=(1/\sqrt{m})\mathbb{1}_{l,k}$, so that
$|V^{\top}W|\mathbb{1}_{l}=\frac{l}{\sqrt{m}}\mathbb{1}_{k}\qquad\mbox{and}\qquad|W^{\top}V|\mathbb{1}_{k}=\frac{k}{\sqrt{m}}\mathbb{1}_{l}.$
Therefore, according to the block-forms of $t_{\theta}$ and
$U_{\theta}^{\top}U_{\theta}$ (see (2) and (3)), we observe that
$\displaystyle|U_{\theta}^{\top}U_{\theta}|\,t_{\theta}$
$\displaystyle=\begin{bmatrix}\cos^{2}\theta\dfrac{m}{k}\cos\theta\dfrac{1}{\sqrt{k}}\dfrac{k}{m}\delta_{m,k}\mathbb{1}_{k}+\cos\theta\sin\theta\dfrac{m}{\sqrt{kl}}\sin\theta\dfrac{1}{\sqrt{l}}\dfrac{l}{\sqrt{m}}\mathbb{1}_{k}\\\
\hline\cr\cos\theta\sin\theta\dfrac{m}{\sqrt{kl}}\cos\theta\dfrac{1}{\sqrt{k}}\dfrac{k}{\sqrt{m}}\mathbb{1}_{l}+\sin^{2}\theta\dfrac{m}{l}\sin\theta\dfrac{1}{\sqrt{l}}\dfrac{l}{m}\delta_{m,l}\mathbb{1}_{l}\end{bmatrix}$
$\displaystyle=\begin{bmatrix}\cos\theta\dfrac{1}{\sqrt{k}}\left(\cos^{2}\theta\delta_{m,k}+\sin^{2}\theta\sqrt{m}\right)\mathbb{1}_{k}\\\
\hline\cr\sin\theta\dfrac{1}{\sqrt{l}}\left(\cos^{2}\theta\sqrt{m}+\sin^{2}\theta\delta_{m,l}\right)\mathbb{1}_{l}\end{bmatrix}.$
(6)
Next, in view of
$\displaystyle\cos^{2}\theta$
$\displaystyle=\frac{1+\cos(2\theta)}{2}=\frac{\sqrt{m}-\delta_{m,l}}{2\sqrt{m}-\delta_{m,k}-\delta_{m,l}},$
$\displaystyle\sin^{2}\theta$
$\displaystyle=\frac{1-\cos(2\theta)}{2}=\frac{\sqrt{m}-\delta_{m,k}}{2\sqrt{m}-\delta_{m,k}-\delta_{m,l}},$
we easily derive that
$\cos^{2}\theta\delta_{m,k}+\sin^{2}\theta\sqrt{m}=\cos^{2}\theta\sqrt{m}+\sin^{2}\theta\delta_{m,l}=\gamma_{m,k,l}.$
(7)
When substituting the latter into (6), the identity (4) immediately follows.
Turning now to the justification of (5), recalling that the matrix $V^{\top}V$
has diagonal entries equal to $1$ and off-diagonal entries equal to
$\phi_{m,k}$ in absolute value, the diagonal entries of the matrix ${\rm
sgn}(V^{\top}V)$ are equal to $1$ and its off-diagonal entries are equal to
those of $V^{\top}V$ divided by $\phi_{m,k}$. In short, we see that ${\rm
sgn}(V^{\top}V)=(1-1/\phi_{m,k}){\rm I}_{k}+(1/\phi_{m,k})V^{\top}V$ holds,
and a similar identity holds for ${\rm sgn}(W^{\top}W)$. Moreover, we also
have ${\rm sgn}(V^{\top}W)=\sqrt{m}\,V^{\top}W$ and ${\rm
sgn}(W^{\top}V)=\sqrt{m}\,W^{\top}V$, as a consequence of all the entries of
$V^{\top}W$ and $W^{\top}V$ being equal to $1/\sqrt{m}$ in absolute value. All
in all, according to the block-form (3) of $U_{\theta}^{\top}U_{\theta}$, we
obtain
${\small{\rm
sgn}(U_{\theta}^{\top}U_{\theta})=\begin{bmatrix}\left(1-\dfrac{1}{\phi_{m,k}}\right){\rm
I}_{k}+\dfrac{1}{\phi_{m,k}}V^{\top}V&\vline&\sqrt{m}\,V^{\top}W\\\
\hline\cr\sqrt{m}\,W^{\top}V&\vline&\left(1-\dfrac{1}{\phi_{m,l}}\right){\rm
I}_{l}+\dfrac{1}{\phi_{m,l}}W^{\top}W\end{bmatrix}.}$
In turn, using the block-form of $T_{\theta}={\rm diag}[t_{\theta}]$, we
derive that $T_{\theta}{\rm sgn}(U_{\theta}^{\top}U_{\theta})T_{\theta}$ takes
the form
${\footnotesize\begin{bmatrix}\cos^{2}\theta\dfrac{1}{k}\left(\left(1-\dfrac{1}{\phi_{m,k}}\right){\rm
I}_{k}+\dfrac{1}{\phi_{m,k}}V^{\top}V\right)&\vline&\cos\theta\sin\theta\dfrac{1}{\sqrt{kl}}\sqrt{m}\,V^{\top}W\\\
\hline\cr\cos\theta\sin\theta\dfrac{1}{\sqrt{kl}}\sqrt{m}\,W^{\top}V&\vline&\sin^{2}\theta\dfrac{1}{l}\left(\left(1-\dfrac{1}{\phi_{m,l}}\right){\rm
I}_{l}+\dfrac{1}{\phi_{m,l}}W^{\top}W\right)\end{bmatrix}.}$
Multiplying on the right by the transpose of
$U_{\theta}={\small\begin{bmatrix}\;\cos\theta\sqrt{\dfrac{m}{k}}V&\vline&\sin\theta\sqrt{\dfrac{m}{l}}W\;\end{bmatrix}}$
and making use of the facts that $VV^{\top}=(k/m){\rm I}_{m}$ and
$WW^{\top}=(l/m){\rm I}_{m}$, the matrix $T_{\theta}{\rm
sgn}(U_{\theta}^{\top}U_{\theta})T_{\theta}\,U_{\theta}^{\top}$ becomes
$\displaystyle{\footnotesize\begin{bmatrix}\cos^{2}\theta\dfrac{1}{k}\cos\theta\sqrt{\dfrac{m}{k}}\left(\left(1-\dfrac{1}{\phi_{m,k}}\right)+\dfrac{k}{m}\dfrac{1}{\phi_{m,k}}\right)V^{\top}+\cos\theta\sin\theta\dfrac{1}{\sqrt{kl}}\sqrt{m}\sin\theta\sqrt{\dfrac{m}{l}}\dfrac{l}{m}V^{\top}\\\
\hline\cr\cos\theta\sin\theta\dfrac{1}{\sqrt{kl}}\sqrt{m}\cos\theta\sqrt{\dfrac{m}{k}}\dfrac{k}{m}W^{\top}+\sin^{2}\theta\dfrac{1}{l}\sin\theta\sqrt{\dfrac{m}{l}}\left(\left(1-\dfrac{1}{\phi_{m,l}}\right)+\dfrac{l}{m}\dfrac{1}{\phi_{m,l}}\right)W^{\top}\end{bmatrix}}$
$\displaystyle=\begin{bmatrix}\cos\theta\sqrt{\dfrac{m}{k}}\left(\dfrac{\cos^{2}\theta}{k}\left(1+\dfrac{k-m}{m}\dfrac{1}{\phi_{m,k}}\right)+\sin^{2}\theta\dfrac{1}{\sqrt{m}}\right)V^{\top}\\\
\hline\cr\sin\theta\sqrt{\dfrac{m}{l}}\left(\cos^{2}\theta\dfrac{1}{\sqrt{m}}+\dfrac{\sin^{2}\theta}{l}\left(1+\dfrac{l-m}{m}\dfrac{1}{\phi_{m,l}}\right)\right)W^{\top}\end{bmatrix}$
$\displaystyle=\begin{bmatrix}\cos\theta\sqrt{\dfrac{m}{k}}\left(\dfrac{\cos^{2}\theta}{m}\delta_{m,k}+\dfrac{\sin^{2}\theta}{\sqrt{m}}\right)V^{\top}\\\
\hline\cr\sin\theta\sqrt{\dfrac{m}{l}}\left(\dfrac{\cos^{2}\theta}{\sqrt{m}}+\dfrac{\sin^{2}\theta}{m}\delta_{m,l}\right)W^{\top}\end{bmatrix}.$
Similarly to (4), the identity (5) now simply follows by exploiting (7) again.
## 3 Construction of Mutually Unbiased Equiangular Tight Frames
To apply the result of Theorem 2.3 in practical situations, we evidently need
to uncover specific integers $k$, $l$, and $m$ allowing mutually unbiased
equiangular tight frames to exist. As a simple example, one can take $k=l=m$
and consider $(v_{1},\ldots,v_{k})$ to be the canonical basis for
$\mathbb{R}^{m}$ and $(w_{1},\ldots,w_{l})$ to be the columns of an $m\times
m$ Hadamard matrix — recall that $m\times m$ Hadamard matrices are conjectured
to exist, for $m>2$, when and only when $m$ is a multiple of $4$ (the ‘only when’ part being settled, of course). This would yield the lower bound
$\lambda(m)\geq(1+\sqrt{m})/2$, $m\in 4\mathbb{N}$, which is inferior to the
lower bounds reported in [13] for $m=4$ and $m=8$. As a slightly more
elaborate example, one can take $k=m$ and $(v_{1},\ldots,v_{k})$ to be the
canonical basis of $\mathbb{R}^{m}$, together with $l>m$ and
$(w_{1},\ldots,w_{l})$ to be a real equiangular tight frame for
$\mathbb{R}^{m}$ that is flat, in the sense that every entry of each vector
$w_{j}$ is either $1/\sqrt{m}$ or $-1/\sqrt{m}$. Real flat equiangular tight
frames are equivalent to binary codes achieving equality in the Grey–Rankin
bound and infinite families are known (see [16, 8]). This would yield the
lower bound $\lambda(m,m+l)\geq(m-\delta_{m,l})/(2\sqrt{m}-1-\delta_{m,l})$, which follows from Theorem 2.3 since $\delta_{m,m}=1$.
With $m=6$ and $l=16$, this provides the lower bound $\lambda(6)\gtrsim
2.2741$, which is superior to the lower bounds reported in [13] but inferior
to the numerical evaluation $\lambda(6)\approx 2.2857$ performed by B. L.
Chalmers and corroborated by our own computations. In order to apply Theorem
2.3 more effectively, we need further examples of mutually unbiased
equiangular tight frames. To this end, we now relate such frames to a type of
generalized Hadamard matrices.
###### Proposition 3.1
Given integers $k,l\geq m>1$, there are mutually unbiased equiangular tight
frames $(v_{1},\dots,v_{k})$ and $(w_{1},\dots,w_{l})$ for $\mathbb{R}^{m}$ if
and only if there is a $k\times l$ matrix $X$ with the following five
properties:
1. (i)
$X_{ij}\in\\{-1,+1\\}\,$ for all $i\in\\{1,\dots,k\\}$ and
$j\in\\{1,\dots,l\\}$;
2. (ii)
$XX^{\top}X=aX\,$ for some $a\in\mathbb{R}$;
3. (iii)
$X$ has equiangular rows, i.e., $|XX^{\top}|_{i,i^{\prime}}$ is constant over
all $i\neq i^{\prime}$;
4. (iv)
$X$ has equiangular columns, i.e., $|X^{\top}X|_{j,j^{\prime}}$ is constant
over all $j\neq j^{\prime}$;
5. (v)
$X$ has rank $m$.
When this occurs, the following three quantities are necessarily integers:
$\frac{kl}{m},\qquad k\,\sqrt{\frac{l-m}{m(l-1)}},\qquad
l\,\sqrt{\frac{k-m}{m(k-1)}}.$ (8)
Proof. Firstly, let us assume that there are mutually unbiased equiangular
tight frames $(v_{1},\dots,v_{k})$ and $(w_{1},\dots,w_{l})$ for
$\mathbb{R}^{m}$. With $V\in\mathbb{R}^{m\times k}$ and
$W\in\mathbb{R}^{m\times l}$ denoting the matrices with columns
$v_{1},\dots,v_{k}$ and $w_{1},\dots,w_{l}$, respectively, we set
$X=\sqrt{m}\,V^{\top}W\in\mathbb{R}^{k\times l}.$
By Lemma 2.1, we have $|V^{\top}W|_{i,j}=|\langle
v_{i},w_{j}\rangle|=1/\sqrt{m}$ for all $i\in\\{1,\dots,k\\}$ and
$j\in\\{1,\dots,l\\}$, so Property (i) is immediate. In view of
$VV^{\top}=(k/m)\,{\rm I}_{m}$ and of $WW^{\top}=(l/m)\,{\rm I}_{m}$, it is
also straightforward to see that
$XX^{\top}=l\,V^{\top}V\qquad\mbox{and}\qquad X^{\top}X=k\,W^{\top}W.$ (9)
From here, using the fact that $VV^{\top}=(k/m)\,{\rm I}_{m}$ one more time,
we obtain that
$XX^{\top}X=(l\,V^{\top}V)(\sqrt{m}\,V^{\top}W)=(kl/m)\sqrt{m}\,V^{\top}W$,
i.e., $XX^{\top}X=aX$ with $a=kl/m$, so Property (ii) is satisfied. Properties
(iii) and (iv), too, are consequences of (9), since e.g. the off-diagonal
entries of $XX^{\top}$ are constant in absolute value because those of
$V^{\top}V$ are. Finally, Property (v) is also implied by (9) via ${\rm
rank}(X)={\rm rank}(XX^{\top})={\rm rank}(V^{\top}V)={\rm
rank}(VV^{\top})={\rm rank}({\rm I}_{m})~{}=~{}m$.
Conversely, let us assume that Properties (i)–(v) are fulfilled by some
matrix $X\in\mathbb{R}^{k\times l}$. Consider the singular value decomposition
of this matrix written as $X=P\Sigma Q^{\top}$, where the diagonal matrix
$\Sigma\in\mathbb{R}^{m\times m}$ has positive entries (by (v)) and where the
matrices $P\in\mathbb{R}^{k\times m}$ and $Q\in\mathbb{R}^{l\times m}$ have
orthonormal columns, i.e., $P^{\top}P={\rm I}_{m}$ and $Q^{\top}Q={\rm
I}_{m}$. Property (ii) easily yields $\Sigma^{3}=a\,\Sigma$ and hence
$\Sigma=\sqrt{a}\,{\rm I}_{m}$. Then, looking at the squared Frobenius norm of
$X=\sqrt{a}\,PQ^{\top}$, we derive from (i) that $kl=am$, i.e., that $a=kl/m$.
We now set
$V=\sqrt{\frac{k}{m}}P^{\top}\in\mathbb{R}^{m\times k}\qquad\mbox{and}\qquad
W=\sqrt{\frac{l}{m}}Q^{\top}\in\mathbb{R}^{m\times l}$
and we claim that the columns $v_{1},\ldots,v_{k}$ of $V$ and
$w_{1},\ldots,w_{l}$ of $W$ are mutually unbiased equiangular tight frames for
$\mathbb{R}^{m}$. Indeed, using $V^{\top}V=(k/m)PP^{\top}$ and
$XX^{\top}=aPP^{\top}$, we see that $V^{\top}V=(1/l)XX^{\top}$, so that the
equiangularity of the system $(v_{1},\ldots,v_{k})$ is clear from (iii). Note
that each $v_{i}$ is a unit vector, since
$\|v_{i}\|_{2}^{2}=(V^{\top}V)_{i,i}=(1/l)(XX^{\top})_{i,i}=(1/l)\sum_{j=1}^{l}X_{i,j}^{2}=1$
by (i). The fact that these vectors form a tight frame is seen from
$VV^{\top}=(k/m)\,P^{\top}P=(k/m)\,{\rm I}_{m}$. Similar arguments (using
(iv)) would reveal that the system $(w_{1},\ldots,w_{l})$ is also an
equiangular tight frame. At last, to see that these systems are mutually
unbiased, it suffices to notice that
$V^{\top}W=(\sqrt{kl}/m)\,PQ^{\top}=(1/\sqrt{m})\,X$ and to invoke (i) once
again.
It finally remains to establish that the three quantities in (8) are integers.
For the first one, we have seen (in the proofs of both implications) that
$a=kl/m$ and (i)-(ii) show that $a$ is an integer: any entry of
$XX^{\top}X=aX$ is on the one hand an integer and on the other hand equal to
$\pm a$. For the third one, say (the second being similar), looking at (9), any off-diagonal entry
of $XX^{\top}=l\,V^{\top}V$ is on the one hand an integer and on the other
hand equal to $l$ times the common absolute inner product in a $k$-vector
equiangular tight frame for $\mathbb{R}^{m}$, i.e., to
$l\sqrt{(k-m)/(m(k-1))}$.
Although conditions (i)–(v) are restrictive, there are matrices $X$ satisfying
them with $m<k<l$. For instance, the $6\times 10$ matrix
$X=\small\left[\begin{array}[]{rrrrrrrrrr}1&1&1&1&1&1&1&1&1&1\\\
1&1&-1&1&-1&-1&1&-1&-1&-1\\\ 1&-1&1&-1&1&-1&-1&1&-1&-1\\\
-1&1&1&-1&-1&1&-1&-1&1&-1\\\ -1&-1&-1&1&1&1&-1&-1&-1&1\\\
-1&-1&-1&-1&-1&-1&1&1&1&1\end{array}\right]$
is one such matrix (as pointed out to us by Josiah Park, this same $6\times 10$ matrix appeared in a recent investigation of spherical half-designs, see [15]): it has $\pm 1$ entries, the identity $XX^{\top}X=aX$ is easily
verified (at least computationally), and it was already observed in [11] that
both its rows and its columns form equiangular tight frames for their
$5$-dimensional spans. Therefore, since $X$ fulfills the conditions of
Proposition 3.1 with $m=5$, $k=6$, and $l=10$, we are guaranteed the existence
of mutually unbiased equiangular tight frames $(v_{1},\dots,v_{6})$ and
$(w_{1},\dots,w_{10})$ for $\mathbb{R}^{5}$. Remarkably, this example is but
the first member of the infinite family presented below.
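The conditions of Proposition 3.1 for this particular matrix are straightforward to check computationally; the following sketch (our check, not code from the cited works) verifies (ii)–(v), condition (i) being plain from the display:

```python
# Sketch: verify conditions (ii)-(v) of Proposition 3.1 for the displayed
# 6x10 matrix, with (m, k, l) = (5, 6, 10).
import numpy as np

X = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1,  1,  1],
    [ 1,  1, -1,  1, -1, -1,  1, -1, -1, -1],
    [ 1, -1,  1, -1,  1, -1, -1,  1, -1, -1],
    [-1,  1,  1, -1, -1,  1, -1, -1,  1, -1],
    [-1, -1, -1,  1,  1,  1, -1, -1, -1,  1],
    [-1, -1, -1, -1, -1, -1,  1,  1,  1,  1],
])
m, k, l = 5, 6, 10

assert np.allclose(X @ X.T @ X, (k * l / m) * X)       # (ii), with a = kl/m = 12
G_rows, G_cols = np.abs(X @ X.T), np.abs(X.T @ X)
assert len(set(G_rows[np.triu_indices(k, 1)])) == 1    # (iii): equiangular rows
assert len(set(G_cols[np.triu_indices(l, 1)])) == 1    # (iv): equiangular columns
assert np.linalg.matrix_rank(X) == m                   # (v)
# Quantities in (8): kl/m = 12, k*sqrt((l-m)/(m(l-1))) = 2,
# l*sqrt((k-m)/(m(k-1))) = 2 -- all integers, as required.
```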
###### Theorem 3.4
For any integer $s\geq 2$, there are mutually unbiased equiangular tight
frames $(v_{1},\dots,v_{k})$ and $(w_{1},\dots,w_{l})$ for $\mathbb{R}^{m}$,
where
$k=2^{s-1}(2^{s}-1),\qquad l=2^{s-1}(2^{s}+1),\qquad m=\frac{2^{2s}-1}{3}.$
Proof. For any such $s$, $k$, $l$ and $m$, the requisite matrix $X$ of
Proposition 3.1 is produced in the recent paper [7], albeit nonobviously so.
In brief, let $Q$ and $B$ be the canonical hyperbolic-quadratic and symplectic
forms on the binary vector space $\mathbb{F}_{2}^{2s}$, respectively:
$\displaystyle Q(x)=Q(x_{1},\dotsc,x_{2s})$
$\displaystyle:=\sum_{r=1}^{s}x_{2r-1}x_{2r},$ $\displaystyle
B(x,y)=B((x_{1},\dotsc,x_{2s}),(y_{1},\dotsc,y_{2s}))$
$\displaystyle:=\sum_{r=1}^{s}(x_{2r-1}y_{2r}+x_{2r}y_{2r-1}).$
Let $\Gamma$ be the corresponding character table of $\mathbb{F}_{2}^{2s}$,
defined by $\Gamma(x,y)=(-1)^{B(x,y)}$ for all $x,y\in\mathbb{F}_{2}^{2s}$.
Any submatrix of $\Gamma$ obviously satisfies (i) from Proposition 3.1. Let
$X$ be the specific submatrix of $\Gamma$ whose rows and columns are indexed
by $\\{x\in\mathbb{F}_{2}^{2s}:Q(x)=1\\}$ and
$\\{x\in\mathbb{F}_{2}^{2s}:Q(x)=0\\}$, respectively. By Lemma 4.2 of [7],
these two subsets of $\mathbb{F}_{2}^{2s}$ are difference sets for
$\mathbb{F}_{2}^{2s}$ of cardinality $k$ and $l$, respectively. As detailed in
[7], this means that the rows and columns of $X$ are equiangular, namely that
(iii) and (iv) hold. Theorem 4.4 of [7] moreover gives that these two
difference sets are paired, meaning that the columns of $X$ form a tight frame
for their span, so that (ii) holds. Theorem 3.3 of [7] then implies that the
rank of $X$ is indeed $m$, so that (v) holds.
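Under the indexing just described, the construction can be reproduced in a few lines; the sketch below (our code; the row and column orderings may differ from the $6\times 10$ matrix displayed earlier) builds $X$ from $Q$ and $B$ and checks the small cases $s=2,3$:

```python
# Sketch: the matrix X of Theorem 3.4 built from the forms Q and B.
import itertools
import numpy as np

def build_X(s: int) -> np.ndarray:
    def Q(x):  # hyperbolic quadratic form, Q(x) = sum x_{2r-1} x_{2r} mod 2
        return sum(x[2 * r] * x[2 * r + 1] for r in range(s)) % 2
    def B(x, y):  # symplectic form on F_2^{2s}
        return sum(x[2 * r] * y[2 * r + 1] + x[2 * r + 1] * y[2 * r]
                   for r in range(s)) % 2
    vecs = list(itertools.product([0, 1], repeat=2 * s))
    rows = [x for x in vecs if Q(x) == 1]  # k = 2^(s-1) (2^s - 1) of them
    cols = [x for x in vecs if Q(x) == 0]  # l = 2^(s-1) (2^s + 1) of them
    return np.array([[(-1) ** B(x, y) for y in cols] for x in rows])

for s in [2, 3]:
    X = build_X(s)
    k, l = X.shape
    m = (4 ** s - 1) // 3
    assert (k, l) == (2 ** (s - 1) * (2 ** s - 1), 2 ** (s - 1) * (2 ** s + 1))
    assert np.linalg.matrix_rank(X) == m              # property (v)
    assert np.allclose(X @ X.T @ X, (k * l / m) * X)  # property (ii)
```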
We close this section by highlighting that real mutually unbiased equiangular
tight frames are rare objects. Precisely, we have obtained rather stringent
necessary conditions for their existence (not included here because too
detached from our main focus). For instance, these conditions imply that
mutually unbiased $k$-vector and $l$-vector equiangular tight frames for
$\mathbb{R}^{m}$ can only exist for at most thirteen triples of integers
$(m,k,l)$ with $l>k>m+1$ when $m\leq 1000$, and that they cannot exist when
$l=k>m$, in contrast with the complex setting.
## 4 Epilogue: the fifth maximal projection constant
By combining the main results derived in the two previous sections, namely
Theorems 2.3 and 3.4, and after some tedious algebraic manipulation, we can
state that the maximal relative projection constant at any $m$ of the form
$m=(2^{2s}-1)/3$ for some integer $s\geq 2$ is bounded below as
$\lambda(m,4^{s})\geq\frac{2^{2s}-1}{2^{3s}-3\,2^{s-1}+1}\left(\frac{2^{2s-1}+2^{s}-1}{3}+2^{s-1}\sqrt{m}\right).$
(10)
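To make the case $s=2$ of (10) explicit, the right-hand side evaluates, with $m=5$, as
$\frac{2^{4}-1}{2^{6}-3\cdot 2+1}\left(\frac{2^{3}+2^{2}-1}{3}+2\sqrt{5}\right)=\frac{15}{59}\left(\frac{11}{3}+2\sqrt{5}\right)=\frac{5(11+6\sqrt{5})}{59}\approx 2.06919,$
which is the bound highlighted in Theorem 4.5 below.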
If this were an equality, then the vector
$t_{\theta}\in\mathbb{R}_{+}^{n}$, $n=4^{s}$, and the matrix
$U_{\theta}\in\mathbb{R}^{m\times n}$, $m=(2^{2s}-1)/3$, appearing in the
proof of Theorem 2.3 should be maximizers of the expression for $\lambda(m,n)$
from Theorem 1.1. For genuine maximizers $\bar{t}\in\mathbb{R}_{+}^{n}$ and
$\bar{U}\in\mathbb{R}^{m\times n}$, we emphasize the following two necessary
conditions:
1. (a)
$\bar{t}$ is a maximizer of
$\sum_{i,j}t_{i}t_{j}|\bar{U}^{\top}\bar{U}|_{i,j}$ subject to $\|t\|_{2}=1$,
so is characterized by the fact that $\bar{t}$ is an eigenvector (in fact, the
leading eigenvector) of $|\bar{U}^{\top}\bar{U}|$ — this is indeed satisfied by
$t_{\theta}$ and $U_{\theta}$, according to (4);
2. (b)
$\bar{U}$ is a maximizer of $\sum_{i,j}\bar{t}_{i}\bar{t}_{j}{\rm
sgn}(\bar{U}^{\top}\bar{U})_{i,j}(U^{\top}U)_{i,j}=\textrm{tr}(\bar{T}{\rm
sgn}(\bar{U}^{\top}\bar{U})\,\bar{T}U^{\top}U)$, $\bar{T}:={\rm
diag}[\bar{t}]$, subject to $UU^{\top}={\rm I}_{m}$, so is characterized by
the fact that the rows of $\bar{U}$ are eigenvectors corresponding to the $m$
largest eigenvalues of $\bar{T}{\rm sgn}(\bar{U}^{\top}\bar{U})\bar{T}$ — this
is indeed satisfied by $t_{\theta}$ and $U_{\theta}$, according to (5).
###### Remark 4.1
The necessary conditions (a)-(b) combine to show that the genuine maximizers
$\bar{t}$ and $\bar{U}$ obey the noteworthy relation
$\big{(}\bar{U}^{\top}\bar{D}\bar{U}\big{)}_{i,i}=\lambda(m,n)\,\bar{t}_{i}^{2}\qquad\mbox{for
all }i\in\\{1,\ldots,n\\},$
where $\bar{D}={\rm diag}[\bar{\mu}_{1},\ldots,\bar{\mu}_{m}]$ is the diagonal
matrix with the $m$ leading eigenvalues
$\bar{\mu}_{1}\geq\cdots\geq\bar{\mu}_{m}$ of $\bar{T}{\rm
sgn}(\bar{U}^{\top}\bar{U})\bar{T}$ on its diagonal. Indeed, by (a), we have
$\displaystyle\lambda(m,n)\,\bar{t}_{i}^{2}$
$\displaystyle=\bar{t}_{i}\sum_{j=1}^{n}|\bar{U}^{\top}\bar{U}|_{i,j}\bar{t}_{j}=\sum_{j=1}^{n}(\bar{U}^{\top}\bar{U})_{i,j}(\bar{T}{\rm
sgn}(\bar{U}^{\top}\bar{U})\bar{T})_{i,j}$
$\displaystyle=\big{(}(\bar{U}^{\top}\bar{U})(\bar{T}{\rm
sgn}(\bar{U}^{\top}\bar{U})\bar{T})\big{)}_{i,i}.$ (11)
Now, by (b), we have $\bar{T}{\rm
sgn}(\bar{U}^{\top}\bar{U})\bar{T}\,\bar{U}^{\top}=\bar{U}^{\top}\bar{D}$, or
$\bar{U}\bar{T}{\rm sgn}(\bar{U}^{\top}\bar{U})\bar{T}=\bar{D}\bar{U}$ by
taking the transpose. Making use of the latter in (11) gives the expected
relation.
The observation that $t_{\theta}$ and $U_{\theta}$ do satisfy conditions
(a)-(b) supports the belief that (10) could be an equality. To the question of
whether the right-hand side of (10) also coincides with the value of the
maximal absolute projection constant $\lambda(m)$, $m=(2^{2s}-1)/3$, the
answer is in general no. Indeed, for $s=3$, hence for $m=21$, $k=28$, and
$l=36$, we have $\gamma_{21,28,36}\approx 3.9397$, while a real equiangular
tight frame for $\mathbb{R}^{21}$ made of $126$ vectors is known to exist (see
e.g. [10]), so Theorem 1.2 yields $\lambda(21)\geq\lambda(21,126)\gtrsim
4.3333$. However, for $s=2$, hence for $m=5$, $k=6$, and $l=10$, there are
convincing reasons to believe that $\gamma_{5,6,10}\approx 2.06919$ coincides with the value of $\lambda(5)$. These reasons are the extensive numerical investigations carried out by B. L. Chalmers, as well as our own computations (some of which can be found in a MATLAB reproducible available on the authors’
webpages). All these clues prompt us to conclude with the following assertion.
###### Theorem 4.5 (and Conjecture)
The fifth absolute projection constant satisfies
$\lambda(5)\geq\lambda(5,16)\geq\frac{5}{59}(11+6\sqrt{5})\approx 2.06919,$
and it is expected that the latter is indeed the true value of $\lambda(5)$.
## Appendix
As bonus material, we present here a new proof of Theorem 1.2 as an immediate
consequence of the technical result below coupled with Theorem 1.1.
###### Proposition 4.1
For integers $n\geq m>1$, one has
$\displaystyle\max\bigg{\\{}\sum_{i,j=1}^{n}$ $\displaystyle
t_{i}t_{j}|U^{\top}U|_{ij}:t\in\mathbb{R}^{n},\;\|t\|_{2}=1,U\in\mathbb{R}^{m\times
n},\;UU^{\top}={\rm I}_{m}\bigg{\\}}$
$\displaystyle\leq\frac{m}{n}\left(1+\sqrt{\frac{(n-1)(n-m)}{m}}\right),$ (12)
with equality if and only if there exists a matrix $U\in\mathbb{R}^{m\times
n}$ with $UU^{\top}={\rm I}_{m}$, $(U^{\top}U)_{i,i}=m/n$ for all
$i\in\\{1,\ldots,n\\}$, and $|U^{\top}U|_{i,j}=\sqrt{(n-m)m/(n-1)}/n$ for all
$i\not=j\in\\{1,\ldots,n\\}$.
Proof. For $t\in\mathbb{R}^{n}$ satisfying $\|t\|_{2}=1$ and
$U\in\mathbb{R}^{m\times n}$ satisfying $UU^{\top}={\rm I}_{m}$, we use the
nonnegativity of $(U^{\top}U)_{i,i}$ (as the inner product of the $i$th column
of $U$ with itself) and Cauchy–Schwarz inequality to write
$\displaystyle\Sigma$
$\displaystyle:=\sum_{i,j=1}^{n}t_{i}t_{j}|U^{\top}U|_{i,j}=\sum_{i=1}^{n}t_{i}^{2}|U^{\top}U|_{i,i}+\sum_{\begin{subarray}{c}i,j=1\\\
i\not=j\end{subarray}}^{n}t_{i}t_{j}|U^{\top}U|_{i,j}$
$\displaystyle\leq\sum_{i=1}^{n}t_{i}^{2}(U^{\top}U)_{i,i}+\sqrt{\sum_{\begin{subarray}{c}i,j=1\\\
i\not=j\end{subarray}}^{n}t_{i}^{2}t_{j}^{2}}\sqrt{\sum_{\begin{subarray}{c}i,j=1\\\
i\not=j\end{subarray}}^{n}(U^{\top}U)_{i,j}^{2}}$
$\displaystyle=\sum_{i=1}^{n}t_{i}^{2}(U^{\top}U)_{i,i}+\sqrt{\sum_{i,j=1}^{n}t_{i}^{2}t_{j}^{2}-\sum_{i=1}^{n}t_{i}^{4}}\sqrt{\sum_{i,j=1}^{n}(U^{\top}U)_{i,j}^{2}-\sum_{i=1}^{n}(U^{\top}U)_{i,i}^{2}}$
$\displaystyle=\sum_{i=1}^{n}\alpha_{i}\beta_{i}+\sqrt{A-\sum_{i=1}^{n}\alpha_{i}^{2}}\sqrt{B-\sum_{i=1}^{n}\beta_{i}^{2}},$
where we have set $\alpha_{i}=t_{i}^{2}$, $\beta_{i}=(U^{\top}U)_{i,i}$,
$A=\big{(}\sum_{i}t_{i}^{2}\big{)}\big{(}\sum_{j}t_{j}^{2}\big{)}=\|t\|_{2}^{4}=1$,
and
$B=\sum_{i,j}(U^{\top}U)_{i,j}^{2}=\|U^{\top}U\|_{F}^{2}=\textrm{tr}(U^{\top}UU^{\top}U)=\textrm{tr}(UU^{\top}UU^{\top})=m$.
Setting also $a=\|t\|_{2}^{2}=1$,
$b=\textrm{tr}(U^{\top}U)=\textrm{tr}(UU^{\top})=m$, as well as
$x_{i}:=\frac{\alpha_{i}-a/n}{\sqrt{A-a^{2}/n}}\qquad\mbox{and}\qquad
y_{i}:=\frac{\beta_{i}-b/n}{\sqrt{B-b^{2}/n}},$
we notice that $\sum_{i=1}^{n}x_{i}=0$ and $\sum_{i=1}^{n}y_{i}=0$. We exploit
these identities a few times to derive
$\displaystyle\Sigma$
$\displaystyle\leq\sum_{i=1}^{n}\bigg{(}\frac{a}{n}+\sqrt{A-\frac{a^{2}}{n}}x_{i}\bigg{)}\bigg{(}\frac{b}{n}+\sqrt{B-\frac{b^{2}}{n}}y_{i}\bigg{)}$
$\displaystyle+\sqrt{A-\sum_{i=1}^{n}\bigg{(}\frac{a}{n}+\sqrt{A-\frac{a^{2}}{n}}x_{i}\bigg{)}^{2}}\sqrt{B-\sum_{i=1}^{n}\bigg{(}\frac{b}{n}+\sqrt{B-\frac{b^{2}}{n}}y_{i}\bigg{)}^{2}}$
$\displaystyle=\frac{ab}{n}+\sqrt{A-\frac{a^{2}}{n}}\sqrt{B-\frac{b^{2}}{n}}\sum_{i=1}^{n}x_{i}y_{i}$
$\displaystyle+\sqrt{A-\frac{a^{2}}{n}-\Big{(}A-\frac{a^{2}}{n}\Big{)}\sum_{i=1}^{n}x_{i}^{2}}+\sqrt{B-\frac{b^{2}}{n}-\Big{(}B-\frac{b^{2}}{n}\Big{)}\sum_{i=1}^{n}y_{i}^{2}}$
$\displaystyle=\frac{ab}{n}+\sqrt{A-\frac{a^{2}}{n}}\sqrt{B-\frac{b^{2}}{n}}\left[\sum_{i=1}^{n}x_{i}y_{i}+\sqrt{1-\sum_{i=1}^{n}x_{i}^{2}}\sqrt{1-\sum_{i=1}^{n}y_{i}^{2}}\right].$
The latter term in square brackets is nothing but the inner product of the
unit vectors $\tilde{x}:=\begin{bmatrix}x,\sqrt{1-\|x\|_{2}^{2}}\end{bmatrix}$
and $\tilde{y}:=\begin{bmatrix}y,\sqrt{1-\|y\|_{2}^{2}}\end{bmatrix}$, so it
is bounded by one. Thus, keeping the values of $a=1$, $b=m$, $A=1$, and $B=m$
in mind, we arrive at
$\sum_{i,j=1}^{n}t_{i}t_{j}|U^{\top}U|_{i,j}\leq\frac{m}{n}+\sqrt{1-\frac{1}{n}}\sqrt{m-\frac{m^{2}}{n}}.$
Taking the supremum over $t$ and $U$ leads to the desired inequality (12)
after some algebraic manipulation. This inequality turns into an equality if
the matrix $U\in\mathbb{R}^{m\times n}$ with $UU^{\top}={\rm I}_{m}$ satisfies
$(U^{\top}U)_{i,i}=m/n$ for all $i\in\\{1,\ldots,n\\}$ and
$|U^{\top}U|_{i,j}=\sqrt{(n-m)m/(n-1)}/n$ for all
$i\not=j\in\\{1,\ldots,n\\}$, simply by choosing $t\in\mathbb{R}^{n}$ with
entries $t_{i}=1/\sqrt{n}$ for all $i\in\\{1,\ldots,n\\}$.
Conversely, let us assume that (12) is an equality. Our goal is now to prove
that $(U^{\top}U)_{i,i}=m/n$ for all $i\in\\{1,\ldots,n\\}$ and
$|U^{\top}U|_{i,j}=\sqrt{(n-m)m/(n-1)}/n$ for all
$i\not=j\in\\{1,\ldots,n\\}$, where $U\in\mathbb{R}^{m\times n}$ satisfying
$UU^{\top}={\rm I}_{m}$ achieves the maximum, together with
$t\in\mathbb{R}^{n}$ satisfying $\|t\|_{2}=1$. We start by taking into account
that equality must hold throughout the first part of the argument. Equality in
Cauchy–Schwarz inequality implies the existence of $c\in\mathbb{R}$ such that
$t_{i}t_{j}=c\,|U^{\top}U|_{i,j}\qquad\mbox{for all
}i\not=j\in\\{1,\ldots,n\\}$
and equality in $\langle\tilde{x},\tilde{y}\rangle\leq 1$ yields $x=y$, i.e.,
$(U^{\top}U)_{i,i}-\frac{m}{n}=\frac{\sqrt{m-m^{2}/n}}{\sqrt{1-1/n}}\bigg{(}t_{i}^{2}-\frac{1}{n}\bigg{)}\qquad\mbox{for
all }i\in\\{1,\ldots,n\\}.$ (13)
Since the matrix $T{\rm sgn}(U^{\top}U)T$ has diagonal entries $(T{\rm
sgn}(U^{\top}U)T)_{i,i}=t_{i}^{2}$ and off-diagonal entries
$(T{\rm sgn}(U^{\top}U)T)_{i,j}=t_{i}t_{j}{\rm
sgn}(U^{\top}U)_{i,j}=c\,|U^{\top}U|_{i,j}{\rm
sgn}(U^{\top}U)_{i,j}=c\,(U^{\top}U)_{i,j},$
the necessary condition (b), written for all $i\in\\{1,\ldots,n\\}$ and
$h\in\\{1,\ldots,m\\}$ as
$\sum_{j=1}^{n}(T{\rm
sgn}(U^{\top}U)T)_{i,j}U^{\top}_{j,h}=\mu_{h}U^{\top}_{i,h},$
where $\mu_{1}\geq\cdots\geq\mu_{m}$ are the $m$ leading eigenvalues of
$T\,{\rm sgn}(U^{\top}U)\,T$, becomes
$t_{i}^{2}U^{\top}_{i,h}+\sum_{\begin{subarray}{c}j=1\\\
j\not=i\end{subarray}}^{n}c\,(U^{\top}U)_{i,j}U^{\top}_{j,h}=\mu_{h}U^{\top}_{i,h}.$
In other words, for all $i\in\\{1,\ldots,n\\}$ and $h\in\\{1,\ldots,m\\}$, we
have
$t_{i}^{2}U^{\top}_{i,h}+c\,(U^{\top}UU^{\top})_{i,h}-c\,(U^{\top}U)_{i,i}U^{\top}_{i,h}=\mu_{h}U^{\top}_{i,h},$
or equivalently, in view of $UU^{\top}={\rm I}_{m}$,
$\left(t_{i}^{2}+c-c\,(U^{\top}U)_{i,i}\right)U^{\top}_{i,h}=\mu_{h}U^{\top}_{i,h}.$
(14)
This actually shows that $\mu_{h}$ is independent of $h\in\\{1,\ldots,m\\}$
and — thanks to the alternate expression $\lambda(m,n)=\mu_{1}+\ldots+\mu_{m}$
(see e.g. [13, Theorem 1]) — one must have $\mu_{h}=\lambda(m,n)/m$. Now (14)
reduces (say, by multiplying by $U^{\top}_{i,h}$, summing over $h$, and
simplifying) to $t_{i}^{2}+c-c\,(U^{\top}U)_{i,i}=\lambda(m,n)/m$. Summing
over $i\in\\{1,\ldots,n\\}$ yields
$1+c\,(n-m)=\frac{n}{m}\lambda(m,n)=1+\sqrt{\frac{(n-1)(n-m)}{m}},$
which shows that
$c=\sqrt{\frac{n-1}{m(n-m)}}.$
Invoking Remark 4.1, we notice that $(U^{\top}U)_{i,i}=mt_{i}^{2}$ for all
$i\in\\{1,\ldots,n\\}$, and therefore (13) becomes
$m(t_{i}^{2}-1/n)=\sqrt{m(n-m)/(n-1)}(t_{i}^{2}-1/n)$. Given that
$m~{}\not=~{}\sqrt{m(n-m)/(n-1)}$ when $m>1$, we consequently obtain
$t_{i}^{2}=1/n$ for all $i~{}\in~{}\\{1,\ldots,n\\}$. In turn, we deduce from
$(U^{\top}U)_{i,i}=mt_{i}^{2}$ that $(U^{\top}U)_{i,i}=m/n$ for all
$i\in\\{1,\ldots,n\\}$ and from $c|U^{\top}U|_{i,j}=t_{i}t_{j}$ that
$|U^{\top}U|_{i,j}=\sqrt{m(n-m)/(n-1)}/n$ for all
$i\not=j\in\\{1,\ldots,n\\}$. The proof is now complete.
## References
* [1] G. Basso, Computation of maximal projection constants, J. Funct. Anal. 277/10 (2019), 3560–3585.
* [2] G. Basso, Almost minimal orthogonal projections, Isr. J. Math. 243 (2021), 355–376.
* [3] F. Caro Perez, V. Gonzalez Avella, D. Goyeneche, Mutually unbiased frames, arXiv preprint arXiv:2110.08293 (2021).
* [4] A. Castejon, G. Lewicki, M. Martin, Some results on absolute projection constant, Numer. Func. Anal. Optim. 40/1 (2019), 34–51.
* [5] B. L. Chalmers, G. Lewicki, Three-dimensional subspace of $l_{\infty}^{(5)}$ with maximal projection constant, J. Funct. Anal. 257/2 (2009), 553–592.
* [6] B. L. Chalmers, G. Lewicki, A proof of the Grünbaum conjecture, Studia Math. 200 (2010), 103–129.
* [7] M. Fickus, J. W. Iverson, J. Jasper, E. J. King, Grassmannian codes from paired difference sets, Des. Codes Cryptogr. 89 (2021) 2553–2576.
* [8] M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, Hadamard equiangular tight frames, Appl. Comput. Harmon. Anal. 50 (2021) 281–302.
* [9] M. Fickus, B. R. Mayo, Mutually unbiased equiangular tight frames, IEEE Trans. Inform. Theory 67/3 (2020), 1656–1667.
* [10] M. Fickus, D. G. Mixon, Tables of the existence of equiangular tight frames, arXiv:1504.00253 (2016).
* [11] M. Fickus, D. G. Mixon, J. Jasper, Equiangular tight frames from hyperovals, IEEE Trans. Inform. Theory 62/9 (2016) 5225–5236.
* [12] S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing, Birkhäuser, 2013.
* [13] S. Foucart, L. Skrzypek, On maximal relative projection constants, J. Math. Anal. Appl. 447/1 (2017), 309–328.
* [14] B. Grünbaum, Projection constants, Trans. Amer. Math. Soc. 95 (1960), 451–465.
* [15] D. Hughes, S. Waldron, Spherical half-designs of high order, Involve, a Journal of Mathematics 13/2 (2020), 193–203.
* [16] J. Jasper, D. G. Mixon, M. Fickus, Kirkman equiangular tight frames and codes, IEEE Trans. Inform. Theory 60/1 (2014) 170–181.
* [17] I. M. Kadec, M. G. Snobar, Certain functionals on the Minkowski compactum, Math. Notes 10 (1971), 694–696 (English transl.).
* [18] H. König, Spaces with large projection constants, Isr. J. Math. 50/3 (1985), 181–188.
* [19] H. König, N. Tomczak-Jaegermann, Norms of minimal projections, J. Funct. Anal. 119/2 (1994), 253–280.
* [20] H. König, D. Lewis, P.-K. Lin, Finite dimensional projection constants, Studia Mathematica 75/3 (1983), 341–358.
* [21] F. Sokolowski, Minimal projections onto subspaces with codimension 2, Numer. Funct. Anal. Optim. 38/8 (2017), 1045–1059.
* [22] P. Wojtaszczyk, Banach Spaces for Analysts, Cambridge University Press, Cambridge, 1991.
# A Benchmarking on Cloud-based Speech-To-Text Services for French Speech and Background Noise Effect

(6th National Conference on Practical Applications of Artificial Intelligence, 2021, Bordeaux, France)
Binbin Xu1*, Chongyang Tao1+, Zidu Feng1+, Youssef Raqui2, Sylvie Ranwez1*
1 EuroMov Digital Health in Motion, Univ Montpellier, IMT Mines Alès
2 DiappyMed
###### Abstract
This study presents a large-scale benchmarking of cloud-based Speech-To-Text systems: Google Cloud Speech-To-Text, Microsoft Azure Cognitive Services, Amazon Transcribe, and IBM Watson Speech to Text. For each system, $40\,158$ clean and noisy speech files, about $101$ hours in total, are tested. The effect of background noise on STT quality is also evaluated with 5 different signal-to-noise ratios, from 40 dB to 0 dB. Results showed that Microsoft Azure provided the lowest transcription error rate, $9.09\%$ on clean speech, with high robustness to noisy environments. Google Cloud and Amazon Transcribe gave similar performance, but the latter is very limited for time-constrained usage. Though IBM Watson could work correctly in quiet conditions, it is highly sensitive to noisy speech, which could strongly limit its application in real-life situations.
### Résumé
While speech recognition applications have become part of our daily lives, few large-scale studies compare the performance of state-of-the-art solutions, all the more so for languages other than English. This article proposes such a comparative analysis based on 17 hours of recordings in French. Four systems are analyzed: Google Cloud Speech-To-Text, Microsoft Azure Cognitive Services, Amazon Transcribe, and IBM Watson Speech to Text. With each of them subjected to five levels of background noise, the equivalent of 400 hours of speech is analyzed. Microsoft Azure Cognitive Services showed the best results in terms of error rate and good resistance to noise, while the noise sensitivity of IBM Watson Speech to Text compromises its use in real-life situations.
### Keywords
Speech-To-Text, Benchmarking, French language, Google Cloud, Microsoft Azure
Cognitive Services, Amazon Transcribe, IBM Watson
## 1 Introduction
Many applications of automated speech recognition (ASR), or Speech-To-Text (STT), have been developed over the past few years to improve our daily life, such as personal voice assistants, or have been deeply integrated into business chains. Thanks to the substantial development of deep neural networks (DNN), the performance of STT has drastically improved. Like other deep neural network applications, it is now not surprising that, in some situations, current STT can even outperform humans. The IBM/Appen human transcription study [1] showed that the word error rate of human parity is about $5.1\%$; Microsoft Research was the first team to reach this milestone. However, the outstanding performance of DNNs rests on large amounts of labeled training data. This is also the case for DNN models for STT. For languages other than English, there is much less high-quality audio data available. In consequence, STT performance on other languages is in general lower than for English, especially for languages featuring rich morphology like French.
Though many public deep neural network models are available for offline use, retraining or regularly updating them requires extensive computing power, which prevents individuals or small businesses from accessing these models or using them efficiently. The natural choice is then cloud-based API services; in fact, the most powerful STT systems are all cloud-based. Integrating these systems into an application or a product line requires first a benchmarking of their performance. Many benchmarking studies of cloud-based STT services exist. However, they are often conducted with very small sample sizes, for example, 20–60 sentences or a few hundred sentences. The benchmarking on English from Picovoice is one of the few large-scale tests of STT; it contains 2620 audio files (5h24m) from the LibriSpeech dataset [2]. Benchmarking of cloud-based STT on French is even less studied. Another major negative factor in STT performance is background noise. Very often, only clean speech recordings are processed. However, for most real-life applications, background noise can hardly be avoided, so it should be taken into account in STT benchmarking as well.
The objective of this study is to benchmark the four most used Speech-To-Text APIs (Application Programming Interfaces) with a large French dataset: 6693 files, about 17 hours of recorded speech. Five levels of common background noise are added to the clean speech and evaluated additionally. In total, across the four services, more than 400 hours of speech are transcribed.
## 2 Speech-To-Text system and data
### 2.1 Cloud based STT services
Four cloud based Speech-To-Text services are evaluated in this work:
* •
Amazon Transcribe is part of the Cloud Computing Services from Amazon Web Services (AWS), which currently holds the largest share of the Cloud Computing market. Their recent speech recognition model for English reached a state-of-the-art word error rate of $6.2\%$ [3]. To convert speech files to text, the data must first be uploaded to the Amazon Simple Storage Service (Amazon S3); Transcribe then reads the objects from S3 for transcription (a sketch of this workflow is given after this list). Though Transcribe jobs can be processed in batch mode (up to 100 parallel jobs), this S3 requirement adds complexity to the transcription tasks. Amazon Transcribe is in fact the only STT requiring storage; the other three services can be fed directly with audio files.
* •
Google Cloud Speech-to-Text is integrated in the widely used platform Google Cloud. In 2012, Google Research had achieved a word error rate of $15\%$ for English broadcast news transcription. This error rate dropped considerably to $5.6\%$ with an updated model trained on over $12\,500$ hours of audio in 2018 [3]. Their STT model is one of the most powerful on the market, and its performance is continuously improving.
* •
IBM Watson Speech-to-Text. IBM Watson is a long-standing top player in speech recognition. In 2015, their speech recognition system beat other models with a word error rate of $8\%$ [4]. Two years later, their system reached $5.5\%$ [1]. It is now among the most popular STT services and provides similar features as the other cloud STTs.
* •
Microsoft Azure Cognitive Services. Microsoft's speech recognition is now one of the leading STT services. In 2017, their model reached a historic human-parity milestone on conversational telephony speech transcription, with a $5.1\%$ word error rate on the benchmark Switchboard task [5]. As with all the other STT systems, Microsoft's STT system is integrated in its Cloud Computing platform.
All four STT services offer the possibility to customize (e.g., with domain-specific vocabulary) or retrain the Speech-to-Text models. However, since they are all black-box APIs whose underlying models and architectures are unknown, it is difficult to benchmark customized models with different configurations in a fair way. So, only the basic models (APIs) are called.
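As an illustration of the Amazon Transcribe workflow mentioned in the list above, here is a minimal Python sketch using boto3; the bucket name, job name and file names are hypothetical placeholders.

```python
# Minimal sketch of the Amazon Transcribe workflow (names are placeholders).
import boto3

s3 = boto3.client("s3")
transcribe = boto3.client("transcribe")

bucket = "stt-benchmark-bucket"          # hypothetical S3 bucket
key = "clean/utterance_0001.wav"

# Transcribe only reads from S3, so the audio must be uploaded first.
s3.upload_file("utterance_0001.wav", bucket, key)

# Start an asynchronous transcription job with the French model.
transcribe.start_transcription_job(
    TranscriptionJobName="bench-utterance-0001",
    Media={"MediaFileUri": f"s3://{bucket}/{key}"},
    MediaFormat="wav",
    LanguageCode="fr-FR",
)
# The job status (and the transcript URI once finished) can then be polled
# with get_transcription_job(TranscriptionJobName="bench-utterance-0001").
```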
### 2.2 Speech corpus
The basic audio dataset in this work is from WCE-SLT-LIG [6, 7]. This corpus
contains 6693 speech utterances recorded by 42 native speakers.
Figure 1: Main topic of WCE-SLT-LIG corpus
The utterances come from French news media, with European economy as the main topic. The total audio duration is 16h52. The ground-truth transcriptions are also available, which makes the benchmarking possible.
The number of words per utterance in this corpus is $22\pm 12.8$ (median $\pm$ standard deviation), with audio duration $8.4\pm 4.6$ seconds, as shown in Figure 2.
Figure 2: Distribution of WCE-SLT-LIG corpus
### 2.3 Environmental noise corpus
In real-world cases, most speech takes place in noisy environments. This is one of the main challenges in Speech-to-Text applications. To evaluate the effects of noise on STT quality, we introduce another recently released environmental noise dataset: the Microsoft Scalable Noisy Speech Dataset (MS-SNSD) [8]. The dataset provides a variety of common environmental noises, which can be mixed into clean speech data. The signal-to-noise ratio (SNR) in dB can be configured as well:
$\mathrm{SNR_{dB}}=10\log_{10}\left(\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}}\right),$
where $P_{\mathrm{signal}}$ and $P_{\mathrm{noise}}$ are the powers of the signal and of the background noise. We set 5 SNR cases here: 40 dB, 30 dB, 20 dB, 10 dB and 0 dB (1:1 signal vs. noise).
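As a rough sketch (our code, not the exact MS-SNSD recipe), noise can be mixed into clean speech at a target SNR by rescaling the noise power accordingly:

```python
# Sketch: mix noise into clean speech at a target SNR in dB.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    # Loop the noise so that it covers the whole utterance, then trim it.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    # Average powers of the two signals.
    p_signal = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale the noise so that 10 log10(p_signal / p_noise_scaled) = snr_db.
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```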
The raw MS-SNSD contains 181 noise files. However, many of them contain prominent conversations in other languages (English, German, etc.), and some noise types are less common, so these noises are excluded. Since we want to evaluate the effect of noise type on STT performance while making sure that no type is over-represented, 96 noise files in 18 types are kept.
AirConditioner | Kitchen | SqueakyChair
---|---|---
AirportAnnouncements | LivingRoom | Station
Babble | Munching | Traffic
Cafe | Restaurant | Typing
CafeTeria | ShuttingDoor | VacuumCleaner
CopyMachine | Square | WasherDryer
Table 1: Types of background noise used in this work (96 noise files in 18 types)
### 2.4 Evaluation metrics
In Speech-to-Text, the most commonly used metric to evaluate performance is the word error rate (WER). Other metrics exist, such as the match error rate (MER), word information lost (WIL), and word information preserved (WIP) [9]:
$\displaystyle WER$ $\displaystyle=\frac{S+D+I}{N_{1}},\quad N_{1}=H+S+D,$ (1)
$\displaystyle MER$ $\displaystyle=\frac{S+D+I}{N},\quad N=H+S+D+I,$ (2)
$\displaystyle WIP$ $\displaystyle=\frac{H}{N_{1}}\cdot\frac{H}{N_{2}}\cong\frac{I(X,Y)}{H(Y)},$ (3)
$\displaystyle WIL$ $\displaystyle=1-WIP,$ (4)
where $H$, $S$, $D$ and $I$ are the total numbers of word hits, substitutions, deletions and insertions, and $N_{1}$ and $N_{2}$ are respectively the numbers of words in the ground-truth text and in the output transcript. The lower WER, MER and WIL are, the better the performance is.
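A minimal Python sketch (ours) of how the counts $H$, $S$, $D$, $I$ can be obtained from a standard Levenshtein alignment, and hence the WER:

```python
# Sketch: counts H, S, D, I from a Levenshtein alignment, hence the WER.
def wer_counts(ref: str, hyp: str):
    r, h = ref.split(), hyp.split()
    # dp[i][j] = minimal number of edits turning r[:i] into h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    # Backtrack to count hits, substitutions, deletions and insertions.
    i, j, H, S, D, I = len(r), len(h), 0, 0, 0, 0
    while i > 0 or j > 0:
        if i > 0 and j > 0 and r[i - 1] == h[j - 1] and dp[i][j] == dp[i - 1][j - 1]:
            H, i, j = H + 1, i - 1, j - 1
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            S, i, j = S + 1, i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            D, i = D + 1, i - 1
        else:
            I, j = I + 1, j - 1
    return H, S, D, I

H, S, D, I = wer_counts("le taux d erreur", "le taux erreur")
wer = (S + D + I) / (H + S + D)  # one deletion over four reference words: 0.25
```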
| Amazon | Google | IBM | Microsoft
---|---|---|---|---
WER | 11.76% | 14.29% | 14.81% | 9.09%
MER | 11.54% | 14.29% | 14.29% | 9.09%
WIL | 0.19 | 0.25 | 0.24 | 0.16
Table 2: Evaluation on clean audio, median values (STTs accessed in February 2021)
## 3 Results
### 3.1 Clean speech
For clean speech, Microsoft Azure performed quite well, with a WER at $9.09\%$
which is close to the advertised rate. Amazon Transcribe took the second place
with WER $11.76\%$. Google Cloud and IBM Waston gave similar WER ($14.29\%$
and $14.81\%$). These WER are actually very good already. According to the
public DeepSpeech model [10] from Mozila, trained with a mixed French dataset
”CommonVoice + CssTen + LinguaLibre + Mailabs + Tatoeba + Voxforge”, the WER
on test dataset is $19.5\%$ (result retrieved on March 10th 2021) [11]. The
gain with cloud STT API is between $24\%-53\%$.
### 3.2 Noisy speech
After mixing five different levels of environmental noise, Microsoft Azure
gave a quite good global WER $11.11\%$ (Table 3). Amazon Transcribe and Google
Cloud showed the same WER at $20\%$. But IBM Waston failed at certain point.
Its global WER is $29.63\%$, with a word-information-lost rate at $43\%$
(0.43) which is unfortunately high.
| Amazon | Google | IBM | Microsoft
---|---|---|---|---
WER | 20.00% | 20.00% | 29.63% | 11.11%
MER | 19.64% | 20.00% | 28.57% | 11.11%
WIL | 0.31 | 0.33 | 0.43 | 0.19
Table 3: Evaluation on all noisy audio (5 SNR levels combined), median values
At individual SNR levels, as shown in Figure 3, Microsoft Azure is the most robust to noise; the variation across noise levels is quite small. In highly noisy environments, the WER of IBM Watson transcriptions can exceed $100\%$, while the other STTs stay, at worst, below $50\%$.
Figure 3: Evaluation on mixed noisy speech for the five signal-to-noise-ratio levels; upper, WER distributions; lower, median value for each level. (STTs accessed in February 2021)
The exceptional STT performance of Microsoft Azure is likely due to Microsoft's intensive work on Artificial-Intelligence-based noise suppression. The environmental noise dataset MS-SNSD itself comes from Microsoft, and noise suppression is presumably already part of the pipeline of their Speech-to-Text models. Indeed, in December 2020 Microsoft introduced background noise suppression in Microsoft Teams meetings [12]. To achieve this, they used 760 hours of clean speech data and 180 hours of noise data. These data have now been released for the Interspeech 2021 Deep Noise Suppression Challenge [13].
Figure 4: WER by different noise types; median WER values from tests on all
the five SNR noisy speech. (STTs accessed in February 2021)
The performance of STT also depends on the noise type. All the STT services
are sensitive to the noise type _Restaurant_: IBM Watson's WER reached $46.51\%$,
and Amazon Transcribe also had a high WER for this type of noise. Google Cloud
and Microsoft Azure dealt with it better, without sharp WER changes. Background
noise in a _Restaurant_ environment can be a mixture of different noises (babble,
conversation, munching, traffic, etc.), which makes it more difficult for
Speech-to-Text tasks. In general, Google Cloud and Microsoft Azure are more
robust to environmental noise (the variation and standard deviation of the
median WER are $6.5\%$ and $2.6\%$ for Google Cloud, and $1.4\%$ and $1.2\%$ for
Microsoft Azure); Amazon Transcribe can be placed in second rank with $24\%$ and
$4.9\%$. As for IBM Watson, as shown previously, it can fail in many cases
when the background noise is too strong; it also suffered strong performance
variation, $53.7\%$, with a standard deviation of the median WER of $7.3\%$.
### 3.3 Main STT errors
The main source of errors contributing to WER is substitution. For clean
or less noisy speech, the percentage of substitutions $S$ is generally
much higher than that of deletions $D$ and insertions $I$. When the speech becomes
highly noisy (SNR lower than $10\text{\,}\mathrm{dB}$), the deletion percentage
increases much more. The STT service from Microsoft Azure is quite robust to noisy
environments: there is practically no change from SNR
$40\text{\,}\mathrm{dB}$ down to $10\text{\,}\mathrm{dB}$. Only in the tested case
where noise and speech were mixed directly at $0\text{\,}\mathrm{dB}$ did the
deletions $D$ and substitutions $S$ increase slightly. However, the changes are
much more significant for the other three STT services, especially for IBM Watson.
Figure 5: Distribution of the main transcription errors (mean values; the median
percentage values at lower SNRs are zero and thus less meaningful for presentation)
There is also an inter-speaker difference in WER. Among the 42 speakers, all
four STTs had more difficulty transcribing speech from speaker L23_P08.
Figure 6: Word Error Rate by speaker (median values) on clean speech for all
four STT systems.
### 3.4 Transcription job time
In a production application, the STT service must be as responsive as
possible. Google Cloud is the fastest of the four tested APIs, with a
median value of 1.76 seconds per job. Microsoft Azure is also fast, at 3.51 seconds
per transcription job. IBM Watson is slower, requiring 5.43 seconds to
complete a job. It is not surprising that Amazon Transcribe is the slowest
STT service, with 27 seconds per job; some transcription jobs can take up to
200 seconds. Even though it is possible to send up to 100 jobs in parallel, such
single-job waiting times are not acceptable for any real-world application. This
time requirement does not even include the data transfer to Amazon S3 storage: with
upload speeds of 100-700 kbps, transferring a large amount of data can already
take quite some time. Although it is possible to call Amazon Transcribe in
streaming mode, this is not convenient for non-real-time scenarios.
One potential reason for the additional seconds taken by Google Cloud and
IBM Watson could be that Microsoft Azure returns less complete
transcription information than the other three.
Figure 7: Transcription job time in seconds for all four STT systems
Another observation concerns server responsiveness: the job completion time with
Microsoft Azure is almost linear in the audio duration, and the variation is
very tight. For the other APIs, although the relationship can still be regarded as
linear, the variation is much larger: speeches of the same length may require
2 to 4 times more execution time to complete.
## 4 Discussion
In this work, we evaluated the four most used Speech-to-Text APIs on French
speech from four cloud computing providers: Amazon Transcribe, Google Cloud,
IBM Watson and Microsoft Azure. Five levels of different environmental noise
were mixed with 6690 clean speech recordings (17 hours). With 100 hours of
speech tested per STT API, this amounted to 400 hours of speech transcription.
The results showed that Microsoft Azure's STT service provided the lowest Word
Error Rate (median 9%). It is also very robust to common environmental noise:
even in strongly noisy environments, the median WERs are only around 16.67%. The
STT services from Amazon Transcribe and Google Cloud performed well, with WERs of
11.76% and 14% respectively; Amazon Transcribe works better in relatively
quiet environments while Google Cloud is better for noisy speech. IBM Watson's
STT service can provide reasonable results, with a median WER of 14.29%, but
when the speech is recorded in a noisy environment the WER can climb to around
70%, which is difficult to use. In general, when the signal-to-noise ratio
is higher than $20\text{\,}\mathrm{dB}$, the WERs remain acceptable.
However, if the SNR drops below $20\text{\,}\mathrm{dB}$, all the APIs except
Microsoft Azure have difficulties recognizing the speech correctly. Among the
18 environmental noise types, the Restaurant type is the most
difficult one for all four STT APIs.
When the work is time-constrained, Google Cloud is the first choice, with the
fastest response time and a reasonable word error rate. Amazon Transcribe can
be used when the project is already built on the Amazon Web Services platform;
parallel jobs can help reduce the total transcription time, but the per-job
time is far longer than for any other STT service: on average, one transcription
job on Amazon Transcribe takes 15 times longer than the same job on Google Cloud.
Otherwise, the general suggestion is Microsoft Azure, with the lowest WER and
high robustness to noise; it is the most suitable for precision-constrained
applications.
## References
* [1] G. Saon, G. Kurata, T. Sercu, K. Audhkhasi, S. Thomas, D. Dimitriadis, X. Cui, B. Ramabhadran, M. Picheny, L.-L. Lim, _et al._ , “English conversational telephone speech recognition by humans and machines,” _arXiv preprint arXiv:1703.02136_ , 2017. [Online]. Available: https://arxiv.org/abs/1703.02136
* [2] Picovoice, “Speech-to-text benchmark,” _GitHub_ , 2020. [Online]. Available: https://github.com/Picovoice/speech-to-text-benchmark
* [3] C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani, “State-of-the-art speech recognition with sequence-to-sequence models,” in _2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , April 2018, pp. 4774–4778.
* [4] G. Saon, H.-K. J. Kuo, S. Rennie, and M. Picheny, “The ibm 2015 english conversational telephone speech recognition system,” _arXiv preprint arXiv:1505.05899_ , 2015. [Online]. Available: https://arxiv.org/abs/1505.05899
* [5] W. Xiong, L. Wu, F. Alleva, J. Droppo, X. Huang, and A. Stolcke, “The microsoft 2017 conversational speech recognition system,” in _2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , April 2018, pp. 5934–5938.
* [6] L. Besacier, B. Lecouteux, N.-Q. Luong, K. Hour, and M. Hadj Salah, “Word confidence estimation for speech translation,” in _International Workshop on Spoken Language Translation_ , Lake Tahoe, United States, Dec. 2014\.
* [7] N.-T. Le, B. Lecouteux, and L. Besacier, “Joint asr and mt features for quality estimation in spoken language translation,” in _International Workshop on Spoken Language Translation_ , Seattle, United States, Dec. 2016.
* [8] C. K. Reddy, E. Beyrami, J. Pool, R. Cutler, S. Srinivasan, and J. Gehrke, “A scalable noisy speech dataset and online subjective test framework,” in _Proc. Interspeech 2019_ , 2019, pp. 1816–1820.
* [9] A. C. Morris, V. Maier, and P. Green, “From wer and ril to mer and wil: improved evaluation measures for connected speech recognition,” in _Eighth International Conference on Spoken Language Processing_ , 2004.
* [10] A. Y. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, and A. Y. Ng, “Deep speech: Scaling up end-to-end speech recognition,” _CoRR_ , vol. abs/1412.5567, 2014\.
* [11] Jaco-Assistant, “Deepspeech-polyglot,” _GitLab_ , 2021. [Online]. Available: https://gitlab.com/Jaco-Assistant/deepspeech-polyglot
* [12] Microsoft, “Reduce background noise in microsoft teams meetings with ai-based noise suppression,” 2020. [Online]. Available: https://techcommunity.microsoft.com/t5/microsoft-teams-blog/reduce-background-noise-in-microsoft-teams-meetings-with-ai/ba-p/1992318
* [13] C. K. Reddy, H. Dubey, K. Koishida, A. Nair, V. Gopal, R. Cutler, S. Braun, H. Gamper, R. Aichner, and S. Srinivasan, “Interspeech 2021 deep noise suppression challenge,” _arXiv preprint arXiv:2101.01902_ , 2021. [Online]. Available: https://arxiv.org/abs/2101.01902
# Two-color pulse compounds in waveguides with a zero-nonlinearity point
O. Melchert<EMAIL_ADDRESS>Leibniz Universität Hannover,
Institute of Quantum Optics (IQO), Welfengarten 1, 30167 Hannover, Germany
Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering -
Innovation Across Disciplines), Welfengarten 1A, 30167 Hannover, Germany S.
Bose Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering -
Innovation Across Disciplines), Welfengarten 1A, 30167 Hannover, Germany
Leibniz Universität Hannover, Institute of Photonics (IOP), Nienburger Str. 17,
30167 Hannover S. Willms Leibniz Universität Hannover, Institute of Quantum
Optics (IQO), Welfengarten 1, 30167 Hannover, Germany Cluster of Excellence
PhoenixD (Photonics, Optics, and Engineering - Innovation Across Disciplines),
Welfengarten 1A, 30167 Hannover, Germany I. Babushkin Leibniz Universität
Hannover, Institute of Quantum Optics (IQO), Welfengarten 1, 30167 Hannover,
Germany Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering -
Innovation Across Disciplines), Welfengarten 1A, 30167 Hannover, Germany U.
Morgner Leibniz Universität Hannover, Institute of Quantum Optics (IQO),
Welfengarten 1, 30167 Hannover, Germany Cluster of Excellence PhoenixD
(Photonics, Optics, and Engineering - Innovation Across Disciplines),
Welfengarten 1A, 30167 Hannover, Germany A. Demircan Leibniz Universität
Hannover, Institute of Quantum Optics (IQO), Welfengarten 1, 30167 Hannover,
Germany Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering -
Innovation Across Disciplines), Welfengarten 1A, 30167 Hannover, Germany
###### Abstract
We study incoherently coupled two-frequency pulse compounds in waveguides with
single zero-dispersion and zero-nonlinearity points. In such waveguides,
supported by a negative nonlinearity, soliton dynamics can be obtained even in
domains of normal dispersion. We demonstrate trapping of weak pulses by
solitary-wave wells, forming nonlinear-photonics meta-atoms, and molecule-like
bound-states of pulses. We study the impact of Raman effect on these pulse
compounds, finding that, depending on the precise subpulse configuration, they
decelerate, accelerate, or are completely unaffected. Our results extend the
range of systems in which two-frequency pulse compounds can be expected to
exist and demonstrate further unique and unexpected behavior.
### Introduction
The incoherent interaction of optical pulses is a central concern in nonlinear
optics. For instance, strong and efficient control of light pulses has been
shown for a soliton, which induces a strong refractive index barrier that
cannot be surpassed by quasi group-velocity matched waves located in a domain
of normal dispersion Demircan et al. (2013); Demircan et al. (2014a),
resulting in mutual repulsion. This mechanism is naturally supported by the
supercontinuum generation process Driben et al. (2010); Demircan et al.
(2014b). A transfer of this concept to waveguides supporting group-velocity
matched copropagation of pulses in separate domains of anomalous dispersion
yields an entirely attractive interaction Melchert et al. (2019a). In this
case, cross-phase modulation (XPM) induced potential wells provide a binding
mechanism that enables molecule-like bound states of pulses. They form a single
compound pulse, consisting of two subpulses at vastly different frequencies.
These objects were previously studied by putting emphasis on their frequency-
domain representation, showing that a soliton can act as localized trapping
potential with discrete level spectrum Melchert et al. (2019a), supporting the
formation of two-frequency pulse compounds in cases where both subpulse-
amplitudes are of similar size Melchert et al. (2019a); Melchert and Demircan
(2021). Perturbations of various types were studied in this context Melchert
et al. (2021); Willms et al. (2022); Oreshnikov et al. (2022). A complementary
approach in terms of a multi-scales analysis, putting emphasis on the
representation in the time domain, showed that they form a class of
generalized dispersion Kerr solitons which can be described using the concept
of a meta-envelope Tam et al. (2019). Such two-color solitons were recently
verified experimentally in mode-locked laser cavities Lourdesamy et al.
(2021); Mao et al. (2021). Here, we extend the range of systems in which such
pulse compounds can be observed. We consider waveguides with a single zero-
dispersion point and a single zero-nonlinearity point, where the nonlinear
coefficient is negative in the domain of normal dispersion. This setup allows
for group-velocity matching within a large range of frequencies, and allows
insight into the complex interplay of sign changing nonlinear and dispersive
effects. Photonic-crystal fibers with frequency dependent nonlinearity with
the above properties can be obtained by doping with nanoparticles Driben et
al. (2009); Bose et al. (2016a, b); Arteaga-Sierra et al. (2018); Linale et
al. (2020); Hernandez et al. (2022). Noble-gas-filled hollow-core waveguides
also offer the possibility of a negative nonlinear refractive index within a
domain of normal dispersion Junnarkar and Uesugi (2000). For a model system with the
above properties, we demonstrate the existence of trapped states in solitary-
wave wells, show that two-frequency pulse compounds with mutually bound
subpulses of similar amplitudes are supported, and discuss the dynamics of
such pulse complexes in presence of the Raman effect. The latter leads to the
surprising finding that, when the center frequency of the solitary wave-well
shifts, a trapped state of higher order can transit into the ground-state. For
our analysis we consider two-frequency pulse compounds for which the subpulses
can be well distinguished in the frequency domain, so that their mutual
interaction can be described by an incoherent interaction stemming from XPM
alone.
### Generalized nonlinear Schrödinger equation.
Subsequently, we model pulse propagation in waveguides with frequency-
dependent nonlinearity in terms of the generalized nonlinear Schrödinger
equation (GNSE) Agrawal (2019); Bose et al. (2016a); Zhao et al. (2022)
$\displaystyle i\partial_{z}A=$ $\displaystyle-\sum_{n\geq
2}\frac{\beta_{n}}{n!}(i\partial_{t})^{n}A-(1-f_{R})\gamma_{\mathrm{eff}}|A|^{2}A$
$\displaystyle-f_{R}\gamma
A\int_{0}^{\infty}h_{R}(t^{\prime})|A(z,t-t^{\prime})|^{2}~{}{\mathrm{d}}t^{\prime},$
(1)
for a complex-valued envelope $A=A(z,t)$. Therein, time $t$ is measured in a
reference frame moving with the group velocity at $\omega_{0}\approx
2.2559\,\mathrm{rad/fs}$, and $z$ is the propagation distance. Following Ref.
Zhao et al. (2022), the dispersion coefficients are taken as
$\beta_{2}=-1.183\times 10^{-2}\,\mathrm{fs^{2}/\upmu m}$,
$\beta_{3}=8.10383\times 10^{-2}\,\mathrm{fs^{3}/\upmu m}$,
$\beta_{4}=-9.5205\times 10^{-2}\,\mathrm{fs^{4}/\upmu m}$,
$\beta_{5}=0.20737\,\mathrm{fs^{5}/\upmu m}$,
$\beta_{6}=-0.53943\,\mathrm{fs^{6}/\upmu m}$,
$\beta_{7}=1.3486\,\mathrm{fs^{7}/\upmu m}$,
$\beta_{8}=-2.5495\,\mathrm{fs^{8}/\upmu m}$,
$\beta_{9}=3.0524\,\mathrm{fs^{9}/\upmu m}$, and
$\beta_{10}=-1.7140\,\mathrm{fs^{10}/\upmu m}$. As function of the angular
frequency detuning $\Omega=\omega-\omega_{0}$, they define the propagation
constant $\beta(\Omega)=\sum_{n=2}^{10}\beta_{n}\Omega^{n}/n!$, with relative
group delay $\beta_{1}(\Omega)=\partial_{\Omega}\beta(\Omega)$ [Fig. 1(a)] and
group-velocity dispersion
$\beta_{2}(\Omega)=\partial_{\Omega}^{2}\beta(\Omega)$ [Fig. 1(b)]. The
nonlinear coefficients are modeled as
$\gamma(\Omega)=\gamma_{0}+\gamma_{1}\Omega$, with
$\gamma_{0}=0.11\,\mathrm{W^{-1}/m}$ and $\gamma_{1}=4.8728\times
10^{-5}\,\mathrm{ps\,W^{-1}/m}$, and as
$\gamma_{\rm{eff}}(\Omega)=\gamma_{0,\mathrm{eff}}+\gamma_{1,\mathrm{eff}}\Omega$,
with $\gamma_{0,\mathrm{eff}}=0.7453\,\mathrm{W^{-1}/m}$, and
$\gamma_{1,\mathrm{eff}}=-4.6822\times 10^{-3}\,\mathrm{ps\,W^{-1}/m}$ [Fig.
1(c)]. For the considered parameters, the zero-dispersion point, defined by
$\beta_{2}(\Omega_{\rm{ZDP}})=0$, and the zero-nonlinearity point, defined by
$\gamma_{\rm{eff}}(\Omega_{\rm{ZNP}})=0$, are at
$\Omega_{\rm{ZDP}}\approx\Omega_{\rm{ZNP}}\approx 0.16\,\mathrm{rad/fs}$. The
Raman effect is included as
$h_{R}(t)=(\tau_{1}^{2}+\tau_{2}^{2})\tau_{1}^{-1}\tau_{2}^{-2}\,\exp(-t/\tau_{2})\,\sin(t/\tau_{1})$
with $f_{R}=0.18$, $\tau_{1}=12.2\,\mathrm{fs}$, and
$\tau_{2}=32\,\mathrm{fs}$ Blow and Wood (1989). For the solution of Eq. (1)
with $f_{R}=0.18$ we use a split-step Fourier method Agrawal (2019). When
neglecting the Raman effect, i.e. for $f_{R}=0$, we use the conservation
quantity error method Heidt (2009); Melchert and Demircan (2022). To assess
time-frequency interrelations within $A(z,t)$, we use the spectrogram
$P_{S}(t,\Omega)=\left|\int
A(z,t^{\prime})\exp\left[-(t^{\prime}-t)^{2}/2\sigma^{2}-i\Omega
t^{\prime}\right]~{}{\rm d}t^{\prime}\right|^{2}$ Melchert et al. (2019b),
employing a Gaussian window function with root-mean-square width
$\sigma=50\,\mathrm{fs}$.
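As a rough illustration of this scheme, the following sketch (our own simplified code, not the solver used for the results reported below) propagates a sech-shaped input with a symmetric split-step Fourier method in the Raman-free, constant-$\gamma$ limit of Eq. (1); grid and step sizes are illustrative choices, and the signs in the exponentials have to be matched to the FFT convention in use.

```python
# Split-step Fourier sketch for a simplified limit of Eq. (1):
# f_R = 0 and a frequency-independent nonlinear coefficient.
# Units as in the text: t in fs, z in micrometers, |A|^2 in W.
import numpy as np
from math import factorial

Nt, t_max = 2**12, 4000.0                      # grid size, half-width (fs)
t = np.linspace(-t_max, t_max, Nt, endpoint=False)
Om = 2 * np.pi * np.fft.fftfreq(Nt, d=t[1] - t[0])   # detuning (rad/fs)

betas = {2: -1.183e-2, 3: 8.10383e-2, 4: -9.5205e-2, 5: 0.20737,
         6: -0.53943, 7: 1.3486, 8: -2.5495, 9: 3.0524, 10: -1.7140}
beta = sum(b * Om**n / factorial(n) for n, b in betas.items())  # 1/um

gamma_eff = 0.7453e-6                          # 0.7453 W^-1/m in W^-1/um

def step(A, dz):
    """Symmetric split step: half dispersion, nonlinearity, half dispersion."""
    A = np.fft.ifft(np.exp(0.5j * beta * dz) * np.fft.fft(A))
    A = A * np.exp(1j * gamma_eff * np.abs(A)**2 * dz)
    return np.fft.ifft(np.exp(0.5j * beta * dz) * np.fft.fft(A))

# sech input at the soliton frequency Omega_S = -0.20 rad/fs
t0, b2p, gp = 50.0, -0.0303, 1.68e-6           # fs, fs^2/um, W^-1/um
P0 = abs(b2p) / (gp * t0**2)                   # soliton peak power (W)
A = np.sqrt(P0) / np.cosh(t / t0) * np.exp(-1j * (-0.20) * t)
for _ in range(1000):                          # propagate 1 cm in 10-um steps
    A = step(A, dz=10.0)
```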
Figure 1: Specifics of the model. (a) Group-delay, (b) group-velocity
dispersion, and, (c) effective nonlinear coefficient. Dot and circle indicate
a pair of group-velocity matched pulses. Domain of normal dispersion is shaded
gray. (d) Potential strength as function of soliton center frequency
$\Omega_{\rm{S}}$. Labels on top indicate $\Omega_{\rm{TR}}$, i.e. group-
velocity matched frequencies at which trapped states exist. (e) Wavenumber
eigenvalues $\kappa^{\prime\prime}_{n}$ and potential depth $V_{0}=\min(V)$ as
function of $\Omega_{\rm{S}}$. Vertical dashed line indicates the pair of
frequencies in (a-c).
### Coupled nonlinear Schrödinger equations.
In search of incoherently coupled two-frequency pulse compounds, we
intentionally neglect the Raman effect and consider complex-valued envelopes
$A_{1}$, and $A_{2}$, of two group-velocity (GV) matched pulses with a vast
frequency gap [Fig. 1(a)], described by the two coupled nonlinear Schrödinger
equations (NSEs)
$\displaystyle
i\partial_{z}\,A_{1}-\frac{\beta_{2}^{\prime}}{2}\partial_{t}^{2}\,A_{1}+\gamma^{\prime}\left(|A_{1}|^{2}+2|A_{2}|^{2}\right)A_{1}=0,$
(2a) $\displaystyle
i\partial_{z}\,A_{2}-\frac{\beta_{2}^{\prime\prime}}{2}\partial_{t}^{2}\,A_{2}+\gamma^{\prime\prime}\left(|A_{2}|^{2}+2|A_{1}|^{2}\right)A_{2}=0.$
(2b)
The incoherently coupled NSEs (2) further neglect higher orders of dispersion
as well as four-wave mixing contributions between the two pulses. Their mutual
interaction is included via XPM alone. Considering the pair of GV matched
frequencies $\Omega_{1}=-0.20\,\mathrm{rad/fs}$, and
$\Omega_{2}=0.57\,\mathrm{rad/fs}$ [Fig. 1(a)], yields
$\beta_{2}^{\prime}=-0.0303\,\mathrm{fs^{2}/\upmu m}$,
$\gamma^{\prime}=1.68\,\mathrm{W^{-1}/m}$,
$\beta_{2}^{\prime\prime}=0.0234\,\mathrm{fs^{2}/\upmu m}$, and
$\gamma^{\prime\prime}=-1.91\,\mathrm{W^{-1}/m}$. This distinguishes the
present setup from earlier ones where
$\beta_{2}^{\prime},\,\beta_{2}^{\prime\prime}<0$, and
$\gamma^{\prime},\,\gamma^{\prime\prime}>0$ Melchert et al. (2019a). Below we
look for solutions to Eqs. (2) in the form
$\displaystyle A_{1}(z,t)=U_{1}(t)e^{i\kappa^{\prime}z},\quad\text{and}\quad
A_{2}(z,t)=U_{2}(t)e^{i\kappa^{\prime\prime}z},$ (3)
wherein $U_{1}$, $U_{2}$ are real-valued envelopes, and $\kappa^{\prime}$,
$\kappa^{\prime\prime}$ are the corresponding wave numbers. Substituting Eqs.
(3) into Eqs. (2) yields the two coupled ordinary differential equations
(ODEs)
$\displaystyle\ddot{U}_{1}-\frac{2}{\beta_{2}^{\prime}}\left[\gamma^{\prime}\left(|U_{1}|^{2}+2|U_{2}|^{2}\right)-\kappa^{\prime}\right]U_{1}=0,$
(4a)
$\displaystyle\ddot{U}_{2}-\frac{2}{\beta_{2}^{\prime\prime}}\left[\gamma^{\prime\prime}\left(|U_{2}|^{2}+2|U_{1}|^{2}\right)-\kappa^{\prime\prime}\right]U_{2}=0,$
(4b)
where the dots denote derivatives with respect to time.
Figure 2: Solitary-wave well with two trapped states. (a) Trapping potential
$V$, wavenumber eigenvalues $\kappa^{\prime\prime}_{n}$, and eigenfunctions
$\phi_{n}$, $n=0,1$. (b) Time-domain propagation dynamics of the soliton and
its trapped state $n=0$. (c) Corresponding spectrum. Filtered view in (b)
details the time-domain view of the frequency range enclosed by the dashed box
in (c). (d,e) Same as (b,c) for $n=1$.
### Trapped states.
Imposing the condition $\max(U_{2})\ll\max(U_{1})$ decouples Eqs. (4):
assuming Eq. (4a) to describe a freely propagating soliton
$U_{1}(t)=\sqrt{P_{0}}\,{\mathrm{sech}}(t/t_{0})$ with
$P_{0}=|\beta_{2}^{\prime}|(\gamma^{\prime}\,t_{0}^{2})^{-1}$ and
$\kappa^{\prime}=\gamma^{\prime}P_{0}/2$, Eq. (4b) takes the form of a
Schrödinger-type eigenvalue problem
$\displaystyle-(\beta_{2}^{\prime\prime}/2)\ddot{\phi}_{n}+V(t)\phi_{n}=\kappa_{n}^{\prime\prime}~{}\phi_{n},$
(5)
with trapping potential
$V(t)=2\gamma^{\prime\prime}P_{0}\,{\mathrm{sech}}^{2}(t/t_{0})$. Since
$\beta_{2}^{\prime\prime}>0$ at $\Omega_{2}=0.57\,\mathrm{rad/fs}$, the
attractive nature of $V$ is enabled by $\gamma^{\prime\prime}<0$ [Fig.
1(b,c)]. In Eq. (5), the wavenumber eigenvalues are real-valued and satisfy
$\kappa_{n}^{\prime\prime}<0$. To each eigenvalue corresponds an eigenfunction
$\phi_{n}(t)$ with $n$ zeros. In analogy to the Pöschl-Teller potential in
one-dimensional quantum scattering theory Landau and Lifshitz (1981); Lekner
(2007), which can be solved exactly, we write
$V(t)=-\nu\,(\nu+1)\,\beta_{2}^{\prime\prime}(2t_{0}^{2})^{-1}\,{\mathrm{sech}}^{2}(t/t_{0})$
with strength-parameter
$\nu=-1/2+\left[1/4+4|(\gamma^{\prime\prime}/\gamma^{\prime})(\beta_{2}^{\prime}/\beta_{2}^{\prime\prime})|\right]^{1/2}$.
The number of trapped states is $N_{\rm{TR}}=\lfloor\nu\rfloor+1$, where
$\lfloor\nu\rfloor$ is the integer part of $\nu$, and the wavenumber
eigenvalues are
$\kappa_{n}^{\prime\prime}=-\beta_{2}^{\prime\prime}\,(2t_{0}^{2})^{-1}\,(\nu-n)^{2}$,
with $n=0,\ldots,\lfloor\nu\rfloor$. Equation (5) suggests an analogy to
quantum mechanics, with $\phi_{n}$ assuming the role of the wavefunction of a
fictitious particle of mass $m=1/\beta_{2}^{\prime\prime}$, confined to a
localized trapping potential. The quantized number of trapped states is akin
to an atomic number, and a bare soliton, with none of its trapped states
occupied, resembles the nucleus of a one-dimensional atom. By this analogy,
the soliton along with its trapped states represents a nonlinear-photonics
meta-atom. The variation of the potential-strength $\nu$ and the discrete
level spectrum of the solitary-wave well $V$ as function of the soliton center
frequency $\Omega_{\rm{S}}$ are shown in Figs. 1(d,e): for decreasing
$\Omega_{\mathrm{S}}$, the trapping potential induced by the soliton features
an increasing number of bound states. An example for the choice
$\Omega_{\rm{S}}=-0.20\,\mathrm{rad/fs}$ and $t_{0}=50\,\mathrm{fs}$, with
$\Omega_{\rm{TR}}=0.57\,\mathrm{rad/fs}$ and $\nu\approx 1.98$ [Fig. 1(d)] is
detailed in Fig. 2. There exist $N_{\rm{TR}}=2$ trapped states at
$(\kappa_{0}^{\prime\prime},\,\kappa_{1}^{\prime\prime})=(-18.29,-4.47)\,\mathrm{m^{-1}}$,
given by $\phi_{0}(t)\propto{\mathrm{sech}}^{\nu}(t/t_{0})$, and
$\phi_{1}(t)\propto{\mathrm{sech}}^{\nu-1}(t/t_{0})\,{\mathrm{tanh}}(t/t_{0})$
[Fig. 2(a)]. In the vicinity of $\Omega_{\mathrm{TR}}$, due to
$\kappa_{0}^{\prime\prime},\,\kappa_{1}^{\prime\prime}<0$, a wavenumber-gap
separates the trapped states from linear waves with propagation constant
$\beta^{\prime\prime}=(\beta_{2}^{\prime\prime}/2)(\Omega-\Omega_{\mathrm{TR}})^{2}\geq
0$. The stable propagation of initial conditions
$A_{0}(t)=U_{1}(t)e^{-i\Omega_{\rm{S}}t}+\phi_{n}(t)e^{-i\Omega_{\rm{TR}}t}$,
with weak trapped states of amplitude $\max(|\phi_{n}|)=0.05\sqrt{P_{0}}$,
$n=0,1$, in terms of Eq. (1) in absence of the Raman effect ($f_{R}=0$) is
demonstrated in Figs. 2(b-e). To account for the change in group-velocity of
the soliton in presence of a linear variation of $\gamma$ Haus and Ippen
(2001), we consider
$v_{0}^{-1}=\beta_{1}(\Omega_{\mathrm{S}})+\gamma_{1,\rm{eff}}P_{0}$.
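Since the Pöschl-Teller formulas above are closed-form, the level spectrum of the solitary-wave well can be evaluated directly from the quoted waveguide parameters; the short sketch below reproduces $\nu\approx 1.98$, $N_{\rm{TR}}=2$, and $(\kappa_{0}^{\prime\prime},\kappa_{1}^{\prime\prime})\approx(-18.3,-4.5)\,\mathrm{m^{-1}}$.

```python
# Trapped-state count and wavenumber eigenvalues of the solitary-wave well,
# evaluated from the Poeschl-Teller formulas quoted in the text.
import numpy as np

t0 = 50.0                                     # soliton duration (fs)
b2p,  gp  = -0.0303, 1.68                     # at Omega_S  = -0.20 rad/fs
b2pp, gpp =  0.0234, -1.91                    # at Omega_TR =  0.57 rad/fs

nu = -0.5 + np.sqrt(0.25 + 4 * abs((gpp / gp) * (b2p / b2pp)))
N_TR = int(np.floor(nu)) + 1                  # number of trapped states
kappa = [-b2pp / (2 * t0**2) * (nu - n)**2 for n in range(N_TR)]  # in 1/um
print(nu, N_TR, [k * 1e6 for k in kappa])     # ~1.98, 2, ~[-18.3, -4.5] m^-1
```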
Figure 3: Incoherently coupled two-color pulse compounds. (a-c) Parameterized
solution of Eqs. (4) (see text for parameters). (a) Scaled amplitudes
$u_{n}=U_{0,n}/\sqrt{P_{0}}$, (b) pulse duration $t_{n}$, and, (c) shape
exponent $\nu_{n}$, $n=1,2$. (d) Pulse pair for
$\kappa^{\prime\prime}=-7.99\,\mathrm{m^{-1}}$, and, (e) pulse pair for
$\kappa^{\prime\prime}=-4.68\,\mathrm{m^{-1}}$. (f) Time-domain propagation
dynamics of the pulse pair in (d). (g) Corresponding spectrum. Filtered views
in (f) detail the time-domain view of the frequency ranges enclosed by the
dashed boxes in (g).
### Simultaneous solution of the coupled ODEs.
Solitary-wave solutions of the coupled nonlinear Eqs. (4) beyond the above
linear limit yield two-frequency pulse compounds of Eq. (1). Under suitable
conditions, such solutions can be specified analytically Haelterman et al.
(1993); Silberberg and Barad (1995); Afanasyev et al. (1989); Pelinovsky and
Kivshar (2000); Melchert and Demircan (2021). However, in order to obtain
solutions for general parameter settings, Eqs. (4) need to be solved
numerically. This is, e.g., possible via shooting methods Haelterman and
Sheppard (1994); Mitchell et al. (1997), spectral renormalization methods
Ablowitz and Musslimani (2005); Lakoba and Yang (2007), conjugate gradient
methods Lakoba (2009); Yang (2009), or Newton methods Dror and Malomed (2016).
We here employ a Newton method based on a boundary-value Runge-Kutta
algorithm Kierzenka and Shampine (2001). So as to systematically study
solutions to Eqs. (4) we set
$\kappa^{\prime}=|\beta_{2}^{\prime}|(2t_{0}^{2})^{-1}$ with
$t_{0}=50\,\mathrm{fs}$, and start at the location
$\kappa^{\prime\prime}=-20\,\mathrm{m^{-1}}$ in parameter space, i.e. below
the lowest eigenvalue obtained from Eq. (5). In this case we expect $U_{2}$ to
vanish, and $U_{1}$ to yield a fundamental soliton
$U_{1}(t)=\sqrt{P_{0}}\,{\mathrm{sech}}(t/t_{0})$ with
$P_{0}=|\beta_{2}^{\prime}|(\gamma^{\prime}\,t_{0}^{2})^{-1}$. We set initial
trial functions with parity similar to the soliton and the lowest lying
trapped state, and continue the obtained solutions to larger values of
$\kappa^{\prime\prime}$. The resulting solutions are of the form
$U_{n}(t)=U_{0,n}\,{\mathrm{sech}}^{\nu_{n}}(t/t_{n})$, $n=1,2$, with
parameters summarized in Figs. 3(a-c). Consistent with our results above, we
find that a weak nonzero solution $U_{2}$ with $t_{2}=t_{0}$ and
$\nu_{2}\approx 1.98$ originates at
$\kappa^{\prime\prime}\approx-18.3\,\mathrm{m^{-1}}$. Let us point out that
the above choice of $\max(\phi_{n})/\sqrt{P_{0}}=0.05$ indeed characterises
weak trapped states [Fig. 3(a)]. For
$\kappa^{\prime\prime}>-18.3\,\mathrm{m^{-1}}$, the amplitude of $U_{1}$
continuously decreases while that for $U_{2}$ increases. Above
$\kappa^{\prime\prime}\approx-4\,\mathrm{m^{-1}}$, $U_{1}$ vanishes and
$U_{2}$ describes a fundamental soliton with wavenumber
$\kappa^{\prime\prime}$. Let us note that at
$\kappa^{\prime\prime}\approx-4.68\,\mathrm{m^{-1}}$ we find a pair of
solutions with hyperbolic-secant shape
$U_{n}=U_{0,n}{\mathrm{sech}}(t/t_{0})$, $n=1,2$ [Fig. 3(e)], i.e. a two-color
soliton pair as in Ref. Melchert and Demircan (2021). The stable propagation
of an initial condition
$A_{0}(t)=U_{1}(t)e^{-i\Omega_{\rm{S}}t}+U_{2}(t)e^{-i\Omega_{\rm{TR}}t}$ with
$U_{0,1}\approx U_{0,2}$ [$\kappa^{\prime\prime}=-8\,\mathrm{m^{-1}}$; Fig.
3(d)] in terms of Eq. (1) with $f_{R}=0$ is demonstrated in Figs. 3(f,g). To
account for the change in group-velocity of the pulse compound in Fig. 3(f),
we consider
$v_{0}^{-1}=\beta_{1}(\Omega_{\mathrm{S}})+\gamma_{1,\rm{eff}}(U_{0,1}^{2}+2U_{0,2}^{2})$,
extending the group-velocity correction of Ref. Haus and Ippen (2001) to two-
color pulse compounds.
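For illustration, a minimal version of this boundary-value computation can be set up with SciPy's collocation solver, whose residual control follows the Kierzenka-Shampine algorithm cited above. The sketch below is a simplified setup under stated assumptions, not the authors' code: it looks for even solutions on a truncated half-line $[0,T]$ with $\dot{U}_{n}(0)=U_{n}(T)=0$ and starts from sech trial functions.

```python
# Sketch: even solitary-wave solutions of Eqs. (4) via collocation.
# Units: t in fs, z in m (so beta_2 values are converted to fs^2/m).
import numpy as np
from scipy.integrate import solve_bvp

b2p,  gp  = -0.0303e6, 1.68        # beta2', gamma'  (fs^2/m, W^-1 m^-1)
b2pp, gpp =  0.0234e6, -1.91       # beta2'', gamma''
t0 = 50.0
kp  = abs(b2p) / (2 * t0**2)       # kappa' ~ 6.06 m^-1, as set in the text
kpp = -8.0                         # kappa'' (m^-1), continuation parameter

def rhs(t, y):                     # y = (U1, U1', U2, U2')
    U1, V1, U2, V2 = y
    return np.vstack([V1, (2 / b2p)  * (gp  * (U1**2 + 2 * U2**2) - kp ) * U1,
                      V2, (2 / b2pp) * (gpp * (U2**2 + 2 * U1**2) - kpp) * U2])

def bc(ya, yb):                    # evenness at t=0, decay at t=T
    return np.array([ya[1], ya[3], yb[0], yb[2]])

T = 1000.0                         # half-window (fs)
t = np.linspace(0.0, T, 400)
P0 = abs(b2p) / (gp * t0**2)
sech, dsech = 1 / np.cosh(t / t0), -np.tanh(t / t0) / np.cosh(t / t0) / t0
y0 = np.vstack([np.sqrt(P0) * sech, np.sqrt(P0) * dsech,
                0.5 * np.sqrt(P0) * sech, 0.5 * np.sqrt(P0) * dsech])
sol = solve_bvp(rhs, bc, t, y0, tol=1e-6, max_nodes=20000)
# sol.success reports convergence; continuation in kappa'' from -20 m^-1
# upward (as described in the text) makes the Newton iteration more robust.
U1, U2 = sol.sol(t)[0], sol.sol(t)[2]
```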
Figure 4: Perturbation by the Raman effect. (a) Propagation dynamics of a
soliton and a weak trapped state of order $n=0$
($L_{D}=t_{0}^{2}/|\beta^{\prime}_{2}|$). (b) Same for $n=1$. (c) Propagation
dynamics of the pulse compound of Fig. 3(d). Inset labeled A shows a
spectrogram at $z/L_{D}=600$, with $t_{c}$ indicating the peak-location of the
pulse compound. Further insets are detailed in the text.
### Perturbation by the Raman effect.
We next assess the impact of the Raman effect on the propagation dynamics of
the above pulse compounds. In Fig. 4(a) we show a fundamental soliton and a
weak trapped state of order $n=0$, propagating under Eq. (1). While the
soliton experiences a self-frequency-shift, resulting in a deceleration in the
time-domain, the trapped state remains bound by the trapping potential, see
the spectrogram in Fig. 4(a) (inset A). Let us note that the level-spectrum of
the solitary-wave well is affected by the soliton’s frequency downshift [Fig.
1(e)]. While the soliton decelerates, the trapped state starts to oscillate
within the trapping potential (inset B). This deceleration-induced oscillation
within the solitary-wave well bears an unexpected consequence when considering
the trapped state of order $n=1$ [Fig. 4(b)]: upon propagation, the initially
swift oscillations (inset B) grow in size (inset C) until finally, the trapped
state transitions into a trapped state of order $n=0$ (inset D). The shape-
conversion of the mode from $n=1\rightarrow 0$ is also evident in the
spectrogram in Fig. 4(b) (inset A). During this transition, a small amount of
radiation emanates from the localized pulses. When considering instances of
incoherently coupled two-color pulse compounds [Figs. 3(a-c)], the Raman
effect can have different consequences [Fig. 4(c)]: when $U_{0,1}>U_{0,2}$,
the pulse compound decelerates
($\kappa^{\prime\prime}=-14.52\,\mathrm{m^{-1}}$); when $U_{0,1}<U_{0,2}$, the
pulse compound accelerates ($\kappa^{\prime\prime}=-4.84\,\mathrm{m^{-1}}$);
in an intermediate parameter range where $U_{0,1}\approx U_{0,2}$, the pulse
compound is nearly unaffected
($\kappa^{\prime\prime}=-9.68\,\mathrm{m^{-1}}$). The latter is a result of
the deceleration of one subpulse being counterbalanced by an acceleration of
its binding partner.
### Summary and conclusions.
In conclusion, we have demonstrated the existence of two-color pulse compounds
in waveguides with a single zero-dispersion point and adequate frequency-
dependent nonlinearity. A strong mutual binding of two group-velocity matched
pulses at vastly different center frequencies, earlier demonstrated for
waveguides with two domains of anomalous dispersion Melchert et al. (2019a),
can, in the present case, be achieved by having $\gamma<0$ in a domain where
$\beta_{2}>0$. The reported study extends the range of systems that support
two-color pulse compounds, and allows one to understand the complex propagation
dynamics reported in a recent study on higher-order soliton evolution in a
photonic crystal fiber with one zero-dispersion point and frequency dependent
nonlinearity Zhao et al. (2022). Instances of such two-color pulse compounds
can readily be identified in the propagation studies reported in Ref. Zhao et
al. (2022).
## Acknowledgements
The authors acknowledge financial support from Deutsche Forschungsgemeinschaft
within the Cluster of Excellence PhoenixD (EXC 2122, projectID 390833453).
## References
* Demircan et al. (2013) A. Demircan, S. Amiranashvili, C. Brée, and G. Steinmeyer, Phys. Rev. Lett. 110, 233901 (2013).
* Demircan et al. (2014a) A. Demircan, S. Amiranashvili, C. Brée, U. Morgner, and G. Steinmeyer, Opt. Lett. 39, 2735 (2014a).
* Driben et al. (2010) R. Driben, F. Mitschke, and N. Zhavoronkov, Opt. Express 18, 25993 (2010).
* Demircan et al. (2014b) A. Demircan, S. Amiranashvili, C. Brée, C. Mahnke, F. Mitschke, and G. Steinmeyer, Appl. Phys. B 115, 343–354 (2014b).
* Melchert et al. (2019a) O. Melchert, S. Willms, S. Bose, A. Yulin, B. Roth, F. Mitschke, U. Morgner, I. Babushkin, and A. Demircan, Phys. Rev. Lett. 123, 243905 (2019a).
* Melchert and Demircan (2021) O. Melchert and A. Demircan, Opt. Lett. 46, 5603 (2021).
* Melchert et al. (2021) O. Melchert, S. Willms, U. Morgner, I. Babushkin, and A. Demircan, Scientific Reports 11, 11190 (2021).
* Willms et al. (2022) S. Willms, O. Melchert, S. Bose, A. Yulin, I. Oreshnikov, U. Morgner, I. Babushkin, and A. Demircan, Phys. Rev. A 105, 053525 (2022).
* Oreshnikov et al. (2022) I. Oreshnikov, O. Melchert, S. Willms, S. Bose, I. Babushkin, A. Demircan, U. Morgner, and A. Yulin, _Cherenkov radiation and scattering of external dispersive waves by two-color solitons_ (2022), arXiv:2207.03541, URL https://doi.org/10.48550/arXiv.2207.03541.
* Tam et al. (2019) K. K. K. Tam, T. J. Alexander, A. Blanco-Redondo, and C. M. de Sterke, Opt. Lett. 44, 3306 (2019).
* Lourdesamy et al. (2021) J. P. Lourdesamy, A. F. J. Runge, T. J. Alexander, D. D. Hudson, A. Blanco-Redondo, and C. M. de Sterke, Nat. Phys. 18, 59 (2021).
* Mao et al. (2021) D. Mao, H. Wang, H. Zhang, C. Zeng, Y. Du, Z. He, Z. Sun, and J. Zhao, Nat. Commun. 12, 6712 (2021).
* Driben et al. (2009) R. Driben, A. Husakou, and J. Herrmann, Opt. Express 17, 17989 (2009).
* Bose et al. (2016a) S. Bose, A. Sahoo, R. Chattopadhyay, S. Roy, S. K. Bhadra, and G. P. Agrawal, Phys. Rev. A 94, 043835 (2016a).
* Bose et al. (2016b) S. Bose, R. Chattopadhyay, S. Roy, and S. K. Bhadra, J. Opt. Soc. Am. B 33, 1014 (2016b).
* Arteaga-Sierra et al. (2018) F. R. Arteaga-Sierra, A. Antikainen, and G. P. Agrawal, Phys. Rev. A 98, 013830 (2018).
* Linale et al. (2020) N. Linale, J. Bonetti, A. D. Sánchez, S. Hernandez, P. I. Fierens, and D. F. Grosz, Opt. Lett. 45, 2498 (2020).
* Hernandez et al. (2022) S. M. Hernandez, A. Sparapani, N. Linale, J. Bonetti, D. F. Grosz, and P. I. Fierens, Waves in Random and Complex Media pp. 1–15 (2022).
* Junnarkar and Uesugi (2000) M. R. Junnarkar and N. Uesugi, Opt. Commun. 175, 447 (2000).
* Agrawal (2019) G. P. Agrawal, _Nonlinear Fiber Optics_ (Academic Press, 2019).
* Zhao et al. (2022) S. Zhao, R. Guo, and Y. Zeng, Phys. Rev. A 106, 033516 (2022).
* Blow and Wood (1989) K. J. Blow and D. Wood, IEEE J. Quantum Electron. 25, 2665 (1989).
* Heidt (2009) A. M. Heidt, IEEE J. Lightwave Tech. 27, 3984 (2009).
* Melchert and Demircan (2022) O. Melchert and A. Demircan, Computer Physics Communications 273, 108257 (2022).
* Melchert et al. (2019b) O. Melchert, B. Roth, U. Morgner, and A. Demircan, SoftwareX 10, 100275 (2019b).
* Landau and Lifshitz (1981) L. D. Landau and L. M. Lifshitz, _Quantum Mechanics Non-Relativistic Theory, Third Edition: Volume 3_ (Elsevier Science, Oxford, 1981).
* Lekner (2007) J. Lekner, Am. J. Phys. 75, 1151 (2007).
* Haus and Ippen (2001) H. A. Haus and E. P. Ippen, Opt. Lett. 26, 1654 (2001).
* Haelterman et al. (1993) M. Haelterman, A. Sheppard, and A. Snyder, Opt. Lett. 18, 1406 (1993).
* Silberberg and Barad (1995) Y. Silberberg and Y. Barad, Opt. Lett. 20, 246 (1995).
* Afanasyev et al. (1989) V. V. Afanasyev, Y. S. Kivshar, V. V. Konotop, and V. N. Serkin, Opt. Lett. 14, 805 (1989).
* Pelinovsky and Kivshar (2000) D. Pelinovsky and Y. Kivshar, Phys. Rev. E 62, 8668 (2000).
* Haelterman and Sheppard (1994) M. Haelterman and A. Sheppard, Phys. Rev. E 49, 3376 (1994).
* Mitchell et al. (1997) M. Mitchell, M. Segev, T. Coskun, and D. Christodulides, Phys. Rev. Lett. 79, 4990 (1997).
* Ablowitz and Musslimani (2005) M. Ablowitz and Z. Musslimani, Opt. Lett. 30, 2140 (2005).
* Lakoba and Yang (2007) T. Lakoba and J. Yang, J. Comp. Phys. 226, 1668 (2007).
* Lakoba (2009) T. Lakoba, Physica D 238, 2308 (2009).
* Yang (2009) J. Yang, J. Comp. Phys. 228, 7007 (2009).
* Dror and Malomed (2016) N. Dror and B. Malomed, J. Opt. 18, 014003 (2016).
* Kierzenka and Shampine (2001) J. Kierzenka and L. Shampine, ACM Trans. Math. Softw. 27, 300 (2001).
# Characterization of the tree cycles with minimum positive entropy for any
period
David Juher and Francesc Mañosas and David Rojas Departament d’Informàtica,
Matemàtica Aplicada i Estadística, Universitat de Girona, c/ Maria Aurèlia
Capmany 61, 17003 Girona, Spain. ORCID 0000-0001-5440-1705
<EMAIL_ADDRESS>(Corresponding author) Departament de Matemàtiques,
Edifici C, Universitat Autònoma de Barcelona, 08913 Cerdanyola del Vallès,
Barcelona, Spain. ORCID 0000-0003-2535-0501<EMAIL_ADDRESS>Departament
d’Informàtica, Matemàtica Aplicada i Estadística, Universitat de Girona, c/
Maria Aurèlia Capmany 61, 17003 Girona, Spain. ORCID 0000-0001-7247-4705
<EMAIL_ADDRESS>
###### Abstract.
Consider, for any integer $n\geq 3$, the set $\operatorname{Pos}_{n}$ of all
$n$-periodic tree patterns with positive topological entropy and the set
$\operatorname{Irr}_{n}\subset\operatorname{Pos}_{n}$ of all $n$-periodic
irreducible tree patterns. The aim of this paper is to determine the elements
of minimum entropy in the families $\operatorname{Pos}_{n}$,
$\operatorname{Irr}_{n}$ and
$\operatorname{Pos}_{n}\setminus\operatorname{Irr}_{n}$. Let $\lambda_{n}$ be
the unique real root of the polynomial $x^{n}-2x-1$ in $(1,+\infty)$. We
explicitly construct an irreducible $n$-periodic tree pattern
$\mathcal{Q}_{n}$ whose entropy is $\log(\lambda_{n})$. We prove that this
entropy is minimum in $\operatorname{Pos}_{n}$. Since the pattern
$\mathcal{Q}_{n}$ is irreducible, $\mathcal{Q}_{n}$ also minimizes the entropy
in the family $\operatorname{Irr}_{n}$. We also prove that the minimum
positive entropy in the set
$\operatorname{Pos}_{n}\setminus\operatorname{Irr}_{n}$ (which is nonempty
only for composite integers $n\geq 6$) is $\log(\lambda_{n/p})/p$, where $p$
is the least prime factor of $n$.
###### Key words and phrases:
tree maps, periodic patterns, topological entropy
###### 1991 Mathematics Subject Classification:
Primary: 37E15, 37E25
This work has been funded by grants PID2020-118281GB-C31 of Ministerio de
Ciencia e Innovación and 2021 SGR 00113 of Generalitat de Catalunya. D.R. is a
Serra Húnter fellow.
## 1\. Introduction
The field of Combinatorial Dynamics has its roots in the striking
Sharkovskii’s Theorem [31], in the sense that the theory grew up as a
succession of progressive refinements and generalizations of the ideas
contained in the original proof of that result. The core of the theory is the
notion of _combinatorial type_ or _pattern_.
Consider a class $\mathcal{X}$ of topological spaces (closed intervals of the
real line, trees, graphs and compact surfaces are classic examples) and the
family $\mathcal{F}_{\mathcal{X}}$ of all maps $\\{\mbox{$f\colon
X\longrightarrow X$}:X\in\mathcal{X}\\}$ satisfying a given property
(continuous maps, homeomorphisms, etc). Any of such maps gives rise, by
iteration, to a discrete dynamical system. Assume now that we have a map
$f\colon X\longrightarrow X$ in $\mathcal{F}_{\mathcal{X}}$ which is known to
have a periodic orbit $P$. The _pattern of $P$_ is the equivalence class
$\mathcal{P}$ of all maps $g\colon Y\longrightarrow Y$ in
$\mathcal{F}_{\mathcal{X}}$ having an invariant set $Q\subset Y$ that, at a
combinatorial level, behaves like $P$. In this case, we say that every map $g$
in the class _exhibits_ the pattern $\mathcal{P}$. Of course we have to
precise in which sense a periodic orbit _behaves as $P$_. So, we have to
decide which feature of $P$ has to be preserved inside the equivalence class
$\mathcal{P}$. The period of $P$, just a natural number, is a first
possibility (Sharkovskii’s Theorem), but a richer option arises from imposing
that
1. (a)
the relative positions of the points of $Q$ inside $Y$ are the same as the
relative positions of $P$ inside $X$; and
2. (b)
the way these positions are permuted under the action of $g$ coincides with
the way $f$ acts on the points of $P$.
An example is given by the family $\mathcal{F}_{\mathcal{M}}$ of surface
homeomorphisms. The pattern (or _braid type_) of a cycle $P$ of a map $f\colon
M\longrightarrow M$ from $\mathcal{F}_{\mathcal{M}}$, where $M$ is a surface,
is defined by the isotopy class, up to conjugacy, of
$f\bigr{\rvert}_{M\setminus P}$ [19, 27].
When $\mathcal{F}_{\mathcal{X}}$ is the family of continuous maps of closed
intervals, the points of an orbit $P$ of a map in $\mathcal{F}_{\mathcal{X}}$
are totally ordered and the pattern of $P$ can be simply identified with a
cyclic permutation in a natural way. The notion of pattern for interval maps
was formalized and developed in the early 1990s [12, 30].
In the last decades, a growing interest has arisen in extending the notion of
_pattern_ from the interval case to more general one-dimensional spaces such
as graphs [2, 10] or trees [6, 13, 14]. Precisely, in this paper we deal with
patterns of periodic orbits of continuous maps defined on trees (simply
connected graphs).
Let us make conditions (a) and (b) above precise in our context. If $f\colon
T\longrightarrow T$ is a continuous map of a tree and $P\subset T$ is a
periodic orbit of $f$, the triplet $(T,P,f)$ will be called a _model_. Two
points $x,y$ of $P$ will be said to be _consecutive_ if the unique closed
interval of $T$ having $x,y$ as endpoints contains no other points of $P$. Any
maximal subset of $P$ consisting only of pairwise consecutive points will be
called a _discrete component_. We will say that two models $(T,P,f)$ and
$(T^{\prime},P^{\prime},f^{\prime})$ are equivalent if there is a bijection
$\phi$ from $P$ to $P^{\prime}$ which sends discrete components to discrete
components and conjugates the action of $f$ on $P$ and the action of
$f^{\prime}$ on $P^{\prime}$, i.e.
$f^{\prime}\circ\phi\bigr{\rvert}_{P}=\phi\circ f\bigr{\rvert}_{P}$. In Figure
1 we show two equivalent 6-periodic models with two discrete components. Note
that two points $x_{i},x_{j}$ of $P$ are consecutive in $T$ when the
corresponding points $x^{\prime}_{i},x^{\prime}_{j}$ of $P^{\prime}$ are
consecutive in $T^{\prime}$.
A _pattern_ is an equivalence class of models by the above equivalence
relation. A map $f\colon T\longrightarrow T$ is said to _exhibit a pattern
$\mathcal{P}$_ if $f$ has an invariant set $P$ such that
$(T,P,f)\in\mathcal{P}.$
Figure 1. Set $P=\\{x_{i}\\}_{i=0}^{5}$ and
$P^{\prime}=\\{x^{\prime}_{i}\\}_{i=0}^{5}$. If $f\colon T\longrightarrow T$
and $f^{\prime}\colon T^{\prime}\longrightarrow T^{\prime}$ are continuous
maps such that $f(x_{i})=x_{i+1}$ and
$f^{\prime}(x^{\prime}_{i})=x^{\prime}_{i+1}$ for $0\leq i\leq 5$,
$f(x_{5})=x_{0}$ and $f^{\prime}(x^{\prime}_{5})=x^{\prime}_{0}$, then the
models $(T,P,f)$ and $(T^{\prime},P^{\prime},f^{\prime})$ are equivalent and
belong to the same pattern $[T,P,f]=[T^{\prime},P^{\prime},f^{\prime}]$.
A usual way of measuring the dynamical complexity of a map $f\colon
X\longrightarrow X$ of a compact metric space is in terms of its _topological
entropy_ , a notion first introduced in 1965 [1]. It is a non-negative real
number (or infinity) that measures how the iterates of the map mix the points
of $X$. It will be denoted by $h(f)$. An interval map with positive entropy is
_chaotic_ in the sense of Li and Yorke [26]. The same is true for more general
compact metric spaces [15]. On the other hand, the dynamics of a map with zero
topological entropy is much simpler.
Given a pattern $\mathcal{P}$ in $\mathcal{F}_{\mathcal{X}}$, we would like to
establish, only in terms of the combinatorial data encoded by $\mathcal{P}$, a
lower bound for the dynamical complexity that will be present in any map in
$\mathcal{F}_{\mathcal{X}}$ exhibiting $\mathcal{P}$. In view of what have
been said in the previous paragraph, it is natural to define the _topological
entropy of the pattern $\mathcal{P}$_, denoted from now on by
$h(\mathcal{P})$, as the infimum of the topological entropies of all maps in
$\mathcal{F}_{\mathcal{X}}$ exhibiting $\mathcal{P}$.
Although computing the entropy of a continuous map is difficult in general, in
some cases the computation of the entropy of a pattern $\mathcal{P}$ in
$\mathcal{F}_{\mathcal{X}}$ can be easily performed thanks to the existence of
the so called _canonical models_. A _canonical model_ of a pattern
$\mathcal{P}$ in $\mathcal{F}_{\mathcal{X}}$ is a map
$f\in\mathcal{F}_{\mathcal{X}}$ that exhibits $\mathcal{P}$ and satisfies at
least the following properties:
1. (1)
$f$ is essentially unique and can be constructed from the combinatorial data
enclosed in $\mathcal{P}$
2. (2)
$f$ has minimum entropy in the set of all maps exhibiting $\mathcal{P}$
3. (3)
the dynamics of $f$ can be completely described using algebraic tools that, in
particular, allow us to compute $h(f)$.
From (1–3) it follows that $h(\mathcal{P})$, defined as the infimum of
entropies of maps, is in fact a minimum and can be easily computed as the
entropy of the canonical model of $\mathcal{P}$. The existence of canonical
models for patterns has been proved for continuous maps of closed intervals
(see [9] for a list of references), homeomorphisms of compact surfaces [22,
33] and continuous maps on trees [6].
Now we are ready to explain the aim of this paper. Several natural questions
concerning patterns and entropy arise. Fix $n\in\mathbb{N}$ and consider the
(finite) set of all $n$-periodic tree patterns. An important classification in
this set is given by the zero/positive entropy character of its elements. On
the one hand, the zero entropy tree patterns are well understood and several
equivalent characterizations can be found in the literature [18, 6, 5]. On the
other hand, let $\operatorname{Pos}_{n}$ be the subset of all $n$-periodic
tree patterns with positive entropy. One would like to describe the patterns
with maximal/minimal entropy in $\operatorname{Pos}_{n}$.
Several advances in the description of the entropy-maximal tree patterns have
been reported [4], but the problem is still open. In fact, the maximality
problem is unsolved even in the particular case of interval patterns [20, 21,
24]. Indeed, the maximal-entropy cyclic permutations of order $n$, when $n$
has the form $4k+2$, are still unknown, although [3] tackles this case from a
computational point of view and proposes a conjecture.
In this paper we face the opposite problem: the characterization of the
patterns of minimal entropy in $\operatorname{Pos}_{n}$. For interval maps,
the description of the minimum entropy cycles is known when $n$ is not a power
of two (see [9] for a review). In the setting of tree maps and for any $n\geq
3$, an $n$-periodic tree pattern $\mathcal{Q}_{n}$ was defined in [7] that
conjecturally has minimal entropy in the set $\operatorname{Pos}_{n}$ (the
problem makes no sense when $n=1,2$, since every periodic pattern of period 1
or 2 has entropy zero), and the conjecture was proved to be true when $n$ is a
power of a prime. See the canonical model of $\mathcal{Q}_{n}$ in Figure 2.
The entropy of $\mathcal{Q}_{n}$ turns out to be $\log(\lambda_{n})$, where
$\lambda_{n}$ is the unique real root of the polynomial $x^{n}-2x-1$ in
$(1,+\infty)$.
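For concreteness, $\lambda_{n}$ is easily computed numerically; the bisection sketch below (illustrative only) uses that $x^{n}-2x-1$ is negative at $x=1$ and positive at $x=2$ for every $n\geq 3$.

```python
# lambda_n: the unique real root of x^n - 2x - 1 in (1, +infinity).
import math

def lam(n, lo=1.0, hi=2.0, iters=100):
    f = lambda x: x**n - 2 * x - 1        # f(1) = -2 < 0, f(2) = 2^n - 5 > 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return lo

for n in (3, 4, 5, 10):
    print(n, lam(n), math.log(lam(n)))    # n = 3 gives the golden ratio
```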
Figure 2. The canonical model $(T,P,f)$ of the pattern $\mathcal{Q}_{n}$, for
which $P=\\{x_{i}\\}_{i=0}^{n-1}$ is time labeled and $f(y)=y$.
The first main result of this paper states that the conjecture is in fact true
for every $n\geq 3$.
###### Theorem A.
Let $n\geq 3$ be a positive integer. Then, $\mathcal{Q}_{n}$ has minimum
entropy in the set $\operatorname{Pos}_{n}$ of all $n$-periodic patterns with
positive entropy. Moreover,
$h(\mathcal{P})>h(\mathcal{Q}_{n})=\log(\lambda_{n})$ for any
$\mathcal{P}\in\operatorname{Pos}_{n}$ such that
$\mathcal{P}\neq\mathcal{Q}_{n}$, where $\lambda_{n}$ is the unique real root
of the polynomial $x^{n}-2x-1$ in $(1,+\infty)$.
Traditionally, reducibility/irreducibility has been another important
classification for tree patterns. A pattern is _reducible_ when it has a block
structure (see Section 3). Roughly speaking, this means that the points of the
orbit can be partitioned into disjoint subtrees that are permuted under the
action of the map. The notion of reducibility arose early in the study of
interval maps and has been recently extended to the setting of tree patterns
[5]. The irreducible tree patterns are closely related to pseudo-Anosov braid
types of periodic orbits of orientation preserving disk homeomorphisms [23].
As we will see, every irreducible tree pattern has positive entropy. The
dynamic relevance of the patterns from $\operatorname{Irr}_{n}$ motivates the
study of the minimality of the entropy in this subclass of
$\operatorname{Pos}_{n}$. For interval maps, the problem was solved in [29].
Since the minimum entropy pattern $\mathcal{Q}_{n}$ turns out to be
irreducible, Theorem A incidentally proves that $\mathcal{Q}_{n}$ also
minimizes the topological entropy in the subclass $\operatorname{Irr}_{n}$.
###### Corollary B.
Let $n\geq 3$ be a positive integer. Then, $\mathcal{Q}_{n}$ has minimum
entropy in the set $\operatorname{Irr}_{n}$ of all $n$-periodic irreducible
patterns. Moreover, $h(\mathcal{P})>h(\mathcal{Q}_{n})=\log(\lambda_{n})$ for
any $\mathcal{P}\in\operatorname{Irr}_{n}$ such that
$\mathcal{P}\neq\mathcal{Q}_{n}$.
Now, the problem of determining the minimum (positive) entropy in the family
of all reducible patterns arises. It is not difficult to see that
$\operatorname{Pos}_{n}\setminus\operatorname{Irr}_{n}\neq\emptyset$ if and
only if $n$ is not a prime and $n\geq 6$. By Theorem A, the minimum positive
entropy for any reducible pattern is strictly larger than $\log(\lambda_{n})$.
The second main result of this paper gives the minimum entropy in
$\operatorname{Pos}_{n}\setminus\operatorname{Irr}_{n}$. In this case,
however, the minimum entropy pattern is not unique.
###### Theorem C.
Let $n\geq 6$ be a composite number. Then, the minimum positive entropy in the
set of all reducible $n$-periodic patterns is $\log(\lambda_{n/p})/p$, where
$p$ is the smallest prime factor of $n$.
This paper is organized as follows. In Section 2 we introduce formally the
basic notions of pattern, canonical model and path transition matrix, and
recall how to compute the topological entropy of a pattern. In Section 3 we
review some classic notions and results about block structures and
reducibility for tree patterns, that we use in Section 6 to recall the
characterization of zero entropy periodic patterns. A deeper study of the
structure of zero entropy paterns is carried out in Section 7. In Section 4 we
briefly recall a mechanism, first introduced in [7], that allows us to compare
the entropies of two patterns $\mathcal{P}$ and $\mathcal{O}$ when
$\mathcal{O}$ has been obtained by joining together several discrete
components of $\mathcal{P}$. Section 5 is devoted to the task of explaining
the strategy of the proof of Theorem A. As we will see, the proof is by
induction on the period $n$ and relies on a core result, Theorem D, that is
stated in the same section and proved in Section 8 using the results of
Section 7. The use of this result allows us to prove Theorem A for almost all
patterns, with two particular exceptions: the _$k$ -flowers_ (patterns with
$k$ discrete components attached at a unique central point) and the _triple
chain_ , a pattern with three consecutive discrete components. We deal with
these two cases in Sections 9 and 10 respectively. Putting all together, we
prove Theorem A in Section 11. Finally, Section 12 is devoted to the proof of
Corollary B and Theorem C.
## 2\. Patterns and canonical models
In this section we formalize the definitions outlined in the Introduction. We
also recall how to compute the topological entropy of a pattern by using
purely combinatorial tools. Finally we define the pattern that will be proved
to have minimum positive entropy.
A _tree_ is a compact uniquely arcwise connected space which is a point or a
union of a finite number of intervals (by an _interval_ we mean any space
homeomorphic to $[0,1]$). Any continuous map $f\colon T\longrightarrow T$ from
a tree $T$ into itself will be called a _tree map_. A set $X\subset T$ is said
to be _$f$ -invariant_ if $f(X)\subset X$. For each $x\in T$, we define the
_valence_ of $x$ to be the number of connected components of
$T\setminus\\{x\\}$. A point of valence different from 2 will be called a
_vertex_ of $T$ and the set of vertices of $T$ will be denoted by $V(T)$. Each
point of valence 1 will be called an _endpoint_ of $T$. The set of such points
will be denoted by $\operatorname{En}(T)$. Also, the closure of a connected
component of $T\setminus V(T)$ will be called an _edge of $T$_.
Given any subset $X$ of a topological space, we will denote by
$\operatorname{Int}(X)$ and $\operatorname{Cl}(X)$ the interior and the
closure of $X$, respectively. For a finite set $P$ we will denote its
cardinality by $|P|$.
A triplet $(T,P,f)$ will be called a _model_ if $f\colon T\longrightarrow T$
is a tree map and $P$ is a finite $f$-invariant set such that
$\operatorname{En}(T)\subset P$. In particular, if $P$ is a periodic orbit of
$f$ and $|P|=n$ then $(T,P,f)$ will be called an _$n$ -periodic model_. Given
$X\subset T$ we will define the _connected hull_ of $X$, denoted by $\langle
X\rangle_{T}$ or simply by $\langle X\rangle$, as the smallest closed
connected subset of $T$ containing $X$. When $X=\\{x,y\\}$ we will write
$[x,y]$ to denote $\langle X\rangle$. The notations $(x,y)$, $(x,y]$ and
$[x,y)$ will be understood in the natural way.
An $n$-periodic orbit $P=\\{x_{i}\\}_{i=0}^{n-1}$ of a map $\theta$ will be
said to be _time labeled_ if $\theta(x_{i})=x_{i+1}$ for $0\leq i<n-1$ and
$\theta(x_{n-1})=x_{0}$.
Let $T$ be a tree and let $P\subset T$ be a finite subset of $T$. The pair
$(T,P)$ will be called a _pointed tree_. Two points $x,y$ of $P$ will be said
to be _consecutive_ if $(x,y)\cap P=\emptyset$. Any maximal subset of $P$
consisting only of pairwise consecutive points will be called a _discrete
component_ of $(T,P)$. We say that two pointed trees $(T,P)$ and
$(T^{\prime},P^{\prime})$ are _equivalent_ if there exists a bijection
$\phi\colon P\longrightarrow P^{\prime}$ which preserves discrete components.
The equivalence class of a pointed tree $(T,P)$ will be denoted by $[T,P]$.
Let $(T,P)$ and $(T^{\prime},P^{\prime})$ be equivalent pointed trees, and let
$\theta\colon P\longrightarrow P$ and $\theta^{\prime}\colon
P^{\prime}\longrightarrow P^{\prime}$ be maps. We will say that $\theta$ and
$\theta^{\prime}$ are _equivalent_ if
$\theta^{\prime}=\phi\circ\theta\circ\phi^{-1}$ for a bijection $\phi\colon
P\longrightarrow P^{\prime}$ which preserves discrete components. The
equivalence class of $\theta$ by this relation will be denoted by $[\theta]$.
If $[T,P]$ is an equivalence class of pointed trees and $[\theta]$ is an
equivalence class of maps then the pair $([T,P],[\theta])$ will be called a
_pattern_. We will say that a model $(T,P,f)$ _exhibits_ a pattern
$(\mathcal{T},\Theta)$ if $\mathcal{T}=[\langle P\rangle_{T},P]$ and
$\Theta=[f\bigr{\rvert}_{{}_{P}}]$.
Despite the fact that the notion of a discrete component is defined for
pointed trees, by abuse of language we will use the expression _discrete
component of a pattern_ , which will be understood in the natural way since
the number of discrete components and their relative positions are the same
for all models of the pattern.
Recall that the topological entropy of a continuous tree map $f$ is denoted by
$h(f)$. Given a pattern $\mathcal{P}$, the topological entropy of
$\mathcal{P}$ is defined to be
$h(\mathcal{P}):=\inf\\{h(f)\,\colon(T,P,f)\ \text{is a model exhibiting}\
\mathcal{P}\\}.$
The simplest models exhibiting a given pattern are the monotone ones, defined
as follows. Let $f\colon T\longrightarrow T$ be a tree map. Given $a,b\in
T$ we say that $f\bigr{\rvert}_{[a,b]}$ is _monotone_ if $f([a,b])$ is either
an interval or a point and $f\bigr{\rvert}_{[a,b]}$ is monotone as an interval
map. Let $(T,P,f)$ be a model. A pair $\\{a,b\\}\subset P$ will be called a
_basic path of $(T,P)$_ if it is contained in a single discrete component of
$(T,P)$. We will say that $f$ is _$P$ -monotone_ if $f\bigr{\rvert}_{[a,b]}$
is monotone for any basic path $\\{a,b\\}$. The model $(T,P,f)$ will then be
said to be _monotone_. In such case, Proposition 4.2 of [6] states that the
set $P\cup V(T)$ is $f$-invariant (recall that $V(T)$ stands for the set of
vertices of $T$). Hence, the map $f$ is also $(P\cup V(T))$-monotone. Observe
that the notion of $P$-monotonicity is much more restrictive than the usual
topological notion of a _monotone map_ (full preimages of continua are
continua).
Theorem A of [6] states that every pattern $\mathcal{P}$ has monotone models,
and that for every monotone model $(T,P,f)$ of $\mathcal{P}$,
$h(f)=h(\mathcal{P})$. Moreover, there exists a special class of monotone
models, satisfying several extra properties that we omit here, called
_canonical models_. Theorem B of [6] states that every pattern has a canonical
model. Moreover, given two canonical models $(T,P,f)$ and
$(T^{\prime},P^{\prime},f^{\prime})$ of the same pattern there exists a
homeomorphism $\phi\colon T\longrightarrow T^{\prime}$ such that
$\phi(P)=P^{\prime}$ and $f^{\prime}\circ\phi|_{P}=\phi\circ
f|_{P}$. Hence, the canonical model of a pattern is essentially
unique. Summarizing, we have the following result.
###### Theorem 2.1.
Let $\mathcal{P}$ be a pattern. Then the following statements hold.
1. (a)
There exists a canonical model of $\mathcal{P}$.
2. (b)
The canonical model $(T,P,f)$ of $\mathcal{P}$ satisfies
$h(f)=h(\mathcal{P})$.
It is worth noticing that the proof of Theorem 2.1 gives a finite algorithm to
construct the canonical model of any pattern. For instance, the model
$(T,P,f)$ in the right picture of Figure 1 is the canonical model of the
corresponding pattern. The $P$-monotonicity of $f$ determines that $f(a)=b,$
$f(b)=c,$ and $f(c)=c.$ Observe also that the left model
$(T^{\prime},P^{\prime},f^{\prime})$ of Figure 1, a representative of the same
pattern, cannot be $P^{\prime}$-monotone, since in this case we would have
$f^{\prime}(v)\in f^{\prime}([x^{\prime}_{2},x^{\prime}_{6}])\cap
f^{\prime}([x^{\prime}_{4},x^{\prime}_{5}])=[x^{\prime}_{3},x^{\prime}_{1}]\cap[x^{\prime}_{5},x^{\prime}_{6}]=\emptyset.$
There is a combinatorial procedure to compute the entropy of a pattern
$\mathcal{P}$ which does not require the construction of its canonical model.
Indeed, $h(\mathcal{P})$ can be obtained from the transition matrix of a
combinatorial directed graph that can be derived independently of the images
of the vertices in any particular monotone model of the pattern. Let us recall
this procedure.
A _combinatorial directed graph_ is a pair $\mathcal{G}=(V,U)$ where
$V=\\{v_{1},v_{2},\dots,v_{k}\\}$ is a finite set and $U\subset V\times V$.
The elements of $V$ are called the _vertices_ of $\mathcal{G}$ and each
element $(v_{i},v_{j})$ in $U$ is called an _arrow_ (from $v_{i}$ to $v_{j}$)
in $\mathcal{G}$. Such an arrow is usually denoted by $v_{i}\rightarrow
v_{j}$. The notions of _path_ and _loop_ in $\mathcal{G}$ are defined as
usual. The _length_ of a path is defined as the number of arrows in the path.
The _transition matrix_ of $\mathcal{G}$ is a $k\times k$ binary matrix
$(m_{ij})_{i,j=1}^{k}$ such that $m_{ij}=1$ if and only if there is an arrow
from $v_{i}$ to $v_{j}$, and $m_{ij}=0$ otherwise.
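For illustration, the following minimal Python sketch (the helper name `transition_matrix` is ours, not from the literature) builds the transition matrix of a combinatorial directed graph from its list of arrows.

```python
import numpy as np

def transition_matrix(k, arrows):
    # k: number of vertices; arrows: iterable of 0-indexed pairs (i, j),
    # one per arrow v_i -> v_j of the combinatorial directed graph.
    M = np.zeros((k, k), dtype=int)
    for i, j in arrows:
        M[i, j] = 1   # m_ij = 1 if and only if there is an arrow v_i -> v_j
    return M
```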
Let $\\{\pi_{1},\pi_{2},\ldots,\pi_{k}\\}$ be the set of basic paths of the
pointed tree $(T,P)$. We will say that $\pi_{i}$ _$f$ -covers_ $\pi_{j}$,
denoted by $\pi_{i}\rightarrow\pi_{j}$, whenever $\pi_{j}\subset\langle
f(\pi_{i})\rangle_{T}$. The _$\mathcal{P}$ -path graph_ is the combinatorial
directed graph whose vertices are in one-to-one correspondence with the basic
paths of $(T,P)$, and there is an arrow from the vertex $i$ to the vertex $j$
if and only if $\pi_{i}$ $f$-covers $\pi_{j}$. The associated transition
matrix, denoted by $M_{\mathcal{P}}$, will be called the _path transition
matrix of $\mathcal{P}$_. It can be seen that the definitions of the
$\mathcal{P}$-path graph and the matrix $M_{\mathcal{P}}$ are independent of
the particular choice of the model $(T,P,f)$. Thus, they are well-defined
pattern invariants.
For any square matrix $M$, we will denote its _spectral radius_ by $\rho(M)$.
We recall that it is defined as the maximum of the moduli of the eigenvalues
of $M$.
###### Remark 2.2.
Let $M_{\mathcal{P}}$ be the path transition matrix of a pattern
$\mathcal{P}$. Then (see [6]), the topological entropy of $\mathcal{P}$ can be
computed as $h(\mathcal{P})=\log\max\\{\rho(M_{\mathcal{P}}),1\\}$.
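As a sketch of how Remark 2.2 is applied in practice, the following Python snippet evaluates $\log\max\\{\rho(M),1\\}$ numerically. One can check, following the definitions above, that the matrix in the example is the path transition matrix of the 3-periodic Štefan cycle $\mathcal{Q}_{3}$ introduced in the next paragraph.

```python
import numpy as np

def pattern_entropy(M):
    # Remark 2.2: h(P) = log max(rho(M_P), 1), rho = spectral radius.
    rho = max(abs(np.linalg.eigvals(np.asarray(M, dtype=float))))
    return float(np.log(max(rho, 1.0)))

# Path transition matrix of the 3-periodic Stefan cycle Q_3 (see below):
print(pattern_entropy([[0, 1], [1, 1]]))   # log((1+sqrt(5))/2) ~ 0.4812
```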
To end this section we define the patterns that will be shown to have minimum
positive entropy. Let $n\in\mathbb{N}$ with $n\geq 3$. Let $\mathcal{Q}_{n}$
be the $n$-periodic pattern $([T,P],[\theta])$ such that
$P=\\{x_{0},x_{1},\ldots,x_{n-1}\\}$ is time labeled and $(T,P)$ has two
discrete components, $\\{x_{n-1},x_{0}\\}$ and
$\\{x_{0},x_{1},\ldots,x_{n-2}\\}$. In Figure 2 we show the canonical model of
$\mathcal{Q}_{n}$. Observe that $\mathcal{Q}_{3}$ is nothing but the
3-periodic Štefan cycle of the interval [32]. In [7] the authors prove that
$h(\mathcal{Q}_{n})=\log(\lambda_{n})$, where $\lambda_{n}$ is the unique real
root of the polynomial $x^{n}-2x-1$ in $(1,+\infty)$. We will use the
following properties of the numbers $\lambda_{n}$. Statement (a) is proved in
Proposition 3.1 of [7], while statement (b) is an easy exercise.
###### Proposition 2.3.
Let $n$ be any positive integer with $n\geq 3$. Then:
1. (a)
$\lambda_{n+1}<\lambda_{n}$.
2. (b)
$\sqrt[n]{4}>\lambda_{n}$ whenever $n\geq 4$.
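As a quick numerical sanity check of these statements (a sketch, not part of any proof), $\lambda_{n}$ can be approximated by bisection; recall that $\lambda_{3}=(1+\sqrt{5})/2$, since $x^{3}-2x-1=(x+1)(x^{2}-x-1)$.

```python
def lam(n):
    # lambda_n: the unique root of x**n - 2*x - 1 in (1, +infinity).
    f = lambda x: x ** n - 2 * x - 1
    a, b = 1.0, 2.0          # f(1) = -2 < 0 and f(2) = 2**n - 5 > 0 for n >= 3
    for _ in range(100):     # bisection
        m = (a + b) / 2
        a, b = (m, b) if f(m) < 0 else (a, m)
    return a

for n in range(3, 9):
    print(n, lam(n))                    # a strictly decreasing sequence, as in (a)
    if n >= 4:
        assert lam(n) < 4 ** (1 / n)    # statement (b)
```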
## 3\. Block structures, skeletons and $\pi$-reducibility
The zero entropy tree patterns will play a central role in this paper. The
characterization of such patterns was first given in [6], and another
description was proven to be equivalent in [5]. We will use this second
approach, and this section is devoted to recall the necessary notions and
results. The characterization of zero entropy periodic patterns relies on the
notion of _block structure_ , that is classic in the field of Combinatorial
Dynamics. In the literature one can find several kinds of block structures and
related notions for periodic orbits. In the interval case, Sharkovskii’s
_square root construction_ [31] is an early example of a block structure. The
notion of _extension_, which first appeared in [17], gives rise to some particular
cases of block structures. Also the notion of _division_ , introduced in [25]
for interval periodic orbits and generalized in [11] in order to study the
entropy and the set of periods for tree maps, is a particular case of block
structure.
###### Remark 3.1.
All patterns considered in this paper will be periodic. Given an $n$-periodic
pattern $\mathcal{P}$, by abuse of language we will speak about the _points_
of $\mathcal{P}$, and by default we will consider that such points are time
labeled with the integers $\\{0,1,\ldots,n-1\\}$. Often we will identify a
point in $\mathcal{P}$ with its time label. In agreement with such
conventions, the points of the patterns shown in the pictures will be simply
integers in the range $[0,n-1]$. See for instance Figure 3.
A pattern will be said to be _trivial_ if it has only one discrete component.
It is easy to see that the entropy of any trivial pattern is zero.
Let $\mathcal{P}=([T,P],[f])$ be a nontrivial $n$-periodic pattern with $n\geq
3$. For $n>p\geq 2$, we will say that $\mathcal{P}$ _has a $p$-block
structure_ if there exists a partition $P=P_{0}\cup P_{1}\cup\ldots\cup
P_{p-1}$ such that $f(P_{i})=P_{i+1\bmod p}$ for $i\geq 0$, and $\langle
P_{i}\rangle_{T}\cap P_{j}=\emptyset$ for $i\neq j$. In this case, $p$ is a
strict divisor of $n$ and $|P_{i}|=n/p$ for $0\leq i<p$. The sets $P_{i}$ will
be called _blocks_ , and the blocks will be said to be _trivial_ if each
$P_{i}$ is contained in a single discrete component of $\mathcal{P}$
(equivalently, each pattern $([\langle P_{i}\rangle_{T},P_{i}],[f^{p}])$ is
trivial). Note that $\mathcal{P}$ can have several block structures, but only
one $p$-block structure for any given divisor $p$ of $n$. If $\mathcal{P}$
admits structures of trivial blocks, the one whose blocks have maximum
cardinality will be called a _maximal structure_.
From the equivalence relation which defines the class of models belonging to
the pattern $\mathcal{P}$ it easily follows that the notions defined in the
previous paragraph do not depend on the particular model $(T,P,f)$
representing $\mathcal{P}$.
###### Remark 3.2 (Standing convention).
Let $\mathcal{P}$ be an $n$-periodic pattern whose points are time labeled as
$\\{0,1,\ldots,n-1\\}$. When $\mathcal{P}$ has a block structure of $p$ blocks
$P_{0}\cup P_{1}\cup\ldots\cup P_{p-1}$, by convention we will always assume
that the time labels of the blocks have been chosen in such a way that $0\in
P_{0}$.
Let $(T,P,f)$ be the canonical model of $\mathcal{P}$. A $p$-block structure
$P_{0}\cup P_{1}\cup\ldots\cup P_{p-1}$ for $\mathcal{P}$ will be said to be
_separated_ if $\langle P_{i}\rangle_{T}\cap\langle
P_{j}\rangle_{T}=\emptyset$ for $i\neq j$. Note that the separability of a
block structure for a pattern depends on the particular topology of its
canonical model and, in consequence, cannot be determined directly from the
combinatorial data of $\mathcal{P}$ a priori. However, recall that the
canonical model of a pattern $\mathcal{P}$ is unique and can be
algorithmically computed from $\mathcal{P}$. So, this is an intrinsic notion.
In Figure 3 we show an example of an 8-periodic pattern $\mathcal{P}$ admitting
both a 4-block structure given by $P_{0}=\\{0,4\\}$, $P_{1}=\\{1,5\\}$,
$P_{2}=\\{2,6\\}$, $P_{3}=\\{3,7\\}$ and a 2-block structure given by
$Q_{0}=\\{0,2,4,6\\}$, $Q_{1}=\\{1,3,5,7\\}$. Note that in both cases the
blocks are trivial, and $Q_{0}\cup Q_{1}$ is a maximal structure by
definition. As mentioned above, one can determine these block structures
directly in the combinatorial representation of $\mathcal{P}$, without
examining any particular topology. See Figure 3 (left). On the contrary, to
determine the separability of a block structure one has to construct the
canonical model of $\mathcal{P}$, which is shown in the same figure (right).
Here we see that $Q_{0}\cup Q_{1}$ is separated, while $P_{0}\cup P_{1}\cup
P_{2}\cup P_{3}$ is not (the connected hulls of the blocks $P_{0}$ and $P_{2}$,
which are respectively the intervals $[0,4]$ and $[2,6]$, intersect at the
vertex $a$).
Figure 3. Left: an 8-periodic pattern $\mathcal{P}$ admitting two block
structures with trivial blocks. Right: the canonical model $(T,P,f)$ of
$\mathcal{P}$, for which the images of the vertices are $f(a)=c$, $f(b)=0$ and
$f(c)=a$.
Let $\mathcal{P}$ be an $n$-periodic pattern and let $(T,P,f)$ be the
canonical model of $\mathcal{P}$. Let $P=P_{0}\cup P_{1}\cup\ldots\cup
P_{p-1}$ be a separated $p$-block structure for $\mathcal{P}$. Then,
$f(\langle P_{i}\rangle)=\langle P_{i+1\bmod p}\rangle$. The _skeleton of
$\mathcal{P}$_ (associated to this block structure) is a $p$-periodic pattern
$\mathcal{S}$ defined as follows. Consider the tree $S$ obtained from $T$ by
collapsing each tree $\langle P_{i}\rangle$ to a point $x_{i}$. Let
$\kappa\colon T\longrightarrow S$ be the standard projection, which is
bijective on $T\setminus\cup_{i}\langle P_{i}\rangle$ and satisfies
$\kappa(\langle P_{i}\rangle)=x_{i}$. Set
$Q=\kappa(P)=\\{x_{0},x_{1},\ldots,x_{p-1}\\}$ and define $\theta\colon
Q\longrightarrow Q$ by $\theta(x_{i})=x_{i+1\bmod p}$. Then the _skeleton_
$\mathcal{S}$ of $\mathcal{P}$ is defined to be the $p$-periodic pattern
$([S,Q],[\theta])$.
###### Remark 3.3 (Standing convention).
Let $\mathcal{P}$ be an $n$-periodic pattern whose points are time labeled as
$\\{0,1,\ldots,n-1\\}$. Assume that $\mathcal{P}$ has a separated $p$-block
structure. From the convention established in Remark 3.2, each point of
$\mathcal{P}$ labeled as $i$ belongs to the block $P_{i\bmod{p}}$. From now on
we adopt the convention that the $p$ points of the skeleton have time labels
$\\{0,1,\ldots,p-1\\}$ such that the point $i$ of the skeleton corresponds to
the collapse of the block $P_{i}$.
###### Example 3.4.
Let us see an example of construction of the skeleton. Consider the 8-periodic
pattern $\mathcal{P}$ consisting of two discrete components $\\{0,2,6\\}$,
$\\{0,1,3,4,5,7\\}$ (Figure 4, left). Then, $P_{0}=\\{0,4\\}$,
$P_{1}=\\{1,5\\}$, $P_{2}=\\{2,6\\}$, $P_{3}=\\{3,7\\}$ defines a structure of
4 trivial blocks. By checking the canonical model $(T,P,f)$, which is shown in
Figure 4 (center), we see that $\langle P_{i}\rangle_{T}\cap\langle
P_{j}\rangle_{T}=\emptyset$ when $i\neq j$. Thus, the structure is separated.
The corresponding skeleton is obtained by collapsing the connected hull of each
block to a point, giving the 4-periodic pattern $\mathcal{S}$ shown in Figure
4 (right).
Figure 4. Left: an 8-periodic pattern $\mathcal{P}$ with a separated structure
of 4 trivial blocks. Center: the canonical model $(T,P,f)$ of $\mathcal{P}$,
the connected hulls of the blocks marked with thick lines. Right: the
corresponding skeleton.
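As remarked after Figure 3, structures of trivial blocks can be detected on the combinatorial data alone. The following minimal Python sketch (our own illustration; for a time-labeled orbit the candidate blocks are forced to be the residue classes mod $p$) tests the trivial-block condition on Example 3.4.

```python
def trivial_blocks(n, p, components):
    # components: the discrete components of a time-labeled n-periodic
    # pattern, as sets of labels.  The candidate blocks are the residue
    # classes P_i = {x : x = i (mod p)}, so f(P_i) = P_{i+1 mod p} holds
    # automatically; the blocks are trivial iff each one is contained in
    # a single discrete component.
    blocks = [{x for x in range(n) if x % p == i} for i in range(p)]
    return all(any(B <= C for C in components) for B in blocks)

# Example 3.4: discrete components {0,2,6} and {0,1,3,4,5,7}.
comps = [{0, 2, 6}, {0, 1, 3, 4, 5, 7}]
print(trivial_blocks(8, 4, comps))   # True: a structure of 4 trivial blocks
```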
The entropies of a pattern $\mathcal{P}$ with a separated structure of trivial
blocks and its associated skeleton coincide, as the following result (a
reformulation of Proposition 8.1 of [6]) states.
###### Proposition 3.5.
Let $\mathcal{P}$ be a pattern with a separated structure of trivial blocks.
Let $\mathcal{S}$ be the corresponding skeleton. Then,
$h(\mathcal{S})=h(\mathcal{P})$.
Going back to Example 3.4, note that the obtained skeleton $\mathcal{S}$ is a
zero entropy interval pattern. Then, $h(\mathcal{P})=0$ by Proposition 3.5.
As a consequence of Proposition 3.5 we have the following result, that will be
used in the proof of the main theorem of this paper.
###### Corollary 3.6.
Let $\mathcal{P}$ be an $n$-periodic pattern with a separated structure of $p$
trivial blocks. Let $\mathcal{S}$ be the corresponding skeleton. If
$h(\mathcal{S})\geq\log(\lambda_{p})$, then
$h(\mathcal{P})>\log(\lambda_{n})$.
###### Proof.
Since $p$ is a strict divisor of $n$, it is a direct consequence of
Propositions 3.5 and 2.3(a). ∎
The existence of a separated structure of trivial blocks for a pattern
$\mathcal{P}$ has a strong connection with the path transition matrix of
$\mathcal{P}$, via the iterative behaviour of some particular basic paths of
$\mathcal{P}$. Let us explain it. Let $\mathcal{P}$ be a periodic pattern and
let $\pi$ be a basic path of $\mathcal{P}$. Consider any model $(T,P,f)$ of
$\mathcal{P}$. For $k\geq 1$, we will say that $\pi$ _splits in $k$ iterates_
if $f^{i}(\pi)$ is a basic path of $\mathcal{P}$ for $0\leq i<k$ and
$f^{k}(\pi)$ is not a basic path of $\mathcal{P}$. Equivalently, $f^{i}(\pi)$
only $f$-covers $f^{i+1}(\pi)$ for $0\leq i<k-1$ and $f^{k-1}(\pi)$ $f$-covers
at least two different basic paths. We say that a basic path $\pi$ _never
splits_ if $f^{i}(\pi)$ is a basic path for every $i\geq 0$. In this case, we
will say that $\mathcal{P}$ is _$\pi$ -reducible_. As an example, the path
$\pi=\\{0,4\\}$ for the pattern $\mathcal{P}$ in Figure 4 never splits, so
$\mathcal{P}$ is $\pi$-reducible. On the other hand, let $\sigma$ be the path
$\\{4,7\\}$ on the same pattern. Note that $f(\sigma)=\\{5,0\\}$ is a basic
path, while $f^{2}(\sigma)=\\{6,1\\}$ is not. Then, $\sigma$ splits in 2
iterates, and $f(\sigma)$ $f$-covers the two basic paths $\\{6,0\\}$ and $\\{0,1\\}$.
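In terms of the path transition matrix, this behaviour can be tested mechanically: by the equivalence above, the image of a basic path is again a basic path exactly when the corresponding row of $M_{\mathcal{P}}$ has a single nonzero entry. A minimal sketch (our own helper, not a procedure from [7]):

```python
import numpy as np

def never_splits(M, i):
    # M: path transition matrix M_P (0/1 entries); i: index of a basic path.
    # The path never splits iff every basic path in its forward orbit
    # f-covers exactly one basic path, namely its image.
    M = np.asarray(M)
    visited, v = set(), i
    while v not in visited:
        visited.add(v)
        out = np.flatnonzero(M[v])
        if len(out) != 1:
            return False    # some iterate f-covers two or more basic paths
        v = int(out[0])
    return True             # the orbit closed into a loop of out-degree 1
```

Since there are finitely many basic paths, the walk either meets a row with two or more nonzero entries (the path splits) or revisits a vertex, certifying that the path never splits and hence that the pattern is $\pi$-reducible.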
The $\pi$-reducibility of a pattern with respect to a basic path $\pi$ is
equivalent to the existence of a separated structure of trivial blocks, as the
following result states.
###### Proposition 3.7.
Let $\mathcal{P}$ be a periodic pattern. Then, $\mathcal{P}$ is
$\pi$-reducible for a basic path $\pi$ if and only if $\mathcal{P}$ has a
maximal and separated structure of trivial blocks. In this case, $\mathcal{P}$
is $\sigma$-reducible for any basic path $\sigma$ contained in a block.
###### Proof.
The ‘only if’ part of the first statement is Proposition 9.5 of [7], while its
‘if’ part and the second claim easily follow from the definition of a trivial
block structure. ∎
## 4\. A mechanism to compare entropies
Another key ingredient to prove Theorem A is a tool, first introduced in [7],
that allows us to compare the entropies of two patterns $\mathcal{P}$ and
$\mathcal{O}$ when $\mathcal{O}$ has been obtained by joining together several
discrete components of $\mathcal{P}$. For the sake of brevity, here we will
give a somewhat informal (though completely clear) version of this procedure.
Let $(T,P,f)$ be a model of a pattern $\mathcal{P}$. We recall that two
discrete components of $(T,P)$ are either disjoint or intersect at a single
point of $P$. Two discrete components $A,B$ of $(T,P)$ will be said to be
_adjacent at $x\in P$_ (or simply _adjacent_) if $A\cap B=\\{x\\}$. A point
$z\in P$ will be said to be _inner_ if $z$ belongs to $k\geq 2$ discrete
components of $(T,P)$, all being pairwise adjacent at $z$.
Now let $x\in P$ be an inner point and let $A,B$ be two discrete components
adjacent at $x$. If we join together $A$ and $B$ to get a new discrete
component $A\cup B$ and keep intact the remaining components, we get a new
pattern $\mathcal{O}$. We will say that $\mathcal{O}$ is an _opening of
$\mathcal{P}$_ (with respect to the inner point $x$ and the discrete
components $A$ and $B$). As an example, see Figure 5, where $\mathcal{O}$ is
an opening of $\mathcal{P}$ with respect to the inner point 5 and the discrete
components $A=\\{2,5,6\\}$ and $B=\\{0,5\\}$, while $\mathcal{R}$ is an
opening of $\mathcal{P}$ with respect to the inner point 5 and the discrete
components $B$ and $C=\\{1,3,5\\}$.
Figure 5. Two different openings of $\mathcal{P}$.
###### Remark 4.1 (Standing convention).
As is clear from the examples shown in Figure 5, we are implicitly assuming
that the labeling of the points of an $n$-periodic pattern $\mathcal{P}$ fixes
the labeling of the points of any opening of $\mathcal{P}$.
As one might expect, the entropy of a pattern does not increase when
performing an opening, as the following result (Theorem 5.3 of [7]) states.
###### Theorem 4.2.
Let $\mathcal{P}$ and $\mathcal{O}$ be $n$-periodic patterns. If $\mathcal{O}$
is an opening of $\mathcal{P}$, then $h(\mathcal{P})\geq h(\mathcal{O})$.
We finish this section stating that the property for a pattern of having a
block structure is preserved by openings. The result is a direct consequence
of the definition of a block structure and the fact that no new inner points
are created after performing an opening.
###### Lemma 4.3.
Let $\mathcal{P}$ be a periodic pattern with a block structure and let
$\mathcal{O}$ be an opening of $\mathcal{P}$. Then, $\mathcal{O}$ has a block
structure.
## 5\. Strategy of the proof of Theorem A
In this section we give a general overview of the proof of Theorem A, in order
to justify the need for the several techniques and results deployed in the
subsequent sections.
We will prove Theorem A by induction on the period $n$. So, assume that we
have an $n$-periodic pattern $\mathcal{P}$ and that the result is true for
every pattern with period less than $n$.
The first step is a simplification process based on the opening mechanism.
Recall (Theorem 4.2) that after performing an opening on $\mathcal{P}$, the
entropy $h$ of the obtained pattern is less than or equal to $h(\mathcal{P})$. If
$h$ is still positive, we can perform another opening, and so on, until we reach
a pattern with positive entropy such that every new opening leads to entropy
zero. In other words, we can assume that $\mathcal{P}$ satisfies the following
property:
($\star$) $\mbox{Every opening of $\mathcal{P}$ is a zero entropy pattern}.$
Property ($\star$) is very restrictive and has a strong consequence: a
pattern satisfying ($\star$) is, _generically_, $\pi$-reducible. More
precisely, we have the following result, which will be proved in Section 8.
###### Theorem D.
Let $\mathcal{P}$ be an $n$-periodic pattern with positive entropy such that
any opening of $\mathcal{P}$ has entropy zero. Assume that $\mathcal{P}$ has
at least two inner points and at least three openings. Then, $\mathcal{P}$ is
$\pi$-reducible for some basic path $\pi$.
If $\mathcal{P}$ satisfies the hypothesis of Theorem D, then it is
$\pi$-reducible. So, we can consider its skeleton $\mathcal{S}$, with the same
entropy but with a period that strictly divides $n$, and use the induction
hypothesis.
Figure 6. A $k$-flower (left) and a triple chain (right).
The above argument is the core idea of the proof of Theorem A, but we are left
with two special cases for which we cannot assure that property ($\star$)
implies $\pi$-reducibility: the _$k$-flowers_ and the _triple chain_. A
_$k$-flower_ is a pattern consisting of $k\geq 2$ discrete components (the
_petals_) attached at a unique inner point. A pattern having three discrete
components and two inner points will be called a _triple chain_. See Figure 6.
It is easy to see that the flowers and the triple chains are the two sorts of
patterns that do not satisfy the property of having at least two inner points
and at least three openings.
The cases of the $k$-flowers and the triple chain will be tackled in Sections
9 and 10 respectively. Concerning the $k$-flowers, the case $k=2$ is especially
simple since Theorem A follows directly from a previous result in [8]. On the
other hand, for $k\geq 3$ we construct an $n^{\prime}$-periodic pattern, where
$n^{\prime}$ is a strict divisor of $n$, whose entropy can be put in relation
with that of $\mathcal{P}$, and then we use the induction hypothesis. Finally,
in the case of the triple chain we compute directly lower bounds of the
entropy by counting coverings in the $\mathcal{P}$-path graph (equivalently,
entries in the path transition matrix).
## 6\. Structure of zero entropy patterns
Although a point is an element of a topological space and a pattern is a
combinatorial object defined as an equivalence class of pointed trees, recall
that by abuse of language we talk about the _points_ of a pattern. The same
translation from topology to combinatorics can be applied to the terms
_valence_ , _inner point_ and _endpoint_. The (combinatorial) _valence_ of a
point $x$ of a pattern $\mathcal{P}$ is defined as the number of discrete
components of $\mathcal{P}$ containing $x$. Recall that an _inner point of
$\mathcal{P}$_ has been defined as a point of combinatorial valence larger
than 1. Otherwise, the point will be called an _endpoint of $\mathcal{P}$_.
Let $x$ be a point of $\mathcal{P}$ of combinatorial valence $\nu$. Obviously,
for any model $(T,P,f)$ of $\mathcal{P}$, the (topological) valence of the
point of $T$ corresponding to $x$ is the same and equals $\nu$. In
consequence, $x$ is an endpoint (respectively, an inner point) of
$\mathcal{P}$ if and only if the point corresponding to $x$ in any model
$(T,P,f)$ is an endpoint (respectively, a point of valence larger than 1) of
the tree $T$. So, in what follows we will drop the words _combinatorial_ and
_topological_ and will use these terms indistinctly in both senses.
The strategy outlined in Section 5 relies strongly on property ($\star$),
which depends on the notion of _zero entropy pattern_. So, we start this
section with the following recursive characterization of zero entropy
patterns, that uses the notions of block structure and skeleton presented in
Section 3. It is Proposition 5.6 of [5].
###### Proposition 6.1.
Let $\mathcal{P}$ be an $n$-periodic pattern. Then, $h(\mathcal{P})=0$ if and
only if either $\mathcal{P}$ is trivial or has a maximal separated structure
of trivial blocks such that the associated skeleton has entropy $0$.
Figure 7. Top: a sequence of skeletons. Bottom: the sequence of combinatorial
collapses according to Definition 6.4.
Obviously we can use Proposition 6.1 recursively, in the sense that the
skeleton $\mathcal{S}$, with entropy zero and a period that strictly divides
that of $\mathcal{P}$, has also a maximal separated structure of trivial
blocks with an associated skeleton $\mathcal{S}^{\prime}$ of entropy zero. We
can thus iterate the process as many times as necessary to finally obtain a
trivial pattern. Consider, for instance, the zero entropy pattern
$\mathcal{P}$ of Example 3.4, whose skeleton $\mathcal{S}$ was shown in Figure
4. This skeleton has a maximal separated structure of 2 trivial blocks, with
the associated skeleton $\mathcal{S}^{\prime}$ being a trivial pattern of 2
points. See the complete sequence of skeletons in Figure 7 (top). Note that
the previous simplification process cannot be carried out without checking the
particular topology of the involved canonical models. Indeed, if we ignore the
topology of the tree $T$ in the canonical model $(T,P,f)$ of $\mathcal{P}$
(shown in Figure 4), then it is not possible to decide, from the combinatorics
of $\mathcal{P}$ alone, whether the skeleton is the pattern $\mathcal{S}$ or
the pattern $\mathcal{C}$ depicted in Figure 7. To overcome this dependence on
the topology, we next propose a similar but purely combinatorial
simplification mechanism for zero entropy patterns.
###### Definition 6.2.
Let $\mathcal{P}=([T,P],[f])$ be a zero entropy $n$-periodic pattern. Let
$P_{0}\cup P_{1}\cup\ldots\cup P_{p-1}$ be the maximal and separated structure
of trivial blocks given by Proposition 6.1. A $p$-periodic pattern
$\mathcal{C}=([S,Q],[g])$ will be called the _combinatorial collapse of
$\mathcal{P}$_ if the following properties are satisfied:
1. (a)
$g(i)=j$ if and only if $f(P_{i})=P_{j}$
2. (b)
For any $0\leq i<j\leq p-1$, there is a discrete component of $\mathcal{P}$
intersecting the blocks $P_{i},P_{j}$ if and only if there is a discrete
component of $\mathcal{C}$ containing the points $i,j$.
We will say that the point $i$ of $\mathcal{C}$ is the _collapse_ of the block
$P_{i}$ of $\mathcal{P}$. Property (a) above implies that the standing
convention established in Remark 3.3 about the labeling of the points of a
skeleton translates verbatim to the labeling of the points of a combinatorial
collapse.
Note that, by definition, the combinatorial collapse is unique, since it is
always carried out over the maximal structure of trivial blocks.
As an example, the pattern $\mathcal{C}$ shown in Figure 7 (bottom) is the
combinatorial collapse of $\mathcal{P}$. Note that the skeleton $\mathcal{S}$
does not satisfy property (b) of Definition 6.2: the blocks $P_{0}=\\{0,4\\}$
and $P_{1}=\\{1,5\\}$ intersect a common discrete component of $\mathcal{P}$,
while the corresponding points $0,1$ of $\mathcal{S}$ are contained in
different discrete components.
Notice that, if $\mathcal{P}$ is a zero entropy pattern, then the
combinatorial collapse $\mathcal{C}$ of $\mathcal{P}$ can be obtained from the
skeleton $\mathcal{S}$ of $\mathcal{P}$ simply by performing openings. Then,
Theorem 4.2 assures us that $h(\mathcal{C})=h(\mathcal{S})=0$. Therefore, we
get the following translation of Proposition 6.1 to the context of
combinatorial collapses.
Figure 8. An example of a zero entropy 18-periodic pattern $\mathcal{P}_{2}$
and the corresponding sequence of collapses.
###### Proposition 6.3.
Let $\mathcal{P}$ be a nontrivial periodic pattern with entropy zero. Then,
the combinatorial collapse of $\mathcal{P}$ has entropy zero.
###### Definition 6.4.
As an immediate consequence of Proposition 6.3, a zero entropy $n$-periodic
pattern $\mathcal{P}$ has an associated sequence of patterns
$\\{\mathcal{P}_{i}\\}_{i=0}^{r}$ and a sequence of integers
$\\{p_{i}\\}_{i=0}^{r}$ for some $r\geq 0$ such that:
1. (a)
$\mathcal{P}_{r}=\mathcal{P}$
2. (b)
$\mathcal{P}_{0}$ is a trivial $p_{0}$-periodic pattern
3. (c)
For $1\leq i\leq r$, $\mathcal{P}_{i}$ has a maximal separated structure of
$\prod_{j=0}^{i-1}p_{j}$ trivial blocks of cardinality $p_{i}$ and
$\mathcal{P}_{i-1}$ is the corresponding combinatorial collapse.
The sequence $\\{\mathcal{P}_{i}\\}_{i=0}^{r}$ will be called _the sequence of
collapses of_ $\mathcal{P}$. Notice that $\prod_{j=0}^{r}p_{j}=n$. See Figure
8 for an example with $p_{0}=3$, $p_{1}=2$, $p_{2}=3$.
###### Remark 6.5.
Let $\mathcal{P}$ be a zero entropy $n$-periodic pattern and let
$\\{\mathcal{P}_{i}\\}_{i=0}^{r}$ be the corresponding sequence of collapses.
Consider any particular time labeling $\\{0,1,\ldots,n-1\\}$ of the points of
$\mathcal{P}$. By Remark 3.3, this choice fixes the time labels of all points
in all patterns of the sequence of collapses. Note also that, for any $0\leq
i<r$, the integers labeling the points of $\mathcal{P}_{i}$ persist as labels
of points in $\mathcal{P}_{i+1}$. In particular, if $p_{0}$ is the period of
the trivial pattern $\mathcal{P}_{0}$, then $\\{0,1,\ldots,p_{0}-1\\}$ are the
only integers in the range $\\{0,1,\ldots,n-1\\}$ that persist as labels of
points in any pattern of the sequence of collapses. See Figure 8 for an
example with $p_{0}=3$.
## 7\. Branching sequences
In this section we dive deeper into the very particular combinatorial
structure of zero entropy patterns. The obtained results will be used in
Section 8 to prove Theorem D.
Let $\mathcal{P}$ be an $n$-periodic pattern and let $x$ be a point of
$\mathcal{P}$ of valence $\nu\geq 1$. Consider any model $(T,P,f)$ of
$\mathcal{P}$. Then, $T\setminus\\{x\\}$ has $\nu$ connected components
$K_{1},K_{2},\ldots,K_{\nu}$. We want to register how the forward iterates of
the point $x$ are distributed among the connected components of
$T\setminus\\{x\\}$. To this end, consider the integer time labeling of the
points of $\mathcal{P}$ such that $x=0$. Now, $\\{P\cap K_{i}\\}_{i=1}^{\nu}$
can be viewed as a partition of $\\{1,2,\ldots,n-1\\}$. The set $(P\cap
K_{i})\cup\\{0\\}$ of points of $\mathcal{P}$ will be called an _$x$ -branch_.
Note that this notion is independent of the chosen model $(T,P,f)$
representing $\mathcal{P}$. As an example, consider the 7-periodic pattern
$\mathcal{P}$ shown in Figure 5. Let $x$ be the point of valence 3 labeled as
5 in that figure. Shift all labels by $-5$ (mod 7). The discrete components of
$\mathcal{P}$ read now as $\\{0,3,5\\}$, $\\{3,6\\}$, $\\{0,2\\}$ and
$\\{0,1,4\\}$. The $x$-branches of $\mathcal{P}$ are then $\\{0,3,5,6\\}$,
$\\{0,2\\}$ and $\\{0,1,4\\}$.
###### Remark 7.1.
Let $\mathcal{P}$ be a periodic pattern and let $x$ be any point of
$\mathcal{P}$. Observe that any discrete component of $\mathcal{P}$ is
contained in a single $x$-branch. As a direct consequence of this fact, if in
addition $\mathcal{P}$ has entropy zero then any block of the maximal
structure of trivial blocks is contained in a single $x$-branch.
To understand the following result, it is crucial to keep in mind Remarks 3.3
and 6.5 concerning the labeling conventions of points and blocks in zero
entropy patterns. In particular, the labels of all points in the combinatorial
collapse of a pattern $\mathcal{P}$ persist as labels of points in
$\mathcal{P}$.
###### Lemma 7.2.
Let $\mathcal{P}$ be a zero entropy periodic pattern with a maximal separated
structure $P_{0}\cup P_{1}\cup\ldots\cup P_{p-1}$ of trivial blocks. Let
$\mathcal{C}$ be the combinatorial collapse of $\mathcal{P}$. If $0\leq
i,j,k<p$ are three points of $\mathcal{C}$ such that $\\{j,k\\}$ is contained
in a single $i$-branch of $\mathcal{C}$, then $P_{j}\cup P_{k}$ is contained
in a single $i$-branch of $\mathcal{P}$.
###### Proof.
Assume by way of contradiction that $P_{j}$ and $P_{k}$ are respectively
contained in two different $i$-branches of $\mathcal{P}$. Then, for some
$N\geq 2$ there exist $N+1$ different points of $\mathcal{P}$,
$x_{0},x_{1},\ldots,x_{N}$, such that:
1. (a)
$x_{0}=j$ and $x_{N}=k$
2. (b)
$x_{n}$ is inner for all $0<n<N$
3. (c)
$\\{x_{n},x_{n+1}\\}$ is contained in a discrete component of $\mathcal{P}$
for $0\leq n<N$
4. (d)
$x_{m}=i$ for some $1\leq m<N$
Intuitively, the above ordered sequence of points accounts for all points of
$\mathcal{P}$ successively met in the shortest path going from $j$ to $k$. The
assumption that $i$ separates $j$ from $k$ is imposed by property (d).
Consider now, for any point $x_{n}$ of the above sequence, the collapse of the
trivial block of $\mathcal{P}$ containing $x_{n}$. It is a point of
$\mathcal{C}$ that we denote by $y_{n}$. Note that $y_{0}=j$, $y_{m}=i$ and
$y_{N}=k$. Observe also that, for a pair of consecutive points
$x_{n},x_{n+1}$, it may happen that $\\{x_{n},x_{n+1}\\}$ is contained in a
block. In this case, since the blocks are trivial,
$\\{x_{n},x_{n+1},x_{n+2}\\}$ is not contained in a block. Therefore,
$y_{n}=y_{n+1}\neq y_{n+2}$. On the other hand, if $\\{x_{n},x_{n+1}\\}$ is
not contained in a block, by the definition of the combinatorial collapse,
$\\{y_{n},y_{n+1}\\}$ is a two-element set contained in a discrete component of
$\mathcal{C}$. These observations lead to the existence of a sequence
$z_{0},z_{1},\ldots,z_{M}$ of $M+1\leq N+1$ points of $\mathcal{C}$ such that
1. (a’)
$z_{0}=j$ and $z_{M}=k$
2. (b’)
$z_{n}$ is inner for all $0<n<M$
3. (c’)
$\\{z_{n},z_{n+1}\\}$ is contained in a discrete component of $\mathcal{C}$
for $0\leq n<M$
4. (d’)
$z_{m^{\prime}}=i$ for some $1\leq m^{\prime}<M$
By property (d’), $j$ and $k$ belong to different $i$-branches in
$\mathcal{C}$, in contradiction with the hypothesis of the lemma. ∎
Let $\mathcal{P}$ be a zero entropy periodic pattern and let $\mathcal{C}$ be
its combinatorial collapse. Let us call $\\{P_{i}\\}$ and $\\{Q_{i}\\}$ the
blocks of the respective maximal structures of trivial blocks. Let $x$ be a
point of $\mathcal{P}$ and let $P_{i}$ be the block of $\mathcal{P}$
containing $x$. Let us call $y$ the point of $\mathcal{C}$ corresponding to
the collapse of $P_{i}$ and let $Q_{j}$ be the block of $\mathcal{C}$
containing $y$. By Remark 7.1, there exists a unique $x$-branch $Z$ containing
$P_{i}$. On the other hand, Remark 7.1 yields also that $Q_{j}$ is contained
in a single $y$-branch of $\mathcal{C}$. Recall now that the labels of the
points in $\mathcal{C}$ persist as labels of points in $\mathcal{P}$. So, we
can view $Q_{j}$ also as a subset of points of $\mathcal{P}$. Then, by Lemma
7.2, there exists a unique $x$-branch $Z^{\prime}$ containing $Q_{j}$. The
point $x$ will be called _bidirectional_ if $Z\neq Z^{\prime}$.
###### Lemma 7.3.
Any periodic pattern with entropy zero has bidirectional inner points.
###### Proof.
Let $\mathcal{P}=([T,P],[f])$ be a zero entropy pattern and let $\mathcal{C}$
be the combinatorial collapse of $\mathcal{P}$. Let $P_{0}\cup
P_{1}\cup\ldots\cup P_{p-1}$ and $Q_{0}\cup Q_{1}\cup\ldots Q_{q-1}$ be the
maximal separated block structures of $\mathcal{P}$ and $\mathcal{C}$
respectively.
Let $x$ be any inner point of $\mathcal{P}$. Assume that $x$ is not
bidirectional. In order not to overload the notation, assume without loss
of generality that $x=0$. By the standing labeling conventions, $0\in P_{0}$
and the collapse of $P_{0}$ is the point of $\mathcal{C}$ labeled as 0, that
belongs to the block $Q_{0}=\\{0,q,2q,\ldots,(p/q-1)q\\}$. Since $P_{0}$ is a
trivial block, $P_{0}\subset C$ for a discrete component $C$ of $\mathcal{P}$.
Set
$X:=\bigcup_{1\leq k<p/q}P_{kq}.$
The set $X$ is the expansion of all points in $Q_{0}\setminus\\{0\\}$ to the
corresponding blocks in $\mathcal{P}$. Since we are assuming that 0 is not
bidirectional, Remark 7.1 and Lemma 7.2 imply that
(1) $P_{0}\cup X\mbox{ is contained in a single 0-branch $Z$ of }\mathcal{P}.$
We start by distinguishing two cases.
* Case 1.
$X\cap C=\emptyset$.
We claim that in this case $C=P_{0}$. Indeed, $Q_{0}$ is contained in a
discrete component of $\mathcal{C}$. By definition of the combinatorial
collapse, all blocks $P_{iq}$ for $0\leq i<p/q$ must intersect a single
discrete component $D$ of $\mathcal{P}$. Since $X\cap C=\emptyset$, by (1)
this is only possible if $C=P_{0}$ (as claimed), $D$ is contained in the
0-branch $Z$ and $D$ is adjacent to $C$. See Figure 9 (center). Let
$x^{\prime}$ be the only point in $C\cap D=P_{0}\cap D$, whose collapse is the
point 0 in $\mathcal{C}$. Then, the $x^{\prime}$-branch containing $P_{0}$ and
the $x^{\prime}$-branch containing $Q_{0}$ are different. Therefore,
$x^{\prime}$ is bidirectional and we are done.
* Case 2.
$X\cap C\neq\emptyset\mbox{ and }X\not\subset C$.
In this case, all blocks $P_{iq}$ intersect $C$ and at least one block, say
$P_{jq}$, has an inner point $x^{\prime}$ in common with $C$, whose collapse
is the point $jq$ in $\mathcal{C}$. See Figure 9 (right). Then, the
$x^{\prime}$-branch containing $P_{jq}$ and the $x^{\prime}$-branch containing
$Q_{0}$ are different. Therefore, $x^{\prime}$ is bidirectional and we are
done.
Figure 9. The two cases in the proof of Lemma 7.3. The arrows mark the two
different $x^{\prime}$-branches implying that $x^{\prime}$ is bidirectional.
Note that if $\mathcal{P}$ has no bidirectional inner points, then from above
we are not in the hypotheses of cases 1 and 2 and, in consequence, $X\subset
C$. Since $P_{0}\subset C$, we get that
$\tilde{P}_{0}:=P_{0}\cup X=\bigcup_{0\leq k<p/q}P_{kq}\subset C.$
Set $\tilde{P}_{i}:=\bigcup_{k=0}^{(p/q)-1}P_{i+kq}$ for $0\leq i<q$. From
above, if $\mathcal{P}$ has no bidirectional inner points then $\tilde{P}_{i}$
is contained in a single discrete component of $\mathcal{P}$. Moreover, since
$f(\tilde{P}_{i})=\tilde{P}_{i+1}$, it follows that
$\tilde{P}_{0}\cup\tilde{P}_{1}\cup\ldots\cup\tilde{P}_{q-1}$ is a trivial
block structure for $\mathcal{P}$, in contradiction with the maximality of the
structure $P_{0}\cup P_{1}\cup\ldots\cup P_{p-1}$. ∎
Let $x$ be a point of an $n$-periodic pattern $\mathcal{P}$ and let $\nu\geq
1$ be the valence of $x$. It is convenient to fix an indexing of the set of
$x$-branches. Next we define a natural indexing method that will be used by
default from now on. Recall that, arithmetically, an $x$-branch is nothing but
a subset of $\\{0,1,\ldots,n-1\\}$. Moreover, each $x$-branch contains 0 by
definition and the intersection of two different $x$-branches is $\\{0\\}$. We
will index the set of $x$-branches according to the minimum (positive) time
distance from $x$ to a point in the branch. More precisely, for any $x$-branch
$Z$, let $d_{Z}$ be the minimum positive integer in $Z$. From now on, we will
assume that the set $\\{Z_{i}\\}_{i=1}^{\nu}$ of $x$-branches is indexed in
such a way that $d_{Z_{i}}<d_{Z_{j}}$ if and only if $i<j$. As an example,
consider the 7-periodic pattern $\mathcal{P}$ shown in Figure 5. Let $x$ be
the point of valence 3 labeled as 5 in that figure. The $x$-branches of
$\mathcal{P}$ are then $X=\\{0,3,5,6\\}$, $Y=\\{0,2\\}$ and $W=\\{0,1,4\\}$,
with $d_{X}=3$, $d_{Y}=2$ and $d_{W}=1$. So, for this example we would denote
the set of $x$-branches as $\\{Z_{1},Z_{2},Z_{3}\\}$, with $Z_{1}=W$,
$Z_{2}=Y$ and $Z_{3}=X$.
Let $\mathcal{P}$ be an $n$-periodic pattern and let $x$ be an inner point of
$\mathcal{P}$, of valence $\nu>1$. There exists a unique $n$-periodic
$\nu$-flower (a pattern with a unique inner point $y$ and $\nu$ discrete
components) whose set of $y$-branches, which coincides with its set of discrete
components (petals) when $y$ is labeled as 0, equals the set of
$x$-branches of $\mathcal{P}$. Such a pattern will be denoted by
$\mathcal{F}_{x}(\mathcal{P})$. Note that $\mathcal{F}_{x}(\mathcal{P})$ is in
some sense the simplest pattern having the set of $x$-branches of
$\mathcal{P}$, and is obtained from $\mathcal{P}$ by performing iteratively
all possible openings that do not consist of joining two discrete components
adjacent at $x$. For an example, consider the 7-periodic pattern $\mathcal{P}$
shown in Figure 5. Let $x$ be the point of valence 3 labeled as 5 in that
figure. In this case, $\mathcal{F}_{x}(\mathcal{P})$ is the 3-flower whose
petals are $\\{5,1,3,4\\}$, $\\{5,0\\}$ and $\\{5,2,6\\}$. After shifting the
labels by $-5$ (mod 7) in order that the central point of the flower reads as
0, the petals are written as $\\{0,3,5,6\\}$, $\\{0,2\\}$ and $\\{0,1,4\\}$,
that are precisely the $x$-branches of $\mathcal{P}$.
###### Remark 7.4.
Let $\mathcal{P},\mathcal{Q}$ be $n$-periodic patterns. For any
$x,y\in\\{0,1,\ldots,n-1\\}$, the set of $x$-branches of $\mathcal{P}$ and the
set of $y$-branches of $\mathcal{Q}$ coincide if and only if
$\mathcal{F}_{x}(\mathcal{P})=\mathcal{F}_{y}(\mathcal{Q})$.
The previous remark says that in fact the notation
$\mathcal{F}_{x}(\mathcal{P})$, which denotes a pattern, could have been
reserved to denote simply the (arithmetic) set of $x$-branches of
$\mathcal{P}$. We have used the construction of the flower just as a trick
that hopefully supports the geometric visualization.
The following result is true for any point of a periodic pattern but, in
pursuit of simplicity, is stated without loss of generality for a point
labeled as 0.
###### Lemma 7.5.
Let $\mathcal{P}$ be a zero entropy periodic pattern and let
$\\{\mathcal{P}_{i}\\}_{i=0}^{r}$ be the associated sequence of collapses. For
any $0\leq i\leq r$, let $P_{0}^{i}$ be the block of the maximal structure of
$\mathcal{P}_{i}$ containing the point $0\in\mathcal{P}_{i}$. Then,
$P_{0}^{i}$ is contained in a single 0-branch of $\mathcal{P}$.
###### Proof.
By Proposition 6.3, $h(\mathcal{P}_{i})=0$. Then, by Remark 7.1, $P_{0}^{i}$
is contained in a single 0-branch of $\mathcal{P}_{i}$. The result follows
then immediately by using iteratively Lemma 7.2. ∎
A sequence $\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$ of pairs of integers will be
called a _branching sequence_ if the following conditions hold:
1. (bs1)
$p_{i}\geq 2$ for $0\leq i\leq r$.
2. (bs2)
$\delta_{0}=1$.
3. (bs3)
For any $1\leq i\leq r$, if $\delta_{i}\notin\\{\delta_{j}\\}_{j=0}^{i-1}$
then $\delta_{i}=1+\max\\{\delta_{j}\\}_{j=0}^{i-1}$.
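Conditions (bs1)–(bs3) are straightforward to verify mechanically; here is a minimal Python sketch (our own helper). The sequence tested is the one that reappears in Example 7.7 below.

```python
def is_branching_sequence(S):
    # S = [(p_0, delta_0), ..., (p_r, delta_r)]
    ps = [p for p, _ in S]
    ds = [d for _, d in S]
    if any(p < 2 for p in ps) or ds[0] != 1:          # (bs1) and (bs2)
        return False
    for i in range(1, len(S)):                        # (bs3)
        if ds[i] not in ds[:i] and ds[i] != 1 + max(ds[:i]):
            return False
    return True

print(is_branching_sequence([(2, 1), (3, 2), (2, 2), (2, 3)]))   # True
```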
Let $\mathcal{P}$ be a zero entropy $n$-periodic pattern and let
$\\{\mathcal{P}_{i}\\}_{i=0}^{r}$ be the associated sequence of collapses. Let
$x$ be any point of $\mathcal{P}$, with valence $\nu\geq 1$. Relabel the
points of $\mathcal{P}$ in such a way that $x=0$. Now, for any pattern
$\mathcal{P}_{i}$ in the sequence of collapses, Lemma 7.5 tells us that the
block of the maximal structure of $\mathcal{P}_{i}$ containing $0$ is
contained in a single 0-branch of $\mathcal{P}$, whose index we denote by
$\delta_{i}$. It is easy to check that the sequence
$\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$, where $p_{0}$ is the period of
$\mathcal{P}_{0}$ and $p_{i}$ is the cardinality of the blocks of the maximal
structure in $\mathcal{P}_{i}$ for any $1\leq i\leq r$, satisfies properties
(bs1–3) above. It will be called _the branching sequence of $\mathcal{P}$
around $x$_. Once the indexing of the $x$-branches is fixed according to the
convention above, the branching sequence is uniquely determined by the pattern
$\mathcal{P}$ and the chosen point $x$ of $\mathcal{P}$. See Figure 10 for an
example of construction of the branching sequence. For the pattern
$\mathcal{P}$ shown in that figure, the 0-branches are
$Z_{1}=\\{0,1,2,3,5,6,7,8,9,10,11,13,14,15\\}$ and $Z_{2}=\\{0,4,12\\}$. The
maximal trivial blocks have cardinality 2 in each pattern of the sequence of
collapses. The blocks containing 0 are $\\{0,1\\}$ in $\mathcal{P}_{0}$,
$\\{0,2\\}$ in $\mathcal{P}_{1}$, $\\{0,4\\}$ in $\mathcal{P}_{2}$ and
$\\{0,8\\}$ in $\mathcal{P}$. Seen as sets of points of $\mathcal{P}$, they
are respectively contained in $Z_{1}$, $Z_{1}$, $Z_{2}$ and $Z_{1}$.
Collecting it all, we get that the branching sequence of $\mathcal{P}$ around
0 is $\\{(2,1),(2,1),(2,2),(2,1)\\}$.
Figure 10. A pattern $\mathcal{P}$ whose branching sequence around 0 is
$\\{(2,1),(2,1),(2,2),(2,1)\\}$. The two 0-branches in $\mathcal{P}$ are
denoted with $Z_{1}$ and $Z_{2}$ with the standard indexing convention.
The following observation follows directly from the definitions.
###### Remark 7.6.
Let $\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$ be the branching sequence of a zero
entropy pattern around an inner point $x$. Then, $x$ is bidirectional if and
only if $\delta_{r-1}\neq\delta_{r}$.
Now we reverse the process and consider an (abstract) branching sequence
$S=\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$. Let us see that from such a sequence
we can construct a zero entropy $n$-periodic $\nu$-flower, where
$n=p_{0}p_{1}\cdots p_{r}$ and $\nu=\max\\{\delta_{i}\\}_{i=0}^{r}$. Consider
a $p_{0}$-periodic trivial pattern $\mathcal{P}_{0}$ and let us denote its
unique discrete component by $C^{0}_{\delta_{0}}=C^{0}_{1}$ (property (bs2)).
Assume now that a zero entropy periodic pattern $\mathcal{P}_{i}$ of period
$p_{0}p_{1}\cdots p_{i}$ has been defined, with
$d_{i}:=\max\\{\delta_{j}\\}_{j=0}^{i}$ discrete components labeled as
$\\{C^{i}_{1},C^{i}_{2},\ldots,C^{i}_{d_{i}}\\}$, all adjacent to the point 0.
Now we define a new pattern $\mathcal{P}_{i+1}$ of period $p_{0}p_{1}\cdots
p_{i+1}$ by applying the following procedure. For any point $j$ of
$\mathcal{P}_{i}$, set
$K_{j}:=\\{j+p_{i},j+2p_{i},\ldots,j+(p_{i+1}-1)p_{i}\\}$. Note that, by
(bs3), either $\delta_{i+1}\leq d_{i}$, and in this case we set
$d_{i+1}:=d_{i}$, or $\delta_{i+1}=d_{i}+1$, and in this case we set
$d_{i+1}:=d_{i}+1$. The pattern $\mathcal{P}_{i+1}$ is then defined as a
$d_{i+1}$-flower with inner point 0 and discrete components labeled as
$\\{C^{i+1}_{1},C^{i+1}_{2},\ldots,C^{i+1}_{d_{i+1}}\\}$, in such a way that
$K_{0}\subset C^{i+1}_{\delta_{i+1}}$ and for any point $j\neq 0$ of
$\mathcal{P}_{i}$, $K_{j}\subset C^{i+1}_{k}$ if and only if $j\in C^{i}_{k}$.
By iterating $r$ times this procedure, finally we obtain the prescribed
$\nu$-flower $\mathcal{P}_{r}$, with the inner point conventionally labeled as
0 by construction. Such a flower, algorithmically constructed from the
branching sequence $S$, will be denoted by $\mathcal{F}(S)$. To fit the
intuition into the description of the algorithm, note that the combinatorial
collapse of a zero entropy $k$-flower is either a $(k-1)$-flower when a petal
fully coincides with a block of the maximal structure, or a $k$-flower
otherwise.
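The construction above is easy to implement. The following sketch (our own code and naming) computes, for every point of $\mathcal{F}(S)$, the index of the 0-branch containing it; on the branching sequence of Figure 10 it recovers the branch $Z_{2}=\\{0,4,12\\}$.

```python
def flower_branches(S):
    # S = [(p_0, delta_0), ..., (p_r, delta_r)]: a branching sequence.
    # Returns branch[j] = index of the 0-branch of F(S) containing the
    # point j, for 0 < j < p_0 * p_1 * ... * p_r.  The inner point 0
    # belongs to every branch by definition and is left out.
    branch = {}
    m = S[0][0]                 # current period: p_0 after the initial step
    for j in range(1, m):
        branch[j] = S[0][1]     # the unique petal of the trivial pattern P_0
    for p, d in S[1:]:
        for j in range(m):      # expand each point j to the set K_j
            for k in range(1, p):
                branch[j + k * m] = d if j == 0 else branch[j]
        m *= p
    return branch

b = flower_branches([(2, 1), (2, 1), (2, 2), (2, 1)])
print(sorted(j for j in b if b[j] == 2))   # [4, 12]: the branch Z_2 of Figure 10
```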
###### Example 7.7.
Let $S=\\{(2,1),(3,2),(2,2),(2,3)\\}$. In Figure 11 we have shown the sequence
of patterns leading to $\mathcal{F}(S)$ according to the prescribed algorithm.
Figure 11. The steps of the algorithm to generate the flower $\mathcal{F}(S)$
from the branching sequence $S=\\{(2,1),(3,2),(2,2),(2,3)\\}$.
A branching sequence $S=\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$ will be called
_minimal_ if $\delta_{i+1}\neq\delta_{i}$ for all $0\leq i<r$.
###### Lemma 7.8.
Let $S$ and $R$ be minimal branching sequences such that
$\mathcal{F}(S)=\mathcal{F}(R)$. Then $S=R$, i.e. $S$ and $R$ have the same
length and are identical term by term.
###### Proof.
Set $S=\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$ and
$R=\\{(q_{i},\kappa_{i})\\}_{i=0}^{t}$. By Remark 7.4, the hypothesis that
$\mathcal{F}(S)$ and $\mathcal{F}(R)$ are the same pattern can be reworded as
follows: if both flowers are labeled in such a way that the respective inner
points read as 0, then the respective sets of 0-branches coincide. In
particular,
(2) $\prod_{i=0}^{r}p_{i}=\prod_{i=0}^{t}q_{i}.$
First we claim that $(p_{0},\delta_{0})=(q_{0},\kappa_{0})$. Indeed, by
property (bs2), $\delta_{0}=\kappa_{0}=1$. Assume by way of contradiction that
$p_{0}<q_{0}$ (the argument is symmetric when $q_{0}<p_{0}$). Then, from (2)
it follows that $r\geq 1$. Moreover, since $S$ is minimal,
$\delta_{1}\neq\delta_{0}$. Property (bs3) yields then that $\delta_{1}=2$.
So, the algorithm of construction of $\mathcal{F}(S)$ and $\mathcal{F}(R)$
implies that the 0-branch indexed as 1 in $\mathcal{F}(S)$ contains the points
$0,1,2,\ldots,p_{0}-1$ and the point $p_{0}$ is contained in the 0-branch
indexed as 2, while the 0-branch indexed as 1 in $\mathcal{F}(R)$ contains at
least the points $0,1,2,\ldots,p_{0}-1,p_{0}$. In consequence,
$\mathcal{F}(S)$ and $\mathcal{F}(R)$ are not the same pattern, a
contradiction that proves the claim.
Assume now that all terms of $S$ and $R$ are identical up to an index $j\geq
0$ (the previous claim states that this is true when $j=0$). In this case, if
$S$ ends at the index $j$, that is, $r=j$, then (2) implies that $t=j$ as well
and we are done. Assume that $r>j$ (the arguments and conclusions are the same if $t>j$).
Set $k:=\prod_{i=0}^{j}p_{i}=\prod_{i=0}^{j}q_{i}$. From the algorithm of
construction of $\mathcal{F}(S)$ and $\mathcal{F}(R)$, it follows that all
points from 0 to $k-1$ are distributed identically inside the 0-branches of
both flowers. The same arguments used above show then that $t>j$, and that if
we assume $(p_{j+1},\delta_{j+1})\neq(q_{j+1},\kappa_{j+1})$, we reach a
contradiction since the points $k,k+1,k+2,\ldots,kp_{j+1}-1$ will be
distributed in different 0-branches of $\mathcal{F}(S)$ and $\mathcal{F}(R)$.
∎
###### Remark 7.9.
If $\mathcal{P}$ is a zero entropy flower, then the branching sequence of
$\mathcal{P}$ around its unique inner point is minimal. Indeed, if for an
index $i$ we had two consecutive terms $(p_{i},\delta_{i})$,
$(p_{i+1},\delta_{i+1})$ with $\delta_{i}=\delta_{i+1}$, then, in the sequence
$\\{\mathcal{P}_{i}\\}_{i=0}^{r}$ of collapses, the trivial blocks for the
pattern $\mathcal{P}_{i}$ would not be maximal, since there would exist
greater trivial blocks of cardinality $p_{i}p_{i+1}$. For example, let
$\mathcal{P}$ be the rightmost pattern shown in Figure 11, which is in fact the
3-flower constructed from $S=\\{(2,1),(3,2),(2,2),(2,3)\\}$. The sequence of
collapses of $\mathcal{P}$ is _not_ $\\{\mathcal{P}_{i}\\}_{i=0}^{3}$ but
$\\{\mathcal{P}^{\prime}_{i}\\}_{i=0}^{2}$, with
$\mathcal{P}^{\prime}_{0}=\mathcal{P}_{0}$,
$\mathcal{P}^{\prime}_{1}=\mathcal{P}_{2}$ and
$\mathcal{P}^{\prime}_{2}=\mathcal{P}_{3}$. The branching sequence of
$\mathcal{P}$ around 0 is then $S^{\prime}=\\{(2,1),(6,2),(2,3)\\}$, which is
minimal.
Let $S=\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$ be a branching sequence. Assume
that $S$ is not minimal, i.e. for some $0\leq j<r$ we have that
$\delta_{j+1}=\delta_{j}$. Then we can consider a _reduced sequence_
$S^{\prime}=\\{(p^{\prime}_{i},\delta^{\prime}_{i})\\}_{i=0}^{r-1}$ defined as
$(p^{\prime}_{i},\delta^{\prime}_{i})=(p_{i},\delta_{i})$ for $0\leq i<j$,
$(p^{\prime}_{j},\delta^{\prime}_{j})=(p_{j}p_{j+1},\delta_{j})$ and
$(p^{\prime}_{i},\delta^{\prime}_{i})=(p_{i+1},\delta_{i+1})$ for $j<i\leq
r-1$. One can easily check that $S^{\prime}$ satisfies (bs1–3) and is thus a
branching sequence. The following result states that $S$ and $S^{\prime}$
generate the same flower. It follows immediately from the algorithm of
construction of $\mathcal{F}(S)$.
###### Lemma 7.10.
Let $S,S^{\prime}$ be branching sequences such that $S^{\prime}$ has been
reduced from $S$. Then, $\mathcal{F}(S^{\prime})=\mathcal{F}(S)$.
The process of reducing a non-minimal branching sequence
$S=\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$ can be iterated as many times as
necessary in order to finally obtain what we call the _sequence fully reduced
from $S$_, a minimal branching sequence
$\widehat{S}=\\{(\widehat{p}_{i},\widehat{\delta}_{i})\\}_{i=0}^{\widehat{r}}$
satisfying $\prod_{i=0}^{r}p_{i}=\prod_{i=0}^{\widehat{r}}\widehat{p}_{i}$.
One can easily check that it is unique and well defined. As a direct corollary
of Lemma 7.10, we get the following result.
###### Corollary 7.11.
Let $S$ be a branching sequence and let $\widehat{S}$ be the sequence fully
reduced from $S$. Then, $\mathcal{F}(S)=\mathcal{F}(\widehat{S})$.
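The full reduction amounts to merging consecutive terms with equal branch index; a minimal sketch (our own helper, in the notation of the `flower_branches` sketch above):

```python
def fully_reduce(S):
    # Merge consecutive terms (p_j, d) and (p_{j+1}, d) with the same
    # branch index d into (p_j * p_{j+1}, d); one left-to-right pass
    # suffices to reach a minimal sequence.
    out = []
    for p, d in S:
        if out and out[-1][1] == d:
            out[-1] = (out[-1][0] * p, d)   # one reduction step
        else:
            out.append((p, d))
    return out

S = [(2, 1), (3, 2), (2, 2), (2, 3)]
print(fully_reduce(S))   # [(2, 1), (6, 2), (2, 3)], as in Remark 7.9
```

One can check with the `flower_branches` sketch that $S$ and its full reduction produce the same branch assignment, in agreement with Corollary 7.11.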
In this section we have defined two procedures to generate a flower
(equivalently, a set of branches). The first one uses openings to get a flower
$\mathcal{F}_{x}(\mathcal{P})$ given a pattern $\mathcal{P}$ and a point $x$
of $\mathcal{P}$, while the second one constructs a flower $\mathcal{F}(S)$
given an abstract branching sequence $S$. The next lemma, which follows
immediately from the definitions and the labeling conventions of the points
and branches, states that if $S$ is precisely the branching sequence of
$\mathcal{P}$ around $x$, both flowers are the same as patterns.
###### Lemma 7.12.
Let $\mathcal{P}$ be a zero entropy pattern. Let $x$ be a point of
$\mathcal{P}$ and let $S$ be the branching sequence of $\mathcal{P}$ around
$x$. Then, $\mathcal{F}(S)=\mathcal{F}_{x}(\mathcal{P})$.
Now we are ready to use all techniques and results of this section to get the
following proposition and the subsequent corollary, which will be crucial in
the proof of Theorem D.
###### Proposition 7.13.
Let $\mathcal{P}$ be a zero entropy periodic pattern and let $x$ be a point of
$\mathcal{P}$. Let $S$ be the branching sequence of $\mathcal{P}$ around $x$
and let $\widehat{S}$ be the sequence fully reduced from $S$. Then, the
branching sequence of $\mathcal{F}_{x}(\mathcal{P})$ around $x$ is
$\widehat{S}$.
###### Proof.
By Lemma 7.12,
(3) $\mathcal{F}(S)=\mathcal{F}_{x}(\mathcal{P}).$
Let $R$ be the branching sequence of $\mathcal{F}_{x}(\mathcal{P})$ around
$x$. We want to see that $R=\widehat{S}$. Since
$\mathcal{F}_{x}(\mathcal{F}_{x}(\mathcal{P}))=\mathcal{F}_{x}(\mathcal{P})$,
using again Lemma 7.12 yields
(4) $\mathcal{F}(R)=\mathcal{F}_{x}(\mathcal{P}).$
On the other hand, by Corollary 7.11,
(5) $\mathcal{F}(S)=\mathcal{F}(\widehat{S}).$
From (3), (4) and (5) we get then that
(6) $\mathcal{F}(R)=\mathcal{F}(\widehat{S}).$
Since $\widehat{S}$ is minimal by definition of a fully reduced sequence and
$R$ is minimal by Remark 7.9, then (6) and Lemma 7.8 imply that
$R=\widehat{S}$. ∎
###### Corollary 7.14.
Let $\mathcal{P}$ and $\mathcal{Q}$ be two zero entropy $n$-periodic patterns.
Let $x$ and $y$ be inner points of $\mathcal{P}$ and $\mathcal{Q}$
respectively. Let $\widehat{S}$ and $\widehat{R}$ be the fully reduced
sequences of $\mathcal{P}$ and $\mathcal{Q}$ around $x$ and $y$ respectively.
If $\mathcal{F}_{x}(\mathcal{P})=\mathcal{F}_{y}(\mathcal{Q})$ then
$\widehat{S}=\widehat{R}$, i.e. both sequences have the same length and are
identical term by term.
## 8\. Proof of Theorem D
Recall that the hypothesis of Theorem D is that we have an $n$-periodic
pattern $\mathcal{P}$ with at least two inner points and at least three
openings. Moreover, $h(\mathcal{P})>0$ and any opening has entropy zero. Under
these conditions, we have to prove that $\mathcal{P}$ is $\pi$-reducible for
some basic path $\pi$. The name $\mathcal{O}$ that we will use to denote zero
entropy patterns in this section stands for _opening_ , in the spirit of
Theorem D.
The following is a simple remark about how the set of $x$-branches, where $x$
is a point of a pattern $\mathcal{P}$, can change after performing an opening
of $\mathcal{P}$.
###### Remark 8.1.
Let $x$ be a point of a pattern $\mathcal{P}$ and let $\mathcal{O}$ be an
opening of $\mathcal{P}$. If $\mathcal{O}$ has been obtained by joining two
discrete components not adjacent at $x$ (equivalently, the valence of $x$ in
$\mathcal{O}$ equals the valence of $x$ in $\mathcal{P}$), then
$\mathcal{F}_{x}(\mathcal{O})=\mathcal{F}_{x}(\mathcal{P})$. As an example,
consider the pattern $\mathcal{P}$ and the opening $\mathcal{O}$ shown in
Figure 5. Take $x=1$. In this case,
$\mathcal{F}_{x}(\mathcal{O})=\mathcal{F}_{x}(\mathcal{P})$ is a 2-flower
whose petals can be labeled as $\\{0,3\\}$ and $\\{0,1,2,4,5,6\\}$. On the
other hand, if $\mathcal{O}$ has been obtained by joining two discrete
components adjacent at $x$ (equivalently, the valence of $x$ in $\mathcal{O}$
is one less than the valence of $x$ in $\mathcal{P}$), then
$\mathcal{F}_{x}(\mathcal{O})$ is an opening of
$\mathcal{F}_{x}(\mathcal{P})$. As an example, take $x=5$ in the previous
example. Here $\mathcal{F}_{x}(\mathcal{P})$ is a 3-flower whose petals can be
labeled as $\\{0,3,5,6\\}$, $\\{0,2\\}$ and $\\{0,1,4\\}$, while
$\mathcal{F}_{x}(\mathcal{O})$ is a 2-flower whose petals can be labeled as
$\\{0,3,5,6\\}$ and $\\{0,1,2,4\\}$, i.e. an opening of
$\mathcal{F}_{x}(\mathcal{P})$.
Recall that the integer labels of the points of a pattern $\mathcal{P}$ are by
default preserved when performing an opening of $\mathcal{P}$. So, in the
following statement we use the same letter $x$ to refer indistinctly to a
point of a pattern and to the corresponding point of an opening.
###### Lemma 8.2.
Let $\mathcal{P}$ be an $n$-periodic pattern with positive entropy such that
any opening of $\mathcal{P}$ has entropy zero. Assume that $\mathcal{P}$ has
at least two inner points and at least three openings. Then, there exist a
point $x$ of $\mathcal{P}$ and two different openings $\mathcal{O}$ and
$\mathcal{R}$ of $\mathcal{P}$ such that:
1. (a)
$x$ is a bidirectional inner point in $\mathcal{O}$.
2. (b)
$x$ is an inner point in $\mathcal{R}$.
3. (c)
One of the following statements holds:
1. (c1)
$\mathcal{F}_{x}(\mathcal{O})=\mathcal{F}_{x}(\mathcal{R})$
2. (c2)
$\mathcal{F}_{x}(\mathcal{O})$ is an opening of
$\mathcal{F}_{x}(\mathcal{R})$.
###### Proof.
To prove the result we consider two cases.
* Case 1.
$\mathcal{P}$ has exactly two inner points.
In this case, the hypotheses imply that at least one inner point has valence
larger than 2 and that $\mathcal{P}$ has at least four different openings. Let
us consider for instance that $\mathcal{P}$ has one inner point $\alpha$ of
valence 2 and one inner point $\beta$ of valence 3. The proof can be trivially
extended to any other case. In this situation, $\mathcal{P}$ has four discrete
components, which we label by $C_{0}$, $C_{1}$, $C_{2}$ and $C_{3}$. See
Figure 12 for a representation of $\mathcal{P}$ and the three openings that we
will use below.
According to the notation in Figure 12, we consider $\mathcal{O}$ to be the
opening of $\mathcal{P}$ corresponding to the union $C_{0}\cup C_{2}$. The
pattern $\mathcal{O}$ is a triple chain with two inner points $\alpha$ and
$\beta$. Let $x$ be a bidirectional inner point of $\mathcal{O}$, which exists
by Proposition 7.3. Then, (a) holds. Consider now a relabeling of the points
of $\mathcal{P}$ (and, in consequence, of $\mathcal{O}$) such that $x=0$. We
have now two possibilities.
Figure 12. Three possible openings for case 1 in the proof of Lemma 8.2.
If $0=\alpha$ then we take $\mathcal{R}$ as the opening $\mathcal{O}^{\prime}$
corresponding to the union $C_{0}\cup C_{3}$. So, (b) is satisfied. Moreover,
neither $\mathcal{O}$ nor $\mathcal{R}$ has been formed by joining together
discrete components adjacent to 0. It follows that the valence of 0 in
$\mathcal{P}$, $\mathcal{O}$ and $\mathcal{R}$ is the same and (c1) follows
from Remark 8.1.
If $0=\beta$ then we take $\mathcal{R}$ as the opening
$\mathcal{O}^{\prime\prime}$ corresponding to the union $C_{0}\cup C_{1}$. So,
(b) is satisfied. Moreover, $\mathcal{R}$ has only one inner point, $\beta=0$.
In this case the valence of $0$ in $\mathcal{R}$ equals the valence of $0$ in
$\mathcal{P}$ and it is one larger than in $\mathcal{O}$. Thus, (c2) follows
from Remark 8.1.
* Case 2.
$\mathcal{P}$ has at least three inner points.
Let $\mathcal{O}$ be an arbitrary opening of $\mathcal{P}$. Let $x$ be a
bidirectional inner point of $\mathcal{O}$, which exists by Proposition 7.3.
Then, (a) holds. Consider now a relabeling of the points of $\mathcal{P}$
(and, in consequence, of $\mathcal{O}$) such that $x=0$. Let $\alpha\neq 0$
and $\beta\neq 0$ be two different inner points of $\mathcal{P}$.
If $\mathcal{O}$ has been obtained by joining two discrete components adjacent
to $\alpha$, then we choose $\mathcal{R}$ as any opening obtained by joining
two discrete components adjacent to $\beta$. In this case, the valence of $0$
is the same in the three patterns $\mathcal{P}$, $\mathcal{O}$ and
$\mathcal{R}$ (See Figure 13). Thus, (b) holds and, by Remark 8.1, (c1) is
also satisfied.
Figure 13. Illustration of case 2 (first subcase) in the proof of Lemma 8.2.
Figure 14. Illustration of case 2 (second subcase) in the proof of Lemma 8.2.
Finally, if $\mathcal{O}$ has been formed by joining two discrete components
adjacent to $0$, then we choose $\mathcal{R}$ as an opening obtained by
joining two discrete components adjacent to $\beta$. In this case, the valence
of $0$ in $\mathcal{P}$ and $\mathcal{R}$ is the same and one larger than the
valence of $0$ in $\mathcal{O}$. In particular, the valence of $0$ in
$\mathcal{P}$ and $\mathcal{R}$ is larger than two. Thus, (b) holds and, by
Remark 8.1, (c2) is satisfied (see Figure 14). ∎
To prove Theorem D, we will use branching sequences in the two situations (c1)
and (c2) given by Lemma 8.2(c). To deal with (c2), we need to relate the
branching sequences of both a flower $\mathcal{F}$ and an opening
$\mathcal{F}^{\prime}$ of $\mathcal{F}$.
###### Lemma 8.3.
Let $\mathcal{F}$ be a zero entropy periodic $\nu$-flower and let
$R=\\{(q_{i},\kappa_{i})\\}_{i=0}^{t}$ be the branching sequence of
$\mathcal{F}$ around its unique inner point $x$. Let $\mathcal{F}^{\prime}$ be
an opening of $\mathcal{F}$ obtained by joining two discrete components
corresponding to two $x$-branches labeled as $j_{1},j_{2}$, with $1\leq
j_{1}<j_{2}\leq\nu$. Set
$R^{\prime}:=\\{(q_{i},\kappa^{\prime}_{i})\\}_{i=0}^{t}$, with
$\kappa^{\prime}_{i}$ defined as
$\kappa^{\prime}_{i}=\left\\{\begin{array}[]{lcl}\kappa_{i}&\mbox{if}&\kappa_{i}<j_{2}\\\
j_{1}&\mbox{if}&\kappa_{i}=j_{2}\\\
\kappa_{i}-1&\mbox{if}&\kappa_{i}>j_{2}\end{array}\right.$
Then, $R^{\prime}$ is a branching sequence and the sequence fully reduced from
$R^{\prime}$ is the branching sequence of $\mathcal{F}^{\prime}$ around $x$.
###### Proof.
It is easy to check directly from the definition of $R^{\prime}$ that
properties (bs1–3) satisfied by $R$ are inherited by $R^{\prime}$. Thus,
$R^{\prime}$ is a branching sequence. By checking the steps of the algorithm
of construction of the flower $\mathcal{F}(R^{\prime})$, one easily gets that
$\mathcal{F}(R^{\prime})=\mathcal{F}^{\prime}$.
Let $\widehat{R^{\prime}}$ be the sequence fully reduced from $R^{\prime}$. By
Corollary 7.11,
$\mathcal{F}^{\prime}=\mathcal{F}(R^{\prime})=\mathcal{F}(\widehat{R^{\prime}})$.
Let $B$ be the branching sequence of $\mathcal{F}^{\prime}$ around its unique
inner point $x$. We want to see that $B=\widehat{R^{\prime}}$. Since
$\mathcal{F}_{x}(\mathcal{F}^{\prime})=\mathcal{F}^{\prime}$, Lemma 7.12
yields $\mathcal{F}(B)=\mathcal{F}^{\prime}$. Therefore,
$\mathcal{F}(B)=\mathcal{F}(\widehat{R^{\prime}})$. Since
$\widehat{R^{\prime}}$ is minimal by definition of a fully reduced sequence
and $B$ is minimal by Remark 7.9, the previous equality and Lemma 7.8 imply
$B=\widehat{R^{\prime}}$. ∎
To illustrate Lemma 8.3, let $\mathcal{F}$ be the 4-flower shown in Figure 15.
The discrete components (equivalently, the 0-branches) of $\mathcal{F}$ are
$Z_{1}=\\{0,1,3,5,7,9,11,13,15\\}$, $Z_{2}=\\{0,2,6,10,14\\}$,
$Z_{3}=\\{0,4,12\\}$, $Z_{4}=\\{0,8\\}$. One can check that the branching
sequence of $\mathcal{F}$ around 0 is $R=\\{(2,1),(2,2),(2,3),(2,4)\\}$. Now
let $\mathcal{F}^{\prime}$ be the opening obtained by joining the discrete
components $Z_{1}$ and $Z_{3}$. The 0-branches of $\mathcal{F}^{\prime}$,
indexed according to the standing convention, are then $Y_{1}=Z_{1}\cup
Z_{3}$, $Y_{2}=Z_{2}$, $Y_{3}=Z_{4}$. The sequence $R^{\prime}$ defined in the
statement of Lemma 8.3 is $R^{\prime}=\\{(2,1),(2,2),(2,1),(2,3)\\}$, which is
minimal. According to Lemma 8.3, it is the branching sequence of
$\mathcal{F}^{\prime}$ around 0. As another example, let
$\mathcal{F}^{\prime}$ be the opening of $\mathcal{F}$ obtained by joining the
discrete components $Z_{2}$ and $Z_{3}$. In this case, the 0-branches of
$\mathcal{F}^{\prime}$ are $Y_{1}=Z_{1}$, $Y_{2}=Z_{2}\cup Z_{3}$ and
$Y_{3}=Z_{4}$. The sequence $R^{\prime}$ defined in the statement of Lemma 8.3
reads as $R^{\prime}=\\{(2,1),(2,2),(2,2),(2,3)\\}$ and its fully reduced
sequence $\\{(2,1),(4,2),(2,3)\\}$ is the branching sequence of
$\mathcal{F}^{\prime}$ around 0.
Figure 15. A 16-periodic 4-flower with entropy zero.
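The relabeling and full reduction in Lemma 8.3 are easy to mechanize. The following Python sketch (ours, purely illustrative) implements the map $\kappa_{i}\mapsto\kappa^{\prime}_{i}$ from the statement of the lemma together with a full reduction, under the assumption, consistent with the examples just given, that reducing merges each maximal run of consecutive terms sharing the same second coordinate into a single term whose first coordinate is the product over the run.

```python
from itertools import groupby

def relabel(R, j1, j2):
    """Relabeling of Lemma 8.3: branching sequence after joining the
    x-branches labeled j1 < j2."""
    def new_label(k):
        if k < j2:
            return k
        if k == j2:
            return j1
        return k - 1
    return [(q, new_label(k)) for q, k in R]

def fully_reduce(R):
    """Full reduction, assuming (as in the examples in the text) that each
    maximal run of consecutive terms with the same second coordinate merges
    into one term whose first coordinate is the product over the run."""
    out = []
    for label, run in groupby(R, key=lambda t: t[1]):
        prod = 1
        for q, _ in run:
            prod *= q
        out.append((prod, label))
    return out

# The 4-flower of Figure 15: R = {(2,1),(2,2),(2,3),(2,4)}.
R = [(2, 1), (2, 2), (2, 3), (2, 4)]
# Joining Z_2 and Z_3 (labels 2 and 3) gives R' = {(2,1),(2,2),(2,2),(2,3)},
# whose fully reduced sequence is {(2,1),(4,2),(2,3)}, as in the example.
Rp = relabel(R, 2, 3)
assert Rp == [(2, 1), (2, 2), (2, 2), (2, 3)]
assert fully_reduce(Rp) == [(2, 1), (4, 2), (2, 3)]
```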
We are now in a position to prove Theorem D.
###### Proof of Theorem D.
Let $\mathcal{O}$ and $\mathcal{R}$ be the two openings of $\mathcal{P}$ given
by Lemma 8.2, let $S=\\{(p_{i},\delta_{i})\\}_{i=0}^{r}$ and
$R=\\{(q_{i},\kappa_{i})\\}_{i=0}^{t}$ be the corresponding branching
sequences around $x$, and let
$\widehat{S}=\\{(\widehat{p}_{i},\widehat{\delta}_{i})\\}_{i=0}^{\widehat{r}}$
and
$\widehat{R}=\\{(\widehat{q}_{i},\widehat{\kappa}_{i})\\}_{i=0}^{\widehat{t}}$
be the sequences fully reduced, respectively, from $S$ and $R$. From the
definition of a reduced sequence,
(7) $\widehat{q}_{\widehat{t}}=q_{t-j}q_{t-j+1}\cdots q_{t-1}q_{t}\mbox{ for
some }j\geq 0.$
On the other hand, since $x$ is bidirectional in $\mathcal{O}$, by
Remark 7.6, $\delta_{r-1}\neq\delta_{r}$. Therefore, using again the
definition of a reduced sequence we get
(8)
$(\widehat{p}_{\widehat{r}},\widehat{\delta}_{\widehat{r}})=(p_{r},\delta_{r}).$
We claim that $q_{t}$ divides $p_{r}$. To prove this claim we will consider
the two cases produced by Lemma 8.2(c).
Assume first that Lemma 8.2(c1) holds. Then, by Corollary 7.14, $\widehat{S}$
and $\widehat{R}$ are identical term by term. In particular,
$\widehat{q}_{\widehat{t}}=\widehat{p}_{\widehat{r}}$, which is equal to
$p_{r}$ by (8). Thus, (7) implies that $q_{t}$ divides $p_{r}$, as claimed.
Assume now that Lemma 8.2(c2) holds. From Proposition 7.13 we have that
$\widehat{S}$ is the branching sequence of the flower
$\mathcal{F}^{\prime}:=\mathcal{F}_{x}(\mathcal{O})$ and $\widehat{R}$ is the
branching sequence of the flower $\mathcal{F}:=\mathcal{F}_{x}(\mathcal{R})$.
Since $\mathcal{F}^{\prime}$ is an opening of $\mathcal{F}$, Lemma 8.3 tells
us that
$\widehat{S}=\\{(\widehat{p}_{i},\widehat{\delta}_{i})\\}_{i=0}^{\widehat{r}}$
has been obtained from
$\widehat{R}=\\{(\widehat{q}_{i},\widehat{\kappa}_{i})\\}_{i=0}^{\widehat{t}}$
in two steps. First, we consider a sequence
$\widehat{R}^{\prime}=\\{(\widehat{q}_{i},\widehat{\kappa}^{\prime}_{i})\\}_{i=0}^{\widehat{t}}$
and then fully reduce it to obtain
$\\{(\widehat{p}_{i},\widehat{\delta}_{i})\\}_{i=0}^{\widehat{r}}$. Again the
definition of a reduction implies that
$\widehat{p}_{\widehat{r}}=\widehat{q}_{\widehat{t}-\ell}\widehat{q}_{\widehat{t}-\ell+1}\cdots\widehat{q}_{\widehat{t}-1}\widehat{q}_{\widehat{t}}\mbox{
for some }\ell\geq 0.$
The previous equality and (7) imply that $q_{t}$ divides $p_{r}$ also in this
case. In consequence, the claim is proved.
To conclude, we claim that the divisibility of $p_{r}$ by $q_{t}$ implies the
$\pi$-reducibility of $\mathcal{P}$. We recall that $p_{r}$ and $q_{t}$ are
the cardinalities of the trivial blocks in the respective maximal structures
of $\mathcal{O}$ and $\mathcal{R}$ given by Proposition 6.3. Relabel if
necessary the points of $\mathcal{P}$ in such a way that $x=0$. The inner
point $0$ belongs to the block of $\mathcal{O}$
$O_{0}=\\{0,\tfrac{n}{p_{r}},\tfrac{2n}{p_{r}},\dots,\tfrac{(p_{r}-1)n}{p_{r}}\\}.$
By Proposition 3.7, $\mathcal{O}$ is $\pi$-reducible for any basic path $\pi$
contained in $O_{0}$. On the other hand, the inner point $0$ belongs to the
block of $\mathcal{R}$
$R_{0}=\\{0,\tfrac{n}{q_{t}},\tfrac{2n}{q_{t}},\dots,\tfrac{(q_{t}-1)n}{q_{t}}\\}.$
Again, $\mathcal{R}$ is $\pi$-reducible for any basic path $\pi$ contained in
$R_{0}$. Since $q_{t}$ divides $p_{r}$, the point $\frac{n}{q_{t}}$ belongs to
$O_{0}\cap R_{0}$. Take $\pi:=\\{0,\frac{n}{q_{t}}\\}$. Note that $\pi$ is a
basic path in $\mathcal{P}$, $\mathcal{O}$ and $\mathcal{R}$. Moreover, $\pi$
never splits in both $\mathcal{O}$ and $\mathcal{R}$. Since all inner points
of $\mathcal{P}$ are inner points either in $\mathcal{O}$ or in $\mathcal{R}$,
it follows that $\pi$ never splits in $\mathcal{P}$. ∎
## 9\. $k$-Flowers
Following the sketch of the proof of Theorem A outlined in Section 5, we have
to deal now with the special case of patterns with only one inner point and
$k\geq 2$ discrete components. When $k=2$, the following result (Theorem 5.2
of [8]) does the job.
###### Theorem 9.1.
Let $\mathcal{P}$ be an $n$-periodic pattern with two discrete components. If
$h(\mathcal{P})>0$, then $h(\mathcal{P})\geq\log(\lambda_{n})$.
For $k\geq 3$, and in the spirit of the proof by induction outlined in Section
5, we need to relate our pattern of period $n$ with another pattern with
period less than $n$ and positive entropy. So, let $\mathcal{P}=([T,P],[f])$
be an $n$-periodic pattern. A pattern $\mathcal{P}^{\prime}$ will be said to
be _subordinated to_ $\mathcal{P}$ if, for some divisor $p$ of $n$ with $1<p<n$, there is
an $(n/p)$-periodic orbit $P^{\prime}\subset P$ of $f^{p}$ such that
$\mathcal{P}^{\prime}=([\langle
P^{\prime}\rangle_{T},P^{\prime}],[f^{p}\bigr{\rvert}_{P^{\prime}}])$.
Clearly, this definition is independent of the particular model $(T,P,f)$
representing $\mathcal{P}$.
The following result is Lemma 9.1 of [7]. It allows us to estimate the entropy
of a pattern from the entropy of a subordinated pattern.
###### Lemma 9.2.
Let $\mathcal{P}$ be an $n$-periodic pattern. Let $\mathcal{P}^{\prime}$ be an
$n^{\prime}$-periodic pattern subordinated to $\mathcal{P}$. If
$h(\mathcal{P}^{\prime})\geq\log(\lambda_{n^{\prime}})$ then
$h(\mathcal{P})>\log(\lambda_{n})$.
A discrete component of a pattern will be said to be _extremal_ if it contains
only one inner point. As an example, the discrete components $A$, $B$ and $D$
are extremal for the pattern $\mathcal{P}$ shown in Figure 5.
Let $(T,P,f)$ be a model of a periodic pattern $\mathcal{P}$. Let $C$ be a
discrete component of $(T,P)$. We will say that a point $x\in C$ _escapes from
$C$_ if $f(x)$ does not belong to the connected component of
$T\setminus\\{x\\}$ that intersects $\operatorname{Int}(\langle C\rangle)$.
Any discrete component $C$ of $(T,P)$ without points escaping from it will be
called a _scrambled component_ of $\mathcal{P}$. Clearly, this notion does not
depend on the particular chosen model of $\mathcal{P}$. So, it makes sense to
say that the pattern $\mathcal{P}$ _has a scrambled component_. As an example,
the point 7 escapes from $\\{1,7,13\\}$ in the 18-periodic pattern
$\mathcal{P}_{2}$ shown in Figure 8, while it does not escape from
$C:=\\{0,3,5,7,9,11,15,17\\}$. In fact, no point in $C$ escapes from $C$. So,
$C$ is a scrambled component for $\mathcal{P}_{2}$. It is easy to see that
every periodic pattern has scrambled components (Lemma 4.2 of [7]).
###### Theorem 9.3.
Let $\mathcal{P}$ be an $n$-periodic pattern with positive entropy and at
least three discrete components. Assume that any opening of $\mathcal{P}$ has
entropy zero. If $\mathcal{P}$ has an extremal scrambled component, then
$\mathcal{P}$ has subordinated patterns with positive entropy.
###### Proof.
Let $(T,P,f)$ be a model of $\mathcal{P}$ and let $C$ be the extremal
scrambled component of $\mathcal{P}$. Then, there is only one inner point $x$
in $C$, and $f(x)\in C$ by definition of a scrambled component. Consider a
sequence of openings that joins together all discrete components different
from $C$ into a single discrete component $D$, leading to a pattern
$\mathcal{P}^{\prime}$ with two discrete components, $C$ and $D$. Since
$h(\mathcal{P^{\prime}})=0$ by hypothesis, $\mathcal{P^{\prime}}$ has a
_division_ [7] with respect to $C$. In consequence, there exists $p\geq 2$, a
divisor of $n$, such that $f^{i}(D)\subset C$ for $1\leq i<p$ and
$f^{p}(D)=D$. In other words, $\cup_{i=0}^{p-1}P_{i}$ is a $p$-block structure
for $\mathcal{P}$, where $P_{i}:=f^{i}(D)$. Note that the blocks
$P_{1},P_{2},\ldots,P_{p-1}$ are contained in $C$ and are thus trivial.
Consider the pattern $\mathcal{Q}:=([\langle
P_{0}\rangle_{T},P_{0}],[f^{p}\bigr{\rvert}_{P_{0}}])$. Then, $\mathcal{Q}$ is
subordinated to $\mathcal{P}$. Moreover, its entropy is positive, for
otherwise the fact that all blocks but one are trivial would easily imply that
$h(\mathcal{P})=0$. ∎
When a pattern has only one inner point $x$, the discrete component containing
the image of $x$ is clearly scrambled and extremal. So, we have the next
result as an immediate consequence of Theorem 9.3.
###### Corollary 9.4.
Let $\mathcal{P}$ be a positive entropy $k$-flower, with $k\geq 3$. Assume
that any opening of $\mathcal{P}$ has entropy zero. Then, $\mathcal{P}$ has
subordinated patterns with positive entropy.
## 10\. Triple chains
The final stage in the proof of Theorem A outlined in Section 5 leaves us with
the special case of a pattern with exactly two inner points and three discrete
components, a _triple chain_. In order to find lower bounds for the entropy of
a triple chain $\mathcal{P}$, it is unavoidable to count coverings in the
$\mathcal{P}$-path graph (equivalently, entries in the path transition
matrix). This section is devoted to this task. In our context, it is assumed
that $\mathcal{P}$ is $\pi$-irreducible and each of the two possible openings
of $\mathcal{P}$ has entropy zero (property ($\star$) of Section 5). Note that any
opening of a triple chain has two discrete components. So, to obtain a lower
bound of the entropy of $\mathcal{P}$ we will proceed in two steps. First, we
will study the coverings in the path graph of zero entropy patterns with two
discrete components. This is the aim of Lemmas 10.3 and 10.6. Finally, we will
study how the previous coverings, present in the two possible openings of the
triple chain $\mathcal{P}$, imply the existence of a number of coverings in
the $\mathcal{P}$-path graph (Lemma 10.8) that forces enough entropy for our
purposes.
The results mentioned in the previous scheme are extremely technical. Readers
are advised to follow the arguments using examples, such as the ones shown in
the figures.
A basic path $\pi$ for a pattern with a separated structure of trivial blocks
will be said to be _in-block_ if it is contained in a block. Otherwise, it
will be said to be _inter-block_. As an example, $\\{1,13\\}$ is an in-block
basic path of $\mathcal{P}_{2}$ in Figure 8, while $\\{0,15\\}$ is inter-
block. The second statement of Proposition 3.7 says that an in-block path
never _splits_ (as defined on page 3.6). On the other hand, the next result
states that inter-block basic paths always split.
###### Lemma 10.1.
Let $\mathcal{P}$ be an $n$-periodic pattern with a separated structure of
trivial blocks. Then any inter-block basic path of $\mathcal{P}$ splits before
$n$ iterates.
###### Proof.
The proof strongly relies on the construction of the maximal separated
structure of trivial blocks in Proposition 9.5 of [7] and its uniqueness (see
Section 3). The construction shows that if $\mathcal{P}$ is $\sigma$-reducible
for a basic path $\sigma$, then each trivial block is obtained as the set of
endpoints of a connected component of $\cup_{i\geq 0}\langle
f^{i}(\sigma)\rangle$. In particular, $\sigma$ is contained in one block and
is thus an in-block basic path. The uniqueness of the maximal structure of
trivial blocks implies that the same is true for any basic path
$\sigma^{\prime}\neq\sigma$ such that $\mathcal{P}$ is
$\sigma^{\prime}$-reducible. Now we note that if $\pi$ does not split in $n$
iterates, then $\mathcal{P}$ is $\pi$-reducible. Then, by the previous
discussion, $\pi$ has to be in-block, a contradiction. ∎
Let $\mathcal{P}$ be a non-trivial $n$-periodic pattern with entropy zero. By
Proposition 6.3, $\mathcal{P}$ has a maximal structure of trivial blocks and
the corresponding combinatorial collapse $\mathcal{C}$ has entropy zero. Let
$x$ be a point of $\mathcal{P}$. The point of $\mathcal{C}$ corresponding to
the collapse of the block containing $x$ will be denoted by $\overline{x}$,
and this will be a standing notation throughout this section. In fact, if $x$
is contained in the block $P_{i}$, then $\overline{x}$ is precisely the point
of $\mathcal{C}$ labeled as $i$. Let $\pi=\\{x,y\\}$ be an inter-block basic
path of $\mathcal{P}$. Then, $\overline{x}\neq\overline{y}$. The binary set
$\\{\overline{x},\overline{y}\\}$ will be denoted by $\overline{\pi}$. Note
that, by property (b) of Definition 6.2, $\overline{\pi}$ is a basic path in
$\mathcal{C}$. As an example, consider the pattern $\mathcal{P}_{2}$ shown in
Figure 8. The basic paths $\pi_{1}=\\{11,8\\}$ and $\pi_{2}=\\{0,7\\}$ are
inter-block. In this case, $\overline{\pi}_{1}=\\{5,2\\}$ and
$\overline{\pi}_{2}=\\{0,1\\}$ are (respectively, in-block and inter-block)
basic paths of the combinatorial collapse $\mathcal{P}_{1}$.
The notation $\mathcal{O}$ for patterns of entropy zero used in the statements
of this section suggests, as in Section 8, the term _opening_.
###### Lemma 10.2.
Let $\mathcal{O}$ be a zero entropy periodic pattern and let $\mathcal{C}$ be
the combinatorial collapse of $\mathcal{O}$. Let $\pi$ be an inter-block basic
path of $\mathcal{O}$. If $\overline{\pi}$ splits in $\ell$ iterates on
$\mathcal{C}$, then $\pi$ splits in at most $\ell$ iterates on $\mathcal{O}$.
###### Proof.
Consider any model $(T,P,f)$ of $\mathcal{O}$ and assume that $f^{i}(\pi)$ is
a basic path for $0\leq i<\ell$. From the definition of a block structure it
follows that, for all $0\leq i<\ell$, the basic path $f^{i}(\pi)$ is inter-
block in $\mathcal{O}$. Set $\\{a,b\\}:=f^{\ell}(\pi)$. By hypothesis,
$\overline{f^{i}(\pi)}$ is a basic path in $\mathcal{C}$ for $0\leq i<\ell$,
while $\overline{a}$ and $\overline{b}$ are separated by at least one inner
point in $\mathcal{C}$. We have to see that $a$ and $b$ are also separated in
$\mathcal{O}$. Assume, by way of contradiction, that there exists a discrete
component $D$ of $\mathcal{O}$ containing $\\{a,b\\}$. In particular, the
trivial blocks $K_{a}$ and $K_{b}$ in $\mathcal{O}$ whose collapse gives
respectively the points $\overline{a}$ and $\overline{b}$ of $\mathcal{C}$
satisfy $K_{a}\cap D\neq\emptyset$ and $K_{b}\cap D\neq\emptyset$. By
definition of the combinatorial collapse, this implies that there exists a
single discrete component of $\mathcal{C}$ containing
$\\{\overline{a},\overline{b}\\}$, a contradiction. ∎
Given a pattern $\mathcal{P}=([T,P],[f])$ and two basic paths $\pi$ and
$\sigma$ of $\mathcal{P}$, we will say that $\pi$ is a _strict pre-image of_
$\sigma$ if there exists $j\geq 1$ such that $f^{i}(\pi)$ is a basic path for
$0\leq i\leq j$ and $f^{j}(\pi)=\sigma$. Note that, in this case, $f^{i}(\pi)$
are also strict pre-images of $\sigma$ for $1\leq i<j$.
The following result computes the number of iterations necessary for an inter-
block basic path to split in a zero entropy pattern with two discrete
components. At this point we recover the notation introduced in Section 2 and
write $\\{a,b\\}\rightarrow\\{c,d\\}$ to indicate that the basic path
$\\{a,b\\}$ $f$-covers the basic path $\\{c,d\\}$.
###### Proposition 10.3.
Let $\mathcal{O}=([T,P],[f])$ be a zero entropy $n$-periodic pattern with two
discrete components and a maximal structure of trivial blocks of cardinality
$q$. Let $\mathcal{C}$ be the corresponding combinatorial collapse. Assume
that $\mathcal{O}$ is labeled in such a way that 0 is the unique inner point.
Let $\pi$ be an inter-block basic path of $\mathcal{O}$. Then,
1. (i)
either $\pi$ splits in at most $\frac{n}{q}$ iterates,
2. (ii)
or it is a strict pre-image of a basic path $\sigma=\\{0,a+\frac{q-1}{q}n\\}$
with $0<a<\frac{n}{q}$. In this case, $\overline{\sigma}$ is in-block in
$\mathcal{C}$ and $\pi$ splits in at most $\frac{2n}{q}$ iterates.
If in addition $\overline{\pi}$ is in-block in $\mathcal{C}$, then the
following statements hold:
1. (a)
If $\pi=\\{0,a\\}$ with $0<a<\frac{n}{q}$, then $\pi$ splits in
$\frac{n}{q}-a$ iterates.
2. (b)
If $\pi=\\{0,a+\tfrac{q-1}{q}n\\}$ with $0<a<\frac{n}{q}$, then $\pi$ splits
in $\frac{n}{q}$ iterates.
###### Proof.
Let $\\{\mathcal{O}_{i}\\}_{i=0}^{s}$ be the sequence of collapses of
$\mathcal{O}$ according to Remark 6.4 and let $q_{i}$ be the cardinality of
the blocks of $\mathcal{O}_{i}$. Since $\mathcal{O}$ is not trivial, $s\geq
1$. The proof proceeds in two steps. First, we prove the result for a sequence
of collapses of length $s=1$. Then we tackle the general case $s>1$ using the
case $s=1$ on a particular subordinated pattern of $\mathcal{O}$.
Assume first that $s=1$ and let $q_{0}=n/q_{1}$ be the period of the
combinatorial collapse $\mathcal{C}=\mathcal{O}_{0}$, which is a trivial
pattern. The pattern $\mathcal{O}=\mathcal{O}_{1}$ is formed by
$q_{0}=n/q_{1}$ trivial blocks of $q_{1}$ points. Let us denote by $P_{i}$,
$i=0,\dots,q_{0}-1$, the trivial blocks of the pattern $\mathcal{O}$ according
to the standing convention in Remark 3.2. Notice that the block $P_{0}$,
formed by the multiples of $q_{0}$ (mod $n$), is one of the two discrete
components of $\mathcal{O}$.
Let $\pi$ be an inter-block basic path of $\mathcal{O}$. Since $\mathcal{C}$
is trivial, $\overline{\pi}$ is in-block in $\mathcal{C}$. The labeling of
$\mathcal{O}$ is fixed by the unique inner point. So, we can write the points
in $P_{i}$ as $i+\ell q_{0}$ with $0\leq\ell\leq q_{1}-1$. The point
$i+(q_{1}-1)q_{0}$ will be called the last point in $P_{i}$. The inter-block
basic path $\pi$ connects a point of the block $P_{i}$ with a point of the
block $P_{j}$. Along the proof we consider that the blocks are ordered in such
a way that $0\leq i<j\leq q_{0}-1$. We distinguish three types of inter-block
basic paths of $\mathcal{O}$ depending on the points that are connected.
Figure 16. The three types of paths in Proposition 10.3.
Type I. $\pi$ connects any point of $P_{i}$ with one of $P_{j}$ that is not
the last one, $0\leq i<j\leq q_{0}-1$. In this situation, we can write
$\pi=\\{i+\ell q_{0},j+rq_{0}\\}$ with $0\leq\ell\leq q_{1}-1$ and $0\leq
r\leq q_{1}-2$.
Type II. $\pi$ connects a point of the block $P_{i}$ that is not the last one
with the last point of $P_{j}$, $0\leq i<j\leq q_{0}-1$. In this case,
$\pi=\\{i+\ell q_{0},j+(q_{1}-1)q_{0}\\}$ with $0\leq\ell\leq q_{1}-2$.
Type III. $\pi$ connects the last points of the blocks $P_{i}$ and $P_{j}$,
$1\leq i<j\leq q_{0}-1$. In this latter case,
$\pi=\\{i+(q_{1}-1)q_{0},j+(q_{1}-1)q_{0}\\}$.
Since $\pi$ is an inter-block basic path, if $i=0$ for Type I and II then
$\ell=0$. That is, only the point $0\in P_{0}$ can be connected to a point of
a different trivial block. For this reason, $i\geq 1$ in Type III.
Notice that an inter-block basic path $\pi=\\{a,b\\}$ splits in $k$ iterates
if $k$ is the smallest integer such that either $a+k$ or $b+k$ is a multiple
of $q_{0}$ different from $0$ (mod $n$). Indeed, since $\pi$ is inter-block,
$a+k$ and $b+k$ cannot both be multiples of $q_{0}$. Otherwise, $f^{k}(\pi)$ is
a basic path joining two points of $P_{0}$ and is, therefore, in-block. Since
$0$ is the only inner point and $P_{0}$ is a whole discrete component, the
previous condition implies that $a+k$ and $b+k$ are on different discrete
components. Taking this into account, we now compute the number of iterates
that an inter-block basic path of each type requires to split.
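This criterion is straightforward to implement. The following Python sketch (ours, for illustration) computes the splitting time of an inter-block basic path in the present setting ($s=1$) directly from the characterization above, and checks it against the Type II and Type III counts derived below.

```python
def split_time(a, b, q0, n):
    """Iterates needed for the inter-block basic path {a, b} to split in a
    zero entropy pattern with two discrete components (case s = 1): the
    smallest k >= 1 such that (a + k) mod n or (b + k) mod n is a multiple
    of q0 different from 0 (mod n)."""
    for k in range(1, n + 1):
        for point in (a, b):
            m = (point + k) % n
            if m != 0 and m % q0 == 0:
                return k
    return None  # unreachable for inter-block basic paths

# n = 16, q0 = 4, q1 = 4.
# Type II with i = l = 0, j = 2: pi = {0, j + (q1-1)*q0} = {0, 14}
# splits in q0 - i = 4 iterates.
assert split_time(0, 14, q0=4, n=16) == 4
# Type III with i = 1, j = 2: pi = {13, 14} splits in 2*q0 - j = 6 iterates.
assert split_time(13, 14, q0=4, n=16) == 6
```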
If $\pi$ is of Type I, then it splits in $q_{0}-j$ iterates:
$\pi\rightarrow\overset{q_{0}-j)}{\cdots}\rightarrow\\{i+q_{0}-j+\ell
q_{0},0\\}\cup\\{0,(r+1)q_{0}\\}.$
Indeed, since $0\leq i<j\leq q_{0}-1$, the point $j+rq_{0}$ reaches the point
$(r+1)q_{0}$ in $q_{0}-j$ iterates, whereas $i+\ell q_{0}$ needs
$q_{0}-i>q_{0}-j$ iterates to reach a multiple of $q_{0}$. Since $0\leq r\leq
q_{1}-2$, we have $(r+1)q_{0}\neq 0$ (mod $n$), so the splitting occurs.
If $\pi$ is of Type II, then it splits in $q_{0}-i$ iterates:
$\pi\rightarrow\overset{q_{0}-i)}{\cdots}\rightarrow\\{(\ell+1)q_{0},0\\}\cup\\{0,j-i\\}.$
Indeed, in this case, although $i<j$ and $j+(q_{1}-1)q_{0}$ reaches a multiple
of $q_{0}$ in $q_{0}-j$ iterates, the multiple is $q_{1}q_{0}=n=0$ (mod $n$).
Therefore, there is no splitting in $q_{0}-j$ iterates. On the other hand, in
$q_{0}-i$ iterates the splitting occurs.
Finally, if $\pi$ is of Type III, then it splits in $2q_{0}-j$ iterates.
Indeed, in $q_{0}-j$ iterates:
$\pi\rightarrow\overset{q_{0}-j)}{\cdots}\rightarrow\\{i+q_{0}-j+(q_{1}-1)q_{0},0\\}.$
The basic path $\\{i+q_{0}-j+(q_{1}-1)q_{0},0\\}$ is of Type II with $i=0$.
So, it splits in $q_{0}$ iterates. Summing up, $\pi$ splits in $2q_{0}-j$
iterates.
The previous discussion proves the result for $s=1$. Indeed, every inter-block
basic path splits in at most $q_{0}=\frac{n}{q_{1}}$ iterates with the
exception of the strict pre-images of $\\{0,a+(q_{1}-1)q_{0}\\}$ with
$0<a=i+q_{0}-j<q_{0}$, which split in $2q_{0}-j<\frac{2n}{q_{1}}$ iterates.
Moreover, the case (a) corresponds to a Type I basic path by taking
$i=\ell=0$, $j=a$ and $r=0$, so $\pi$ splits in $q_{0}-a=n/q_{1}-a$ iterates.
Taking $i=\ell=0$ and $j=a$ on a Type II basic path,
$\pi=\\{0,a+(q_{1}-1)q_{0}\\}$ splits in $q_{0}=n/q_{1}$ iterates, proving
(b). In Figure 16 we show examples of each type for $n=16$ and $q_{0}=4$.
Figure 17. Notation in the proof of Proposition 10.3.
Assume now that the sequence of collapses of $\mathcal{O}$ has length $s>1$.
Set $\mathcal{C}:=\mathcal{O}_{s-1}$ and $\mathcal{O}:=\mathcal{O}_{s}$.
Moreover, each pattern $\mathcal{O}_{i}$ for $1\leq i\leq s$ has a unique
inner point, labeled as $0$ according to Remark 3.3. Let $q_{0}$ be the period
of $\mathcal{O}_{0}$ and, for $1\leq i\leq s$, let $q_{i}$ be the cardinality
of the blocks of $\mathcal{O}_{i}$. Then, $n=\prod_{i=0}^{s}q_{i}$. According
to the notation in the statement, $q_{s}=q$. The pattern $\mathcal{O}$ has a
maximal structure of $n/q$ trivial separated blocks of $q$ points, and the
pattern $\mathcal{C}$ has a maximal structure of $n/(q_{s-1}q)$ trivial blocks
of cardinality $q_{s-1}$. Let us denote by $S_{i}$ and $P_{i}$ the blocks of
the patterns $\mathcal{C}$ and $\mathcal{O}$, respectively. Since 0 is the
unique inner point in $\mathcal{O}$, from Proposition 7.3 it follows that 0 is
bidirectional. Then, the trivial block $P_{0}$ of $\mathcal{O}$ and the
trivial block $S_{0}$ of $\mathcal{C}$ are contained in different 0-branches.
See Figure 17 for an example with $s=2$, $q_{0}=3$, $q_{1}=4$, $q=q_{2}=2$,
$n=24$.
Let $\pi$ be an inter-block basic path of $\mathcal{O}$. If $\overline{\pi}$
is inter-block in $\mathcal{C}$, by Lemma 10.1 $\overline{\pi}$ splits before
$n/q$ iterates. By Lemma 10.2, this property is inherited by $\pi$, which
splits in at most $n/q$ iterates, as desired.
Let us assume now that $\pi$ is an inter-block basic path of $\mathcal{O}$
such that $\overline{\pi}$ is in-block in $\mathcal{C}$. That is,
$\overline{\pi}$ is contained in $S_{\eta}$ for some
$\eta\in\\{0,\dots,\frac{n}{q_{s-1}q}-1\\}$. Note that all points in a block
of $\mathcal{C}$ differ by a multiple of $n/(q_{s-1}q)$, while all points in a
block of $\mathcal{O}$ differ by a multiple of $n/q$. It follows that
$\overline{\pi}$ has the form
$\overline{\pi}=\\{\overline{a},\overline{b}\\}=\\{\eta+i\tfrac{n}{q_{s-1}q},\eta+j\tfrac{n}{q_{s-1}q}\\}$
with $0\leq\eta\leq\frac{n}{q_{s-1}q}-1$ and $0\leq i<j\leq q_{s-1}-1$, while
$\pi$ has the form
$\pi=\\{a,b\\}=\\{\overline{a}+\ell\tfrac{n}{q},\overline{b}+r\tfrac{n}{q}\\}$
with $0\leq\ell,r\leq q-1$. For the sake of intuition, note that $\eta$ labels
the block $S_{\eta}$ of $\mathcal{C}$ containing $\overline{\pi}$, while the
blocks of $\mathcal{O}$ containing $a$ and $b$ are, respectively,
$P_{\overline{a}}$ and $P_{\overline{b}}$. Going back to the example shown in
Figure 17, if we take $\pi=\\{16,22\\}$, then $\overline{\pi}=\\{4,10\\}$,
$\eta=1$, $i=1$, $j=3$, $\overline{a}=4$, $\overline{b}=10$, $\ell=r=1$.
Let us study the iterates $f^{i}(\pi)$. Since 0 is the unique inner point in
$\mathcal{O}$, a pair $\\{x,y\\}$ of points of $\mathcal{O}$ is not a basic
path if and only if 0 separates $x$ and $y$. Since $\overline{\pi}$ is in-
block in $\mathcal{C}$, it never splits by Proposition 3.7. Moreover, $0\in
P_{0}$. It follows that a basic path $\pi$ may split in $k$ iterates only if
$\overline{f^{k}(\pi)}\subset S_{0}$. Let $\pi_{0}$ be the first iterate of
$\pi$ such that $\overline{\pi}_{0}\subset S_{0}$. Then,
$\pi_{0}=\\{\tfrac{n}{q_{s-1}q}(i+\ell
q_{s-1}),\tfrac{n}{q_{s-1}q}(j+rq_{s-1})\\}$
for some $0\leq i<j\leq q_{s-1}-1$ and $0\leq\ell,r\leq q-1$. Let us look at
the worst-case scenario by assuming that $\pi_{0}$ is a basic path. That is to
say, we have the sequence of non-splitting coverings
$\pi\rightarrow f(\pi)\rightarrow
f^{2}(\pi)\rightarrow\overset{\frac{n}{q_{s-1}q}-\eta)}{\cdots}\rightarrow\pi_{0}.$
In order to bound the number of iterates required by $\pi$ to split, we study
$\pi_{0}$. Let us consider the subordinated pattern
$\mathcal{O}^{\prime}:=([\langle
P_{0}\rangle_{T},P_{0}],[f^{\frac{n}{q_{s-1}q}}])$. Note that
$\mathcal{O}^{\prime}$ has two discrete components, entropy zero and a maximal
structure of $q_{s-1}$ trivial blocks given by
$P_{0}\cup P_{\frac{n}{q_{s-1}q}}\cup\ldots\cup
P_{\frac{(q_{s-1}-1)n}{q_{s-1}q}}.$
Moreover, the corresponding combinatorial collapse $\mathcal{C}^{\prime}$ is a
trivial pattern of period $q_{s-1}$. In other words, the sequence of collapses
of $\mathcal{O}^{\prime}$ reduces to
$\\{\mathcal{C}^{\prime},\mathcal{O}^{\prime}\\}$ and thus we can apply the
discussion about types of basic paths and coverings used in the case $s=1$.
Let us take the labeling of $\mathcal{O}^{\prime}$ such that the only inner
point is labeled 0. See Figure 18 for a picture of the patterns
$\mathcal{C}^{\prime}$ and $\mathcal{O}^{\prime}$ corresponding to the example
shown in Figure 17.
Figure 18. The subordinated pattern $\mathcal{O}^{\prime}$ and its collapse
$\mathcal{C}^{\prime}$ for the example shown in Figure 17.
Notice that there is a correspondence between the basic path $\pi_{0}$ in
$\mathcal{O}$ and the basic path $\\{i+\ell q_{s-1},j+rq_{s-1}\\}$ in
$\mathcal{O}^{\prime}$. Since $\pi_{0}$ may only split when
$\overline{\pi}_{0}$ returns to $S_{0}$, it suffices to study the number of
iterates required by $\\{i+\ell q_{s-1},j+rq_{s-1}\\}$ to split in
$\mathcal{O}^{\prime}$ and then multiply the length of the sequence of paths
by $\frac{n}{q_{s-1}q}$. As stated in the discussion of the case $s=1$,
we have three situations depending on the type of path.
If $\\{i+\ell q_{s-1},j+rq_{s-1}\\}$ in $\mathcal{O}^{\prime}$ is of Type I
then $\pi_{0}$ splits in
$\frac{n}{q_{s-1}q}(q_{s-1}-j)=\frac{n}{q}-\frac{j}{q_{s-1}q}n$ iterates in
$\mathcal{O}$. Taking $i=\ell=r=0$ and $a=\frac{j}{q_{s-1}q}n$, this proves
(a).
If $\\{i+\ell q_{s-1},j+rq_{s-1}\\}$ in $\mathcal{O}^{\prime}$ is of Type II
then $r=q-1$ and $\pi_{0}$ splits in
$\frac{n}{q_{s-1}q}(q_{s-1}-i)=\frac{n}{q}-\frac{i}{q_{s-1}q}n$ iterates in
$\mathcal{O}$. Taking $i=\ell=0$ and $a=\frac{j}{q_{s-1}q}n$, this proves (b).
Lastly, if $\\{i+\ell q_{s-1},j+rq_{s-1}\\}$ in $\mathcal{O}^{\prime}$ is of
Type III then $\ell=r=q-1$ and $\pi_{0}$ splits in
$\frac{n}{q_{s-1}q}(2q_{s-1}-j)=\frac{2n}{q}-\frac{j}{q_{s-1}q}n$ iterates in
$\mathcal{O}$, and it is a strict pre-image of the basic path
$\\{0,\frac{n}{q_{s-1}q}(i+q_{s-1}-j+(q-1)q_{s-1})\\}$.
The above holds for $\pi_{0}$. In order to bound the number of iterates
required by $\pi$ to split, we add $\frac{n}{q_{s-1}q}-\eta$ to the previous
counts. So, depending on the type above, for
$1\leq\eta\leq\frac{n}{q_{s-1}q}-1$, either
* •
$\pi$ splits in $\frac{n}{q}-\frac{j-1}{q_{s-1}q}n-\eta$ iterates, or
* •
$\pi$ splits in $\frac{n}{q}-\frac{i-1}{q_{s-1}q}n-\eta$ iterates, or
* •
$\pi$ splits in $\frac{2n}{q}-\frac{j-1}{q_{s-1}q}n-\eta$ iterates and is a
strict pre-image of
$\\{0,\tfrac{n}{q_{s-1}q}(i+q_{s-1}-j+(q-1)q_{s-1})\\}.$
This proves that every inter-block basic path of $\mathcal{O}$ splits after at
most $2n/q$ iterates. Moreover, the only inter-block basic paths splitting in
more than $n/q$ iterates are strict pre-images of some
$\\{0,a+\frac{q-1}{q}n\\}$, with
$0<a=\frac{n}{q_{s-1}q}(i+q_{s-1}-j)<\frac{n}{q}$, proving the result. ∎
The previous result states that almost every inter-block basic path of a zero
entropy pattern with two discrete components splits in at most $n/q$ iterates
with the exception of those considered in (ii). The following results are
concerned with the bound for the latter case. The first result states that the
“time reverse” of a zero entropy pattern $\mathcal{O}$ with two discrete
components coincides with $\mathcal{O}$. Figure 19 shows an example that
illustrates this remarkable property, which is not true for general zero
entropy patterns. It is possible to prove it using sequences of collapses and
Proposition 6.1, but we use a result from [8] to get a considerably shorter
proof.
Figure 19. A pattern $\mathcal{O}$ and its time reverse $\mathcal{Q}$ as
defined in Lemma 10.4.
###### Lemma 10.4.
Let $(T,P,f)$ be the canonical model of an $n$-periodic pattern $\mathcal{O}$
with entropy zero and two discrete components. Let $P=\\{x_{i}\\}_{i=0}^{n-1}$
be time labeled. Consider the relabeling of $P$ given by $y_{i}:=x_{n-i\bmod
n}$ and the map $g\colon P\longrightarrow P$ defined by $g(y_{i}):=y_{i+1\bmod
n}$ for $0\leq i<n$. Then, $([T,P],[g])=\mathcal{O}$.
###### Proof.
Assume without loss of generality that $x_{0}$ is the unique inner point of
$\mathcal{O}$. From the definitions we get that $P$ is an $n$-periodic orbit
of $g$, time labeled as $P=\\{y_{i}\\}_{i=0}^{n-1}$. Thus, $([T,P],[g])$ is an
$n$-periodic pattern $\mathcal{Q}$. By definition, $y_{0}=x_{0}$, so that
$y_{0}$ is the only inner point of $\mathcal{Q}$. To see that
$\mathcal{O}=\mathcal{Q}$ we have to show that both patterns have the same
discrete components.
For any tree map $F\colon S\longrightarrow S$, an ordered set $(a,b,c)$ of
three points of $S$ is called a _forward triplet of $F$_ if $b\in(a,c)$,
$F(a)=b$, $F(b)=c$, and $\\{a,b,c\\}$ is contained in a periodic orbit of $F$.
By Theorem 1.1 of [8], $F$ has positive entropy if and only if there exists
$k\geq 1$ such that $F^{k}$ has a forward triplet. Thus, since
$h(f)=h(\mathcal{O})=0$, $f$ cannot have forward triplets. It easily follows
that both $x_{i}$ and $x_{n-i}$ belong to the same discrete component of
$\mathcal{O}$ for all $1\leq i<n$. But
$\\{x_{i},x_{n-i}\\}=\\{y_{n-i},y_{i}\\}$, implying that both $\mathcal{O}$
and $\mathcal{Q}$ have exactly the same discrete components. ∎
###### Lemma 10.5.
The basic path $\sigma=\\{0,a+\frac{q-1}{q}n\\}$ with $0<a<\frac{n}{q}$ in
(ii) of Proposition 10.3 has at most $a-1$ strict pre-images. Moreover, a
basic path $\pi=\\{0,y\\}$ cannot be a strict pre-image of $\sigma$.
###### Proof.
By Lemma 10.4, the pattern $\mathcal{O}$ coincides with its time reverse. In
particular, the basic path $\\{0,a+\frac{q-1}{q}n\\}$ has as many strict
pre-images as basic paths covered by $\\{0,\frac{n}{q}-a\\}$ before it splits. The
basic path $\sigma$ is inter-block and $\overline{\sigma}$ is in-block in the
corresponding combinatorial collapse $\mathcal{C}$ of $\mathcal{O}$.
Therefore, the same is true for the basic path $\\{0,\frac{n}{q}-a\\}$. Since
$0<\frac{n}{q}-a<\frac{n}{q}$, by Proposition 10.3 (a), the basic path
$\\{0,\frac{n}{q}-a\\}$ splits in
$\frac{n}{q}-\bigl{(}\frac{n}{q}-a\bigr{)}=a$ iterates. This proves the first
assertion of the lemma.
The second assertion, using the time reverse property, is equivalent to showing
that the basic path $\\{0,n-y\\}$ is not covered by $\\{0,\frac{n}{q}-a\\}$
before splitting. That is, before $a$ iterates. This is clear, since neither
$0$ nor $\frac{n}{q}-a$ maps to $0$ before $a$ iterates. ∎
Now we can use Lemma 10.5 together with Proposition 10.3 to find the desired
coverings.
###### Lemma 10.6.
Let $\mathcal{O}$ be a zero entropy $n$-periodic pattern with two discrete
components and a maximal structure of trivial blocks of cardinality $q$. If
$q\geq 3$ then any inter-block basic path of $\mathcal{O}$ covers at least
four basic paths in $n$ iterates.
###### Proof.
Let us label $\mathcal{O}$ in such a way that $0$ is the unique inner point
and let $\pi$ be an inter-block basic path of $\mathcal{O}$. By Proposition
10.3,
1. (i)
either $\pi$ splits in at most $\frac{n}{q}$ iterates,
2. (ii)
or $\pi$ is a strict pre-image of a basic path $\\{0,a+\frac{q-1}{q}n\\}$ with
$0<a<\frac{n}{q}$ which splits in $\frac{n}{q}$ iterates.
In the case (i), $\pi$ covers two basic paths $\\{0,y\\}$ and $\\{0,z\\}$
before $\frac{n}{q}$ iterates. Notice that $\\{0,y\\}$ and $\\{0,z\\}$ cannot
both be in-block basic paths of $\mathcal{O}$. Otherwise, since $0$ is an inner
point of $\mathcal{O}$ and $y$ and $z$ are contained in different discrete
components, the trivial block that contains $0$ would contain points of two
different discrete components, a contradiction. Therefore, we can assume
$\\{0,y\\}$ to be an inter-block basic path of $\mathcal{O}$. Moreover, by
Lemma 10.5, an inter-block basic path of the form $\\{0,y\\}$ cannot be a
strict pre-image of a basic path of the form $\\{0,a+\frac{q-1}{q}n\\}$ with
$0<a<\frac{n}{q}$. Therefore, again by Proposition 10.3, $\\{0,y\\}$ splits in
at most $\frac{n}{q}$ iterates, covering two basic paths $\\{0,y_{1}\\}$ and
$\\{0,y_{2}\\}$. Again, one of them is inter-block in $\mathcal{O}$ and splits
in at most $\frac{n}{q}$ iterates. Therefore, $\pi$ covers at least four basic
paths in $\frac{3n}{q}\leq n$ iterates. This proves the result in the case
(i).
In the case (ii), by Lemma 10.5, $\pi$ covers $\\{0,a+\frac{q-1}{q}n\\}$ in at
most $a-1$ iterates and $\\{0,a+\frac{q-1}{q}n\\}$ covers $\\{0,a\\}$ and
$\\{0,\frac{n}{q}\\}$ in $\frac{n}{q}$ iterates. By Proposition 10.3(a), the
basic path $\\{0,a\\}$ splits and covers two basic paths $\\{0,u\\}$ and
$\\{0,v\\}$ in $\frac{n}{q}-a$ iterates and, since one must be inter-block in
$\mathcal{O}$, again splits in at most $\frac{n}{q}$ iterates as shown before.
Therefore, $\pi$ covers at least four basic paths in
$a-1+\frac{n}{q}+\frac{n}{q}-a+\frac{n}{q}=\frac{3n}{q}-1<n$ iterates, proving
the result in the case (ii). ∎
###### Remark 10.7.
Let $\mathcal{P}$ be a pattern and let $\mathcal{O}$ be an opening of
$\mathcal{P}$. Let $\pi$ be a basic path of $\mathcal{P}$. Then $\pi$ is also
a basic path of $\mathcal{O}$. Moreover, if $\pi$ covers $k$ basic paths in
$\ell$ iterates in $\mathcal{O}$ then $\pi$ covers at least $k$ basic paths in
$\ell$ iterates in $\mathcal{P}$.
By collecting all previous results, finally we get the desired lower bound for
coverings in a triple chain.
Figure 20. An illustration of the proof of Proposition 10.8. Some loops of the
$\mathcal{P}$-path graph obtained in the proof are shown. The underlined basic
paths are in-block in $\mathcal{O}_{2}$.
###### Proposition 10.8.
Let $\mathcal{P}$ be an $n$-periodic $\pi$-irreducible triple chain. Assume
that the two possible openings $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ of
$\mathcal{P}$ have entropy zero. Then, any basic path $\pi$ of $\mathcal{P}$
covers at least four basic paths in $n$ iterates.
###### Proof.
By Remark 10.7 a basic path $\pi$ of $\mathcal{P}$ is also a basic path of
both $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$. We claim that $\pi$ is inter-
block for some $\mathcal{O}_{i}$. Indeed, if $\pi$ is in-block in both
$\mathcal{O}_{1}$ and $\mathcal{O}_{2}$, then $\pi$ does not split through any
of the two inner points of $\mathcal{P}$. Consequently, $\pi$ never splits in
$\mathcal{P}$ and so $\mathcal{P}$ is $\pi$-reducible, a contradiction.
The patterns $\mathcal{O}_{i}$, $i=1,2$, have zero entropy. So, by Proposition
6.1, each of them has a maximal structure of trivial blocks of cardinality
$q_{i}\geq 2$.
Let us first prove the result when $q_{1},q_{2}\geq 3$. As stated before, $\pi$ is
inter-block for some of the openings, let us say $\mathcal{O}_{1}$ without
loss of generality. Since $q_{1}\geq 3$, by Lemma 10.6, $\pi$ covers at least
four basic paths in $n$ iterates in $\mathcal{O}_{1}$. By Remark 10.7, this
property is inherited in $\mathcal{P}$, so $\pi$ covers at least four basic
paths in $n$ iterates in $\mathcal{P}$. This proves the result in the first
situation.
Now assume that $q_{1}=q_{2}=2$. In this case, the basic path
$\\{0,\frac{n}{2}\\}$ is in-block in both $\mathcal{O}_{1}$ and
$\mathcal{O}_{2}$. By the discussion at the beginning of the proof, this
produces a contradiction with the $\pi$-irreducibility of $\mathcal{P}$.
We are left with the case $q_{1}=2$ and $q_{2}\geq 3$. Again, $\pi$ is inter-
block in $\mathcal{O}_{1}$ or $\mathcal{O}_{2}$. If $\pi$ is inter-block in
$\mathcal{O}_{2}$, the result follows as in the first case since $q_{2}\geq
3$. So, we can assume that $\pi$ is in-block in $\mathcal{O}_{2}$ and, in
consequence, inter-block in $\mathcal{O}_{1}$.
Let us relabel $\mathcal{P}$ and, accordingly, the openings $\mathcal{O}_{i}$,
in such a way that the inner point of $\mathcal{O}_{1}$ is $0$. We denote by
$j$ the inner point of $\mathcal{O}_{2}$. The basic path $\pi$ is in-block in
$\mathcal{O}_{2}$, so in $\mathcal{P}$ the first splitting is through the
inner point $0$. Since $\pi$ is inter-block in $\mathcal{O}_{1}$, by Proposition
10.3, one of the following situations occurs in $\mathcal{O}_{1}$:
1. (i)
either $\pi$ splits in at most $\frac{n}{2}$ iterates,
2. (ii)
or $\pi$ is a strict pre-image of a basic path $\\{0,a+\frac{n}{2}\\}$ with
$0<a<\frac{n}{2}$, which splits in $\frac{n}{2}$ iterates.
In both cases $\pi$ covers two basic paths in $\mathcal{O}_{1}$ after the
first splitting. Since $\mathcal{P}$ is a triple chain, at least one of such
paths is also a basic path in $\mathcal{P}$. For the sake of brevity, we will
focus on the worst scenario which corresponds to assuming that the two basic
paths covered in $\mathcal{O}_{1}$ are also basic paths in $\mathcal{P}$. The
reader may easily check that if this is not the case, then a third basic path
is covered in $\mathcal{P}$ during the first splitting, and the upper bounds
obtained below are valid for the basic path shared between $\mathcal{O}_{1}$
and $\mathcal{P}$.
Consider the case (i). Since $0$ is the inner point of $\mathcal{O}_{1}$,
$\pi$ covers in $\mathcal{O}_{1}$ two basic paths $\\{0,y\\}$ and $\\{0,z\\}$
in at most $\frac{n}{2}$ iterates. As noticed above, we are assuming that both
$\\{0,y\\}$ and $\\{0,z\\}$ are basic paths in $\mathcal{P}$. Clearly, $y\neq
z$ and so we can also assume $z\neq\frac{n}{2}$. Consequently, $\\{0,z\\}$ is
inter-block in $\mathcal{O}_{1}$. Moreover, by Lemma 10.5, $\\{0,z\\}$ is not
a strict pre-image of a basic path of the form $\\{0,a+\frac{n}{2}\\}$.
Then, by Proposition 10.3, $\\{0,z\\}$ splits in at most $\frac{n}{2}$
iterates covering two basic paths $\\{0,z_{1}\\}$ and $\\{0,z_{2}\\}$. Since
$\\{0,z\\}$ is a basic path in $\mathcal{P}$, then $\\{0,z\\}$ covers at least
two basic paths in $\frac{n}{2}$ iterates in $\mathcal{P}$. Now we have two
cases depending on the value of $y$. If $y\neq\frac{n}{2}$ the same argument
applies for $\\{0,y\\}$ and, summing up, $\pi$ covers at least four basic
paths in $n$ iterates in $\mathcal{P}$, proving the result in this case. The
following diagram illustrates the coverings in this first situation inside
case (i).
If $y=\frac{n}{2}$ then $\\{0,\frac{n}{2}\\}$ is in-block in
$\mathcal{O}_{1}$. Since $\\{0,\frac{n}{2}\\}$ is a basic path in
$\mathcal{P}$, it is also a basic path in $\mathcal{O}_{2}$. Moreover, it must
be inter-block. By Proposition 10.3, either $\\{0,\frac{n}{2}\\}$ covers two
basic paths before $\frac{n}{q_{2}}$ iterates or it is a strict pre-image of a
basic path $\\{j,j+b+\frac{(q_{2}-1)n}{q_{2}}\\}$, where $0<b<\frac{n}{q_{2}}$
and $j$ is the inner point of $\mathcal{O}_{2}$. The second alternative,
however, cannot be satisfied. Indeed, the time distance between the two points
of an iterate of a basic path is conserved while there is no splitting. If
$\\{0,\frac{n}{2}\\}$ is a strict pre-image of
$\\{j,j+b+\frac{(q_{2}-1)n}{q_{2}}\\}$, then the distance should be conserved,
but $b+\frac{(q_{2}-1)n}{q_{2}}\geq\frac{n}{2}$. Therefore,
$\\{0,\frac{n}{2}\\}$ covers two basic paths in $\mathcal{O}_{2}$ in at most
$\frac{n}{q_{2}}$ iterates. Since $q_{2}\geq 3$ then, summing up, $\pi$ covers
at least four basic paths in $n$ iterates in $\mathcal{P}$, proving the result
for the case (i). The following diagram illustrates the coverings in this
second situation inside case (i).
The basic paths $\\{0,8\\}$ and $\\{3,7\\}$ in Figure 20 are examples of
maximal length of case (i). The basic path $\\{0,8\\}$ splits in
$\frac{n}{2}=6$ iterates and covers $\\{0,y\\}=\\{0,6\\}$ and
$\\{0,z\\}=\\{0,2\\}$. The path $\\{0,6\\}$ is of the form
$\\{0,\frac{n}{2}\\}$, so it is in-block in $\mathcal{O}_{1}$ and inter-block
in $\mathcal{O}_{2}$. It splits in $3<\frac{n}{2}=6$ iterates. The path
$\\{0,2\\}$ is inter-block in both $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ and
splits in $2<\frac{n}{2}=6$ iterates. A similar phenomenon occurs for
$\\{3,7\\}$.
Let us now consider the case (ii). By Lemma 10.5, the basic path
$\\{0,a+\frac{n}{2}\\}$ has, at most, $a-1$ strict pre-images. Thus, $\pi$
covers $\\{0,a+\frac{n}{2}\\}$ in at most $a-1$ iterates. Since $\pi$ is in-
block in $\mathcal{O}_{2}$, $\\{0,a+\frac{n}{2}\\}$ must also be an in-block
path of $\mathcal{O}_{2}$. Hence, $a+\frac{n}{2}=k\frac{n}{q_{2}}$ for some
$1\leq k\leq q_{2}-1$. Moreover, $\\{0,a+\frac{n}{2}\\}$ splits in
$\frac{n}{2}$ iterates and covers the basic paths $\\{0,a\\}$ and
$\\{0,\frac{n}{2}\\}$. Recall that we are assuming that both $\\{0,a\\}$ and
$\\{0,\frac{n}{2}\\}$ are basic paths in $\mathcal{P}$ and so in
$\mathcal{O}_{2}$. The basic path $\\{0,\frac{n}{2}\\}$ is inter-block in
$\mathcal{O}_{2}$ and, as proved in case (i), covers two basic paths before
$\frac{n}{q_{2}}$ iterates. On the other hand, $\\{0,a\\}$ is inter-block for
$\mathcal{O}_{1}$ and, since $a<\frac{n}{2}$, it covers two basic paths in
$\frac{n}{2}-a$ iterates by Proposition 10.3(a). Summing up, $\pi$ covers two
basic paths in $a-1+\frac{n}{2}+\frac{n}{q_{2}}=(k+1)\frac{n}{q_{2}}-1\leq
n-1$ iterates through $\\{0,\frac{n}{2}\\}$ and two basic paths in
$a-1+\frac{n}{2}+\frac{n}{2}-a=n-1$ iterates through $\\{0,a\\}$, which proves
that $\pi$ covers at least four basic paths in $n$ iterates. The following
diagram illustrates the coverings in case (ii).
The basic path $\\{11,7\\}$ is the only one satisfying case (ii) in Figure 20.
Here $\\{0,a+\frac{n}{2}\\}=\\{0,8\\}$ with $a=2$. Indeed, $\\{0,8\\}$ has at
most $a-1=1$ pre-images and $\\{0,8\\}$ splits exactly in $\frac{n}{2}=6$
iterates covering $\\{0,a\\}=\\{0,2\\}$ and $\\{0,\frac{n}{2}\\}=\\{0,6\\}$. ∎
Let $A=(a_{ij})$ be an $n\times n$ nonnegative matrix. Recall that $\rho(A)$
stands for the spectral radius of $A$. For $1\leq i\leq n$, let
$r_{i}(A)=\sum_{j=1}^{n}a_{ij}$ be the $i$-th row sum of $A$. The following
result is well-known [28].
###### Theorem 10.9.
If $A$ is a nonnegative matrix then
$\min_{1\leq i\leq n}r_{i}(A)\leq\rho(A)\leq\max_{1\leq i\leq n}r_{i}(A).$
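As a quick numerical sanity check, the following sketch (ours, with an arbitrary illustrative matrix and period) verifies the row-sum bounds for a small nonnegative matrix and the way they are used in the corollary below: if every row of $M^{n}$ sums to at least $4$, then $\rho(M)\geq\sqrt[n]{4}$.

```python
import numpy as np

# A small nonnegative matrix playing the role of a path transition matrix.
M = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
n = 4  # hypothetical period, for illustration only

rho = max(abs(np.linalg.eigvals(M)))   # spectral radius of M
Mn = np.linalg.matrix_power(M, n)
row_sums = Mn.sum(axis=1)

# Theorem 10.9 applied to M^n, using rho(M^n) = rho(M)^n:
assert row_sums.min() <= rho**n + 1e-9
assert rho**n <= row_sums.max() + 1e-9
# If every row sum of M^n is >= 4, then rho(M) >= 4**(1/n), as in the corollary.
if row_sums.min() >= 4:
    assert rho >= 4 ** (1.0 / n) - 1e-9
```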
###### Corollary 10.10.
Let $\mathcal{P}$ be an $n$-periodic and $\pi$-irreducible triple chain.
Assume that the two possible openings of $\mathcal{P}$ have entropy zero.
Then, $h(\mathcal{P})>\log(\sqrt[n]{4})$.
###### Proof.
By Remark 2.2, $h(\mathcal{P})=\log\max\\{\rho(M),1\\}$, where $M$ is the path
transition matrix of $\mathcal{P}$. By Proposition 10.8, any basic path of
$\mathcal{P}$ covers at least four basic paths in $n$ iterates. In particular,
the sum of the elements in each row of $M^{n}$ satisfies $r_{i}(M^{n})\geq 4$. By
Theorem 10.9, $4\leq\rho(M^{n})\leq\rho(M)^{n}$. In consequence,
$\rho(M)\geq\sqrt[n]{4}$ and the result follows. ∎
## 11\. Proof of Theorem A
# Grass-roots optimization of coupled oscillator networks
Pranick R. Chamlagai Department of Mathematics, Trinity College, Hartford, CT
06106, USA Dane Taylor Department of Mathematics, University at Buffalo,
State University of New York, Buffalo, NY 14260, USA Per Sebastian Skardal
<EMAIL_ADDRESS>Department of Mathematics, Trinity College,
Hartford, CT 06106, USA
###### Abstract
Synchronization is critical for system function in applications ranging from
cardiac pacemakers to power grids. Existing optimization techniques rely
largely on global information, and while they induce certain local properties,
those alone do not yield optimal systems. Therefore, while useful for
designing man-made systems, existing theory provides limited insight into
self-optimization of naturally-occurring systems that rely on local
information and offer limited potential for decentralized optimization. Here
we present a method for “grass-roots” optimization of synchronization, which
is a multiscale mechanism involving local optimizations of smaller subsystems
that are coordinated to collectively optimize an entire system, and the
dynamics of such systems are particularly robust to islanding or targeted
attacks. In addition to shedding light on self-optimization in natural
systems, grass-roots optimization can also support the parallelizable and
scalable engineering of man-made systems.
The ability of large systems of dynamical units to self-organize and produce
robust collective behavior continues to drive a large body of research
Pikovsky2003 ; Strogatz2004 . Applications include cardiac dynamics
Bychkov2020JACC , brain dynamics Kopell2000PNAS , cell signaling
Prindle2012Nature , and power grids Rohden2012PRL . Weak synchronization and
desynchronization events often lead to pathological behavior, e.g., spiral
wave breakup in cardiac tissue Fenton2002Chaos ; Panfilov2007PNAS and
blackouts in power grids Dorfler2013PNAS , thereby motivating optimized systems for
strong, robust synchronization. While man-made systems such as power grids can
be designed and calibrated using global structural and dynamical information
Pecora1998PRL ; Nishikawa2006PRE , such information is likely unavailable to
naturally occurring systems. Notably, a great deal is known about how
biological systems function; however, comparatively little is understood about
the self-optimization processes that are tasked with constructing and
maintaining/repairing such systems. Prominent examples include cardiac
pacemakers that initialize strongly synchronized pulses that propagate through
tissue Mangoni2008 and coordination of chromosomal activity through cell
differentiation Rajapakse2009PNAS . For synchronizing systems that rely on
collective behavior, it is reasonable to assume that the related optimization
mechanisms are themselves a collective, coordinated behavior. A stronger
theoretical understanding of such mechanisms for collective (self)
optimization will deepen our understanding of diverse types of biological (and
other) systems and has the potential to revolutionize the way we engineer
systems—or rather, design systems to engineer themselves. To this end,
collective optimizations constitute an under-explored family of collective
behavior, and there is a lack of multiscale optimization theory to provide
insight into how local optimizations might coordinate to optimize
globally, both in the context of synchronization and more broadly.
In this paper, we explore grass-roots optimization for coupled oscillator
networks, whereby the parallel optimization of smaller subsystems can be
coordinated to collectively optimize the global synchronization properties of
the entire system. In general, subsystems of a network can be defined in a
variety of ways: communities Girvan2002PNAS , spatially distinct regions in a
geometric network Barthelemy2011PhysRep , or other partitions of a network
after embedding in a more general metric space Coiffman2005PNAS . Our main
finding is an intuitive multiscale mechanism for grass-roots optimization of
synchronization that involves two steps: local subsystem optimization, whereby
subsystems are optimized in parallel; and global subsystem balancing, whereby
the subsystems are balanced with one another. We derive this mechanism from
first principles using the Synchrony Alignment Function (SAF) framework, which
provides an objective measure of a system’s synchronization properties and has
been used in a number of synchronization optimization tasks Skardal2014PRL ;
Skardal2016Chaos ; Taylor2016SIAM ; Skardal2017Chaos ; Skardal2019SIAM ;
Arola2021Chaos . We demonstrate the utility of grass-roots optimization across
a range of networks where structural subsystems arise naturally: networks with
community structure, a power grid, and noisy geometric networks that model
systems with spatial constraints for connections, such as calcium release sites in
cardiac pacemaker cells Bychkov2020JACC and self-coordinating chromosomes
Rajapakse2009PNAS . We show that the global synchronization properties of
grass-roots optimized systems are nearly identical to those of globally-
optimized systems, and importantly, these properties are also more robust to
subsystem dismantling, e.g., due to targeted attack or intentional
‘islanding’. Grass-roots optimization provides a viable mechanism by which
biological systems can robustly self-optimize and provides engineering
strategies that are decentralized, parallelizable, and scalable.
Figure 1: Grass-roots synchronization. Illustrations of (a) a random network
with two communities, (b) the IEEE RTS 96 power grid, and (c) a random
geometric network. (d)–(f) The degree of synchronization $r$ and (g)–(i)
synchronization error $1-r$ as a function of coupling strength $K$ for the
three respective network types with either randomly allocated frequencies
(green triangles), globally-optimized frequencies (blue circles), or grass-
roots optimized frequencies (red crosses).
We consider networks of coupled, heterogeneous phase oscillators whose
dynamics are given by
$\displaystyle\dot{\theta}_{i}=\omega_{i}+K\sum_{j=1}^{N}A_{ij}H(\theta_{j}-\theta_{i}),$
(1)
where $\theta_{i}$ and $\omega_{i}$ are the phase and natural frequency of
oscillator $i=1,\dots,N$, parameter $K$ is the global coupling strength,
network structure is encoded in an adjacency matrix $A$, and $H(\cdot)$ is a
$2\pi$-periodic coupling function. Here, we focus on the case of unweighted,
undirected networks with $A_{ij}=1$ if oscillators $i$ and $j$ are connected
and $0$ otherwise, although these properties may be relaxed without much
trouble. We also use classical Kuramoto coupling Kuramoto , given by
$H(\cdot)=\sin(\cdot)$, but emphasize that one may choose other functions $H$
provided that $H^{\prime}(0)>0$ and $H(\Delta\theta)=0$ for some
$\Delta\theta$ near zero. Notably, phase oscillator models such as Eq. (1)
have been found to be suitable models for naturally-occurring phenomena such as
chromosomal coordination Rajapakse2009PNAS and integrate and fire dynamics of
cardiac pacemakers Politi2015PRE , as well as mechanical systems such as power
grids Porco2013 ; Skardal2015SciAdv . The degree of synchronization is
measured by the magnitude $r\in[0,1]$ of the Kuramoto order parameter
$re^{i\psi}=N^{-1}\sum_{j=1}^{N}e^{i\theta_{j}}$. By linearizing around the
synchronized state one obtains
$\displaystyle r\approx
1-\frac{J(\bm{\omega},L)}{2K^{2}},~{}\text{where}~{}J(\bm{\omega},L)=\frac{1}{N}\sum_{j=2}^{N}\frac{\langle\bm{v}^{j},\bm{\omega}\rangle^{2}}{\lambda_{j}^{2}}$
(2)
is the Synchrony Alignment Function (SAF) Skardal2014PRL . The SAF utilizes
the alignment of the natural frequencies $\bm{\omega}$ with the eigenvalues
$\\{\lambda_{j}\\}_{j=1}^{N}$ and eigenvectors $\\{\bm{v}^{j}\\}_{j=1}^{N}$ of
the combinatorial Laplacian, $L=D-A$, where $D=\text{diag}(k_{1},\dots,k_{N})$
is a diagonal matrix that encodes the nodal degrees,
$k_{i}=\sum_{j=1}^{N}A_{ij}$. Synchronization is optimized (i.e., $r$ is
maximized) by minimizing $J(\bm{\omega},L)$, which may be done by aligning
$\bm{\omega}$ with the eigenvectors of $L$ that are associated with larger
eigenvalues. Minimizing the SAF by setting $\bm{\omega}=\bm{v}^{N}$ also reveals
intuitive key properties of synchrony optimized systems including degree-
frequency correlations and anti-correlations between the frequencies of
neighboring oscillators Skardal2014PRL . While such local properties are
associated with synchronization, they alone do not guarantee it, nor do they
offer insight toward mesoscale/multiscale properties and mechanisms enabling
collective optimization.
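To make Eq. (2) concrete, the following NumPy sketch (our own illustration, not the authors' code; the function name `saf` and the ring-network example are assumptions for demonstration) evaluates $J(\bm{\omega},L)$ from the eigendecomposition of $L$ and compares a random frequency assignment with the aligned choice $\bm{\omega}\propto\bm{v}^{N}$.

```python
import numpy as np

def saf(omega, L):
    """Synchrony Alignment Function J(omega, L) of Eq. (2).
    The j = 1 term (zero eigenvalue, constant eigenvector) is skipped."""
    N = L.shape[0]
    lam, V = np.linalg.eigh(L)      # ascending eigenvalues, orthonormal columns
    proj = V[:, 1:].T @ omega       # <v^j, omega> for j = 2, ..., N
    return np.sum(proj**2 / lam[1:]**2) / N

# Toy example: ring network with random natural frequencies.
N = 50
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian L = D - A

rng = np.random.default_rng(0)
omega = rng.standard_normal(N)
lam, V = np.linalg.eigh(L)
omega_opt = V[:, -1] * np.linalg.norm(omega)   # align with v^N, same norm
print(saf(omega, L), saf(omega_opt, L))        # the aligned choice gives a far smaller J
```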
We now present a method for grass-roots optimization of synchronization,
including a multiscale mechanism in which subsystems coordinate local
optimizations to optimize a system’s global synchronization properties. We
will present a detailed derivation later, and first summarize our main
findings and implications. We consider networks that can be partitioned into
$C$ subsystems such that the adjacency matrix $A$ may be rewritten in a block
form $A=A_{D}+B$, where $A_{D}=\text{diag}(A^{(1)},\dots,A^{(C)})$ is a block-
diagonal matrix containing the subsystems’ separate adjacency matrices, and
the off-diagonal blocks of $B$ encode edges between subsystems. We assume that
the blocks in $B$ are sparser than the diagonal blocks in $A_{D}$. For each
subsystem $s$, we define its associated combinatorial Laplacian matrix
$L^{(s)}$ and its associated vector $\bm{\omega}^{(s)}$ of frequencies. As we
will show below, provided that the subsystems’ mean oscillator frequencies are
equal, the SAF for the full system may be approximated by
a linear combination of the subsystem-specific SAFs,
$\displaystyle
J(\bm{\omega},L)\approx\eta_{1}J(\bm{\omega}^{(1)},L^{(1)})+\cdots+\eta_{C}J(\bm{\omega}^{(C)},L^{(C)}),$
(3)
where $\eta_{s}$ is the fraction of nodes in subsystem $s$. This result leads
to the following multiscale mechanism for grass-roots optimization: (i)
_Global balancing of subsystems_ : achieve a balanced set of local mean
frequencies across all $C$ subsystems, i.e., minimize
$\text{max}_{s,s^{\prime}}|\langle\bm{\omega}^{(s)}\rangle-\langle\bm{\omega}^{(s^{\prime})}\rangle|$;
(ii) _Local optimization of subsystems_ : optimize the local SAFs, i.e.,
minimize $J(\bm{\omega}^{(s)},L^{(s)})$ for each $s$. This framework is
flexible and may be used under a wide range of application-specific
constraints. Notably, these two intuitive steps are a plausible mechanism that
can be utilized by biological (and other natural) systems to self-optimize
using local/global mechanisms, and it helps fill the theoretical gap between
existing (global) optimization theory and known (local) heuristic properties
that promote synchrony (e.g., degree-frequency correlations).
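As a sanity check of Eq. (3), one may plant a two-block network with sparse inter-block coupling, balance the subsystem mean frequencies, and compare the full SAF with the weighted sum of subsystem SAFs. The sketch below is our own toy verification (it reuses the `saf` function from the previous snippet; block sizes and densities are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(1)
N1 = N2 = 60
N = N1 + N2
p_in, p_out = 0.20, 0.01            # dense blocks, sparse inter-block edges

A = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        same_block = (i < N1) == (j < N1)
        if rng.random() < (p_in if same_block else p_out):
            A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A
L1 = np.diag(A[:N1, :N1].sum(axis=1)) - A[:N1, :N1]   # subsystem Laplacians
L2 = np.diag(A[N1:, N1:].sum(axis=1)) - A[N1:, N1:]

omega = rng.standard_normal(N)
omega[:N1] -= omega[:N1].mean()     # step (i): equalize the subsystem means
omega[N1:] -= omega[N1:].mean()

eta1, eta2 = N1 / N, N2 / N
print(saf(omega, L),                                            # full-system SAF
      eta1 * saf(omega[:N1], L1) + eta2 * saf(omega[N1:], L2))  # r.h.s. of Eq. (3)
```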
We now illustrate the effectiveness of grass-roots optimization across three
classes of networks: (i) networks with community structure (which are
generated by the stochastic block model Holland1983 , contain two communities
of sizes $N^{(1;2)}=100$, and have mean intra-degree $\langle
k^{(1;2)}\rangle=5$ and mean inter-degree $\langle k^{(12)}\rangle=1$); (ii)
the RTS 96 power grid Grigg1999IEEE ; and (iii) noisy geometric networks
Taylor2015NatComms (which consist of $N=200$ nodes placed uniformly within a
$4\times 1$ box with $95\%$ of links placed between the closest possible node
pairs and the other $5\%$ of links placed randomly, resulting in a mean degree
of $\langle k\rangle=8$). As shown in Figs. 1(a)–(c), we partition the three
classes of networks into two, three, and four subsystems, respectively. (The
four subsystems of the geometric networks are defined by the $\pm$ sign
combinations in the first two non-trivial eigenvectors of $L$.)
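The sign-based spectral partition for the geometric networks can be computed, for instance, as in the short sketch below (our own helper; the grouping into integer labels is an arbitrary convention).

```python
import numpy as np

def spectral_partition(L):
    """Assign each node one of four labels according to the sign
    combinations of the first two non-trivial eigenvectors of L."""
    _, V = np.linalg.eigh(L)
    s2 = (V[:, 1] >= 0).astype(int)   # sign of the Fiedler vector v^2
    s3 = (V[:, 2] >= 0).astype(int)   # sign of v^3
    return 2 * s2 + s3                # labels in {0, 1, 2, 3}
```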
For each network, we assume that natural frequencies are given and cannot be
modified, but may be rearranged. Thus, a global balance between subsystems
[step (i)] may be obtained by shuffling frequencies between subsystems, while
the subsystems may be locally optimized [step (ii)] by then shuffling
frequencies within each subsystem. To optimize each network, we use an accept-
reject algorithm, proposing $5\times 10^{4}$ switches between randomly chosen
pairs of frequencies and accepting switches that decrease the SAF. In Figs.
1(d)–(f), we plot the synchronization profile, $r$ vs $K$, for systems with
randomly allocated frequencies (green triangles), globally optimized
frequencies (blue circles) and grass-roots optimized frequencies (red crosses)
for the three classes of networks. All data points are averaged across $50$
random networks and natural frequency realizations (drawn from the standard
normal distribution) except for the power grid, where the same network is used
throughout. Note the comparably strong synchronization properties for both the
global and grass-roots optimized cases. To differentiate the two cases we plot
the synchronization error $1-r$ vs $K$ in a log-log scale in Figs. 1(g)–(i),
revealing that grass-roots optimization is very effective across a wide range
of network structures.
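A minimal sketch of the accept-reject shuffling described above (our own illustration, reusing `saf`; a practical implementation would precompute the eigendecomposition once and update only the projections rather than rediagonalizing at every proposal):

```python
import numpy as np

def swap_optimize(omega, L, n_proposals=50_000, seed=0):
    """Propose random pairwise frequency swaps; accept only swaps that
    decrease the SAF. For grass-roots optimization, restrict i and j to
    the same subsystem after balancing the subsystem means."""
    rng = np.random.default_rng(seed)
    omega = omega.copy()
    J = saf(omega, L)
    for _ in range(n_proposals):
        i, j = rng.choice(len(omega), size=2, replace=False)
        omega[i], omega[j] = omega[j], omega[i]
        J_new = saf(omega, L)
        if J_new < J:
            J = J_new                                     # accept
        else:
            omega[i], omega[j] = omega[j], omega[i]       # reject: undo
    return omega, J
```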
Figure 2: Robustness to islanding and targeted attacks. (a) Example of local
(subsystem) order parameters for the RTS 96 power grid before ($t<0$) and
after ($t\geq 0$) islanding for global (solid blue) and grass-roots (dashed
red) optimization. (b) Density of local (subsystem) SAFs after islanding for
global (solid blue) and grass-roots (dashed red) optimization. (c)
Illustration of the islanded subsystems in the RTS 96 power grid.
Next we show that grass-roots optimized networks outperform globally optimized
networks when subsystems are islanded from one another or otherwise dismantled
by a targeted attack. For instance, modern power grids feature microgrids that
are smaller subsystems that may be separated, i.e., “islanded”, from the
larger grid Porco2013 . We predict such a feature to be advantageous to
synchronizing biological processes, which is a main motivator for our work. As
an example, we consider the RTS 96 power grid before and after the islanding
of three subsystems [as indicated in Fig. 2(c)]. In Fig. 2(a) we plot time
series of three local order parameters for systems designed using global (solid
blue) and grass-roots (dashed red) optimization. We use $K=1$ and normally-
distributed frequencies. Edges between subsystems are removed at time $t=0$.
Before islanding ($t<0$) both cases display strong synchronization properties.
After islanding ($t\geq 0$) the globally-optimized system displays
significantly weaker synchronization properties and a desynchronization event
(indicated by oscillations). On the other hand, the grass-roots optimized
system maintains its strong synchronization properties. This is further
demonstrated in Fig. 2(b), where we plot the density of local, i.e.,
subsystem-specific, SAFs for globally (solid blue) and grass-roots (dashed
red) optimized systems obtained from $10^{4}$ realizations. We indicate the
respective means $\overline{J(\bm{\omega},L)}=0.1427$ and $0.0629$ of the
local SAFs for the globally and grass-roots optimized cases with vertical
lines.
We conclude by presenting our local approximation of the SAF, which allowed us
to identify the multiscale mechanism [i.e., steps (i) and (ii)]
underlying grass-roots optimization. For simplicity, we first consider the
case of two subsystems, leaving further generalization to the Supplemental
Material (SM). Writing the adjacency matrix as
$A=\begin{bmatrix}A^{(1)}&B^{(12)}\\\ B^{(12)T}&A^{(2)}\end{bmatrix}$, where
$A^{(1)}\in\mathbb{R}^{N_{1}\times N_{1}}$, $A^{(2)}\in\mathbb{R}^{N_{2}\times
N_{2}}$, $B^{(12)}\in\mathbb{R}^{N_{1}\times N_{2}}$, and $N_{1}$ and $N_{2}$
are the sizes of the respective subsystems, the Laplacian is given by
$L=L_{0}+L_{B}$, where $L_{0}=\begin{bmatrix}L^{(1)}&0\\\
0&L^{(2)}\end{bmatrix}$, $L_{B}=\begin{bmatrix}D_{B^{(12)}}&-B^{(12)}\\\
-B^{(12)T}&D_{B^{(12)T}}\end{bmatrix}$, and $L^{(1,2)}=D^{(1,2)}-A^{(1,2)}$
with diagonal matrices $D_{B^{(12)}}$ and $D_{B^{(12)T}}$ whose entries
correspond to row sums of $B^{(12)}$ and $B^{(12)T}$, respectively. We assume
$B^{(12)}$ to be sparser than $A^{(1)}$ and $A^{(2)}$ so that
$\left\|L_{B}\right\|\ll\left\|L_{0}\right\|$ under a suitable matrix norm
(e.g., the Frobenius norm). We then define $\Delta
L=(\|L_{0}\|/\|L_{B}\|)L_{B}$ so that $L(\epsilon)=L_{0}+\epsilon\Delta L$
recovers the original network structure for the choice
$\epsilon=\|L_{B}\|/\|L_{0}\|\ll 1$.
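For concreteness, a short sketch (with a hypothetical helper name `laplacian_split`) that builds $L_{0}$, $\Delta L$, and $\epsilon$ from an adjacency matrix and a two-block partition:

```python
import numpy as np

def laplacian_split(A, n1):
    """Decompose L = L0 + eps * dL for a partition into blocks of
    sizes (n1, N - n1), following the construction in the text."""
    L = np.diag(A.sum(axis=1)) - A
    A_D = A.copy()
    A_D[:n1, n1:] = 0                 # remove the off-diagonal blocks B^(12)
    A_D[n1:, :n1] = 0
    L0 = np.diag(A_D.sum(axis=1)) - A_D
    LB = L - L0                       # D_B blocks on the diagonal, -B off it
    eps = np.linalg.norm(LB) / np.linalg.norm(L0)   # Frobenius norms
    return L0, LB / eps, eps          # L0, Delta L, epsilon
```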
Next, we discuss the spectral properties of $L_{0}$. Since this matrix encodes
the two subsystems in isolation, its eigenvalue spectrum is the union of the
eigenvalue spectra of $L^{(1)}$ and $L^{(2)}$. Specifically, ordering the
eigenvalues of $L^{(1)}$ and $L^{(2)}$, respectively,
$0=\mu_{1}<\mu_{2}\leq\cdots\leq\mu_{N_{1}}$ and
$0=\nu_{1}<\nu_{2}\leq\cdots\leq\nu_{N_{2}}$ (where we assume that the
subsystems are themselves connected), this implies that $L_{0}$ has two zero
eigenvalues, $\lambda_{1}=\lambda_{2}=0$, and the rest are positive. Since $0$
is a repeated eigenvalue of $L_{0}$, its nullspace requires some care. Rather
than choosing eigenvectors $\bm{v}^{1}\propto[\bm{1},\bm{0}]^{T}$ and
$\bm{v}^{2}\propto[\bm{0},\bm{1}]^{T}$, whose entries are constant within one
subsystem and zero within the other, it is advantageous to instead choose
$\bm{v}^{1}=\frac{1}{\sqrt{N}}[\bm{1},\bm{1}]^{T}$ and
$\bm{v}^{2}=\frac{\sqrt{N_{1}N_{2}}}{N}[\bm{1}/N_{1},-\bm{1}/N_{2}]^{T}$ so
that $\bm{v}^{1}$ is independent of $\epsilon$ and characterizes the nullspace
of $L(\epsilon)$, and $\bm{v}^{2}$ is associated with an eigenvalue that
converges to 0 as $\epsilon\to 0$ but is strictly positive for $\epsilon>0$.
The other $N-2$ eigenvectors of $L_{0}$ are given by
$\\{\bm{v}^{j}\\}_{j=3}^{N}=\left\\{[\bm{u}^{j},\bm{0}]^{T}\right\\}_{j=2}^{N_{1}}\bigcup\left\\{[\bm{0},\bm{x}^{j}]^{T}\right\\}_{j=2}^{N_{2}}$,
where $\\{\bm{u}^{j}\\}_{j=1}^{N_{1}}$ and $\\{\bm{x}^{j}\\}_{j=1}^{N_{2}}$
are the eigenvectors of $L^{(1)}$ and $L^{(2)}$.
Considering $0<\epsilon\ll 1$, each eigenvalue of $L(\epsilon)$ varies
continuously with $\epsilon$ Kato2013 , so we may write
$\lambda_{j}(\epsilon)=\lambda_{j}+\epsilon\delta\lambda_{j}^{(1)}+\epsilon^{2}\delta\lambda_{j}^{(2)}+\mathcal{O}(\epsilon^{3})$.
We similarly assume
$\bm{v}^{j}(\epsilon)=\bm{v}^{j}+\epsilon\delta\bm{v}^{j(1)}+\epsilon^{2}\delta\bm{v}^{j(2)}+\mathcal{O}(\epsilon^{3})$.
Since $\lambda_{2}(\epsilon)\ll 1$ and $\lambda_{j}(\epsilon)\sim 1$ for
$j=3,\dots,N$, the term associated with $j=2$ needs to be treated separately
from the others, and
$\displaystyle
J(\bm{\omega},L(\epsilon))=\frac{1}{N}\left(\frac{\langle\bm{\omega},\bm{v}^{2}(\epsilon)\rangle}{\lambda_{2}(\epsilon)}\right)^{2}+\frac{1}{N}\sum_{j=3}^{N}\left(\frac{\langle\bm{\omega},\bm{v}^{j}(\epsilon)\rangle}{\lambda_{j}(\epsilon)}\right)^{2}.$
(4)
Upon expanding the $N-1$ terms contributing to the SAF in Eq. (4), we find
that they all take a similar form except for a factor of $\epsilon$,
$\displaystyle\left(\frac{\langle\bm{\omega},\bm{v}^{j}(\epsilon)\rangle}{\lambda_{j}(\epsilon)}\right)^{2}=\epsilon^{\alpha_{j}}\left(\frac{\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{2}}\right)+\epsilon^{1+\alpha_{j}}\left(\frac{2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\lambda_{j})^{2}}-\frac{2\delta\lambda_{j}^{(1)}\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{3}}\right)$
$\displaystyle+\epsilon^{2+\alpha_{j}}\left(\frac{\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle^{2}+2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(2)}\rangle}{(\lambda_{j})^{2}}-\frac{4\delta\lambda_{j}^{(1)}\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\lambda_{j})^{3}}+\frac{(3(\delta\lambda_{j}^{(1)})^{2}-2\lambda_{j}\delta\lambda_{j}^{(2)})\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{4}}\right)+\mathcal{O}(\epsilon^{3+\alpha_{j}}),$
(5)
where $\alpha_{j}$ is an exponent that equals $-2$ when $j=2$ and $0$ when $j\geq 3$.
Due to the different scaling with $\epsilon$, the terms associated with $j=2$
are much larger than those for $j\geq 3$. Inserting Eq. (5) into Eq. (4)
yields
$\displaystyle J(\bm{\omega},$ $\displaystyle
L(\epsilon))=N^{-1}\epsilon^{-2}\left(\frac{\langle\bm{\omega},\bm{v}^{2}\rangle^{2}}{(\delta\lambda_{2}^{(1)})^{2}}\right)+N^{-1}\epsilon^{-1}\left(\frac{2\langle\bm{\omega},\bm{v}^{2}\rangle\langle\bm{\omega},\delta\bm{v}^{2(1)}\rangle}{(\delta\lambda_{2}^{(1)})^{2}}-\frac{2\delta\lambda_{2}^{(2)}\langle\bm{\omega},\bm{v}^{2}\rangle^{2}}{(\delta\lambda_{2}^{(1)})^{3}}\right)$
$\displaystyle+N^{-1}\left(\frac{\langle\bm{\omega},\delta\bm{v}^{2(1)}\rangle^{2}+2\langle\bm{\omega},\bm{v}^{2}\rangle\langle\bm{\omega},\delta\bm{v}^{2(2)}\rangle}{(\delta\lambda_{2}^{(1)})^{2}}-\frac{4\delta\lambda_{2}^{(2)}\langle\bm{\omega},\bm{v}^{2}\rangle\langle\bm{\omega},\delta\bm{v}^{2(1)}\rangle}{(\delta\lambda_{2}^{(1)})^{3}}+\frac{(3(\delta\lambda_{2}^{(2)})^{2}-2\delta\lambda_{2}^{(1)}\delta\lambda_{2}^{(3)})\langle\bm{\omega},\bm{v}^{2}\rangle^{2}}{(\delta\lambda_{2}^{(1)})^{4}}\right)$
$\displaystyle+\eta_{1}J(\bm{\omega}^{(1)},L^{(1)})+\eta_{2}J(\bm{\omega}^{(2)},L^{(2)})+\epsilon\left[N^{-1}\sum_{j=3}^{N}\left(\frac{2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\lambda_{j})^{2}}-\frac{2\delta\lambda_{j}^{(1)}\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{3}}\right)\right]+\mathcal{O}(N^{-1}\epsilon,\epsilon^{2}),$
(6)
where we have used that
$\frac{1}{N}\sum_{j=3}^{N}\frac{\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{\lambda_{j}^{2}}=\eta_{1}J(\bm{\omega}^{(1)},L^{(1)})+\eta_{2}J(\bm{\omega}^{(2)},L^{(2)})$.
While Eq. (6) may appear daunting, the key insight is the presence of an inner
product $\langle\bm{\omega},\bm{v}^{2}\rangle$ in several leading-order terms.
Recalling the structure of $\bm{v}^{2}$, and writing
$\bm{\omega}=[\bm{\omega}^{(1)},\bm{\omega}^{(2)}]^{T}$, where
$\bm{\omega}^{(1)}$ and $\bm{\omega}^{(2)}$ are the frequency vectors
corresponding to the two subsystems, we have that
$\langle\bm{\omega},\bm{v}^{2}\rangle=\sqrt{\eta_{1}\eta_{2}}(\langle\bm{\omega}^{(1)}\rangle-\langle\bm{\omega}^{(2)}\rangle)$.
Thus, if the subsystems’ mean frequencies can be engineered to match,
$\langle\bm{\omega}^{(1)}\rangle=\langle\bm{\omega}^{(2)}\rangle$, then many
terms vanish to yield
$\displaystyle J$
$\displaystyle(\bm{\omega},L(\epsilon))=\eta_{1}J(\bm{\omega}^{(1)},L^{(1)})+\eta_{2}J(\bm{\omega}^{(2)},L^{(2)})$
$\displaystyle+\epsilon\left[N^{-1}\sum_{j=3}^{N}\left(\frac{2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\lambda_{j})^{2}}-\frac{2\delta\lambda_{j}^{(1)}\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{3}}\right)\right]$
$\displaystyle+N^{-1}\left(\frac{\langle\bm{\omega},\delta\bm{v}^{2(1)}\rangle^{2}}{(\delta\lambda_{2}^{(1)})^{2}}\right)+\mathcal{O}(N^{-1}\epsilon,\epsilon^{2}),$
(7)
which recovers Eq. (3) to leading order for the case of $C=2$ subsystems. See
the SM for further generalization insights.
While recent progress has been made in optimizing collective behavior in
complex systems, the resulting techniques and methodologies rely largely on
global network information. These approaches express certain local properties
such as correlations between nodal degrees and natural frequencies
Skardal2014PRL ; Skardal2016Chaos , however such properties alone do not
optimize systems. This leaves open the critical question of how naturally-
occurring systems tune their own structure and dynamics to self-optimize, and
it is reasonable to consider that the optimization itself is a collective
behavior.
Grass-roots optimization is a multiscale mechanism for coordinating and
optimizing the local synchronization properties of a network’s subsystems and
is a plausible mechanism for collective (self) optimization within naturally-
occurring systems that have access to only local information, such as cardiac
pacemakers Bychkov2020JACC and genetic oscillators Rajapakse2009PNAS . It can
also support the design of decentralized, parallelizable and scalable
algorithms to engineer man-made systems that are robust to network
dismantling. Notably, these very same features may have provided an
evolutionary advantage for biological systems that crucially depend on
synchronization.
###### Acknowledgements.
PRC was supported by the Interdisciplinary Science Program and Summer Research
Program at Trinity College. PSS was supported by NSF grant MCB-2126177. DT was
supported by NSF grant DMS-2052720 and Simons Foundation award #578333.
## References
* [1] A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: a universal concept in nonlinear sciences (Cambridge University Press, 2003).
* [2] S. H. Strogatz, Sync: The emerging science of spontaneous order (Penguin UK, 2004).
* [3] R. Bychkov, M. Juhaszova, K. Tsutsui, C. Coletta, M. D. Stern, V. A. Maltsev, and E. G. Lakatta, Synchronized Cardiac Impulses Emerge From Heterogeneous Local Calcium Signals Within and Among Cells of Pacemaker Tissue, JACC Clin. Electrophysiol. 6, 907 (2020).
* [4] N. Kopell, G. B. Ermentrout, M. A. Whittington, and R. D. Traub, Gamma rhythms and beta rhythms have different synchronization properties, Proc. Natl. Acad. Sci. U.S.A. 97, 1867 (2000).
* [5] A. Prindle, P. Samayoa, I. Razinkov, T. Danino, L. S. Tsimring, and J. Hasty, A sensing array of radically coupled genetic ‘biopixels’, Nature 481, 39 (2012).
* [6] M. Rohden, A. Sorge, M. Timme, and D. Witthaut, Self-organized synchronization in decentralized power grids, Phys. Rev. Lett. 109, 064101 (2012).
* [7] F. H. Fenton, E. M. Cherry, H. M. Hastings, and S. J. Evans, Multiple mechanisms of spiral wave breakup in a model of cardiac electrical activity, Chaos 12, 852 (2002).
* [8] A. V. Panfilov, R. H. Keldermann, and M. P. Nash, Drift and breakup of spiral waves in reaction–diffusion–mechanics systems, Proc. Natl. Acad. Sci. U.S.A. 104, 7922 (2007).
* [9] F. Dörfler, M. Chertkov, and F. Bullo, Synchronization in complex oscillator networks and smart grids, Proc. Natl. Acad. Sci. U.S.A. 110, 1005 (2013).
* [10] M. E. Mangoni and J. Nargeot, Genesis and regulation of the heart automaticity, Physiol. Rev. 88, 919 (2008).
* [11] I. Rajapakse, M. D. Perlman, D. Scalzo, C. Kooperberg, M. Groudine, and S. T. Kosak, The emergence of lineage-specific chromosomal topologies from coordinate gene regulation, Proc. Natl. Acad. Sci. U.S.A. 106, 6679 (2009).
* [12] L. M. Pecora and T. L. Carroll, Master stability function for synchronized coupled systems, Phys. Rev. Lett. 80, 2109 (1998).
* [13] T. Nishikawa and A. Motter, Synchronization is optimal in nondiagonalizable networks, Phys. Rev. E 73, 065106(R) (2006).
* [14] M. Girvan and M. E. J. Newman, Community structure in social and biological networks, Proc. Natl. Acad. Sci U.S.A. 99, 7821 (2002).
* [15] M. Barthélemy, Spatial Networks, Phys. Rep. 499, 1 (2011).
* [16] R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker, Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps, Proc. Natl. Acad. Sci. U.S.A. 102, 7426 (2005).
* [17] P. S. Skardal, D. Taylor, and J. Sun, Optimal synchronization of complex networks, Phys. Rev. Lett. 113, 144101 (2014).
* [18] P. S. Skardal, D. Taylor, and J. Sun, Optimal synchronization of directed complex networks, Chaos 26, 094807 (2016).
* [19] D. Taylor, P. S. Skardal, and J. Sun, Synchronization of heterogeneous oscillators under network modifications: Perturbation and optimization of the synchrony alignment function, SIAM J. Appl. Math. 76, 1984 (2016).
* [20] L. Arola-Fernández, P. S. Skardal, and A. Arenas, Geometric unfolding of synchronization dynamics on networks, Chaos 31, 061105 (2021).
* [21] P. S. Skardal, R. Sevilla-Escoboza, V. Vera-Ávila, and J. M. Buldú, Optimal phase synchronization in networks of phase-coherent chaotic oscillators, Chaos 27, 013111 (2017).
* [22] P. S. Skardal, D. Taylor, and J. Sun, Synchronization of network-coupled oscillators with uncertain dynamics, SIAM J. Appl. Math. 79, 2409 (2019).
* [23] Y. Kuramoto, Chemical oscillations, waves, and turbulence (Springer, 2012).
* [24] A. Politi and M. Rosenblum, Equivalence of phase-oscillator and integrate-and-fire models, Phys. Rev. E 91, 042916 (2015).
* [25] J. W. Simpson-Porco, F. Dörfler, and F. Bullo, Synchronization and power sharing for droop-controlled inverters in islanded microgrids, Automatica 49, 2603 (2013).
* [26] P. S. Skardal and A. Arenas, Control of coupled oscillator networks with application to microgrid technologies, Sci. Adv. 1, e1500339 (2015).
* [27] P. W. Holland, K. B. Laskey, and S. Leinhardt, Stochastic blockmodels: First steps, Soc. Networks 5, 109 (1983).
* [28] C. Grigg et al., The IEEE Reliability Test System—1996. A report prepared by the Reliability Test System Task Force of the Application of Probability Methods Subcommittee, IEEE Trans. Power Syst. 14, 1010 (1999).
* [29] D. Taylor, F. Klimm, H. A. Harrington, M. Kramar, K. Mischaikow, M. A. Porter, and P.J. Mucha, Topological data analysis of contagion maps for examining spreading processes on networks, Nat. Commun. 6, 7723 (2015).
* [30] T. Kato, Perturbation theory for linear operators, vol. 132 (Springer Science & Business Media, 2013).
Supplemental Material: Grass-roots optimization of coupled oscillator networks
In this Supplemental Material we generalize the local approximation of the SAF
for more than two subsystems. First we present the full derivation of the
approximation for the case of three subsystems, and then we discuss the
generalization to an arbitrary number of subsystems.
## Local Approximation of the SAF for Networks with Three Subsystems
To provide insight into systems with more than two subsystems, we present here
the case of three subsystems and derive a local approximation to the SAF
analogous to the one which we presented in the main text. In this case the
network adjacency matrix can be written in block form as
$\displaystyle A=\begin{bmatrix}A^{(1)}&B^{(12)}&B^{(13)}\\\
B^{(12)T}&A^{(2)}&B^{(23)}\\\ B^{(13)T}&B^{(23)T}&A^{(3)}\end{bmatrix},$ (1)
where $A^{(1)}$, $A^{(2)}$, and $A^{(3)}$ are the adjacency matrices for the
three subsystems and $B^{(12)}$, $B^{(13)}$, and $B^{(23)}$ capture the
connections between the respective subsystems. We denote the sizes of the
three subsystems by $N_{1}$, $N_{2}$, and $N_{3}$ so that
$A^{(1)}\in\mathbb{R}^{N_{1}\times N_{1}}$, $A^{(2)}\in\mathbb{R}^{N_{2}\times
N_{2}}$, $A^{(3)}\in\mathbb{R}^{N_{3}\times N_{3}}$,
$B^{(12)}\in\mathbb{R}^{N_{1}\times N_{2}}$,
$B^{(13)}\in\mathbb{R}^{N_{1}\times N_{3}}$, and
$B^{(23)}\in\mathbb{R}^{N_{2}\times N_{3}}$. We are interested then in the
perturbed combinatorial Laplacian, given by
$\displaystyle L(\epsilon)=L_{0}+\epsilon\Delta L,$ (2)
where
$\displaystyle L_{0}=\begin{bmatrix}L^{(1)}&0&0\\\ 0&L^{(2)}&0\\\
0&0&L^{(3)}\end{bmatrix},$ (3)
$\Delta L=(\|L_{0}\|/\|L_{B}\|)L_{B}$, and
$\displaystyle
L_{B}=\begin{bmatrix}D_{B^{(12)}+B^{(13)}}&-B^{(12)}&-B^{(13)}\\\
-B^{(12)T}&D_{B^{(12)T}+B^{(23)}}&-B^{(23)}\\\
-B^{(13)T}&-B^{(23)T}&D_{B^{(13)T}+B^{(23)T}}\end{bmatrix}.$ (4)
Once again, the choice $\epsilon=\|L_{B}\|/\|L_{0}\|\ll 1$ recovers the
original Laplacian matrix.
As in the two-subsystem case, it is useful to first discuss the spectral
properties of $L_{0}$. Since it is a block-diagonal matrix, its eigenvalues
are given by the union of the eigenvalues of the respective blocks,
$\displaystyle\\{\lambda_{j}\\}_{j=1}^{N}=\\{\mu_{j}\\}_{j=1}^{N_{1}}\bigcup\\{\nu_{j}\\}_{j=1}^{N_{2}}\bigcup\\{\eta_{j}\\}_{j=1}^{N_{3}},$
(5)
where $\\{\mu_{j}\\}_{j=1}^{N_{1}}$ denotes the eigenvalues of $L^{(1)}$,
$\\{\nu_{j}\\}_{j=1}^{N_{2}}$ denotes the eigenvalues of $L^{(2)}$, and
$\\{\eta_{j}\\}_{j=1}^{N_{3}}$ denotes the eigenvalues of $L^{(3)}$. The
associated eigenvectors are given by
$\displaystyle\\{\bm{v}^{j}\\}_{j=1}^{N}=\left\\{\begin{bmatrix}\bm{u}^{j}\\\
\bm{0}\\\
\bm{0}\end{bmatrix}\right\\}_{j=1}^{N_{1}}\bigcup\left\\{\begin{bmatrix}\bm{0}\\\
\bm{x}^{j}\\\
\bm{0}\end{bmatrix}\right\\}_{j=1}^{N_{2}}\bigcup\left\\{\begin{bmatrix}\bm{0}\\\
\bm{0}\\\ \bm{y}^{j}\end{bmatrix}\right\\}_{j=1}^{N_{3}}.$ (6)
where $\\{\bm{u}^{j}\\}_{j=1}^{N_{1}}$, $\\{\bm{x}^{j}\\}_{j=1}^{N_{2}}$, and
$\\{\bm{y}^{j}\\}_{j=1}^{N_{3}}$ are the associated eigenvectors for
$L^{(1)}$, $L^{(2)}$, and $L^{(3)}$, respectively. The most critical
observation to make is that each diagonal block of $L_{0}$ has a trivial
eigenvalue, namely, $\mu_{1},\nu_{1},\eta_{1}=0$, so the nullspace of $L_{0}$
is three-dimensional since it has a triple eigenvalue degeneracy at
$\lambda_{1,2,3}=0$. It is then convenient to rewrite the basis vectors for
this trivial eigenspace using the following eigenvectors:
$\displaystyle\bm{v}^{1}=\frac{1}{\sqrt{N}}\begin{bmatrix}\bm{1}\\\ \bm{1}\\\
\bm{1}\end{bmatrix},~{}~{}~{}\bm{v}^{2}=\frac{\sqrt{N_{1}N_{2}}}{N_{1}+N_{2}}\begin{bmatrix}\bm{1}/N_{1}\\\
-\bm{1}/N_{2}\\\
\bm{0}\end{bmatrix},~{}~{}~{}\bm{v}^{3}=\frac{\sqrt{N_{2}N_{3}}}{N_{2}+N_{3}}\begin{bmatrix}\bm{0}\\\
\bm{1}/N_{2}\\\ -\bm{1}/N_{3}\end{bmatrix},$ (7)
where, similar to the two subsystem case, $\bm{v}^{1}$ is the constant-valued
eigenvector that is associated with the synchronization manifold and whose
eigenvalue $\lambda_{1}=0$ remains constant as $\epsilon$ increases (i.e.,
$v^{1}(\epsilon)=v^{1}$ regardless of $\epsilon$). On the other hand,
$\bm{v}^{2}$ and $\bm{v}^{3}$ will play important roles in the perturbation
analysis since $\lambda_{2}(\epsilon)$ and $\lambda_{3}(\epsilon)$ must take
positive values for any $\epsilon>0$. We note that the vector
$\sqrt{N_{1}N_{3}}/(N_{1}+N_{3})\begin{bmatrix}\bm{1}/N_{1}\\\ \bm{0}\\\
-\bm{1}/N_{3}\end{bmatrix}$ may also be used in place of either
$\bm{v}^{2}$ or $\bm{v}^{3}$, but as it is just a linear combination of the
two vectors already chosen, it yields the same results given below.
Given the initial spectral properties of $L_{0}$, we consider the following
perturbative expansions. Specifically, for the eigenvalues of $L(\epsilon)$ we
have
$\displaystyle\lambda_{j}(\epsilon)$
$\displaystyle=\epsilon\delta\lambda_{j}^{(1)}+\epsilon^{2}\delta\lambda_{j}^{(2)}+\mathcal{O}(\epsilon^{3}),$
(8)
for $j=2,3$ and
$\displaystyle\lambda_{j}(\epsilon)$
$\displaystyle=\lambda_{j}+\epsilon\delta\lambda_{j}^{(1)}+\epsilon^{2}\delta\lambda_{j}^{(2)}+\mathcal{O}(\epsilon^{3}),$
(9)
for $j=4,\dots,N$. We again assume that the eigenvectors of $L(\epsilon)$ are
continuously differentiable to approximate
$\displaystyle\bm{v}^{j}(\epsilon)$
$\displaystyle=\bm{v}^{j}+\epsilon\delta\bm{v}^{j(1)}+\epsilon^{2}\delta\bm{v}^{j(2)}+\mathcal{O}(\epsilon^{3}).$
(10)
for $j=2,\dots,N$.
Our primary interest is the SAF of the perturbed network, and as we did in the
two subsystem case with the term associated with $j=2$, here we will treat the
terms associated with $j=2$ and $3$ separately:
$\displaystyle
J(\bm{\omega},L(\epsilon))=\frac{1}{N}\left(\frac{\langle\bm{\omega},\bm{v}^{2}(\epsilon)\rangle}{\lambda_{2}(\epsilon)}\right)^{2}+\frac{1}{N}\left(\frac{\langle\bm{\omega},\bm{v}^{3}(\epsilon)\rangle}{\lambda_{3}(\epsilon)}\right)^{2}+\frac{1}{N}\sum_{j=4}^{N}\left(\frac{\langle\bm{\omega},\bm{v}^{j}(\epsilon)\rangle}{\lambda_{j}(\epsilon)}\right)^{2}.$
(11)
We now consider the contribution of these different terms. Beginning with the
terms associated with $j=2$ and $3$, insert Eqs. (8) and (10) into the
relevant terms in Eq. (11), expand, and collect similar terms to obtain
$\displaystyle\frac{1}{N}\left(\frac{\langle\bm{\omega},\bm{v}^{j}(\epsilon)\rangle}{\lambda_{j}(\epsilon)}\right)^{2}$
$\displaystyle=N^{-1}\epsilon^{-2}\left(\frac{\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\delta\lambda_{j}^{(1)})^{2}}\right)+N^{-1}\epsilon^{-1}\left(\frac{2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\delta\lambda_{j}^{(1)})^{2}}-\frac{2\delta\lambda_{j}^{(2)}\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\delta\lambda_{j}^{(1)})^{3}}\right)$
$\displaystyle+N^{-1}\left(\frac{\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle^{2}+2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(2)}\rangle}{(\delta\lambda_{j}^{(1)})^{2}}-\frac{4\delta\lambda_{j}^{(2)}\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\delta\lambda_{j}^{(1)})^{3}}\right.$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\left.+\frac{(3(\delta\lambda_{j}^{(2)})^{2}-2\delta\lambda_{j}^{(1)}\delta\lambda_{j}^{(3)})\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\delta\lambda_{j}^{(1)})^{4}}\right)+\mathcal{O}(N^{-1}\epsilon).$
(12)
On the other hand, for $j=4,\dots,N$, we insert Eqs. (9) and (10) into the
relevant terms in Eq. (11), expand, and collect similar terms to obtain
$\displaystyle\frac{1}{N}\left(\frac{\langle\bm{\omega},\bm{v}^{j}(\epsilon)\rangle}{\lambda_{j}(\epsilon)}\right)^{2}$
$\displaystyle=N^{-1}\left(\frac{\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{2}}\right)+N^{-1}\epsilon\left(\frac{2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\lambda_{j})^{2}}-\frac{2\delta\lambda_{j}^{(1)}\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{3}}\right)$
$\displaystyle+N^{-1}\epsilon^{2}\left(\frac{\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle^{2}+2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(2)}\rangle}{(\lambda_{j})^{2}}-\frac{4\delta\lambda_{j}^{(1)}\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\lambda_{j})^{3}}\right.$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\left.+\frac{(3(\delta\lambda_{j}^{(1)})^{2}-2\lambda_{j}\delta\lambda_{j}^{(2)})\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{4}}\right)+\mathcal{O}(N^{-1}\epsilon^{3}).$
(13)
Inserting Eqs. (12) and (13) into Eq. (11), we then obtain
$\displaystyle J($
$\displaystyle\bm{\omega},L(\epsilon))=N^{-1}\epsilon^{-2}\left(\frac{\langle\bm{\omega},\bm{v}^{2}\rangle^{2}}{(\delta\lambda_{2}^{(1)})^{2}}+\frac{\langle\bm{\omega},\bm{v}^{3}\rangle^{2}}{(\delta\lambda_{3}^{(1)})^{2}}\right)$
$\displaystyle+N^{-1}\epsilon^{-1}\left(\frac{2\langle\bm{\omega},\bm{v}^{2}\rangle\langle\bm{\omega},\delta\bm{v}^{2(1)}\rangle}{(\delta\lambda_{2}^{(1)})^{2}}-\frac{2\delta\lambda_{2}^{(2)}\langle\bm{\omega},\bm{v}^{2}\rangle^{2}}{(\delta\lambda_{2}^{(1)})^{3}}+\frac{2\langle\bm{\omega},\bm{v}^{3}\rangle\langle\bm{\omega},\delta\bm{v}^{3(1)}\rangle}{(\delta\lambda_{3}^{(1)})^{2}}-\frac{2\delta\lambda_{3}^{(2)}\langle\bm{\omega},\bm{v}^{3}\rangle^{2}}{(\delta\lambda_{3}^{(1)})^{3}}\right)$
$\displaystyle+N^{-1}\left(\frac{\langle\bm{\omega},\delta\bm{v}^{2(1)}\rangle^{2}+2\langle\bm{\omega},\bm{v}^{2}\rangle\langle\bm{\omega},\delta\bm{v}^{2(2)}\rangle}{(\delta\lambda_{2}^{(1)})^{2}}-\frac{4\delta\lambda_{2}^{(2)}\langle\bm{\omega},\bm{v}^{2}\rangle\langle\bm{\omega},\delta\bm{v}^{2(1)}\rangle}{(\delta\lambda_{2}^{(1)})^{3}}+\frac{(3(\delta\lambda_{2}^{(2)})^{2}-2\delta\lambda_{2}^{(1)}\delta\lambda_{2}^{(3)})\langle\bm{\omega},\bm{v}^{2}\rangle^{2}}{(\delta\lambda_{2}^{(1)})^{4}}\right.$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\left.\frac{\langle\bm{\omega},\delta\bm{v}^{3(1)}\rangle^{2}+2\langle\bm{\omega},\bm{v}^{3}\rangle\langle\bm{\omega},\delta\bm{v}^{3(2)}\rangle}{(\delta\lambda_{3}^{(1)})^{2}}-\frac{4\delta\lambda_{3}^{(2)}\langle\bm{\omega},\bm{v}^{3}\rangle\langle\bm{\omega},\delta\bm{v}^{3(1)}\rangle}{(\delta\lambda_{3}^{(1)})^{3}}+\frac{(3(\delta\lambda_{3}^{(2)})^{2}-2\delta\lambda_{3}^{(1)}\delta\lambda_{3}^{(3)})\langle\bm{\omega},\bm{v}^{3}\rangle^{2}}{(\delta\lambda_{3}^{(1)})^{4}}\right)$
$\displaystyle+\eta_{1}J(\bm{\omega}^{1},L_{1})+\eta_{2}J(\bm{\omega}^{2},L_{2})+\eta_{3}J(\bm{\omega}^{3},L_{3})+\epsilon\left[N^{-1}\sum_{j=4}^{N}\left(\frac{2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\lambda_{j})^{2}}-\frac{2\delta\lambda_{j}^{(1)}\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{3}}\right)\right]+\mathcal{O}(N^{-1}\epsilon,\epsilon^{2}),$
(14)
where we have used that, for the three subsystem case, we have
$\displaystyle\frac{1}{N}\sum_{j=4}^{N}\frac{\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{\lambda_{j}^{2}}=\eta_{1}J(\bm{\omega}^{1},L_{1})+\eta_{2}J(\bm{\omega}^{2},L_{2})+\eta_{3}J(\bm{\omega}^{3},L_{3}).$
(15)
Lastly, to complete the analysis we consider not only the contributions of
$\langle\bm{\omega},\bm{v}^{2}\rangle$, but also
$\langle\bm{\omega},\bm{v}^{3}\rangle$. In particular, we note that
$\displaystyle\langle\bm{\omega},\bm{v}^{2}\rangle=\frac{\sqrt{\eta_{1}\eta_{2}}}{\eta_{12}}(\langle\omega^{1}\rangle-\langle\omega^{2}\rangle),$
(16)
and
$\displaystyle\langle\bm{\omega},\bm{v}^{3}\rangle=\frac{\sqrt{\eta_{2}\eta_{3}}}{\eta_{23}}(\langle\omega^{2}\rangle-\langle\omega^{3}\rangle),$
(17)
where $\eta_{ij}=(N_{i}+N_{j})/N$. Thus, if we may engineer the network such
that
$\langle\omega^{1}\rangle=\langle\omega^{2}\rangle=\langle\omega^{3}\rangle$,
then all terms in Eq. (14) with $\langle\bm{\omega},\bm{v}^{2}\rangle$ or
$\langle\bm{\omega},\bm{v}^{3}\rangle$ vanish, yielding
$\displaystyle J(\bm{\omega},L(\epsilon))$
$\displaystyle=\eta_{1}J(\bm{\omega}^{1},L_{1})+\eta_{2}J(\bm{\omega}^{2},L_{2})+\eta_{3}J(\bm{\omega}^{3},L_{3})+N^{-1}\left(\frac{\langle\bm{\omega},\delta\bm{v}^{2(1)}\rangle^{2}}{(\delta\lambda_{2}^{(1)})^{2}}+\frac{\langle\bm{\omega},\delta\bm{v}^{3(1)}\rangle^{2}}{(\delta\lambda_{3}^{(1)})^{2}}\right)$
$\displaystyle+\epsilon\left[N^{-1}\sum_{j=4}^{N}\left(\frac{2\langle\bm{\omega},\bm{v}^{j}\rangle\langle\bm{\omega},\delta\bm{v}^{j(1)}\rangle}{(\lambda_{j})^{2}}-\frac{2\delta\lambda_{j}^{(1)}\langle\bm{\omega},\bm{v}^{j}\rangle^{2}}{(\lambda_{j})^{3}}\right)\right]+\mathcal{O}(N^{-1}\epsilon,\epsilon^{2}),$
(18)
where the leading-order behavior of the perturbed SAF is simply given by a
weighted average of the subsystem-specific SAFs and the weights come from
their relative sizes, which is our desired result and the analogous version of
Eq. (7) in the main text.
## Local Approximation of the SAF for Networks with an Arbitrary Number of
Subsystems
Before concluding, we emphasize that the three subsystem case above informs
the generalization of the local approximation to an arbitrary number of
subsystems. In particular, for $C$ subsystems, the unperturbed Laplacian
$L_{0}$ will contain $C$ diagonal blocks, each with a trivial eigenvalue.
Thus, a basis for the trivial eigenspace must be chosen so that, in addition
to $\bm{v}^{1}\propto\bm{1}$, there are $C-1$ eigenvectors whose eigenvalues
will become positive for positive $\epsilon$. This can be done by choosing,
for instance,
$\displaystyle\bm{v}^{2}=\begin{bmatrix}\bm{1}/N_{1}\\\ -\bm{1}/N_{2}\\\
\bm{0}\\\ \vdots\\\
\bm{0}\end{bmatrix},~{}~{}\bm{v}^{3}=\begin{bmatrix}\bm{0}\\\ \bm{1}/N_{2}\\\
-\bm{1}/N_{3}\\\ \vdots\\\
\bm{0}\end{bmatrix},~{}~{}\cdots~{}~{},~{}~{}\bm{v}^{j}=\begin{bmatrix}\vdots\\\
\bm{1}/N_{j-1}\\\ -\bm{1}/N_{j}\\\ \vdots\\\
\bm{0}\end{bmatrix},~{}~{}\cdots~{}~{},~{}~{}\bm{v}^{C}=\begin{bmatrix}\bm{0}\\\
\vdots\\\ \bm{0}\\\ \bm{1}/N_{C-1}\\\ -\bm{1}/N_{C}\end{bmatrix}.$ (19)
Then, after expansion, setting
$\langle\omega^{1}\rangle=\cdots=\langle\omega^{C}\rangle$ causes the two
lowest order contributions to $J(\bm{\omega},L(\epsilon))$ originating from
the terms associated with $j=2,\dots,C$ to vanish, yielding, to leading order,
$\displaystyle
J(\bm{\omega},L(\epsilon))\approx\eta_{1}J(\bm{\omega}^{1},L^{(1)})+\cdots+\eta_{C}J(\bm{\omega}^{C},L^{(C)}).$
(20)
# KnowMAN: Weakly Supervised Multinomial Adversarial Networks
Luisa März ⋄,†, Ehsaneddin Asgari †, Fabienne Braune †,
Franziska Zimmermann† and Benjamin Roth ⋄
⋄ Digital Philology, Research Group Data Mining and Machine Learning,
University of Vienna, Austria
† NLP Expert Center, Data:Lab, Volkswagen AG, Munich, Germany
###### Abstract
The absence of labeled data for training neural models is often addressed by
leveraging knowledge about the specific task, resulting in heuristic but noisy
labels. The knowledge is captured in labeling functions, which detect certain
regularities or patterns in the training samples and annotate corresponding
labels for training. This process of weakly supervised training may result in
an over-reliance on the signals captured by the labeling functions and hinder
models from exploiting other signals or generalizing well. We propose KnowMAN, an
adversarial scheme that makes it possible to control the influence of signals associated
with specific labeling functions. KnowMAN forces the network to learn
representations that are invariant to those signals and to pick up other
signals that are more generally associated with an output label. KnowMAN
strongly improves results compared to direct weakly supervised learning with a
pre-trained transformer language model and a feature-based baseline.
## 1 Introduction
Neural approaches rely on labeled data sets for training. For many tasks and
languages, such data is either scarce or not available at all. Knowledge-based
weak supervision tackles this problem by employing _labeling functions (LFs)_.
LFs are manually specified properties, e.g. keywords, that trigger the
automatic annotation of a specific label. However, these annotations contain
noise and biases that need to be handled.
A recent approach for denoising weakly supervised data is Snorkel (Ratner et
al., 2020). Snorkel focuses on estimating the reliability of LFs and of the
resulting heuristic _labels_. However, Snorkel does not address biases on the
_input side_ of weakly supervised data, which might lead to learned
representations that overfit the characteristics of specific LFs, hindering
generalization. We address the problem of overfitting to the LFs in this
paper.
Other approaches tackle such overfitting by deleting the LF signal completely
from the input side of an annotated sample: For example, Go et al. (2009)
strip out emoticons that were used for labeling the sentiment in tweets, and
Alt et al. (2019) mask the entities used for distant supervision of relation
extraction training data Mintz et al. (2009). However, as LFs are often
constructed from the most prototypical and reliable signals (e.g., keywords),
deleting them entirely from the feature space might – while preventing over-
reliance on them – hurt prediction quality considerably. We therefore seek a
way to blur the signals of the LFs instead of removing them.
In this work we propose KnowMAN (Knowledge-based Weakly Supervised Multinomial
Adversarial Networks), a method for controllable _soft deletion_ of LF
signals, allowing a trade-off between reliance and generalization. Inspired by
adversarial learning for domain adaptation Chen and Cardie (2018a); Ganin and
Lempitsky (2015), we consider LFs as domains and aim to learn a LF-invariant
feature extractor in our model. KnowMAN is composed of three modules: a
feature extractor, a classifier, and a discriminator. Specifically, KnowMAN
employs a classifier that learns the actual task and an adversarial opponent,
the LF-discriminator, that learns to distinguish between the different LFs.
Upstream of both is the shared feature extractor to which the gradient of the
classifier and the reversed gradient of the discriminator are propagated. In
our experiments, the feature extractor for encoding the input is a multi-layer
perceptron on top of either a bag-of-words vector or a transformer
architecture, but KnowMAN is in principle usable with any differentiable
feature extractor.
KnowMAN consistently outperforms our baselines by 2 to 30% depending on the
dataset. By setting a hyperparameter $\lambda$ that controls the influence of
the adversarial part we can control the degree of discarding the information
of LF-specific signals. The optimal $\lambda$ value depends on the dataset and
its properties.
The contributions of this work are i) proposing an adversarial architecture
for controlling the influence of signals associated with specific LFs, ii)
consistent improvements over weakly supervised baselines, iii) release of our
code 111https://github.com/LuisaMaerz/KnowMAN. To our knowledge, we are the
first to apply adversarial learning to overcome the noisiness of labels in
weak supervision.
## 2 Method
Figure 1: KnowMAN architecture. The figure depicts one iteration over a batch
of inputs. The parameters of $\mathcal{C}$ and $\mathcal{F}_{s}$ are updated
together, following the green arrows. The LF discriminator $\mathcal{D}$ is
updated following the red arrows. Solid lines indicate forward, dashed lines
the backward pass.
Our approach is composed of three interacting modules i) the shared feature
extractor $\mathcal{F}_{s}$, ii) the classifier $\mathcal{C}$ and iii) the LF
discriminator $\mathcal{D}$. The loss function of $\mathcal{C}$ rewards the
classifier $\mathcal{C}$ for predicting the correct label for the instance,
and the gradient is used for optimizing the shared feature extractor and
classifier modules towards that goal. At the same time, the loss function for
the LF-discriminator $\mathcal{D}$ rewards predicting which LF was responsible
for labeling an instance. However, in adversarial optimization, KnowMAN
backpropagates the _reversed_ gradient for the LF-discriminator, hence the
information indicative for distinguishing between specific LFs is weakened
throughout the network. The hyperparameter $\lambda$ controls the level of
weakening of the signals: the higher the value, the more influence is assigned
to the information from the discriminator $\mathcal{D}$. The result of the
interplay between classifier and LF-
discriminator is a shared feature representation that is good at predicting
the labels while reducing the influence of LF-specific signals, encouraging
the shared feature extractor to take other information (correlated with all
LFs for a class) into account.
In Figure 1, the arrows illustrate the training flow of the three modules. Due
to the adversarial nature of the LF discriminator $\mathcal{D}$, it has to be
trained with a separate optimizer (red arrows), while the rest of the network
is updated with the main optimizer (green arrows). When $\mathcal{D}$ is
trained the parameters of $\mathcal{C}$ and $\mathcal{F}_{s}$ are frozen and
vice versa.
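One standard way to realize the reversed gradient is a gradient-reversal layer in the style of Ganin and Lempitsky (2015). The PyTorch sketch below is our own minimal illustration of the three modules (layer sizes are placeholders), not the released KnowMAN code:

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient
    by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Shared feature extractor F_s, classifier C, and LF discriminator D.
n_features, hidden, n_classes, n_lfs = 10_000, 256, 2, 10
F_s = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(0.2))
C = nn.Sequential(nn.Linear(hidden, n_classes), nn.LogSoftmax(dim=-1))
D = nn.Sequential(nn.Linear(hidden, n_lfs), nn.LogSoftmax(dim=-1))
```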
To calculate the losses we use the canonical negative log-likelihood loss
(NLL) for both the classifier and the LF discriminator. The
classification NLL can be formalized as:
$\mathcal{L}_{C}(\hat{y_{i}},y_{i})=-\log P(\hat{y_{i}}=y_{i})$ (1)
where $y_{i}$ is the (weakly supervised) annotated label and $\hat{y_{i}}$ is
the prediction of the classifier module $\mathcal{C}$, for a training sample
$i$. Analogously, we can define the NLL for the LF discriminator:
$\mathcal{L_{D}}(\hat{lf}_{i},lf_{i})=-\log P(\hat{lf}_{i}=lf_{i})$ (2)
where $lf_{i}$ is the actual LF used for annotating sample $i$ and
$\hat{lf}_{i}$ is the predicted LF by the discriminator $\mathcal{D}$.
Accordingly, we minimize two different objectives within KnowMAN:
$J_{\mathcal{C}}=\sum_{i=1}^{N}\mathcal{L_{C}}(\mathcal{C}(\mathcal{F}_{s}(x_{i}));y_{i})$
(3)
$J_{\mathcal{D}}=\sum_{i=1}^{N}\mathcal{L_{D}}(\mathcal{D}(\mathcal{F}_{s}(x_{i}));lf_{i})$
(4)
Here the shared feature extractor has two different objectives: i) help
$\mathcal{C}$ to achieve better classification performance and ii) make the
feature distribution invariant to the signals from the LFs. This is captured
by the shared objective:
$J_{\mathcal{F}_{s}}=J_{\mathcal{C}}+\lambda\cdot(-J_{\mathcal{D}})$ (5)
where $\lambda$ is the parameter that controls the adversarial influence i.e.
the degree of LF signal blur. $-J_{\mathcal{D}}$ is the reversed loss of the
LF discriminator $\mathcal{D}$, which represents $\mathcal{C}$'s adversarial
opponent. In general, the exact implementation or architecture of the
individual modules is interchangeable and can be set up as required. This
makes KnowMAN a universally applicable and easily customizable architecture.
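Putting Eqs. (1)-(5) together, a sketch of one training step under the assumptions of the previous snippet (it reuses `F_s`, `C`, `D`, and `grad_reverse`; learning rates and $\lambda$ are illustrative):

```python
nll = nn.NLLLoss()
opt_main = torch.optim.Adam(list(F_s.parameters()) + list(C.parameters()), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
lam = 2.0   # lambda: controls the degree of LF-signal blur

def train_batch(x, y, lf):
    # (1) Update D on frozen features, Eq. (4); F_s and C are not touched.
    opt_D.zero_grad()
    nll(D(F_s(x).detach()), lf).backward()
    opt_D.step()
    # (2) Update F_s and C, Eq. (5): classification loss plus the reversed
    # discriminator loss; D's parameters are only ever updated by opt_D.
    opt_main.zero_grad()
    feats = F_s(x)
    loss = nll(C(feats), y) + nll(D(grad_reverse(feats, lam)), lf)
    loss.backward()
    opt_main.step()
    return loss.item()
```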
## 3 Experiments
### 3.1 Data
For our experiments we use three standard datasets for weak supervision.
Spam. Based on the YouTube comments dataset Alberto et al. (2015) there is a
smaller Spam dataset from Snorkel Ratner et al. (2020) where the task is to
classify if a text is relevant to a certain YouTube video or contains spam.
This dataset is very small and consists of a train and a test set only.
The $10$ LFs use keywords and regular expressions.
Spouse. This dataset for extracting the _spouse_ relation has also been
created by Snorkel; it is based on the Signal Media One-Million News Articles
Dataset Corney et al. (2016). The $9$ LFs use information from a knowledge
base, keywords and patterns. One peculiarity of this dataset is that over 90%
of the instances do not hold a spouse relation.
IMDb. The IMDb dataset contains movie reviews that should be classified in
terms of their sentiment (binary, positive or negative sentiment). The LFs
used for this dataset are occurrences of positive and negative keywords from
Hu and Liu (2004). A particular characteristic of this data set is the large
number of $6800$ LFs, which constitutes a particular challenge to the Snorkel
denoising framework. As a result Snorkel fails to calculate its generative
model, since its memory consumption exceeds the available limit of 32GB RAM.
### 3.2 Experimental setup
For the experiments we use two different methods for encoding the input: i)
TF-IDF encoding and ii) a DistilBERT transformer. For TF-IDF encoding, we
vectorize222https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
the input sentences and feed them to a simple MLP. In the transformer setting,
the sequences of words are encoded using a pretrained DistilBERT. Similar to
BERT Devlin et al. (2019), DistilBERT is a masked transformer language model,
which is a smaller, lighter, and faster version leveraging knowledge
distillation while retaining 97% of BERT’s language understanding capabilities
Sanh et al. (2019).
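A minimal sketch of the TF-IDF input path with scikit-learn (toy inputs; the vocabulary cap is our own choice):

```python
import torch
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["check out my channel", "great song, love it"]   # toy inputs
vec = TfidfVectorizer(max_features=10_000)
X = torch.tensor(vec.fit_transform(texts).toarray(), dtype=torch.float32)
# X can now be fed to the shared feature extractor F_s sketched in Section 2.
```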
Our encoder takes the representation of the CLS token from a frozen DistilBERT
and learns a non-linear transformation with a drop-out layer to avoid
overfitting Srivastava et al. (2014):
$h_{i}=DistilBERT(Sentence_{i})_{[CLS]}$
${F_{s}}_{i}=Dropout(ReLU(f(h_{i})))$
where $DistilBERT(.)_{[CLS]}$ generates the hidden state of the BERT’s
classifier token (CLS) and the function $f$ represents a linear transformation
for the $i^{th}$ sentence.
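A hedged sketch of this frozen-DistilBERT feature extractor with the Hugging Face transformers library (the hidden size and dropout rate here are illustrative, not necessarily the paper's settings):

```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tok = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
bert = DistilBertModel.from_pretrained("distilbert-base-uncased")
bert.eval()   # the encoder stays frozen; only the layers on top are trained

texts = ["check out my channel", "great song, love it"]
enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    h = bert(**enc).last_hidden_state[:, 0]   # hidden state of the CLS token

f = torch.nn.Linear(bert.config.dim, 256)     # linear transformation f
feats = torch.nn.functional.dropout(torch.relu(f(h)), p=0.2)
```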
The classifier and discriminator networks following the feature extractor are
in line with the implementation of Chen and Cardie (2018a) for domain-
adversarial learning. Both are simple sequential models with dropout, batch
normalization, $ReLU$ activation and softmax as the last layer. Please see our
code for implementation details. In the TF-IDF setup we use Adam Kingma and Ba
(2014) for both optimizers. When using transformer encoding the $\mathcal{D}$
optimizer again is Adam and the $\mathcal{C}$ optimizer is AdamW Loshchilov
and Hutter (2018), as this yielded more stable results.
Baselines For each input encoding we implemented several baselines. Weakly
supervised TF-IDF (WS TF-IDF) and Weakly supervised DistilBERT (WS
DistilBERT). Both calculate the labels for each instance in the train set
based on their matching LFs. WS TF-IDF directly applies a logistic regression
classifier to the input and the calculated labels. WS DistilBERT directly uses
the DistilBERT uncased model for English Sanh et al. (2019) as a prediction
model. The second baseline (Feature TF-IDF, Feature DistilBERT) uses feature
extractor and classifier layers of KnowMAN without taking the information of
$\mathcal{D}$ into account (this is equal to setting $\lambda$ to zero). We
also fine-tuned the pure language model (Fine-tuned DistilBERT) without
further transformations and without integrating the KnowMAN architecture.
We also compare with training TF-IDF and DistilBERT models on labels denoised
by Snorkel (Snorkel TF-IDF, Snorkel DistilBERT). However, Snorkel denoising
failed for the IMDb data set due to the large amount of LFs.
| Model | Spam Acc | Spouse P | Spouse R | Spouse F1 | IMDb Acc |
|---|---|---|---|---|---|
| WS TF-IDF | 0.87 | 0.12 | 0.83 | 0.20* | 0.65* |
| Feature TF-IDF | 0.91 | 0.12 | 0.76 | 0.21* | 0.75* |
| Snorkel TF-IDF | 0.81 | 0.18 | 0.63 | 0.28* | 0.50* |
| KnowMAN TF-IDF | 0.94 | 0.16 | 0.72 | 0.35 | 0.77 |
| Fine-tuned DistilBERT | 0.92 | 0.14 | 0.78 | 0.24 | 0.70 |
| WS DistilBERT | 0.87 | 0.09 | 0.90 | 0.17* | 0.67* |
| Feature DistilBERT | 0.86 | 0.18 | 0.80 | 0.29* | 0.74 |
| Snorkel DistilBERT | 0.88 | 0.13 | 0.70 | 0.23* | 0.49* |
| KnowMAN DistilBERT | 0.90 | 0.27 | 0.67 | 0.39 | 0.76 |
Table 1: Results on the test sets. The * indicates that KnowMAN performs
significantly better than the marked model. For the Spouse data set we
report significance for the F1 scores only.
KnowMAN We refer to the KnowMAN architecture as TF-IDF KnowMAN and DistilBERT
KnowMAN. Depending on the dataset we choose different $\lambda$ values. We
also implemented two ways of evaluation and best model saving during training:
i) evaluate after each batch and save the best model, ii) evaluate after a
certain number of steps in between the batches and save the best model.
Hyperparameters We perform hyperparameter tuning using Bayesian optimization
(Snoek et al., 2012) for the IMDb and Spouse datasets. For Spam,
hyperparameters are not optimized, as no validation set is available. Sampling
history and resulting hyperparameters are reported in the Appendix (Figures 2
and 3), along with the hyperparameters chosen for the Spam data set.
Evaluation For the evaluation of the IMDb and the Spam datasets we use
accuracy, for the Spouse dataset we use the macro F1 score of the positive
class. To check statistical significance we use randomized testing Yeh (2000).
Results are considered significant if $p<0.05$.
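For reference, a minimal sketch of an approximate randomization test in the spirit of Yeh (2000) (our own implementation: per-instance outputs of the two systems are randomly swapped to build the null distribution):

```python
import numpy as np

def randomization_test(correct_a, correct_b, n_rounds=10_000, seed=0):
    """correct_a, correct_b: boolean arrays marking which test instances
    each system got right. Returns an estimated p-value for the
    observed accuracy difference."""
    rng = np.random.default_rng(seed)
    a = np.asarray(correct_a, dtype=float)
    b = np.asarray(correct_b, dtype=float)
    observed = abs(a.mean() - b.mean())
    hits = 0
    for _ in range(n_rounds):
        flip = rng.random(len(a)) < 0.5          # swap outputs at random
        a_s = np.where(flip, b, a)
        b_s = np.where(flip, a, b)
        hits += abs(a_s.mean() - b_s.mean()) >= observed
    return (hits + 1) / (n_rounds + 1)
```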
### 3.3 Results
The results of the experiments are shown in Table 1. For the TF-IDF setup
KnowMAN TF-IDF outperforms the baselines across all datasets. We find the
optimal $\lambda$ values as follows: Spam/Spouse/IMDb = 2/5/4.9. Using the
additional feature extractor layer (Feature TF-IDF) is beneficial compared to
direct logistic regression for all datasets. Snorkel TF-IDF can outperform the
other two baselines for the Spouse dataset only.
Fine-tuning DistilBERT cannot outperform our best KnowMAN model. However, for
the Spam dataset Fine-tuned DistilBERT gives better results than KnowMAN
DistilBERT but is still worse than KnowMAN TF-IDF. Compared to WS TF-IDF,
WS DistilBERT gives the same results for the Spam dataset and slightly better
results for IMDb; for Spouse the performance decreases. Snorkel
DistilBERT can outperform the other two baselines for the Spam dataset only.
The low performance of Snorkel on IMDb (for both DistilBERT and TF-IDF) might
be explained by the very large number of LFs for this dataset. The KnowMAN
DistilBERT results across datasets are in line with the TF-IDF setup - KnowMAN
can outperform all baselines for the Spouse and IMDb dataset. We observe that
$\lambda=5$ for Spouse and $\lambda=1$ for IMDb is most beneficial when using
DistilBERT. For the Spam dataset we observe that KnowMAN (with $\lambda=2$)
outperforms all the baselines, except for the fine-tuned DistilBERT model.
Discussion The performance drop we observe with DistilBERT for KnowMAN
compared to the TF-IDF setup on the IMDb dataset could be explained by
implementation details. Due to memory issues we have to truncate the input
when using DistilBERT, and since the movie reviews from IMDb are rather long,
this could harm performance. Since the Spam dataset is very small, a single
wrongly classified instance can have a great impact on the results, which
could explain why KnowMAN TF-IDF outperforms KnowMAN DistilBERT here as well.
In general we could not perform hyperparameter optimization for the DistilBERT
experiments due to memory issues, so those results might not have reached
their optimum; even so, they show the value of using KnowMAN.
Overall our results confirm the assumption that KnowMAN
enables a focus shift of the shared feature extractor from the signals of the
LFs towards other valuable signals. KnowMAN consistently improves over the
baselines significantly, except for the Spam dataset. We assume that this
dataset is too small to see significant
changes in the results. Compared to the implementation of Chen and Cardie
(2018a) we could not use the specialized domain feature extractor for our
datasets in the experiments. This is due to the fact that our test sets do not
contain information about LF matches. However, we will address this issue by
integrating a mixture of experts module for the specialized feature extractor
as recommended by Chen et al. (2019).
## 4 Related Work
Adversarial neural networks have been used to reduce the divergence between
distributions, such as Goodfellow et al. (2014), Chen et al. (2018) and Ganin
and Lempitsky (2015). The latter proposed an architecture for gradient
reversal and a shared feature extractor. Unlike us, they focused on a binary
domain discriminator. Similarly, Chen and Cardie (2018a) use an adversarial
approach in a multinomial scenario for domain adaptation.
Some works on adversarial learning in the context of weak supervision focus on
different aspects and only share similarity in name with our approach: Wu et
al. (2017) use _virtual adversarial training_ Miyato et al. (2017) for
perturbing input representations, which can be viewed as a general
regularization technique not specific to weakly supervised learning. Qin et
al. (2018); Zeng et al. (2018) use generative adversarial mechanisms for
selecting _negative_ training instances that are difficult to discriminate
from heuristically annotated ones for a classifier.
Several approaches have focused on denoising the labels for weakly supervised
learning Takamatsu et al. (2012); Manning et al. (2014); Lin et al. (2016).
Snorkel Ratner et al. (2020) is one of the most general approaches in this
line of work. However, Snorkel only models biases and correlations of LFs, and
does not consider problems of weak supervision that may stem from biases in
the features and learned representations.
A recent approach that focuses on denoising weakly supervised data is Knodle
(Sedova et al., 2021), a framework for comparing different methods that
improve weakly supervised learning. We use some of their datasets for our
approach but denoise the signals of the LFs during training.
## 5 Conclusion
We propose KnowMAN - an adversarial neural network for training models with
noisy weakly supervised data. By integrating a shared feature extractor that
learns labeling function invariant features, KnowMAN can improve results on
weakly supervised data drastically across all experiments and datasets in our
setup. The experiments also show that the adverse effect of labeling function-
specific signals is highly dependent on the datasets and their properties.
Therefore, it is crucial to fine-tune the $\lambda$ parameter on a validation
set to find the optimal degree of blurring the labeling function signals.
Since the modules in the KnowMAN architecture are easily exchangeable, KnowMAN
can be applied to any architecture and dataset labeled with heuristic labeling
functions.
## Acknowledgements
This research was funded by the WWTF through the project ”Knowledge-infused
Deep Learning for Natural Language Processing” (WWTF Vienna Research Group
VRG19-008) and by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation), RO 5127/2-1.
## References
* Alberto et al. (2015) Tulio Alberto, Johannes Lochter, and Tiago Almeida. 2015. TubeSpam: Comment spam filtering on YouTube. pages 138–143.
* Alt et al. (2019) Christoph Alt, Marc Hübner, and Leonhard Hennig. 2019. Improving relation extraction by pre-trained language representations. In _Automated Knowledge Base Construction (AKBC)_.
* Chen et al. (2019) Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2019. Multi-source cross-lingual model transfer: Learning what to share. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 3098–3112, Florence, Italy. Association for Computational Linguistics.
* Chen and Cardie (2018a) Xilun Chen and Claire Cardie. 2018a. Multinomial adversarial networks for multi-domain text classification. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1226–1240, New Orleans, Louisiana. Association for Computational Linguistics.
* Chen and Cardie (2018b) Xilun Chen and Claire Cardie. 2018b. Multinomial adversarial networks for multi-domain text classification. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1226–1240, New Orleans, Louisiana. Association for Computational Linguistics.
* Chen et al. (2018) Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep averaging networks for cross-lingual sentiment classification. _Transactions of the Association for Computational Linguistics_, 6:557–570.
* Corney et al. (2016) D. Corney, M. Albakour, Miguel Martinez-Alvarez, and Samir Moussa. 2016. What do a million news articles look like? In _NewsIR@ECIR_.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ganin and Lempitsky (2015) Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In _Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37_ , ICML’15, page 1180–1189. JMLR.org.
* Go et al. (2009) Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. _CS224N project report, Stanford_ , 1(12):2009.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In _NIPS_.
* Hu and Liu (2004) Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In _Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , KDD ’04, page 168–177, New York, NY, USA. Association for Computing Machinery.
* Kingma and Ba (2014) Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _International Conference on Learning Representations_.
* Lin et al. (2016) Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 2124–2133.
* Loshchilov and Hutter (2018) Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam.
* Manning et al. (2014) Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In _Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations_ , pages 55–60.
* Mintz et al. (2009) Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In _Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP_ , pages 1003–1011.
* Miyato et al. (2017) Takeru Miyato, Andrew M. Dai, and Ian Goodfellow. 2017. Adversarial training methods for semi-supervised text classification.
* Qin et al. (2018) Pengda Qin, Weiran Xu, and William Yang Wang. 2018. DSGAN: Generative adversarial training for distant supervision relation extraction. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 496–505, Melbourne, Australia. Association for Computational Linguistics.
* Ratner et al. (2020) Alexander Ratner, Stephen H. Bach, Henry R. Ehrenberg, Jason A. Fries, Sen Wu, and Christopher Ré. 2020. Snorkel: rapid training data creation with weak supervision. _VLDB J._ , 29(2-3):709–730.
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. _CoRR_ , abs/1910.01108.
* Sedova et al. (2021) Anastasiya Sedova, Andreas Stephan, Marina Speranskaya, and Benjamin Roth. 2021. Knodle: Modular weakly supervised learning with PyTorch. _CoRR_, abs/2104.11557.
* Snoek et al. (2012) Jasper Snoek, Hugo Larochelle, and Ryan Adams. 2012. Practical bayesian optimization of machine learning algorithms. In _Proc. NIPS_.
* Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. _The journal of machine learning research_ , 15(1):1929–1958.
* Takamatsu et al. (2012) Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In _Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 721–729.
* Wu et al. (2017) Yi Wu, David Bamman, and Stuart Russell. 2017. Adversarial training for relation extraction. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 1778–1783.
* Yeh (2000) Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In _COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics_.
* Zeng et al. (2018) Daojian Zeng, Yuan Dai, Feng Li, R Simon Sherratt, and Jin Wang. 2018. Adversarial learning for distant supervised relation extraction. _Computers, Materials & Continua_, 55(1):121–136.
## Appendix A Appendix
### A.1 Dataset statistics
The datasets used for the KnowMAN experiments have different properties; in particular, the number of labeling functions and the dataset sizes vary considerably.
Dataset | Classes | Train/test samples | LFs
---|---|---|---
Spam | 2 | 1586/250 | 10
Spouse | 2 | 22254/2701 | 9
IMDb | 2 | 40000/5000 | 6786
Table 2: Dataset statistics for the KnowMAN experiments. LFs are labeling functions.
### A.2 Hyperparameter optimization
We perform hyperparameter tuning using Bayesian optimization (Snoek et al., 2012). Bayesian optimization uses Bayes' theorem to direct the search for the minimum or maximum of a black-box objective function. Compared with random search and grid search, it tends to find better hyperparameters in fewer steps by balancing exploration and exploitation. Our hyperparameter space includes the batch size, dropout, the number of iterations over $\mathcal{D}$, the shared hidden size of the models, the learning rates for $\mathcal{D}$ and for $\mathcal{F}_{s},\mathcal{C}$, and the numbers of layers of $\mathcal{C},\mathcal{D}$ and $\mathcal{F}_{s}$. We implemented two ways of evaluating and saving the best model during training: i) evaluate after each batch and save the best model, ii) evaluate after a certain number of steps in between the batches and save the best model. We also optimized this number of steps when evaluating in between batches.
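To make the search concrete, the following is a minimal sketch of such a Bayesian hyperparameter search using scikit-optimize; the paper does not state which optimization library was used, and the `train_and_eval` stub and the exact ranges shown are illustrative assumptions rather than the authors' actual setup.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# Search space mirroring the hyperparameters listed in the text;
# the ranges are illustrative, not the paper's exact bounds.
space = [
    Integer(16, 1024, name="batch_size"),
    Real(0.1, 0.5, name="dropout"),
    Integer(1, 50, name="n_critic"),
    Real(0.1, 5.0, name="lambda_"),
    Integer(128, 1024, name="shared_hidden_size"),
    Real(1e-5, 1e-3, prior="log-uniform", name="learning_rate"),
    Integer(1, 10, name="n_layers_C"),
    Integer(1, 10, name="n_layers_D"),
    Integer(1, 10, name="n_layers_F"),
]

def train_and_eval(params):
    """Stand-in for training KnowMAN with `params` and returning the
    validation accuracy; replace with the real training loop."""
    return 0.5  # placeholder

def objective(params):
    return -train_and_eval(params)  # gp_minimize minimizes, so negate

result = gp_minimize(objective, space, n_calls=50, random_state=0)
print("best configuration:", result.x)
```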
We evaluated the models for IMDb and Spouse on the respective validation set. For the Spam dataset, no development set is available, so we used the following hyperparameters for KnowMAN TF-IDF, following the parameters of Chen and Cardie (2018b): batch size: 32, dropout: 0.4, n critic: 5, lambda: 2.0, shared hidden size: 700, learning rate C & F: 0.0001, learning rate D: 0.0001, number of F layers: 1, number of C layers: 1, number of D layers: 1.
Figure 2: Sampled hyperparameters for KnowMAN TF-IDF on IMDb. Optimal hyperparameters are indicated in red: batch size: 895, dropout: 0.275, n critic: 50, lambda: 4.9, shared hidden size: 585, learning rate C & F: 0.0001, learning rate D: 0.0001, number of F layers: 1, number of C layers: 1, number of D layers: 10. Histograms on the diagonal show, for each hyperparameter, how many samples were drawn during optimization.

Figure 3: Sampled hyperparameters for KnowMAN DistilBERT on Spouse. Optimal hyperparameters are indicated in red: batch size: 16, dropout: 0.379, n critic: 1, lambda: 5.0, shared hidden size: 988, learning rate C & F: 0.0005, learning rate D: 0.001, number of F layers: 5, number of C layers: 10, number of D layers: 1. Histograms on the diagonal show, for each hyperparameter, how many samples were drawn during optimization.
### A.3 Experimental details
We ran our experiments on a DGX-1 server with one V100 GPU per experiment. The
runtime of one model depends on the dataset: 0.25 hours for the Spam dataset,
0.25 hours for the Spouse dataset, and 8 hours for the IMDb dataset.
Please find our implementation at https://github.com/LuisaMaerz/KnowMAN.
# Atomic Gas Scaling Relations of Star-forming Galaxies at $z\approx 1$
Aditya Chowdhury National Centre for Radio Astrophysics, Tata Institute of
Fundamental Research, Pune, India. Nissim Kanekar National Centre for Radio
Astrophysics, Tata Institute of Fundamental Research, Pune, India. Jayaram N.
Chengalur National Centre for Radio Astrophysics, Tata Institute of
Fundamental Research, Pune, India.
###### Abstract
We use the Giant Metrewave Radio Telescope (GMRT) Cold-Hi AT $z\approx 1$
(CAT$z1$) survey, a 510 hr Hi 21cm emission survey of galaxies at
$z=0.74-1.45$, to report the first measurements of atomic hydrogen (Hi)
scaling relations at $z\approx 1$. We divide our sample of 11,419 blue star-
forming galaxies at $z\approx 1$ into three stellar mass ($\textrm{M}_{*}$)
subsamples and obtain detections (at $\geq 4\sigma$ significance) of the
stacked Hi 21cm emission signal from galaxies in all three subsamples. We fit
a power-law relation to the measurements of the average Hi mass
($\textrm{M}_{\rm H{\textsc{i}}}$) in the three stellar-mass subsamples to
find that the slope of the $\textrm{M}_{\rm H{\textsc{i}}}-\textrm{M}_{*}$
relation at $z\approx 1$ is consistent with that at $z\approx 0$. However, we
find that the $\textrm{M}_{\rm H{\textsc{i}}}-\textrm{M}_{*}$ relation has
shifted downwards from $z\approx 1$ to $z\approx 0$, by a factor of $3.54\pm
0.48$. Further, we find that the Hi depletion timescales (${\rm
t_{dep,H{\textsc{i}}}}$) of galaxies in the three stellar-mass subsamples are
systematically lower than those at $z\approx 0$, by factors of $\approx 2-4$.
We divide the sample galaxies into three specific star-formation rate (sSFR)
subsamples, again obtaining $\geq 4\sigma$ detections of the stacked Hi 21cm
emission signal in all three subsamples. We find that the relation between the
ratio of Hi mass to stellar mass and the sSFR evolves between $z\approx 1$ and
$z\approx 0$. Unlike the efficiency of conversion of molecular gas to stars,
which does not evolve significantly with redshift, we find that the efficiency
with which Hi is converted to stars is much higher for star-forming galaxies
at $z\approx 1$ than those at $z\approx 0$.
Galaxy evolution — Neutral hydrogen clouds — High-$z$ galaxies
Software: astropy (Astropy Collaboration et al., 2013)
## 1 Introduction
Measurements of the neutral atomic hydrogen (Hi) properties of galaxies as a
function of their redshift, environment, and stellar properties are important
to obtain a complete picture of galaxy evolution. In the local Universe, the
Hi properties of galaxies are known to depend on their global stellar
properties, e.g. the stellar mass ($\textrm{M}_{*}$), the star-formation rate
(SFR), etc. (see Saintonge & Catinella, 2022, for a review). Such “Hi scaling
relations” at $z\approx 0$ serve as critical benchmarks for numerical
simulations and semi-analytical models of galaxy formation and evolution (e.g.
Lagos et al., 2018; Diemer et al., 2018; Davé et al., 2019).
Unfortunately, the faintness of the Hi 21 cm line has severely hindered the
use of Hi 21 cm emission studies to probe the Hi properties of galaxies at
cosmological distances. Even very deep integrations with today’s best radio
telescopes (e.g. Jaffé et al., 2013; Catinella & Cortese, 2015; Gogate et al.,
2020) have yielded detections of Hi 21 cm emission from individual galaxies
out to only $z\approx 0.376$ (Fernández et al., 2016). Thus, until very
recently, nothing was known about the Hi properties of high-$z$ galaxies and
how the Hi properties depend on the stellar mass, the SFR, or other galaxy
properties.
The above lack of information about Hi scaling relations at high redshifts has
meant that simulations of galaxy evolution are not well constrained with
regard to gas properties beyond the local Universe. Specifically, while a
number of simulations broadly reproduce the Hi scaling relations at $z\approx
0$ (e.g. Lagos et al., 2018; Diemer et al., 2018; Davé et al., 2019), the
predictions for the evolution of these relations differ significantly (e.g.
Davé et al., 2020). Measurements of Hi scaling relations at $z\gtrsim 1$,
along with similar relations for the molecular component (e.g. Tacconi et al.,
2020), would hence provide a crucial benchmark for simulations of galaxy
evolution. Further, such Hi scaling relations at $z\approx 1$ would be useful
in estimating the individual Hi masses of galaxies at these redshifts, and the
sensitivity of upcoming Hi 21 cm surveys to both individual and stacked Hi 21
cm emission from galaxies at high redshifts (e.g. Blyth et al., 2016).
The Hi 21 cm stacking approach (Zwaan, 2000; Chengalur et al., 2001), in which
the Hi 21 cm emission signals from a large number of galaxies with accurate
spectroscopic redshifts are co-added to measure the average Hi mass of a
galaxy sample, can be used to overcome the intrinsic weakness of the Hi 21 cm
line (e.g. Lah et al., 2007; Delhaize et al., 2013; Rhee et al., 2016; Kanekar
et al., 2016; Bera et al., 2019; Sinigaglia et al., 2022). This approach has
been used to measure the global Hi properties of local Universe galaxies as a
function of their global stellar properties (e.g. Fabello et al., 2011; Brown
et al., 2015; Guo et al., 2021). The Hi scaling relations obtained from these
stacking analyses have been shown to be consistent with those derived from
individual Hi 21 cm detections (e.g. Saintonge & Catinella, 2022). It should
thus be possible to use the Hi 21 cm stacking approach to determine the Hi
scaling relations at cosmological distances (e.g. Sinigaglia et al., 2022).
Hi 21 cm stacking experiments with the Giant Metrewave Radio Telescope (GMRT)
have recently been used to measure the average Hi properties of blue star-
forming galaxies at $z\gtrsim 1$ (Chowdhury et al., 2020, 2021). These studies
have shown that star-forming galaxies at $z\approx 1$ have large Hi masses but
that the Hi reservoirs can sustain the high SFRs of the galaxies for a short
period of only $1-2$ Gyr. More recently, Chowdhury et al. (2022a) used the
GMRT Cold-Hi AT $z\approx 1$ (GMRT-CAT$z1$; Chowdhury et al., 2022b) survey, a
510 hr GMRT Hi 21 cm emission survey of the DEEP2 fields (Newman et al.,
2013), to find that the average Hi mass of star-forming galaxies declines
steeply by a factor of $\approx 3.2$ from $z\approx 1.3$ to $z\approx 1.0$,
over a period of $\approx 1$ Gyr. This is direct evidence that the rate of
accretion of gas from the circumgalactic medium (CGM) on to galaxies at
$z\approx 1$ was insufficient to replenish their Hi reservoirs, causing a
decline in the star-formation activity of the Universe at $z\lesssim 1$.
Subsequently, Chowdhury et al. (2022c) used the GMRT-CAT$z1$ measurements of
the average Hi mass of galaxies at $z\approx 1.0$ and $z\approx 1.3$ to show
that Hi dominates the baryonic content of high-$z$ galaxies.
In this _Letter_ , we use the GMRT-CAT$z1$ survey to report, for the first
time, measurements of Hi scaling relations at $z\approx 1$, at the end of the
epoch of peak cosmic star-formation activity in the Universe.
## 2 Observations and Data Analysis
### 2.1 The GMRT-CAT$z1$ Survey
The GMRT-CAT$z1$ survey (Chowdhury et al., 2022b) used $\approx$510 hrs with
the upgraded GMRT $550-850$ MHz receivers to carry out a deep Hi 21 cm
emission survey of galaxies at $z=0.74-1.45$, in three sky fields covered by
the DEEP2 Galaxy Survey (Newman et al., 2013). The three DEEP2 fields covered
by the CAT$z1$ survey contain seven sub-fields of size $\approx
52^{\prime}\times 28^{\prime}$, each of which was covered using a single GMRT
pointing. The design, the observations, the data analysis, and the main sample
of galaxies of the GMRT-CAT$z1$ survey are described in detail in Chowdhury et
al. (2022b). We provide here a summary of the information directly relevant to
this paper.
The observations for the GMRT-CAT$z1$ survey were obtained over three GMRT
observing cycles. The data of each subfield from each observing cycle were
analysed separately. This was done to prevent systematic effects present in
the data of one cycle (e.g. low-level RFI, deconvolution errors, etc), from
affecting the quality of the data from the other cycles (see Chowdhury et al.,
2022b, for a detailed discussion). The analysis resulted in $2-3$ spectral
cubes for each of the seven DEEP2 fields. The cubes have channel widths of
$48.8$ kHz, corresponding to velocity resolutions of $18-25$ km s-1 over the redshift range $z=0.74-1.45$. The FWHMs of the synthesized beams of the spectral cubes are $4\farcs 0-7\farcs 5$ over the frequency range $580-830$ MHz, corresponding to spatial resolutions of $29-63$ kpc for galaxies at $z=0.74-1.45$. (Throughout this work, we use a flat “737” Lambda-cold dark matter cosmology, with $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$, and $H_{0}=70$ km s-1 Mpc-1.)
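As an aside, the quoted angular-to-spatial conversion can be reproduced with astropy (already listed in the software note) and the survey's “737” cosmology; the snippet below is purely illustrative.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)  # the flat "737" cosmology

for z, beam_arcsec in [(0.74, 4.0), (1.45, 7.5)]:
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    size = (beam_arcsec * u.arcsec * kpc_per_arcsec).to(u.kpc)
    print(f"z = {z}: {beam_arcsec} arcsec -> {size.value:.0f} kpc")
# Reproduces the quoted range: ~29-30 kpc at z=0.74 and ~63 kpc at z=1.45.
```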
The GMRT-CAT$z1$ survey covers the Hi 21 cm line for 16,250 DEEP2 galaxies
with accurate spectroscopic redshifts (velocity errors $\lesssim 62$ km s-1;
Newman et al., 2013) at $z=0.74-1.45$. We excluded (i) red galaxies,
identified using a cut in the $\rm(U-B)$ vs ${\rm M_{B}}$ colour-magnitude
diagram (Willmer et al., 2006; Chowdhury et al., 2022b), (ii) radio-bright
AGNs, detected in our radio-continuum images at $>4\sigma$ significance with
rest-frame 1.4 GHz luminosities $\textrm{L}_{1.4\textrm{GHz}}\geq 2\times
10^{23}$ W Hz-1 (Condon et al., 2002), (iii) galaxies with stellar masses
$\textrm{M}_{*}<10^{9}~{}\textrm{M}_{\odot}$, and (iv) galaxies whose Hi 21 cm
subcubes were affected by discernible systematic effects (Chowdhury et al.,
2022b). This yielded a total of 11,419 blue star-forming galaxies with
$\textrm{M}_{*}\geq 10^{9}~{}\textrm{M}_{\odot}$ at $z=0.74-1.45$, the main
sample of the GMRT-CAT$z1$ survey. The survey provides a total of 28,993 Hi 21
cm subcubes for the 11,419 galaxies. The subcube of each galaxy covers a
region of $\pm 500$ kpc around the galaxy location, with a uniform spatial
resolution of 90 kpc, and a velocity range of $\pm 1500$ km s-1 around its
redshifted Hi 21 cm frequency, with a channel width of 30 km s-1. The median
spectral RMS noise on the 28,993 Hi 21 cm subcubes is $297\ \mu$Jy per 30 km
s-1 velocity channel, at a spatial resolution of 90 kpc.
We note that the average Hi 21 cm emission signal from the sample of 11,419
galaxies is consistent with being unresolved at a spatial resolution of 90 kpc
(Chowdhury et al., 2022b). Further, the compact resolution of 90 kpc ensures
that the average Hi 21 cm emission signal does not include a significant
contribution from companion galaxies in the vicinity of the target galaxies
(Chowdhury et al., 2022b).
The stellar masses of the individual DEEP2 galaxies were obtained using a relation between the stellar mass and the absolute rest-frame B-band magnitude (${\rm M_{B}}$), the rest-frame (U$-$B) colour, and the rest-frame (B$-$V) colour (Weiner et al., 2009). (All stellar masses and SFRs in this work assume a Chabrier initial mass function (IMF); estimates in the literature that assume a Salpeter IMF were converted to a Chabrier IMF by subtracting 0.2 dex, e.g. Madau & Dickinson, 2014.)
subset of the DEEP2 galaxies with K-band estimates of the stellar masses
(Weiner et al., 2009). The SFRs of the individual galaxies were inferred from
their ${\rm M_{B}}$ values and rest-frame (U$-$B) colours, via the SFR
calibration of Mostek et al. (2012); these authors used the SFRs of galaxies
in the Extended Groth Strip (obtained via spectral-energy distribution (SED)
fits to the rest-frame ultraviolet, optical, and near-IR photometry; Salim et
al., 2009) to derive the SFR calibration for the DEEP2 galaxies. Mostek et al.
(2012) found that the scatter between the SED SFRs of Salim et al. (2009) and
the SFRs obtained via the calibration based on the ${\rm M_{B}}$ and (U$-$B)
values is $\approx 0.2$ dex. (We note that we used the SFR calibration of Mostek et al. (2012) that relates the SFR of a galaxy to its ${\rm M_{B}}$, (U$-$B), and (U$-$B)$^{2}$ values. We divided our sample of galaxies into multiple ${\rm M_{B}}$ and (U$-$B) subsamples and, for each subsample, compared the average SFR obtained from the Mostek et al. (2012) calibration with that obtained from the stack of the rest-frame 1.4 GHz continuum luminosity density. We find that the difference in SFRs from the two approaches, as a function of colour and ${\rm M_{B}}$, is consistent with the SFR scatter of 0.2 dex obtained by Mostek et al. (2012).)
### 2.2 The Stacking Analysis
We estimate the average Hi mass and the average SFR of subsamples of galaxies
by stacking, respectively, the Hi 21 cm line luminosities and the rest-frame
1.4 GHz continuum luminosities. The procedures used in stacking the Hi 21 cm
emission signals and the rest-frame 1.4 GHz continuum emission signals are
described in detail in Chowdhury et al. (2022a, b). We provide here, for
completeness, a brief review of the procedures.
The stacked Hi 21 cm spectral cube of a given subsample of galaxies was
computed by taking a weighted-average of the individual Hi 21 cm subcubes, in
luminosity-density units, of the galaxies in the subsample. During the
stacking analysis, each Hi 21 cm subcube is treated as arising from a separate
“object”. The weights were chosen to ensure that the redshift distributions of
the different subsamples are identical; the specific choices of weights for
the different stacks are discussed in Section 3.1 and Section 3.3. For each
subsample, we then fitted a second-order polynomial to the spectra at each
spatial pixel of the stacked Hi 21 cm cube, and subtracted this out to obtain
a residual cube; the polynomial fit was performed after excluding spectral
channels in the velocity range $\pm 250$ km s-1. For each subsample, the RMS
noise at each spatial and velocity pixel of the stacked Hi 21 cm cube was
obtained via Monte Carlo simulations (Chowdhury et al., 2022b). Finally, for
each subsample, the average Hi mass was obtained from the measured velocity-
integrated Hi 21 cm line luminosity. (Note that the quoted average Hi masses of the different subsamples in this Letter do not include the mass contribution of helium.) The velocity integral was carried out over a contiguous range of central velocity channels containing emission at $\geq 1.5\sigma$ significance, after smoothing the stacked Hi 21 cm subcubes to a velocity resolution of 90 km s-1.
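For readers who wish to reproduce the spirit of this procedure, the following is a minimal numpy sketch of a weighted spectral stack with a second-order baseline fit that excludes the central $\pm 250$ km s-1; the array names, shapes, and the noise-only mock data are illustrative assumptions, not the survey pipeline.

```python
import numpy as np

def stack_spectra(spectra, weights, velocities, v_exclude=250.0):
    """Weighted average of (N_obj, N_chan) spectra in luminosity-density
    units, followed by a second-order polynomial baseline fit that
    excludes the central +/- v_exclude km/s."""
    w = weights / weights.sum()
    stacked = np.sum(w[:, None] * spectra, axis=0)
    outside = np.abs(velocities) > v_exclude
    coeffs = np.polyfit(velocities[outside], stacked[outside], deg=2)
    return stacked - np.polyval(coeffs, velocities)

# Mock data with the survey's channelization: +/-1500 km/s at 30 km/s.
rng = np.random.default_rng(0)
velocities = np.arange(-1500.0, 1501.0, 30.0)
spectra = rng.normal(0.0, 1.0, (1000, velocities.size))  # noise-only mocks
weights = np.ones(1000)  # e.g. the redshift-matching weights of Section 3
residual = stack_spectra(spectra, weights, velocities)
```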
The average SFR of each subsample was computed by stacking the rest-frame 1.4
GHz luminosity density of the galaxies in the subsample (e.g. White et al.,
2007; Chowdhury et al., 2022a). We used the GMRT 655 MHz radio-continuum
images of the DEEP2 subfields to extract subimages around each of the 11,419
galaxies of the full sample. We convolved all subimages to a uniform spatial
resolution of 40 kpc, regridded them to a uniform grid with $5.2$ kpc pixels
spanning $\pm 260$ kpc, and converted the flux-density values (in Jy) to rest-
frame 1.4 GHz luminosity density values (in W Hz-1), assuming a spectral index
of $\alpha=-0.8$ (Condon, 1992), with $S_{\nu}\propto\nu^{\alpha}$. The
stacked rest-frame 1.4 GHz luminosity density of a subsample of galaxies was
computed by taking a weighted-median of the individual subimages, with the
weights being the same as those used during the Hi 21 cm stacking of the
subsample. Finally, the stacked rest-frame 1.4 GHz continuum luminosity
density of a subsample of galaxies is converted to an estimate of the average
SFR of the subsample, using the relation SFR
$(\textrm{M}_{\odot}/\textrm{yr})=3.7\times 10^{-22}\times{\rm L_{1.4GHz}\
(W~{}Hz^{-1})}$ (Yun et al., 2001). The errors on our measurements of the
average SFRs include both the statistical uncertainty and a 10$\%$ flux-scale
uncertainty (Chowdhury et al., 2022b).
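An illustrative version of this continuum conversion is sketched below, assuming the stated spectral index, the survey cosmology, and the standard K-correction for a power-law spectrum; the function name and the example flux density are hypothetical.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
ALPHA = -0.8  # assumed spectral index, S_nu ~ nu**ALPHA

def sfr_from_655mhz_flux(flux_jy, z):
    """Convert an observed 655 MHz flux density (Jy) at redshift z to an
    SFR (Msun/yr) via the rest-frame 1.4 GHz luminosity density."""
    d_l = cosmo.luminosity_distance(z).to(u.m)
    # Luminosity density at the emitted frequency 0.655*(1+z) GHz ...
    l_emit = (4 * np.pi * d_l**2 * flux_jy * u.Jy / (1 + z)).to(u.W / u.Hz)
    # ... scaled to rest-frame 1.4 GHz with the spectral index.
    l_14ghz = l_emit * (1.4 / (0.655 * (1 + z))) ** ALPHA
    return 3.7e-22 * l_14ghz.value  # Yun et al. (2001) calibration

print(sfr_from_655mhz_flux(10e-6, 1.0))  # ~9 Msun/yr for a 10 uJy stack at z=1
```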
## 3 Results and Discussion
### 3.1 Hi Mass as a Function of Stellar Mass
We divide our sample of 11,419 galaxies (28,993 Hi 21 cm subcubes) into three
stellar-mass subsamples with $1.0\times
10^{9}~{}\textrm{M}_{\odot}<\textrm{M}_{*}\leq 6.0\times 10^{9}\
\textrm{M}_{\odot}$ (“Low”), $6.0\times 10^{9}\
\textrm{M}_{\odot}<\textrm{M}_{*}\leq 1.3\times 10^{10}\ \textrm{M}_{\odot}$
(“Intermediate”), and $\textrm{M}_{*}>1.3\times 10^{10}\ \textrm{M}_{\odot}$
(“High”). (The stellar-mass ranges of the three subsamples were chosen such that a clear ($\geq 4\sigma$) detection of the stacked Hi 21 cm emission signal is obtained for each subsample; however, we emphasise that the conclusions of this Letter do not depend on the exact choice of the stellar-mass bins.) The numbers of galaxies and Hi 21 cm subcubes in each subsample are provided in Table 1.
subsamples are different (see Figure 1). We correct for this difference by
assigning weights to each Hi 21 cm subcube such that the redshift distribution
of each stellar-mass subsample is effectively identical. Specifically, the
weights ensure that the effective redshift distributions of the intermediate-
and high-stellar-mass subsamples are identical to that of the low-stellar-mass
subsample; the mean redshift of the final redshift distribution is $\langle
z\rangle=1.01$. We use these weights while computing all average quantities
for the three stellar-mass subsamples.
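One simple way to implement such redshift matching, sketched below with numpy, is to weight each subcube by the ratio of the reference redshift histogram to the subsample's own histogram; the bin width is an illustrative choice, as the paper does not specify its exact binning scheme.

```python
import numpy as np

def redshift_weights(z_sub, z_ref, bins=np.arange(0.74, 1.50, 0.05)):
    """Weights for z_sub such that its weighted redshift histogram
    matches that of the reference sample z_ref."""
    n_ref, _ = np.histogram(z_ref, bins=bins, density=True)
    n_sub, _ = np.histogram(z_sub, bins=bins, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(n_sub > 0, n_ref / n_sub, 0.0)
    idx = np.clip(np.digitize(z_sub, bins) - 1, 0, len(bins) - 2)
    w = ratio[idx]
    return w / w.mean()  # normalise to unit mean weight

# e.g.: w_high = redshift_weights(z_high_mass_sample, z_low_mass_sample)
```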
We separately stacked the Hi 21 cm emission and the rest-frame 1.4 GHz
continuum emission of the galaxies in the three stellar-mass subsamples,
following the procedures of Section 2.2. Figure 2 shows the stacked Hi 21 cm
emission images, the stacked Hi 21 cm spectra, and the stacked rest-frame 1.4
GHz continuum images of the three subsamples. We obtain clear detections of
the average Hi 21 cm emission signal in all three cases, at $4.2-4.9\sigma$
statistical significance. We also detect the stacked rest-frame 1.4 GHz
continuum emission at high significance ($>28\sigma$) in all three subsamples.
The average Hi masses and the average SFRs of galaxies in the three subsamples
are listed in Table 1. We find that the average SFR and the average stellar
mass of the galaxies in the three subsamples are in excellent agreement with
the star-forming main sequence at $z\approx 1$ (see Table 1; Whitaker et al.,
2014; Chowdhury et al., 2022a).
Figure 1: The redshift distributions of the three stellar-mass subsamples. The blue histograms show, for each stellar-mass subsample, the number of Hi 21 cm subcubes in different redshift intervals (N), obtained after normalising by the total number of subcubes in the corresponding subsample. The Hi 21 cm subcubes of each stellar-mass subsample were assigned weights such that each effective redshift distribution is identical to the redshift distribution of the low stellar-mass subsample (orange lines). The number of galaxies in the subsample is indicated in each panel, with the number of Hi 21 cm subcubes shown in parentheses.

Figure 2: The average Hi 21 cm emission signal and the average rest-frame 1.4 GHz continuum emission from star-forming galaxies in the three stellar-mass subsamples. Panels [A] show the average Hi 21 cm emission images of galaxies of the three stellar-mass subsamples. The Hi 21 cm subcubes of each subsample were assigned weights such that their effective redshift distributions are identical. The circle on the bottom left of each panel indicates the 90-kpc spatial resolution of the images. The contour levels are at $-3.0\sigma$ (dashed), $+3.0\sigma$, and $+4.0\sigma$ significance. Panels [B] show the average Hi 21 cm emission spectra of the three stellar-mass subsamples. The $\pm 1\sigma$ errors on the stacked Hi 21 cm spectra are indicated by the dashed black curves. We clearly detect the stacked Hi 21 cm emission signals in all three subsamples. Panels [C] show the average rest-frame 1.4 GHz luminosity density of the galaxies in the three stellar-mass subsamples. The contour levels are at $5\sigma,\ 10\sigma,\ 20\sigma,\ 40\sigma,\ {\rm and}\ 80\sigma$ statistical significance. The circle at the bottom left of each panel indicates the 40 kpc resolution of the images.

| Low | Intermediate | High
---|---|---|---
Stellar Mass Range ($\times 10^{9}\ \textrm{M}_{\odot}$) | $1.0-6.0$ | $6.0-13$ | $13-240$
Number of Hi 21 cm Subcubes | 13,954 | 8,635 | 6,404
Number of Galaxies | 5,455 | 3,422 | 2,542
Average Redshift | 1.01 | 1.01 | 1.01
Average Stellar Mass ($\times 10^{9}\ \textrm{M}_{\odot}$) | $3.3$ | $8.9$ | $25.9$
Average Hi Mass ($\times 10^{9}\ \textrm{M}_{\odot}$) | $9.5\pm 2.2$ | $20.3\pm 4.1$ | $18.2\pm 4.3$
Average SFR ($\textrm{M}_{\odot}\textrm{yr}^{-1}$) | $4.04\pm 0.43$ | $8.95\pm 0.92$ | $21.1\pm 2.1$
Main-sequence SFR ($\textrm{M}_{\odot}\textrm{yr}^{-1}$) | 3.8 | 8.9 | 17.3
Hi depletion timescale (Gyr) | $2.35\pm 0.61$ | $2.27\pm 0.52$ | $0.86\pm 0.22$
Table 1: Average properties of galaxies in the three stellar-mass subsamples. For each subsample, the rows are (1) the range of stellar masses, in units of $10^{9}\ \textrm{M}_{\odot}$, (2) the number of Hi 21 cm subcubes, (3) the number of galaxies, (4) the average redshift of the galaxies, (5) the average stellar mass of the galaxies, (6) the average Hi mass of the galaxies, measured from the stacked Hi 21 cm emission spectra of Figure 2[B], (7) the average SFR of the galaxies, measured using the stacked rest-frame 1.4 GHz luminosity densities of Figure 2[C], (8) the expected SFR at this average stellar mass, for the star-forming main sequence at $z\approx 1$ (Whitaker et al., 2014), and (9) the characteristic Hi depletion timescale, $\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle{\rm SFR}\rangle$. Note that all quantities are weighted averages, with weights such that the redshift distributions of the three stellar-mass subsamples are identical.

| Low | Intermediate | High
---|---|---|---
Stellar Mass Range ($\times 10^{9}\ \textrm{M}_{\odot}$) | $1.0-6.0$ | $6.0-13$ | $13-240$
Average Stellar Mass ($\times 10^{9}\ \textrm{M}_{\odot}$) | $3.3$ | $8.9$ | $25.9$
Average Hi Mass ($\times 10^{9}\ \textrm{M}_{\odot}$) | | |
$z\approx 0$ | $2.7\pm 0.2$ | $4.5\pm 0.4$ | $5.9\pm 0.4$
$z\approx 1$ | $9.5\pm 2.2$ | $20.3\pm 4.1$ | $18.2\pm 4.3$
Average SFR ($\textrm{M}_{\odot}\textrm{yr}^{-1}$) | | |
$z\approx 0$ | $0.44\pm 0.03$ | $0.88\pm 0.07$ | $1.83\pm 0.15$
$z\approx 1$ | $4.07\pm 0.43$ | $8.93\pm 0.86$ | $21.1\pm 2.1$
Hi depletion timescale (Gyr) | | |
$z\approx 0$ | $6.11\pm 0.48$ | $5.12\pm 0.42$ | $3.23\pm 0.29$
$z\approx 1$ | $2.33\pm 0.60$ | $2.27\pm 0.51$ | $0.86\pm 0.26$
Table 2: A comparison of the average Hi properties of blue star-forming
galaxies at $z\approx 1$ with those of blue galaxies in the local Universe.
For galaxies in each of the three stellar-mass subsamples at both $z\approx 0$
and $z\approx 1$, the rows are (1) the range of stellar masses, in units of
$10^{9}\ \textrm{M}_{\odot}$, (2) the average stellar mass, (3) the average Hi
mass, (4) the average SFR, (5) the characteristic Hi depletion timescale,
$\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle{\rm SFR}\rangle$. The Hi and
stellar properties of the $z\approx 0$ subsamples are derived from the xGASS
survey (Catinella et al., 2018), using appropriate weights (see main text for
details). The errors on the local Universe measurements were derived using
bootstrap resampling with replacement.
We use the extended GALEX Arecibo SDSS survey (xGASS; Catinella et al., 2018)
to compare our measurements of the Hi properties of star-forming galaxies at
$z\approx 1$ to those of galaxies in the local Universe. The xGASS Survey used
the Arecibo telescope to measure the Hi masses of a stellar-mass-selected
sample of galaxies with $\textrm{M}_{*}>10^{9}\ \textrm{M}_{\odot}$ at
$z=0.01-0.05$. The stellar masses and SFRs of the xGASS galaxies used in this
work were obtained from the publicly available catalogue of the “xGASS
representative sample”. The stellar masses in this catalogue are from
Kauffmann et al. (2003) and Brinchmann et al. (2004), while the SFRs were
computed using a combination of GALEX near-ultraviolet (NUV) and WISE mid-
infrared (MIR) data or via spectral energy distribution fits for galaxies for
which MIR data were not available (Catinella et al., 2018).
The main sample of the GMRT-CAT$z1$ survey consists of blue, star-forming
galaxies at $z=0.74-1.45$. In order to ensure a fair comparison between the Hi
properties of the GMRT-CAT$z1$ galaxies and those of the xGASS galaxies, we
restrict to blue galaxies, with NUV$-$r$<4$, in the xGASS sample. We divide
the xGASS galaxies into three stellar-mass subsamples, using the same “Low”,
“Intermediate”, and “High” stellar-mass ranges as for the DEEP2 galaxies.
Further, for each xGASS subsample, we use weights to ensure that the stellar-
mass distribution within the subsample is effectively identical to that of the
corresponding (Low, Intermediate, or High) subsample at $z\approx 1$. In
passing, we note that the average Hi mass of xGASS galaxies in the three
stellar-mass subsamples obtained using a cut in the SFR-$\textrm{M}_{*}$ plane
to select main-sequence galaxies is consistent with the values obtained by
selecting blue galaxies with NUV$-$r$<4$.
The average Hi masses of the blue xGASS galaxies in the three stellar-mass
sub-samples are listed in Table 2; the errors on the averages were computed
using bootstrap resampling with replacement. The table also lists, for
comparison, the GMRT-CAT$z$1 measurements of the average Hi masses of blue
galaxies in the same stellar-mass subsamples at $z\approx 1$. We find that,
across the stellar-mass range $10^{9}-2.4\times 10^{11}~{}\textrm{M}_{\odot}$,
the average Hi mass of the $z\approx 1$ galaxies is higher than that of local
Universe galaxies, by a factor of $\approx 3.1-4.5$.
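For reference, the bootstrap error estimate used for the xGASS averages can be sketched as follows; the use of a weighted mean and the number of realisations are illustrative assumptions.

```python
import numpy as np

def bootstrap_mean_error(values, weights, n_boot=10000, seed=0):
    """1-sigma error on the weighted mean of `values` (numpy arrays),
    from bootstrap resampling with replacement."""
    rng = np.random.default_rng(seed)
    n = len(values)
    means = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # draw n galaxies with replacement
        means[i] = np.average(values[idx], weights=weights[idx])
    return means.std()

# e.g.: err = bootstrap_mean_error(mhi_xgass_blue, mass_matching_weights)
```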
We determined the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at
$z\approx 1$ by fitting a power-law relation to our measurements of the
average Hi mass of blue star-forming galaxies in the three stellar-mass
subsamples at $z\approx 1$, following the procedures in Appendix A. We find
that the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation for main-sequence
galaxies at $z\approx 1$ is
$\log\left[\rm{M_{H{\textsc{i}}}}/\textrm{M}_{\odot}\right]=(0.32\pm
0.13)\log\left[{\textrm{M}_{*,10}}\right]+(10.183\pm 0.056)\;,$ (1)
where ${\textrm{M}}_{*,10}=\textrm{M}_{*}/10^{10}~{}\textrm{M}_{\odot}$. In
order to compare the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation of blue
star-forming galaxies at $z\approx 1$ to that of blue star-forming galaxies at
$z\approx 0$, we also fitted a power-law relation, using the procedures of
Appendix A, to the measurements of $\langle\rm{M_{H{\textsc{i}}}}\rangle$ in
blue xGASS galaxies in the three stellar-mass subsamples of Table 2, with
stellar-mass distributions identical to those of the subsamples of galaxies at
$z\approx 1$. We find that the best-fit
$\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation for blue galaxies at
$z\approx 0$ is
$\log\left[\rm{M_{H{\textsc{i}}}}/\textrm{M}_{\odot}\right]=(0.38\pm
0.05)\log\left[{\textrm{M}_{*}}_{,10}\right]+(9.634\pm 0.019)$. (We note that the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation for blue xGASS galaxies obtained by fitting to the $\langle\rm{M_{H{\textsc{i}}}}\rangle$ values in the three stellar-mass subsamples is consistent with that obtained by fitting to $\langle\rm{M_{H{\textsc{i}}}}\rangle$ values in small $\textrm{M}_{*}$ bins, separated by 0.1 dex.)
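As an illustration of the fit behind Equation 1, a simple weighted chi-square fit to the three stacked measurements of Table 1 (not the exact procedure of the paper's Appendix A) recovers nearly the same slope and intercept:

```python
import numpy as np
from scipy.optimize import curve_fit

# Stacked measurements from Table 1 (average M*, <M_HI>, and errors).
mstar = np.array([3.3e9, 8.9e9, 25.9e9])
mhi = np.array([9.5e9, 20.3e9, 18.2e9])
mhi_err = np.array([2.2e9, 4.1e9, 4.3e9])

x = np.log10(mstar / 1e10)
y = np.log10(mhi)
y_err = mhi_err / (mhi * np.log(10))  # propagate errors into log space

popt, pcov = curve_fit(lambda x, a, b: a * x + b, x, y,
                       sigma=y_err, absolute_sigma=True)
print(popt)  # slope ~0.31 and intercept ~10.2, close to Equation 1
```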
Figure 3[A] shows the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relations for
blue star-forming galaxies at $z\approx 1$ and $z\approx 0$. We find no
statistically significant evidence for an evolution in the slope of the
$\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation from $z\approx 1$ to
$z\approx 0$. However, we find clear evidence that the relation has shifted
downwards from $z\approx 1$ to $z\approx 0$. Specifically, our measurements
show that the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation of blue star-
forming galaxies at $z\approx 1$ lies a factor of $3.54\pm 0.48$ above the
local Universe relation.
In passing, we emphasize that the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$
relations of this Letter, at both $z\approx 0$ and $z\approx 1$, were obtained
by fitting a relation to measurements of
$\langle\rm{M_{H{\textsc{i}}}}\rangle$ in three stellar-mass subsamples. This
approach is different from that typically followed for galaxies at $z\approx
0$, where the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation is obtained by
fitting to estimates of $\langle\log\rm{M_{H{\textsc{i}}}}\rangle$ in multiple
stellar-mass subsamples (e.g. Saintonge & Catinella, 2022). The difference
arises from the fact that the averaging in a stacking analysis is carried out
on the Hi masses themselves, rather than on the logarithm of the Hi masses; in
general, the logarithm of the average value of a given quantity is not the
same as the average of the individual logarithms (e.g. Brown et al., 2015).
Care must hence be taken when comparing scaling relations obtained from
simulations with those obtained from stacking analyses such as the present
work, or when comparing the scaling relations from stacking analyses with
those based on direct measurements of $\rm{M_{H{\textsc{i}}}}$, and hence on
estimates of $\langle\log\rm{M_{H{\textsc{i}}}}\rangle$. Specifically, the
scaling relations obtained from the stacking analysis yield the mean Hi mass
at a given stellar mass. Conversely, for a log-normal distribution of Hi
masses, the scaling relations obtained from direct measurements yield the
median Hi mass at a given stellar mass. Further, again for a log-normal
distribution of Hi masses with scatter $\sigma$,
$\langle\log\rm{M_{H{\textsc{i}}}}\rangle$ =
$\log\langle\rm{M_{H{\textsc{i}}}}\rangle-\frac{\ln 10}{2}\sigma^{2}$.
Assuming that the scatter of the scaling relation at $z\approx 1$ is
independent of $\textrm{M}_{*}$ and that it is equal to the scatter of $0.4$
dex measured at $z=0$ (Catinella et al., 2018), the “direct”
$\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation would be offset downward from
Equation 1 by 0.184 dex.
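The quoted 0.184 dex offset follows directly from the log-normal assumption and can be checked numerically:

```python
import numpy as np

sigma = 0.4  # dex, the z~0 scatter of Catinella et al. (2018)
print(np.log(10) / 2 * sigma**2)  # analytic offset: 0.184 dex

rng = np.random.default_rng(0)
log_m = rng.normal(10.0, sigma, 1_000_000)  # log10(M_HI) draws
print(np.log10(np.mean(10.0**log_m)) - np.mean(log_m))  # ~0.184 dex
```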
Figure 3: The Hi properties of star-forming galaxies at $z\approx 1$, as a
function of their stellar masses. The red circles in panels [A] and [B] show,
respectively, our measurements of the average Hi mass,
$\langle\rm{M_{H{\textsc{i}}}}\rangle$, and the characteristic Hi depletion
timescale, $\langle{\rm
t_{dep,H{\textsc{i}}}}\rangle=\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle\textrm{SFR}\rangle$,
for star-forming galaxies at $z\approx 1$ in the three stellar-mass subsamples
of Figure 2. The blue squares indicate the same quantities for the blue xGASS
galaxies in three $\textrm{M}_{*}$ subsamples with stellar-mass distributions
identical to those of the three subsamples at $z\approx 1$. The
$\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at $z\approx 1$, derived by
fitting a power-law relation to our measurements of the average Hi mass in the
three stellar-mass subsamples, is shown as the green line in Panel [A], with
the green shaded region showing the $1\sigma$ error on the relation. Panel [A]
also shows the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation for blue
galaxies at $z\approx 0$ (orange line), obtained by fitting a power-law
relation to the average Hi mass of xGASS galaxies in the three
$\textrm{M}_{*}$ subsamples. In Panel [B], the green curve shows the ${\rm
t_{dep,H{\textsc{i}}}}-\textrm{M}_{*}$ relation at $z\approx 1$, derived by
combining our estimate of the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation
at $z\approx 1$ with the equation describing the star-forming main sequence at
$z\approx 1$ (Whitaker et al., 2014); the green shaded region shows the
$1\sigma$ error on the relation. The orange line in panel [B] shows an
estimate of the ${\rm t_{dep,H{\textsc{i}}}}-\textrm{M}_{*}$ relation at
$z\approx 0$ derived in a similar manner, by combining the
$\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at $z\approx 0$ with the
equation describing the star-forming main sequence at $z\approx 0$ (Whitaker
et al., 2012). The figure shows that blue star-forming galaxies at $z\approx
1$, with stellar masses in the range $\textrm{M}_{*}\approx
10^{9}-10^{11}~{}\textrm{M}_{\odot}$, have $\approx 3-4$ times more Hi than
blue galaxies at $z\approx 0$, but have far lower characteristic depletion
timescales, by a factor of $\approx 2-4$.
### 3.2 The Hi Depletion Timescale as a Function of Stellar Mass
The availability of cold gas regulates the star-formation activity in a
galaxy. The Hi depletion timescale (${\rm t_{dep,H{\textsc{i}}}}$), defined as
the ratio of the Hi mass of the galaxy to its SFR, quantifies the approximate
timescale for which the galaxy can sustain its current SFR, in the absence of
accretion of fresh Hi from the CGM. In other words, accretion of gas from the
CGM on a timescale of $\approx{\rm t_{dep,H{\textsc{i}}}}$ is required to
sustain the current star-formation activity of the galaxy.
We define the “characteristic” Hi depletion timescale of a sample of galaxies as $\langle{\rm t_{dep,H{\textsc{i}}}}\rangle\equiv\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle{\rm SFR}\rangle$. We combined the average SFRs of the galaxies in the three subsamples with their average Hi masses to estimate this quantity at $z\approx 1$, as a function of the average stellar mass.
values of the galaxies in the three stellar-mass subsamples at $z\approx 1$,
while the estimates of $\langle{\rm t_{dep,H{\textsc{i}}}}\rangle$ are plotted
against the average stellar mass in Figure 3[B]. For comparison, the figure
also shows the characteristic Hi depletion timescale of the xGASS galaxies in
the same three stellar-mass subsamples, while Table 2 compares the values of
$\langle{\rm t_{dep,H{\textsc{i}}}}\rangle$ for the galaxies at $z\approx 0$
and $z\approx 1$. We find that the characteristic Hi depletion timescale of
blue star-forming galaxies at $z\approx 1$ is $\approx 2-4$ times lower than
that of similar galaxies with the same stellar mass distribution at $z\approx
0$.
In passing, we note that the “characteristic” Hi depletion timescale,
$\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle{\rm SFR}\rangle$, for a sample
of galaxies may be different from the average of the depletion timescales of
the individual galaxies, $\langle\rm{M_{H{\textsc{i}}}}/SFR\rangle$. Indeed,
for the xGASS galaxies, we find that the $\langle\rm{M_{H{\textsc{i}}}}/{\rm
SFR}\rangle$ values in the three stellar-mass subsamples are higher than the
corresponding $\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle{\rm SFR}\rangle$
values by factors of $\approx 1.2-1.6$. However, this does not affect the
results of this _Letter_ because we consistently compare the characteristic
depletion timescales of the different galaxy subsamples, at both $z\approx 1$
and $z\approx 0$.
We obtained the ${\rm t_{dep,H{\textsc{i}}}}-\textrm{M}_{*}$ relation at
$z\approx 1$ by combining our estimate of the
$\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at $z\approx 1$ (Equation 1)
with a relation for the star-forming main sequence at $z\approx 1$ from
Whitaker et al. (2014). These authors provide best-fitting relations to the
star-forming main sequence for the redshift ranges $z=0.5-1.0$ and
$z=1.0-1.5$; we interpolated the best-fit parameters between the two redshift
intervals to find that the main-sequence relation at $z\approx 1$ is
$\log\left[{\rm SFR}/(\textrm{M}_{\odot}{\rm
yr}^{-1})\right]=0.976+0.720\log\left[{\textrm{M}_{*}}_{,10}\right]-0.205\log\left[{\textrm{M}_{*}}_{,10}\right]^{2}$.
Combining this relation with the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$
relation of Equation 1, we find that the ${\rm
t_{dep,H{\textsc{i}}}}-\textrm{M}_{*}$ relation for main-sequence galaxies at
$z\approx 1$ is

$\log\left[{\rm t_{dep,H{\textsc{i}}}}/{\rm Gyr}\right]=(0.207\pm 0.056)+(-0.40\pm 0.13)\log\left[{\textrm{M}_{*}}_{,10}\right]+0.205\log\left[{\textrm{M}_{*}}_{,10}\right]^{2}\;.$ (2)

(We note that the uncertainties on the ${\rm t_{dep,H{\textsc{i}}}}-\textrm{M}_{*}$ relation of Equation 2 are dominated by the uncertainties on the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at $z\approx 1$, with relatively little contribution from the uncertainties in the main-sequence relation of Whitaker et al. (2014); the errors on the parameters in Equation 2 were hence obtained by ignoring the uncertainties in the main-sequence relation.)
We emphasise that Equation 2 was not obtained by fitting a relation to our
measurements of the characteristic Hi depletion timescale in the three
$\textrm{M}_{*}$ subsamples. However, Figure 3[B] shows that our measurements
of $\langle{\rm t_{dep,H{\textsc{i}}}}\rangle$ in the three subsamples are
consistent with the ${\rm t_{dep,H{\textsc{i}}}}-\textrm{M}_{*}$ relation of
Equation 2.
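The algebra behind Equation 2 can be verified by combining the two quoted relations directly:

```python
import numpy as np

def log_mhi(x):   # Equation 1, with x = log10(M*/10^10 Msun)
    return 0.32 * x + 10.183

def log_sfr(x):   # interpolated z~1 main sequence (Whitaker et al. 2014)
    return 0.976 + 0.720 * x - 0.205 * x**2

x = np.linspace(-1.0, 1.4, 25)
log_tdep_gyr = log_mhi(x) - log_sfr(x) - 9.0  # M_HI/SFR, converted to Gyr
print(np.allclose(log_tdep_gyr, 0.207 - 0.40 * x + 0.205 * x**2))  # True
```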
Overall, we find that blue star-forming galaxies at $z\approx 1$, with stellar
masses in the range $\textrm{M}_{*}\approx 10^{9}-2.4\times
10^{11}~{}\textrm{M}_{\odot}$, have larger Hi reservoirs than those of blue
galaxies at $z\approx 0$, by a factor of $3.54\pm 0.48$. However, the
evolution of the star-forming main-sequence by a factor of $\approx 10$ from
$z\approx 0$ to $z\approx 1$ (e.g. Whitaker et al., 2014) implies that the
characteristic Hi depletion timescales of blue star-forming galaxies at
$z\approx 1$ are lower, by factors of $\approx 2-4$, than those of local
galaxies. The results of this _Letter_ thus extend to the entire stellar mass range $\textrm{M}_{*}\approx 10^{9}-2.4\times 10^{11}~{}\textrm{M}_{\odot}$ the finding of the earlier GMRT Hi 21 cm stacking studies (Chowdhury et al., 2020, 2021, 2022a, 2022b) that blue star-forming galaxies at $z\approx 1$ have a large average Hi mass but a short characteristic Hi depletion timescale.
### 3.3 The Hi Fraction as a Function of the Specific SFR
The Hi fractions (${\rm f_{\rm
H{\textsc{i}}}}\equiv\rm{M_{H{\textsc{i}}}}/\textrm{M}_{*}$) of galaxies in
the local Universe and their specific SFRs
($\textrm{sSFR}\equiv\textrm{SFR}/\textrm{M}_{*}$) are known to be correlated,
with a scatter of $\approx 0.5$ dex (Catinella et al., 2018); this is one of
the tightest atomic gas scaling relations at $z\approx 0$ (Catinella et al.,
2018). The locations of galaxies in the ${\rm f_{\rm H{\textsc{i}}}}-$sSFR
plane are indicative of the efficiency with which their Hi is being converted
to stars. In this section, we investigate the redshift evolution of the
relation between ${\rm f_{\rm H{\textsc{i}}}}$ and sSFR, for blue star-forming
galaxies, from $z\approx 1$ to $z\approx 0$.
We divide our sample of 11,419 galaxies into three sSFR subsamples with sSFR
$\leq 0.8~{}\textrm{Gyr}^{-1}$ (“Low”),
$0.8~{}\textrm{Gyr}^{-1}<~{}$sSFR$~{}\leq 1.3~{}\textrm{Gyr}^{-1}$
(“Intermediate”), and sSFR$~{}>1.3~{}\textrm{Gyr}^{-1}$ (“High”). (The sSFR ranges of the three subsamples were chosen such that a clear ($\geq 4\sigma$) detection of the stacked Hi 21 cm emission signal is obtained for each subsample; however, we emphasise that the conclusions of this section do not depend on the exact choice of the sSFR bins.) The numbers of galaxies and Hi
21 cm subcubes in each subsample are listed in Table 3, while the redshift
distributions of the three sSFR subsamples are shown in Figure 4. The high-
sSFR subsample contains a significantly larger number of galaxies at higher
redshifts than the other two subsamples; this is primarily due to the redshift
evolution of the star-forming main sequence within our redshift coverage,
$z=0.74-1.45$ (e.g. Whitaker et al., 2014). We corrected for this difference
in the redshift distributions of the subsamples by using weights such that the
effective redshift distributions of the intermediate- and high-sSFR subsamples
are identical to that of the low-sSFR subsample. We separately stacked the Hi
21 cm subcubes of the galaxies in the three subsamples, following the
procedures of Section 2.2, using the above weights to ensure that the redshift
distributions of the three subsamples are identical.
Figure 5 shows the stacked Hi 21 cm emission images and the stacked Hi 21 cm
spectra of galaxies in the three sSFR subsamples. We obtain clear detections,
with $\approx 4.3-4.4\sigma$ statistical significance, of the average Hi 21 cm
emission signals from galaxies in the three subsamples. The average Hi mass
and the “characteristic” Hi fraction, $\langle{\rm f_{\rm
H{\textsc{i}}}}\rangle\equiv\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle\textrm{M}_{*}\rangle$,
of the galaxies in each subsample are listed in Table 3.
Figure 4: The redshift distributions of the three sSFR subsamples. The blue histograms show, for each sSFR subsample, the number (N) of Hi 21 cm subcubes normalized by the total number of subcubes in the corresponding subsample, for the different redshift intervals. The Hi 21 cm subcubes of each sSFR subsample were assigned weights such that each effective redshift distribution is identical to the redshift distribution of the low-sSFR subsample (orange lines). The total number of galaxies in the subsample is indicated in each panel, with the number of Hi 21 cm subcubes shown in parentheses.

| Low | Intermediate | High
---|---|---|---
sSFR Range ($\textrm{Gyr}^{-1}$) | $0.1-0.8$ | $0.8-1.3$ | $1.3-4.2$
Number of Hi 21 cm Subcubes | 6,975 | 6,049 | 15,969
Number of Galaxies | 2,793 | 2,417 | 6,209
Average Redshift | 0.97 | 0.97 | 0.97
Average sSFR ($\textrm{Gyr}^{-1}$) | $0.5$ | $1.1$ | $1.9$
Average Stellar Mass ($\times 10^{9}\ \textrm{M}_{\odot}$) | $20.6$ | $9.4$ | $4.5$
Average Hi Mass ($\times 10^{9}\ \textrm{M}_{\odot}$) | $15.1\pm 3.4$ | $16.4\pm 3.7$ | $9.1\pm 2.1$
Characteristic Hi Fraction | $0.73\pm 0.17$ | $1.75\pm 0.39$ | $2.02\pm 0.47$
Table 3: Average properties of galaxies in the three sSFR subsamples. For each
sSFR subsample, the rows are (1) the range of sSFR values, in units of
$\textrm{Gyr}^{-1}$, (2) the number of Hi 21 cm subcubes, (3) the number of
galaxies, (4) the average redshift, (5) the average sSFR, (6) the average
stellar mass, (7) the average Hi mass, measured from the stacked Hi 21 cm
emission spectra of Figure 5, and (8) the characteristic Hi fraction, ${\rm
f_{\rm
H{\textsc{i}}}}\equiv\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle\textrm{M}_{*}\rangle$.
Note that all quantities are weighted averages, with weights such that the
redshift distributions of the three sSFR subsamples are identical.

Figure 5:
The average Hi 21 cm emission signals from star-forming galaxies in the three
sSFR subsamples. Panels [A] show the average Hi 21 cm emission images of the
three sSFR subsamples. The circle on the bottom left of each panel
indicates the 90-kpc spatial resolution of the images. The contour levels are
at $-3.0\sigma$ (dashed), $+3.0\sigma$, and $+4.0\sigma$ significance. Panels
[B] show the average Hi 21 cm emission spectra of the same galaxies in the
three sSFR subsamples. The $\pm 1\sigma$ errors on the stacked Hi 21 cm
spectra are indicated with dashed black curves. We clearly detect the stacked
Hi 21 cm emission signals in all three subsamples. The Hi 21 cm subcubes of
each subsample were assigned weights such that their effective redshift
distributions are identical.
Our measurements of the characteristic Hi fraction of star-forming galaxies in
the three sSFR subsamples at $z\approx 1$ are shown in Figure 6; also shown
for comparison are the characteristic Hi fractions of blue xGASS galaxies at
$z\approx 0$ (Catinella et al., 2018). Note that the average sSFR of the DEEP2
galaxies in the low-sSFR subsample is $0.5~{}\textrm{Gyr}^{-1}$, while there
are only 3 galaxies in the xGASS survey with sSFR $>0.5~{}\textrm{Gyr}^{-1}$.
This is because the main sequence evolves between $z\approx 1$ and $z\approx
0$, with the sSFR of galaxies at a fixed stellar mass being $\approx 10$ times
higher at $z\approx 1$ than at $z\approx 0$ (e.g. Whitaker et al., 2014).
The straight lines in Figure 6 are the loci of constant depletion timescales on the ${\rm f_{\rm H{\textsc{i}}}}-$sSFR plane: since ${\rm t_{dep,H{\textsc{i}}}}=\rm{M_{H{\textsc{i}}}}/{\rm SFR}={\rm f_{\rm H{\textsc{i}}}}/{\rm sSFR}$, a fixed depletion timescale corresponds to ${\rm f_{\rm H{\textsc{i}}}}={\rm t_{dep,H{\textsc{i}}}}\times{\rm sSFR}$. The characteristic
Hi depletion timescale of main-sequence galaxies in the local Universe is
$\approx 4.5$ Gyr, with a large scatter around the mean (Saintonge et al.,
2017). Figure 6 shows that the characteristic Hi fractions and the average
sSFRs of blue xGASS galaxies at $z\approx 0$ are consistent with the
$\langle{\rm t_{dep,H{\textsc{i}}}}\rangle=4.5$ Gyr line. However, it is clear
from Figure 6 that star-forming galaxies at $z\approx 1$ do not follow the
${\rm f_{\rm H{\textsc{i}}}}-$sSFR relation of local Universe galaxies. This
is consistent with our earlier results (e.g. Chowdhury et al., 2020, 2021)
that blue star-forming galaxies at $z\approx 1$ have a low characteristic Hi
depletion timescale of $\approx 1-2$ Gyr. This evolution of the ${\rm f_{\rm
H{\textsc{i}}}}-$sSFR relation from $z\approx 1$ to $z\approx 0$ is different
from the behaviour of the molecular component: the molecular gas depletion
timescales in main-sequence galaxies are typically $\approx 0.5-0.7$ Gyr at
$z\approx 0-1.5$, with no significant evidence for redshift evolution over
$z\approx 0-1.5$ (e.g. Saintonge et al., 2017; Genzel et al., 2015).
The short Hi depletion timescale of galaxies at $z\approx 1$ (or,
equivalently, the high Hi star-forming efficiency) is indicative of a very
efficient conversion of Hi to ${\rm H_{2}}$, which then directly fuels the
high star-formation activity. The difference between local Universe galaxies
(with massive Hi reservoirs but low star-forming efficiency) and star-forming
galaxies at $z\approx 1$ may lie in the typical Hi surface densities in the
galaxies; a high Hi surface density is likely to be a requirement for
efficient conversion of Hi to ${\rm H_{2}}$ (e.g. Leroy et al., 2008). In
other words, it appears that the efficiency of conversion of Hi to stars is
different at $z\approx 1$, towards the end of the epoch of peak star-formation
activity in the Universe, from that at $z\approx 0$, with the Hi in galaxies
at $z\approx 1$ being able to fuel star-formation far more efficiently than at
$z\approx 0$. Measurements of the average Hi surface density profiles of the
GMRT-CAT$z1$ galaxies would allow one to test this hypothesis.
Figure 6: The characteristic Hi fractions of star-forming galaxies at
$z\approx 1$, as a function of their specific star-formation rates. The red
circles show our measurements of the characteristic Hi fraction,
$\langle\rm{M_{H{\textsc{i}}}}\rangle$/$\langle\textrm{M}_{*}\rangle$, of
star-forming galaxies at $z\approx 1$ in the three sSFR subsamples of Figure
5. The black squares indicate the characteristic Hi fractions of blue xGASS
galaxies at $z\approx 0$ in multiple sSFR bins. The dashed lines show the loci
of constant gas depletion timescales. The relation between the Hi fraction and
the sSFR shows clear evolution from $z\approx 1$ to $z\approx 0$, with blue
galaxies at $z\approx 0$ having a characteristic Hi depletion timescale of
$\approx 4.5$ Gyr (see also Saintonge et al., 2017) but those at $z\approx 1$
having an Hi depletion timescale of just $\approx 1.5$ Gyr.
## 4 Summary
In this _Letter_ , we report the first determinations of Hi scaling relations
of galaxies at $z\approx 1$, measuring the Hi properties of blue star-forming
galaxies at $z=0.74-1.45$ as a function of stellar mass and sSFR, based on
data from the GMRT-CAT$z$1 survey. We divided our main sample of 11,419 blue
star-forming galaxies at $z\approx 1$ into three stellar-mass subsamples and
detected the stacked Hi 21 cm emission signals from all three subsamples at
$4.3-4.9\sigma$ significance. We fitted a power-law relation for the
dependence of the average Hi mass on the average stellar mass, to obtain
$\log\left[\rm{M_{H{\textsc{i}}}}/\textrm{M}_{\odot}\right]=(0.32\pm
0.13)\log\left[{\textrm{M}_{*}}_{,10}\right]+(10.183\pm 0.056)$. We compared
the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at $z\approx 1$ to that
for blue galaxies at $z\approx 0$ to find that the slope of the
$\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at $z\approx 1$ is consistent
with that at $z\approx 0$. However, we find that the
$\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at $z\approx 1$ has shifted
upwards from the relation at $z\approx 0$, by a factor of $3.54\pm 0.48$. We
combined our measurements of the average Hi mass in the three stellar-mass
subsamples with measurements of their average SFRs, obtained by stacking the
rest-frame 1.4 GHz continuum emission, to obtain the characteristic Hi
depletion timescale,
$\langle\rm{M_{H{\textsc{i}}}}\rangle/\langle\textrm{SFR}\rangle$, of the
three subsamples. We find that the characteristic Hi depletion timescale of
blue star-forming galaxies at $z\approx 1$, over the stellar mass range
$\textrm{M}_{*}\approx 10^{9}-2.4\times 10^{11}~{}\textrm{M}_{\odot}$, is
$\approx 2-4$ times lower than that at $z\approx 0$, for blue galaxies with
similar stellar masses. We also divided the galaxies into three sSFR
subsamples, obtaining detections of the stacked Hi 21 cm emission signals in
all three subsamples, at $\approx 4.3-4.4\sigma$ significance. We find that
the ${\rm f_{\rm H{\textsc{i}}}}-$sSFR relation shows evidence for redshift
evolution, with galaxies at $z\approx 1$ having a lower characteristic Hi
fraction, by a factor of $\approx 3$, than what is expected from the
extrapolation of the relation at $z\approx 0$ to higher sSFR values. We thus
find that star-forming galaxies at $z\approx 1$ are able to convert their Hi
reservoirs into stars with much higher efficiency than galaxies at $z\approx
0$. This is unlike the situation for molecular gas, where the efficiency of
conversion of molecular gas to stars in main-sequence galaxies shows no
significant evolution over $z\approx 0-1.5$.
We thank the staff of the GMRT who have made these observations possible. The
GMRT is run by the National Centre for Radio Astrophysics of the Tata
Institute of Fundamental Research. We thank an anonymous referee for
suggestions that improved this manuscript. NK acknowledges support from the
Department of Science and Technology via a Swarnajayanti Fellowship
(DST/SJF/PSA-01/2012-13). AC, NK, $\&$ JNC also acknowledge the Department of
Atomic Energy for funding support, under project 12-R&D-TFR-5.02-0700.
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Bera et al. (2019) Bera, A., Kanekar, N., Chengalur, J. N., & Bagla, J. S. 2019, ApJ, 882, L7, doi: 10.3847/2041-8213/ab3656
* Blyth et al. (2016) Blyth, S., Baker, A. J., Holwerda, B., et al. 2016, in MeerKAT Science: On the Pathway to the SKA, 4
* Brinchmann et al. (2004) Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, MNRAS, 351, 1151, doi: 10.1111/j.1365-2966.2004.07881.x
* Brown et al. (2015) Brown, T., Catinella, B., Cortese, L., et al. 2015, MNRAS, 452, 2479, doi: 10.1093/mnras/stv1311
* Catinella & Cortese (2015) Catinella, B., & Cortese, L. 2015, MNRAS, 446, 3526, doi: 10.1093/mnras/stu2241
* Catinella et al. (2018) Catinella, B., Saintonge, A., Janowiecki, S., et al. 2018, MNRAS, 476, 875, doi: 10.1093/mnras/sty089
* Chengalur et al. (2001) Chengalur, J. N., Braun, R., & Wieringa, M. 2001, A&A, 372, 768, doi: 10.1051/0004-6361:20010547
* Chowdhury et al. (2022a) Chowdhury, A., Kanekar, N., & Chengalur, J. N. 2022a, ApJ, 931, L34, doi: 10.3847/2041-8213/ac6de7
* Chowdhury et al. (2022b) —. 2022b, ApJ, 937, 103. https://arxiv.org/abs/2207.00031
* Chowdhury et al. (2022c) —. 2022c, ApJ, 935, L5, doi: 10.3847/2041-8213/ac8150
* Chowdhury et al. (2020) Chowdhury, A., Kanekar, N., Chengalur, J. N., Sethi, S., & Dwarakanath, K. S. 2020, Nature, 586, 369, doi: 10.1038/s41586-020-2794-7
* Chowdhury et al. (2021) Chowdhury, A., Kanekar, N., Das, B., Sethi, S., & Dwarakanath, K. S. 2021, ApJ, Submitted
* Condon (1992) Condon, J. J. 1992, ARA&A, 30, 575, doi: 10.1146/annurev.aa.30.090192.003043
* Condon et al. (2002) Condon, J. J., Cotton, W. D., & Broderick, J. J. 2002, AJ, 124, 675, doi: 10.1086/341650
* Davé et al. (2019) Davé, R., Anglés-Alcázar, D., Narayanan, D., et al. 2019, MNRAS, 486, 2827, doi: 10.1093/mnras/stz937
* Davé et al. (2020) Davé, R., Crain, R. A., Stevens, A. R. H., et al. 2020, MNRAS, 497, 146, doi: 10.1093/mnras/staa1894
* Delhaize et al. (2013) Delhaize, J., Meyer, M. J., Staveley-Smith, L., & Boyle, B. J. 2013, MNRAS, 433, 1398, doi: 10.1093/mnras/stt810
* Diemer et al. (2018) Diemer, B., Stevens, A. R. H., Forbes, J. C., et al. 2018, ApJS, 238, 33, doi: 10.3847/1538-4365/aae387
* Fabello et al. (2011) Fabello, S., Catinella, B., Giovanelli, R., et al. 2011, MNRAS, 411, 993, doi: 10.1111/j.1365-2966.2010.17742.x
* Fernández et al. (2016) Fernández, X., Gim, H. B., van Gorkom, J. H., et al. 2016, ApJ, 824, L1, doi: 10.3847/2041-8205/824/1/L1
* Genzel et al. (2015) Genzel, R., Tacconi, L. J., Lutz, D., et al. 2015, ApJ, 800, 20, doi: 10.1088/0004-637X/800/1/20
* Gogate et al. (2020) Gogate, A. R., Verheijen, M. A. W., Deshev, B. Z., et al. 2020, MNRAS, 496, 3531, doi: 10.1093/mnras/staa1680
* Guo et al. (2021) Guo, H., Jones, M. G., Wang, J., & Lin, L. 2021, ApJ, 918, 53, doi: 10.3847/1538-4357/ac062e
* Jaffé et al. (2013) Jaffé, Y. L., Poggianti, B. M., Verheijen, M. A. W., Deshev, B. Z., & van Gorkom, J. H. 2013, MNRAS, 431, 2111, doi: 10.1093/mnras/stt250
* Kanekar et al. (2016) Kanekar, N., Sethi, S., & Dwarakanath, K. S. 2016, ApJ, 818, L28, doi: 10.3847/2041-8205/818/2/L28
* Kauffmann et al. (2003) Kauffmann, G., Heckman, T. M., White, S. D. M., et al. 2003, MNRAS, 341, 33, doi: 10.1046/j.1365-8711.2003.06291.x
* Lagos et al. (2018) Lagos, C. d. P., Tobar, R. J., Robotham, A. S. G., et al. 2018, MNRAS, 481, 3573, doi: 10.1093/mnras/sty2440
* Lah et al. (2007) Lah, P., Chengalur, J. N., Briggs, F. H., et al. 2007, MNRAS, 376, 1357, doi: 10.1111/j.1365-2966.2007.11540.x
* Leroy et al. (2008) Leroy, A. K., Walter, F., Brinks, E., et al. 2008, AJ, 136, 2782, doi: 10.1088/0004-6256/136/6/2782
* Madau & Dickinson (2014) Madau, P., & Dickinson, M. 2014, ARA&A, 52, 415, doi: 10.1146/annurev-astro-081811-125615
* Mostek et al. (2012) Mostek, N., Coil, A. L., Moustakas, J., Salim, S., & Weiner, B. J. 2012, ApJ, 746, 124, doi: 10.1088/0004-637X/746/2/124
* Newman et al. (2013) Newman, J. A., Cooper, M. C., Davis, M., et al. 2013, ApJS, 208, 5, doi: 10.1088/0067-0049/208/1/5
* Rhee et al. (2016) Rhee, J., Lah, P., Chengalur, J. N., Briggs, F. H., & Colless, M. 2016, MNRAS, 460, 2675, doi: 10.1093/mnras/stw1097
* Saintonge & Catinella (2022) Saintonge, A., & Catinella, B. 2022, ARA&A, 60, 319, doi: 10.1146/annurev-astro-021022-043545
* Saintonge et al. (2017) Saintonge, A., Catinella, B., Tacconi, L. J., et al. 2017, ApJS, 233, 22, doi: 10.3847/1538-4365/aa97e0
* Salim et al. (2009) Salim, S., Dickinson, M., Michael Rich, R., et al. 2009, ApJ, 700, 161, doi: 10.1088/0004-637X/700/1/161
* Sinigaglia et al. (2022) Sinigaglia, F., Rodighiero, G., Elson, E., et al. 2022, arXiv e-prints, arXiv:2208.01121. https://arxiv.org/abs/2208.01121
* Tacconi et al. (2020) Tacconi, L. J., Genzel, R., & Sternberg, A. 2020, ARA&A, 58, 157, doi: 10.1146/annurev-astro-082812-141034
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2
* Weiner et al. (2009) Weiner, B. J., Coil, A. L., Prochaska, J. X., et al. 2009, ApJ, 692, 187, doi: 10.1088/0004-637X/692/1/187
* Whitaker et al. (2012) Whitaker, K. E., van Dokkum, P. G., Brammer, G., & Franx, M. 2012, ApJ, 754, L29, doi: 10.1088/2041-8205/754/2/L29
* Whitaker et al. (2014) Whitaker, K. E., Franx, M., Leja, J., et al. 2014, ApJ, 795, 104, doi: 10.1088/0004-637X/795/2/104
* White et al. (2007) White, R. L., Helfand, D. J., Becker, R. H., Glikman, E., & de Vries, W. 2007, ApJ, 654, 99, doi: 10.1086/507700
* Willmer et al. (2006) Willmer, C. N. A., Faber, S. M., Koo, D. C., et al. 2006, ApJ, 647, 853, doi: 10.1086/505455
* Yun et al. (2001) Yun, M. S., Reddy, N. A., & Condon, J. J. 2001, ApJ, 554, 803, doi: 10.1086/323145
* Zwaan (2000) Zwaan, M. A. 2000, PhD thesis, Rijksuniversiteit Groningen
## Appendix A Fitting Power-law Relations to Stacked Measurements
We fitted a power-law relation of the form in Equation A1 to our measurements
of the average Hi mass in the three stellar-mass subsamples to determine the
dependence of the Hi mass of star-forming galaxies at $z\approx 1$ on their
stellar mass.
$\log\left[\rm{M_{H{\textsc{i}}}}(\alpha,\beta)/\textrm{M}_{\odot}\right]=\alpha+\beta\log\left[{\textrm{M}_{*}}_{,10}\right]\;.$
(A1)
The fitting was done via a $\chi^{2}$ minimization, taking into account the
stellar-mass distribution of the galaxies in each of the three subsamples.
Specifically, for given trial values of $\alpha$ and $\beta$, we use the
stellar masses of the 11,419 galaxies of our sample in Equation A1 to estimate
their individual Hi masses, ${\rm{M_{H{\textsc{i}}}}}(\alpha,\beta)$. Next, we
use these individual Hi masses to compute the weighted-average Hi mass of the
$i$’th subsample, $\langle{\rm{M_{H{\textsc{i}}}}}(\alpha,\beta)\rangle^{i}$,
with the weights being the same as those used to stack the Hi 21 cm emission
signals of the subsample. Through this procedure, we effectively obtain the
average Hi masses of the three subsamples as a function of $\alpha$ and
$\beta$, assuming that the $\rm{M_{H{\textsc{i}}}}-\textrm{M}_{*}$ relation at
$z\approx 1$ can be described by Equation A1. The parameters $\alpha$ and
$\beta$ are finally obtained by minimising, using a standard steepest-descent approach (the optimization was carried out using an implementation of the Levenberg-Marquardt algorithm in the scipy package; Virtanen et al., 2020), the $\chi^{2}$ given by
$\chi^{2}(\alpha,\beta)=\sum_{i=1}^{3}\left(\frac{\langle{\rm{M_{H{\textsc{i}}}}}\rangle^{i}-\langle{\rm{M_{H{\textsc{i}}}}}(\alpha,\beta)\rangle^{i}}{\sigma^{i}_{\rm{M_{H{\textsc{i}}}}}}\right)^{2}$
(A2)
In the above equation, ${\langle\rm{M_{H{\textsc{i}}}}\rangle}^{i}$ and
$\sigma^{i}_{\rm{M_{H{\textsc{i}}}}}$ are the measurement of the average Hi
mass in the $i$’th subsample and the uncertainty on the measurement,
respectively.
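For concreteness, this procedure can be sketched as follows; the sketch is illustrative rather than the actual survey pipeline, and assumes that `logmstar10` holds $\log\left[{\textrm{M}_{*}}_{,10}\right]$ for all 11,419 galaxies, that `members[i]` and `weights[i]` give the galaxy indices and stacking weights of the $i$'th subsample, and that `mhi_avg` and `mhi_err` contain the measured $\langle\rm{M_{H{\textsc{i}}}}\rangle^{i}$ values and their uncertainties.

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_stacked_mhi(params, logmstar10, members, weights):
    """Weighted-average M_HI of each subsample implied by Equation A1."""
    alpha, beta = params
    mhi = 10.0 ** (alpha + beta * logmstar10)          # per-galaxy M_HI(alpha, beta)
    return np.array([np.average(mhi[idx], weights=w)   # <M_HI(alpha, beta)>^i
                     for idx, w in zip(members, weights)])

def normalized_residuals(params, logmstar10, members, weights, mhi_avg, mhi_err):
    """Residuals whose squared sum is the chi^2 of Equation A2."""
    pred = predicted_stacked_mhi(params, logmstar10, members, weights)
    return (mhi_avg - pred) / mhi_err

# Levenberg-Marquardt minimization of chi^2(alpha, beta), as in the text:
# fit = least_squares(normalized_residuals, x0=[10.0, 0.3], method="lm",
#                     args=(logmstar10, members, weights, mhi_avg, mhi_err))
# alpha_best, beta_best = fit.x
```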
# Generalized Spectral Clustering for Directed and Undirected Graphs
Harry Sevi Matthieu Jonckheere Argyris Kalogeratos
###### Abstract
Spectral clustering is a popular approach for clustering undirected graphs,
but its extension to directed graphs (digraphs) is much more challenging. A
typical workaround is to naively symmetrize the adjacency matrix of the
directed graph, which can however lead to discarding valuable information
carried by edge directionality. In this paper, we present a _generalized
spectral clustering_ framework that can address both directed and undirected
graphs. Our approach is based on the spectral relaxation of a new functional
that we introduce as the generalized Dirichlet energy of a graph function,
with respect to an arbitrary positive regularizing measure on the graph edges.
We also propose a practical parametrization of the regularizing measure
constructed from the iterated powers of the natural random walk on the graph.
We present theoretical arguments to explain the efficiency of our framework in
the challenging setting of unbalanced classes. Experiments using directed
$K$-NN graphs constructed from real datasets show that our graph partitioning
method performs consistently well in all cases, while outperforming existing
approaches in most of them.
## 1 Introduction
Clustering is one of the most popular techniques in analyzing large datasets
and has widespread applications in machine learning, network analysis, and
biology. Typically, when viewing the data as a graph, the problem of
clustering is to partition the graph into several weakly interconnected
clusters. This notion is formalized as a discrete optimization problem aiming
to minimize a functional such as the graph cut or the normalized cut (Von
Luxburg, 2007; Shi & Malik, 2000). The spectral relaxation of this
minimization problem leads to finding the eigenvectors of a certain graph
Laplacian and using these as input features for the $k$-means algorithm. This
forms the backbone of spectral clustering.
Over the last three decades, spectral clustering has become one of the most
widely used clustering methods due to its simplicity, efficiency, and strong
theoretical background (Ng et al., 2002; Peng et al., 2015; Boedihardjo et
al., 2021). Unfortunately, although many graphs carry valuable information in
their directed edges, the vast majority of spectral clustering algorithms only
operate on undirected graphs. Typical examples of directed graphs (digraphs)
are social or content networks, as well as networks with flows (e.g. roads,
electrical networks, rivers). Another fundamental source of digraphs in data
processing comes from the representation of point clouds in $d$ dimensions
through, e.g. $K$-nearest neighbors ($K$-NN) graphs or other kernel-based
procedures.
Therefore, on the one hand, the information encoded in the edge directionality
of digraphs should be used. On the other hand, the extension of the spectral
clustering in the digraph setting is not straightforward. The adjacency matrix
of a digraph is non-symmetric. There thus seems to be no obvious definition of a
symmetric and real-valued graph Laplacian with a full set of real eigenvalues
that uniquely encodes any digraph. The commonly used approach for clustering
digraphs is to build a symmetrized adjacency matrix from the original non-
symmetric one, and then to apply spectral clustering techniques to the graph
Laplacian of it (Satuluri & Parthasarathy, 2011). As explained, this
potentially discards valuable information.
In an attempt to overcome this challenging problem, a slew of works has been
proposed over the last two decades. Zhou et al. (2005) use the Laplacian on
digraphs defined in (Chung, 2005) to perform spectral clustering on digraphs.
Meilă & Pentney (2007) attack the clustering problem on digraphs directly from
the original asymmetric adjacency matrix, through a weighted cut formulation.
Rohe et al. (2016) propose a novel spectral co-clustering on digraphs based on
the singular value decomposition of a modified adjacency matrix. In recent
years, some clustering approaches based on Hermitian operators on digraphs
have been investigated (Cucuringu et al., 2020; Laenen & Sun, 2020). Note that
all the approaches cited above are based on the construction of symmetric
operators on digraphs.
In this paper, we present a unifying spectral clustering framework on directed
and undirected graphs based on the spectral relaxation of a novel energy
functional, which in turn allows us to generalize graph Laplacians. This
functional is termed _generalized Dirichlet energy_ (GDE) as it extends the
well-known notion of Dirichlet energy. In particular, GDE is defined with
respect to any positive regularizing measure and any Markov transition matrix.
We propose for practical use a parametrized family of such measures. The
resulting _generalized spectral clustering_ (GSC) approach extends standard
spectral clustering, usually applied on strongly connected digraphs (Zhou et
al., 2005; Palmer & Zheng, 2020), to any digraphs. More importantly, it
achieves that without involving PageRank’s teleporting random walk (Page
et al., 1999) as a surrogate of the natural random walk.
The rest of the paper is organized as follows. In Sec. 2, we present basic
concepts of graph theory and Markov chains. In Sec. 3, we introduce the
generalized Dirichlet energy and the new generalized graph Laplacians. In Sec.
4, we present the formulation of the GSC and an algorithmic scheme. In Sec. 5,
we provide theoretical results proving the efficiency of GSC in the
challenging setting of unbalanced clusters compared to classical spectral
clustering. Sec. 6 includes our extensive experimental study on a toy dataset
and various real-world UCI datasets. Technical proofs are provided in the
Appendix.
## 2 Preliminaries and background
Essential concepts. Let $\mathcal{G}=(\mathcal{V},\mathcal{E},w)$ be a
weighted directed graph (digraph) where $\mathcal{V}$ is the finite set of
$N=|\mathcal{V}|$ vertices, and
$\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is the finite set of edges.
Each edge $(x,y)$ is an ordered vertex pair representing the direction of a
link from vertex $x$ to vertex $y$.
Any function $\nu:\mathcal{V}\rightarrow\mathbb{R}_{+}$, associating a
nonnegative value to each graph vertex, can be regarded as a positive vertex
measure; respectively any function $q:\mathcal{E}\rightarrow\mathbb{R}_{+}$
can be regarded as a positive edge measure.
The edge weight function
$w:\mathcal{V}\times\mathcal{V}\rightarrow\mathbb{R}_{+}$ associates a
nonnegative real value to every vertex pair: $w(x,y)>0$ if
$(x,y)\in\mathcal{E}$, and $w(x,y)=0$ otherwise.
The graph $\mathcal{G}$ can be entirely represented by its weighted adjacency
matrix $\mathbf{W}=\\{w_{ij}\\}_{i,j=1}^{N}\in\mathbb{R}_{+}^{N\times N}$,
where $w_{ij}=w(x_{i},x_{j})$ is the weight of the edge $(x_{i},x_{j})$. We
define the out-degree and the in-degree of the $i$-th vertex by $\textstyle
d_{i}^{+}=\sum_{j=1}^{N}w_{ij}$ and $d_{i}^{-}=\sum_{j=1}^{N}w_{ji}$,
respectively. Also, the function $\operatorname{diag}(\nu)$ returns a square
diagonal matrix with the elements of the input vector $\nu$ in its diagonal.
Given a subset of vertices $S\subseteq\mathcal{V}$, we denote its complement
by $\bar{S}=\mathcal{V}\backslash S$. Also, we denote the characteristic
function of a set $S$ by
$\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}}\in\\{0,1\\}^{N}$, which
gives $\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}}(x)=1$, iff $x\in S$,
and $\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}}(x)=0$ otherwise.
Consider a graph function ${f}$, identified with the $N$-dimensional complex
column vector collecting its values at all vertices:
$f=[\,f(x)\,]_{x\in\mathcal{V}}^{{\mkern-1.5mu\mathsf{T}}}\in\mathbb{C}^{N}$.
We assume that graph functions belong to $\ell^{2}(\mathcal{V},\nu)$, the
Hilbert space of functions defined over the vertex set $\mathcal{V}$ of
$\mathcal{G}$, endowed with the inner product below, where $\nu$ is a positive
vertex measure. For all $f,g\in\ell^{2}(\mathcal{V},\nu)$ it holds:
$\langle{f},{g}\rangle_{\nu}=\sum_{x\in\mathcal{V}}\overline{f(x)}g(x)\nu(x),$
where $\overline{f(x)}$ denotes the complex conjugate of $f(x)$.
What we call, in short, a random walk on a weighted graph $\mathcal{G}$ is,
more formally, the natural random walk on the graph: a homogeneous Markov
chain $\mathcal{X}=(X_{t})_{t\geq 0}$ with finite state space $\mathcal{V}$
and state transition probabilities proportional to the edge weights.
The entries of the transition matrix
$\mathbf{P}=[\,p(x,y)\,]_{x,y\in\mathcal{V}}$ are defined by:
$p(x,y)=\mathbb{P}(X_{t+1}=y\,|\,X_{t}=x)=\frac{w(x,y)}{\sum_{z\in\mathcal{V}}w(x,z)}.$
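As a concrete illustration (a minimal numpy sketch, with names of our choosing), $\mathbf{P}$ is obtained by row-normalizing the weighted adjacency matrix by the out-degrees:

```python
import numpy as np

def transition_matrix(W: np.ndarray) -> np.ndarray:
    d_out = W.sum(axis=1)                 # out-degrees d_i^+
    if np.any(d_out == 0):                # a vertex without out-edges has
        raise ValueError("dangling vertex: p(x, .) is undefined")  # no row
    return W / d_out[:, None]             # p(x, y) = w(x, y) / sum_z w(x, z)
```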
For a directed and strongly connected $\mathcal{G}$, the random walk
$\mathcal{X}$ is irreducible. Under mild conditions, $\mathcal{X}$ is also
ergodic, and therefore as $t\rightarrow\infty$, the measures $p^{t}(x,\cdot)$,
$\forall x\in\mathcal{V}$, converge towards the _unique_ stationary
distribution denoted by the row vector $\pi\in\mathbb{R}_{+}^{N}$ (Brémaud,
2013). Within the undirected setting: $d_{i}^{+}=d_{i}^{-}=d_{i}$, where
$d\in\mathbb{R}_{+}^{N\times 1}$ is the vector of the vertex degrees;
moreover, the ergodic distribution is proportional to the vertex degree
distribution, i.e. $\pi\propto d$.
The emphasis of our presentation is put on digraphs, however, _our theoretical
framework applies to any type of graph with nonnegative weights_.
Dirichlet energy and graph Laplacians.
In the literature of Dirichlet forms (Saloff-Coste, 1997; Montenegro et al.,
2006) or harmonic analysis on graphs (Sevi et al., 2018), the definition of
the Dirichlet energy of a graph function $f$ is usually as follows.
###### Definition 2.1.
Dirichlet energy of a graph function. Let $\mathcal{X}$ be a random walk on a
digraph $\mathcal{G}$, with transition matrix $\mathbf{P}$. Let also be the
ergodic distribution $\pi$ of the random walk, and $\pi(x)$ referring to
vertex $x\in\mathcal{V}$. The Dirichlet energy of a graph function $f$ is
defined by:
$\mathcal{D}(f)=\sum_{x,y\in\mathcal{V}}\pi(x)p(x,y)|f(x)-f(y)|^{2}.$ (1)
This quantity can also be expressed in quadratic form:
$\mathcal{D}(f)=2\,\langle
f,\mathbf{L}_{\textnormal{RW}}f\rangle_{\pi}=2\,\langle f,\mathbf{L}f\rangle.$
In this form, the Dirichlet energy reveals the random walk Laplacian
$\mathbf{L}_{\textnormal{RW}}$ and equivalently the unnormalized Laplacian
$\mathbf{L}$ on directed graphs (Chung, 2005; Sevi et al., 2018). These
matrices are formally defined as follows:
$\mathbf{L}_{\textnormal{RW}}=\mathbf{I}-\frac{1}{2}(\mathbf{P+\Pi^{-1}P^{{\mkern-1.5mu\mathsf{T}}}\Pi}),$ (2)
$\mathbf{L}=\mathbf{\Pi}-\frac{1}{2}(\mathbf{\Pi P+P^{{\mkern-1.5mu\mathsf{T}}}\Pi}),$ (3)
where $\mathbf{I}$ is the identity matrix of suitable size (here $N$), and
$\boldsymbol{\Pi}=\operatorname{diag}(\pi)$ is the diagonal matrix associated
with an ergodic measure $\pi$.
It is worth mentioning that $\mathbf{L}_{\textnormal{RW}}$ and $\mathbf{L}$ on
directed graphs, are the counterpart of the random walk Laplacian and
unnormalized Laplacian on undirected graphs. For undirected graphs it holds
$\boldsymbol{\Pi}\propto\mathbf{D}=\operatorname{diag}(d)$ and
$\mathbf{P}=\mathbf{D}^{-1}\mathbf{W}$; therefore, in that case the random
walk and unnormalized Laplacians become respectively:
$\mathbf{L}_{\textnormal{RW}}=\mathbf{I-P}$ and
$\mathbf{L}=\mathbf{D}-\mathbf{W}$.
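For concreteness, a minimal numpy sketch of Eqs. (2) and (3), assuming $\mathbf{P}$ is ergodic so that $\pi$ exists and is unique (function names are ours, for illustration):

```python
import numpy as np

def stationary_distribution(P: np.ndarray) -> np.ndarray:
    # Left eigenvector of P for eigenvalue 1, normalized to a probability.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def directed_laplacians(P: np.ndarray):
    pi = stationary_distribution(P)
    Pi = np.diag(pi)
    I = np.eye(P.shape[0])
    L_rw = I - 0.5 * (P + np.linalg.inv(Pi) @ P.T @ Pi)   # Eq. (2)
    L = Pi - 0.5 * (Pi @ P + P.T @ Pi)                    # Eq. (3)
    return L_rw, L
```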
## 3 Generalized Dirichlet energy and Laplacians on graphs
We have seen in the previous section the conventional way certain concepts
appear in the literature. In this section, we introduce the generalized
Dirichlet energy (GDE), which is defined under an arbitrary positive measure
$q$ over the graph edges, and the associated generalized graph Laplacians.
These concepts constitute the foundation of our framework.
###### Definition 3.1.
Generalized Dirichlet Energy of a graph function. Let $q$ be an arbitrary
positive edge measure on a digraph $\mathcal{G}$, and
$\mathbf{Q}=\\{\,q(x,y)\,\\}_{x,y\in\mathcal{V}}$ the edge measure operator.
The generalized Dirichlet energy of a graph function ${f}$ associated with the
edge measure $q$ on $\mathcal{G}$ is expressed as:
$\mathcal{D}_{\mathbf{Q}}^{2}(f)=\sum_{x,y\in\mathcal{V}}q(x,y)|f(x)-f(y)|^{2}.$
(4)
The broad generality of this definition stems from the fact that it integrates
all the graph-related information into the arbitrary positive edge measure
$q$, thus its operator $\mathbf{Q}$. As can be noted, Definition 2.1 is a
particular case of our generalized form, as $\mathcal{D}_{\mathbf{Q}}^{2}(f)$
= $\mathcal{D}(f)$, when $q(x,y)=\pi(x)p(x,y)$. More generally, $q(x,y)$ can
be a function combining an arbitrary vertex measure $\nu$ and an edge measure
based on the transition matrix $\mathbf{P}=[\,p(x,y)\,]_{x,y\in\mathcal{V}}$
of the random walk $\mathcal{X}$ on $\mathcal{G}$. We refine accordingly the
GDE of a graph function ${f}$ associated with a random walk as:
$\mathcal{D}_{\nu,\mathbf{P}}^{2}(f)=\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)|f(x)-f(y)|^{2}.$
(5)
This formulation suggests that we can go further and derive interesting energy
functionals by replacing the stationary distribution $\pi$ of the random walk
with other more sophisticated or better adapted vertex measures for digraphs.
Note that, although $\nu$ can be an arbitrary measure, it is easy to see that
replacing it by its $\ell^{1}$-normalized counterpart
$\nu^{\prime}=\frac{\nu}{|\\!|\nu|\\!|_{1}}$ would merely scale the GDE of Eq.
(5) by $\frac{1}{|\\!|\nu|\\!|_{1}}$. Therefore, we could safely restrict
$\nu$ to be a probability vertex measure.
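As an illustrative sketch (numpy conventions assumed; names are ours), the GDE of Eq. (5) can be evaluated directly from $f$, $\nu$, and $\mathbf{P}$:

```python
import numpy as np

def generalized_dirichlet_energy(f: np.ndarray, nu: np.ndarray,
                                 P: np.ndarray) -> float:
    diff2 = np.abs(f[:, None] - f[None, :]) ** 2   # |f(x) - f(y)|^2
    Q = nu[:, None] * P                            # q(x, y) = nu(x) p(x, y)
    return float(np.sum(Q * diff2))                # Eq. (5)
```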
We are now ready to introduce the generalized graph Laplacians that rely on
the GDE of Eq. (5).
###### Definition 3.2.
Generalized graph Laplacians. Let $\mathcal{X}$ be a random walk on a digraph
$\mathcal{G}$, with transition matrix $\mathbf{P}$. Under an arbitrary
positive vertex measure $\nu$ on $\mathcal{G}$, consider the positive vertex
measure:
$\xi(y)=\sum_{x\in\mathcal{V}}\nu(x)p(x,y),\quad\forall y\in\mathcal{V}.$ (6)
Define the diagonal matrices $\mathbf{N}=\operatorname{diag}(\nu)$ and
$\mathbf{\Xi}=\operatorname{diag}(\xi)$. The generalized random walk Laplacian
and the unnormalized generalized Laplacian on $\mathcal{G}$ are defined by:
$\mathbf{L}_{\textnormal{RW}}(\nu)=\mathbf{I-(I+N^{-1}\Xi)^{-1}(P+N^{-1}P^{{\mkern-1.5mu\mathsf{T}}}N)},$
(7)
$\mathbf{L}(\nu)=\mathbf{N+\Xi-(NP+P^{{\mkern-1.5mu\mathsf{T}}}N}).$ (8)
$\mathbf{L}_{\textnormal{RW}}(\nu)$ and $\mathbf{L}(\nu)$ extend the graph
Laplacians defined in Eq. (2) and Eq. (3). Moreover, $\mathbf{L}(\nu)$ has the
property of being self-adjoint in $\ell^{2}(\mathcal{V})$, i.e.
$\mathbf{L}(\nu)=\mathbf{L}(\nu)^{{\mkern-1.5mu\mathsf{T}}}$.
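A minimal numpy sketch of Definition 3.2 (names are ours); for Eq. (7) we use the algebraic simplification $(\mathbf{I}+\mathbf{N}^{-1}\mathbf{\Xi})^{-1}(\mathbf{P}+\mathbf{N}^{-1}\mathbf{P}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{N})=(\mathbf{N}+\mathbf{\Xi})^{-1}(\mathbf{NP}+\mathbf{P}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{N})$:

```python
import numpy as np

def generalized_laplacians(P: np.ndarray, nu: np.ndarray):
    xi = nu @ P                            # Eq. (6): xi(y) = sum_x nu(x) p(x, y)
    N_ = np.diag(nu)
    S = N_ @ P + P.T @ N_                  # NP + P^T N (a symmetric matrix)
    L = np.diag(nu + xi) - S               # Eq. (8); self-adjoint by construction
    L_rw = np.eye(len(nu)) - np.diag(1.0 / (nu + xi)) @ S   # Eq. (7), simplified
    return L_rw, L
```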
Next, we establish the connection between the GDE and the generalized random
walk Laplacian.
###### Proposition 3.1.
Let $\mathcal{X}$ be a random walk on a digraph $\mathcal{G}$, with transition
matrix $\mathbf{P}$. Let $\nu$ be an arbitrary positive vertex measure on
$\mathcal{G}$, and $\xi$ be the vertex measure defined by Eq. (6). The
generalized Dirichlet energy of a graph function $f$ and the generalized
random walk Laplacian $\mathbf{L}_{\textnormal{RW}}(\nu)$ of Eq. (7) are
associated as follows:
$\mathcal{D}_{\nu,\mathbf{P}}^{2}(f)=\langle{f},\mathbf{L}_{\textnormal{RW}}(\nu){f}\rangle_{\nu+\xi}.\,$
As we can appreciate, $\mathbf{L}_{\textnormal{RW}}(\nu)$ is self-adjoint in
$\ell^{2}(\mathcal{V},\nu+\xi)$.
Finally, we introduce the normalized GDE (also known as Rayleigh quotient) of
a graph function $f$:
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\nu,\mathbf{P}}(f)=\frac{\mathcal{D}_{\nu,\mathbf{P}}^{2}(f)}{\|{f}\|_{\nu+\xi}^{2}}.$
(9)
Given a positive vertex measure $\mu$, we can introduce a parametrized vertex
measure $\nu_{t}$, $t\geq 0$, derived from the iterated powers of the natural
random walk on a given graph:
$\nu_{t}(x)={\textstyle\mu}^{{\mkern-1.5mu\mathsf{T}}}{\mathbf{P}}^{t}{\delta}_{x}\,,\quad\mu\in\mathbb{R}_{+}^{N}\,,$
(10)
where $\delta_{x}\in\\{0,1\\}^{N\times 1}$ is the vector output of the
Kronecker delta function at $x\in\mathcal{V}$. We can now derive the following
proposition for the ergodic setting.
###### Proposition 3.2.
Let $\mathcal{X}$ be an ergodic random walk on a digraph $\mathcal{G}$, whose
transition matrix is $\mathbf{P}$, with stationary distribution $\pi$. As
$t\rightarrow\infty$, we have:
$\lim_{t\to\infty}\mathcal{D}_{\nu_{t},\mathbf{P}}^{2}(f)=\mathcal{D}_{\pi,\mathbf{P}}^{2}(f).$
This interesting result indicates that, as $t\to\infty$, the GDE associated
with the transition matrix $\mathbf{P}$ of a graph function $f$, under a
parametrized vertex measure $\nu_{t}\\!$, coincides with the respective energy
of an ergodic random walk under the usual unnormalized Laplacian
$\mathbf{L}(\pi)$.
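A short sketch of the iterated-power measure of Eq. (10), which also allows checking Proposition 3.2 numerically (for an ergodic $\mathbf{P}$ and a probability vector $\mu$, $\nu_{t}$ approaches $\pi$):

```python
import numpy as np

def nu_t(mu: np.ndarray, P: np.ndarray, t: int) -> np.ndarray:
    """Eq. (10): nu_t = mu^T P^t, computed by repeated left-multiplication."""
    nu = mu.astype(float)
    for _ in range(t):
        nu = nu @ P          # propagate the measure one random-walk step
    return nu                # ergodic P, probability mu: nu_t -> pi (Prop. 3.2)
```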
## 4 Generalized spectral clustering on graphs
This section presents our general spectral clustering formulation for any type
of graphs, which is based on the GDE and the generalized graph Laplacians.
### 4.1 Graph partitioning functional based on GDE
We commence with some reminders and additional preliminary concepts. Recall
that $\mathcal{G}$ is a digraph of $N=|\mathcal{V}|$ vertices, and
$\mathcal{X}=(X_{t})_{t\geq 0}$ is a natural random walk on $\mathcal{G}$,
with transition matrix $\mathbf{P}=[\,p(x,y)\,]_{x,y\in\mathcal{V}}$. In the
general setting, $\mathcal{X}$ may be transient, thus not having an ergodic
distribution. Let $\nu:\mathcal{V}\rightarrow\mathbb{R}_{+}$ be a vertex
measure, and $\nu(S)$ be its evaluation over a subset $S\subseteq\mathcal{V}$:
$\nu(S)=\sum_{x\in S}\nu(x)$.
Now, let $q:\mathcal{E}\rightarrow\mathbb{R}_{+}$ be a composite edge measure
such that $q(x,y)=\nu(x)p(x,y)$. Respectively, consider the edge measure
between two disjoint vertex subsets $S,U\subseteq\mathcal{V}$ by:
$q(S,U)=\sum_{x\in S,y\in U}q(x,y)=\sum_{x\in S,y\in U}\nu(x)p(x,y)=\mathbb{P}(X_{t}\in S,X_{t+1}\in U),\quad\text{for any }t\geq 0.$ (11)
$q(S,U)$ is a generic measure related to Markov chains (Sinclair, 1992; Levin
& Peres, 2017). In our setting, it quantifies the probability that the random
walk escapes from the set $S$ to $U$ in one step, when the starting vertex of
the walk is drawn according to the arbitrary vertex measure $\nu$. When
considering $U=\bar{S}$, this discussion becomes very interesting for graph
partitioning. In essence, $q(S,\bar{S})$ offers a _probabilistic point of view
over the graph cut_ between a set $S$ and the rest of the graph (Meilă & Shi,
2001).
###### Proposition 4.1.
Let $\mathcal{X}$ be a random walk on a digraph $\mathcal{G}$, with transition
matrix $\mathbf{P}$. Let $\nu$ be a positive vertex measure, and $q$ be a
positive edge measure, both on $\mathcal{G}$. Let $S\subseteq\mathcal{V}$ and
$\bar{S}=\mathcal{V}\backslash S$. Consider the characteristic function
$\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}}$, associated with the set
$S$, as a graph function. The composite edge measure $q(S,\bar{S})$ and the
generalized Dirichlet energy
$\mathcal{D}_{\nu,\mathbf{P}}^{2}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}})$
are related as follows:
$q(S,\bar{S})+q(\bar{S},S)=\mathcal{D}_{\nu,\mathbf{P}}^{2}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}}).$
(12)
To bring this discussion closer to the clustering setting, we can imagine
$\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}}$ to be a _decision
function_ produced by some algorithm that aims to partition the graph into two
parts. In that sense, Eq. (12) offers a meaningful interpretation of the GDE
of any graph partitioning decision vector:
$\mathcal{D}_{\nu,\mathbf{P}}^{2}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}})$
quantifies how difficult it is for a random walk with transitions $q$ to
escape from $S$ and reach $\bar{S}$, or vice versa.
Note also the symmetry, i.e.
$\mathcal{D}_{\nu,\mathbf{P}}^{2}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$S$}}})=\mathcal{D}_{\nu,\mathbf{P}}^{2}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{${\bar{S}}$}}})$.
The multiway graph partitioning problem aims to partition a digraph into a
given number of disjoint subgraphs, such that the edge density among them is
minimal. Given a $k$-partition of the graph vertices, denoted by
$\boldsymbol{V}=\\{V_{i}\\}_{i=1}^{k}$, where
$\bigcup_{i=1}^{k}\\!\\!V_{i}=\mathcal{V}$, then, under an arbitrary vertex
measure $\nu$, we define the _partition’s Dirichlet energy_ by:
$\mathcal{D}_{\nu,\mathbf{P}}^{2}(\boldsymbol{V})=\sum_{i=1}^{k}\mathcal{D}_{\nu,\mathbf{P}}^{2}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}}).$
(13)
This makes concrete our interest in finding a $k$-partition of the digraph
$\mathcal{G}$ that has minimal GDE
$\mathcal{D}_{\nu,\mathbf{P}}^{2}(\boldsymbol{V})$. Let us rewrite the energy
of the $k$-partition as:
$\mathcal{D}_{\nu,\mathbf{P}}^{2}(\boldsymbol{V})=\textnormal{tr}(\mathbf{U}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{L}(\nu)\mathbf{U}),$
where $\mathbf{U}=[\,u_{i}\,]_{i=1}^{k}\in\mathbb{R}^{N\times k}$ is a matrix
whose $i$-th column is
$u_{i}=\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}}$. Therefore, the
_generalized Dirichlet graph partitioning problem_ can be formulated as:
$\min_{\boldsymbol{V}=\\{V_{1},...,V_{k}\\}}\textnormal{tr}(\mathbf{U}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{L}(\nu)\mathbf{U})\quad\textnormal{s.t.}\>u_{i}=\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}},\>\forall
i\in\\{1,...,k\\}.$
As with the classical graph partitioning problem, the generalized Dirichlet
graph partitioning problem is NP-hard. We thus proceed as in spectral
clustering, by relaxing the
combinatorial constraint of $\mathbf{U}$ and seeking instead a solution among
all matrices $\mathbf{U}$ with orthonormal columns. For a given arbitrary
measure $\nu$, the relaxed problem becomes:
$\min_{\mathbf{U}}\
\textnormal{tr}(\mathbf{U}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{L}(\nu)\mathbf{U})\quad\textnormal{s.t}.\>\mathbf{U}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{U}=\mathbf{I}_{k},$
(14)
whose solution $\mathbf{U}$ can be shown to be the eigenvectors corresponding
to the $k$ smallest eigenvalues of the unnormalized generalized Laplacian
$\mathbf{L}(\nu)$.
The novelty of our framework lies in the definition of a generalized Laplacian
associated with an arbitrary positive measure, and its connection to a general
formulation of spectral clustering. In the case where $\nu$ is the stationary
measure $\pi$, the spectral relaxation of the normalized Dirichlet energy Eq.
(9) leads to the approach for strongly connected digraphs proposed by (Zhou et
al., 2005).
### 4.2 The GSC algorithm
The framework we have presented so far led to Eq. (14), which relies on an
arbitrary positive vertex measure $\nu$ to compute a set of $k$ decision
functions
$\\{\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}}\\}_{i=1}^{k}$ of
minimal GDE. Each $\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}}$
indicates one of the $k$ pairwise disjoint vertex clusters versus the rest of
the graph.
To render our framework more flexible for practical use, we extend what was
previously described in Eq. (10) by introducing a parametrized vertex measure
$\nu$ derived from the iterated powers of the natural random walk on a graph.
Specifically, we consider three optional points of parametrization: i) the
number of iterations $t\in\mathbb{N}$ of the random walk, ii) a uniform mixing
parameter $\gamma\in[0,1]$ for the transition matrix, and iii) an exponent
$\alpha\in\mathbb{R}$. Therefore, at a given vertex $x$, the proposed measure
is given by:
$\nu_{(t,\gamma)}^{\alpha}(x)=\left({\textstyle\frac{1}{N}}\boldsymbol{1}_{N\times
1}^{{\mkern-1.5mu\mathsf{T}}}\tilde{\mathbf{P}}_{\\!\\!\gamma}^{t}{\delta}_{x}\right)^{\alpha},$
(15)
where $\boldsymbol{1}_{N\times 1}$ is the all-ones vector,
$\delta_{x}\in\\{0,1\\}^{N\times 1}$ is the vector output of the Kronecker
delta function at $x\in\mathcal{V}$, and
$\tilde{\mathbf{P}}_{\\!\\!\gamma}=\gamma\mathbf{P}+(1-\gamma){\textstyle\frac{1}{N}}\boldsymbol{1}_{N\\!\times\\!N}\,,\quad\gamma\in[0,1].$
(16)
Note that $\tilde{\mathbf{P}}_{\\!\\!\gamma}$ is a dense matrix, since it is a
convex combination of the original transition matrix and a uniform edge
measure (in the form of a complete graph
$\frac{1}{N}\boldsymbol{1}_{N\\!\times\\!N}$). Moreover, $\lim_{\gamma\to
1}\tilde{\mathbf{P}}_{\\!\\!\gamma}=\mathbf{P}$. Interestingly,
$\tilde{\mathbf{P}}_{\\!\\!\gamma}$ makes us recall the teleporting random
walk (Page et al., 1999); this connection is discussed in Sec. 5.1.
Plugging $\nu_{(t,\gamma)}^{\alpha}$ to Eq. (5), gives us the expression of
the GDE of one decision function
$\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}}$ (seen as a graph
function), under this new composite edge measure:
$\mathcal{D}_{\nu_{(t,\gamma)}^{\alpha}\\!,\mathbf{P}}^{2}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}})=\langle\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}},\mathbf{L}_{t,\gamma}^{(\alpha)}\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{i}$}}}\rangle.$
(17)
For a given $\alpha$, and for any $t\in\mathbb{N}$, our derived _generalized
spectral graph partitioning problem_ , associated with the GDE of all the
cluster-related decision functions, that we expressed earlier as
$\mathcal{D}_{\nu_{(t,\gamma)}^{\alpha}\\!,\mathbf{P}}^{2}(\boldsymbol{V})$,
is:
$\min_{\mathbf{U}}\
\textnormal{tr}(\mathbf{U}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{L}_{t,\gamma}^{(\alpha)}\mathbf{U})\quad\textnormal{s.t}.\>\mathbf{U}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{U}=\mathbf{I}_{k}.$
(18)
Simply put, the GSC algorithm employs the same optimization procedure (see
Alg. 1) as the classical spectral clustering. The novelty is that we rely on a
generalized graph Laplacian $\mathbf{L}_{t,\gamma}^{(\alpha)}$ and compute its
eigenvectors $\mathbf{U}_{t,\gamma}^{(\alpha)}\in\mathbb{R}^{N\times k}$.
Letting the random walk iteration (time) parameter vary, provides a set of
eigenmaps $\\{\mathbf{U}_{t,\gamma}^{(\alpha)}\\}_{t=1}^{t_{\max}}$, for each
of which we obtain a suggested graph $k$-partition using $k$-means. The
different partitions can be compared according to suitable quality metrics to
select the best one. It is also possible to obtain several partitions of the
same quality, which means that there exists a subset of generalized Laplacians
whose embeddings produce similar graph partitions.
Algorithm 1 Generalized Spectral Clustering (GSC)
Input: $\mathbf{W}$: weighted adjacency matrix; $k$: number of clusters;
$\gamma$: the uniform mixing parameter (see Eq. (16)); $\alpha$: power (see
Eq. (15)); $t_{\max}$: maximum number of power iterations (representing time)
to perform over the transition matrix of the natural random walk.
Output: $\\{\boldsymbol{V}^{(\alpha)}_{t,\gamma}\\}_{t=0}^{t_{\max}}$: the
graph $k$-partition for each time $t$.
1: for $t=0$ to $t_{\max}$ do
2: Compute the generalized Laplacian $\mathbf{L}_{t,\gamma}^{(\alpha)}$, see
Eq. (8).
3: Compute $\mathbf{U}_{t,\gamma}^{(\alpha)}\in\mathbb{R}^{N\times k}$ whose
columns are the eigenvectors corresponding to the $k$ smallest eigenvalues of
$\mathbf{L}_{t,\gamma}^{(\alpha)}$.
4: Consider each $x_{i}\in\mathbb{R}^{k}$, $i=1,...,N$, to be the embedding of
the $i$-th vertex, represented by the $i$-th row of
$\mathbf{U}_{t,\gamma}^{(\alpha)}$, and apply a clustering method ($k$-means)
to all these vectors.
5: Obtain the $k$-partition
$\boldsymbol{V}^{(\alpha)}_{t,\gamma}=\big{\\{}V_{j,t,\gamma}^{(\alpha)}\big{\\}}_{j=1}^{k}$
of the graph vertices based on the clustering result of Step 4.
6: end for
7: return $\\{\boldsymbol{V}^{(\alpha)}_{t,\gamma}\\}$, for all
$t\in[0,...,t_{\max}]$.
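To make Algorithm 1 concrete, the following is a compact Python sketch using numpy, scipy, and scikit-learn; all names are illustrative, and this is not the authors' reference implementation. Following the discussion of Sec. 5.1, the measure $\nu_{(t,\gamma)}^{\alpha}$ is propagated with $\tilde{\mathbf{P}}_{\\!\\!\gamma}$, while the Laplacian of Eq. (8) is built from the natural transition matrix $\mathbf{P}$:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def gsc(W, k, gamma, alpha, t_max, n_init=100):
    N = W.shape[0]
    d_out = W.sum(axis=1)
    P = W / np.where(d_out > 0, d_out, 1.0)[:, None]           # natural random walk
    P_gamma = gamma * P + (1.0 - gamma) * np.ones((N, N)) / N  # Eq. (16)

    partitions = []
    nu = np.full(N, 1.0 / N)              # t = 0: the uniform measure of Eq. (15)
    for t in range(t_max + 1):
        nu_a = nu ** alpha                # nu_(t,gamma)^alpha
        xi = nu_a @ P                     # Eq. (6), w.r.t. the natural walk P
        L = np.diag(nu_a + xi) - (np.diag(nu_a) @ P + P.T @ np.diag(nu_a))  # Eq. (8)
        _, U = eigh(L, subset_by_index=[0, k - 1])   # k smallest eigenpairs
        partitions.append(KMeans(n_clusters=k, n_init=n_init).fit_predict(U))
        nu = nu @ P_gamma                 # advance the regularizing measure
    return partitions
```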
## 5 Discussion
### 5.1 Misconception about the use of the teleporting random walk for non-strongly connected digraphs
A frequently encountered misconception about how to deal with non-strongly
connected digraphs in graph machine learning tasks, such as spectral
clustering (Zhou et al., 2005) or node classification (Peach et al., 2020),
concerns the use of the teleporting random walk (Page et al., 1999). This
particular type of random walk is ergodic, however it is used as a substitute
for the natural random walk that is generally non-ergodic in the digraph
setting. In this sense, we realize that the teleporting random walk has been
seen mainly as a trick to overcome non-ergodicity and be consistent with the
standard ergodic theoretical framework.
Nevertheless, the use of the teleporting random walk as a direct proxy for the
natural random walk may potentially bring disadvantages. Firstly, introducing
teleportation, from any vertex to any other vertex, is equivalent to mixing
with an unweighted complete graph. Consequently, this may modify drastically
the graph topology and cause non-local perturbations to the random walk
dynamics. Secondly, teleportation imposes ergodicity even when the natural
random walk on the given graph is not ergodic. Hence, the conclusions
drawn when using this approach may be questionable (Schaub et al., 2019).
In our framework, we rather propose to incorporate teleportation as a
regularizing measure of the GDE (see the involvement of
$\tilde{\mathbf{P}}_{\\!\\!\gamma}$ in Eq. (15)) without changing the
structure of the random walk itself (see that Eq. (18) minimizes
$\mathcal{D}_{\nu_{(t,\gamma)}^{\alpha}\\!,\mathbf{P}}^{2}(\boldsymbol{V})$
that still depends on the original $\mathbf{P}$).
### 5.2 Why use measure-regularized graph operators: the special case of unbalanced data
To explain why our generalized graph Laplacian operator can be instrumental
in improving the performance of vanilla spectral clustering (VSC), we consider a
toy model where the latter might fail depending on how unbalanced and
separable the data classes are, whereas a reasonably tuned regularized
operator will be successful.
We first analyze a mathematical caricature of a $K$-NN graph representing two
clusters, and we then provide a simple experimental validation on a toy
dataset. For simplicity, we consider an undirected graph (since the underlying
stationary measure is explicit there and leads to easy explicit computations),
but the argument could be generalized for directed graphs. Here, we use as
regularizing measure a simplified version of our proposal:
$\nu(x)=\pi(x)^{\alpha}$, where $\pi$ is the stationary measure proportional
to vertex degrees.
###### Proposition 5.1.
Consider a two-cluster graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ of
$|\mathcal{V}|=N$ vertices whose ground truth is indicated by the sets
$V_{1}^{*}$ and $V_{2}^{*}$ with cardinality $|V_{1}^{*}|=N_{1}^{*}$ and
$|V_{2}^{*}|=N_{2}^{*}$, respectively, such that $V_{1}^{*}\cup
V_{2}^{*}=\mathcal{V},V_{1}^{*}\cap V_{2}^{*}=\emptyset$. Given a set
$V\subset\mathcal{V}$, we define the internal frontier set of $V$ as
$\partial^{-}(V)=\\{x\in V:\exists\,y\in\bar{V}\ \textnormal{s.t.}\ (x,y)\in\mathcal{E}\\}.$
Let $N_{1}=N_{1}^{*}-\big{|}\partial^{-}(V_{1}^{*})\big{|}$ and
$N_{2}=N_{2}^{*}-\big{|}\partial^{-}(V_{2}^{*})\big{|}$ be respectively the
numbers of interior points of $V_{1}^{*}$ and $V_{2}^{*}$. There is a frontier
of $c_{N}=c\,\omega_{N}$ points, such that $N_{1}+N_{2}+c_{N}=N$. Assume
further that the number of neighbors is constant, equal to $\epsilon_{N}$
inside the clusters, and equal to $\rho\,\epsilon_{N}$, $\rho<1$, along the
frontier (this amounts to a separability assumption). We also assume that
cutting inside the clusters leads to a frontier of order $\omega_{N}$ points;
when this holds, we consider subsets $V_{1}$ and $V_{2}$ with respectively
$N_{1}$ and $N_{2}$ interior points such that $N_{1}+N_{2}+\omega_{N}=N$.
Simple computations lead to the following property.
Assume ${\frac{\omega_{N}}{N}}\to 0$, ${\frac{N_{1}^{*}}{N}}\to b$,
$\frac{N_{2}^{*}}{N}\to 1-b$.
Then, if $c>b$, there exists a nonempty set $V\subset V_{2}^{*}$ such that:
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}$}}})>\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}\cup
V$}}}).$
On the other hand, if $\alpha>\frac{\log(b/c)}{\log(\rho)}$, then for all
$V\subset V_{2}^{*}$
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}$}}})<\
\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}\cup
V$}}}).$
We have hence shown that unbalanced data can reveal the inefficiency of the
usual VSC, and that this can be corrected by a sufficient regularization of
the measure. In Sec. 6.1 we validate this finding empirically using a relevant
toy dataset.
Note that there are prior works motivating the operator regularization (Qin &
Rohe, 2013; Amini et al., 2013; Zhang & Rohe, 2018), but they concern mainly
the stochastic block model and not specifically the problem of unbalanced
datasets. What we intended to stress in this subsection is that our
theoretical GDE-based framework offers a new viewpoint to see and analyze such
difficult data aspects.
Figure 1: Comparison of VSC and GSC on an easy synthetic toy dataset with
unbalanced classes. (a) Ground truth; (b) clustering result from $k$-means;
(c) VSC result; (d) result of the proposed GSC; (e) comparison of the
quantities
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-0.94496pt}{\scalebox{0.6}{$V_{1}^{*}$}}})$
and
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-0.94496pt}{\scalebox{0.6}{$V_{1}^{*}\cup V$}}})$.
## 6 Experiments
General setup. The experimental study concentrates on directed graphs
naturally arising when processing point clouds. We consider input data of the
form $X=\\{x_{i}\\}_{i=1}^{N}$, $\forall x_{i}\in\mathbb{R}^{d}$. Graph
construction is a core phase of the graph partitioning pipeline and can affect
the whole process. Based on the pairwise relations among the points of $X$, we
need to construct a generally sparse graph as a first step towards
representing the data. There are several options for this step, with different levels of
sophistication and complexity. For instance, one could consider the simple yet
natural approach of truncating a distance measure to create edges only for
points that are sufficiently close to each other, e.g. via a $K$-NN or
$\varepsilon$-graph. Alternatively, one could also employ parametrized kernels
(e.g. an RBF) to create a similarity matrix.
Since the main focus of this work is on determining the right graph Laplacian
operator, we rather choose a simple graph construction approach. We construct
an unweighted directed $K$-NN graph that is represented by its non-symmetric
adjacency matrix $\mathbf{W}=\\{w_{ij}\\}_{i,j=1}^{N}$, with entries:
$w_{ij}=\mathds{1}\left\\{\frac{|x_{i}-x_{j}|^{2}}{\text{dist}_{K}(x_{i})^{2}}\leq
1\right\\}.$ (19)
In the above, $x_{i}\in\mathbb{R}^{d}$ represents the original coordinates of
the data point corresponding to the $i$-th vertex, $\text{dist}_{K}(x)$ is the
distance between $x$ and its $K$-th-NN, and
$\mathds{1}\\{\,\cdot\,\\}\in\\{0,1\\}$ is the indicator function that
evaluates the truth of the input condition. We always fix
$K=\lceil\log(N)\rceil$, which makes the constructed graphs relatively sparse
and not strongly connected.
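As a small illustration (a sketch assuming scikit-learn; equivalent to Eq. (19) up to distance ties):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def directed_knn_graph(X: np.ndarray) -> np.ndarray:
    N = X.shape[0]
    K = int(np.ceil(np.log(N)))                      # K = ceil(log N), as above
    nn = NearestNeighbors(n_neighbors=K + 1).fit(X)  # +1: the query point itself
    _, idx = nn.kneighbors(X)
    W = np.zeros((N, N))
    rows = np.repeat(np.arange(N), K)
    W[rows, idx[:, 1:].ravel()] = 1.0                # edge x_i -> each of its K NNs
    return W                                         # generally non-symmetric
```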
### 6.1 Demonstration on a synthetic toy dataset
We first demonstrate empirically the insights discussed in Sec. 5.2. We
generate a point cloud $X=\\{x_{i}\\}_{i=1}^{N}$, $x_{i}\in\mathbb{R}^{2}$, of
$N=330$ data points drawn independently from two classes, using two Gaussian
distributions with different centers and the same unit variance:
$V^{*}_{1}:x_{i1},x_{i2}\sim\mathcal{N}(-2,1)$ or
$V^{*}_{2}:x_{i1},x_{i2}\sim\mathcal{N}(2,1)$.
Let $N_{1}=30$ and $N_{2}=300$ be the number of points drawn from $V^{*}_{1}$
and $V^{*}_{2}$, respectively. Here $K=\lceil\log(N)\rceil=6$. Exclusively for
this case, we symmetrize the adjacency matrix $\mathbf{W}$ with
$\tilde{\mathbf{W}}=\frac{1}{2}(\mathbf{W}+\mathbf{W}^{{\mkern-1.5mu\mathsf{T}}})$,
in order to meet the conditions of the setting described in Sec. 5.2. Recall
that
$\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}$}}}\in\\{0,1\\}^{N}$ is
the decision function deciding which vertices belong to the first cluster
$V^{*}_{1}$. Let $\alpha_{\textnormal{th}}$ and $\alpha_{\textnormal{xp}}$
denote, respectively, the theoretical and the experimental exponent $\alpha$
that we seek to determine.
Fig. 1(a) shows the ground truth data classes. Fig. 1(b) shows the clustering
result of the data obtained with $k$-means, which makes only one mistake w.r.t.
the ground truth and confirms that this is an easy scenario. Fig. 1(c) shows
the clustering result obtained by VSC. We refer to the clusters obtained by
VSC as sets $V_{1}$ and $V_{2}$, respectively. As we can observe,
$V_{1}=V_{1}^{*}\cup V$ where $V\subset V_{2}^{*}$. Consequently, let us
define $\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}\cup V$}}}$ as the
decision function for the vertices belonging to $V_{1}$. Finally, Fig. 1(d)
shows the clustering result obtained by GSC based on the generalized Laplacian
$\mathbf{L}(\pi^{\alpha})$ (see Eq. (8)). As observed, VSC fails completely to
find the correct partition. On the other hand, GSC recovers the same partition
as $k$-means applied directly to the data. To put this result in perspective
with the toy model of Sec. 5.2, we compute the parameters needed to obtain
$\alpha$. For $b=\frac{N_{1}}{N}\approx 0.08$, $\rho\approx 0.75$, $c\approx
0.29$, we obtain $\alpha_{\textnormal{th}}\approx 4.5$. In order
to validate $\alpha_{\textnormal{th}}$, we introduce Fig. 1(e) that compares
the values of the generalized normalized Dirichlet energies
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}$}}})$
and
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}\cup
V$}}})$ associated respectively with the decision functions
$\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}$}}}$ and
$\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}\cup V$}}}$ for
increasing values of $\alpha$ (x-axis). We note that for $\alpha<2.5$,
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}$}}})>\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}\cup
V$}}})$. From the spectral clustering perspective, this means that choosing,
between the sets ${V_{1}^{*}}$ and ${V_{1}^{*}\cup V}$, the one that minimizes
the normalized Dirichlet energy amounts to selecting ${V_{1}^{*}\cup V}$. When
$\alpha>2.5$,
$\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}$}}})<\overline{\mathcal{D}^{2}}_{\\!\\!\\!\\!\\!\pi^{\alpha}\\!,\mathbf{P}}(\chi_{\raisebox{-1.04996pt}{\scalebox{0.6}{$V_{1}^{*}\cup
V$}}})$, which is equivalent to choosing ${V_{1}^{*}}$. Consequently,
$\alpha_{\textnormal{xp}}\approx 2.5<\alpha_{\textnormal{th}}$. We are thus
able to recover a partition close to $V_{1}^{*}$, and this is what we achieve
with GSC in Fig. 1(d).
### 6.2 Results on benchmark datasets
This section reports the results of experiments we conducted to evaluate the
performance of the proposed GSC method on $11$ benchmark datasets from the UCI
repository (Dheeru & Karra Taniskidou, 2017). We use three variants of the GSC
method. The first one, denoted by $\textnormal{GSC}_{1}(\gamma=1,\alpha,t)$,
is associated with the GDE defined in Eq. (17), with $\alpha\in[0,\infty)$,
$t\geq 0$, and $\gamma=1$. The second one, denoted by
$\textnormal{GSC}_{2}(\gamma,\alpha,t)$, is associated with the same GDE, but
with $\alpha\in[0,\infty)$, $t\geq 0$, and $\gamma\in[0,1)$. The third one,
denoted by $\textnormal{GSC}_{3}(\gamma,\alpha,t)$, is the normalized version
of $\textnormal{GSC}_{2}(\gamma,\alpha,t)$, obtained thanks to Eq. (9).
GSC and all the competitors we mention below follow the spectral clustering
setting but use the eigenvectors of different graph operators to finally apply
$k$-means clustering (we report the best score out of $100$ restarts). To
ensure fair evaluations, we select for each method the optimal parameter
values, obtained through cross-validation over a grid search, yielding the
closest partition to the ground truth. The grid used for each parameter of GSC
is: $\alpha\in\\{0,0.1,...,1\\}$, $t\in\\{0,1,...,100\\}$, and
$\gamma\in\\{0,0.05,...,0.95\\}$.
Competitors. We compare against the following methods:
$\bullet$ $\textnormal{DSC}\\!+\\!(\gamma)$ (Zhou et al., 2005; Palmer &
Zheng, 2020) spectral clustering on strongly connected digraphs. To extend
this method to the graphs used in our experiments, we employ the teleporting
random walk (Page et al., 1999) defined in Eq. (16) endowed with the parameter
$\gamma\in[0,1)$. We use the same cross-validation grid for $\gamma$ as the
one mentioned earlier for GSC.
$\bullet$ $\textnormal{DI-SIM}_{\textnormal{L}}(\tau)$ and $\textnormal{DI-
SIM}_{\textnormal{R}}(\tau)$ (Rohe et al., 2016) are two variants that are
based on the left and the right singular vectors, respectively, of a given
regularized and normalized operator whose regularization is denoted by the
parameter $\tau\geq 0$. We use cross-validation to search the optimal
parameter with a grid search over $\tau\in\\{1,2,...,20\\}$.
$\bullet$ $\textnormal{SC-SYM}_{1}$ and $\textnormal{SC-SYM}_{2}$, two
variants of the vanilla spectral clustering (Von Luxburg, 2007) based on the
unnormalized and the normalized graph Laplacian obtained from the
symmetrization of the adjacency matrix $\mathbf{W}$, respectively.
Table 1: Clustering performance (NMI) on UCI datasets, with optimal parameters in brackets.

| Dataset | $N$ | $d$ | $k$ | $\textnormal{SC-SYM}_{1}$ | $\textnormal{SC-SYM}_{2}$ | $\textnormal{DI-SIM}_{\textnormal{L}}$ | $\textnormal{DI-SIM}_{\textnormal{R}}$ | $\textnormal{DSC}\\!+\\!(\gamma)$ | $\textnormal{GSC}_{1}(\gamma\\!=\\!1,\alpha,t)$ | $\textnormal{GSC}_{2}(\gamma,\alpha,t)$ | $\textnormal{GSC}_{3}(\gamma,\alpha,t)$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Iris | 150 | 3 | 4 | 80.58 | 80.58 | 74.98 (1) | 66.57 (1) | 68.63 (0.80) | 90.11 (0.9,4) | 90.11 (0.95,0.7,20) | 90.11 (0.95,0.7,3) |
| Glass | 214 | 9 | 6 | 38.59 | 38.92 | 38.95 (1) | 36.41 (1) | 39.72 (0.80) | 45.73 (0.1,42) | 45.96 (0.95,1,14) | 38.56 (0.85,0.1,73) |
| Wine | 178 | 13 | 3 | 86.33 | 86.33 | 83.66 (1) | 85.62 (1) | 91.09 (0.80) | 86.33 (0.1,1) | 86.33 (0.95,0.1,53) | 91.09 (0.95,0.8,2) |
| WBDC | 569 | 30 | 2 | 67.73 | 69.47 | 68.54 (2) | 53.43 (1) | 61.12 (0.10) | 72.02 (1,5) | 73.24 (0.95,0.8,3) | 71.45 (0.95,0.3,8) |
| Control Chart | 600 | 60 | 6 | 81.17 | 81.17 | 82.94 (1) | 77.72 (1) | 79.45 (0.90) | 85.62 (0.1,90) | 82.79 (0.95,0.3,65) | 82.82 (0.90,0.7,96) |
| Parkinson | 185 | 22 | 2 | 21.96 | 19.13 | 28.89 (1) | 27.36 (13) | 25.82 (0.95) | 32.65 (0.2,19) | 36.08 (0.95,0.4,10) | 31.93 (0.95,0.1,23) |
| Vertebral | 310 | 6 | 3 | 39.26 | 39.26 | 52.06 (2) | 41.76 (2) | 56.63 (0.80) | 64.26 (1,5) | 59.37 (0.95,1,1) | 51.50 (0.85,1,15) |
| Breast Tissue | 106 | 9 | 6 | 54.03 | 54.43 | 54.04 (2) | 49.33 (2) | 51.64 (0.20) | 56.66 (0.1,40) | 58.64 (0.95,1,56) | 56.40 (0.95,0.6,88) |
| Seeds | 210 | 7 | 3 | 73.90 | 73.90 | 76.29 (1) | 73.06 (1) | 74.80 (0.80) | 80.10 (1,4) | 80.10 (0.95,0.9,4) | 80.10 (0.95,0.9,4) |
| Image Seg. | 2310 | 19 | 7 | 67.06 | 67.41 | 67.42 (1) | 64.77 (1) | 31.83 (0.99) | 73.40 (0.2,50) | 68.11 (0.95,1,64) | 68.05 (0.95,0.1,56) |
| Yeast | 1484 | 8 | 10 | 30.58 | 31.11 | 31.37 (2) | 28.89 (1) | 27.50 (0.90) | 37.46 (0.5,9) | 35.59 (0.95,1,40) | 31.70 (0.95,0.4,67) |
| Average | – | – | – | 58.29 | 58.34 | 59.92 | 54.77 | 56.37 | 65.85 | 65.12 | 63.06 |
Table 2: Clustering performance (NMI) on UCI datasets, with optimal parameters in brackets, selected using the Calinski-Harabasz index.

| Dataset | $N$ | $d$ | $k$ | $\textnormal{SC-SYM}_{1}$ | $\textnormal{SC-SYM}_{2}$ | $\textnormal{DI-SIM}_{\textnormal{L}}(\tau)$ | $\textnormal{DI-SIM}_{\textnormal{R}}(\tau)$ | $\textnormal{DSC}\\!+\\!(\gamma)$ | $\textnormal{GSC}_{1}(\gamma=1,\alpha,t)$ |
|---|---|---|---|---|---|---|---|---|---|
| Iris | 150 | 3 | 4 | 80.58 | 80.58 | 74.98 (1) | 68.57 (1) | 68.63 (0.85) | 83.66 (0.2,31) |
| Glass | 214 | 9 | 6 | 38.59 | 38.92 | 37.39 (2) | 35.87 (1) | 36.58 (0.85) | 43.15 (0.1,38) |
| Wine | 178 | 13 | 3 | 86.33 | 86.33 | 83.66 (1) | 82.02 (1) | 63.16 (0.85) | 86.33 (0.1,28) |
| WBDC | 569 | 30 | 2 | 67.73 | 69.47 | 64.77 (1) | 53.43 (1) | 61.12 (0.10) | 69.47 (0.1,46) |
| Control Chart | 600 | 60 | 6 | 81.17 | 81.17 | 82.94 (2) | 77.44 (1) | 79.45 (0.90) | 85.62 (0.4,17) |
| Parkinson | 185 | 22 | 2 | 21.96 | 19.13 | 28.89 (2) | 27.36 (13) | 22.97 (0.30) | 31.10 (0.1,45) |
| Vertebral | 310 | 6 | 3 | 39.26 | 39.26 | 45.89 (1) | 39.62 (1) | 54.24 (0.80) | 51.83 (0.2,34) |
| Breast Tissue | 106 | 9 | 6 | 54.03 | 54.43 | 54.04 (2) | 49.27 (1) | 51.64 (0.20) | 55.16 (0.1,24) |
| Seeds | 210 | 7 | 3 | 73.90 | 73.90 | 76.26 (1) | 73.06 (1) | 74.80 (0.80) | 77.44 (0.8,2) |
| Image Seg. | 2310 | 19 | 7 | 67.06 | 67.41 | 67.42 (1) | 64.77 (1) | 31.46 (0.99) | 69.60 (0.1,73) |
| Yeast | 1484 | 8 | 10 | 30.58 | 31.11 | 31.22 (1) | 28.89 (1) | 27.47 (0.90) | 32.16 (0.2,2) |
| Average | – | – | – | 58.29 | 58.34 | 58.86 | 54.57 | 51.95 | 62.32 |
Results. The obtained partitions are first evaluated by the normalized mutual
information (NMI) (Strehl & Ghosh, 2002) and the adjusted Rand index (ARI)
(Hubert & Arabie, 1985). Both are supervised cluster evaluation measures that
make use of the ground truth labels of the data; for both, larger values are better. Tab. 1 summarizes the comparative results based on NMI, while the ARI results are provided in the Appendix. In nearly all cases, we observe that the proposed $\textnormal{GSC}_{1}$, $\textnormal{GSC}_{2}$ and $\textnormal{GSC}_{3}$ significantly outperform the other methods, and $\textnormal{GSC}_{1}$ gives the best result on average. Our approach performs much better than $\textnormal{SC-SYM}_{1}$ and $\textnormal{SC-SYM}_{2}$, i.e. VSC on the symmetrized version of the directed $K$-NN graph. This allows us to state that our GSC, associated with the GDE of Eq. (17) defined with respect to the vertex measure of Eq. (15), indeed brings real added value to the spectral clustering problem. It also allows obtaining better graph embeddings.
The case of the Wine dataset is interesting to analyze. The highest NMI score is achieved by $\textnormal{DSC}\\!+\\!(\gamma)$, which outperforms $\textnormal{GSC}_{1}$. Nevertheless, we remark that $\textnormal{GSC}_{3}$ also achieves the highest NMI score on this dataset. This indicates that using the teleporting random walk as a regularizing measure is beneficial even without affecting the graph topology, unlike what $\textnormal{DSC}\\!+\\!(\gamma)$ does. Moreover, $\textnormal{DSC}\\!+\\!(\gamma)$ on average gives results below those of the symmetrized versions, which suggests that considering the teleporting random walk instead of the natural one can in fact deteriorate the spectral clustering performance.
To further validate the efficiency of GSC, we also evaluate our framework without using the ground truth labels as an input. For this setting, we restrict the comparison of the proposed methods to $\textnormal{GSC}_{1}$. Since our framework constructs a list of graph partitions, we use the Calinski-Harabasz (CH) index (Caliński & Harabasz, 1974) as a measure of the quality of a partition of a dataset; it is the normalized ratio between the overall inter-cluster variance and the overall intra-cluster variance. We select a solution among all the obtained partitions by estimating the parameters $\alpha$ and $t$ that maximize the CH index. The results of the comparison are shown in Tab. 2. As can be seen, $\textnormal{GSC}_{1}$ significantly outperforms the other methods in nearly all cases, as well as on average. Compared to the results reported in Tab. 1, the NMI of $\textnormal{GSC}_{1}$ in this unsupervised setting stays lower by only a few percent. This indicates that the fully unsupervised version offers graph partition qualities comparable to the case where we have the ground truth.
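A minimal sketch of this unsupervised selection step follows; the grid and the helper gsc_partition (returning the $\textnormal{GSC}_{1}$ labels for given $(\alpha,t)$) are our own illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.metrics import calinski_harabasz_score

def select_by_ch(X, k, gsc_partition, alphas, ts):
    """Pick the (alpha, t) pair whose partition maximizes the CH index.

    gsc_partition(X, k, alpha, t) -> label vector (hypothetical helper).
    No ground-truth labels are used: the CH index only compares the
    inter-cluster variance to the intra-cluster variance of a partition.
    """
    best_score, best_params, best_labels = -np.inf, None, None
    for alpha in alphas:
        for t in ts:
            labels = gsc_partition(X, k, alpha, t)
            score = calinski_harabasz_score(X, labels)
            if score > best_score:
                best_score, best_params, best_labels = score, (alpha, t), labels
    return best_labels, best_params
```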
## 7 Conclusion
We have proposed the _generalized spectral clustering_ (GSC) framework that
applies to both directed and undirected graphs. First, we introduced the
_generalized Dirichlet energy_ (GDE) associated with an arbitrary positive
edge measure, as an extension of the classical Dirichlet energy for graph
functions. Through the GDE formulation, we have proposed generalized Laplacian
operators on graphs associated with an arbitrary positive vertex measure. We
then provided a random walk interpretation of the GDE that is essential to our framework, together with an algorithm in which the vertex measure corresponds to the iterated powers of the natural random walk on the graph. We demonstrated theoretically that our framework is efficient in
the unbalanced setting. Finally, numerical results showed that the GSC
approach outperforms existing approaches for directed graphs on several
datasets.
## 8 Acknowledgment
Harry Sevi and Argyris Kalogeratos were funded by the Industrial Analytics and
Machine Learning (IdAML) Chair hosted at ENS Paris-Saclay, University Paris-
Saclay. Matthieu Jonckheere was funded by the International Centre for
Mathematics and Computer Science (CIMI) in Toulouse.
## References
* Amini et al. (2013) Amini, A. A., Chen, A., Bickel, P. J., and Levina, E. Pseudo-likelihood methods for community detection in large sparse networks. _Annals of Statistics_ , 41(4):2097–2122, 2013.
* Boedihardjo et al. (2021) Boedihardjo, M., Deng, S., and Strohmer, T. A performance guarantee for spectral clustering. _SIAM Journal on Mathematics of Data Science_ , 3(1):369–387, 2021.
* Brémaud (2013) Brémaud, P. _Markov chains: Gibbs fields, Monte Carlo simulation, and queues_ , volume 31. Springer Science & Business Media, 2013.
* Caliński & Harabasz (1974) Caliński, T. and Harabasz, J. A dendrite method for cluster analysis. _Communications in Statistics-theory and Methods_ , 3(1):1–27, 1974.
* Chung (2005) Chung, F. Laplacians and the Cheeger inequality for directed graphs. _Annals of Combinatorics_ , 9(1):1–19, 2005.
* Cucuringu et al. (2020) Cucuringu, M., Li, H., Sun, H., and Zanetti, L. Hermitian matrices for clustering directed graphs: insights and applications. In _International Conference on Artificial Intelligence and Statistics_ , pp. 983–992, 2020.
* Dheeru & Karra Taniskidou (2017) Dheeru, D. and Karra Taniskidou, E. UCI repository of machine learning databases. _University of California, Irvine, School of Information and Computer Sciences_ , 2017.
* Hubert & Arabie (1985) Hubert, L. and Arabie, P. Comparing partitions. _Journal of Classification_ , 2(1):193–218, 1985.
* Laenen & Sun (2020) Laenen, S. and Sun, H. Higher-order spectral clustering of directed graphs. _preprint arXiv:2011.05080_ , 2020.
* Levin & Peres (2017) Levin, D. A. and Peres, Y. _Markov chains and mixing times_ , volume 107. American Mathematical Society, 2017.
* Meilă & Pentney (2007) Meilă, M. and Pentney, W. Clustering by weighted cuts in directed graphs. In _Proceedings of the SIAM international Conference on Data Mining_ , pp. 135–144, 2007.
* Meilă & Shi (2001) Meilă, M. and Shi, J. A random walks view of spectral segmentation. In _International Workshop on Artificial Intelligence and Statistics_ , pp. 203–208, 2001.
* Montenegro et al. (2006) Montenegro, R., Tetali, P., et al. Mathematical aspects of mixing times in Markov chains. _Foundations and Trends® in Theoretical Computer Science_ , 1(3):237–354, 2006.
* Ng et al. (2002) Ng, A. Y., Jordan, M. I., and Weiss, Y. On spectral clustering: Analysis and an algorithm. In _Advances in Neural Information Processing Systems_ , pp. 849–856, 2002.
* Page et al. (1999) Page, L., Brin, S., Motwani, R., and Winograd, T. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
* Palmer & Zheng (2020) Palmer, W. R. and Zheng, T. Spectral clustering for directed networks. In _International Conference on Complex Networks and Their Applications_ , pp. 87–99. Springer, 2020.
* Peach et al. (2020) Peach, R. L., Arnaudon, A., and Barahona, M. Semi-supervised classification on graphs using explicit diffusion dynamics. _Foundations of Data Science_ , 2(1):19–33, 2020.
* Peng et al. (2015) Peng, R., Sun, H., and Zanetti, L. Partitioning well-clustered graphs: Spectral clustering works! In _Conference on Learning Theory_ , pp. 1423–1455. PMLR, 2015.
* Qin & Rohe (2013) Qin, T. and Rohe, K. Regularized spectral clustering under the degree-corrected stochastic blockmodel. _preprint arXiv:1309.4111_ , 2013.
* Rohe et al. (2016) Rohe, K., Qin, T., and Yu, B. Co-clustering directed graphs to discover asymmetries and directional communities. _Proceedings of the National Academy of Sciences_ , 113(45):12679–12684, 2016.
* Saloff-Coste (1997) Saloff-Coste, L. Lectures on finite Markov chains. In _Lectures on Probability Theory and Statistics_ , pp. 301–413. Springer, 1997.
* Satuluri & Parthasarathy (2011) Satuluri, V. and Parthasarathy, S. Symmetrizations for clustering directed graphs. In _Proceedings of the International Conference on Extending Database Technology_ , pp. 343–354, 2011.
* Schaub et al. (2019) Schaub, M. T., Delvenne, J.-C., Lambiotte, R., and Barahona, M. Multiscale dynamical embeddings of complex networks. _Physical Review E_ , 99(6):062308, 2019.
* Sevi et al. (2018) Sevi, H., Rilling, G., and Borgnat, P. Harmonic analysis on directed graphs and applications: from Fourier analysis to wavelets. _preprint arXiv:1811.11636_ , 2018.
* Shi & Malik (2000) Shi, J. and Malik, J. Normalized cuts and image segmentation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 22(8):888–905, 2000.
* Sinclair (1992) Sinclair, A. Improved bounds for mixing rates of Markov chains and multicommodity flow. _Combinatorics, Probability and Computing_ , 1(4):351–370, 1992.
* Strehl & Ghosh (2002) Strehl, A. and Ghosh, J. Cluster ensembles—a knowledge reuse framework for combining multiple partitions. _Journal of Machine Learning Research_ , 3(Dec):583–617, 2002.
* Von Luxburg (2007) Von Luxburg, U. A tutorial on spectral clustering. _Statistics and Computing_ , 17(4):395–416, 2007.
* Zhang & Rohe (2018) Zhang, Y. and Rohe, K. Understanding regularized spectral clustering via graph conductance. _preprint arXiv:1806.01468_ , 2018.
* Zhou et al. (2005) Zhou, D., Huang, J., and Schölkopf, B. Learning from labeled and unlabeled data on a directed graph. In _Proceedings of the International Conference on Machine learning_ , pp. 1036–1043, 2005.
## Appendix A Technical proofs
See 3.1
###### Proof.
$\displaystyle\mathcal{D}_{\nu,\mathbf{P}}^{2}(f)$
$\displaystyle=\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)|f(x)-f(y)|^{2}$
$\displaystyle=\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)|f(x)|^{2}+\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)|f(y)|^{2}-2\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)f(x)f(y)$
$\displaystyle=\langle{f},\mathbf{N}{f}\rangle+\langle{f},\boldsymbol{\Xi}{f}\rangle-\langle{f},(\mathbf{NP}+\mathbf{P}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{N}){f}\rangle$
$\displaystyle=\langle{f},\big{(}\mathbf{N}+\boldsymbol{\Xi}-(\mathbf{NP}+\mathbf{P}^{{\mkern-1.5mu\mathsf{T}}}\mathbf{N})\big{)}{f}\rangle$
$\displaystyle=\langle{f},\mathbf{\big{(}I-(I+N^{-1}\Xi)^{-1}(P+N^{-1}P^{{\mkern-1.5mu\mathsf{T}}}N)\big{)}}{f}\rangle_{\nu+\xi}$
$\displaystyle=\langle{f},\mathbf{L}_{\textnormal{RW}}(\nu){f}\rangle_{\nu+\xi}.$
∎
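As a quick numerical sanity check of the identity above (our addition, with $\mathbf{N}=\operatorname{diag}(\nu)$ and $\boldsymbol{\Xi}=\operatorname{diag}(\mathbf{P}^{{\mkern-1.5mu\mathsf{T}}}\nu)$), one may verify the quadratic-form representation on a random weighted graph:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
nu = rng.random(n)                                          # positive vertex measure
f = rng.random(n)                                           # arbitrary graph function

# Generalized Dirichlet energy: sum_{x,y} nu(x) p(x,y) |f(x) - f(y)|^2.
lhs = sum(nu[x] * P[x, y] * (f[x] - f[y]) ** 2
          for x in range(n) for y in range(n))

# Quadratic form <f, (N + Xi - (N P + P^T N)) f> with Xi = diag(P^T nu).
N, Xi = np.diag(nu), np.diag(P.T @ nu)
rhs = f @ (N + Xi - (N @ P + P.T @ N)) @ f

assert np.isclose(lhs, rhs)
```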
See 4.1
###### Proof.
$\displaystyle q(S,\bar{S})$ $\displaystyle=\sum_{x\in S,\,y\in\bar{S}}\nu(x)p(x,y)=\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)\chi_{S}(x)\chi_{\bar{S}}(y)$
$\displaystyle=\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)\chi_{S}(x)(1-\chi_{S}(y)),$
so that, using $\sum_{y}p(x,y)=1$,
$q(S,\bar{S})=\sum_{x\in\mathcal{V}}\nu(x)\chi_{S}(x)-\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)\chi_{S}(x)\chi_{S}(y).$ (20)
Similarly,
$\displaystyle q(\bar{S},S)$ $\displaystyle=\sum_{x\in\bar{S},\,y\in S}\nu(x)p(x,y)=\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)(1-\chi_{S}(x))\chi_{S}(y),$
so that
$q(\bar{S},S)=\sum_{y\in\mathcal{V}}\bigg{(}\sum_{x\in\mathcal{V}}\nu(x)p(x,y)\bigg{)}\chi_{S}(y)-\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)\chi_{S}(x)\chi_{S}(y).$ (21)
Finally, expanding the square and using $\chi_{S}^{2}=\chi_{S}$,
$\displaystyle\mathcal{D}_{\nu,\mathbf{P}}^{2}(\chi_{S})$ $\displaystyle=\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)|\chi_{S}(x)-\chi_{S}(y)|^{2}$
$\displaystyle=\sum_{x\in\mathcal{V}}\nu(x)\chi_{S}(x)+\sum_{y\in\mathcal{V}}\bigg{(}\sum_{x\in\mathcal{V}}\nu(x)p(x,y)\bigg{)}\chi_{S}(y)-2\sum_{x,y\in\mathcal{V}}\nu(x)p(x,y)\chi_{S}(x)\chi_{S}(y),$
which is the sum of the right-hand sides of (20) and (21); hence $\mathcal{D}_{\nu,\mathbf{P}}^{2}(\chi_{S})=q(S,\bar{S})+q(\bar{S},S)$.
∎
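The cut identity just proved can also be checked numerically; the following small sketch (ours, not part of the paper) draws a random set $S$ and compares both sides:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)  # transition matrix
nu = rng.random(n)                                          # vertex measure
chi = (rng.random(n) < 0.5).astype(float)                   # indicator of a random set S

q_s_sbar = sum(nu[x] * P[x, y] * chi[x] * (1 - chi[y])      # flow from S to its complement
               for x in range(n) for y in range(n))
q_sbar_s = sum(nu[x] * P[x, y] * (1 - chi[x]) * chi[y]      # flow from the complement to S
               for x in range(n) for y in range(n))
energy = sum(nu[x] * P[x, y] * (chi[x] - chi[y]) ** 2       # Dirichlet energy of chi_S
             for x in range(n) for y in range(n))

assert np.isclose(energy, q_s_sbar + q_sbar_s)
```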
See 3.2
###### Proof.
$\mathbf{P}$ admits the eigen-decomposition $\mathbf{P}=\sum_{j=1}^{N}\lambda_{j}\phi_{j}\psi_{j}^{*}$ with $1=\lambda_{1}>|\lambda_{2}|\geq|\lambda_{3}|\geq\ldots\geq|\lambda_{N}|\geq 0$.
Let $\mu$ be a positive vertex measure such that
$\langle\mu,\phi_{1}\rangle=1$. The Dirichlet energy of a graph function $f$
with respect to the measure $\nu_{(t)}$ (see Eq. (10)) has the following
expression:
$\mathcal{D}_{\nu_{t}\\!,\mathbf{P}}^{2}(f)=\mathcal{D}_{\pi,\mathbf{P}}^{2}(f)+\langle{f},\mathbf{E}_{t}{f}\rangle=\langle{f},\big{(}\mathbf{L}(\pi)+\mathbf{E}_{t}\big{)}{f}\rangle,$
where
$\displaystyle\mathbf{E}_{t}$ $\displaystyle=\sum_{j\geq
2}c_{j,t}(\mu)\mathbf{Z}_{j},$ $\displaystyle\mathbf{L}(\pi)$
$\displaystyle=\boldsymbol{\Pi}-\frac{1}{2}(\boldsymbol{\Pi}\mathbf{P}+\mathbf{P}^{{\mkern-1.5mu\mathsf{T}}}\boldsymbol{\Pi}),$
$\displaystyle\mathbf{Z}_{j}$
$\displaystyle=\boldsymbol{\Psi}_{j}-\frac{1}{1+\lambda_{j}}(\boldsymbol{\Psi}_{j}\mathbf{P}+\mathbf{P}^{{\mkern-1.5mu\mathsf{T}}}\boldsymbol{\Psi}_{j})$
$\displaystyle\boldsymbol{\Psi}_{j}$
$\displaystyle=\operatorname{diag}(\psi_{j})$ $\displaystyle c_{j,t}(\mu)$
$\displaystyle=(1+\lambda_{j})\hat{\vartheta}_{j,t}(\mu)$
$\displaystyle\hat{\vartheta}_{j,t}(\mu)$
$\displaystyle=\langle\mu,\phi_{j}\rangle\lambda_{j}^{t}.$
$\displaystyle\nu_{t}(x)$
$\displaystyle=\mu^{{\mkern-1.5mu\mathsf{T}}}{\mathbf{P}}^{t}{\delta}_{x}$
$\displaystyle=\mu^{{\mkern-1.5mu\mathsf{T}}}\bigg{(}\sum_{j=1}^{N}\lambda_{j}^{t}\phi_{j}\psi_{j}^{*}\bigg{)}{\delta}_{x}$
$\displaystyle=\mu^{{\mkern-1.5mu\mathsf{T}}}\bigg{(}\phi_{1}\pi^{{\mkern-1.5mu\mathsf{T}}}+\sum_{j\geq
2}^{N}\lambda_{j}^{t}\phi_{j}\psi_{j}^{*}\bigg{)}{\delta}_{x}$
$\displaystyle=\langle\mu,\phi_{1}\rangle\pi(x)+\sum_{j\geq
2}^{N}\lambda_{j}^{t}\langle\phi_{j},\mu\rangle\psi_{j}(x)$
$\displaystyle\nu_{t}(x)$ $\displaystyle=\pi(x)+\sum_{j\geq
2}^{N}\lambda_{j}^{t}\langle\phi_{j},\mu\rangle\psi_{j}(x)$ (22)
Replacing the explicit form of $\nu_{t}(x)$ from Eq. (22) in $\mathcal{D}_{\nu_{t}\\!,\mathbf{P}}^{2}(f)$ yields:
$\displaystyle\mathcal{D}_{\nu_{t}\\!,\mathbf{P}}^{2}(f)$
$\displaystyle=\sum_{x,y\in\mathcal{V}}\nu_{t}(x)p(x,y)|f(x)-f(y)|^{2},$
$\displaystyle=\sum_{x,y\in\mathcal{V}}\bigg{(}\pi(x)+\sum_{j\geq
2}^{N}\lambda_{j}^{t}\langle\phi_{j},\mu\rangle\psi_{j}(x)\bigg{)}p(x,y)|f(x)-f(y)|^{2},$
$\displaystyle=\sum_{x,y\in\mathcal{V}}\pi(x)p(x,y)|f(x)-f(y)|^{2}+\sum_{j\geq
2}^{N}\lambda_{j}^{t}\langle\phi_{j},\mu\rangle\bigg{(}\sum_{x,y\in\mathcal{V}}\psi_{j}(x)p(x,y)|f(x)-f(y)|^{2}\bigg{)},$
$\displaystyle\mathcal{D}_{\nu_{t}\\!,\mathbf{P}}^{2}(f)$
$\displaystyle=\mathcal{D}_{\pi,\mathbf{P}}^{2}(f)+\sum_{j\geq
2}^{N}\hat{\vartheta}_{j,t}(\mu)\mathcal{D}_{\psi_{j},\mathbf{P}}^{2}(f).$
(23) $\displaystyle\mathcal{D}_{\psi_{j},\mathbf{P}}^{2}(f)$
$\displaystyle=\sum_{x,y\in\mathcal{V}}\psi_{j}(x)p(x,y)|f(x)-f(y)|^{2}$ (24)
$\displaystyle=\sum_{x,y\in\mathcal{V}}\psi_{j}(x)p(x,y)|f(x)|^{2}+\sum_{x,y\in\mathcal{V}}\psi_{j}(x)p(x,y)|f(y)|^{2}-2\sum_{x,y\in\mathcal{V}}\psi_{j}(x)p(x,y)f(x)f(y)$
$\displaystyle=\sum_{x\in\mathcal{V}}\psi_{j}(x)|f(x)|^{2}+\lambda_{j}\sum_{y\in\mathcal{V}}\psi_{j}(y)|f(y)|^{2}-2\sum_{x,y\in\mathcal{V}}\psi_{j}(x)p(x,y)f(x)f(y)$
$\displaystyle=(1+\lambda_{j})\sum_{x\in\mathcal{V}}\psi_{j}(x)|f(x)|^{2}-2\sum_{x,y\in\mathcal{V}}\psi_{j}(x)p(x,y)f(x)f(y)$
$\displaystyle=(1+\lambda_{j})\langle
f,\bigg{(}\boldsymbol{\Psi}_{j}-\frac{1}{(1+\lambda_{j})}(\boldsymbol{\Psi}_{j}\mathbf{P}+\mathbf{P}^{{\mkern-1.5mu\mathsf{T}}}\boldsymbol{\Psi}_{j})\bigg{)}f\rangle,$
$\displaystyle\mathcal{D}_{\psi_{j},\mathbf{P}}^{2}(f)$
$\displaystyle=(1+\lambda_{j})\langle f,\mathbf{Z}_{j}f\rangle.$ (25)
By putting Eq. (25) into Eq. (23), we obtain
$\displaystyle\mathcal{D}_{\nu_{t}\\!,\mathbf{P}}^{2}(f)$
$\displaystyle=\mathcal{D}_{\pi,\mathbf{P}}^{2}(f)+\sum_{j\geq
2}^{N}\hat{\vartheta}_{j,t}(\mu)\mathcal{D}_{\psi_{j},\mathbf{P}}^{2}(f)$ (26)
$\displaystyle=\mathcal{D}_{\pi,\mathbf{P}}^{2}(f)+\sum_{j\geq
2}^{N}\hat{\vartheta}_{j,t}(\mu)(1+\lambda_{j})\langle
f,\mathbf{Z}_{j}f\rangle,$
$\displaystyle=\mathcal{D}_{\pi,\mathbf{P}}^{2}(f)+\sum_{j\geq
2}^{N}c_{j,t}(\mu)\langle f,\mathbf{Z}_{j}f\rangle$
$\displaystyle\mathcal{D}_{\nu_{t}\\!,\mathbf{P}}^{2}({f})$
$\displaystyle=\mathcal{D}_{\pi,\mathbf{P}}^{2}(f)+\langle{f},\mathbf{E}_{t}{f}\rangle.$
(27)
As $t\to\infty$, we have
$\lim_{t\to\infty}\mathbf{E}_{t}=0.$
Consequently, we have
$\lim_{t\to\infty}\mathcal{D}_{\nu_{t}\\!,\mathbf{P}}^{2}(f)=\mathcal{D}_{\pi,\mathbf{P}}^{2}(f).$
Therefore, this result indicates that the GDE of a graph function $f$,
associated with the transition matrix $\mathbf{P}$ and under a parametrized
measure $\nu_{t}\\!$, is the sum of a quadratic form involving the usual
unnormalized Laplacian $\mathbf{L}(\pi)$ and an operator $\mathbf{E}_{t}$ that
tends to $0$ as $t\to\infty$. ∎
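The decay of $\mathbf{E}_{t}$ can also be observed numerically; the sketch below (ours) compares the GDE under $\nu_{t}=\mu^{{\mkern-1.5mu\mathsf{T}}}\mathbf{P}^{t}$ with the GDE under the stationary measure $\pi$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
P = rng.random((n, n)) + 0.1; P /= P.sum(axis=1, keepdims=True)

# Stationary measure pi: left Perron eigenvector of P, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

mu = rng.random(n); mu /= mu.sum()   # initial vertex measure
f = rng.random(n)

def gde(nu, P, f):
    """Generalized Dirichlet energy of f under vertex measure nu."""
    return sum(nu[x] * P[x, y] * (f[x] - f[y]) ** 2
               for x in range(n) for y in range(n))

for t in (0, 5, 50):
    nu_t = mu @ np.linalg.matrix_power(P, t)   # nu_t = mu^T P^t
    print(t, abs(gde(nu_t, P, f) - gde(pi, P, f)))
# The printed gap decays like |lambda_2|^t, reflecting E_t -> 0.
```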
See 5.1
###### Proof.
Given the assumptions on our toy model graph,
$\displaystyle\overline{\mathcal{D}^{2}}_{\pi^{\alpha}}(\chi_{V_{1}^{*}})=\frac{c_{N}(\rho\epsilon_{N})^{\alpha}}{N_{1}\epsilon_{N}^{\alpha}+o(N_{1}\epsilon_{N}^{\alpha})}=\frac{\omega_{N}}{N}\,c\,\frac{\rho^{\alpha}}{a}+o\Big{(}\frac{\omega_{N}}{N}\Big{)}.$ (28)
Now, with $V=F\cup H$, where $H\subset V_{2}^{*}$ and $|V_{1}^{*}\cup V|=\tilde{a}N+o(N)$:
$\displaystyle\overline{\mathcal{D}^{2}}_{\pi^{\alpha}}(\chi_{V_{1}^{*}\cup V})=\frac{\omega_{N}\epsilon_{N}^{\alpha}}{N\tilde{a}\epsilon_{N}^{\alpha}+o(N\epsilon_{N}^{\alpha})}=\frac{\omega_{N}}{N}\,\frac{1}{\tilde{a}}+o\Big{(}\frac{\omega_{N}}{N}\Big{)}.$ (29)
Hence, if $c\rho>b$, then $\overline{\mathcal{D}^{2}}_{\pi}(\chi_{V_{1}^{*}\cup V})<\overline{\mathcal{D}^{2}}_{\pi}(\chi_{V_{1}^{*}})$ for some $V$, whereas if $c\rho^{\alpha}<b$, then $\overline{\mathcal{D}^{2}}_{\pi^{\alpha}}(\chi_{V_{1}^{*}\cup V})>\overline{\mathcal{D}^{2}}_{\pi^{\alpha}}(\chi_{V_{1}^{*}})$ for all $V$. ∎
## Appendix B Additional experimental results
Table 3: Clustering performance (ARI) on UCI datasets.

| Dataset | $N$ | $d$ | $k$ | $\textnormal{SC-SYM}_{1}$ | $\textnormal{SC-SYM}_{2}$ | $\textnormal{DI-SIM}_{\textnormal{L}}$ | $\textnormal{DI-SIM}_{\textnormal{R}}$ | $\textnormal{DSC}\\!+\\!(\gamma)$ | $\textnormal{GSC}_{1}(\gamma=1,\alpha,t)$ | $\textnormal{GSC}_{2}(\gamma,\alpha,t)$ | $\textnormal{GSC}_{3}(\gamma,\alpha,t)$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Iris | 150 | 3 | 4 | 75.92 | 75.92 | 69.41 (1) | 58.44 (1) | 52.96 (0.80) | 92.22 (0.9,4) | 92.22 (0.95,0.7,20) | 92.22 (0.95,0.7,3) |
| Glass | 214 | 9 | 6 | 23.12 | 24.80 | 22.05 (1) | 18.89 (1) | 20.93 (0.80) | 28.00 (0.1,42) | 28.01 (0.95,0.9,22) | 26.85 (0.85,0.7,52) |
| Wine | 178 | 13 | 3 | 87.82 | 87.82 | 84.98 (1) | 89.74 (1) | 92.95 (0.80) | 87.82 (0.1,1) | 87.92 (0.95,0.1,53) | 92.95 (0.95,0.8,2) |
| WBDC | 569 | 30 | 2 | 76.69 | 77.30 | 77.95 (2) | 57.75 (1) | 64.58 (0.30) | 80.48 (1,4) | 81.12 (0.95,0.8,3) | 78.56 (0.95,0.3,8) |
| Control | 600 | 60 | 6 | 62.25 | 62.25 | 66.79 (1) | 59.83 (1) | 60.04 (0.90) | 71.91 (0.1,90) | 64.77 (0.95,0.3,65) | 64.78 (0.90,0.7,96) |
| Parkinson | 185 | 22 | 2 | 35.42 | 32.91 | 28.90 (1) | 24.82 (1) | 24.82 (0.95) | 42.26 (1,3) | 43.73 (0.95,0.4,10) | 40.28 (0.95,0.2,60) |
| Vertebral | 310 | 6 | 3 | 29.70 | 29.70 | 38.85 (2) | 31.03 (2) | 54.64 (0.80) | 63.14 (1,5) | 58.40 (0.95,1,1) | 38.56 (0.85,1,9) |
| Breast Tissue | 106 | 9 | 6 | 36.96 | 38.41 | 41.94 (2) | 30.60 (2) | 21.76 (0.90) | 41.01 (0.1,40) | 38.02 (0.95,0.6,77) | 44.69 (0.95,0.4,96) |
| Seeds | 210 | 7 | 3 | 78.46 | 78.46 | 81.09 (1) | 74.41 (1) | 77.64 (0.80) | 83.52 (1,5) | 83.52 (0.95,0.9,4) | 83.52 (0.95,0.9,4) |
| Image Seg. | 2310 | 19 | 7 | 47.83 | 51.75 | 50.83 (1) | 36.54 (1) | 8.89 (0.99) | 52.15 (0.2,50) | 61.17 (0.95,1,78) | 55.81 (0.95,0.4,41) |
| Yeast | 1484 | 8 | 10 | 19.41 | 21.17 | 19.97 (2) | 19.49 (2) | 16.62 (0.90) | 28.48 (0.5,9) | 26.85 (0.95,0.9,93) | 21.78 (0.95,0.4,19) |
| Average | – | – | – | 52.14 | 52.77 | 52.98 | 45.59 | 48.70 | 59.95 | 60.52 | 58.18 |
# On large deviations in the averaging principle for SDE’s with a “full
dependence”, correction
A. Yu. Veretennikov
(School of Mathematics, University of Leeds, UK
& Institute for Information Transmission Problems, Russia)
We establish the large deviation principle for stochastic differential equations with averaging in the case when all coefficients of the fast component, including the diffusion, depend on the slow one.
AMS 1991 subject classifications: 60F10, 60J60.
Key words and phrases: large deviations, averaging, stochastic differential equation.
## 1 Introduction
This is a corrected version of the paper Veretennikov (1999). We consider the
SDE system
$\displaystyle dX_{t}=f(X_{t},Y_{t})dt,\quad X_{0}=x_{0},$ $\displaystyle
dY_{t}=\varepsilon^{-2}B(X_{t},Y_{t})dt+\varepsilon^{-1}C(X_{t},Y_{t})dW_{t},\quad
Y_{0}=y_{0}.$ (1)
Here $X_{t}\in E^{d}$ and $Y_{t}\in M$, where $M$ is a compact manifold of dimension $\ell$ (e.g., the torus $T^{\ell}$), $f$ is a function with values in the $d$-dimensional Euclidean space $E^{d}$, $B$ is a function with values in $TM$, $C$ is a function with values in $(TM)^{\ell}$ (i.e., in local coordinates, an $\ell\times\ell$ matrix), $W_{t}$ is an $\ell$-dimensional Wiener process on some probability space $(\Omega,F,P)$, and $\varepsilon>0$ is a small parameter. Concerning SDE’s on manifolds we refer to Watanabe and Ikeda (1989).
The large deviation principle (LDP) for such systems with a “full dependence”, that is, with $C(X_{t},Y_{t})$, was not treated earlier. Only the case $C(Y_{t})$ was considered, in papers by Freidlin (1976), Freidlin (1978) and Freidlin and Wentzell (1984) for a compact state space, and by Veretennikov (1994) for a non-compact one. There are also recent papers, by Liptser and by the author, on more general systems with small additive diffusions, which again concern only the case $C(Y_{t})$.
The LDP for systems like (1) is important in averaging and homogenization, in
the KPP equation theory, for stochastic approximation algorithms with
averaging and so forth. The problem of an LDP for the case $C(X_{t},Y_{t})$ has been open since Freidlin (1976), Freidlin (1978). Intuitively, the scheme used for $C(Y_{t})$ should work; at least, almost all main steps go through. Indeed, there was only one lacuna: the use of Girsanov’s transformation did not allow freezing of $X_{t}$ when $C$ depended on the slow motion, while it worked well and very naturally for the drift $B(X_{t},Y_{t})$. Yet the problem remained unresolved for years and the answer was not clear at all. Notice that this difficulty does not appear in analogous discrete-time systems [see Gulinsky and Veretennikov (1993), Chapter 11].
It turned out that the use of Girsanov’s transformation in some sense
prevented resolving the problem. Our approach in this paper is based on a new
technical lemma, Lemma 5 below. The main idea is to use two different scales
of partitions of the interval $[0,T]$, a “first-order partition” by points
$\Delta,\,2\Delta,\,\ldots$, which do not depend on the small parameter
$\varepsilon$ and “second-order partitions” which depend on $\varepsilon$ in a
special way, by points
$\varepsilon^{2}t(\varepsilon),\,2\varepsilon^{2}t(\varepsilon),\ldots\,$.
Then the exponential estimates needed for the proof of the result can be established in two steps. First, the estimates for a “small” partition interval are derived using the uniform bound of Lemma 3 (see below) and the estimates for stochastic integrals. It is important that in the “second” scale the fast motion is still close enough to its frozen version [the bound (13) below]. Second, the bounds for “small” partitions together with an induction argument yield the estimate for a “large” partition interval.
The original proof in the 1999 paper contains a gap. It relates to the boundedness of an auxiliary constant $b$ in the proof: in the original version this constant may depend implicitly on the partition size $\Delta$, hence generating a vicious circle. The main aim of this version of the paper is to present the “patch”. The correction uses improved approximations that keep this constant $b$ bounded in the lower and upper bounds, and it also uses a truncated Legendre transformation in the upper bound. The author is deeply indebted to Professor Yuri Kifer for discovering this vicious circle in the original version of the paper. The main technical tool remains Lemma 5. All standing assumptions are the same as in the original version.
The main result is stated in Section 2. In Section 3 we present auxiliary lemmas, among them the main technical Lemma 5 with its proof, and a version of an important lemma from Freidlin and Wentzell (1984) (see Lemma 6) which requires certain comments. Those comments, along with other related remarks, are given in the Appendix, which has also been slightly extended. The proof of the main theorem is presented in Section 4.
## 2 Main result
We make the following assumptions.
$(A_{f})$
The function $f$ is bounded and satisfies the Lipschitz condition.
$(A_{C})$
The function $CC^{*}$ is bounded, uniformly nondegenerate, $C$ satisfies the
Lipschitz condition.
$(A_{B})$
The function $B$ is bounded and satisfies the Lipschitz condition.
Some conditions may be relaxed; for example, $B$ can be locally bounded, $C$
locally (w.r.t. $x$) nondegenerate and so on.
The family of processes $X^{\varepsilon}$ satisfies a large deviation
principle in the space $C([0,T];R^{d})$ with a normalizing coefficient
$\varepsilon^{-2}$ and a rate function $S(\varphi)$ if three conditions are
satisfied:
$\limsup_{\varepsilon\to 0}\varepsilon^{2}\log P_{x}(X^{\varepsilon}\in
F)\leq-\inf_{F}S(\varphi),\quad\forall F\mbox{ closed },$ (2)
$\liminf_{\varepsilon\to 0}\varepsilon^{2}\log P_{x}(X^{\varepsilon}\in
G)\geq-\inf_{G}S(\varphi),\quad\forall G\mbox{ open },$ (3)
and $S$ is a “good” rate function; that is, for any $s\geq 0$, the set
$\Phi(s):=\\{\varphi\in C([0,T];R^{d}):\,\,S(\varphi)\leq s,\,\,\varphi(0)=x\\}$
is compact in $C([0,T];R^{d})$. In fact, we will establish the following equivalent set of assertions due to Freidlin and Wentzell:
$\limsup_{\delta\to 0}\limsup_{\varepsilon\to 0}\varepsilon^{2}\log P_{x}(\rho_{0,T}(X^{\varepsilon},\Phi(s))\geq\delta)\leq-s,\quad\forall s>0,$ (4)
where $\Phi(s):=\\{\varphi\in C([0,T];R^{d}):\,S(\varphi)\leq s\\}$, and
$\liminf_{\delta\to 0}\liminf_{\varepsilon\to 0}\varepsilon^{2}\log P_{x}(\rho_{0,T}(X^{\varepsilon},\varphi)<\delta)\geq-S(\varphi),\quad\forall\varphi,$ (5)
where $S$ is a “good” rate function (see above).
Let $\tilde{W}_{t}=\varepsilon^{-1}W_{t\varepsilon^{2}}$,
$y_{t}=Y_{t\varepsilon^{2}}$, $x_{t}=X_{t\varepsilon^{2}}$ and let $y^{x}_{t}$
denote a solution of the SDE,
$dy^{x}_{t}=B(x,y^{x}_{t})dt+C(x,y^{x}_{t})d\tilde{W}_{t},\quad
y^{x}_{0}=y_{0}.$ (6)
###### Theorem 1
Let $(A_{f})$, $(A_{B})$, $(A_{C})$ be satisfied. Then the family $(X^{\varepsilon}_{t}=X_{t},\;0\leq t\leq T)$ satisfies the LDP as $\varepsilon\to 0$ in the space $C([0,T];R^{d})$ with the action function
$S(\varphi)=\int_{0}^{T}L(\varphi_{t},\dot{\varphi}_{t})dt,$
where
$L(x,\alpha)=\sup_{\beta}(\alpha\beta-H(x,\beta)),$
$H(x,\beta)=\lim_{t\to\infty}t^{-1}\log E\exp\left(\beta\int_{0}^{t}f(x,y^{x}_{s})ds\right).$
The limit $H$ exists and is finite for any $\beta$, the functions $H$ and $L$ are convex in their last arguments $\beta$ and $\alpha$ respectively, $L\geq 0$, and $H$ is continuously differentiable in $\beta$.
The differentiability of $H$ at any $\beta$ is provided by the compactness of
the state space of the fast component.
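For orientation, consider the following degenerate example (ours, not part of the original argument): if $f$ does not depend on the fast variable, the rate function collapses onto the averaged dynamics.

```latex
% Degenerate case f(x,y) = f(x): the exponent is deterministic, so
H(x,\beta) \;=\; \lim_{t\to\infty} t^{-1}\log \exp\!\big(t\,\beta f(x)\big) \;=\; \beta f(x),
\qquad
L(x,\alpha) \;=\; \sup_{\beta}\big(\alpha\beta-\beta f(x)\big)
  \;=\; \begin{cases} 0, & \alpha=f(x),\\ +\infty, & \alpha\neq f(x).\end{cases}
% Hence S(\varphi)=0 exactly for the averaged trajectory
% \dot{\varphi}_t = f(\varphi_t), and S(\varphi)=+\infty otherwise.
```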
## 3 Auxiliary lemmas
Let us consider the semigroup of operators $T^{\beta}_{t},t\geq 0$, on $C(M)$
defined by the formula
$T^{x^{\prime},x,\beta}_{t}g(y)=T^{\beta}_{t}g(y)=E_{y}g(y^{x}_{t})\exp\left(\int^{t}_{0}\beta
f(x^{\prime},y^{x}_{s})ds\right),$
where $\beta\in E^{d}$ and $\beta f$ denotes the scalar product.
###### Lemma 1
Let assumptions $(A_{f})$, $(A_{B})$, $(A_{C})$ be satisfied. Then for any $\beta$ the operator $T^{\beta}_{1}$ is compact in the space $C(M)$.
###### Lemma 2
Let assumptions $(A_{f})$, $(A_{B})$, $(A_{C})$ be satisfied. Then the spectral radius $r(T^{\beta}_{1})$ is a simple eigenvalue of $T^{\beta}_{1}$, separated from the rest of the spectrum, and its eigenfunction $e_{\beta}$ belongs to the cone $C^{+}(M)$. Moreover, the function $r(T^{\beta}_{1})$ is smooth (of class $C^{\infty}$) in $\beta$, and the function $e_{\beta}$ is bounded and separated away from zero uniformly in $|\beta|<b$ and in $x^{\prime},x$.
###### Lemma 3
Let $\beta\in E^{d}$, and let assumptions $(A_{f})$, $(A_{B})$, $(A_{C})$ be satisfied. Then there exists a limit
$H(x^{\prime},x,\beta)=\lim_{t\to\infty}t^{-1}\log E_{y}\exp\left(\beta\int^{t}_{0}f(x^{\prime},y^{x}_{s})ds\right);$
moreover, $H(x^{\prime},x,\beta)=\log r(T^{x^{\prime},x,\beta}_{1})$. The function $H(x^{\prime},x,\beta)$ is of class $C^{\infty}$ in $\beta$ and convex in $\beta$. For any $b>0$ there exists $C(b)$ such that, for any $y$ and $|\beta|<b$,
$\left|t^{-1}\log E_{y}\exp\left(\beta\int^{t}_{0}f(x^{\prime},y^{x}_{s})ds\right)-H(x^{\prime},x,\beta)\right|\leq C(b)t^{-1}.$ (7)
Notice that $|H(x^{\prime},x,\beta)|\leq\|f\|_{C}|\beta|$.
###### Lemma 4
Let assumptions $(A_{f})$, $(A_{B})$, $(A_{C})$ be satisfied. Then for any
$b>0$ the functions $H$ and $\nabla_{\beta}H$ are uniformly continuous in
$(x^{\prime},x,\beta)$, $|\beta|<b$.
Lemmas 1-4 are standard [cf. Veretennikov (1994) or (1992)]. They are based on
Frobenius-type theorems for positive compact operators [see Krasnosel’skii,
Lifshitz and Sobolev (1989)] and the theory of perturbations of linear
operators [see Kato (1976), Chapter 2]. Denote
$\tilde{F}_{t}=F_{t\varepsilon^{2}}$.
###### Lemma 5
Let assumptions $(A_{f})$, $(A_{B})$, $(A_{C})$ be satisfied, let $b>0$, and let $t(\varepsilon)\to\infty$ with $t(\varepsilon)=o(\log\varepsilon^{-1})$ as $\varepsilon\to 0$. Then for any $\nu>0$ there exist $\delta(\nu)>0$,
$\varepsilon(\nu)>0$ such that for $\varepsilon\leq\varepsilon(\nu)$ uniformly
w.r.t. $t_{0},\,x^{\prime},\,x,\,x_{0},\,y_{0}$, $x_{t_{0}}=x_{0}$,
$|\beta|\leq b$, the inequality holds on the set
$\\{|x_{t_{0}}-x|<\delta(\nu)\\}$,
$\left|\log
E(\exp(\beta\int\limits_{t_{0}}^{t_{0}+t(\varepsilon)}f(x^{\prime},y_{s})ds)|\tilde{F}_{t_{0}})-t(\varepsilon)H(x^{\prime},x,\beta)\right|\leq\nu
t(\varepsilon).$ (8)
Moreover, if $\Delta\leq\Delta(\nu)=(1+\|f\|_{C})^{-1}\delta(\nu)/2$ and
$\varepsilon$ is small enough, then uniformly w.r.t.
$t_{0},\,x^{\prime},\,x,\,x_{0},\,y_{0}$, $\delta\leq\delta(\nu)$,
$|x_{0}-x|<\delta$, and $|\beta|\leq b$,
$\displaystyle\exp(\varepsilon^{-2}\Delta
H(x^{\prime},x,\beta)-\nu\Delta\varepsilon^{-2})$ $\displaystyle\leq
E\left(\exp(\beta\varepsilon^{-2}\int\limits_{t_{0}}^{t_{0}+\Delta}f(x^{\prime},Y_{s})ds)|F_{t_{0}}\right)$
$\displaystyle\leq\exp(\varepsilon^{-2}\Delta
H(x^{\prime},x,\beta)+\nu\Delta\varepsilon^{-2}).$ (9)
Remark. We reiterate and emphasize that any couple $(\Delta,\delta)$
satisfying only $\Delta\leq\Delta(\nu)$ and $\delta\leq\delta(\nu)$ would do.
Proof. Step 1. It is sufficient to prove (8) and (9) for $t_{0}=0$. Moreover,
since $H$ is continuous, it suffices to check both inequalities for $x=x_{0}$.
Indeed, the bound
$\left|\log
E\exp\left(\beta\int\limits_{0}^{t(\varepsilon)}f(x^{\prime},y_{s})ds\right)-t(\varepsilon)H(x^{\prime},x_{0},\beta)\right|\leq\nu
t(\varepsilon)$
implies
$\displaystyle\left|\log
E\exp\left(\beta\int\limits_{0}^{t(\varepsilon)}f(x^{\prime},y_{s})ds\right)-t(\varepsilon)H(x^{\prime},x,\beta)\right|$
$\displaystyle\leq
t(\varepsilon)(\nu+|H(x^{\prime},x,\beta)-H(x^{\prime},x_{0},\beta)|),$
and we use the uniform continuity of the function $H$ on compact sets (recall that $|\beta|\leq b$). The same arguments apply to the second
inequality of the assertion of the lemma. So, in the sequel we consider the
case $x_{0}=x$.
Let us show first that
$\sup_{x^{\prime},x_{0}}\left|t(\varepsilon)^{-1}\log
E\exp\left(\beta\int_{0}^{t(\varepsilon)}f(x^{\prime},y_{s})ds\right)-H(x^{\prime},x,\beta)\right|\leq\nu$
(10)
if $\varepsilon$ is small enough. Due to Lemma 3, it would be correct if
$y_{s}$ were replaced by $y^{x}_{s}$ and $t(\varepsilon)\geq\nu^{-1}C(b)$. We
will also use the bounds
$\sup_{0\leq s\leq t}|x_{s}-x_{0}|\leq\|f\|_{C}\varepsilon^{2}t,\qquad\exp(Ct(\varepsilon))\,t(\varepsilon)^{2}\varepsilon^{2}\to 0\;\;(\varepsilon\to 0)\quad\forall C.$ (11)
Let $|f(x^{\prime},y)-f(x^{\prime},y^{\prime})|\leq L_{f}|y-y^{\prime}|$ for
all $y,y^{\prime},x^{\prime}$, $L_{f}>0$, $C_{f}=\|f\|_{C}$. We estimate for
$t(\varepsilon)>\nu^{-1}C(b)/4$,
$\displaystyle
E\exp\left(\beta\int\limits_{0}^{t(\varepsilon)}f(x^{\prime},y_{s})ds\right)$
$\displaystyle\times\left\\{I\left(\sup_{0\leq t\leq
t(\varepsilon)}|y_{t}-y^{x}_{t}|\leq\nu/(4L_{f}b)\right)+I\left(\sup_{0\leq
t\leq t(\varepsilon)}|y_{t}-y^{x}_{t}|>\nu/(4L_{f}b)\right)\right\\}$
$\displaystyle\leq
E\exp\left(\beta\int\limits_{0}^{t(\varepsilon)}f(x^{\prime},y^{x}_{s})ds+t(\varepsilon)\nu/4\right)I\left(\sup_{0\leq
t\leq t(\varepsilon)}|y_{t}-y^{x}_{t}|\leq\nu/(4L_{f}b)\right)$
$\displaystyle+\exp(C_{f}bt(\varepsilon)\nu)EI\left(\sup_{0\leq t\leq
t(\varepsilon)}|y_{t}-y^{x}_{t}|>\nu/(4L_{f}b)\right)$ $\displaystyle\leq
E\exp\left(\beta\int\limits_{0}^{t(\varepsilon)}f(x^{\prime},y^{x}_{s})ds\right)\exp(t(\varepsilon)\nu/4)$
$\displaystyle+\exp\left(C_{f}bt(\varepsilon)\nu\right)\nu^{-2}E\sup_{t\leq
t(\varepsilon)}|y_{t}-y^{x}_{t}|^{2}.$ (12)
By virtue of Lemma 3 we have
$E\exp\left(\beta\int\limits_{0}^{t(\varepsilon)}f(x^{\prime},y^{x}_{s})ds\right)\leq\exp(t(\varepsilon)(H(x^{\prime},x,\beta)+\nu/4))$
if $\varepsilon$ is small enough. A similar lower bound also holds true.
Let us estimate the second term. By virtue of the inequalities for the Itô and
Lebesgue integrals, we have
$\displaystyle E\sup_{t^{\prime}\leq t}|y_{t^{\prime}}-y^{x}_{t^{\prime}}|^{2}$ $\displaystyle\leq CE\int\limits_{0}^{t}|C(x_{s},y_{s})-C(x,y^{x}_{s})|^{2}ds$
$\displaystyle+CtE\int\limits_{0}^{t}|B(x_{s},y_{s})-B(x,y^{x}_{s})|^{2}ds$
$\displaystyle\leq C\int\limits_{0}^{t}E|x_{s}-x|^{2}ds+C\int\limits_{0}^{t}E\sup_{u\leq s}|y_{u}-y^{x}_{u}|^{2}ds$
$\displaystyle\leq Ct^{2}\varepsilon^{2}+C\int\limits_{0}^{t}E\sup_{u\leq s}|y_{u}-y^{x}_{u}|^{2}ds.$
By virtue of Gronwall’s lemma, one gets
$E\sup_{t^{\prime}\leq t}|y_{t^{\prime}}-y^{x}_{t^{\prime}}|^{2}\leq
Ct^{2}\varepsilon^{2}\exp(Ct).$
In particular,
$E\sup_{t^{\prime}\leq
t(\varepsilon)}|y_{t^{\prime}}-y^{x}_{t^{\prime}}|^{2}\leq
Ct(\varepsilon)^{2}\varepsilon^{2}\exp(Ct(\varepsilon)).$ (13)
So the second term in (12) does not exceed the value
$\exp(C_{f}bt(\varepsilon)\nu)\nu^{-2}Ct(\varepsilon)^{2}\varepsilon^{2}$
which is $o(\exp(Kt(\varepsilon)))$ for any $K>0$. Indeed,
$\exp(t(\varepsilon)(C_{f}b\nu-K))\nu^{-2}Ct(\varepsilon)^{2}\varepsilon^{2}\to
0$ due to the assumption
$t(\varepsilon)=o(\log\varepsilon^{-1}),\;\varepsilon\to 0$. This proves (10).
Notice that the bound (10) is uniform w.r.t. $|\beta|\leq b$ and
$x^{\prime},\,x,\,y_{0}$. Since the function $H$ is continuous, we get on the
set $\\{|x_{t_{0}}-x|<\delta(\nu)\\}$,
$\displaystyle\sup\limits_{x^{\prime},x,y_{0},t_{0},|\beta|\leq b}\left|\log E\left(\exp\left(\beta\int\limits_{t_{0}}^{t_{0}+t(\varepsilon)}f(x^{\prime},y_{s})ds\right)\mid\tilde{F}_{t_{0}}\right)-t(\varepsilon)H(x^{\prime},x,\beta)\right|\leq\nu t(\varepsilon)$ (14)
if $\delta(\nu)$ is small enough.
Step 2. Let $\Delta\leq(1+\|f\|_{C})^{-1}\delta(\nu)/2=\Delta(\nu)$ and
$N=\Delta\varepsilon^{-2}t(\varepsilon)^{-1}$. Then $\sup_{0\leq s\leq
Nt(\varepsilon)}|x_{s}-x_{0}|\leq\delta(\nu)/2$. Let
$|x-x_{0}|<\delta(\nu)/2$. So, $\sup_{0\leq s\leq
Nt(\varepsilon)}|x_{s}-x|<\delta(\nu)$. In particular,
$|x_{kt(\varepsilon)}-x|<\delta(\nu)$ for any $1\leq k\leq N$. By induction,
we get from (14) for such $k$,
$\displaystyle\exp(kt(\varepsilon)H(x^{\prime},x,\beta)-\nu kt(\varepsilon))$
$\displaystyle\leq
E\exp\left(\beta\int\limits_{0}^{kt(\varepsilon)}f(x^{\prime},y_{s})ds\right)$
$\displaystyle\leq\exp(kt(\varepsilon)H(x^{\prime},x,\beta)+\nu
kt(\varepsilon)),$
or, after the time change,
$\displaystyle\exp(kt(\varepsilon)H(x^{\prime},x,\beta)-\nu kt(\varepsilon))$ $\displaystyle\leq E\exp\left(\beta\varepsilon^{-2}\int\limits_{0}^{kt(\varepsilon)\varepsilon^{2}}f(x^{\prime},Y_{s})ds\right)$ $\displaystyle\leq\exp(kt(\varepsilon)H(x^{\prime},x,\beta)+\nu kt(\varepsilon)).$
Since $H$ is continuous, we obtain for $k=N$,
$\displaystyle\exp(\varepsilon^{-2}\Delta H(x^{\prime},x,\beta)-\nu\Delta\varepsilon^{-2})$ $\displaystyle\leq E\exp\left(\beta\varepsilon^{-2}\int\limits_{0}^{\Delta}f(x^{\prime},Y_{s})ds\right)$ $\displaystyle\leq\exp(\varepsilon^{-2}\Delta H(x^{\prime},x,\beta)+\nu\Delta\varepsilon^{-2}).$ (15)
Lemma 5 is proved. QED
The next lemma is an improved version of Lemma 7.5.2 from Freidlin and Wentzell. Although we will not use it, its technique is essential.
###### Lemma 6
[Freidlin (1978), Freidlin and Wentzell (1984)]. Let $S(\varphi)<\infty$. If $\psi^{n}$ is a sequence of step functions tending uniformly to $\varphi$ in $C([0,T];R^{d})$ as $n\to\infty$, then there exists a sequence of piecewise linear functions $\chi^{n}$ (with the same partitions) which also tend uniformly to $\varphi$ and such that
$\limsup_{n\to\infty}\int_{0}^{T}L(\psi^{n}_{s},\dot{\chi}^{n}_{s})ds\leq
S(\varphi).$
Moreover, one may assume without loss of generality that for any $s$ there
exists a value
$\beta_{s}=\mathop{\rm
argmax}\nolimits\limits_{\beta}(\beta\dot{\chi}^{n}_{s+}-H(\psi^{n}_{s},\psi^{n}_{s},\beta))$
and
$L(\psi^{n}_{s},\alpha)>L(\psi^{n}_{s},\dot{\chi}^{n}_{s+})+(\alpha-\dot{\chi}^{n}_{s+})\beta_{s}\quad\forall\alpha\not=\dot{\chi}^{n}_{s}.$
If $\hat{\psi}$ is close enough to $\psi^{n}_{s}$ then there exists a value
$\hat{\beta}_{s}=\mathop{\rm
argmax}\nolimits\limits_{\beta}(\beta\dot{\chi}^{n}_{s+}-H(\psi^{n}_{s},\hat{\psi},\beta)),$
$L(\psi^{n}_{s},\hat{\psi},\alpha)>L(\psi^{n}_{s},\hat{\psi},\dot{\chi}^{n}_{s+})+(\alpha-\dot{\chi}^{n}_{s+})\hat{\beta}_{s}\quad\forall\alpha\not=\dot{\chi}^{n}_{s}$
and
$L(\psi^{n}_{s},\hat{\psi},\dot{\chi}^{n}_{s+})\to
L(\psi^{n}_{s},\psi^{n}_{s},\dot{\chi}^{n}_{s+}),\quad\hat{\psi}\to\psi^{n}_{s}.$
We added to the original assertion the property that $\chi^{n}_{t}$ may be
chosen piecewise linear. Indeed, such functions are used in the proof; see
Freidlin and Wentzell (1984), Section 7.5. The existence of $\beta_{s}$
asserted in the lemma also follows from the proof; see Freidlin and Wentzell
(1984) or Freidlin (1978). The assertions about $\hat{\psi}$ and $\hat{\beta}_{s}$, which we also added to the original statement, can be deduced from the proof using similar arguments.
In fact, there is a small gap in the original proof: an additional assumption was used that was not formulated explicitly. This is why we present a precise statement and give the necessary comments on it in the Appendix.
## 4 Proof of Theorem 1
1. 1.
First part of the proof: the lower bound. Let $S(\varphi)<\infty$. To establish the lower bound, we will show that, for any $\nu>0$ and any $\delta>0$, we have for $\varepsilon>0$ small enough,
$\varepsilon^{2}\log
P_{x}(\rho_{0,T}(X^{\varepsilon},\varphi)<\delta)\geq-S(\varphi)-\nu.$
Denote $H(x,\beta)=H(x,x,\beta)$. The existence of the limit
$H(x,x^{\prime},\cdot)$ for any $x,x^{\prime}$, and its differentiability and
continuity are asserted in Lemmas 3 and 4. Throughout the proof, we may and
will assume that for any $s$, $L(\varphi_{s},\dot{\varphi}_{s})<\infty$.
Indeed, this may be violated only on a set of $s$ of Lebesgue measure zero.
Notice that due to the boundedness of the function $f$, this inequality
implies $\sup_{s}|\dot{\varphi}_{s}|\leq\|f\|_{C}$. Indeed, for any
$|\alpha|>\|f\|_{C}$, we have $L(x,\alpha)=+\infty$.
In the sequel, both $X_{0}=x$ and $Y_{0}=y$ are fixed, hence, the probability
symbol $P$ will be used without indices.
2. 2.
We are going to reduce the problem of estimating from below the probability
$P(\rho(X,\varphi)<\delta)$
to that for the probability
$P(\rho(X^{\varphi},\varphi)<\delta^{\prime}),\quad\mbox{where}\quad
X^{\psi}_{t}:=x_{0}+\int_{0}^{t}f(\psi_{s},Y_{s})ds,\;\forall\psi,$
and further to
$P(\rho(X^{\psi},\chi)<\delta^{\prime}),$
where both $\psi,\chi$ approximate $\varphi$. The rough idea is eventually to choose a step function as $\psi$ and a piecewise linear one as $\chi$; however, we are going to perform these approximations gradually. A step function is needed because our only technical tool, Lemma 5, is established for this very case. A piecewise linear $\chi$ is not necessary, but convenient.
Eventually we consider a finite-dimensional subset of the set $\\{\rho(X,\varphi)<\delta\\}$, of the form (slightly abusing notation, to be explained in the sequel)
$\\{\rho(X^{\psi}_{\Delta},\chi_{\Delta})<\delta^{\prime}_{1},\,\rho(X^{\psi}_{2\Delta},\chi_{2\Delta})<\delta^{\prime}_{2},\,\ldots,\,\rho(X^{\psi}_{T},\chi_{T})<\delta^{\prime}_{T/\Delta}\\},$
with appropriately chosen $\Delta$, $X^{\psi}$, deterministic curves $\psi,\chi$, and constants $\delta^{\prime}_{k}$: in particular, we will choose $\delta^{\prime}_{1}\ll\delta^{\prime}_{2}\ll\ldots\ll\delta^{\prime}_{T/\Delta}\ll\delta$.
While performing all these approximations, we need to establish simultaneously
a special property: at any point $s$, the Fenchel-Legendre adjoint to the
$\dot{\chi}_{s}$ variable $\beta_{s}=\beta_{s}[\psi_{s},\dot{\chi}_{s}]$ (see
below) can be chosen uniformly bounded.
There will be several (in fact, many) small constants chosen successively throughout this proof. In the beginning, $\delta>0$ and $\nu>0$ are fixed; due to Lemma 5, we also have $\Delta(\nu)$ and $\delta(\nu)$. Here we indicate in advance the order of choice for most of them (by this diagram we do not claim that every following constant depends only on the previous one, just the order):
$b\,\&\,\tilde{\delta}^{\prime}\mapsto\delta^{\prime}\mapsto\Delta\,\&\,\delta^{\prime\prime}\mapsto\delta^{\prime}_{m}\mapsto\delta^{\prime}_{m-1}\mapsto z_{m-1}\mapsto\nu^{\prime}_{m-1}\mapsto\ldots\mapsto\delta^{\prime}_{1}\mapsto z_{1}\mapsto\nu^{\prime}_{1}.$
Remark. We emphasize that the final set (above) will be a subset of $\\{\rho(X,\varphi)<\delta\\}$; however, it is not necessary that
$\\{\rho(\varphi_{\Delta},\chi_{\Delta})<\delta^{\prime}_{1},\,\rho(\varphi_{2\Delta},\chi_{2\Delta})<\delta^{\prime}_{2},\,\ldots,\,\rho(\varphi_{T},\chi_{T})<\delta^{\prime}_{T/\Delta}\\},$
that is, the curve $\varphi$ itself does not have to belong to this subset.
3. 3.
For any nonrandom curve $\psi$ (although we will apply this first to $\varphi$, other functions are also necessary for the analysis below) we have, due to the Lipschitz condition on $f$,
$\\{\rho(X,\varphi)<\delta\\}\supset\\{\rho(X^{\psi},\chi)<\delta^{\prime}\\}$
(16)
if $\delta^{\prime}$ and $\lambda:=\rho_{0,T}(\varphi,\psi)$ are small enough
with respect to $\delta$. (A small constant $\lambda>0$ is used just within
this step.) E.g.,
$\delta^{\prime}<\delta(e^{CT}CT+1)^{-1}/2,\quad\lambda<\delta(e^{CT}CT+1)^{-1}/2$
suffice, see below. Indeed,
$X_{t}=x+\int_{0}^{t}f(X_{s},Y_{s})ds,\quad
X^{\psi}_{t}=x+\int_{0}^{t}f(\psi_{s},Y_{s})ds,$
thence,
$\displaystyle|X_{t}-X^{\psi}_{t}|\leq\int_{0}^{t}|f(X_{s},Y_{s})-f(\psi_{s},Y_{s})|ds\leq C\int_{0}^{t}|X_{s}-\psi_{s}|ds$
$\displaystyle\leq C\int_{0}^{t}|X_{s}-X^{\psi}_{s}|ds+C\int_{0}^{t}|X^{\psi}_{s}-\chi_{s}|ds+C\int_{0}^{t}|\chi_{s}-\psi_{s}|ds;$
so on the set $\\{\rho(X^{\psi},\chi)<\delta^{\prime}\\}$,
$\displaystyle|X_{t}-X^{\psi}_{t}|\leq
C\int_{0}^{t}|X_{s}-X^{\psi}_{s}|ds+C\delta^{\prime}t+C\lambda t,$
and, moreover (on the same set),
$\sup_{0\leq t^{\prime}\leq t}|X_{t^{\prime}}-X^{\psi}_{t^{\prime}}|\leq
C\int_{0}^{t}\sup_{0\leq s^{\prime}\leq
s}|X_{s^{\prime}}-X^{\psi}_{s^{\prime}}|ds+C(\delta^{\prime}+\lambda)t,$
which implies by Gronwall’s inequality that on the same set,
$\rho(X,X^{\psi})\leq e^{CT}C(\delta^{\prime}+\lambda)T.$
Now,
$\displaystyle\rho(X,\varphi)\leq\rho(X,X^{\psi})+\rho(X^{\psi},\chi)+\rho(\chi,\varphi)$
$\displaystyle\leq
e^{CT}C(\delta^{\prime}+\lambda)T+\delta^{\prime}+\lambda=(\delta^{\prime}+\lambda)(e^{CT}CT+1).$
Therefore, (16) holds true; the choice of $\delta^{\prime}$ and $\lambda$ indicated above indeed suffices. In particular, it is true that
$\\{\rho(X,\varphi)<\delta\\}\supset\\{\rho(X^{\varphi},\chi)<\delta^{\prime}\\},$
if $\delta^{\prime}$ and $\lambda$ are small enough with respect to $\delta$.
This bound will be used while establishing a lower bound.
4. 4.
While establishing an upper bound, an opposite inclusion will be useful,
$\\{\rho(X,\varphi)<\delta\\}\subset\\{\rho(X^{\psi},\chi)<2\delta(KT+1)\\},$
(17)
if $\lambda=\max\left(\rho(\varphi,\psi),\rho(\varphi,\chi)\right)\leq\delta$.
Indeed,
$\displaystyle|X_{t}-X^{\psi}_{t}|\leq\int_{0}^{t}|f(X_{s},Y_{s})-f(\psi_{s},Y_{s})|ds\leq K\int_{0}^{t}|X_{s}-\psi_{s}|ds$
$\displaystyle\leq K\int_{0}^{t}|X_{s}-\varphi_{s}|ds+K\int_{0}^{t}|\psi_{s}-\varphi_{s}|ds;$
so on the set $\\{\rho(X,\varphi)<\delta\\}$,
$\displaystyle|X_{t}-X^{\psi}_{t}|\leq K\delta t+K\lambda t,$
and, moreover (on the same set),
$\rho(X,X^{\psi})\leq K(\delta+\lambda)T.$
Now, (17) follows from the inequalities,
$\displaystyle\rho(X^{\psi},\chi)\leq\rho(X,X^{\psi})+\rho(X,\varphi)+\rho(\chi,\varphi)$
$\displaystyle\leq K(\delta+\lambda)T+\delta+\lambda.$
5. 5.
Our next goal is the choice of an appropriate $\chi=\varphi^{b}$; it is essential to keep the integral $\int\limits_{0}^{T}L(\varphi_{s},\dot{\varphi}^{b}_{s})\,ds$ close to $S(\varphi)$. Suppose that for some $s\in[0,T]$ the set $\\{\alpha:\,L(\varphi_{s},\alpha)<\infty\\}$ has a non-empty interior (for the latter we will use the notation ${\cal L}^{\circ}[f,\varphi_{s}]$) with respect to its linear hull ${\cal L}[f,\varphi_{s}]$. Since $L(\varphi_{s},\dot{\varphi}_{s})<\infty$, this value is attained as a $\,\liminf\,$ of the values $L(\varphi_{s},\alpha)$, $\alpha\in{\cal L}^{\circ}[f,\varphi_{s}]$, as $\alpha\to\dot{\varphi}_{s}$; see Rockafellar (1970). It is a property of any such $\alpha$ that there exists a finite adjoint vector $\beta=\mathop{\rm argmax}\nolimits_{\beta}(\alpha\beta-H(\varphi_{s},\beta))$ given $\alpha$, although this adjoint may not be unique, as we will discuss shortly. Notice
that, in particular, we have
$H(\varphi_{s},\beta)=(\alpha\beta-L(\varphi_{s},\alpha)),\;\;\mbox{and}\;\;L(\varphi_{s},\alpha)=(\alpha\beta-H(\varphi_{s},\beta)).$
Then we choose a vector $\dot{\tilde{\varphi}_{s}}:=\alpha\in{\cal
L}^{\circ}[f,\varphi_{s}]$ so that the value
$L(\varphi_{s},\dot{\tilde{\varphi}_{s}})$ is close enough to
$L(\varphi_{s},\dot{\varphi_{s}})$.
The adjoint $\beta$-value is unique iff $L$ is differentiable at $\alpha$, which is also equivalent to the strict convexity of $H$ at $\beta$ (see Rockafellar (1970)). If this is not the case, that is, if $L$ is non-differentiable at $\alpha$, then the set of subgradients of $L$ at $\alpha$ is a non-trivial cone (although its dimension may be less than $d$), and one can choose various adjoint vectors $\beta$. Although not unique, both $\dot{\tilde{\varphi}}_{s}$ and $\beta[\varphi_{s},\dot{\tilde{\varphi}}_{s}]$ can be chosen as Borel functions of $s$ (due to the Measurable Choice Theorem). Denote the adjoint thus chosen by $\beta[\varphi_{s},\dot{\tilde{\varphi}}_{s}]$.
If the set ${\cal L}^{\circ}[f,\varphi_{s}]$ is empty, one can choose
$\beta[\varphi_{s},\dot{\tilde{\varphi}_{s}}]=0$, see Appendix A.
Now set
$\tilde{\varphi}_{t}:=x+\int_{0}^{t}\dot{\tilde{\varphi}_{s}}\,ds.$
We can choose this new curve to be as close to $\varphi$ as we like, and the
values $\int_{0}^{T}L(\varphi_{s},\dot{\tilde{\varphi}_{s}})\,ds$ and
$\int_{0}^{T}L(\varphi_{s},\dot{\varphi}_{s})\,ds$ are arbitrarily close to
each other, too, say,
$\left|\int\limits_{0}^{T}L(\varphi_{s},\dot{\tilde{\varphi}_{s}})\,ds-S(\varphi)\right|\leq\nu/3.$
6. 6.
For any $s$, let us choose a (measurable) vector $\hat{\alpha}_{s}$ such that $L(\varphi_{s},\hat{\alpha}_{s})=0$. Notice that $L(\varphi_{s},\cdot)\geq 0$ since $H(\varphi_{s},0)=0$; moreover, $H(\varphi_{s},\cdot)$ is convex at the origin, hence there does exist an $\alpha$ such that $L(\varphi_{s},\alpha)=0$. If it is not unique, this vector can still be chosen measurable due to the Measurable Choice Theorem. Moreover, $|\hat{\alpha}_{s}|\leq\|f\|_{C}$, since $L(\varphi_{s},\alpha)$ is finite only for $|\alpha|\leq\|f\|_{C}$.
It is shown in Appendix C that $\hat{\alpha}_{s}\in{\cal L}^{\circ}[f,\varphi_{s}]$, if the latter set is not empty. A corresponding adjoint vector $\beta$ (i.e., $\beta=\mbox{argsup}\,[\langle\cdot\,,\hat{\alpha}_{s}\rangle-H(\varphi_{s},\cdot)]$) does exist, and may be chosen as the zero vector in $R^{d}$, which we denote by $\bar{0}$. This vector is, indeed, one of the (possibly non-unique) adjoint vectors for the vector $\hat{\alpha}_{s}$, since $H(\varphi_{s},\bar{0})=0$ and $\bar{0}\hat{\alpha}_{s}-H(\varphi_{s},\bar{0})=0$. If unique (which depends on the differentiability of the function $L$, or, equivalently, on the strict convexity of the function $H$), this adjoint is precisely $\bar{0}$.
7. 7.
Next, given $b$, define
$\displaystyle\varphi^{b}_{t}:=x+\int_{0}^{t}\left(\dot{\tilde{\varphi}_{s}}\,1(|\beta[\varphi_{s},\dot{\tilde{\varphi}_{s}}]|\leq
b)+\hat{\alpha}_{s}\,1(|\beta[\varphi_{s},\dot{\tilde{\varphi}_{s}}]|>b)\right)\,ds.$
Since $|\hat{\alpha}_{s}|\leq\|f\|_{C}$, we can find such a $b$ that the curve
$\varphi^{b}$ is still as close to $\varphi$ as we like in the uniform norm,
and the values $\,\int_{0}^{T}L(\varphi_{s},\dot{\varphi^{b}_{s}})\,ds\,$ and
$\,\int_{0}^{T}L(\varphi_{s},\dot{\varphi}_{s})\,ds\,$ are arbitrarily close
to each other.
At the same time, if for some $s$ we have
$|\beta[\varphi_{s},\dot{\tilde{\varphi}_{s}}]|>b$, then the $\beta$-value is
transformed into
$\beta[\varphi_{s},\dot{\varphi_{s}^{b}}]=\beta[\varphi_{s},\hat{\alpha}_{s}]=0$,
so that for any $s\in[0,T]$ the inequality holds true,
$|\beta[\varphi_{s},\dot{\varphi^{b}_{s}}]|\leq b.$
We have,
$\\{\rho(X,\varphi)<\delta\\}\supset\\{\rho(X,\varphi^{b})<\delta/2\\}$,
if $b$ is large enough.
8. 8.
Next, the values of the rate functions $S^{\varphi}(\tilde{\varphi})$ and $S^{\varphi}(\varphi^{b})$ are arbitrarily close, say, $|S^{\varphi}(\tilde{\varphi})-S^{\varphi}(\varphi^{b})|<\nu/3$, so that
$\left|S^{\varphi}(\varphi^{b})-S(\varphi)\right|\leq 2\nu/3,$
if $b$ is large enough. Moreover, in addition,
$\\{\rho(X,\varphi^{b})<\delta/2\\}\supset\\{\rho(X^{\varphi},\varphi^{b})<\tilde{\delta}^{\prime}\\}$,
(18)
if $\tilde{\delta}^{\prime}$ and $\lambda=\rho(\varphi,\varphi^{b})$ are small
enough with respect to $\delta$; hence, we can fix the values $b$ and
$\tilde{\delta}^{\prime}$ here.
9. 9.
The next transform is the change of both $\varphi$ and $\varphi^{b}$ so that
the first becomes a step function, while the second becomes piecewise linear
on $[0,T]$. In the meantime, all adjoint $\beta$-values will remain bounded.
Consider the approximations
$\psi_{s}=\varphi_{\kappa_{m}(s+a)-a},\quad\dot{\chi}_{s}=\dot{\varphi^{b}}_{\kappa_{m}(s+a)-a},$
where $m=T/\Delta$ is a positive integer, $\kappa_{m}(s):=[s/\Delta]\Delta$,
and $a$ is ‘almost any’ value from $[0,T]$; we assume
$\varphi_{s}\equiv\varphi_{0},\,s<0$, and
$\dot{\varphi^{b}}_{s}\equiv\dot{\varphi^{b}}_{0},\,s<0$. Since
$\sup_{t}|\dot{\varphi}_{t}|<\infty$, we have $\rho(\varphi,\psi)\to
0,\,m\to\infty$. Next, it is well-known that there exists a subsequence $m\to\infty$ (we keep the notation $m$ for this subsequence) such that, almost surely w.r.t. $a$, the values of the following two integrals are close to zero,
$\int_{0}^{T}|\dot{\varphi}_{s}-\dot{\chi}_{s}|\,ds+\int_{0}^{T}|L(\varphi_{s},\dot{\varphi^{b}}_{s})-L(\psi_{s},\dot{\chi}_{s})|\,ds\approx 0.$ (19)
Footnote: possibly this folklore fact can be found in N. V. Krylov’s Lectures on Stochastic Processes (Moscow State University, 1980s, Russian, vol. 2; the English translation of this book has been published recently by the AMS). For the reader’s convenience we provide a folklore proof here. Due to a standard technique, it suffices to show for the first integral that $\int_{0}^{T}\int_{0}^{T}|\dot{\varphi^{b}}_{\kappa_{m}(s+a)-a}-\dot{\varphi^{b}}_{s}|\,da\,ds\to 0$ as $m\to\infty$. Let $g_{s},\,0\leq s\leq T,$ be a smooth bounded function such that $\int|\dot{\varphi^{b}}_{s}-g_{s}|\,ds<\nu$; all functions are extended to $-\infty<s<\infty$ so that they vanish outside $[0,T]$. Then
$\displaystyle\int_{0}^{T}\,da\int|\dot{\varphi^{b}}_{\kappa_{m}(s+a)-a}-g_{\kappa_{m}(s+a)-a}|\,ds=\int\,ds\int_{0}^{T}|\dot{\varphi^{b}}_{\kappa_{m}(s+a)-a}-g_{\kappa_{m}(s+a)-a}|\,da$
$\displaystyle=\sum_{i=1}^{m}\Delta\left(\Delta^{-1}\int_{(i-1)\Delta}^{i\Delta}|\dot{\varphi^{b}}_{s}-g_{s}|\,ds\right)\equiv\int|\dot{\varphi^{b}}_{s}-g_{s}|\,ds<\nu.$
Hence, it suffices to show that $\int_{0}^{T}\int_{0}^{T}|g_{\kappa_{m}(s+a)-a}-g_{s}|\,da\,ds\to 0$ as $m\to\infty$, which for smooth functions follows from the Lebesgue dominated convergence theorem, or even from uniform convergence under the integrals (of course, the smooth function $g$ with compact support is bounded). In turn, the existence of a smooth function $g$ claimed above is due to the fact that smooth functions are dense in $L_{1}[0,T]$; this is, e.g., because so are step functions, while the indicator of any Borel set may be approximated in this function space by a finite union of intervals, and the indicator of any open interval can be smoothed. Finally, the ‘standard technique’ above means that one chooses a further subsequence $m_{n}$ so that the Lebesgue measure of the set $\\{a:\,\int|\dot{\varphi^{b}}_{\kappa_{m_{n}}(s+a)-a}-\dot{\varphi^{b}}_{s}|\,ds>2^{-n}\\}$ does not exceed $2^{-n}$; the almost sure convergence then follows from the Borel–Cantelli Lemma. The reasoning for the second integral is similar: we treat the integrand as a function of $s$ and use the same freezing.
So, we can choose a value $m$ from this subsequence ($m\to\infty$) so that, firstly, $\Delta\leq\Delta(\nu)$ (the value from Lemma 5); secondly, $\int_{0}^{T}|L(\varphi_{s},\dot{\varphi^{b}}_{s})-L(\psi_{s},\dot{\chi}_{s})|\,ds\leq\nu/3$, so that
$\left|S^{\psi}(\chi)-S(\varphi)\right|\leq\nu;$ (20)
thirdly, $\int_{0}^{T}|\dot{\varphi}^{b}_{s}-\dot{\chi}_{s}|\,ds\leq\tilde{\delta}^{\prime}/10$, and finally,
$\\{\rho(X^{\varphi},\varphi^{b})<\tilde{\delta}^{\prime}\\}\supset\\{\rho(X^{\varphi},\chi)<\tilde{\delta}^{\prime}\times 9/10\\}$, (21)
and (see (16) above), for $\delta^{\prime}$ small enough (i.e., we can fix the value $\delta^{\prime}$ here),
$\\{\rho(X^{\varphi},\chi)<\tilde{\delta}^{\prime}\times 9/10\\}\supset\\{\rho(X^{\psi},\chi)<\delta^{\prime}\\}$, (22)
where for the latter we need $\rho(\varphi,\chi)+\rho(\varphi,\psi)$ to be small enough, which means, in particular, $m$ large enough.
10. 10.
For simplicity of presentation, we assume that $a=0$; otherwise the discretisations below should be read as $\varphi^{\Delta}=(\varphi_{\Delta-\tilde{a}},\varphi_{2\Delta-\tilde{a}},\ldots,\varphi_{m\Delta-\tilde{a}},\varphi_{T})$, where $\tilde{a}=a-[a/\Delta]\Delta$; this general case is treated similarly. So, we assume $a=0$, and denote $\varphi^{\Delta}=(\varphi_{\Delta},\varphi_{2\Delta},\ldots,\varphi_{m\Delta})$, $m\Delta=T$. Since $\|f\|_{C}<\infty$, we have (see Freidlin and Wentzell, proof of Lemma 7.5.1),
$\framebox{$\\{\rho(X^{\psi},\chi)<\delta^{\prime}\\}\supset\\{\rho((X^{\psi})^{\Delta},\chi^{\Delta})<\delta^{\prime\prime}\\},$}$
(23)
if $\delta^{\prime\prime}$ and $\Delta$ are small enough,
$\delta^{\prime\prime}<\delta^{\prime\prime}(\delta^{\prime})\quad\mbox{and}\quad\Delta\leq\Delta(\delta^{\prime})$
(24)
(however, $\Delta\leq\Delta(\delta^{\prime\prime})$ is not required!), and
assuming all our curves start at $x_{0}$ at time zero (hence, we do not
include the starting point into the definition of $\varphi^{\Delta}$). Here
for discretised curves we use the metric,
$\rho(\psi^{\Delta},\chi^{\Delta}):=\sup_{k}|\psi_{k\Delta}-\chi_{k\Delta}|.$
Now we are going to estimate from below the right-hand side of the
inequality,
$\framebox{$P(\rho((X^{\psi})^{\Delta},\chi^{\Delta})<\delta^{\prime\prime})\geq
E\prod_{k=1}^{m}I(|X^{\psi}_{k\Delta}-\chi_{k\Delta}|<\delta^{\prime}_{k}),$}$
(25)
where
$\delta^{\prime}_{1}<\delta^{\prime}_{2}<\ldots<\delta^{\prime}_{m}=\min(\delta(\nu),\delta^{\prime\prime})$
and $\delta(\nu)$ is from Lemma 5; here all values
$\delta^{\prime}_{k}$ and certain auxiliary values $z_{k}$ will be chosen in
the next two steps as follows:
$m_{\nabla
H}(\delta^{\prime}_{k-1}+z_{k-1})+\frac{\kappa}{2}\,\delta^{\prime}_{k-1}\leq\frac{\kappa}{2}\,\delta^{\prime}_{k},\quad\&\quad\delta^{\prime}_{k-1}\leq\frac{\delta^{\prime}_{k}}{2},\quad\&\quad
m_{H}(\delta^{\prime}_{k})\leq\nu,$
where $0<\kappa\leq 1$, and $m_{g}$ stands for the modulus of continuity of
any function $g$ with respect to all its variables restricted to $|\beta|\leq
b+1$. We emphasize that $\delta^{\prime\prime}$ and $\Delta$ can be chosen
arbitrarily small at this stage; in addition, they should satisfy the
conditions of Lemma 5 that will be used in the sequel, that is, we also
require $\delta^{\prime\prime}\leq\delta(\nu)$ and $\Delta\leq\Delta(\nu)$.
Hence, both $\delta^{\prime\prime}$ and $\Delta$ are fixed at this stage.
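The cascade of conditions on $(\delta^{\prime}_{k},z_{k})$ can be mimicked numerically. In the sketch below the moduli of continuity and the coupling $z_{k-1}=\delta^{\prime}_{k-1}$ are toy assumptions, not quantities from the proof; it merely demonstrates that values satisfying the three displayed conditions can be produced by successive halving.

import numpy as np

def backward_deltas(delta_m, m, kappa, m_grad_H, m_H, nu):
    # Construct delta'_1 < ... < delta'_m and auxiliary z_k backwards so that
    #   m_grad_H(delta'_{k-1} + z_{k-1}) + (kappa/2) delta'_{k-1} <= (kappa/2) delta'_k,
    #   delta'_{k-1} <= delta'_k / 2,  and  m_H(delta'_m) <= nu.
    assert m_H(delta_m) <= nu, "shrink delta'_m first"
    deltas, zs = [delta_m], []
    for _ in range(m - 1):
        d_next, d_prev = deltas[-1], deltas[-1] / 2.0
        while m_grad_H(2.0 * d_prev) + 0.5 * kappa * d_prev > 0.5 * kappa * d_next:
            d_prev /= 2.0                      # halve until the condition holds
        deltas.append(d_prev)
        zs.append(d_prev)                      # illustrative coupling z = delta'
    return deltas[::-1], zs[::-1]

mod = lambda r: np.sqrt(r)                     # toy Hoelder-type modulus of continuity
d, z = backward_deltas(delta_m=1e-2, m=4, kappa=0.5, m_grad_H=mod, m_H=mod, nu=0.2)
print(d, z)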
11.
Now everything is prepared for the lower estimate. We start with the
estimation of the conditional expectation
$E(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})\mid
F_{(m-1)\Delta})$ on the set
$\\{|X^{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|<\delta^{\prime}_{m-1}\\}$, or,
in other words, on the set
$\\{X^{\psi}_{(m-1)\Delta}=\hat{\psi}_{(m-1)\Delta}\\}$ with
$|\hat{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|~{}<~{}\delta^{\prime}_{m-1}$.
Let us apply the Cramér transformation of measure. Let $|\beta|\leq b$; we
will choose this vector a bit later (as
$\beta[\psi_{(m-1)\Delta},\dot{\chi}_{(m-1)\Delta+}]$). We get,
$\displaystyle
E\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})|F_{(m-1)\Delta}\right)=E^{\beta}\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})\times\right.$
$\displaystyle\left.\times\exp\left(-\varepsilon^{-2}\beta(X^{\psi}_{m\Delta}-X^{\psi}_{(m-1)\Delta})+\varepsilon^{-2}\Delta
H^{\varepsilon,\psi}_{m}(\hat{\psi}_{m-1},\beta)\right)|F_{(m-1)\Delta}\right),$
where $E^{\beta}$ is the (conditional) expectation with respect to the measure
$P^{\beta}$ defined on the sigma-field $F_{m\Delta}$ given $F_{(m-1)\Delta}$,
by its density
$\frac{dP^{\beta}}{dP}(\omega)=\exp\left(\varepsilon^{-2}\beta(X^{\psi}_{m\Delta}-X^{\psi}_{(m-1)\Delta})-\varepsilon^{-2}\Delta
H^{\varepsilon,\psi}_{m}(\hat{\psi}_{(m-1)\Delta},\beta)\right),$
where
$\varepsilon^{-2}\Delta H^{\varepsilon,\psi}_{m}(\hat{\psi}_{m-1},\beta)=\log
E\left(\exp\left(\varepsilon^{-2}\beta(X^{\psi}_{m\Delta}-X^{\psi}_{(m-1)\Delta})\right)|F_{(m-1)\Delta}\right).$
(Recall that $\hat{\psi}_{(m-1)\Delta}=X^{\psi}_{(m-1)\Delta}$.) By virtue
of Lemma 5, on the set
$\\{|X^{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|<\delta^{\prime}_{m-1}\\}$ we
estimate,
$\displaystyle
E\left[I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})\mid
F_{(m-1)\Delta}\right]$ (26)
$\displaystyle=E^{\beta}\left[I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})\right.$
$\displaystyle\left.\times\exp\left(\varepsilon^{-2}\beta(X^{\psi}_{m\Delta}-X^{\psi}_{(m-1)\Delta})-\varepsilon^{-2}\Delta
H^{\varepsilon,\psi}_{m}(\hat{\psi}_{(m-1)\Delta},\beta)\right)\mid
F_{(m-1)\Delta}\right]$ $\displaystyle\geq
E^{\beta}\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})\exp\left(-\varepsilon^{-2}\Delta\beta\left((\chi_{m\Delta}-\chi_{(m-1)\Delta})/\Delta\right)\right.\right.$
$\displaystyle\left.\left.-\frac{\Delta}{\varepsilon^{2}}(H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta)+\nu)-\frac{(\delta^{\prime}_{m}+\delta^{\prime}_{m-1})}{\varepsilon^{2}}\right)|F_{(m-1)\Delta}\right).$
Let us choose
$\beta=\beta(m)=\beta[\psi_{(m-1)\Delta},\dot{\chi}_{(m-1)\Delta+}]\;\;\\{=\mathop{\rm
argmax}\nolimits_{\beta}(\beta\dot{\chi}_{(m-1)\Delta+}-H(\psi_{(m-1)\Delta},\beta))\\}$.
As was explained above, $|\beta(m)|\leq b$, moreover,
$\displaystyle\beta(m)\dot{\chi}_{(m-1)\Delta+}-H(\psi_{(m-1)\Delta},\beta(m))=L(\psi_{(m-1)\Delta},\dot{\chi}_{(m-1)\Delta+}),$
and
$\dot{\chi}_{(m-1)\Delta+}=\nabla_{\beta}H(\psi_{(m-1)\Delta},\beta(m)).$
So (26) implies (with $\beta=\beta(m)$),
$\displaystyle
E\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})|F_{(m-1)\Delta}\right)$
$\displaystyle\geq\exp\left(-\varepsilon^{-2}\Delta(L(\psi_{(m-1)\Delta},\dot{\chi}_{(m-1)\Delta+})+\nu)-\varepsilon^{-2}(\delta^{\prime}_{m}+\delta^{\prime}_{m-1})\right)\times$
$\displaystyle\times\exp\left(-\varepsilon^{-2}\Delta(H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta)-H(\psi_{(m-1)\Delta},\psi_{(m-1)\Delta},\beta))\right)$
$\displaystyle\times
E^{\beta(m)}\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})|F_{(m-1)\Delta}\right)$
$\displaystyle\geq\exp\left(-\varepsilon^{-2}\Delta\left(L(\psi_{(m-1)\Delta},\dot{\chi}_{(m-1)\Delta+})+2\nu\right)-\varepsilon^{-2}(\delta^{\prime}_{m}+\delta^{\prime}_{m-1})\right)\times$
$\displaystyle\times
E^{\beta(m)}\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})|F_{(m-1)\Delta}\right).$
(27)
We have used the uniform continuity of $H(x,\cdot,\beta)$ over $|\beta|\leq b$
and $x~{}\in~{}R^{d}$:
$\displaystyle|H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta)-H(\psi_{(m-1)\Delta},\psi_{(m-1)\Delta},\beta)|$
$\displaystyle\leq m_{H}(|\hat{\psi}_{(m-1)\Delta}-\psi_{(m-1)\Delta}|)\leq
m_{H}(\delta^{\prime}_{m-1})\leq\nu$
(here $m_{H}$ stands for the modulus of continuity of $H$ for $|\beta|\leq b$
with $b$ fixed), as $\delta^{\prime}_{m-1}$ is small enough.
12.
Let us show the bound
$E^{\beta(m)}\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})|F_{(m-1)\Delta}\right)\geq
1-\exp(-C_{m}\Delta\varepsilon^{-2})$ (28)
with some $C_{m}>0$ which may depend on $\nu$, on the set
$\\{|\hat{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|<\delta^{\prime}_{m-1}\\}$,
if $\varepsilon$ is small enough. There exists a finite collection of vectors
$v_{1},\,v_{2},\,\ldots,v_{N}$ such that $\|v_{k}\|=1$ for all $k$, $N=2d$ (any
orthonormal basis would do, complemented by its “symmetric” counterpart,
i.e. with each coordinate vector $v$ we also consider $-v$), and
$\displaystyle
E^{\beta(m)}(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|>\delta^{\prime}_{m})\mid
F_{(m-1)\Delta})$
$\displaystyle\leq\sum^{N}_{k=1}E^{\beta(m)}(I((X^{\psi}_{m\Delta}-X^{\psi}_{(m-1)\Delta}-\chi_{m\Delta}+\chi_{(m-1)\Delta})v_{k}$
$\displaystyle>\kappa(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\mid
F_{(m-1)\Delta}),$
given
$\\{|X^{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|<\delta^{\prime}_{m-1}\\}$,
where $\kappa=(2/N)^{1/2}$ (notice that $\kappa\leq 1$). Given any
$\nu\,^{\prime}_{m-1}>0$ (a new constant which has nothing to do with $\nu$
and will be fixed shortly; we need it, so to speak, only locally, while
establishing the inequality (28)), we estimate, for any $v:=v_{k}$, $0\leq
z\leq 1$,
$\displaystyle
E^{\beta(m)}\left(I((X^{\psi}_{m\Delta}-X^{\psi}_{(m-1)\Delta}-\chi_{m\Delta}+\chi_{(m-1)\Delta})v>\kappa(\delta^{\prime}_{m}-\delta^{\prime}_{m-1}))|F_{(m-1)\Delta}\right)$
$\displaystyle\leq\exp(-(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\Delta
z\kappa\varepsilon^{-2})\exp\left(\Delta\varepsilon^{-2}[-zv\dot{\chi}_{(m-1)\Delta+}\right.$
$\displaystyle\left.+H^{\varepsilon,\psi}(\hat{\psi}_{(m-1)\Delta},\beta(m)+vz)-H^{\varepsilon,\psi}(\hat{\psi}_{(m-1)\Delta},\beta(m))+2\nu\,^{\prime}_{m-1}]\right)$
$\displaystyle\leq\exp(-(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\Delta
z\kappa\varepsilon^{-2})\exp\left(\Delta\varepsilon^{-2}[-zv\dot{\chi}_{(m-1)\Delta+}\right.$
$\displaystyle\left.+H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta(m)+vz)\right.$
$\displaystyle\left.-H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta(m))+2\nu\,^{\prime}_{m-1}]\right),$
(29)
if $\varepsilon$ is small enough. Denote
$\displaystyle h(z):=(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa
z+\dot{\chi}_{(m-1)\Delta+}vz$
$\displaystyle-[H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta(m)+vz)-H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta(m))].$
We have, $h(0)=0$. Moreover,
$\displaystyle
h^{\prime}(0)=(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa+\dot{\chi}_{(m-1)\Delta+}v-\nabla_{\beta}H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta(m))v$
$\displaystyle=(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa+\nabla_{\beta}H(\psi_{(m-1)\Delta},\psi_{(m-1)\Delta},\beta(m))v$
$\displaystyle-\nabla_{\beta}H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta(m))v$
$\displaystyle\geq(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa-m_{\nabla
H}(\delta^{\prime}_{m-1})=:C_{m-1}>0$
(where $m_{\nabla H}$ stands for the modulus of continuity of the function
$\nabla_{\beta}H$ given $|\beta(m)|\leq b+1$ ($b+1$ is needed for the sequel,
although here $b$ would be enough)), because
$\dot{\chi}_{(m-1)\Delta+}=\nabla_{\beta}H(\psi_{(m-1)\Delta},\psi_{(m-1)\Delta},\beta(m))$.
The latter inequality,
$C_{m-1}=(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa-m_{\nabla
H}(\delta^{\prime}_{m-1})>0$, holds true provided $\delta^{\prime}_{m-1}$ is
small enough compared to $(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})$,
e.g.,
$m_{\nabla
H}(\delta^{\prime}_{m-1})\leq\frac{\kappa}{2}(\delta^{\prime}_{m}-\delta^{\prime}_{m-1}),$
or, equivalently,
$m_{\nabla
H}(\delta^{\prime}_{m-1})+\frac{\kappa}{2}\,\delta^{\prime}_{m-1}\leq\frac{\kappa}{2}\,\delta^{\prime}_{m}.$
(30)
Moreover, since $\nabla_{\beta}H$ is bounded and continuous due to Lemma 4,
we have $h^{\prime}(z)\geq C_{m-1}/2$ for small $z$, say, for $0\leq z\leq
z_{m-1}$ (thus, $z_{m-1}$ has been chosen here). Indeed,
$\displaystyle
h^{\prime}(z)=(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa+\dot{\chi}_{(m-1)\Delta+}v$
$\displaystyle-\nabla_{\beta}H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta(m)+vz)v$
$\displaystyle=(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa+\nabla_{\beta}H(\psi_{(m-1)\Delta},\psi_{(m-1)\Delta},\beta(m))v$
$\displaystyle-\nabla_{\beta}H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta(m)+vz)v$
$\displaystyle\geq(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa-m_{\nabla
H}(\delta^{\prime}_{m-1}+z).$
So, $h(z_{m-1})\geq C_{m-1}z_{m-1}/2$, provided $z_{m-1}$ along with
$\delta^{\prime}_{m-1}$ is small compared to
$(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})$, e.g.,
$m_{\nabla
H}(\delta^{\prime}_{m-1}+z_{m-1})\leq(\delta^{\prime}_{m}-\delta^{\prime}_{m-1})\kappa/2,$
(31)
rather than (30). Hence, under the assumption (31), the r.h.s. in (29) with
$z=z_{m-1}$ does not exceed the value
$\exp(\Delta\varepsilon^{-2}(2\nu\,^{\prime}_{m-1}-h(z)))\leq\exp(-C_{m-1}\Delta
z_{m-1}\varepsilon^{-2}/4)$
if we choose
$\nu\,^{\prime}_{m-1}<C_{m-1}z_{m-1}/8.$ (32)
Recall that the constant $\nu\,^{\prime}_{m-1}$ should be fixed at the
beginning of this step of the proof; hence, we can do it now, once we have
chosen $z_{m-1}$, since the latter does not require any knowledge of
$\nu\,^{\prime}_{m-1}$. This gives the bound, given
$\\{|X^{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|<\delta^{\prime}_{m-1}\\}$,
$E^{\beta(m)}\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|\geq\delta^{\prime}_{m})|F_{(m-1)\Delta}\right)\leq\exp(-C_{m-1}\Delta\varepsilon^{-2}/4),$
which is equivalent to (28). In turn, (28) implies the estimate
$\displaystyle
P(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m}|F_{(m-1)\Delta})$
$\displaystyle\geq\exp\left(-\varepsilon^{-2}\Delta(L(\psi_{(m-1)\Delta},\dot{\chi}_{(m-1)\Delta+})+3\nu)-\varepsilon^{-2}(\delta^{\prime}_{m}+\delta^{\prime}_{m-1})\right),$
if $\varepsilon$ is small enough. Indeed, $\nu$ being fixed, one can choose
$\varepsilon$ so that
$1-\exp(-C_{m-1}\Delta\varepsilon^{-2})\geq\exp(-1)\geq\exp(-\nu(\Delta\varepsilon^{-2})).$
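The choice of $\beta(m)$ above is a pointwise Legendre transform. As a hedged illustration, the following Python sketch computes $\mathop{\rm argmax}\nolimits_{\beta}(\beta\dot{\chi}-H(\beta))$ by grid search for a toy one-dimensional quadratic Hamiltonian with the $x$-variables frozen; for this toy case $L(\dot{\chi})=\dot{\chi}^{2}/2$ and $\beta^{*}=\dot{\chi}$, so the identities $\beta^{*}\dot{\chi}-H(\beta^{*})=L(\dot{\chi})$ and $\dot{\chi}=\nabla_{\beta}H(\beta^{*})$ used in the proof can be read off directly.

import numpy as np

def legendre(H, ydot, b=5.0, n=20001):
    # Grid-search sketch of sup_{|beta|<=b} (beta * ydot - H(beta));
    # returns the value L^b(ydot) and the maximiser beta*.
    betas = np.linspace(-b, b, n)
    vals = betas * ydot - H(betas)
    k = int(np.argmax(vals))
    return vals[k], betas[k]

H = lambda beta: 0.5 * beta**2          # toy quadratic Hamiltonian
for ydot in [0.0, 0.5, 1.5]:
    L, beta_star = legendre(H, ydot)
    print(ydot, L, beta_star)           # expect L = ydot^2/2 and beta* = ydot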
13.
By “backward” induction from $k=m$ to $k=1$, choosing at each step
$\delta^{\prime}_{k-1}$ and $z_{k-1}$ small enough compared to
$\delta^{\prime}_{k}-\delta^{\prime}_{k-1}$,
$m_{\nabla
H}(\delta^{\prime}_{k-1}+z_{k-1})+\frac{\kappa}{2}\delta^{\prime}_{k-1}\leq\frac{\kappa}{2}\delta^{\prime}_{k},\;\;\&\;\;\delta^{\prime}_{k-1}\leq\delta^{\prime}_{k}/2,\;\;\&\;\;m_{H}(\delta^{\prime}_{k-1})<\nu$
(33)
(cf. (31)), as well as all auxiliary values $C_{k-1}$, we get, for
$\varepsilon$ small enough, the desired lower bound:
$\displaystyle P(|X^{\psi}_{\Delta m}-\varphi_{\Delta
m}|<\delta^{\prime}_{m},\ldots,|X^{\psi}_{\Delta}-\varphi_{\Delta}|<\delta^{\prime}_{1})$
$\displaystyle\geq\exp\left(-\varepsilon^{-2}\Delta\sum^{m}_{i=1}(L(\psi_{(m-i)\Delta},\dot{\chi}_{(m-i)\Delta+})+3\nu)-2\varepsilon^{-2}\sum_{k=1}^{m}\delta^{\prime}_{k}\right)$
$\displaystyle=\exp\left(-\varepsilon^{-2}(\int_{0}^{T}L(\psi_{s},\dot{\chi}_{s})\,ds+3\nu
T)-4\varepsilon^{-2}\delta^{\prime}_{m}\right)$
$\displaystyle\geq\exp\left(-\varepsilon^{-2}(S_{0T}(\varphi)+\nu(3T+2))\right),\qquad\varepsilon\to
0,$
provided $4\delta^{\prime}_{m}<\nu$, and due to (20). This is equivalent to
(5). This bound is uniform in $x\in E^{d},\,|y|\leq r$, and
$\varphi\in\Phi_{x}(s)$ for any $r,s>0$ (similar to Lemma 7.4.1 of
Freidlin and Wentzell (1984)).
For the reader’s convenience we recall the order of all choices by repeating
the diagram from the third step of the proof. The (arbitrary) constants $\delta>0$ and
$\nu>0$ are fixed; due to Lemma 5, we also have $\Delta(\nu)$ and
$\delta(\nu)$; all other constants are chosen consecutively as follows:
$b\,\&\,\tilde{\delta}^{\prime}\mapsto\delta^{\prime}\mapsto\Delta\,\&\,\delta^{\prime\prime}\mapsto\delta^{\prime}_{m}\mapsto\delta^{\prime}_{m-1}\mapsto
z_{m-1}\mapsto\nu\,^{\prime}_{m-1}\ldots\mapsto\delta^{\prime}_{1}\mapsto
z_{1}\mapsto\nu\,^{\prime}_{1};$
one may say that all these values have been constructed via $\nu$. Some other
constants (namely, $C_{k}$’s) are constructed via $\delta^{\prime}_{k}$’s; the
value $m$ is defined as soon as $\Delta$ is chosen, as $\Delta=T/m$; and
eventually $\varepsilon$ should be small enough to ensure all our asymptotic
inequalities.
14.
The property of the rate function $S$ to be a “good rate function” can be
shown as in Freidlin and Wentzell (1984), using the semi-continuity of the
function$L(x,y)$ w.r.t. $y$ and continuity w.r.t. $x$ variable (see Lemma
7.4.2 from Freidlin and Wentzell).
15.
Second part of the proof: the upper bound. Assume that the assertion is not
true, that is, there exist $s$ and $\nu>0$ with the following properties:
$\forall\bar{\delta}>0,\;\mbox{there
exists}\;\delta_{0}<\bar{\delta},\;\forall\bar{\varepsilon},\;\mbox{there
exists}\;\varepsilon<\bar{\varepsilon}:$
$P(\rho(X,\Phi_{x}(s))>\delta_{0})>\exp(-\varepsilon^{-2}(s-\nu)).$
In other words, for some (hence, actually, for any) $\delta_{0}>0$
arbitrarily close to zero, there exists a sequence $\varepsilon_{n}\to 0$ such
that
$P(\rho(X,\Phi_{x}(s))>\delta_{0})>\exp(-\varepsilon_{n}^{-2}(s-\nu)).$ (34)
We fix any such $\delta_{0}>0$.
16.
Since $f$ is bounded, all possible paths $X^{\psi}$ for any $\psi$ belong to
some compact set $F\subset C[0,T;R^{d}]$. Due to the semi-continuity of the functional
$S^{\psi}(\varphi)$ w.r.t. $\psi$, for any $\nu>0$ there exists a value
$\delta_{\nu}(\varphi)>0$ such that $\rho(\varphi,\psi)<\delta_{\nu}(\varphi)$
and $S(\varphi)>s$ imply $S^{\psi}(\varphi)>s-\nu/2$.
Since $S^{\psi}(\varphi)$ is lower semi-continuous w.r.t. $\varphi\,$, too, it
follows that $\delta_{\nu}(\varphi)$ is also lower semi-continuous w.r.t.
$\varphi$. Hence, it attains its minimum on any compact.
Let $F_{1}$ be the compact set obtained from $F$ by removing the
$\delta_{0}/2$-neighbourhood of the set $\Phi_{x}(s)=\\{\varphi\in
C[0,T;R^{d}]:\,\varphi_{0}=x,\,S(\varphi)\leq s\\}$. Denote
$\bar{\delta}_{\nu}=\inf_{\varphi\in F_{1}}\delta_{\nu}(\varphi)$, and take
any
$\delta^{\prime}\leq\min\left(\bar{\delta}_{\nu}/(4KT+2),\delta_{0}/2\right)$
where $K$ is a Lipschitz constant of $f$.
Choose a $\delta^{\prime}$-net in $F_{1}$ and let
$\varphi^{1},\ldots,\varphi^{N}$ be its elements. None of them belongs to
$\Phi_{x}(s)$; hence $S(\varphi^{i})\geq s^{\prime}>s$.
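A $\delta^{\prime}$-net can always be produced greedily. The sketch below is a stand-in only: a finite random point cloud replaces the compact $F_{1}$, and the discrete sup-norm over grid values replaces the metric $\rho$ of $C[0,T;R^{d}]$.

import numpy as np

def greedy_net(points, delta):
    # Greedy delta-net: scan the points and keep one whenever it is at
    # distance >= delta from all centres kept so far; every point then lies
    # within delta of some centre.
    centres = []
    for p in points:
        if all(np.max(np.abs(p - c)) >= delta for c in centres):
            centres.append(p)
    return centres

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(500, 20))   # 500 "curves" on a 20-point grid
print(len(greedy_net(cloud, delta=0.8)), "net elements")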
17.
Note that
$\\{\rho(X,\Phi_{x}(s))>\delta_{0}\\}\subset\bigcup_{i=1}^{N}\\{\rho(X,\varphi^{i})<\delta^{\prime}\\}.$
Then, by the Dirichlet (pigeonhole) principle, for any $n$ there exists an
index $i$ such that
$P(\rho(X,\varphi^{i})\leq\delta^{\prime})>N^{-1}\exp(-\varepsilon_{n}^{-2}(s-\nu)).$
(35)
(Indeed, otherwise for every $i$,
$P(\rho(X,\varphi^{i})\leq\delta^{\prime})\leq
N^{-1}\exp(-\varepsilon_{n}^{-2}(s-\nu)),$ and
$P(\rho(X,\Phi_{x}(s))>\delta_{0})\leq
P\left(\bigcup_{i=1}^{N}\\{\rho(X,\varphi^{i})<\delta^{\prime}\\}\right)\leq\exp(-\varepsilon_{n}^{-2}(s-\nu)),$
which contradicts (34).)
Since there are only finitely many indices $i=1,\ldots,N$, there exists at
least one $i$ for which (35) holds true along some subsequence
$n^{\prime}\to\infty$ (correspondingly, $\varepsilon_{n^{\prime}}\to 0$);
however, we keep the notation $n$ for simplicity.
We can rewrite (35) as
$P(\rho(X,\varphi^{i})\leq\delta^{\prime})>\exp(-\varepsilon_{n}^{-2}(s-\nu)),$
(36)
since $N$ does not depend on $\varepsilon_{n}$ (strictly speaking, with a new
$\nu>0$; however, it is convenient to keep the same notation). Denote
$\varphi^{i}=:\varphi(\delta^{\prime})$.
18.
Consider a sequence $\delta^{\prime}\to 0$ such that a corresponding function
$\varphi(\delta^{\prime})$ does exist for any $\delta^{\prime}$ from this
sequence. Recall that $\delta_{0}$ is fixed. All these functions satisfy the
inequality
$S(\varphi(\delta^{\prime}))\geq s^{\prime}>s,$
since $\rho(\varphi^{i},\Phi_{x}(s))\geq\delta_{0}/2$, and also
$S(\varphi(\delta^{\prime}))<\infty,$
which implies
$\sup_{t}|\dot{\varphi}_{t}(\delta^{\prime})|\leq C.$
Due to the Arzelà-Ascoli Theorem, it is possible to extract from this set of
functions a subsequence which converges in $C[0,T;R^{d}]$ to some limit,
$\bar{\varphi}$. Since
$\rho(\varphi(\delta^{\prime}),\Phi_{x}(s))\geq\delta_{0}/2$, we have,
$\rho(\bar{\varphi},\Phi_{x}(s))\geq\delta_{0}/2$, hence,
$S(\bar{\varphi})>s,$
and, in particular, the lower bound (5) can be applied. However, due to the
construction, the function $\bar{\varphi}$ satisfies one more lower bound,
$\liminf_{\delta^{\prime}\to 0}\limsup_{\varepsilon\to 0}\varepsilon^{2}\ln
P(\rho(X,\bar{\varphi})<\delta^{\prime})\geq-s+\nu.$ (37)
Indeed, the latter follows from (36) because, e.g.,
$P\left(\rho(X,\bar{\varphi})\leq\delta^{\prime}+\rho(\bar{\varphi},\varphi(\delta^{\prime}))\right)\geq
P(\rho(X,\varphi(\delta^{\prime}))\leq\delta^{\prime})>\exp(-\varepsilon_{n}^{-2}(s-\nu)).$
Due to (37), there exists $\hat{\delta}^{\prime}>0$ such that for smaller
$\delta^{\prime}$’s (a sequence)
$\limsup_{\varepsilon\to 0}\varepsilon^{2}\ln
P(\rho(X,\bar{\varphi})<\delta^{\prime})\geq-s+\nu/2.$
In fact, this implies the same inequality for any $\delta^{\prime}>0$, because
with any $\delta^{\prime}$ for which the inequality holds true, each greater
value would do as well. Therefore, for any $\delta^{\prime}$, there exists
$\varepsilon>0$ (arbitrarily small) such that
$\varepsilon^{2}\ln
P(\rho(X,\bar{\varphi})<\delta^{\prime})\geq-s+\nu/3=-(s-\nu/3).$ (38)
We are going to show that this leads to a contradiction.
19.
Consider the case $S(\bar{\varphi})<\infty$. Recall that $S(\bar{\varphi})>s.$
Denote
$\displaystyle L^{b}(x,y)=\sup_{|\beta|\leq b}(\beta y-H(x,\beta)),$
$\displaystyle\ell^{b}(x,y):=L(x,y)-L^{b}(x,y)$
$\displaystyle\equiv\sup_{\beta}(\beta y-H(x,\beta))-\sup_{|\beta|\leq
b}(\beta y-H(x,\beta)).$
Consider the function $\ell^{b}(\bar{\varphi}_{t},\dot{\bar{\varphi}}_{t})$.
We have,
$0\leq\ell^{b}(\bar{\varphi}_{t},\dot{\bar{\varphi}}_{t})\leq
L(\bar{\varphi}_{t},\dot{\bar{\varphi}}_{t}).$
Moreover,
$\ell^{b}(\bar{\varphi}_{t},\dot{\bar{\varphi}}_{t})\to 0,\quad b\to\infty,$
and the function $\ell^{b}$ is decreasing in $b$. Hence, given $\nu>0$,
one can choose a $b>0$ such that
$\int_{0}^{T}\ell^{b}(\bar{\varphi}_{t},\dot{\bar{\varphi}}_{t})\,dt<\nu/20.$
Notice that we have chosen $b$. Moreover, one can also choose a discretisation
step $\Delta$ (see above, item 9 of the proof) such that
$\int_{0}^{T}\ell^{b}(\bar{\varphi}_{\kappa_{m}(t+a)-a},\dot{\bar{\varphi}}_{\kappa_{m}(t+a)-a})\,dt<\nu/10,$
and, correspondingly,
$\int_{0}^{T}L^{b}(\bar{\varphi}_{\kappa_{m}(t+a)-a},\dot{\bar{\varphi}}_{\kappa_{m}(t+a)-a})\,dt>s-\nu/10;$
(39)
again assume for simplicity of presentation that $a=0$. In addition, we
require $\Delta\leq\Delta(\nu/20)$ (this notation is from Lemma 5). Hence,
we have chosen $\Delta$ and $m=T/\Delta$.
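The monotone convergence $\ell^{b}\downarrow 0$ behind this choice of $b$ is easy to visualise. The sketch below uses the hypothetical Hamiltonian $H(\beta)=\cosh\beta-1$, for which $L(y)=y\,\mathrm{arcsinh}\,y-\sqrt{1+y^{2}}+1$ in closed form, and evaluates $\ell^{b}(y)=L(y)-L^{b}(y)$ for growing $b$.

import numpy as np

def L_trunc(ydot, b, n=40001):
    # Grid sketch of the truncated Legendre transform
    # L^b(y) = sup_{|beta|<=b} (beta*y - H(beta)) for H(beta) = cosh(beta) - 1.
    betas = np.linspace(-b, b, n)
    return np.max(betas * ydot - (np.cosh(betas) - 1.0))

def L_full(ydot):
    # closed form for H = cosh - 1: the maximiser is beta* = arcsinh(y)
    return ydot * np.arcsinh(ydot) - np.sqrt(1.0 + ydot**2) + 1.0

ydot = 3.0
for b in [0.5, 1.0, 2.0, 4.0]:
    print(b, L_full(ydot) - L_trunc(ydot, b))   # ell^b(y), decreasing to 0 in b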
20.
So, with $a=0$, let
$\psi_{t}:=\bar{\varphi}_{\kappa_{m}(t)},\quad\dot{\chi}_{t}:=\dot{\bar{\varphi}}_{\kappa_{m}(t)},\quad\chi_{0}=x.$
We have, with one and the same constant $C=2(KT+1)$ (see (17)) and for any $\delta^{\prime}$,
$P(\rho(X,\bar{\varphi})<\delta^{\prime})\leq
P(\rho(X^{\psi},\chi)<C\delta^{\prime})\leq
P(\rho(X^{\psi,\Delta},\chi^{\Delta})<C\delta^{\prime}).$
Denote $\delta^{\prime\prime}=C\delta^{\prime}$. Let us choose
$\delta^{\prime\prime}\leq\delta(\nu/20)$ (the notation of Lemma 5 is
used), and consider the following inequality, with the sequence
$(\delta^{\prime}_{i},\ 1\leq i\leq m)$,
$\delta^{\prime}_{m}=\delta^{\prime\prime}$, constructed via the value
$\nu/20$ instead of $\nu$ (compare to (33), where we can now drop the
requirement related to $m_{\nabla H}$),
$P(\rho(X,\bar{\varphi})<\delta^{\prime}_{1})\leq
E\prod_{i=1}^{m}1(|X^{\psi}_{i\Delta}-\chi_{i\Delta}|<\delta^{\prime}_{i}).$
In particular, we require
$4\delta^{\prime\prime}=4\delta^{\prime}_{m}\leq\nu/20$, and
$\sum_{i=1}^{m}\delta^{\prime}_{i}\leq 2\delta^{\prime\prime}$. Then, due to
Lemma 5 and using the calculations of Steps 11-12 (the only change is that
now we need an upper bound, so the estimation of the indicator function can
be skipped), we get on the set
$\\{|X^{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|<\delta^{\prime}_{m-1}\\}$ and
for any $|\beta|\leq b$,
$\displaystyle
E\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})|F_{(m-1)\Delta}\right)$
$\displaystyle\leq
E^{\beta}\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})\exp\left(-\varepsilon^{-2}\Delta\beta\left((\chi_{m\Delta}-\chi_{(m-1)\Delta})/\Delta\right)\right.\right.$
$\displaystyle\left.\left.-\varepsilon^{-2}\Delta(H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta)-\nu/20)+\frac{\delta^{\prime}_{m}+\delta^{\prime}_{m-1}}{\varepsilon^{2}}\right)|F_{(m-1)\Delta}\right)$
(40)
(compare to (26)). Estimating here
$I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})\leq 1$ and
dropping the expectation sign, we obtain on the set
$\\{|X^{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|<\delta^{\prime}_{m-1}\\}$, for
any $|\beta|\leq b$,
$\displaystyle
E\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})|F_{(m-1)\Delta}\right)$
$\displaystyle\leq\exp\left(-\varepsilon^{-2}\Delta\beta\left((\chi_{m\Delta}-\chi_{(m-1)\Delta})/\Delta\right)\right.$
$\displaystyle\left.-\varepsilon^{-2}\Delta(H(\psi_{(m-1)\Delta},\hat{\psi}_{(m-1)\Delta},\beta))+\frac{\delta^{\prime}_{m}+\delta^{\prime}_{m-1}}{\varepsilon^{2}}\right),$
$\displaystyle\leq\exp\left(-\varepsilon^{-2}\Delta\beta\left((\chi_{m\Delta}-\chi_{(m-1)\Delta})/\Delta\right)\right.$
$\displaystyle\left.-\varepsilon^{-2}\Delta(H(\psi_{(m-1)\Delta},\psi_{(m-1)\Delta},\beta)-\nu/20)+\frac{\delta^{\prime}_{m}+\delta^{\prime}_{m-1}}{\varepsilon^{2}}\right),$
(41)
once we have chosen $m_{H}(\delta^{\prime\prime})\leq\nu/20$ (recall that
$m_{H}$ is the modulus of continuity of the function $H$ on the set
$|\beta|\leq b$), because of the inequality
$|\hat{\psi}_{(m-1)\Delta}-\psi_{(m-1)\Delta}|\leq\delta^{\prime}_{m-1}\leq\delta^{\prime\prime}$.
Let $\beta$ satisfy the condition
$\displaystyle\beta(\chi_{m\Delta}-\chi_{(m-1)\Delta})/\Delta-H(\psi_{(m-1)\Delta},\psi_{(m-1)\Delta},\beta)$
$\displaystyle=\sup_{|\beta|\leq
b}\left(\beta(\chi_{m\Delta}-\chi_{(m-1)\Delta})/\Delta-H(\psi_{(m-1)\Delta},\psi_{(m-1)\Delta},\beta)\right)$
$\displaystyle=L^{b}(\psi_{(m-1)\Delta},\dot{\chi}_{(m-1)\Delta+}).$
Then, on the set
$\\{|X^{\psi}_{(m-1)\Delta}-\chi_{(m-1)\Delta}|<\delta^{\prime}_{m-1}\\}$,
$\displaystyle
E\left(I(|X^{\psi}_{m\Delta}-\chi_{m\Delta}|<\delta^{\prime}_{m})|F_{(m-1)\Delta}\right)$
$\displaystyle\leq\exp\left(-\varepsilon^{-2}\Delta L^{b}(\psi_{(m-1)\Delta},\dot{\chi}_{(m-1)\Delta+})+\varepsilon^{-2}\Delta\frac{\nu}{20}+\frac{\delta^{\prime}_{m}+\delta^{\prime}_{m-1}}{\varepsilon^{2}}\right).$
(42)
Similarly, using induction and due to (39), we get
$\displaystyle P(\rho(X,\chi)<\delta^{\prime}_{1})$
$\displaystyle\leq\exp\left(-\varepsilon^{-2}\int\limits_{0}^{T}L^{b}(\bar{\varphi}_{\kappa_{m}(t)},\dot{\bar{\varphi}}_{\kappa_{m}(t)})\,dt+\varepsilon^{-2}\nu/20+4\varepsilon^{-2}\delta^{\prime}_{m}\right)$
$\displaystyle\leq\exp\left(-\varepsilon^{-2}(s-\nu/5)\right).$ (43)
This evidently contradicts (38) where $\delta^{\prime}$ may be arbitrarily
small.
21.
Consider the case where $\bar{\varphi}$ is absolutely continuous and
$S(\bar{\varphi})=\infty$. In this case, due to the monotone convergence $L^{b}\to
L$, there exist $b>0$, $m$ and $a\in[0,T]$ such that
$\int_{0}^{T}L^{b}(\bar{\varphi}_{t},\dot{\bar{\varphi}_{t}})\,dt\geq
s-\nu/20,\;\int_{0}^{T}L^{b}(\bar{\varphi}_{\kappa_{m}(t+a)-a},\dot{\bar{\varphi}}_{\kappa_{m}(t+a)-a})\,dt\geq
s-\nu/10.$
The rest is similar to the main case, $S(\bar{\varphi})<\infty$, and leads
again to
$P(\rho(X,\bar{\varphi})<\delta^{\prime}_{1})\leq\exp\left(-\varepsilon^{-2}(s-\nu/5)\right).$
This contradicts (38).
22.
Consider the last possible case: $\bar{\varphi}$ is not absolutely continuous. In
this case, for any constant $c$, in particular, for $c=\|f\|_{C}+1$, there
exist two values $0\leq t_{1}<t_{2}\leq T$, such that
$|\bar{\varphi}_{t_{2}}-\bar{\varphi}_{t_{1}}|>c(t_{2}-t_{1})$; indeed,
otherwise $\bar{\varphi}$ must be Lipschitz with $|\dot{\bar{\varphi}}|\leq
c$. Therefore, for $\delta<(t_{2}-t_{1})/2$, the probability
$P(\rho(X,\bar{\varphi})<\delta)$ necessarily equals zero, because the event
$\\{\rho(X,\bar{\varphi})<\delta\\}$ is empty. This evidently contradicts
(38). In all possible cases we have arrived at a contradiction. Hence, the assumption
is wrong; that is, the upper bound (4) holds true. The Theorem is proved.
APPENDIX
A. Comments on Lemma 6. To explain why Lemma 6 is valid without
additional assumptions, we briefly review its proof and point out where
those assumptions enter.
Let $0=t_{0}<t_{1}<\ldots<t_{m}=T$ be a partition,
$\gamma_{k}(\beta):=\int_{t_{k-1}}^{t_{k}}H(\varphi_{s},\beta)ds$,
$\ell_{k}(\alpha)=\sup_{\beta}(\alpha\beta-\gamma_{k}(\beta))$,
$A_{k}=\\{\alpha:\;\ell_{k}(\alpha)<\infty\\}$, $A_{k}^{\circ}$ its interior
w.r.t. the linear hull $L_{A_{k}}$.
The inequality
$S(\varphi)=\int_{0}^{T}L(\varphi_{t},\dot{\varphi}_{t})dt<\infty$ implies
$\sum\limits_{k=1}^{m}\sup\limits_{\beta}\left((\varphi_{t_{k}}-\varphi_{t_{k-1}})\beta-\gamma_{k}(\beta)\right)=\sum\limits_{k=1}^{m}\ell_{k}(\varphi_{t_{k}}-\varphi_{t_{k-1}})\leq
S(\varphi).$
Under the additional assumption $A^{\circ}_{k}\not=\emptyset$, it is proved in
Freidlin and Wentzell (1984) using the arguments from Rockafellar (1970) that
for any $\nu>0$, there exists a function $\tilde{\varphi}$ such that
$\rho(\varphi,\tilde{\varphi})<\nu$ and there exist $\beta_{k}$ such that
$\ell_{k}(\tilde{\varphi}_{t_{k}}-\tilde{\varphi}_{t_{k-1}})=(\tilde{\varphi}_{t_{k}}-\tilde{\varphi}_{t_{k-1}})\beta_{k}-\gamma_{k}(\beta_{k})$
(44)
and
$\tilde{\varphi}_{t_{k}}-\tilde{\varphi}_{t_{k-1}}=\nabla\gamma_{k}(\beta_{k}).$
(45)
The proof goes through if $A^{\circ}_{k}\not=\emptyset$ for all $k$.
Let us show that the same is true if $A^{\circ}_{k}=\emptyset$ for some $k$’s.
The property $A^{\circ}_{k}=\emptyset$ is equivalent to $\dim L_{A_{k}}=0$. In
this case, $\gamma_{k}(\beta)=c_{k}\beta$ with some $c_{k}\in R^{d}$. Hence,
$\ell_{k}(\alpha_{k})<\infty$ means that $\alpha_{k}=c_{k}$, so that
$\gamma_{k}(\beta)=\alpha_{k}\beta$ and $\ell_{k}(\alpha_{k})=0$, while for
any other $\alpha$, $\ell_{k}(\alpha)=+\infty$. So, we have
$\ell_{k}(\varphi_{t_{k}}-\varphi_{t_{k-1}})=0=(\varphi_{t_{k}}-\varphi_{t_{k-1}})\beta-\gamma_{k}(\beta)$
for any $\beta$. Let $\beta_{k}=0$. Evidently,
$\varphi_{t_{k}}-\varphi_{t_{k-1}}=\nabla\gamma_{k}(\beta_{k}).$
Hence, in the case $A^{\circ}_{k}=\emptyset$, one simply does not change the
curve $\varphi_{s}$ on the interval $(t_{k-1},t_{k})$; that is, (44) and (45)
are valid in this case as well.
The rest of the proof is not changed. For any step function $\zeta$, one
defines a piecewise linear $\chi$ by the formula
$\chi_{0}=\varphi_{0},\quad\dot{\chi}_{s}=\nabla_{\beta}H(\zeta_{s},\beta_{k}),\;t_{k-1}<s<t_{k},\;k=1,2,\ldots,m.$
Then it is shown that $\zeta^{n}\to\varphi$ implies $\chi^{n}\to\varphi$ due
to the property that the convergence of smooth convex functions to the limit
implies the convergence of their gradients. Then there exists a partition for
which this construction gives
$\int_{0}^{T}L(\zeta_{t},\dot{\chi}_{t})\,dt\leq S(\varphi)+\nu.$
So, the lemma holds true without additional assumptions. The assertions about
$\hat{\zeta}$ and $\hat{\beta}_{s}$ can be shown similarly.
B. Comments on the property $A^{\circ}_{k}\not=\emptyset$, and
characterization of the set ${\cal L}^{\circ}[f,x]$. Denote the interior of
$A(x)=\\{\alpha:\,L(x,\alpha)<\infty\\}$ with respect to its linear hull
$L_{A(x)}$ by $A^{\circ}(x)$. Then $A^{\circ}_{k}=\emptyset\Longleftrightarrow
A^{\circ}(\varphi_{t_{k-1}})=\emptyset$. In this section we show the following
equivalence:
$\mbox{card}\\{f\in R^{d}:\;f=f(x,y),\,y\in M\\}=1\Longleftrightarrow\dim
L_{A(x)}=0\Longleftrightarrow A^{\circ}(x)=\emptyset.$
Since $A(x)$ is convex, clearly the first two conditions are equivalent.
If $\\{f(x,\cdot)\\}$ contains only one point then $H(x,\beta)$ is linear
w.r.t. $\beta$; hence, $A(x)$ consists of a unique point and
$A^{\circ}(x)=\emptyset$.
Now, let $\\{f(x,\cdot)\\}$ contain at least two different points, say,
$f(x,y_{1})\not=f(x,y_{2})$. Then there exists $1\leq k\leq d$ such that
$(f(x,y_{1})-f(x,y_{2}))_{k}\not=0$. Denote
$M_{k}=\sup_{y}f^{k}(x,y),\;m_{k}=\inf_{y}f^{k}(x,y)$. Let
$0<\nu<(f(x,y_{1})-f(x,y_{2}))_{k}/2$. Take two points $y^{\prime}$ and
$y^{\prime\prime}$ such that $f^{k}(x,y^{\prime})<m_{k}+\nu/5$ and
$f^{k}(x,y^{\prime\prime})>M_{k}-\nu/5$. There exist two open sets
$B^{\prime}\subset M$ and $B^{\prime\prime}\subset M$ such that $\sup_{y\in
B^{\prime}}f^{k}(x,y)<m_{k}+\nu/4$ and $\inf_{y\in
B^{\prime\prime}}f^{k}(x,y)>M_{k}-\nu/4$.
Since the process $y^{x}_{t}$ is a nondegenerate ergodic diffusion, there
exists $\lambda>0$ such that
$P(y^{x}_{s}\in B^{\prime},\;1\leq s\leq t)\geq\lambda^{t-1},\quad
P(y^{x}_{s}\in B^{\prime\prime},\;1\leq s\leq t)\geq\lambda^{t-1},\quad
t\to\infty.$
Let $\beta=z\beta_{k}$ where $\beta_{k}\in E^{d}$ is the $k$th unit coordinate
vector and $z\in R$. Then for $z>0$ we have,
$\displaystyle z^{-1}t^{-1}\log
E\exp(z\beta_{k}\int_{0}^{t}f(x,y_{s}^{x})\,ds)$ $\displaystyle\geq
z^{-1}t^{-1}\log E\exp(z\beta_{k}\int_{0}^{t}f(x,y_{s}^{x})\,ds)I(y^{x}_{s}\in
B^{\prime\prime},\;1\leq s\leq t)$ $\displaystyle\geq
z^{-1}t^{-1}\log\\{\exp(z(M_{k}-\nu/2)t)\lambda^{t-1}\\}$
$\displaystyle=M_{k}-\nu/2+\frac{t-1}{t}z^{-1}\log\lambda\geq M_{k}-\nu,$
if $z$ is large enough. In other words, for large positive $z$ one has
$H(x,z\beta_{k})\geq z(M_{k}-2\nu)$. Similarly, for large negative $z$
$\displaystyle|z|^{-1}t^{-1}\log
E\exp(z\beta_{k}\int_{0}^{t}f(x,y_{s}^{x})\,ds)$
$\displaystyle\geq|z|^{-1}t^{-1}\log
E\exp(z\beta_{k}\int_{0}^{t}f(x,y_{s}^{x})\,ds)I(y^{x}_{s}\in
B^{\prime},\;1\leq s\leq t)$
$\displaystyle\geq|z|^{-1}t^{-1}\log\\{\exp(z(m_{k}+\nu/4)t)\lambda^{t-1}\\}$
$\displaystyle=-(m_{k}+\nu/4)+\frac{t-1}{t}|z|^{-1}\log\lambda\geq-(m_{k}+\nu/2),$
if $|z|$ is large enough. In other words, for negative $z$ with large absolute
values one has $H(x,z\beta_{k})\geq z(m_{k}+\nu)$. Therefore,
$\\{\alpha:\;\alpha=\beta_{k}\theta,\,m_{k}+\nu<\theta<M_{k}-\nu\\}\,\subset\,A(x)$.
On the other hand, it is obvious that if $\alpha=\beta_{k}\theta$, $\theta\in
R^{1}$, with $\theta>M_{k}$ or $\theta<m_{k}$, then $L(x,\alpha)=\infty$,
because $m_{k}z\leq H(x,\beta_{k}z)\leq M_{k}z$, and, hence (say, if
$\theta>M_{k}$), for $z>>1$,
$\beta_{k}\theta\beta_{k}z-H(x,\beta_{k}z)\geq(\theta-M_{k})z\to+\infty,\quad
z\to+\infty.$
A similar calculation and similar inequalities are valid for any unit vector $\beta_{0}$.
This shows, in particular, that $\dim L_{A(x)}=\dim L_{f}(x)$ and, moreover,
that $L_{A(x)}=L_{f}(x)$. Since $A(x)$ is convex, it also shows that the
interior $A^{\circ}(x)$ w.r.t. $L_{A(x)}$ is not empty, except only in the case
$\dim(L_{A(x)})=0$. Hence, the third condition is equivalent to the second one
and the first.
So, the condition $A^{\circ}_{k}\not=\emptyset$ is always satisfied if the set
$\\{f(x,\cdot)\\}$ for any $x$ consists of more than one point. In fact, if
$card\\{f(x,\cdot)\\}=1$ for any $\,x$ then $f$ does not depend on $y$. In
this case, one has nothing to average.
Notice that our considerations above provide the following description of the
set ${\cal L}^{\circ}[f,x]$:
$\displaystyle{\cal L}^{\circ}[f,x]=\\{\alpha\in
R^{d}:\,m_{\beta}(x)<\langle\alpha,\beta\rangle<M_{\beta}(x),\;\forall|\beta|=1,\;\mbox{with}\;m_{\beta}(x)<M_{\beta}(x),$
$\displaystyle\mbox{and}\;\langle\alpha,\beta\rangle=M_{\beta}(x),\;\forall|\beta|=1,\;\mbox{with}\;m_{\beta}(x)=M_{\beta}(x)\\},$
where
$m_{\beta}(x):=\inf\limits_{y}\langle\frac{\beta}{|\beta|},f(x,y)\rangle,\,M_{\beta}(x):=\sup\limits_{y}\langle\frac{\beta}{|\beta|},f(x,y)\rangle$.
Moreover, it can be shown similarly that for any $x,\tilde{x}$ (although we do
not need it here),
${\cal L}^{\circ}[f,x,\tilde{x}]={\cal L}^{\circ}[f,x].$
C. About $\hat{\alpha}_{s}\in{\cal L}^{\circ}[f,\varphi_{s}]$. Let
$x=\varphi_{s}$, $\hat{\alpha}=\hat{\alpha}[x,\dot{\chi}]$ as described in the
proof of Theorem 1. If we show that for any direction $v$ (a unit vector)
satisfying the property $m_{v}<M_{v}$, the strict double inequality holds true
$m_{v}<\partial H(x,zv)/\partial z|_{z=0}<M_{v},$
$z\in R^{1}$, then it would follow that $\hat{\alpha}_{s}\in{\cal
L}^{\circ}[f,\varphi_{s}]$. Let $\nu>0$ and again two open sets $B^{\prime}$
and $B^{\prime\prime}$ be chosen such that $\sup_{y\in
B^{\prime}}vf(x,y)<m_{v}+\nu/2$, and $\inf_{y\in
B^{\prime\prime}}vf(x,y)>M_{v}-\nu/2$. Let $\mu_{inv}(B^{\prime\prime})$ be
the invariant measure of the set $B^{\prime\prime}$ for the process
$y^{x}_{t}$. We can choose $\nu$ and, correspondingly, $B^{\prime\prime}$ so that
$\mu_{inv}(B^{\prime\prime})<1$. Then, due to large deviation asymptotics for
the process $y^{x}_{t}$, for any $\mu_{inv}(B^{\prime\prime})<\zeta<1$ there
exists $\lambda>0$ such that
$P\left(t^{-1}\int_{0}^{t}1(y^{x}_{s}\in
B^{\prime\prime})\,ds\geq\zeta\right)\leq\exp(-\lambda t),\quad t\geq
t_{\zeta}.$
Denote $A_{\zeta}=\left\\{t^{-1}\int_{0}^{t}1(y^{x}_{s}\in
B^{\prime\prime})\,ds<\zeta\right\\}$,
$A^{c}_{\zeta}=\left\\{t^{-1}\int_{0}^{t}1(y^{x}_{s}\in
B^{\prime\prime})\,ds\geq\zeta\right\\}$, then for $z>0$,
$\displaystyle E\exp(zv\int_{0}^{t}f(x,y^{x}_{s})\,ds)$ $\displaystyle\leq
E\exp(z\int_{0}^{t}\left(M_{v}1(y^{x}_{s}\in
B^{\prime\prime})+(M_{v}-\nu)1(y^{x}_{s}\not\in
B^{\prime\prime})\right)\,ds)\,1(A^{c}_{\zeta})$
$\displaystyle+E\exp(z\int_{0}^{t}\left(M_{v}1(y^{x}_{s}\in
B^{\prime\prime})+(M_{v}-\nu)1(y^{x}_{s}\not\in
B^{\prime\prime})\right)\,ds)\,1(A_{\zeta})$ $\displaystyle\leq
E\exp(ztM_{v})\,1(A^{c}_{\zeta})+E\exp(ztM_{v}\zeta+zt(1-\zeta)(M_{v}-\nu))\,1(A_{\zeta})$
$\displaystyle\leq\exp(ztM_{v}-\lambda
t)+\exp(ztM_{v}\zeta+zt(1-\zeta)(M_{v}-\nu)),$
hence,
$\limsup_{z\to 0}\,\limsup_{t\to\infty}\,(tz)^{-1}\log E\exp(zv\int_{0}^{t}f(x,y^{x}_{s})\,ds)<M_{v}.$
Similarly, using $B^{\prime}$ one can get
$\liminf_{z\to 0}\,\liminf_{t\to\infty}\,(tz)^{-1}\log E\exp(zv\int_{0}^{t}f(x,y^{x}_{s})\,ds)>m_{v}.$
Thus,
$m_{v}<\partial H(x,zv)/\partial z|_{z=0}<M_{v}.$
Therefore, $\hat{\alpha}\in{\cal L}^{\circ}[f,x]$.
REFERENCES
Freidlin, M.I. (1976) Fluctuations in dynamical systems with averaging. Dok.
Acad. Nauk SSSR (Soviet Doklady Acad. Sci.) 226 273-276 (in Russian).
Freidlin, M.I. (1978) Averaging principle and large deviations. Uspekhi Matem.
Nauk. 33 107-160 (in Russian).
Freidlin, M.I. and Wentzell, A.D. (1984) Random perturbations of dynamical
systems. N.Y., Springer.
Gulinsky, O.V. and Veretennikov, A.Yu. (1993) Large Deviations for
Discrete–Time Processes with Averaging. VSP, Utrecht.
Ikeda, N. and Watanabe, S. (1989) Stochastic differential equations and
diffusion processes. 2nd ed. North-Holland, Amsterdam.
Kato, T. (1976) Perturbation Theory for Linear Operators. 2nd ed. Springer,
New York.
Krasnosel’skii, M.A., Lifshitz, E.A. and Sobolev, A.V. (1989) Positive linear
systems. Helderman, Berlin.
Rockafellar, R.T. (1970) Convex analysis. Princeton Univ. Press.
Veretennikov, A.Yu. (1992) On large deviations in the averaging principle for
stochastic differential equations with periodic coefficients 2. Math. USSR
Izvestiya, 39 677-701.
Veretennikov, A.Yu. (1994) Large deviations in averaging principle for
stochastic differential equation systems (noncompact case). Stochastics
Stochastics Rep. 48 83-96.
Veretennikov, A.Yu. (1999) On large deviations in the averaging principle for
SDE’s with a “full dependence”, Ann. Probab. 27 no. 1, 284–296.
|
We study the problem of designing minimax procedures in linear regression under the quantile risk. We start by considering the realizable setting with independent Gaussian noise, where for any given noise level and distribution of inputs, we obtain the exact minimax quantile risk for a rich family of error functions and establish the minimaxity of OLS. This improves on the lower bounds obtained by Lecué and Mendelson, 2016 and Mendelson, 2017 for the special case of square error, and provides us with a lower bound on the minimax quantile risk over larger sets of distributions.
Under the square error and a fourth moment assumption on the distribution of inputs, we show that this lower bound is tight over a larger class of problems. Specifically, we prove a matching upper bound on the worst-case quantile risk of a variant of the procedure proposed by Lecué and Lerasle, 2020, thereby establishing its minimaxity, up to absolute constants. We illustrate the usefulness of our approach by extending this result to all $p$-th power error functions for $p \in (2, \infty)$.
Along the way, we develop a generic analogue to the classical Bayesian method for lower bounding the minimax risk when working with the quantile risk, as well as a tight characterization of the quantiles of the smallest eigenvalue of the sample covariance matrix.
minimax procedures, linear regression, sample covariance matrix, quantile risk.
§ INTRODUCTION
We study the problem of designing minimax procedures in linear regression under the quantile risk over large classes of distributions. Specifically, for some $d \in \N$, there is an input random vector $X \in \R^{d}$ and an output random variable $Y \in \R$, and we are provided with $n \in \N$ samples $(X_i, Y_i)_{i=1}^{n}$ from their joint distribution $P$, with the goal of constructing a predictor of $Y$ given $X$. We consider the set of linear predictors $\brace*{x \mapsto \inp{w}{x} \mid w \in \R^{d}}$, and measure the error of a predictor $w \in \R^{d}$ on an input/output pair $(X, Y)$ through $e(\inp{w}{X} - Y)$ for an error function of our choice $e: \R \to \R$. We evaluate the overall error of a predictor $w \in \R^{d}$ through the expected error $E(w) \defeq \Exp\brack*{e(\inp{w}{X} - Y)}$, and define $\mathcal{E}(w) \defeq E(w) - \inf_{v \in \R^{d}} E(v)$.
For a user-chosen failure probability $\delta \in (0, 1)$, we evaluate the performance of a procedure $\hat{w}_{n, \delta}: (\R^{d} \times \R)^{n} \to \R^{d}$ on a particular distribution $P$ through its quantile risk
\begin{equation}
\label{eq:quantile_risk}
R_{n, \delta}(P, \hat{w}_{n, \delta}) \defeq Q_{\mathcal{E}(\hat{w}_{n,\delta})}(1 - \delta) = \inf\brace*{t \geq 0 \st \Prob\paren*{\mathcal{E}(\hat{w}_{n, \delta}) \leq t} \geq 1-\delta},
\end{equation}
where we shortened $\hat{w}_{n, \delta}((X_i, Y_i)_{i=1}^{n})$ to $\hat{w}_{n, \delta}$.
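For concreteness, the quantile risk of a fixed procedure can be estimated by Monte Carlo. In the Python sketch below, sample_P and the excess-error oracle are hypothetical helpers for a toy realizable Gaussian model with square error $e(t)=t^2/2$, for which $\mathcal{E}(\hat{w})=\frac{1}{2}(\hat{w}-w^{*})^{T}\Sigma(\hat{w}-w^{*})$ with $\Sigma=I_{d}$ here; the sketch computes the empirical $(1-\delta)$-quantile of the excess error of OLS over independent training samples.

import numpy as np

def quantile_risk(procedure, sample_P, n, delta, n_trials=2000, seed=0):
    # Empirical (1-delta)-quantile of the excess error over independent draws
    # of the training sample: a Monte-Carlo proxy for R_{n,delta}(P, w_hat).
    rng = np.random.default_rng(seed)
    excesses = []
    for _ in range(n_trials):
        X, Y, excess = sample_P(rng, n)      # data plus an excess-error oracle
        excesses.append(excess(procedure(X, Y)))
    return np.quantile(excesses, 1.0 - delta)

d, sigma, w_star = 3, 1.0, np.ones(3)
def sample_P(rng, n):
    # toy realizable model: Y = <w*, X> + eta, X ~ N(0, I_d), eta ~ N(0, sigma^2)
    X = rng.standard_normal((n, d))
    Y = X @ w_star + sigma * rng.standard_normal(n)
    excess = lambda w: 0.5 * np.sum((w - w_star) ** 2)   # Sigma = I_d
    return X, Y, excess

ols = lambda X, Y: np.linalg.lstsq(X, Y, rcond=None)[0]
print(quantile_risk(ols, sample_P, n=100, delta=0.05))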
We consider the scenario where all that is known about $P$ is that it belongs to a class of distributions $\mathcal{P}$ on $\R^{d} \times \R$. This justifies evaluating the overall performance of a procedure through its worst-case risk
\begin{equation*}
R_{n,\delta}(\mathcal{P}, \hat{w}_{n, \delta}) \defeq \sup_{P \in \mathcal{P}} R_{n, \delta}(P, \hat{w}_{n, \delta}).
\end{equation*}
Our goal is to characterize the minimax risk $R^{*}_{n, \delta}(\mathcal{P}) \defeq \inf_{\hat{w}_{n, \delta}} R_{n,\delta}(\mathcal{P}, \hat{w}_{n, \delta})$ and design minimax procedures for rich classes of distributions and error functions.
Note on terminology. In this paper, we reserve the terms `risk' and `loss' to refer to the corresponding decision-theoretic concepts, see e.g. Lehmann and Casella, 2006 for background on these notions. To avoid any confusion, we have used the terms `error' and `expected error' to refer to what is commonly called `prediction loss' and `prediction risk' in statistical learning theory.
Motivation. Our motivation for studying this problem has its roots in the work of Catoni, 2012, who showed that the empirical mean is no longer minimax over the set of all distributions with finite variance under the square loss if one replaces the classical notion of risk, given by the expected loss, with the quantile risk, given by the $1-\delta$ quantile of the loss, for any user-chosen failure probability $\delta \in (0, 1)$. Since then, minimax procedures were discovered for this problem [Devroye et al., 2016, Lee and Valiant, 2022], and there has been a lot of effort to construct minimax procedures under this new notion of risk for a variety of statistical problems [Lugosi and Mendelson, 2019, Lugosi and Mendelson, 2019, Mendelson and Zhivotovskiy, 2020]. We view our work as part of this larger effort.
Known results. To understand why previous results are insufficient to accomplish our stated goal, let us briefly review the most relevant ones. Most previous work has focused on the case of square error $e(t) = t^{2}/2$ [Audibert and Catoni, 2011, Hsu and Sabato, 2016, Lugosi and Mendelson, 2019, Lecué and Lerasle, 2020]. In this case, a natural class of distributions to consider is
\begin{equation}
\label{eq:class_2}
\mathcal{P}_{2}(P_{X}, \sigma^2) \defeq \brace*{P \st (X, Y) \sim P : X \sim P_{X} \text{ and } \esssup(\Exp\brack{\xi^{2} \mid X}) \leq \sigma^2},
\end{equation}
for a given distribution $P_{X}$ of inputs, noise level $\sigma^2 \in (0, \infty)$, and where $\xi \defeq Y - \inp{w^{*}}{X}$ is the noise and $w^{*}$ is the unique minimizer of the expected error $E(w)$ under square error. The best lower bound on the minimax risk over this class has been obtained by considering the subclass
\begin{equation*}
\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2) \defeq \brace{P \mid (X, Y) \sim P : (X, \eta) \sim P_{X} \times \mathcal{N}(0, \sigma^2), Y = \inp{w^{*}}{X} + \eta \text{ for } w^{*} \in \R^{d}}.
\end{equation*}
The following results yield the best upper and lower bounds on the minimax risk over $\mathcal{P}_{2}(P_{X}, \sigma^2)$.
Suppose that $e(t) = t^{2}/2$. There exist absolute constants $C, c > 0$ such that for all $\delta \in (0, 1/8)$, it holds that
\begin{equation*}
R^{*}_{n,\delta}(\mathcal{P}_{\normalfont\textrm{Gauss}}(P_{X}, \sigma^2)) \geq \begin{dcases*}
\infty & if $n \leq d/C$, \\
c \cdot \frac{\sigma^{2}(d + \log(1/\delta))}{n} & otherwise.
\end{dcases*}
\end{equation*}
Suppose that $e(t) = t^{2}/2$. There exists a procedure $\hat{w}_{n, \delta}$ and absolute constants $C, c > 0$ such that the following holds. If
\begin{equation*}
n \geq c \cdot \theta^2(P_{X}) (d + \log(1/\delta)), \quad \text{ where } \quad \theta(P_{X}) \defeq \sup_{w \in \R^{d}\setminus\brace*{0}}\frac{\Exp\brack*{\inp{w}{X}^{2}}^{1/2}}{\Exp\brack*{\abs{\inp{w}{X}}}},
\end{equation*}
then
\begin{equation*}
R_{n,\delta}(\mathcal{P}_2(P_{X}, \sigma^2), \hat{w}_{n, \delta}) \leq
C \cdot \theta^{2}(P_{X}) \cdot \frac{\sigma^{2} \cdot (d + \log(1/\delta))}{n}.
\end{equation*}
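The constant $\theta(P_{X})$ can be probed numerically. The following crude Monte-Carlo sketch maximises the defining ratio over random directions using empirical moments; it is a heuristic lower estimate, not a certified bound. For Gaussian inputs every direction gives the ratio $\sqrt{\pi/2}\approx 1.2533$.

import numpy as np

def theta_hat(X, n_dirs=400, seed=0):
    # Crude Monte-Carlo lower estimate of
    #   theta(P_X) = sup_w E[<w,X>^2]^{1/2} / E[|<w,X>|],
    # maximising over random directions w with empirical moments from X.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_dirs, X.shape[1]))
    proj = X @ W.T                                   # (n_samples, n_dirs)
    ratio = np.sqrt((proj**2).mean(axis=0)) / np.abs(proj).mean(axis=0)
    return ratio.max()

rng = np.random.default_rng(1)
X = rng.standard_normal((10000, 5))                  # Gaussian inputs
print(theta_hat(X))                                  # ~ sqrt(pi/2) ~ 1.2533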
In the prescribed regime $(n, \delta)$ stated in Proposition <ref>, and on the set of distributions for which $\theta(P_{X})$ is upper bounded by an absolute constant, the combination of Propositions <ref> and <ref> proves the minimaxity, up to an absolute constant, of the procedure in Proposition <ref> over $\mathcal{P}_2(P_{X}, \sigma^2)$.
Unfortunately, this minimaxity result is unsatisfactory for two important reasons. First, the set of distributions for which $\theta(P_{X})$ is bounded by an absolute constant is both difficult to characterize and too small to cover classes of problems of interest. Indeed, by using the relationship between $\theta(P_{X})$ and the small-ball constants [Lecué and Lerasle, 2019], and using the lower bounds derived on the latter by [Saumard, 2018], it is possible to derive dimension-dependent lower bounds on $\theta(P_{X})$ for standard linear regression problems with bounded inputs. Second, this minimaxity result is specific to the square error function. While procedures with guarantees have been studied for other error functions [Chinot et al., 2020], no lower bounds are known outside of Proposition <ref>.
Main challenges and our approach
The first challenge we are faced with is to derive lower bounds on the minimax quantile risk for a richer class of error functions beyond the square error. Unfortunately, the argument leading to Proposition <ref> is quite specific to the square error. In related work, still for the square error, and under the classical notion of risk given by the expected excess error $\Exp\brack*{\mathcal{E}(\hat{w}(D_{n}))}$ (cf. (<ref>)), Mourtada, 2022 recently computed the exact minimax risk over $\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^{2})$. His proof strategy relies on a classical Bayesian argument: lower bounding the minimax risk by a sequence of increasing Bayes risks, and showing that ERM achieves the limit of the Bayes risks, see e.g. <cit.> for more details. However, an inspection of Mourtada's argument shows that it can be generalized to a rich class of error functions by an application of Anderson's Lemma, see e.g. <cit.>.
Our strategy is to try to adapt Mourtada's argument to the case of the quantile risk. Unfortunately, it is not obvious how to translate his argument to our setting. Indeed, it is not even clear what the notions of average risk and Bayes risk should be in this case, and whether the technical tools used in his proof carry over when working with quantiles instead of expectations. This turns out to be the main obstacle in obtaining the exact minimax quantile risk over the class $\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)$ for a rich class of error functions, and we overcome this by developing a generic analogue to the above-described classical Bayesian method when working with the quantile risk.
On the upper bound side, one way to settle the case of square error is to derive a better upper bound on the procedure of Oliveira and Resende, 2023. We will argue that it is more appropriate to study the performance of this procedure under a fourth moment assumption on $P_{X}$. Even with this additional assumption, we are faced with two additional challenges. Firstly, the proof of Proposition <ref> is a refinement of arguments developed by Lugosi and Mendelson, 2019, Lecué and Lerasle, 2020 which are in some places tailored for the use of the small-ball method. We overcome this by carefully studying the truncation function used in the design of some of these estimators [Lugosi and Mendelson, 2021, Oliveira and Resende, 2023]. Secondly, once this is overcome, we are faced with the problem of lower bounding with high probability the smallest eigenvalue of the sample covariance matrix, subject to a direction-dependent adversarial truncation. We achieve this by obtaining a generic upper bound on the suprema of truncated empirical processes, and combining it with the use of matrix Khintchine inequalities <cit.>.
Below we summarize our main results related to linear regression.
* We compute the exact minimax quantile risk over the class $\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)$ for a rich set of error functions, and show that OLS is minimax in this setting (Theorem <ref>). We deduce from this result the asymptotic minimax quantile risk over this class (Proposition <ref>).
* Focusing on the non-asymptotic setting with $e(t)=t^2/2$, we complement our exact computation with tight upper and lower bounds (Proposition <ref>). We then recover the lower bound of Proposition <ref> and identify a setting in which it is tight (Corollary <ref>). We give an analogous result under the error function $e(t)=\abs{t}^{p}/[p(p-1)]$ for $p \in (2,\infty)$ (Proposition <ref>).
* We then turn to finding minimax procedures on larger classes of distributions. For the square error, we establish the minimaxity, up to an absolute constant, of a variant of the min-max regression procedure [Audibert and Catoni, 2011, Lecué and Lerasle, 2020] over the class $\mathcal{P}_{2}(P_{X}, \sigma^2)$, and under a fourth moment assumption on $P_{X}$ (Theorem <ref>).
* Finally, we study minimax linear regression under the error function $e(t)=\abs{t}^{p}/[p(p-1)]$ for $p \in (2, \infty)$. Guided by our results, we identify a rich class of distributions analogous to $\mathcal{P}_{2}(P_{X}, \sigma^2)$, and show that the min-max regression procedure is minimax, up to a constant that depends only on $p$, and under a fourth moment assumption on $P_{X}$ (Theorem <ref>).
Our contributions on linear regression are supported by the following more general results.
* We consider the quantile risk in full generality. We develop an analogue to the Bayesian method for lower bounding the minimax quantile risk (Theorem <ref>). We then prove that the minimaxity of procedures under the quantile risk is invariant to strictly increasing left-continuous transformations of the loss (Proposition <ref>).
* We illustrate the generality of our methods by applying them to two unrelated problems: multivariate mean estimation with Gaussian data, in which we recover a strengthening of the recent result of [Depersin and Lecué, 2022] (Proposition <ref>), and variance estimation with Gaussian data and known mean, where we show that, surprisingly, the sample variance is suboptimal, and design a new minimax estimator (Proposition <ref>).
* We conclude by studying the smallest eigenvalue of the sample covariance matrix. We prove a new tight asymptotic lower bound on its quantiles, and a nearly matching fully non-asymptotic upper bound (Proposition <ref>), both under a fourth moment assumption on $P_{X}$.
Organization. The rest of the paper is organized as follows. In Section <ref>, we present our results on the minimax quantile risk over the class $\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)$. In Section <ref>, we present new upper bounds on the worst-case quantile risk of the min-max regression procedure for the error functions $e(t) = \abs{t}^{p}/[p(p-1)]$ for $p \in [2,\infty)$, showing its minimaxity over suitable classes of distributions up to constants. In Section <ref> we study the quantile risk in full generality. Finally, in Section <ref>, we present our results on the smallest eigenvalue of the sample covariance matrix.
Notation. We call a function $f: \R \to \R$ increasing if $x \leq x'$ implies $f(x) \leq f(x')$. If $f: \R \to \R$ is an increasing function, we define its pseudo-inverse $f^{-}: [-\infty, \infty] \to [-\infty, \infty]$ by $f^{-}(y) \defeq \inf\brace*{x \in \R \st f(x) \geq y}$. For a random variable $X: \Omega \to \R$, we denote its CDF by $F_{X}$ and its quantile function by $Q_{X} \defeq F^{-}_{X}$. We allow random variables of the form $X: \Omega \to [0, \infty]$, but we restrict the definition of their CDFs to $[0, \infty)$. Without loss of generality, we assume throughout that the support of the distribution of inputs $P_{X}$ is not contained in any hyperplane. We write $\Sigma = \Exp\brack*{XX^{T}}$ for the covariance matrix of the random vector $X$. We write $a \asymp b$ to mean that there exist absolute constants $C, c > 0$ such that $c \cdot b \leq a \leq C \cdot b$.
§ MINIMAX QUANTILE RISK OVER $\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)$
The following is the main result of this section.
Let $P_{X}$ be a distribution on $\R^{d}$ and $\sigma^{2} \in (0, \infty)$. Assume that $e$ is strictly convex, differentiable, and symmetric i.e. $e(t) = e(-t)$ for all $t \in \R$. Define, for $(X, \eta) \sim P_{X} \times \mathcal{N}(0, \sigma^2)$,
\begin{equation*}
\widetilde{E}(\Delta) \defeq \Exp\brack*{e(\inp{\Delta}{X} + \eta)}, \quad\quad \widetilde{\mathcal{E}}(\Delta) \defeq \widetilde{E}(\Delta) - \widetilde{E}(0).
\end{equation*}
If $P_{X}$ is such that $\widetilde{E}$ is finite everywhere and differentiable at $0$ with $\nabla \widetilde{E}(0) = \Exp\brack*{\nabla e(\eta)}$, then
\begin{equation*}
R_{n, \delta}^{*}(\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)) = Q_{\widetilde{\mathcal{E}}(Z)}(1 - \delta),
\end{equation*}
where the random variable $Z$ is jointly distributed with $(X_i)_{i=1}^{n} \sim P_{X}^{n}$ such that $Z \mid (X_i)_{i=1}^{n} \sim \mathcal{N}(0, \frac{\sigma^{2}}{n} \widehat{\Sigma}_{n}^{-1})$ on the event that the sample covariance matrix $\widehat{\Sigma}_{n} \defeq n^{-1} \sum_{i=1}^{n} X_{i}X_{i}^{T}$ is invertible, otherwise $\widetilde{\mathcal{E}}(Z) \defeq \infty$. Moreover, all procedures satisfying the following condition are minimax
\begin{equation*}
\hat{w}_{n, \delta}((X_i, Y_i)_{i=1}^{n}) \in \argmin_{w \in \R^{d}} \frac{1}{n}\sum_{i=1}^{n} (\inp{w}{X_i} - Y_i)^{2}.
\end{equation*}
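As a numerical illustration (not part of the statement), the quantile $Q_{\widetilde{\mathcal{E}}(Z)}(1-\delta)$ in the theorem can be simulated. The sketch below specialises to the square error $e(t)=t^{2}/2$, for which $\widetilde{\mathcal{E}}(\Delta)=\frac{1}{2}\Delta^{T}\Sigma\Delta$, and to the illustrative choice $P_{X}=\mathcal{N}(0,I_{d})$, so that $\Sigma=I_{d}$ and $\widehat{\Sigma}_{n}$ is almost surely invertible for $n\geq d$; for $n$ much larger than $d$ the two printed values nearly coincide.

import numpy as np

def minimax_risk_mc(n, d, sigma, delta, n_trials=4000, seed=0):
    # Monte-Carlo evaluation of Q_{Etilde(Z)}(1 - delta) for e(t) = t^2/2,
    # with P_X = N(0, I_d): sample X_1..X_n, form Sigma_hat_n, draw
    # Z | X ~ N(0, (sigma^2/n) Sigma_hat_n^{-1}), record Etilde(Z) = |Z|^2/2.
    rng = np.random.default_rng(seed)
    vals = np.empty(n_trials)
    for t in range(n_trials):
        X = rng.standard_normal((n, d))
        S_hat = X.T @ X / n
        cov = (sigma**2 / n) * np.linalg.inv(S_hat)
        Z = np.linalg.cholesky(cov) @ rng.standard_normal(d)
        vals[t] = 0.5 * Z @ Z
    return np.quantile(vals, 1.0 - delta)

print(minimax_risk_mc(n=200, d=5, sigma=1.0, delta=0.05))
print(0.5 * (1.0 / 200) * 11.07)   # sigma^2/(2n) times the chi^2_5 0.95-quantile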
We make a few remarks about this result before interpreting its content. First, Theorem <ref> improves on the best known comparable result, Proposition <ref>, in two distinct ways: it provides the exact minimax risk over the class $\mathcal{P}_{\normalfont \text{Gauss}}(P_{X}, \sigma^{2})$ for the error function $e(t) = t^2/2$, and it generalizes this result to a rich collection of error functions. Second, and as can readily be seen from the proof, the strict convexity hypothesis on $e(t)$ in Theorem <ref> can be weakened to the strict quasiconvexity of $E(w)$, and the strictness can be replaced by the existence of a unique minimizer of $E(w)$. Finally, the proof of Theorem <ref> is based on the Bayesian method we develop in Theorem <ref>, an adaptation of an argument of Mourtada, 2022, and Anderson's Lemma <cit.>.
While exact, the result in Theorem <ref> is both difficult to interpret and hard to manipulate. In particular, the dependence of the minimax risk on the problem parameters $(n, \delta, P_{X}, \sigma^2)$ as well as the error function $e$ remains implicit in Theorem <ref>. This is not too surprising as the error function can interact with the parameters of the problems in quite complicated ways.
In the rest of this section, we develop tools to make these dependencies explicit. Specifically, in Section <ref>, we compute the asymptotic minimax risk for generic error functions as $n \to \infty$ and show that it takes on a simple form. In Section <ref>, we focus on the case of square error function, and identify a setting where the lower bound of Proposition <ref> is tight. In Section <ref>, we extend this result to the case of the $p$-th power error function for $p \in (2,\infty)$.
§.§ General error functions
The following result shows that under a mild assumption on the error function, the asymptotic minimax risk is a pleasingly simple function of the parameters of the problem. In particular, this result shows that the lower bound of Proposition <ref> is asymptotically tight.
Under the setup of Theorem <ref>, further assume that $e$ is twice differentiable and that $\widetilde{\mathcal{E}}$ is twice differentiable at $0$ with $\nabla^2 \widetilde{\mathcal{E}}(0) = \Exp\brack*{e''(\eta) \, XX^{T}}$, and let $\alpha \defeq \Exp\brack*{e''(\eta)}/2$. Then
\begin{equation*}
\lim_{n \to \infty} n \cdot R_{n, \delta}^{*}(\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)) = \sigma^{2} \alpha \cdot Q_{\norm{Z}_2^2}(1 - \delta) \asymp \sigma^2 \alpha \cdot \brack*{d + \log(1/\delta)},
\end{equation*}
where $Z \sim \mathcal{N}(0, I_{d \times d})$, and where the relation $\asymp$ holds when $\delta \in (0, 1/2)$.
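When the parameters are known, the limiting expression is a scaled chi-squared quantile. The following one-line sketch (helper name ours) evaluates the resulting large-$n$ approximation of the minimax risk.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def asymptotic_minimax_risk(n, d, sigma, delta, alpha):
    """sigma^2 * alpha * Q_{chi^2_d}(1 - delta) / n, the large-n
    approximation; the quantile is of order d + log(1/delta) for
    delta < 1/2."""
    return sigma**2 * alpha * chi2.ppf(1 - delta, df=d) / n
\end{verbatim}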
Non-asymptotically, and with no further assumptions on the error function, it is difficult to say much more about the minimax risk beyond what Proposition <ref> provides. However, determining when the minimax risk is infinite is tractable, as the next result shows.
Under the setup of Theorem <ref>, let $\eps_{n} \defeq \Prob\paren*{\rank(\widehat{\Sigma}_{n}) < d}$. Then
\begin{equation*}
R_{n, \delta}^{*}(\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)) = \infty \Leftrightarrow \delta \leq \eps_{n} \quad \text{ and } \quad
\rho(P_{X})^{n} \leq \eps_{n} \leq \binom{n}{d-1} \rho(P_{X})^{n - d - 1},
\end{equation*}
where $\rho(P_{X}) \defeq \sup_{w \in \R^{d} \setminus \brace*{0}} \Prob(\inp{w}{X} = 0) < 1$.
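The probability $\eps_{n}$ is straightforward to approximate by simulation. As an illustration, the sketch below uses a Rademacher design as a stand-in for $P_{X}$ (for which $\rho(P_{X}) = 1/2$ when $d \geq 2$); any sampler for $P_{X}$ could be substituted.
\begin{verbatim}
import numpy as np

def eps_n_mc(n, d, trials=50000, seed=0):
    """Monte Carlo sketch of eps_n = P(rank(Sigma_hat_n) < d) for a
    Rademacher design."""
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(trials):
        X = rng.choice([-1.0, 1.0], size=(n, d))
        if np.linalg.matrix_rank(X) < d:  # rank(Sigma_hat_n) = rank(X)
            fails += 1
    return fails / trials
\end{verbatim}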
The upper bound on $\eps_{n}$ as well as the statement $\rho(P_{X}) < 1$ in Lemma <ref> are due to El Hanchi and Erdogdu, 2023. At a high level, Lemma <ref> says that the range of failure probabilities for which the risk is infinite gets exponentially small as a function of $n$. This is in sharp contrast with the result of Mourtada, 2022 under the classical notion of risk and the square error, where it was shown that the minimax risk in that case is infinite for all sample sizes as soon as $\rho(P_{X}) > 0$.
§.§ Square error
We assume throughout this section that $e(t) = t^2/2$. We derive increasingly loose but more interpretable upper and lower bounds on the minimax risk in this setting. Our motivation is to better understand the influence of each of the parameters $(n, \delta, P_{X}, \sigma^2)$ on the minimax risk. Practically, the main result of this section is the identification of a general sufficient condition under which the lower bound in Proposition <ref> is tight. With that goal in mind, we start with a simple corollary of Theorem <ref>.
Under the setup of Theorem <ref>,
\begin{equation*}
R_{n, \delta}^{*}(\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)) = \frac{\sigma^2}{2n} \cdot Q_{\norm{Z}_2^2}(1 - \delta),
\end{equation*}
where the random variable $Z$ is jointly distributed with $(X_i)_{i=1}^{n} \sim P_{X}^{n}$ such that $Z \mid (X_i)_{i=1}^{n} \sim \mathcal{N}(0, \widetilde{\Sigma}_{n}^{-1})$ on the event that the sample covariance matrix $\widehat{\Sigma}_{n}$ is invertible, and where $\widetilde{\Sigma}_{n} = \Sigma^{-1/2} \widehat{\Sigma}_{n} \Sigma^{-1/2}$ is the whitened sample covariance matrix; otherwise $\norm{Z}_2^2 \defeq \infty$.
Corollary <ref> already makes explicit the dependence of the minimax risk on $(n, \sigma^2)$, but the dependence on $(P_{X}, \delta)$ remains implicit. The next result is a step towards clarifying this relationship.
Under the setup of Theorem <ref>, and for all $\delta \in (0, (1-\eps_n)/4)$,
\begin{equation*}
R_{n, \eps_{n} + \delta}^{*}(\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)) \begin{dcases}
&\leq 2 \cdot \frac{\sigma^2}{n} \brack*{Q_{\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}}(1 - \eps_n - \delta/2) + Q_{W}(1 - \eps_n - \delta/2)}, \\
&\geq \frac{1}{6428} \cdot \frac{\sigma^{2}}{n} \brack*{Q_{\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}}(1 - \eps_n - 4\delta) + Q_{W}(1 - \eps_n - 4\delta)},
\end{dcases}
\end{equation*}
where we defined $\Tr(\widetilde{\Sigma}_{n}^{-1}) \defeq \infty$ when $\widetilde{\Sigma}_{n}$ is not invertible, and $W$ is a random variable jointly distributed with $(X_i)_{i=1}^{n} \sim P_{X}^{n}$ and with conditional distribution $W \mid (X_i)_{i=1}^{n} \sim {\normalfont\text{Exp}}(\lambdamin(\widetilde{\Sigma}_{n}))$, with the convention that the exponential distribution ${\normalfont\text{Exp}}(0)$ refers to the unit mass at $\infty$.
It is interesting to compare this result with the exact minimax risk under the classical notion of risk computed by Mourtada, 2022, given by $(\sigma^{2}/n) \cdot \Exp\brack*{\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}}$. Proposition <ref> says that the minimax quantile risk is upper and lower bounded by a `strong' term given by a quantile of $\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}$, and a `weak' term governed by the distribution of $\lambdamin(\widetilde{\Sigma}_{n})$. Our next result shows that the lower bound from Proposition <ref> improves on the one from Proposition <ref>.
Let $\delta \in (\eps_{n}, 1)$. Then
\begin{align*}
d \cdot (1 - \delta) &\leq Q_{\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}}(1 - \delta) \leq Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1 - \delta) \cdot d, \\
\log(1/\delta) &\leq Q_{W}(1 - \delta) \leq Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1 - \delta/2) \cdot \log(2/\delta).
\end{align*}
This lemma further shows that a sufficient condition for the lower bound of Proposition <ref> to be tight is the boundedness of $Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1 - \delta/2)$ by an absolute constant. Under what conditions on $(n, \delta, P_{X})$ does this hold? Our results from Section <ref> provide a satisfying answer.
Assume that $P_{X}$ has fourth moments. If $\delta \in (0, 1/2)$ and
\begin{equation*}
n \geq \max\brace*{128 \brack*{4 \log(3d) \lambdamax(S(P_{X})) + R(P_{X}) \log(2/\delta)}, \frac{\log(3d)}{18 \lambdamax(S(P_{X}))}, \frac{\log(2/\delta)}{R(P_{X})}},
\end{equation*}
then
\begin{equation*}
R_{n, \delta}^{*}(\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)) \asymp \frac{\sigma^2 (d + \log(1/\delta))}{n},
\end{equation*}
where the parameters $S(P_{X})$, $R(P_{X})$ are as defined in (<ref>).
Corollary <ref> can be interpreted as a non-asymptotic version of Proposition <ref> for the square error function. As we argue in Section <ref>, the fourth moment assumption is very natural in this setting, and the sample size restriction is, in a sense, optimal. The main restriction on the sample size comes from the first term, as both $\lambdamax(S(P_{X}))$ and $R(P_{X})$ are expected to be large.
Suppose that there exists an $\alpha > 0$ such that
\begin{equation*}
\lim_{t \to \infty} t^{\alpha} \Prob\paren*{\lambdamin(\widetilde{\Sigma}_{n}) < \frac{1}{t}} = 0
\end{equation*}
Then there exists a sequence $(\delta_k)_{k=1}^{\infty}$ in $(0, 1-\eps_n)$ satisfying $\delta_k \to 0$ as $k \to \infty$ such that
\begin{equation*}
R_{n, \eps_{n} + \delta_{k}}^{*}(\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)) \asymp \frac{\sigma^{2}}{n} \brack*{Q_{\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}}(1 - \eps_n - \delta_{k}) + Q_{W}(1 - \eps_n - \delta_{k})}.
\end{equation*}
We note that a sufficient condition for the hypothesis of Corollary <ref> to hold is the finiteness of $\Exp\brack*{\lambdamax(\widetilde{\Sigma}^{-1}_{n})^{\alpha} \mathbbm{1}_{[0, \infty)}(\lambdamax(\widetilde{\Sigma}_{n}^{-1}))}$ for some $\alpha > 0$. We also mention that the conclusion can be strengthened to the existence of a $\delta_0 \in (0, 1-\eps_n)$ such that the statement holds for all $\delta \in (0, \delta_0)$ under an additional mild hypothesis; we discuss this further in Appendix <ref>.
§.§.§ When is the lower bound of Proposition <ref> tight?
Let $\delta \in (0, 1-\eps_{n})$. Then
\begin{align*}
d \cdot (1 - \delta) &\leq Q_{\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}}(1 - \eps_{n} - \delta) \leq d \cdot Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1 - \eps_{n} - \delta), \\
(1-\eps_{n}) \log((1-\eps_{n})/\delta) &\leq Q_{W}(1 - \eps_{n} - \delta) \leq Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1 - \eps_{n} - \delta/2) \log(2/\delta).
\end{align*}
Lemma <ref> shows that the lower bound from Proposition <ref> improves on the one from Proposition <ref>. Furthermore, the lemma shows that a sufficient condition for the lower bound of Proposition <ref> to be tight is that $Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1 - \eps_{n} - \delta/2)$ be bounded by an absolute constant. Under what conditions on $P_{X}$, $n$, and $\delta$ does this hold? This is what we address next.
To gain some intuition about what a reasonable set of conditions would be, we recall the fact that $\widetilde{\Sigma}_{n}$ is an empirical average of the random matrices $\tilde{X}\tilde{X}^{T}$ where $\tilde{X} \defeq \Sigma^{-1/2}X$ and $X \sim P_{X}$. Therefore, by the law of large numbers, $\widetilde{\Sigma}_{n} \overset{d}{\to} I_{d \times d}$, and by the continuous mapping theorem, $\lambdamax(\widetilde{\Sigma}_{n}^{-1}) \overset{d}{\to} 1$ as $n \to \infty$. To say something about the rate of this convergence, the most natural assumption to make is that the second moment of the random matrix $\tilde{X}\tilde{X}^{T}$ exists so that the central limit theorem holds, which is equivalent to assuming that $P_{X}$ has fourth moments. Under this assumption, our results in Section <ref> provide a full characterization of $Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1 - \delta)$. Building on these results, we obtain the following sufficient joint conditions on $(P_{X}, n, \delta)$ that guarantee that the lower bound of Proposition <ref> is tight.
Assume that $P_{X}$ has fourth moments and define
\begin{equation*}
S(P_{X}) \defeq \Exp\brack*{\paren*{\tilde{X}\tilde{X}^{T} - I}^{2}}, \quad\quad R(P_{X}) \defeq \sup_{v \in S^{d-1}} \Exp\brack*{\paren*{\inp{v}{\tilde{X}}^{2} - 1}^2}.
\end{equation*}
Assume further that $n \geq 64 (1 + \log(d)) \lambdamax(S(P_{X}))$ and $\delta \in [\delta_{0}(P_{X}), 1)$. Then
\begin{equation*}
\inf_{\hat{w}_{n}} \sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^{2})} R_{n, \delta}(P, \hat{w}_{n}) \asymp \frac{\sigma^2 (d + \log(1/\delta))}{n},
\end{equation*}
where
\begin{equation*}
\delta_{0}(P_{X}) \defeq \exp\paren*{-\min\brace*{\frac{3n - 4(1 + \log(d))}{16}, \frac{n}{128 R(P_{X})}}}.
\end{equation*}
§.§ p-th power error
The results of the last section are quite specific to the case of the square error, and it is a priori unclear how the minimax risk of other error functions can be studied non-asymptotically. Let us build on the observation that Corollary <ref> is a non-asymptotic version of Proposition <ref> for the square error. Can we do this for more general error functions?
The underlying proof idea of Proposition <ref> is a simple second-order Taylor expansion, which becomes exact as $n \to \infty$. If we have non-asymptotic control over the error in this expansion, we can carry out the argument behind Proposition <ref> non-asymptotically. We implement this idea here, and conclude this section with the following non-asymptotic lower bound on the minimax risk under a $p$-th power error function.
Assume that $e(t) = \abs{t}^{p}/[p(p-1)]$ for some $p \in (2,\infty)$. Under the setup of Theorem <ref>, and for $\delta \in (0, 1/2)$, we have
\begin{equation*}
R_{n, \delta}^{*}(\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)) \geq \frac{m(p-2)}{16 (p-1)} \cdot \frac{\sigma^{p}\brack*{d + \log(1/\delta)}}{n}
\end{equation*}
where $m(p) \defeq 2^{p/2}\,\Gamma\paren*{\frac{p+1}{2}}/\sqrt{\pi}$ is the $p$-th absolute moment of a standard normal variable.
Finally, we use our result to compute the asymptotic minimax risk for any fixed $\delta$ when the number of samples diverges.
In the context of Theorem <ref>, it holds that
\begin{equation*}
\lim_{n \to \infty} n \cdot \paren*{\inf_{\hat{w}_{n}} \sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)} R_{n, \delta}(P, \hat{w}_{n})} = \frac{m(p-2)\, \sigma^{p}}{2} \cdot Q_{\norm{Z}_2^2}(1 - \delta) \asymp \sigma^{p} m(p-2) \brack*{d + \log(1/\delta)},
\end{equation*}
where $Z \sim \mathcal{N}(0, I_{d \times d})$, where the relation $\asymp$ holds when $\delta \in (0, 1/2)$, and where we used $a \asymp b$ to mean that there exist absolute constants $C, c > 0$ such that $c \cdot b \leq a \leq C \cdot b$.
§.§ Square error function
In this subsection, we focus on the canonical case $e(t) = t^{2}/2$, which is trivially convex and symmetric.
When $e(t) = t^{2}/2$, the statement of Theorem <ref> holds with
\begin{equation*}
p_{n}(t) = \Exp\brack*{\Prob\paren*{\norm{Z}_2 \leq \sqrt{t} \st (X_i)_{i=1}^{n}} \mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})}
\end{equation*}
where $Z \mid (X_{i})_{i=1}^{n} \sim \mathcal{N}\paren*{0, \frac{\sigma^{2}}{n}\widetilde{\Sigma}^{-1}}$ and $\widetilde{\Sigma} \defeq \Sigma^{-1/2} \widehat{\Sigma} \Sigma^{-1/2}$.
As $n\to \infty$, $\widetilde{\Sigma} \overset{d}{\to} I$ and $\eps_{n} \to 0$, so that $W \overset{d}{\to} \text{Exp}(1)$ and $\Tr\paren{\widetilde{\Sigma}^{-1}} \overset{d}{\to} d$; therefore the minimax risk converges, up to an absolute constant, to
\begin{equation*}
\frac{\sigma^2 (d + \log(1/\delta))}{n}
\end{equation*}
which is what one would expect from the local minimax theorems of Hájek and Le Cam, as well as from the asymptotic normality of ERM.
Let $X \in [0, \infty]$ be a random variable and let $p \defeq \Prob\paren*{X = \infty}$. Assume that
\begin{equation*}
\lim_{x \to \infty} x^{\alpha}(1 - p - F_{X}(x)) = 0,
\end{equation*}
for some $\alpha > 0$. Then, for all $c > 1$, we have
\begin{equation*}
\liminf_{\delta \downarrow 0} \frac{Q_{X}(1 - p - \delta/c)}{Q_{X}(1 - p - \delta)} \leq c^{1/\alpha}.
\end{equation*}
We make the following remarks.
* A sufficient condition for the hypothesis of the above Lemma is $\Exp\brack*{X^{\alpha} \mathbbm{1}_{[0, \infty)}(X)} < \infty$ for some $\alpha > 0$.
* A consequence of the above Lemma is the existence of a sequence $(\delta_k)_{k=1}^{\infty}$ such that $\delta_k \to 0$ as $k \to \infty$, and $Q_{X}(1-p-\delta_k/c)/Q_{X}(1 - p - \delta_k) \leq c^{1/\alpha}$ for all $k \in \N$. When the limit exists, this is strengthened to the existence of $\delta_{0} \in (0,1-p)$ such that $Q_{X}(1-p-\delta/c)/Q_{X}(1 - p - \delta) \leq c^{1/\alpha}$ for all $\delta \in (0, \delta_0]$.
Suppose that there exists an $\alpha > 0$ such that
\begin{equation*}
\lim_{t \to \infty} t^{\alpha} \Prob\paren*{\lambdamin(\widetilde{\Sigma}) < \frac{1}{t}} = 0
\end{equation*}
and assume that the following limits exist
\begin{equation*}
\lim_{\delta \downarrow 0} \frac{Q_{\Tr\paren*{\widetilde{\Sigma}^{-1}}}(1 - \eps_n - \delta/2)}{Q_{\Tr\paren*{\widetilde{\Sigma}^{-1}}}(1 - \eps_n - \delta/4)} \quad\quad \lim_{\delta \downarrow 0} \frac{Q_{W}(1 - \eps_n - \delta/2)}{Q_{W}(1 - \eps_n - \delta/4)}.
\end{equation*}
Then there exists a $\delta_{0} \in (0, 1-\eps_{n})$ such that, for all $\delta \in (0, \delta_0]$,
\begin{equation*}
\inf_{\hat{w}} \sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^{2})} R_{\eps_n + \delta,n}(P, \hat{w}) \asymp Q_{\Tr\paren*{\widetilde{\Sigma}^{-1}}}(1 - \eps_n - \delta) + Q_{W}(1 - \eps_n - \delta)
\end{equation*}
where we used $a \asymp b$ to mean that there exists absolute constants $0 < c < C < \infty$ such that $c \cdot b \leq a \leq C \cdot b$.
Assume that the error function $e$ is convex and symmetric, and let
\begin{equation*}
w^{*} \in \argmin_{w \in \R^{d}} \Exp\brack*{e(\inp{w}{X}-Y)}
\end{equation*}
Then for all distributions $P_{X}$ on $\R^{d}$ and all $\sigma^{2} > 0$,
\begin{equation*}
\inf_{\hat{w}} \sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^{2})} R_{\delta,n}(P, \hat{w}) = p_{n}^{-}(1 - \delta),
\end{equation*}
where
\begin{equation*}
p_{n}(t) \defeq \Exp\brack*{\Prob\paren*{\ell\paren*{\frac{\sigma}{\sqrt{n}} \widehat{\Sigma}^{-1/2}Z} \leq t \st (X_i)_{i=1}^{n}} \mathbbm{1}_{\rank\paren*{\widehat{\Sigma}} = d}((X_i)_{i=1}^{n})},
\end{equation*}
and $Z \sim \mathcal{N}(0, I_{d})$ is independent of $(X_i)_{i=1}^{n}$, $\widehat{\Sigma} = n^{-1} \sum_{i=1}^{n} X_{i}X_{i}^{T}$ is the sample covariance matrix, and $\ell(v) \defeq \Exp\brack*{e(\inp{v}{X} - \eps)} - \Exp\brack*{e(\eps)}$. Furthermore, all the estimators satisfying
\begin{equation*}
\hat{w}((X_i, Y_i)_{i=1}^{n}) \in \argmin_{w \in \R^{d}} \frac{1}{n}\sum_{i=1}^{n} e\paren*{\inp{w}{X_i} - Y_i}
\end{equation*}
are minimax.
§ MINIMAXITY OF THE MIN-MAX LINEAR REGRESSION PROCEDURE
In this section, we establish the minimaxity of a variant of the popular min-max regression procedure (e.g. Audibert and Catoni, 2011; Lecué and Lerasle, 2020; Oliveira and Resende, 2023) over suitably large classes of problems under the $p$-th power error functions $e(t) = \abs{t}^{p}/[p(p-1)]$, for $p \in [2, \infty)$. Before stating our results, we briefly describe the construction of the procedure.
Let $\alpha, \beta \in \R$ such that $\alpha \leq \beta$ and define $\phi_{\alpha,\beta}(x) \defeq \alpha \mathbbm{1}_{(-\infty, \alpha)}(x) + x \mathbbm{1}_{[\alpha, \beta]}(x) + \beta \mathbbm{1}_{(\beta,\infty)}(x)$. For a real valued sequence $a \defeq (a_i)_{i=1}^{n}$, define the sequence $a^{*} = (a^{*}_{i})_{i=1}^{n}$ by $a^{*}_i \defeq a_{\pi(i)}$ where $\pi$ is a permutation that orders $a$ increasingly.
Fix $k \in \brace*{1, \dotsc, \floor{n/2}}$, and define $\varphi_{k}[a] \defeq \sum_{i=1}^{n} \phi_{a^{*}_{1+ k}, a^{*}_{n-k}}(a_i)$. Given samples $(X_i, Y_i)_{i=1}^{n}$, and for $w, v \in \R^{d}$, define
\begin{gather*}
\psi_{k}(w, v) \defeq n^{-1} \varphi_{k}\brack*{\paren*{e(\inp{w}{X_i} - Y_i) - e(\inp{v}{X_i} - Y_i)}_{i=1}^{n}},
\end{gather*}
and consider the procedure
\begin{equation}
\label{eq:procedure}
\hat{w}_{n, k}((X_i, Y_i)_{i=1}^{n}) \in \argmin_{w \in \R^{d}} \max_{v \in \R^{d}} \psi_{k}(w, v).
\end{equation}
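For concreteness, the criterion $\psi_{k}$ can be transcribed directly, as in the sketch below (names ours; the saddle point in (<ref>) itself would still have to be approximated numerically, e.g. by alternating subgradient steps, which we do not attempt here).
\begin{verbatim}
import numpy as np

def psi_k(w, v, X, Y, k, e=lambda t: 0.5 * t**2):
    """The trimmed criterion psi_k(w, v): differences of errors, each
    clipped to [a*_{1+k}, a*_{n-k}] before averaging."""
    a = e(X @ w - Y) - e(X @ v - Y)
    a_sorted = np.sort(a)
    lo, hi = a_sorted[k], a_sorted[-k - 1]   # a*_{1+k} and a*_{n-k}
    return float(np.clip(a, lo, hi).mean())
\end{verbatim}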
§.§ Square error
Our first result shows that for the square error, and under appropriate conditions, the procedure (<ref>) is minimax up to absolute constants over $\mathcal{P}_{2}(P_{X}, \sigma^2)$ when $P_{X}$ has finite fourth moments.
Under the square error $e(t) = t^2/2$, let $\delta \in (0,1/4)$ be such that $k \defeq 8 \log(4/\delta)$ is an integer satisfying $1 \leq k \leq \floor{n/8}$. Assume that $P_{X}$ has finite fourth moments. If
\begin{equation*}
n \geq 800^{2} \cdot \paren*{8 \log(6d) \cdot \brack*{\lambdamax(S(P_{X})) + 1} + \brack*{R(P_{X}) + 1} \log(1/\delta)},
\end{equation*}
where $S(P_{X})$ and $R(P_{X})$ are as defined in (<ref>), then
\begin{equation*}
R_{n,\delta}(\mathcal{P}_{2}(P_{X}, \sigma^2), \hat{w}_{n,k}) \leq (100)^{2} \cdot \frac{\sigma^2 (d + \log(1/\delta))}{n}.
\end{equation*}
Compared to Proposition <ref>, the upper bound in Theorem <ref> involves no distribution-dependent constants, which shows the minimaxity of the procedure $(\ref{eq:procedure})$ up to an absolute constant by Proposition <ref>. On the other hand, we require the existence of fourth moments, which is more than what is required in Proposition <ref>. As we argue in Section <ref> however, the fourth moment assumption is quite natural. We also note that the sample size restriction in Theorem <ref> nearly matches that in Corollary <ref>, which, as we discuss in Section <ref>, is optimal in a certain sense.
§.§ p-th power error
We now move to the case $p \in (2, \infty)$. The first difficulty we are faced with here is that it is a priori unclear what set of problems we should select that is both tractable and large enough to model realistic scenarios. Using our insights from Section <ref>, we propose the following analogue to the class $\mathcal{P}_{2}(P_{X}, \sigma^2)$, under the necessary conditions that $P_{X}$ and the noise $\xi$ have finite $p$-th moments:
\begin{equation*}
\mathcal{P}_{p}(P_{X}, \sigma^2, \mu) \defeq \brace*{P \st (X, Y) \sim P : X \sim P_{X} \text{ and (\ref{eq:condition}) holds.} },
\end{equation*}
where $\mu \in (0, m(p-2) \cdot \sigma^{p-2}]$, $m(p)$ is as in Proposition <ref>, and (<ref>) is the condition
\begin{equation}
\label{eq:condition}
\frac{\esssup(\Exp\brack{\abs{\xi}^{2(p-1)} \mid X})}{\essinf(\Exp\brack*{\abs{\xi}^{p-2} \mid X})} \leq \frac{m(2p-2)}{m(p-2)} \cdot \sigma^{p} \eqdef r(p) \quad \text{ and } \quad \essinf(\Exp\brack{\abs{\xi}^{p-2} \mid X}) \geq \mu \tag{$\star$},
\end{equation}
where $\xi \defeq Y-\inp{w^{*}}{X}$ and $w^{*} \in \R^{d}$ is the unique minimizer of the expected error $E(w)$.
It is straightforward to check that $\mathcal{P}_{\normalfont \text{Gauss}}(P_{X}, \sigma^{2}) \subset \mathcal{P}_{p}(P_{X}, \sigma^2, \mu)$, for all legal choices of $\mu$.
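Indeed, if $\xi \sim \mathcal{N}(0, \sigma^{2})$ is independent of $X$, the conditional moments in (<ref>) are constants, and by definition of $m(\cdot)$,
\begin{equation*}
\frac{\Exp\brack*{\abs{\xi}^{2(p-1)}}}{\Exp\brack*{\abs{\xi}^{p-2}}} = \frac{m(2p-2)\, \sigma^{2p-2}}{m(p-2)\, \sigma^{p-2}} = \frac{m(2p-2)}{m(p-2)} \cdot \sigma^{p} = r(p),
\end{equation*}
so the first condition in $(\star)$ holds with equality, and the second holds for every legal choice of $\mu$.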
While at first this seems like an overly special class of distributions, let us now argue that this is far from the case. In fact, we claim that this class is a natural extension of $\mathcal{P}_{2}(P_{X}, \sigma^2)$. Firstly, by setting $p=2$, we recover $\mathcal{P}_{2}(P_{X}, \sigma^2, \mu) = \mathcal{P}_{2}(P_{X}, \sigma^2)$ for all legal $\mu$. Secondly, we note that $\mathcal{P}_{p}(P_{X}, \sigma^2, \mu) \subset \mathcal{P}_{p}(P_{X}, \sigma^2, \mu')$ whenever $\mu \geq \mu'$. Ideally, we would like to take as large a class as possible, which would correspond to the choice $\mu=0$. Unfortunately, our bounds diverge in this setting. On the positive side however, this means that the upper constraint on $\mu$ is benign, as the interesting regime is when $\mu$ is near zero. Finally, note that much like with the set $\mathcal{P}_{2}(P_{X}, \sigma^2)$, one can capture a large set of problems by varying $\sigma^{2}$. As an example, for any linear regression problem where the noise $\xi$ is non-zero, symmetric, and independent of $X$, there exists $(\sigma^{2}, \mu)$ such that $\mathcal{P}_{p}(P_{X}, \sigma^2, \mu)$ contains this problem.
Remarkably, we show that the procedure (<ref>) is minimax over this class under mild assumptions.
Under the $p$-th power error $e(t) = \abs{t}^{p}/[p(p-1)]$ for $p \in (2, \infty)$, let $\delta \in (0,1)$ be such that $k \defeq 8 \log(4/\delta)$ is an integer satisfying $1 \leq k \leq \floor{n/8}$. Assume that $P_{X}$ has finite fourth moments. Define $a(p) \defeq \paren*{\frac{m(2p-2)\sigma^{p}}{m(p-2)}}^{\frac{p-2}{p-1}} \mu^{-\frac{p}{p-1}}$. If
\begin{equation*}
n \geq a(p) \cdot \brack*{\paren*{8\log(6d) [\lambdamax(S(P_{X})) + 1] + [R(P_{X}) + 1] \log(1/\delta)} + (d + \log(1/\delta)) p^{4} N(P_{X}, p)^{\frac{2p}{p-2}}}
\end{equation*}
where $r(p)$ and $\mu$ are as in (<ref>), $S(P_{X})$ and $R(P_{X})$ are as in (<ref>), and $N(P_{X}, p)$ is the norm equivalence constant between the $L^{p}$ and $L^{2}$ norms induced by $P_{X}$ on the set of linear functions on $\R^{d}$, given by $N(P_{X}, p) = \sup_{w \in \R^{d}\setminus \brace{0}} \Exp\brack*{\abs{\inp{w}{X}}^{p}}^{1/p}/\Exp\brack*{\inp{w}{X}^{2}}^{1/2}$, then
\begin{equation*}
R_{n,\delta}(\mathcal{P}_{p}(P_{X}, \sigma^2, \mu), \hat{w}_{n,k}) \leq 120^{2} \cdot K(p) \cdot \frac{\sigma^{p}[d + \log(1/\delta)]}{n},
\end{equation*}
where $K(p) \defeq (p-1)^{2} \cdot m(2p-2)/m(p-2)$.
Combining this result with Proposition <ref> shows the minimaxity of the procedure (<ref>) on this class of problems, up to a constant that depends only on $p$. The closest related result is due to El Hanchi and Erdogdu, 2023, who studied the performance of ERM on linear regression under the $p$-th power error. Their result, however, is specific to ERM and, as expected, only yields a polynomial dependence on $1/\delta$ instead of the $\log(1/\delta)$ dependence needed to establish minimaxity. Our result combines the insights of that work with the proof techniques used to study the procedure (<ref>) developed by Lugosi and Mendelson, 2019, as well as our new insights on how to leverage the fourth moment assumption to obtain absolute constants instead of distribution-dependent constants in the upper bound.
§ THE QUANTILE RISK
In this section, we study the quantile risk in full generality. Our motivation for doing so is to provide the tools necessary for proving Theorem <ref>. The results we obtain are however more general and can be used to tackle other problems. We illustrate this with two examples at the end of the section.
Before we formulate our results, let us briefly introduce some basic decision-theoretic concepts. To avoid overloading the notation, the symbols we introduce here will be specific to this section. A decision problem has the following components: a set of possible observations $\mathcal{O}$, a subset $\mathcal{P}$ of probability measures on $\mathcal{O}$, a set of available actions $\mathcal{A}$, a loss function $\ell: \mathcal{P} \times \mathcal{A} \to \R$, and a decision rule $d: \mathcal{O} \to \mathcal{A}$. For a fixed distribution $P$, the performance of a decision rule is classically evaluated through its expected loss $\Exp\brack*{\ell(P, d(O))}$ where $O \sim P$. Here instead, for a user-chosen failure probability $\delta \in (0, 1)$, we evaluate the performance of a decision rule through its quantile risk $R_{\delta}(\ell, P, d) \defeq Q_{\ell(P, d(O))}(1 - \delta)$. We define the associated worst-case and minimax risks by $R_{\delta}(\ell, d) \defeq \sup_{P \in \mathcal{P}} R_{\delta}(\ell, P, d)$ and $R^{*}_{\delta}(\ell) \defeq \inf_{d} R_{\delta}(\ell, d)$ respectively. Our aim is to develop methods to understand the minimax risk and establish the minimaxity of candidate decision rules.
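In code, the only change relative to the classical notion of risk is that an average of losses is replaced by an empirical quantile, as in the following minimal sketch (helper name ours).
\begin{verbatim}
import numpy as np

def quantile_risk(losses, delta):
    """Q_{loss}(1 - delta) estimated from i.i.d. draws of ell(P, d(O)):
    the level exceeded with probability at most delta, in contrast with
    the classical expected loss np.mean(losses)."""
    return float(np.quantile(np.asarray(losses), 1 - delta))
\end{verbatim}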
§.§ A Bayesian criterion for minimaxity and an invariance principle
A classical way to establish the minimaxity of a decision rule is by computing its worst-case risk and showing that it matches the limit of a sequence of Bayes risks [Lehmann and Casella, 2006]. The following result provides an analogue to this method when working under the quantile risk.
For a distribution $\pi$ on $\mathcal{P}$, define $F^{\pi}_{\ell(P, d(O))}$ to be the cumulative distribution function of the random variable $\ell(P, d(O))$, where $P \sim \pi$ and $O \mid P \sim P$.
Let $(\pi_{k})_{k \in \N}$ be a sequence of distributions on $\mathcal{P}$. For any $t \in \R$, define
\begin{equation*}
p_{\ell, k}(t) \defeq \sup_{d} F^{\pi_{k}}_{\ell(P, d(O))}(t).
\end{equation*}
Assume that the functions $(p_{\ell, k})_{k \in \N}$ are right-continuous and that the sequence is decreasing, i.e. $p_{\ell, k} \geq p_{\ell, k+1}$, and let $p_{\ell} \defeq \inf_{k \in \N} p_{\ell, k} = \lim_{k \to \infty} p_{\ell, k}$. If $\hat{d}$ is a decision rule satisfying
\begin{equation*}
\sup_{P \in \mathcal{P}} R_{\delta}(\ell, P, \hat{d}) = p_{\ell}^{-}(1-\delta),
\end{equation*}
where $p^{-}_{\ell}$ is the pseudo-inverse of $p_{\ell}$, then $\hat{d} \in \argmin_{d} R_{\delta}(\ell, d)$, i.e. it is minimax.
We mention that instantiations of the arguments leading to Theorem <ref> have been used by Depersin and Lecué, 2022 and Gupta et al., 2023 to tackle specific problems. The general form we present here is new, and relies on new analytical results concerning collections of quantile functions.
In applications, it is desirable that the optimality of a decision rule depends as little as possible on the loss function, or conversely, that a single decision rule be minimax for as large a number of loss functions as possible. The following result shows that the minimaxity of a decision rule in the quantile risk is invariant to at least one form of transformation of the loss function.
Let $\varphi: \R \to \R$ be a strictly increasing left-continuous function, and define $\varphi(\infty) \defeq \infty$ and $\varphi(-\infty) \defeq -\infty$. Then $R_{\delta}(\varphi \circ \ell, P, d) = \varphi\paren*{R_{\delta}(\ell, P, d)}$. Furthermore, if $R_{\delta}(\ell, d) < \infty$, then $R_{\delta}(\varphi \circ \ell, d) = \varphi(R_{\delta}(\ell, d))$. Finally, if $R^{*}_{\delta}(\ell) < \infty$, then
\begin{equation*}
d^{*} \in \argmin_{d} R_{\delta}(\ell, d) \implies d^{*} \in \argmin_{d} R_{\delta}(\varphi \circ \ell, d).
\end{equation*}
§.§ Mean estimation revisited
To exhibit the usefulness of the above results, we briefly revisit the problem of mean estimation under Gaussian data. This problem can be embedded in the above decision-theoretic setting as follows. The observations are random vectors $(X_i)_{i=1}^{n} \in (\R^{d})^{n}$ for some $d, n \in \N$, the subset of distributions is the $n$-product of distributions in the class $\mathcal{P}_{\normalfont\text{Gauss}}(\Sigma) \defeq \brace*{\mathcal{N}(\mu, \Sigma) \st \mu \in \R^{d}}$, for a fixed $\Sigma \in S_{++}^{d}$. The set of available actions is $\R^{d}$, and the loss function is given by $\ell(P^{n}, \mu) \defeq e(\mu - \mu(P))$ for some error function $e$ and where $\mu(P)$ is the mean of the distribution $P$. Finally, a decision rule is given by an estimator $\hat{\mu}: (\R^{d})^{n} \to \R^{d}$. The following result gives the minimax quantile risk for this problem under a mild assumption on the error function $e$, generalizing the recent result of [Depersin and Lecué, 2022] which holds under stronger assumptions on $e$.
Assume that the error function $e$ satisfies $e = \varphi \circ s$, where $\varphi$ is a left-continuous strictly increasing function, and $s$ is both quasiconvex, i.e. $s(tv + (1-t)u) \leq \max\brace*{s(v), s(u)}$ for all $t \in [0,1]$ and $u,v \in \R^{d}$, and symmetric, i.e. $s(v) = s(-v)$. Then for all $\Sigma \in S_{++}^{d}$
\begin{equation*}
\inf_{\hat{\mu}}\sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(\Sigma)} R_{\delta}(\ell, P^{n}, \hat{\mu}) = Q_{e(Z)}(1- \delta),
\end{equation*}
where $Z \sim \mathcal{N}(0, \Sigma/n)$. Furthermore, the sample mean $\hat{\mu}((X_i)_{i=1}^{n})\defeq n^{-1}\sum_{i=1}^{n}X_i$ is minimax.
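The right-hand side is easy to approximate numerically: draw $Z \sim \mathcal{N}(0, \Sigma/n)$ and take an empirical quantile of $e(Z)$, as in the sketch below (an illustration with names of our choosing; the error function defaults to the Euclidean norm).
\begin{verbatim}
import numpy as np

def mean_minimax_risk(Sigma, n, delta, e=np.linalg.norm,
                      trials=100000, seed=0):
    """Monte Carlo sketch of Q_{e(Z)}(1 - delta) for Z ~ N(0, Sigma/n)."""
    rng = np.random.default_rng(seed)
    d = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma / n)
    Z = rng.standard_normal((trials, d)) @ L.T
    vals = np.array([e(z) for z in Z])
    return float(np.quantile(vals, 1 - delta))
\end{verbatim}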
Suppose $\delta \in (0, 0.1]$, and $e(v) = \norm{v}$ for an arbitrary norm $\norm{\cdot}$. Let $S$ denote the unit sphere of the dual norm, and let $R \defeq \sup_{v \in S} v^{T} \Sigma v$. Then
\begin{equation*}
\inf_{\hat{\mu}}\sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(\Sigma)} R_{n, \delta}(P, \hat{\mu}) \asymp \frac{\Exp\brack*{\norm{Z}}}{\sqrt{n}} + \sqrt{\frac{R \log(1/\delta)}{n}}.
\end{equation*}
Furthermore, if $\norm{v}$ is induced by an inner product $\inp{v}{u} = v^{T} A u$, then, with $\widetilde{\Sigma} \defeq A^{1/2} \Sigma A^{1/2}$,
\begin{equation*}
\inf_{\hat{\mu}}\sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(\Sigma)} R_{n, \delta}(P, \hat{\mu}) \asymp \sqrt{\frac{\Tr({\widetilde{\Sigma}})}{n}} + \sqrt{\frac{\lambda_{\text{max}}(\widetilde{\Sigma}) \log(1/\delta)}{n}}.
\end{equation*}
In the special case where $d=1$, $\Sigma = \sigma^2$, and $e(t) = \abs{t}$, we have
\begin{equation*}
\inf_{\hat{\mu}}\sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(\Sigma)} R_{n, \delta}(P, \hat{\mu}) \asymp \sqrt{\frac{\sigma^2 \log\paren*{1/\delta}}{n}}.
\end{equation*}
The constraint $\delta \in (0, 0.1]$ is only assumed in Corollary <ref> to ease the exposition. Upper and lower bounds that hold for all $\delta \in (0, 1)$, from which the statements in Corollary <ref> are deduced, are available in the Appendix.
§.§ Minimax estimation of the variance of a Gaussian with known mean
As a second application of our results, we consider the problem of variance estimation with one-dimensional Gaussian data. For this problem, the observations are random variables $(X_i)_{i=1}^{n} \in \R^{n}$ for some $n \in \N$, the subset of distributions is the $n$-product of distributions in the class $\mathcal{P}_{\normalfont\text{Gauss}}(\mu) \defeq \brace*{\mathcal{N}(\mu, \sigma^{2}) \st \sigma \in (0, \infty)}$,
for a fixed $\mu$. The set of available actions is $(0, \infty)$, and we consider the following loss function which captures the problem of estimating $\sigma^{2}$ in relative error: $\ell(P^{n}, \sigma^{2}) \defeq \abs*{\log(\sigma^2(P)/\sigma^2)}$ where $\sigma^{2}(P)$ is the variance of the distribution $P$. Finally, a decision rule is given by an estimator $\hat{\sigma}^{2}: \R^{n} \to (0, \infty)$. Using Theorem <ref>, we obtain the following result.
For $\alpha \in (0, \infty)$ and $Z \sim {\normalfont\text{Inv-Gamma}}(\alpha, \alpha)$, define $p_{\alpha}:(0,\infty) \to [0,1)$ by
\begin{equation*}
p_{\alpha}(t) \defeq \Prob\paren*{\frac{1-\exp(-2t)}{2t} \leq Z \leq \frac{\exp(2t) - 1}{2t}}.
\end{equation*}
Then for all $\mu \in \R$
\begin{equation*}
\inf_{\hat{\sigma}^{2}} \sup_{P \in \mathcal{P}_{\normalfont\text{Gauss}}(\mu)} R_{\delta}(\ell, P^{n}, \hat{\sigma}^{2}) = p_{n/2}^{-}(1-\delta).
\end{equation*}
Furthermore, the sample variance is not minimax and the estimator
\begin{equation*}
\hat{\sigma}^{2}((X_i)_{i=1}^{n}) \defeq \frac{\sum_{i=1}^{n}(X_i - \mu)^{2}}{n} \phi\paren*{p_{n/2}^{-}(1-\delta)}
\end{equation*}
is minimax, where $\phi(x) \defeq \sinh(x)/x$, and $p_{n/2}^{-}$ is the pseudo-inverse of $p_{n/2}$.
Surprisingly, Proposition <ref> shows that the sample variance is suboptimal under the quantile risk, but that a carefully reweighted version of it is minimax. We note that as $n \to \infty$, the weight converges to $1$, so that the sample variance is asymptotically minimax. We are not aware of a similar result in the literature.
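The weight is simple to evaluate numerically: the sketch below computes $p_{\alpha}$ from an inverse-gamma CDF and approximates the pseudo-inverse by root-finding, under the (unverified here) assumption that $p_{n/2}$ is continuous and increasing on the bracketing interval.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.stats import invgamma

def p_alpha(t, alpha):
    """p_alpha(t) = P(l(t) <= Z <= u(t)) for Z ~ Inv-Gamma(alpha, alpha)."""
    lo = (1 - np.exp(-2 * t)) / (2 * t)
    hi = (np.exp(2 * t) - 1) / (2 * t)
    Z = invgamma(alpha, scale=alpha)
    return Z.cdf(hi) - Z.cdf(lo)

def minimax_variance_weight(n, delta):
    """phi(p_{n/2}^-(1 - delta)) with phi(x) = sinh(x)/x: the factor
    reweighting the sample variance."""
    t = brentq(lambda s: p_alpha(s, n / 2) - (1 - delta), 1e-8, 50.0)
    return np.sinh(t) / t
\end{verbatim}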
§ SMALLEST EIGENVALUE OF THE SAMPLE COVARIANCE MATRIX
The results of Sections <ref> and <ref>, and in particular the sample size conditions in Corollary <ref> and Theorems <ref> and <ref>, rely on new high-probability lower bounds on the smallest eigenvalue of the sample covariance matrix, which we describe in this section. We first formalize the problem, then state our main results, and conclude by discussing them and relating them to the literature.
Let $P_{X}$ be a distribution on $\R^{d}$ with finite second moments, $X \sim P_{X}$, and denote by $\Sigma \defeq \Exp\brack*{XX^{T}}$ its covariance matrix. For samples $(X_i)_{i=1}^{n} \sim P_{X}^{n}$, define the sample covariance matrix $\widehat{\Sigma}_{n} \defeq n^{-1} \sum_{i=1}^{n} X_{i}X_{i}^{T}$. In this section, we want to identify how close $\widehat{\Sigma}_{n}$ is to $\Sigma$ in a relative error sense and in a one-sided fashion. Specifically, we want to characterize the quantiles of the random variable $\lambdamax(I -\Sigma^{-1/2}\widehat{\Sigma}_{n}\Sigma^{-1/2}) = 1 - \lambdamin(\Sigma^{-1/2}\widehat{\Sigma}_{n}\Sigma^{-1/2})$. To ease notation, we introduce the whitened random vector $\widetilde{X} \defeq \Sigma^{-1/2} X$. Notice that $\Exp\brack{\widetilde{X}\widetilde{X}^{T}} = I_{d \times d}$, and that $\widetilde{\Sigma}_{n} \defeq n^{-1}\sum_{i=1}^{n} \widetilde{X}_{i}\widetilde{X}_{i}^{T} = \Sigma^{-1/2}\widehat{\Sigma}_{n}\Sigma^{-1/2}$. We want to understand the quantiles of $1 - \lambdamin(\widetilde{\Sigma}_{n})$.
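Before stating the results, we note that these quantiles are easy to approximate by simulation; the following sketch (ours, for a standard Gaussian design, which is already white) serves as a numerical baseline for the bounds below.
\begin{verbatim}
import numpy as np

def lambda_min_deviation_quantile(n, d, delta, trials=5000, seed=0):
    """Monte Carlo sketch of Q_{1 - lambda_min(Sigma_tilde_n)}(1 - delta)
    for a standard Gaussian design (Sigma = I)."""
    rng = np.random.default_rng(seed)
    vals = np.empty(trials)
    for t in range(trials):
        X = rng.standard_normal((n, d))
        S = X.T @ X / n
        vals[t] = 1.0 - np.linalg.eigvalsh(S)[0]  # eigenvalues ascend
    return float(np.quantile(vals, 1 - delta))
\end{verbatim}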
As already mentioned, our motivation for studying this problem stems from the fact that upper bounds on these quantiles form a crucial step in the analysis of linear regression in general, e.g. Oliveira, 2016, Mourtada, 2022, and in particular in our results from Sections <ref> and <ref>.
We are now ready to state our results. Define the following variance-like parameters
\begin{equation}
\label{eq:matrix_param}
S(P_{X}) \defeq \Exp\brack*{\paren*{\widetilde{X}\widetilde{X}^{T} - I}^{2}}, \quad\quad R(P_{X}) \defeq \sup_{v \in S^{d-1}} \Exp\brack*{\paren*{\inp{v}{\widetilde{X}}^{2} - 1}^2}.
\end{equation}
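Both parameters admit natural plug-in estimates from whitened samples. In the sketch below (ours), the supremum over the sphere in $R(P_{X})$ is approximated by random directions, so the second returned value is only a lower estimate.
\begin{verbatim}
import numpy as np

def variance_like_params(X_tilde, n_dirs=2000, seed=0):
    """Plug-in estimates of lambda_max(S(P_X)) and R(P_X) from the rows
    of the whitened sample matrix X_tilde."""
    rng = np.random.default_rng(seed)
    n, d = X_tilde.shape
    M = np.zeros((d, d))
    for x in X_tilde:
        A = np.outer(x, x) - np.eye(d)
        M += A @ A
    lam_S = np.linalg.eigvalsh(M / n)[-1]
    V = rng.standard_normal((n_dirs, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    proj = (X_tilde @ V.T) ** 2 - 1.0        # shape (n, n_dirs)
    R_hat = float((proj ** 2).mean(axis=0).max())
    return lam_S, R_hat
\end{verbatim}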
Our first result is the following proposition, which provides an asymptotic lower bound on the quantiles of $1 - \lambdamin(\widetilde{\Sigma}_{n})$, and a nearly matching non-asymptotic upper bound.
Assume that $P_{X}$ has finite fourth moments. Then for all $\delta \in (0, 0.1)$,
\begin{equation*}
\lim_{n \to \infty} \sqrt{n} \cdot Q_{1 - \lambdamin(\widetilde{\Sigma}_{n})}(1 - \delta) \geq \frac{1}{40} \cdot \paren*{\sqrt{\lambdamax(S(P_{X}))} + \sqrt{R(P_{X}) \log(1/\delta)}}.
\end{equation*}
Furthermore, for all $n \in \N$ and $\delta \in (0, 1)$,
\begin{equation*}
Q_{1 - \lambdamin(\widetilde{\Sigma}_{n})}(1-\delta) \leq \sqrt{\frac{8\lambdamax(S(P_{X}))\log(3d)}{n}} + \sqrt{\frac{2R(P_{X}) \log(1/\delta)}{n}} + \frac{(2\log(3d) + 4 \log(1/\delta))}{3n}.
\end{equation*}
Our second result extends the upper bound in Proposition <ref> to the case where $\lambdamin(\widetilde{\Sigma}_{n}) = \inf_{v \in S^{d-1}} n^{-1} \sum_{i=1}^{n} \inp{v}{\widetilde{X}_i}^{2}$ is subject to a direction dependent adversarial truncation. This result is needed in our analyses of Section <ref>, from which we recall the definition of $a^{*}$ for a sequence $a$.
Let $\delta \in (0, 1/2)$ be such that $k = 8\log(2/\delta)$ is an integer satisfying $1 \leq k \leq \floor{n/2}$. For $(v, i) \in S^{d-1} \times [n]$, define $Y_{i, v} \defeq \inp{v}{\widetilde{X}_i}^{2}$ and $\overline{\lambda}_{\normalfont\text{min}}(\widetilde{\Sigma}_{n}) \defeq \inf_{v \in S^{d-1}} n^{-1}\sum_{i=k+1}^{n-k} Y_{i, v}^{*}$. Then, if $n \geq 8\log(6d)$,
\begin{equation*}
Q_{(1-2k/n) - \overline{\lambda}_{\normalfont\text{min}}(\widetilde{\Sigma}_{n})}(1 - \delta) \leq 100 \paren*{\sqrt{\frac{8\log(6d)(\lambdamax(S(P_{X})) + 1)}{n}} + \sqrt{\frac{(R(P_{X}) + 1) \log(1/\delta)}{n}}}.
\end{equation*}
Comparison with existing results. To the best of our knowledge, the only known lower bound, asymptotic or not, on the quantiles of $1-\lambdamin(\widetilde{\Sigma}_{n})$ is due to <cit.>. This bound however is distribution-free and decays fast, as $\log(1/\delta)/n$. In terms of upper bounds comparable to that of Proposition <ref>, the closest result we are aware of is due to Oliveira, 2016 (see also Zhivotovskiy, 2024), who proved $\sqrt{n} Q_{1 - {\lambda}_{\text{min}}(\widetilde{\Sigma}_{n})}(1 - \delta) \lesssim \sqrt{(R(P_{X})+1)\brack*{d + \log(1/\delta)}}$. In general, our upper bound and theirs are not comparable, and when combined they yield the best of both. Nevertheless, we suspect that our bound is better on heavy-tailed problems. Indeed, by Jensen's inequality, it is not hard to see that $\lambdamax(S(P_{X})) \leq R(P_{X}) \cdot d$, so our upper bound from Proposition <ref> can be worse by at most a $\sqrt{\log(d)}$ factor. The extreme case occurs when $X$ is a centred Gaussian, for which it is known that Oliveira's bound is tight [Koltchinskii and Lounici, 2017]. On the other hand, consider $X$ with centred independent coordinates, where the first coordinate has kurtosis $\kappa \gg 1$ while the other coordinates have constant kurtosis. Then Oliveira's bound scales as $\sqrt{\kappa \cdot d}$, while ours scales as $\sqrt{\kappa \cdot \log(d)}$. Finally, versions of Proposition <ref> that mimic Oliveira's bound can be deduced from the recent literature [Abdalla and Zhivotovskiy, 2023, Oliveira and Rico, 2022]. The same considerations apply when comparing these results.
On the fourth moment assumption.
We carried out our analysis under a fourth moment assumption on $P_{X}$. We argue here that this is in some sense the most natural assumption under which to study this problem. Indeed, recall that $\widetilde{\Sigma}_{n}$ is an empirical average of the random matrices $\widetilde{X}_{i}\widetilde{X}_{i}^{T}$. Therefore, by the law of large numbers, $\widetilde{\Sigma}_{n} \overset{d}{\to} I_{d \times d}$, and by the continuous mapping theorem, $\lambdamin(\widetilde{\Sigma}_{n}) \overset{d}{\to} 1$ as $n \to \infty$. To say something about the rate of this convergence, the most natural assumption to make is that the entries of the random matrix $\widetilde{X}\widetilde{X}^{T}$ have finite variance so that the CLT holds. This is equivalent to assuming that $P_{X}$ has fourth moments.
On the critical sample size. Our main application of Propositions <ref> and <ref> is in providing upper bounds on the critical sample size $n^{*}(P_{X}, \delta) \defeq \min\brace*{n \in \N \st Q_{1 - \lambdamin(\widetilde{\Sigma}_{n})}(1-\delta/2) \leq 1/4}$. In particular, these upper bounds correspond to the sample size restrictions in Corollary <ref> and Theorems <ref> and <ref>. We claimed after the statement of these results that these restrictions were in some sense optimal; we expand on this here. Let $L \defeq \lim_{n \to \infty} \sqrt{n} \cdot Q_{1 - \lambdamin(\widetilde{\Sigma}_{n})}(1 - \delta)$, and define $n_{0}(P_{X}, \delta) \defeq \min\brace*{n \in \N \st m \geq n \Rightarrow Q_{1 - \lambdamin(\widetilde{\Sigma}_{m})}(1 - \delta/2) \geq \frac{L}{4\sqrt{m}}}$. If $n_{0}(P_{X}, \delta) \leq n^{*}(P_{X}, \delta)$, then we can reverse the above-mentioned upper bounds using the first item of Proposition <ref>, up to a $\sqrt{\log(d)}$ factor.
In words, if the critical sample size is determined by the asymptotic behaviour of the $1-\delta/2$ quantile of $1-\lambdamin(\widetilde{\Sigma}_{n})$, then our bounds on the critical sample size are tight. When the hypothesis in this last statement is true remains unclear however. The choice of the constant $1/4$ in the above argument is arbitrary, and can be replaced with any absolute constant.
Assume that $P_{X}$ has fourth moments, i.e. $\Exp\brack{\norm{X}_{4}^{4}} < \infty$ for $X \sim P_{X}$. Define
\begin{equation*}
S(P_{X}) \defeq \Exp\brack*{\paren*{\tilde{X}\tilde{X}^{T} - I}^{2}}, \quad\quad R(P_{X}) \defeq \sup_{v \in S^{d-1}} \Exp\brack*{\paren*{\inp{v}{\tilde{X}}^{2} - 1}^2}.
\end{equation*}
Let $\delta \in (0, 1)$. If
\begin{equation*}
n \geq \max\brace*{64 \lambdamax(S(P_{X})) (1 + \log(d)), 128 R(P_{X}) \log(1/\delta), \frac{4(1 + \log(d) + 4\log(1/\delta))}{3}}
\end{equation*}
then
\begin{equation*}
Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1 - \delta) \leq 1 + 2\paren*{\sqrt{\frac{8\lambdamax(S)[1+\log(d)]}{n}} + \sqrt{\frac{2R \log(1/\delta)}{n}} + \frac{1 + \log(d) + 4 \log(1/\delta)}{3n}}
\end{equation*}
§ CONCLUSION
In this paper, we studied minimax linear regression under the quantile risk. We gave an in-depth characterization of the minimax risk over the Gaussian class $\mathcal{P}_{\normalfont\text{Gauss}}(P_{X}, \sigma^2)$, and leveraged these results to establish the minimaxity, up to absolute constants, of the min-max regression procedure for $p$-norm regression problems. While the problem of estimation with high confidence has been studied intensely recently, we are not aware of its formalization through quantiles as was done in this paper. We hope this new perspective proves fruitful in advancing both our understanding of learning problems and our ability to design efficient solutions for them.
Resources used in preparing this research were provided in part by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute. CM acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2021-03445. MAE was partially supported by NSERC Grant [2019-06167], CIFAR AI Chairs program, and CIFAR AI Catalyst grant.
[Abdalla and Zhivotovskiy, 2023]
Pedro Abdalla and Nikita Zhivotovskiy.
Covariance Estimation: Optimal Dimension-free Guarantees for Adversarial Corruption and Heavy Tails, July 2023.
[Adil et al., 2023]
Deeksha Adil, Rasmus Kyng, Richard Peng, and Sushant Sachdeva.
Fast Algorithms for $\ell_{p}$-Regression, October 2023.
[Audibert and Catoni, 2011]
Jean-Yves Audibert and Olivier Catoni.
Robust Linear Least Squares Regression.
The Annals of Statistics, 2011.
[Bousquet, 2002]
Olivier Bousquet.
A Bennett concentration inequality and its application to suprema of empirical processes.
Comptes Rendus Mathematique, January 2002.
[Catoni, 2012]
Olivier Catoni.
Challenging the empirical mean and empirical variance: A deviation study.
Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, November 2012.
[Catoni, 2016]
Olivier Catoni.
Pac-bayesian bounds for the gram matrix and least squares regression with a random design.
arXiv preprint arXiv:1603.05229, 2016.
[Chinot et al., 2020]
Geoffrey Chinot, Guillaume Lecué, and Matthieu Lerasle.
Robust statistical learning with Lipschitz and convex loss functions.
Probability Theory and Related Fields, April 2020.
[Depersin and Lecué, 2022]
Jules Depersin and Guillaume Lecué.
Optimal robust mean and location estimation via convex programs with respect to any pseudo-norms.
Probability Theory and Related Fields, August 2022.
[Devroye et al., 2016]
Luc Devroye, Matthieu Lerasle, Gabor Lugosi, and Roberto I. Oliveira.
Sub-Gaussian mean estimators.
The Annals of Statistics, December 2016.
[El Hanchi and Erdogdu, 2023]
Ayoub El Hanchi and Murat A. Erdogdu.
Optimal Excess Risk Bounds for Empirical Risk Minimization on $p$-Norm Linear Regression.
In Thirty-Seventh Conference on Neural Information Processing Systems, November 2023.
[Gupta et al., 2023]
Shivam Gupta, Jasper C. H. Lee, Eric Price, and Paul Valiant.
Minimax-Optimal Location Estimation.
In Thirty-Seventh Conference on Neural Information Processing Systems, November 2023.
[Hsu and Sabato, 2016]
Daniel Hsu and Sivan Sabato.
Loss Minimization and Parameter Estimation with Heavy Tails.
Journal of Machine Learning Research, 2016.
[Keener, 2010]
Robert W Keener.
Theoretical statistics: Topics for a core course.
Springer Science & Business Media, 2010.
[Klein and Rio, 2005]
T. Klein and E. Rio.
Concentration around the mean for maxima of empirical processes.
The Annals of Probability, May 2005.
[Koltchinskii and Lounici, 2017]
Vladimir Koltchinskii and Karim Lounici.
Concentration inequalities and moment bounds for sample covariance operators.
Bernoulli, February 2017.
[Lecué and Lerasle, 2019]
Guillaume Lecué and Matthieu Lerasle.
Learning from MOM's principles: Le Cam's approach.
Stochastic Processes and their Applications, November 2019.
[Lecué and Lerasle, 2020]
Guillaume Lecué and Matthieu Lerasle.
Robust machine learning by median-of-means: Theory and practice.
The Annals of Statistics, April 2020.
[Lecué and Mendelson, 2016]
Guillaume Lecué and Shahar Mendelson.
Learning subgaussian classes: Upper and minimax bounds, September 2016.
[Lee and Valiant, 2022]
Jasper C.H. Lee and Paul Valiant.
Optimal Sub-Gaussian Mean Estimation in $\mathbb{R}$.
In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), Denver, CO, USA, February 2022.
[Lehmann and Casella, 2006]
Erich L Lehmann and George Casella.
Theory of point estimation.
Springer Science & Business Media, 2006.
[Lugosi and Mendelson, 2019]
Gábor Lugosi and Shahar Mendelson.
Risk minimization by median-of-means tournaments.
Journal of the European Mathematical Society, December 2019a.
[Lugosi and Mendelson, 2019]
Gábor Lugosi and Shahar Mendelson.
Sub-Gaussian estimators of the mean of a random vector.
The Annals of Statistics, April 2019b.
[Lugosi and Mendelson, 2021]
Gábor Lugosi and Shahar Mendelson.
Robust multivariate mean estimation: The optimality of trimmed mean.
The Annals of Statistics, February 2021.
[Mendelson, 2017]
Shahar Mendelson.
“Local” vs. “global” parameters—breaking the Gaussian complexity barrier.
The Annals of Statistics, October 2017.
[Mendelson and Zhivotovskiy, 2020]
Shahar Mendelson and Nikita Zhivotovskiy.
Robust covariance estimation under $L_{4}-L_{2}$ norm equivalence.
The Annals of Statistics, June 2020.
[Mourtada, 2022]
Jaouad Mourtada.
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices.
The Annals of Statistics, August 2022.
[Oliveira and Resende, 2023]
Roberto I. Oliveira and Lucas Resende.
Trimmed sample means for robust uniform mean estimation and regression, February 2023.
[Oliveira and Rico, 2022]
Roberto I. Oliveira and Zoraida F. Rico.
Improved covariance estimation: Optimal robustness and sub-Gaussian guarantees under heavy tails, September 2022.
[Oliveira, 2016]
Roberto Imbuzeiro Oliveira.
The lower tail of random quadratic forms with applications to ordinary least squares.
Probability Theory and Related Fields, 2016.
[Saumard, 2018]
Adrien Saumard.
On optimality of empirical risk minimization in linear aggregation.
Bernoulli, August 2018.
[Tropp, 2015]
Joel A. Tropp.
The Expected Norm of a Sum of Independent Random Matrices: An Elementary Approach, October 2015a.
[Tropp, 2015]
Joel A. Tropp.
An Introduction to Matrix Concentration Inequalities, January 2015b.
[Van Handel, 2017]
Ramon Van Handel.
Structured random matrices.
Convexity and concentration, pages 107–156, 2017.
[Zhivotovskiy, 2024]
Nikita Zhivotovskiy.
Dimension-free bounds for sums of independent matrices and simple tensors via the variational principle.
Electronic Journal of Probability, January 2024.
§ PRELIMINARIES
§.§ Pseudo-inverses and quantile function
We say a function $f: \R \to \R$ is increasing if $x < y$ implies $f(x) \leq f(y)$, and strictly increasing if $x < y$ implies $f(x) < f(y)$. For a function $f: \R \to \R$, we define $\im(f) = \brace*{f(x) \st x \in \R}$.
Let $f: \R \to \R$ be an increasing function. We define $f^{-}: [-\infty, \infty] \to [-\infty, \infty]$, the pseudo-inverse of $f$, by
\begin{equation*}
f^{-}(y) \defeq \inf\brace*{x \in \R \st f(x) \geq y},
\end{equation*}
with the conventions $\inf \varnothing \defeq \infty$ and $\inf \R \defeq -\infty$.
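Numerically, the pseudo-inverse can be approximated by bisection whenever a bracketing interval is available, as in the following minimal sketch (ours, with a crude handling of the infinite conventions).
\begin{verbatim}
import numpy as np

def pseudo_inverse(f, y, lo=-1e9, hi=1e9, tol=1e-9):
    """Bisection sketch of f^-(y) = inf{x : f(x) >= y} for increasing f,
    on the assumed bracketing interval [lo, hi]."""
    if f(hi) < y:
        return np.inf        # the set {x : f(x) >= y} is empty in [lo, hi]
    if f(lo) >= y:
        return -np.inf       # crude stand-in: the infimum lies at or below lo
    while hi - lo > tol:     # invariant: f(hi) >= y and f(lo) < y
        mid = 0.5 * (lo + hi)
        if f(mid) >= y:
            hi = mid
        else:
            lo = mid
    return hi
\end{verbatim}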
The following holds.
* Let $f: \R \to \R$ be an increasing function. Then for all $x \in \R$, $f^{-}(f(x)) \leq x$.
* Let $f, g$ be increasing functions from $\R$ to $\R$. If $f \geq g$ then $f^{-} \leq g^{-}$.
* Let $I$ be an index set and let $\brace*{f_i}_{i \in I}$ be a collection of increasing functions from $\R$ to $\R$. Then
\begin{equation*}
\paren*{\sup_{i \in I} f_{i}}^{-} \leq \inf_{i \in I} f_{i}^{-} \leq \sup_{i \in I} f_{i}^{-} \leq \paren*{\inf_{i \in I} f_{i}}^{-}.
\end{equation*}
* Let $f: \R \to \R$ be a strictly increasing function, so that it is a bijection from $\R$ to $\im(f)$. Denote by $f^{-1}:\im(f) \to \R$ the inverse of $f$. Then for all $y \in \im(f)$, $f^{-}(y) = f^{-1}(y)$.
* Let $f: \R \to \R$ be an increasing right-continuous function. Then for all $y \in [-\infty, \infty]$,
\begin{equation*}
f^{-}(y) = \min\brace*{x \in \R \st f(x) \geq y},
\end{equation*}
with the conventions $\min \varnothing \defeq \infty$ and $\min \R \defeq -\infty$.
* Let $I$ be an index set and let $\brace*{f_i}_{i \in I}$ be a collection of increasing right-continuous functions from $\R$ to $\R$. Then
\begin{equation*}
\sup_{i \in I} f_{i}^{-} = \paren*{\inf_{i \in I} f_{i}}^{-}.
\end{equation*}
* Let $f: \R \to \R$ be increasing and right-continuous, and let $g: \R \to \R$ be increasing. Then
\begin{equation*}
(f \circ g)^{-} = g^{-} \circ f^{-}.
\end{equation*}
* Let $(f_n)_{n \in \N}$ be a decreasing sequence of increasing right-continuous functions, and assume that $f_{n} \to f$ pointwise as $n \to \infty$. Then,
\begin{equation*}
\sup_{n \in \N} f_{n}^{-} = f^{-}.
\end{equation*}
* We have, since $f(x) \geq f(x)$,
\begin{equation*}
f^{-}(f(x)) = \inf\brace*{z \in \R \st f(z) \geq f(x)} \leq x.
\end{equation*}
* Fix $y \in [-\infty, \infty]$, and define $S_{f} \defeq \brace*{x \in \R \st f(x) \geq y}$, and $S_{g} \defeq \brace*{x \in \R \st g(x) \geq y}$. We claim that $S_{g} \subset S_{f}$ from which the statement follows. If $S_{g} = \varnothing$, the statement follows trivially. Otherwise let $x \in S_{g}$. Then $f(x) \geq g(x) \geq y$, so $x \in S_{f}$.
* We prove the last inequality. The first follows from a similar argument. By definition, for all $j \in I$, $\sup_{i \in I} f_i \geq f_j$. Applying the second item yields that for all $j \in I$, $\paren*{\sup_{i \in I} f_i}^{-} \leq f_{j}^{-}$. Taking the infimum over $j \in I$ yields the result.
* Let $y \in \im(f)$ and let $S \defeq \brace*{x \in \R \st f(x) \geq y}$. We claim that $f^{-1}(y) = \min S$, from which the statement follows since then $f^{-}(y) = \inf S = \min S = f^{-1}(y)$. Indeed, since $f(f^{-1}(y)) = y$, we have $f^{-1}(y) \in S$. Now suppose that there exists an $x \in S$ such that $f^{-1}(y) > x$. Since $f$ is strictly increasing, we would have $y = f(f^{-1}(y)) > f(x)$, which contradicts the fact that $x \in S$. Therefore $f^{-1}(y) \leq x$ for all $x \in S$. This proves that $f^{-1}(y) = \min S$.
* The statement holds trivially if $f^{-}(y) \in \brace*{-\infty, \infty}$. Otherwise, $f^{-}(y) \in \R$, and since the set $S \defeq \brace*{x \in \R \st f(x) \geq y}$ is upward closed (as $f$ is increasing) with infimum $f^{-}(y)$, we have $x_k \defeq f^{-}(y) + 1/k \in S$ for all $k \in \N$, and therefore $f(x_k) \geq y$. Furthermore, $\lim_{k \to \infty} x_k = f^{-}(y)$ and $x_k > f^{-}(y)$, so by the right-continuity of $f$ we obtain
\begin{equation*}
f(f^{-}(y)) = \lim_{k \to \infty} f(x_k) \geq y.
\end{equation*}
Therefore $f^{-}(y) \in S$, which implies $f^{-}(y) = \min S$.
* The inequality $(\leq)$ is covered by the third item, therefore it is enough to prove the inequality $(\geq)$. Let $y \in [-\infty, \infty]$. We claim that
\begin{equation*}
\sup_{i \in I}f^{-}_{i}(y) \geq \paren*{\inf_{i \in I}f_i}^{-}(y).
\end{equation*}
The statement follows trivially if $\sup_{i \in I}f^{-}_{i}(y) = \infty$. Otherwise, we have $f^{-}_{i}(y) < \infty$ for all $i \in I$. If $\sup_{i \in I}f^{-}_{i}(y) = -\infty$, then $f_{i}^{-}(y) = -\infty$ for all $i \in I$, which implies that $f_{i}(x) \geq y$ for all $x \in \R$. This in turn implies that for all $x \in \R$, $\inf_{i \in I}f_{i}(x) \geq y$ and therefore $\paren*{\inf_{i \in I}f_i}^{-}(y) = -\infty$. It remains to consider the case where $\sup_{i \in I}f^{-}_{i}(y) \in \R$. We claim that
\begin{equation*}
\inf_{i \in I} f_{i}\paren*{\sup_{j \in I}f^{-}_{j}(y)} \geq y
\end{equation*}
from which the main claim follows by definition of the pseudo-inverse. Indeed, let $i \in I$. If $f^{-}_{i}(y) \in \R$, then we have
\begin{equation*}
f_{i}\paren*{\sup_{j \in I}f^{-}_{j}(y)} \geq f_{i}(f_{i}^{-}(y)) \geq y
\end{equation*}
where the first inequality holds since $f_i$ is increasing, and the second by the fifth item and the fact that $f^{-}_{i}(y) \in \R$. Otherwise $f_{i}^{-}(y) = -\infty$, and therefore $f_{i}(x) \geq y$ for all $x \in \R$, which in particular implies the desired statement since $\sup_{i \in I}f^{-}_{i}(y) \in \R$.
* Let $y \in [-\infty, \infty]$. By the assumed properties of $f$ and the fifth item, we have $f(g(x)) \geq y$ if and only if $g(x) \geq f^{-}(y)$. Therefore
\begin{align*}
(f \circ g)^{-}(y) &= \inf\brace*{x \in \R \st f(g(x)) \geq y} \\
&= \inf\brace*{x \in \R \st g(x) \geq f^{-}(y)} \\
&= g^{-}(f^{-}(y)) = (g^{-} \circ f^{-})(y).
\end{align*}
* We start with the inequality ($\leq$). Let $x \in \R$. Since $(f_{n}(x))_{n \in \N}$ is decreasing, we have $f(x) = \lim_{n \to \infty} f_{n}(x) = \inf_{n \in \N} f_{n}(x)$. Therefore, for all $n \in \N$, we have $f_{n} \geq f$. By the second item, we therefore have $f^{-}_{n} \leq f^{-}$. Taking supremum over $n$ yields the result.
For the inequality $(\geq)$, let $y \in \R$, and suppose that $\sup_{n \in \N} f_{n}^{-}(y) < f^{-}(y)$. If $\sup_{n \in \N} f_{n}^{-}(y) = -\infty$, then for all $x \in \R$ and for all $n \in \N$, $f_{n}(x) \geq y$, which implies that for all $x \in \R$, $f(x) = \lim_{n \to \infty} f_{n}(x) \geq y$, and therefore $f^{-}(y) = -\infty$, contradicting the strict inequality. Otherwise $x^{*} \defeq \sup_{n \in \N} f_{n}^{-}(y) \in \R$, and either $f^{-}(y) = \infty$ or $f^{-}(y) \in \R$.
If $f^{-}(y) = \infty$, then on the one hand, for all $x \in \R$, $\lim_{n \to \infty} f_{n}(x) = f(x) < y$. On the other hand, for all $n \in \N$, $f_{n}\paren*{x^{*}} \geq y$. Indeed, if $f^{-}_{n}(y) \in \R$, then by the fifth item $f_{n}(x^{*}) \geq f_{n}(f^{-}_{n}(y)) \geq y$. Otherwise, $f^{-}_{n}(y) = -\infty$, so that $f_{n}(x) \geq y$ for all $x \in \R$, and in particular $f_{n}(x^{*}) \geq y$. But then we get the contradiction $y > \lim_{n \to \infty} f_{n}\paren*{x^{*}} \geq y$.
Finally, if $f^{-}(y) \in \R$, define $\eps \defeq f^{-}(y) - x^{*} > 0$. By definition of $x^{*}$, $f^{-}(y) - \eps \geq f^{-}_{n}(y)$ for all $n \in \N$. We claim that for all $n \in \N$
\begin{equation*}
f_{n}(f^{-}(y) - \eps) \geq y.
\end{equation*}
Indeed, if $f^{-}_{n}(y) \in \R$, then by the fifth item $f_{n}(f^{-}(y) - \eps) \geq f_{n}(f^{-}_{n}(y)) \geq y$. Otherwise, $f^{-}_{n}(y) = -\infty$, so that $f_{n}(x) \geq y$ for all $x \in \R$, and in particular $f_{n}(f^{-}(y) - \eps) \geq y$ since $f^{-}(y) - \eps \in \R$. Taking the limit as $n \to \infty$ yields
\begin{equation*}
\lim_{n \to \infty} f_{n}(f^{-}(y) - \eps) = f(f^{-}(y)-\eps) \geq y
\end{equation*}
contradicting the minimality of $f^{-}(y)$.
For a random variable $X$, we denote by $F_{X}: \R \to \R$ its cumulative distribution function $F_{X}(x) \defeq \Prob\paren*{X \leq x}$, and by $Q_{X}: [-\infty, \infty] \to [-\infty, \infty]$ its quantile function $Q_{X}(p) \defeq F_{X}^{-}(p)$. Since $F_{X}$ is right-continuous, the fifth item of Lemma <ref> gives
\begin{equation*}
Q_{X}(p) = \min\brace*{x \in \R \st F_{X}(x) \geq p}.
\end{equation*}
Furthermore, since $\lim_{x \to -\infty} F_{X}(x) = 0$ and $\lim_{x \to \infty} F_{X}(x) = 1$, it is easy to verify that $Q_{X}(p) \in \R$ for all $p \in (0, 1)$ and $Q_{X}(0) = -\infty$. If $X, Y$ are two random variables, we define the random variable $F_{X \mid Y}(x) \defeq \Prob\paren*{X \leq x \st Y}$ and we note that $F_{X}(x) = \Exp\brack{F_{X \mid Y}(x)}$ for all $x \in \R$.
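To make the conventions concrete, here is a small sketch (ours, purely illustrative) evaluating the quantile function of a Bernoulli(1/2) variable, whose CDF is a right-continuous step function:

```python
import math

def cdf_bernoulli(x):
    """CDF of a Bernoulli(1/2): right-continuous, with jumps at 0 and 1."""
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

def quantile(cdf, p, grid):
    """Q(p) = min{x in grid : cdf(x) >= p}, with Q(p) = -inf for p <= 0."""
    if p <= 0:
        return -math.inf
    for x in grid:
        if cdf(x) >= p:
            return x
    return math.inf

grid = [k / 100 for k in range(-100, 201)]         # -1.00, -0.99, ..., 2.00
assert quantile(cdf_bernoulli, 0.5, grid) == 0.0   # F(0) = 0.5, minimum attained at 0
assert quantile(cdf_bernoulli, 0.51, grid) == 1.0  # must wait for the jump at 1
assert quantile(cdf_bernoulli, 0.0, grid) == -math.inf
```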
Let $\varphi: \R \to \R$ be strictly increasing and left continuous. Then for all $p \in (0, 1)$
\begin{equation*}
Q_{\varphi(X)}(p) = \varphi(Q_{X}(p)).
\end{equation*}
where we define $\varphi(\infty) \defeq \infty$ and $\varphi(-\infty) \defeq -\infty$.
The statement, or more precisely the obvious modification of it to accommodate $Q_{X}(p)$ taking values in $\brace*{-\infty, \infty}$ (with $\varphi(\pm\infty)$ defined as the corresponding limits of $\varphi$), fails in general for $p \in \brace*{0, 1}$. Indeed, take $X \sim \mathcal{N}(0,1)$ and $\varphi(x) = \exp(x)$. Then $Q_{X}(0) = -\infty$ and $\lim_{x \to -\infty} \exp(x) = 0$, while
\begin{equation*}
Q_{\exp(X)}(0) = \min\brace*{x \in \R \st \Prob\paren*{\exp(X) \leq x} \geq 0} = -\infty.
\end{equation*}
Similarly, taking $\varphi(x) = \frac{1}{1+\exp(-x)}$ and $p=1$ with $X \sim \mathcal{N}(0, 1)$ gives a counterexample for the case $p = 1$.
Let $\eps_{-} \defeq \Prob\paren*{X = -\infty}$ and $\eps_{+} \defeq \Prob\paren*{X = \infty}$. By definition, $\varphi(X) = -\infty \Leftrightarrow X = -\infty$ and $\varphi(X) = \infty \Leftrightarrow X = \infty$, so the identity holds trivially for all $p \in (0, \eps_{-}] \cup [1-\eps_{+}, 1)$. Now consider the case $p \in I \defeq (\eps_{-}, 1-\eps_{+})$. First, since $\lim_{x \to -\infty} F_{X}(x) = \eps_{-}$ and $\lim_{x \to \infty} F_{X}(x) = 1- \eps_{+}$ by continuity of probability measures, we have $Q_{X}(p) \in \R$. The same argument shows that $Q_{\varphi(X)}(p) \in \R$. Now, since $\varphi$ is strictly increasing,
\begin{equation*}
F_{X}(x) = \Prob\paren*{X \leq x} = \Prob\paren*{\varphi(X) \leq \varphi(x)} = F_{\varphi(X)}(\varphi(x)) = (F_{\varphi(X)} \circ \varphi)(x).
\end{equation*}
Therefore, by the penultimate item of Lemma <ref>, we have
\begin{equation}
\label{eq:pf_lem_2_1}
Q_{X} = F_{X}^{-} = \varphi^{-} \circ F_{\varphi(X)}^{-} = \varphi^{-} \circ Q_{\varphi(X)}.
\end{equation}
We claim that for all $p \in I$,
\begin{equation}
\label{eq:pf_lem_2_2}
(\varphi \circ \varphi^{-} \circ Q_{\varphi(X)})(p) = Q_{\varphi(X)}(p).
\end{equation}
By the fourth item of Lemma <ref>, it is enough to show that $Q_{\varphi(X)}(p) \in \im(\varphi)$ for all $p \in I$. This will be the goal of the proof. Let $p \in I$, and define
$S \defeq \brace*{x \in \R \st \varphi(x) \leq Q_{\varphi(X)}(p)}$. We claim that $S$ is non-empty and bounded above. Indeed, suppose not. Since $\varphi$ is increasing, $S$ is a down-set, so it is either empty or, if unbounded above, all of $\R$; that is, either $\varphi(x) > Q_{\varphi(X)}(p)$ for all $x \in \R$ or $\varphi(x) \leq Q_{\varphi(X)}(p)$ for all $x \in \R$. In the former case, this implies that for all $x \in \R$
\begin{equation*}
F_{X}(x) = \Prob\paren*{X \leq x} = \Prob\paren*{\varphi(X) \leq \varphi(x)} \geq \Prob\paren*{\varphi(X) \leq Q_{\varphi(X)}(p)} \geq p > \eps_{-},
\end{equation*}
where the second inequality follows from the fifth item of Lemma <ref> and the fact that $Q_{\varphi(X)}(p) \in \R$. This leads to the contradiction
\begin{equation*}
\eps_{-} = \lim_{x \to -\infty} F_{X}(x) \geq p > \eps_{-}.
\end{equation*}
In the latter case, we get that for all $x \in \R$
\begin{equation*}
F_{X}(x) = \Prob\paren*{X \leq x} = \Prob\paren*{\varphi(X) \leq \varphi(x)} \leq \Prob\paren*{\varphi(X) \leq Q_{\varphi(X)}(p)}.
\end{equation*}
This leads to
\begin{equation*}
1 - \eps_{+} = \lim_{x \to \infty} F_{X}(x) \leq \Prob\paren*{\varphi(X) \leq Q_{\varphi(X)}(p)} \leq 1 - \eps_{+}.
\end{equation*}
where the last inequality follows from the fact that $Q_{\varphi(X)}(p) \in \R$. Now, since
\begin{equation*}
\Prob\paren*{\varphi(X) \leq \lim_{x \to \infty} \varphi(x)} = 1-\eps_{+}
\end{equation*}
and $\varphi(x) \leq Q_{\varphi(X)}(p)$ for all $x \in \R$, we get by the minimality property of $Q_{\varphi(X)}(p)$
\begin{equation*}
Q_{\varphi(X)}(p) = \lim_{x \to \infty}\varphi(x).
\end{equation*}
But, on the one hand, we have by continuity of probability,
\begin{equation*}
\lim_{n \to \infty} \Prob\paren*{\varphi(X) \leq \varphi(n)} = \Prob\paren*{\varphi(X) \leq \lim_{n \to \infty} \varphi(n)} = 1 - \eps_{+}
\end{equation*}
yet on the other, since $\varphi$ is strictly increasing, we have $\varphi(n) < \lim_{x \to \infty}\varphi(x) = Q_{\varphi(X)}(p)$ for all $n \in \N$, so by the minimality of $Q_{\varphi(X)}(p)$, $\Prob\paren*{\varphi(X) \leq \varphi(n)} < p$ for all $n \in \N$, from which we obtain the contradiction
\begin{equation*}
1 - \eps_{+} = \lim_{n \to \infty} \Prob\paren*{\varphi(X) \leq \varphi(n)} \leq p < 1 - \eps_{+}.
\end{equation*}
This proves that $S$ is non-empty and upper bounded. Now define $x_0 \defeq \sup S$, which is guaranteed to satisfy $x_{0} \in \R$ by the upper boundedness of $S$ and its non-emptiness. We claim that $\varphi(x_0) =Q_{\varphi(X)}(p)$. Indeed, by the left-continuity of $\varphi$, we have, for any sequence $(x_n)_{n \in \N}$ in $S$ such that $x_n \to x_0$
\begin{equation}
\label{eq:pf_lem_2_3}
\varphi(x_0) = \lim_{n \to \infty} \varphi(x_n) \leq Q_{\varphi(X)}(p)
\end{equation}
where the last inequality follows from the definition of $S$ and the fact that $x_n \in S$ for all $n \in \N$. On the other hand, by the maximality of $x_0$, we have for all $x > x_0$, $\varphi(x) > Q_{\varphi(X)}(p)$, which implies that
\begin{equation}
\label{eq:pf_lem_2_4}
Q_{\varphi(X)}(p) \leq \lim_{x \to x_{0}^{+}} \varphi(x)
\end{equation}
Combining (<ref>) and (<ref>), we obtain
\begin{equation*}
Q_{\varphi(X)}(p) \in \brack*{\varphi(x_0), \lim_{x \to x_{0}^{+}} \varphi(x)}
\end{equation*}
But for any $y \in \brack*{\varphi(x_0), \lim_{x \to x_{0}^{+}} \varphi(x)}$, we have
\begin{equation*}
\Prob\paren*{\varphi(X) \leq y} = \Prob\paren*{\varphi(X) \leq \varphi(x_0)}
\end{equation*}
Indeed on the one hand
\begin{equation*}
\Prob\paren*{\varphi(X) \leq y} \geq \Prob\paren*{\varphi(X) \leq \varphi(x_0)}
\end{equation*}
On the other, if $X > x_0$, then since $\varphi$ is strictly increasing, $\varphi(X) > \lim_{x \to x_{0}^{+}} \varphi(x) \geq y$. Therefore
\begin{equation*}
\Prob\paren*{\varphi(X) \leq y} \leq \Prob\paren*{\varphi(X) \leq \lim_{x \to x_{0}^{+}} \varphi(x)} \leq \Prob\paren*{X \leq x_0} = \Prob\paren*{\varphi(X) \leq \varphi(x_0)}
\end{equation*}
but then, by the minimality of $Q_{\varphi(X)}(p)$, we obtain $Q_{\varphi(X)}(p) = \varphi(x_0)$. This proves (<ref>). Now applying $\varphi$ to both sides of (<ref>) and using (<ref>) yields the result.
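A quick numerical sanity check of the lemma (ours; it requires scipy and is not part of the proof): with $\varphi = \exp$, which is strictly increasing and continuous, and $X \sim \mathcal{N}(0,1)$, the variable $\varphi(X)$ is standard lognormal, so the two sides can be compared directly.

```python
import numpy as np
from scipy.stats import norm, lognorm

for p in [0.1, 0.25, 0.5, 0.9]:
    lhs = lognorm.ppf(p, s=1.0)   # Q_{exp(X)}(p) for X ~ N(0, 1)
    rhs = np.exp(norm.ppf(p))     # exp(Q_X(p))
    assert np.isclose(lhs, rhs), (p, lhs, rhs)
```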
§.§ Convexity
A subset $A \subset \R^{d}$ is
* convex if for all $x, y \in A$ and $t \in [0, 1]$, $(1-t) x + t y \in A$.
* symmetric if for all $x \in A$, $-x \in A$.
Let $A$ be a non-empty convex symmetric set. Then for all $\lambda \geq 1$, $A \subseteq \lambda A$.
We start by proving that $\lambda A$ is convex. Indeed let $x, y \in \lambda A$ and $t \in [0,1]$. Then by definition $x/\lambda, y/\lambda \in A$, so by convexity of $A$
\begin{equation*}
(1-t)\frac{x}{\lambda} + t\frac{y}{\lambda} \in A,
\end{equation*}
which implies
\begin{equation*}
(1-t)x + ty = \lambda \cdot \paren*{(1-t)\frac{x}{\lambda} + t\frac{y}{\lambda}} \in \lambda A.
\end{equation*}
Next we prove that $0 \in A$. Let $v \in A$. Since $A$ is symmetric, $-v \in A$, and by convexity of $A$
\begin{equation*}
0 = \frac{1}{2}v - \frac{1}{2}v = \frac{1}{2}v + \frac{1}{2}(-v) \in A.
\end{equation*}
Finally, we prove the main claim. Let $x \in A$. Then by definition $\lambda x \in \lambda A$, and also $0 \in \lambda A$ since $0/\lambda = 0 \in A$. But then by convexity of $\lambda A$ and since $\lambda \geq 1$,
\begin{equation*}
x = \paren*{1 - \frac{1}{\lambda}} 0 + \frac{1}{\lambda} \lambda x \in \lambda A.
\end{equation*}
A function $f:\R^{d} \to \R$ is
* quasiconvex if for all $x, y \in \R^{d}$ and $t \in [0,1]$, $f((1-t)x + ty) \leq \max\paren*{f(x), f(y)}$.
* symmetric if for all $v \in \R^{d}$, $f(v) = f(-v)$.
Every convex function is quasiconvex. Monotone functions of one variable, such as $x \mapsto \log(x)$ on $(0, \infty)$, are quasiconvex without being convex. Every norm is quasiconvex (and, in fact, convex) and symmetric.
The following holds.
* $f: \R^{d} \to \R$ is quasiconvex and symmetric if and only if for all $y \in \R$, $f^{-1}((-\infty, y])$ is convex and symmetric.
* If $f: \R^{d} \to \R$ is quasiconvex and symmetric then $0 \in \argmin_{x \in \R^{d}}f(x)$.
The first item is the standard characterization of quasiconvexity through the convexity of sublevel sets, combined with the observation that all sublevel sets are symmetric if and only if $f$ is. For the second, we have for any $x \in \R^{d}$
\begin{equation*}
f(0) = f\paren*{\frac{1}{2}x - \frac{1}{2}x} = f\paren*{\frac{1}{2}x + \frac{1}{2}(-x)} \leq \max\paren*{f(x), f(-x)} = f(x).
\end{equation*}
§.§ Gaussian measures
Let $Z \sim \mathcal{N}(0, \sigma^2)$. Then
\begin{equation*}
\sqrt{1 - \exp\paren*{-\frac{r^2}{2\sigma^2}}} \leq F_{\abs{Z}}(r) \leq \sqrt{1 - \exp\paren*{-\frac{2r^{2}}{\pi\sigma^2}}}.
\end{equation*}
Consider first the case where $\sigma^2 = 1/2$. Then
\begin{multline*}
\paren*{F_{\abs{Z}}(r)}^{2} = \paren*{\Prob\paren*{-r \leq Z \leq r}}^{2}
= \paren*{\frac{2}{\sqrt{\pi}}\int_{0}^{r} e^{-t^2} dt}^{2}
\\
= \frac{4}{\pi} \int_{0}^{r}\int_{0}^{r} e^{-(t^2 + s^2)} dt ds
= \frac{4}{\pi} \int_{S} e^{-(t^{2} + s^{2})} dtds
\end{multline*}
where $S \defeq \brace*{(x,y) \in \R^{2} \st 0 \leq x,y \leq r}$ is the square of side length $r$ whose lower left corner is at the origin. For a radius $\rho > 0$, define the quarter disks $D(\rho) \defeq \brace*{(x, y) \in \R^{2} \st x, y \geq 0, \sqrt{x^2 + y^{2}} \leq \rho}$. Clearly, $D(r) \subset S$, so that
\begin{align*}
\frac{4}{\pi} \int_{S} e^{-(t^{2} + s^{2})} dtds &\geq \frac{4}{\pi} \int_{D(r)} e^{-(t^{2} + s^{2})} dtds = 1 - \exp\paren*{-r^{2}}
\end{align*}
where the last equality is obtained by an explicit integration using polar coordinates. On the other hand, consider the quarter disk $D(2r/\sqrt{\pi})$, and define $A \defeq D(2r/\sqrt{\pi}) \setminus S$ and $B \defeq S \setminus D(2r/\sqrt{\pi})$. Since $S$ and $D(2r/\sqrt{\pi})$ have the same area, so do $A$ and $B$. But for all $(t, s) \in A$ and all $(x, y) \in B$, we have
\begin{equation*}
t^{2} + s^{2} \leq \frac{4r^{2}}{\pi} \leq x^{2} + y^{2} \Rightarrow \exp\paren*{-\paren{t^{2} + s^{2}}} \geq \exp\paren*{-\paren{x^{2} + y^{2}}}
\end{equation*}
Therefore
\begin{align*}
\frac{4}{\pi} \int_{S} e^{-(t^{2} + s^{2})} dtds &= \frac{4}{\pi} \paren*{\int_{S \cap D(2r/\sqrt{\pi})} e^{-(t^{2} + s^{2})} dtds + \int_{B} e^{-(t^{2} + s^{2})} dtds} \\
&\leq \frac{4}{\pi} \paren*{\int_{S \cap D(2r/\sqrt{\pi})} e^{-(t^{2} + s^{2})} dtds + \int_{A} e^{-(t^{2} + s^{2})} dtds} \\
&= \frac{4}{\pi} \int_{D(2r/\sqrt{\pi})} e^{-(t^{2} + s^{2})} dtds = 1 - \exp\paren*{-\frac{4r^{2}}{\pi}}
\end{align*}
This proves the statement for $\sigma^{2} = 1/2$. For $\sigma^2 \in (0, \infty)$, note that $Z \overset{d}{=} \sqrt{2\sigma^2} \tilde{Z}$ where $\tilde{Z} \sim \mathcal{N}(0, 1/2)$, so
\begin{equation*}
F_{\abs{Z}}(r) = \Prob\paren*{-r \leq Z \leq r} = \Prob\paren*{-\frac{r}{\sqrt{2\sigma^2}} \leq \tilde{Z} \leq \frac{r}{\sqrt{2\sigma^2}}} = F_{\abs{\tilde{Z}}}\paren*{\frac{r}{\sqrt{2\sigma^2}}},
\end{equation*}
and applying the result for $\sigma^2 = 1/2$ yields the general result.
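The bounds are also easy to check numerically, since $F_{\abs{Z}}(r) = \operatorname{erf}(r/\sqrt{2\sigma^{2}})$; the following sketch (ours, illustrative only) does so on a small grid, with a tiny tolerance for floating-point noise.

```python
import math

for sigma2 in [0.5, 1.0, 4.0]:
    for r in [0.1, 0.5, 1.0, 2.0, 5.0]:
        cdf = math.erf(r / math.sqrt(2.0 * sigma2))
        lower = math.sqrt(1.0 - math.exp(-r**2 / (2.0 * sigma2)))
        upper = math.sqrt(1.0 - math.exp(-2.0 * r**2 / (math.pi * sigma2)))
        assert lower - 1e-12 <= cdf <= upper + 1e-12, (sigma2, r)
```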
Let $X, Y$ be random vectors such that $X \mid Y = y \sim \mathcal{N}(y, \Sigma)$ for some fixed $\Sigma \in \S_{++}^{d}$ and for all $y$ in the image of $Y$. Then $X - Y \sim \mathcal{N}(0, \Sigma)$.
Let $B$ be a Borel subset of $\R^{d}$, and let $Z \sim \mathcal{N}(0, \Sigma)$. We have
\begin{equation*}
\Prob\paren*{X - Y \in B} = \Exp\brack*{\Prob\paren*{X - Y \in B \st Y}} = \Exp\brack*{\Prob\paren*{Z \in B}} = \Prob\paren*{Z \in B}.
\end{equation*}
Let $Z \sim \mathcal{N}(0, \Sigma)$ for some $\Sigma \in S_{++}^{d}$ and $d \in \N$. Let $A \subset \R^{d}$ be a convex symmetric set. Then for all $a \in \R^{d}$, we have
\begin{equation*}
\Prob\paren*{Z \in A} \geq \Prob\paren*{Z \in A + a}.
\end{equation*}
Let $d \in \N$, $Z \sim \mathcal{N}(0, I_{d \times d})$, and $f: \R^{d} \to \R$ be an $L$-Lipschitz function with respect to the Euclidean metric. Then $\Var\brack*{f(Z)} \leq L^{2}$ and
\begin{equation*}
\Prob\paren*{f(Z) - \Exp\brack*{f(Z)} \geq t} \leq \exp\paren*{-\frac{t^{2}}{2L^2}}.
\end{equation*}
§.§.§ Concentration of norms of Gaussian vectors
For this subsection, fix $d \in \N$, an arbitrary norm $\norm{\cdot}$ on $\R^{d}$, and a covariance matrix $\Sigma \in S_{++}^{d}$. Let $Z \sim \mathcal{N}(0, \Sigma)$, and define $M \defeq \Exp\brack*{\norm{Z}}$. Let $S$ denote the unit sphere of the dual norm $\norm{\cdot}_{*}$, and recall that $\norm{x} = \norm{x}_{**} = \sup_{v \in S} \abs{\inp{v}{x}}$. Define $R \defeq \sup_{v \in S} v^{T} \Sigma v$ and let $v_{*} \in \argmax_{v \in S} v^{T} \Sigma v$, the maximum being attained since $S$ is compact and the map $v \mapsto v^{T} \Sigma v$ is continuous; in particular $R = v_{*}^{T} \Sigma v_{*}$.
The function $f: \R^{d} \to \R$ given by $f(x) = \norm{\Sigma^{1/2}x}$ is $\sqrt{R}$-Lipschitz in the Euclidean metric.
\begin{multline*}
\abs{f(x) - f(y)} = \abs*{\norm{\Sigma^{1/2}x} - \norm{\Sigma^{1/2}y}} \leq \norm{\Sigma^{1/2}(x - y)} \\
= \sup_{v \in S} \abs*{\inp{\Sigma^{1/2}v}{x-y}} \leq \paren*{\max_{v \in S} \norm{\Sigma^{1/2}v}_{2}} \norm{x - y}_{2} = \sqrt{R} \norm{x - y}_{2},
\end{multline*}
since $\norm{\Sigma^{1/2}v}_{2}^{2} = v^{T} \Sigma v \leq R$ for all $v \in S$.
For all $t \geq 0$,
\begin{align*}
&M^{2} \leq \Exp\brack*{\norm{Z}^2} \leq \paren*{1 + \frac{\pi}{2}} M^2, \\
&\Prob\paren*{M - \norm{Z} \geq t} \leq \exp\paren*{-\frac{t^2}{\pi M^2}}.
\end{align*}
Notice that
\begin{equation*}
M = \Exp\brack*{\norm{Z}} = \Exp\brack*{\sup_{v \in S} \abs{\inp{v}{Z}}} \geq \sup_{v \in S} \Exp\brack*{\abs{\inp{v}{Z}}} = \sup_{v \in S} \sqrt{\frac{2}{\pi} v^{T} \Sigma v} = \sqrt{\frac{2}{\pi} v_{*}^{T} \Sigma v_{*}}.
\end{equation*}
where the inequality holds because a supremum of expectations is at most the expectation of the supremum, and the third equality follows from the fact that $\inp{v}{Z} \sim \mathcal{N}(0, v^{T} \Sigma v)$ and an explicit calculation of the expectation. We now prove the first item. The first inequality follows from Jensen's inequality. For the second, notice that $\Sigma^{-1/2}Z \sim \mathcal{N}(0, I_{d \times d})$, so that an application of Lemmas <ref> and <ref> yields
\begin{equation*}
\Exp\brack*{\norm{Z}^2} - (\Exp\brack*{\norm{Z}})^{2} = \Var\brack*{\norm{Z}} = \Var\brack*{f(\Sigma^{-1/2}Z)} \leq R = v_{*}^{T} \Sigma v_{*} \leq \frac{\pi}{2} (\Exp\brack*{\norm{Z}})^{2}.
\end{equation*}
where $f$ is as defined in Lemma <ref>. For the second item, notice that $-f$ is also $\sqrt{R}$-Lipschitz, so that again an application of Lemma <ref> yields
\begin{multline*}
\Prob\paren*{M - \norm{Z} > t} = \Prob\paren*{-f(\Sigma^{-1/2}Z) - \Exp\brack*{-f(\Sigma^{-1/2} Z)} > t} \\
\leq \exp\paren*{-\frac{t^{2}}{2v_{*}^{T}\Sigma v_{*}}} \leq \exp\paren*{-\frac{t^2}{\pi \paren*{\Exp\brack*{\norm{Z}}}^{2}}}.
\end{multline*}
For all $r \in \R$,
\begin{equation*}
l(r) \leq F_{\norm{Z}}(r) \leq \min\brace*{u_{1}(r), u_{2}(r)}
\end{equation*}
where
\begin{gather*}
l(r) \defeq \brack*{1 - \exp\paren*{- \frac{(r - M)^{2}}{2R}}} \mathbbm{1}_{[M, \infty)}(r), \\
u_1(r) \defeq \exp\paren*{\frac{-(M-r)^2}{\pi M^{2}}} \mathbbm{1}_{[0, M)}(r) + \mathbbm{1}_{[M, \infty)}(r), \quad
u_2(r) \defeq \sqrt{1 - \exp\paren*{-\frac{2r^2}{\pi R}}} \mathbbm{1}_{[0,\infty)}(r).
\end{gather*}
We start with the lower bound. Let $r \geq M$. Then
\begin{equation*}
F_{\norm{Z}}(r) = \Prob\paren*{\norm{Z} \leq r} = 1 - \Prob\paren*{\norm{Z} > r} = 1 - \Prob\paren*{\norm{Z} - M > r - M} \geq 1 - \exp\paren*{- \frac{(r - M)^{2}}{2R}}
\end{equation*}
where the last inequality follows from Lemmas <ref> and <ref>. For the first upper bound, we have, by Lemma <ref>,
\begin{equation*}
F_{\norm{Z}}(r) = \Prob\paren*{\norm{Z} \leq r} = \Prob\paren*{\norm{Z} - M \leq r - M} = \Prob\paren*{M - \norm{Z} \geq M - r} \leq u_1(r).
\end{equation*}
Finally, for the second upper bound,
\begin{equation*}
F_{\norm{Z}}(r) = \Prob\paren*{\sup_{v \in S} \abs{\inp{v}{Z}} \leq r} \leq \Prob\paren*{\abs{\inp{v_{*}}{Z}} \leq r} \leq u_{2}(r),
\end{equation*}
where the second inequality follows from the fact that $\inp{v_{*}}{Z} \sim \mathcal{N}(0, R)$ and Lemma <ref>.
§.§ Inverse Gamma measure
Let $\alpha, \beta > 0$. The inverse gamma measure $\text{Inv-Gamma}(\alpha, \beta)$ on $(0, \infty)$ has density
\begin{equation*}
f_{\alpha, \beta}(x) \defeq \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{-\alpha-1}\exp\paren*{-\frac{\beta}{x}}
\end{equation*}
with respect to Lebesgue measure, where $\Gamma$ is the gamma function.
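Equivalently, $\text{Inv-Gamma}(\alpha, \beta)$ is the law of $1/G$ for $G \sim \text{Gamma}(\alpha, \text{rate} = \beta)$. The sketch below (ours) checks the density above against scipy's parametrization, in which $\beta$ enters as the scale parameter.

```python
import math
from scipy.stats import invgamma

def inv_gamma_pdf(x, alpha, beta):
    """Density of Inv-Gamma(alpha, beta) as defined in the text."""
    return beta**alpha / math.gamma(alpha) * x**(-alpha - 1.0) * math.exp(-beta / x)

alpha, beta = 2.5, 1.7
for x in [0.2, 1.0, 3.0]:
    assert math.isclose(inv_gamma_pdf(x, alpha, beta),
                        invgamma.pdf(x, a=alpha, scale=beta), rel_tol=1e-9)
```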
Let $X \sim \text{Inv-Gamma}(\alpha, \beta)$ and $Z \sim \text{Inv-Gamma}(\alpha, \alpha)$. Let $r > 0$, and define
\begin{equation*}
x_{\alpha, \beta}(r) \defeq \frac{\beta\brack*{\exp(r)-\exp(-r)}}{2\alpha r} \quad\quad p_{\alpha}(r) \defeq \Prob\paren*{\frac{1-\exp(-2r)}{2r} \leq Z \leq \frac{\exp(2r) - 1}{2 r}}
\end{equation*}
Then for all $x \in (0, \infty)$
\begin{equation*}
p_{\alpha}(r) = \Prob\paren*{\abs{\log(X/x_{\alpha, \beta}(r))} \leq r} \geq \Prob\paren*{\abs{\log(X/x)} \leq r}.
\end{equation*}
Fix $r \in (0, \infty)$. Define $h_r(x) \defeq \Prob\paren*{\abs{\log(X/x)} \leq r}$. Then we have
\begin{align*}
\frac{d}{dx}(h_r(x)) &= \frac{d}{dx}\paren*{\Prob\paren*{\abs{\log(X/x)} \leq r}} \\
&= \frac{d}{dx}\paren*{\Prob\paren*{x \exp(-r) \leq X \leq x \exp(r)}} \\
&= \frac{d}{dx} \paren*{\int_{x\exp(-r)}^{x\exp(r)}f_{\alpha, \beta}(t)dt} \\
&= \exp(r) f_{\alpha,\beta}(x\exp(r)) - \exp(-r) f_{\alpha, \beta}(x \exp(-r))
\end{align*}
where in the last line we used the Leibniz integral rule. Setting the derivative to $0$ and solving yields $x_{\alpha, \beta}(r)$. Examining the sign of the derivative, we see that $h_r$ is non-decreasing on $(0, x_{\alpha, \beta}(r)]$ and non-increasing on $[x_{\alpha, \beta}(r), \infty)$. Therefore $x_{\alpha, \beta}(r)$ is the global maximizer of $h_r$. Now
\begin{align*}
\Prob\paren*{\abs{\log(X/x_{\alpha, \beta}(r))} \leq r} &= \Prob\paren*{x_{\alpha, \beta}(r)\exp(-r) \leq X \leq x_{\alpha, \beta}(r)\exp(r)} \\
&= \Prob\paren*{\frac{1-\exp(-2r)}{2r} \leq \frac{\alpha}{\beta} X \leq \frac{\exp(2r) - 1}{2 r}} \\
&= p_{\alpha}(r)
\end{align*}
where in the last line we used that if $X \sim \text{Inv-Gamma}(\alpha, \beta)$, then $cX \sim \text{Inv-Gamma}(\alpha, c \cdot \beta)$ for all $c > 0$.
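A numerical check of the maximizer (ours, requires scipy; not part of the proof):

```python
import numpy as np
from scipy.stats import invgamma

alpha, beta, r = 3.0, 2.0, 0.5

def h(x):
    """h_r(x) = P(x e^{-r} <= X <= x e^{r}) for X ~ Inv-Gamma(alpha, beta)."""
    return (invgamma.cdf(x * np.exp(r), a=alpha, scale=beta)
            - invgamma.cdf(x * np.exp(-r), a=alpha, scale=beta))

x_star = beta * (np.exp(r) - np.exp(-r)) / (2.0 * alpha * r)
grid = np.linspace(0.01, 5.0, 2000)
assert h(x_star) >= h(grid).max() - 1e-6   # no grid point beats x_star
```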
Let $p_{\alpha}$ be as defined in Lemma <ref>. The following holds.
* $p_{\alpha}(r)$ is non-decreasing in $\alpha$ for all $r > 0$.
* $p_{\alpha}(r)$ is strictly increasing in $r$ for all $\alpha > 0$, and $\im(p_{\alpha}) = (0, 1)$.
§ SUPREMA OF TRUNCATED EMPIRICAL PROCESSES
§.§ Truncation function
Let $\alpha, \beta \in \R$ such that $\alpha \leq \beta$. Define
\begin{equation}
\label{eq:def_1}
\phi_{\alpha, \beta}(x) \defeq
\begin{dcases*}
\beta & \quad if \quad $x > \beta$, \\
x & \quad if \quad $x \in [\alpha, \beta]$, \\
\alpha & \quad if \quad $x < \alpha$.
\end{dcases*}
\end{equation}
The following holds for all $x \in \R$.
* $c \cdot \phi_{\alpha, \beta}(x) = \phi_{c \alpha, c \beta}(cx)$ for all $c \in [0, \infty)$.
* $-\phi_{\alpha, \beta}(x) = \phi_{-\beta, -\alpha}(-x)$.
* $\phi_{\alpha, \beta}(x) + y = \phi_{\alpha + y, \beta + y} (x + y)$ for all $y \in \R$.
Just check the three possible cases for each item.
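Numerically, $\phi_{\alpha, \beta}$ is exactly a clipping operation, and the three identities can be spot-checked on random inputs (a sketch of ours, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
alpha, beta, y, c = -1.0, 1.5, 0.7, 2.3

def phi(x, a, b):
    return np.clip(x, a, b)   # phi_{a,b} applied entrywise

assert np.allclose(c * phi(x, alpha, beta), phi(c * x, c * alpha, c * beta))
assert np.allclose(-phi(x, alpha, beta), phi(-x, -beta, -alpha))
assert np.allclose(phi(x, alpha, beta) + y, phi(x + y, alpha + y, beta + y))
```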
Fix $n \in \N$. For a real valued sequence $a \defeq (a_i)_{i=1}^{n}$, define the sequence $a^{*} = (a^{*}_{i})_{i=1}^{n}$ by $a^{*}_i \defeq a_{\pi(i)}$ for all $i \in [n]$ and where $\pi: [n] \to [n]$ is a permutation that orders $a$ non-decreasingly, i.e. $a_{\pi(1)} \leq \dotsc \leq a_{\pi(n)}$. Note that this is well-defined since any such permutation gives the same $a^{*}$. Addition and scalar multiplication of sequences are as usual. For two sequences $a, b$, we say that $a \leq b$ if $a_{i} \leq b_i$ for all $i \in [n]$.
Let $a = (a_i)_{i=1}^{n}$ and $b = (b_i)_{i=1}^{n}$ be real valued sequences.
\begin{equation*}
a \leq b \Rightarrow a^{*} \leq b^{*}
\end{equation*}
Let $\pi$ and $\sigma$ be permutations of $[n]$ that order $a$ and $b$ non-decreasingly, respectively. Let $i \in [n]$. We show that $a_{\pi(i)} \leq b_{\sigma(i)}$. We consider two cases. If $\pi(i) \in \brace*{\sigma(1), \dotsc, \sigma(i)}$, then $a_{\pi(i)} \leq b_{\pi(i)} \leq b_{\sigma(i)}$. Otherwise, $\pi(i) \in \brace*{\sigma(i+1), \dotsc, \sigma(n)}$. This implies that there exists a $j \in \brace*{i+1, \dotsc, n}$ such that $\pi(j) \in \brace*{\sigma(1), \dotsc, \sigma(i)}$, from which we conclude that $a_{\pi(i)} \leq a_{\pi(j)} \leq b_{\pi(j)} \leq b_{\sigma(i)}$.
Let $k \in \brace*{1, \dotsc, \floor{n/2}}$. Define
\begin{equation}
\label{eq:def_2}
\varphi_{k}(a) \defeq \sum_{i=1}^{n} \phi_{a^{*}_{1+ k}, a^{*}_{n-k}}(a_i).
\end{equation}
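In words, $\varphi_{k}$ is a winsorized sum: every entry is clipped to the interval spanned by the $(1+k)$-th and $(n-k)$-th order statistics before summing. A direct implementation (ours; note that 0-based indexing shifts $1+k$ to $k$):

```python
import numpy as np

def varphi_k(a, k):
    a = np.asarray(a, dtype=float)
    n = len(a)
    assert 1 <= k <= n // 2
    s = np.sort(a)
    lo, hi = s[k], s[n - k - 1]   # a*_{1+k} and a*_{n-k} in 0-based indexing
    return float(np.clip(a, lo, hi).sum())

# The k most extreme values on each side are pulled back to the bulk:
a = np.array([0.1, -0.2, 0.3, 0.05, -0.1, 1e6])
print(varphi_k(a, 1))   # the outlier 1e6 is clipped down to 0.3, giving 0.55
```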
The following holds for all real-valued sequences $a = (a_i)_{i=1}^{n}$.
* $c \cdot \varphi_{k}(a) = \varphi_k(c \cdot a)$ for all $c \in \R$.
* $\varphi_{k}(a) + n \cdot c = \varphi_{k}(a + c)$ for all $c \in \R$.
* $\varphi_k(a) \leq \varphi_k(b)$ for all sequences $b = (b_i)_{i=1}^{n}$ such that $a \leq b$.
We start with the first item. Let $\pi: [n] \to [n]$ be a permutation that orders $a$ non-decreasingly. The case $c = 0$ is trivial. Now consider the case $c > 0$. Then since $c > 0$, $\pi$ also orders $c \cdot a$ non-decreasingly. Therefore $(c \cdot a)_{i}^{*} = c \cdot a^{*}_i$ and
\begin{equation*}
c \cdot \varphi_{k}(a) = \sum_{i=1}^{n} c \cdot \phi_{a^{*}_{1+k}, a^{*}_{n-k}}(a_i) = \sum_{i=1}^{n} \phi_{c \cdot a^{*}_{1+k}, c \cdot a_{n-k}^{*}}(c \cdot a_i) = \sum_{i=1}^{n} \phi_{(c \cdot a)^{*}_{1+k}, (c \cdot a)_{n-k}^{*}}(c \cdot a_i) = \varphi_{k}(c \cdot a),
\end{equation*}
where the second equality follows from the first item of Lemma <ref>.
Now consider the case $c = -1$. Then the permutation $\pi$ orders $-a$ non-increasingly, so that $(-a)_{i}^{*} = -a_{n+1-i}^{*}$ and
\begin{equation*}
-\varphi_{k}(a) = \sum_{i=1}^{n} - \phi_{a^{*}_{1+k}, a^{*}_{n-k}}(a_i) = \sum_{i=1}^{n} \phi_{-a^{*}_{n-k}, -a^{*}_{1+k}}(-a_i) = \sum_{i=1}^{n} \phi_{(-a)^{*}_{1+k}, (-a)^{*}_{n-k}}(-a_i) = \varphi_{k}(-a),
\end{equation*}
where the second equality follows from the second item of Lemma <ref>. For the case $c < 0$, we have
\begin{equation*}
c \cdot \varphi_{k}(a) = (-c) \cdot -\varphi_{k}(a) = (-c) \cdot \varphi_{k}(-a) = \varphi_{k}(c \cdot a).
\end{equation*}
For the second item, note that $(a + c)^{*}_{i} = a^{*}_{i} + c$, so the claim follows from summing the third item of Lemma <ref> over $i$. For the third item, we have by Lemma <ref> that $a^{*} \leq b^{*}$, from which we conclude
\begin{equation*}
\varphi_{k}(a) = \sum_{i=1}^{n} \phi_{a^{*}_{1+k}, a^{*}_{n-k}}(a_i) = k a^{*}_{1+k} + \sum_{i=k+1}^{n-k} a_{i}^{*} + k a_{n-k}^{*} \leq k b^{*}_{1+k} + \sum_{i=k+1}^{n-k} b_{i}^{*} + k b_{n-k}^{*} = \varphi_{k}(b).
\end{equation*}
Let $a = (a_i)_{i=1}^{n}$ and $b = (b_i)_{i=1}^{n}$ be real valued sequences such that $b \geq 0$. Then
\begin{equation*}
\varphi_{k}(a+b) \geq \varphi_k(a) + \sum_{i=1}^{n-2k} b_{i}^{*}
\end{equation*}
Let $\pi$ and $\sigma$ be permutations that order $a+b$ and $a$ non-decreasingly, respectively. By definition
\begin{equation*}
\varphi_k(a+b) = k (a+b)_{1+k}^{*} + \sum_{i=k+1}^{n-k} (a+b)_{i}^{*} + k (a+b)_{n-k}^{*}.
\end{equation*}
We lower bound each of the three terms separately. For the first, define the sets $I_{1} \defeq \brace*{\pi(1), \dotsc, \pi(1+k)}$ and $J_{1} \defeq \brace*{\sigma(1+k), \dotsc, \sigma(n)}$, and notice that
\begin{equation*}
\abs*{I_{1} \cap J_1} = \abs{I_1} + \abs{J_1} - \abs{I_1 \cup J_1} \geq (1+k) + (n-k) - n = 1.
\end{equation*}
Therefore, we have
\begin{equation*}
(a+b)_{1+k}^{*} = a_{\pi(1+k)} + b_{\pi(1 + k)} = \max\brace*{a_i + b_i \mid i \in I_1} \geq \max\brace*{a_i + b_{i} \mid i \in I_1 \cap J_1} \geq a_{\sigma(1+k)},
\end{equation*}
where the last inequality uses the non-negativity of $b$. Similarly, for the third term, define the sets $I_2 \defeq \brace*{\pi(1), \dotsc, \pi(n-k)}$ and $J_2 \defeq \brace*{\sigma(n-k), \dotsc, \sigma(n)}$, and notice that
\begin{equation*}
\abs*{I_{2} \cap J_2} = \abs{I_2} + \abs{J_2} - \abs{I_2 \cup J_2} \geq (n-k) + (1+k) - n = 1.
\end{equation*}
Therefore, we get
\begin{equation*}
(a+b)_{n-k}^{*} = a_{\pi(n-k)} + b_{\pi(n-k)} = \max\brace*{a_i + b_i \mid i \in I_2} \geq \max\brace*{a_i + b_{i} \mid i \in I_2 \cap J_2} \geq a_{\sigma(n-k)},
\end{equation*}
where again we used the non-negativity of $b$ in the last inequality. It remains to lower bound the second term. Let $S^{*} \subset I_2$ be such that $\abs{S^{*}} = k$ and $\brace*{\sigma(1), \dotsc, \sigma(k)} \cap I_{2} \subset S^{*}$; such a set exists since $\abs*{\brace*{\sigma(1), \dotsc, \sigma(k)} \cap I_{2}} \leq k \leq \abs{I_2}$. Notice that
\begin{align*}
\sum_{i=k+1}^{n-k} (a+b)_{i}^{*} &= \sum_{i=k+1}^{n-k} (a_{\pi(i)} + b_{\pi(i)}) \\
&= \max_{\substack{S \subset I_2 \\ \abs{S} = k}} \sum_{i \in I_2 \setminus S} (a_{i} + b_i) \\
&\geq \sum_{i \in I_{2} \setminus S^{*}} a_i + \sum_{i \in I_{2} \setminus S^{*}} b_i
\end{align*}
Let us further bound each term. For the first, notice that by definition of $S^{*}$, we have $(I_{2} \setminus S^{*}) \subset J_{1}$ and $\abs*{I_2 \setminus S^{*}} = n-2k$, therefore
\begin{equation*}
\sum_{i \in I_{2} \setminus S^{*}} a_i \geq \min_{\substack{T \subset J_1 \\ \abs*{T} = n-2k}} \sum_{i \in T} a_i = \sum_{i=k+1}^{n-k} a_{\sigma(i)}.
\end{equation*}
For the second, we have
\begin{equation*}
\sum_{i \in I_2 \setminus S^{*}} b_i \geq \min_{\substack{T \subset [n] \\ \abs*{T} = n-2k}} \sum_{i \in T} b_i = \sum_{i=1}^{n-2k} b_{i}^{*}
\end{equation*}
Combining the bounds yields the desired result.
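The inequality can also be spot-checked at random (a sketch of ours, reusing the implementation of $\varphi_{k}$ from above):

```python
import numpy as np

def varphi_k(a, k):
    a = np.asarray(a, dtype=float)
    n = len(a)
    s = np.sort(a)
    return float(np.clip(a, s[k], s[n - k - 1]).sum())

rng = np.random.default_rng(1)
n, k = 12, 3
for _ in range(1000):
    a = rng.normal(size=n)
    b = rng.exponential(size=n)   # non-negative, as the lemma requires
    lhs = varphi_k(a + b, k)
    rhs = varphi_k(a, k) + np.sort(b)[: n - 2 * k].sum()
    assert lhs >= rhs - 1e-9
```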
§.§ Suprema of truncated empirical processes
Let $\mathcal{T}$ be a countable index set, and let $(\brace{Z_{i, s}}_{s \in \mathcal{T}})_{i=1}^{n}$ be independent real-valued $\mathcal{T}$-indexed stochastic processes. Define $Z \defeq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} Z_{i, s}$. For $s \in \mathcal{T}$, define $Z_{s} \defeq (Z_{i, s})_{i=1}^{n}$. For independent Rademacher random variables $(\eps_i)_{i=1}^{n}$, independent of everything else, define $\mu \defeq \Exp\brack*{\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \eps_i Z_{i, s}}$. We assume throughout that $\sigma^{2} \defeq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{Z_{i, s}^{2}} < \infty$. We start by recalling the following result.
Assume that for all $s \in \mathcal{T}$ and $i \in [n]$, $\Exp\brack*{Z_{i, s}} = 0$, and that $R \defeq \sup_{(s, i) \in \mathcal{T} \times [n]} \norm{Z_{i, s}}_{\infty} < \infty$. Define $v \defeq 2R\Exp\brack*{Z} + \sigma^{2}$. Then
\begin{equation*}
\Prob\paren*{Z \geq \Exp\brack*{Z} + t} \leq \exp\paren*{-\frac{4v}{9R^{2}}h\paren*{\frac{3Rt}{2v}}},
\end{equation*}
where $h(t) \defeq 1 + t - \sqrt{1 + 2t}$ with inverse $h^{-1}(t) = t + \sqrt{2t}$. Consequently, with probability at least $1-\delta$
\begin{equation*}
Z < \Exp\brack{Z} + \frac{3R \log(1/\delta)}{2} + \sqrt{2v\log(1/\delta)}.
\end{equation*}
The following result is due to Lugosi and Mendelson, 2021.
Let $T > 0$. Then with probability at least $1 - \delta$
\begin{multline*}
\sup_{s \in \mathcal{T}} \abs*{\brace*{i \in [n] \mid\abs{Z_{i, s}} > T}} \\
< \inf_{\eps \in (0, 1)} \brace*{\frac{2\mu}{\eps T} + \frac{\sigma^{2}}{(1-\eps)^{2}T^{2}} + \sqrt{\paren*{\frac{8\mu}{\eps T} + \frac{2\sigma^{2}}{(1-\eps)^{2}T^{2}}}\log(1/\delta)} + \frac{3\log(1/\delta)}{2}}.
\end{multline*}
Let $T > 0$ and $\eps \in (0, 1)$, and define the function $\chi_{T, \eps}: \R \to [0, 1]$ by
\begin{equation*}
\chi_{T, \eps}(x) \defeq \begin{dcases*}
0 & if $x \leq (1-\eps) T$ \\
\frac{x}{\eps T} - \frac{1-\eps}{\eps} & if $x \in ((1-\eps)T, T]$ \\
1 & if $x > T$.
\end{dcases*}
\end{equation*}
Note that $\mathbbm{1}_{(T, \infty)} \leq \chi_{T,\eps} \leq \mathbbm{1}_{((1-\eps)T, \infty)}$ and $\chi_{T, \eps}$ is $(1/\eps T)$-Lipschitz. Now we have
\begin{multline}
\label{eq:pf_lem5_1}
\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \mathbbm{1}_{(T, \infty)}(\abs{Z_{i, s}}) \leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \chi_{T, \eps}(\abs{Z_{i, s}})
\\
\leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \underbrace{\chi_{T, \eps}(\abs{Z_{i, s}}) - \Exp\brack*{\chi_{T, \eps}(\abs{Z_{i, s}})}}_{\textstyle W_{i, s} \defeq } + \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{\chi_{T, \eps}(\abs{Z_{i, s}})}
\end{multline}
The second term of (<ref>) is bounded by
\begin{equation}
\label{eq:pf_lem5_2}
\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{\chi_{T, \eps}(\abs{Z_{i, s}})} \leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Prob\paren*{\abs{Z_{i, s}} > (1-\eps) T} \leq \frac{\sigma^{2}}{(1-\eps)^{2}T^{2}}.
\end{equation}
We now turn to the first term of (<ref>) which we denote by $W$. We note that $\Exp\brack{W_{i, s}} = 0$, $\abs{W_{i, s}} \leq 1$, so by Lemma <ref> we have with probability at least $1-\delta$
\begin{equation}
\label{eq:pf_lem5_3}
W < \Exp\brack*{W} + \frac{3\log(1/\delta)}{2} + \sqrt{2\paren*{2\Exp\brack{W} + \alpha^{2}} \log(1/\delta)},
\end{equation}
where $\alpha^{2} \defeq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack{W_{i, s}^{2}}$.
It remains to bound $\Exp\brack{W}$ and $\alpha^2$. The former is bounded by
\begin{multline}
\label{eq:pf_lem5_4}
\Exp\brack*{W} = \Exp\brack*{\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \chi_{T, \eps}(\abs{Z_{i, s}}) - \Exp\brack*{\chi_{T, \eps}(\abs{Z_{i, s}})}} \\
\leq 2 \Exp\brack*{\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \eps_i \chi_{T, \eps}(\abs{Z_{i, s}})}
\leq \frac{2}{\eps T} \Exp\brack*{\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \eps_i Z_{i,s}},
\end{multline}
where the first inequality is by symmetrization and the second by the contraction principle and the $(1/\eps T)$-Lipschitzness of $\chi_{T, \eps} \circ \abs{\cdot}$. The latter is bounded by
\begin{equation}
\label{eq:pf_lem5_5}
\alpha^2 = \sup_{s \in \mathcal{T}} \sum_{i=1}^{n}\Exp\brack*{W_{i, s}^{2}} \leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{\chi^{2}_{T, \eps}(\abs{Z_{i, s}})} \leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Prob\paren*{\abs{Z_{i, s}} > (1 - \eps)T} \leq \frac{\sigma^{2}}{(1 - \eps)^{2}T^{2}}.
\end{equation}
Combining (<ref>), (<ref>), and (<ref>) yields that with probability at least $1 - \delta$
\begin{equation}
\label{eq:pf_lem5_6}
W < \frac{2\mu}{\eps T} + \sqrt{\paren*{\frac{8\mu}{\eps T} + \frac{2\sigma^{2}}{(1-\eps)^{2}T^{2}}} \log(1/\delta)} + \frac{3\log(1/\delta)}{2}
\end{equation}
Combining (<ref>), (<ref>), (<ref>), and optimizing over $\eps \in (0, 1)$ yields the result.
Using the same notation as in Lemma <ref>, we have with probability at least $1-\delta$
\begin{equation*}
\sup_{s \in \mathcal{T}} \abs*{\brace{i \in [n] \mid \abs{Z_{i, s}} > T_{0}}} < 8 \log(1/\delta),
\end{equation*}
where
\begin{equation*}
T_{0} \defeq 2 \max\brace*{\frac{\mu}{\log(1/\delta)}, \sqrt{\frac{\sigma^{2}}{\log(1/\delta)}}}.
\end{equation*}
The result follows from taking $\eps = 1/2$ in the bound of Lemma <ref>, replacing $T$ by $T_{0}$, and straightforwardly bounding the resulting expression.
Let $\delta \in (0, 1)$ be such that $k \defeq 8 \log(2/\delta)$ is an integer satisfying $1 \leq k \leq \floor{n/2}$. Assume that for all $s \in \mathcal{T}$ and $i \in [n]$, $\Exp\brack*{Z_{i, s}} = 0$. Then with probability at least $1-\delta$
\begin{equation*}
\sup_{s \in \mathcal{T}} \abs*{\brace{i \in [n] \mid \abs{Z_{i, s}} > T_{0}}} < k,
\end{equation*}
and
\begin{equation*}
\sup_{s \in \mathcal{T}} \varphi_{k}(Z_{s}) \leq 50 \max\brace*{\mu, \sqrt{\sigma^{2} \log(2/\delta)}}.
\end{equation*}
where
\begin{equation*}
T_{0} \defeq 2 \max\brace*{\frac{\mu}{\log(2/\delta)}, \sqrt{\frac{\sigma^{2}}{\log(2/\delta)}}}.
\end{equation*}
By Lemma <ref>, with probability at least $1-\delta/2$, we have for all $s \in \mathcal{T}$
\begin{equation*}
-T_{0} \leq Z^{*}_{1 + k, s} \leq Z^{*}_{n - k, s} \leq T_{0}.
\end{equation*}
On this event, we then have
\begin{align}
\sup_{s \in \mathcal{T}} \varphi_{k}(Z_{s}) &= \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \phi_{Z^{*}_{1+k, s}, Z^{*}_{n-k, s}}(Z_{i, s}) \nonumber\\
&\leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \phi_{-T_{0}, T_{0}}(Z_{i, s}) + k (Z^{*}_{1 + k, s} + T_{0}) \nonumber \\
&\leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \underbrace{\phi_{-T_{0}, T_{0}}(Z_{i, s}) - \Exp\brack*{\phi_{-T_{0}, T_{0}}(Z_{i, s})}}_{\textstyle W_{i, s} \defeq} + \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{\phi_{-T_{0}, T_{0}}(Z_{i, s})} + 2kT_{0}. \label{eq:pf_lem7_1}
\end{align}
We now bound the second term of (<ref>) by
\begin{align}
\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{\phi_{-T_{0}, T_{0}}(Z_{i, s})} &= \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \underbrace{\Exp\brack*{Z_{i, s}}}_{\textstyle = 0} + \underbrace{\Exp\brack*{(T_{0} - Z_{i, s})\mathbbm{1}_{(T_0, \infty)}(Z_{i, s})}}_{\textstyle \leq 0} \\
&+ \Exp\brack*{(-T_{0} - Z_{i, s}) \mathbbm{1}_{(-\infty, -T_{0})}(Z_{i, s})} \nonumber \\
&\leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{Z_{i, s}^{2}}^{1/2} \cdot \Prob\paren*{Z_{i, s} < -T_{0}}^{1/2} \leq \frac{\sigma^2}{T_{0}},
\label{eq:pf_lem7_2}
\end{align}
where we used the Cauchy-Schwarz inequality and Markov's inequality respectively. Denote the first term of (<ref>) by $W$, and note that $\Exp\brack*{W_{i, s}} = 0$ and $\abs{W_{i, s}} \leq 2T_{0}$, so by Lemma <ref> we have with probability at least $1-\delta/2$
\begin{equation}
\label{eq:pf_lem7_3}
W < \Exp\brack*{W} + 3T_{0} \log(2/\delta) + \sqrt{2 (4T_{0}\Exp\brack{W} + \alpha^{2}) \log(2/\delta)},
\end{equation}
where $\alpha^{2} \defeq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{W_{i, s}^{2}}$. It remains to bound $\Exp\brack{W}$ and $\alpha^{2}$. The former is bounded by
\begin{multline}
\label{eq:pf_lem7_4}
\Exp\brack*{W} = \Exp\brack*{\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \phi_{-T_{0}, T_{0}}(Z_{i, s}) - \Exp\brack*{\phi_{-T_{0}, T_{0}}(Z_{i, s})}} \\ \leq 2\Exp\brack*{\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \eps_i \phi_{-T_{0}, T_{0}}(Z_{i, s})} \leq 2 \Exp\brack*{\sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \eps_i Z_{i, s}},
\end{multline}
where we used symmetrization and the contraction principle along with the $1$-Lipschitzness of $\phi_{-T_{0}, T_{0}}$ respectively. The latter is bounded by
\begin{equation}
\label{eq:pf_lem7_5}
\alpha^{2} \leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{W^{2}_{i, s}} \leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{\phi^{2}_{-T_0, T_0}(Z_{i, s})} \leq \sup_{s \in \mathcal{T}} \sum_{i=1}^{n} \Exp\brack*{Z^{2}_{i, s}}
\end{equation}
Combining (<ref>), (<ref>), (<ref>), and using the definition of $T_{0}$, we obtain with probability at least $1-\delta/2$
\begin{equation}
\label{eq:pf_lem7_6}
W < 16 \max\brace*{\mu, \sqrt{\sigma^{2} \log(2/\delta)}}
\end{equation}
Combining (<ref>), (<ref>), (<ref>), the definition of $T_{0}$, and a union bound over the two events of probability at least $1 - \delta/2$ each yields the result.
§ PROOFS OF SECTION <REF>
§.§ Proof of Theorem <ref>
By definition of the minimax risk, we have
\begin{align*}
R^{*}_{\delta}(\ell) = \inf_{d} R_{\delta}(\ell, d)
= \inf_{d} \sup_{P \in \mathcal{P}} R_{\delta}(\ell, P, d)
= \inf_{d} \sup_{P \in \mathcal{P}} Q_{\ell(P, d(O))}(1-\delta)
= \inf_{d} \sup_{P \in \mathcal{P}} F^{-}_{\ell(P, d(O))}(1-\delta).
\end{align*}
Applying the sixth item of Lemma <ref> to the last expression yields
\begin{equation*}
R^{*}_{\delta}(\ell) = \inf_{d} \paren*{\inf_{P \in \mathcal{P}} F_{\ell(P, d(O))}}^{-}(1-\delta).
\end{equation*}
Now let $k \in \N$. Since $\inf_{P \in \mathcal{P}} F_{\ell(P, d(O))} \leq \Exp_{P \sim \pi_{k}}\brack*{F_{\ell(P, d(O)) \mid P}} = F^{\pi_{k}}_{\ell(P, d(O))}$, where $O \mid P \sim P$ inside the expectation, we have by the second item of Lemma <ref>
\begin{equation*}
R^{*}_{\delta}(\ell) \geq \inf_{d} \paren*{F^{\pi_{k}}_{\ell(P, d(O))}}^{-}(1-\delta) \geq \paren*{\sup_{d} F^{\pi_{k}}_{\ell(P, d(O))}}^{-}(1-\delta) = p_{\ell, k}^{-}(1-\delta),
\end{equation*}
where the second inequality follows from the third item of Lemma <ref>, and the last by definition of $p_{\ell, k}$. Taking supremum over $k$, and combining our assumptions on the sequence $(p_{\ell, k})_{k \in \N}$ with the last item of Lemma <ref> yields the result.
§.§ Proof of Proposition <ref>
The first statement follows from the assumption on $\varphi$ and Lemma <ref>. For the second statement, define $S \defeq \brace*{R_{\delta}(\ell, P, d) \st P \in \mathcal{P}} \subset [-\infty, \infty)$ and $x_0 \defeq \sup S$. If $x_{0} = -\infty$, then $\varphi(x_0) = -\infty$, and $\varphi(\ell(P, d(O))) = \ell(P, d(O)) = -\infty$ with probability at least $1-\delta$ for all $P$, so the statement holds. Otherwise $x_0 \in \R$. Now for any $x \in S$, we have $x \leq x_{0}$, so $\varphi(x) \leq \varphi(x_0)$, and hence $\sup_{x \in S} \varphi(x) \leq \varphi(x_0)$. On the other hand, let $(x_k)_{k \in \N}$ be an increasing sequence in $S$ such that $x_k \to x_{0}$ as $k \to \infty$. Then by the left-continuity of $\varphi$, we obtain $\sup_{x \in S} \varphi(x) \geq \lim_{k \to \infty}\varphi(x_k) = \varphi(x_0)$, which proves the statement. For the last statement, suppose that $d^{*} \in \argmin_{d} R_{\delta}(\ell, d)$. Then by assumption $R_{\delta}(\ell, d^{*}) < \infty$, so that by the second statement $R_{\delta}(\varphi \circ \ell, d^{*}) = \varphi\paren*{R_{\delta}(\ell, d^{*})}$. Now let $d$ be any other decision rule. If $R_{\delta}(\ell, d) < \infty$, then by the minimality of $d^{*}$ we get $R_{\delta}(\ell, d^{*}) \leq R_{\delta}(\ell, d)$, and since $\varphi$ is increasing and using the second statement again, $R_{\delta}(\varphi \circ \ell, d^{*}) = \varphi\paren*{R_{\delta}(\ell, d^{*})} \leq \varphi\paren*{R_{\delta}(\ell, d)} = R_{\delta}(\varphi \circ \ell, d)$. If $R_{\delta}(\ell, d) = \infty$, then there exists $P_{0} \in \mathcal{P}$ such that $R_{\delta}(\ell, P_{0}, d) \geq R_{\delta}(\ell, d^{*})$; but then, since $\varphi$ is increasing, $R_{\delta}(\varphi \circ \ell, d) = \sup_{P \in \mathcal{P}} R_{\delta}(\varphi \circ \ell, P, d) \geq \varphi(R_{\delta}(\ell, P_{0}, d)) \geq \varphi(R_{\delta}(\ell, d^{*})) = R_{\delta}(\varphi \circ \ell, d^{*})$. This proves the last statement.
§.§ Proof of Proposition <ref>
We present here the proof for the case $\varphi(x) = x$. The general statement follows from Proposition <ref>. Our aim is to apply Theorem <ref>. We select $\pi_k \defeq \mathcal{N}(0, \Sigma/\lambda_k)$ for a decreasing strictly positive sequence $(\lambda_k)_{k \in \N}$ satisfying $\lambda_k \to 0$ as $k \to \infty$. We want to compute, for all $t \in \R$,
\begin{equation*}
p_{\ell, k}(t) = \sup_{\hat{\mu}} \Prob\paren*{e\paren*{\hat{\mu}((X_i)_{i=1}^{n}) - \mu} \leq t},
\end{equation*}
where $\mu \sim \pi_k$ and $X_i \mid \mu \sim \mathcal{N}(\mu, \Sigma)$ for all $i \in [n]$ independently. A classical Bayesian calculation shows that
$\mu \mid (X_i)_{i=1}^{n} \sim \mathcal{N}\paren*{\overline{X}_k, \Sigma_{k}}$ where $\overline{X}_k \defeq \frac{n}{n+\lambda_k} \overline{X}$ and $\Sigma_k \defeq \frac{1}{n + \lambda_k} \Sigma$, where $\overline{X} \defeq n^{-1}\sum_{i=1}^{n}X_i$ is the sample mean.
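For completeness, here is a sketch of the conjugacy computation behind this posterior. Up to additive terms not involving $\mu$,
\begin{equation*}
\log \pi(\mu \mid (X_i)_{i=1}^{n}) = -\frac{\lambda_k}{2} \mu^{T}\Sigma^{-1}\mu - \frac{1}{2}\sum_{i=1}^{n}(X_i - \mu)^{T}\Sigma^{-1}(X_i - \mu) = -\frac{n + \lambda_k}{2}\paren*{\mu - \overline{X}_k}^{T}\Sigma^{-1}\paren*{\mu - \overline{X}_k},
\end{equation*}
which is the unnormalized log-density of $\mathcal{N}\paren*{\overline{X}_k, \Sigma_k}$.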
Now we compute, for $Z_{k} \sim \mathcal{N}(0, \Sigma_{k})$,
\begin{align*}
p_{\ell, k}(t) &= \sup_{\hat{\mu}} \Prob\paren*{e\paren*{\hat{\mu}((X_i)_{i=1}^{n}) - \mu} \leq t} \\
&= \Exp\brack*{\sup_{a \in \R^{d}} \Prob\paren*{e\paren*{\mu - a} \leq t \st (X_i)_{i=1}^{n}}} \\
&= \Exp\brack*{\sup_{a \in \R^{d}} \Prob\paren*{\mu - a \in e^{-1}((-\infty, t]) \mid (X_i)_{i=1}^{n}}} \\
&= \Exp\brack*{\Prob\paren*{\mu - \overline{X}_{k} \in e^{-1}((-\infty, t]) \mid (X_i)_{i=1}^{n}}} \\
&= \Prob\paren*{e(Z_{k}) \leq t} = F_{e(Z_k)}(t)
\end{align*}
The second line follows from conditioning on $(X_i)_{i=1}^{n}$, using the symmetry of $e$, and optimizing pointwise over the value $a = \hat{\mu}((X_i)_{i=1}^{n})$. The fourth line follows from combining the assumptions on $e$ with the first item of Lemma <ref>, as well as an application of Lemma <ref>, known as Anderson's Lemma. The last line follows from the fact that $\mu - \overline{X}_k \mid (X_i)_{i=1}^{n} \overset{d}{=} Z_k$. To conclude, it remains to prove the needed properties for the sequence $(p_{\ell, k})_{k \in \N}$. The right-continuity follows directly from the fact that $F_{e(Z_k)}$ is a CDF. To see that the sequence is decreasing, define $n_k \defeq n + \lambda_k$ and note that $n_{k} \geq n_{k+1}$. Then, for all $t \in \R$
\begin{multline*}
F_{e(Z_k)}(t) = \Prob\paren*{e(Z_k) \leq t}
= \Prob\paren*{Z_k \in e^{-1}((-\infty, t])}
= \Prob\paren*{\sqrt{\frac{n_{k+1}}{n_k}} \cdot Z_{k+1} \in e^{-1}((-\infty, t])} \\
= \Prob\paren*{Z_{k+1} \in \sqrt{\frac{n_{k}}{n_{k+1}}} \cdot e^{-1}((-\infty, t])}
\geq \Prob\paren*{Z_{k+1} \in e^{-1}((-\infty, t])}
= F_{e(Z_{k+1})}(t),
\end{multline*}
where the inequality follows from the fact that $\sqrt{n_k/n_{k+1}} \geq 1$, $e^{-1}((-\infty, t])$ is convex and symmetric, and Lemma <ref>. Finally, let $Z \sim \mathcal{N}(0, \Sigma/n)$. We compute
\begin{multline*}
\lim_{k \to \infty} F_{e(Z_k)}(t) = \lim_{k \to \infty} \Prob\paren*{Z_k \in e^{-1}((-\infty, t])}
= \lim_{k \to \infty} \Prob\paren*{Z \in \sqrt{\frac{n_k}{n}} \cdot e^{-1}((-\infty, t])} \\
= \Prob\paren*{Z \in \bigcap_{k=1}^{\infty} \brace*{\sqrt{\frac{n_k}{n}} \cdot e^{-1}((-\infty,t])}}
= \Prob\paren*{Z \in e^{-1}((-\infty, t])}
= F_{e(Z)}(t).
\end{multline*}
Finally, the worst-case risk of the sample mean is given by $Q_{e(Z)}(1 - \delta)$ as can be checked with a simple explicit calculation. An application of Theorem <ref> concludes the proof.
§.§ Proof of Proposition <ref>
We aim to apply Theorem <ref>. We select $\pi_k \defeq \text{Inv-Gamma}(\lambda_k, \lambda_k)$ for a decreasing strictly positive sequence $(\lambda_k)_{k=1}^{\infty}$ satisfying $\lambda_k \to 0$ as $k \to \infty$. We need to compute
\begin{equation*}
p_{\ell, k}(t) = \sup_{\hat{\sigma}} \Prob\paren*{\abs*{\log\paren*{\frac{\sigma^{2}}{\hat{\sigma}^{2}((X_i)_{i=1}^{n})}}} \leq t},
\end{equation*}
where $\sigma^{2} \sim \pi_k$ and $X_i \mid \sigma^2 \sim \mathcal{N}(\mu, \sigma^2)$ for all $i \in [n]$ independently. A classical Bayesian calculation shows that $\sigma^2 \mid (X_i)_{i=1}^{n} \sim \text{Inv-Gamma}(\alpha_k, \beta_k)$, where $\alpha_k \defeq n/2 + \lambda_k$ and $\beta_k \defeq \lambda_k + \sum_{i=1}^{n}(X_i - \mu)^{2}/2$. Recalling the definition of the function $p_{\alpha}$ from the statement, we obtain
\begin{align*}
\sup_{\hat{\sigma}} \Prob\paren*{\abs*{\log\paren*{\frac{\sigma^{2}}{\hat{\sigma}^{2}((X_i)_{i=1}^{n})}}} \leq t}
= \Exp\brack*{\sup_{b \in (0, \infty)} \Prob\paren*{\abs*{\log\paren*{\frac{\sigma^{2}}{b}}} \leq t \st (X_i)_{i=1}^{n}}} = \Exp\brack*{p_{\alpha_k}(t)} = p_{\alpha_k}(t)
\end{align*}
where the last equality follows from Lemma <ref>. It is straightforward to check that $p_{\alpha}$ is continuous for all values of $\alpha \in (0, \infty)$. Furthermore, by Lemma <ref>, the sequence $(p_{\alpha_k})_{k \in \N}$ is decreasing with limit $p_{n/2}$. This provides us with the first part needed for Theorem <ref>. Now note that, for any $\sigma^{2} \in (0, \infty)$ and $X_i \sim \mathcal{N}(\mu, \sigma^2)$ for all $i \in [n]$ and independently, we have $(n \cdot \sigma^2)/\sum_{i=1}^{n}(X_i - \mu)^{2} \sim \text{Inv-Gamma}(n/2, n/2)$, so that for the estimator $\hat{\sigma}^{2}$ defined in the theorem, we have
\begin{align*}
&\Prob\paren*{\abs*{\log(\sigma^2/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))} \leq p_{n/2}^{-1}(1-\delta)} \\
&= \Prob\paren*{\exp(-p_{n/2}^{-1}(1-\delta)) \leq \frac{\sigma^2}{\hat{\sigma}^{2}((X_i)_{i=1}^{n})} \leq \exp(p_{n/2}^{-1}(1-\delta))}\\
&= \Prob\paren*{\frac{1-\exp(-2p^{-1}_{n/2}(1-\delta))}{2p^{-1}_{n/2}(1-\delta)} \leq \frac{n \cdot \sigma^{2}}{\sum_{i=1}^{n}(X_i - \mu)^{2}} \leq \frac{\exp(2p^{-1}_{n/2}(1-\delta)) - 1}{2p^{-1}_{n/2}(1-\delta)}} \\
&= p_{n/2}(p_{n/2}^{-1}(1 - \delta))\\
&= 1-\delta
\end{align*}
and therefore the worst-case risk of this estimator is equal to $p_{n/2}^{-1}(1-\delta)$. Applying Theorem <ref> proves the minimaxity of this estimator. An explicit calculation of the worst-case risk of the sample variance combined with the uniqueness of the minimizer in Lemma <ref> shows that it is not minimax.
We start with the lower bound. Let $k \in \N$. Then
\begin{align}
\inf_{\hat{\sigma}^{2}} \sup_{P \in \mathcal{P}_{\text{Gauss}}(\mu)} R_{n, \delta}(P, \hat{\sigma}^{2}) &= \inf_{\hat{\sigma}^{2}} \sup_{P \in \mathcal{P}_{\text{Gauss}}(\mu)} Q_{\abs{\log(\sigma^{2}/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))}}(1-\delta) \nonumber \\
&= \inf_{\hat{\sigma}^{2}} \sup_{\sigma^{2} \in (0, \infty)} F^{-}_{\abs{\log(\sigma^{2}/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))}} (1-\delta) \nonumber \\
&= \inf_{\hat{\sigma}^{2}} \paren*{\inf_{\sigma^{2} \in (0, \infty)} F_{\abs{\log(\sigma^{2}/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))}}}^{-}(1-\delta) \nonumber \\
&\geq \inf_{\hat{\sigma}^{2}} \paren*{\Exp_{\sigma^{2} \sim \text{Inv-Gamma}(k^{-1}, k^{-1})}\brack*{F_{\abs{\log(\sigma^{2}/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))} \mid \sigma^{2}}}}^{-}(1-\delta) \nonumber \\
&= \inf_{\hat{\sigma}^{2}} F^{-}_{\abs{\log(\sigma^{2}/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))}}(1-\delta) \label{eq:pf_thm_2_1}
\end{align}
By a classical Bayesian calculation, we get that $\sigma^{2} \mid (X_i)_{i=1}^{n} \sim \text{Inv-Gamma}(\alpha_{k}, \beta_{k})$ where
\begin{equation*}
\alpha_k \defeq \frac{1}{k} + \frac{n}{2} \quad \text{and} \quad \beta_k \defeq \frac{1}{k} + \frac{\sum_{i=1}^{n} (X_i - \mu)^{2}}{2}.
\end{equation*}
Recalling the definition of $p_{\alpha}(r)$ from Lemma <ref>, we have by the same lemma, for all $r > 0$ and for all $\hat{\sigma}^{2}$,
\begin{align*}
F_{\abs{\log(\sigma^2/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))} \mid (X_i)_{i=1}^{n}}(r) \leq p_{\alpha_k}(r)
\end{align*}
Taking expectation of both sides, noticing that $p_{\alpha_k}(r)$ does not depend on $(X_i)_{i=1}^{n}$, and using the second and fourth items of Lemma <ref> yields
\begin{equation}
\label{eq:pf_thm_2_2}
\inf_{\hat{\sigma}^2} F^{-}_{\abs{\log(\sigma^2/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))}} \geq p_{\alpha_k}^{-1}
\end{equation} |
# Comparison of pipeline, sequence-to-sequence, and GPT models for end-to-end
relation extraction: experiments with the rare disease use-case
Shashank Gupta Computer Science Department, University of Kentucky, USA
Xuguang Ai Computer Science Department, University of Kentucky, USA
Ramakanth Kavuluru Division of Biomedical Informatics, Dept. of Internal
Medicine, University of Kentucky, USA Computer Science Department, University
of Kentucky, USA
###### Abstract
Objective: End-to-end relation extraction (E2ERE) is an important and
realistic application of natural language processing (NLP) in biomedicine. In
this paper, we aim to compare three prevailing paradigms for E2ERE using a
complex dataset focused on rare diseases involving discontinuous and nested
entities.
Methods: We use the RareDis information extraction dataset to evaluate three
competing approaches (for E2ERE): NER $\rightarrow$ RE pipelines, joint
sequence to sequence models, and generative pre-trained transformer (GPT)
models. We use comparable state-of-the-art models and best practices for each
of these approaches and conduct error analyses to assess their failure modes.
Results: Our findings reveal that pipeline models are still the best, while
sequence-to-sequence models are not far behind; GPT models with eight times as
many parameters are worse than even sequence-to-sequence models and lose to
pipeline models by over 10 F1 points. Partial matches and discontinuous
entities caused many NER errors contributing to lower overall E2E
performances. We also verify these findings on a second E2ERE dataset for
chemical-protein interactions. Although generative LM-based methods are more
suitable for zero-shot settings, when training data is available, our results
show that it is better to work with more conventional models trained and
tailored for E2ERE.
Conclusion: More innovative methods are needed to marry the best of the both
worlds from smaller encoder-decoder pipeline models and the larger GPT models
to improve E2ERE. As of now, we see that well designed pipeline models offer
substantial performance gains at a lower cost and carbon footprint for E2ERE.
Our contribution is also the first to conduct E2ERE for the RareDis dataset.
The dataset and code for all our experiments are publicly available:
https://github.com/shashank140195/Raredis
## 1 INTRODUCTION
Named entities and relations among them are basic units of information in many
disciplines including biomedicine. A relation is typically expressed as a
triple that has a subject entity and an object entity connected via a
predicate (or relation type) as in the example (subject: atorvastatin,
predicate: treats, object: hyperlipidemia). Disease and treatment mechanisms
are often driven at the biological level by protein-protein and chemical-
protein interactions while clinical relations such as drug-disease treatment
relations and disease-symptom causative relations are helpful in providing
care. Most new relational information is first discussed in textual narratives
(e.g., scientific literature, clinical notes, or social media posts), and
extracting and storing it as triples enable effective search systems [1],
high-level reasoning, hypothesis generation, and knowledge discovery
applications [2]. As such, named entity recognition (NER) and relation
extraction (RE) have become standard tasks in biomedical natural language
processing (BioNLP) [3].
Many RE efforts in the past assume that the entity spans are already provided
as part of the input and hence addressed an easier problem of relation
classification (RC) [4, 5, 6]. However, a more realistic setting is the
ability to extract both entity spans and associated relations from the raw
text where entities are not provided. RE in this setting is generally called
end-to-end relation extraction (E2ERE). With the recent deluge of deep neural
networks (or deep learning methods), the NLP community has been focusing more
on E2ERE efforts [7, 8, 9, 10]. Efforts have also been expanded from single
sentence E2ERE to a more complex setting of extractions at the document level,
involving cross-sentence relations, where entities expressed in different
sentences are to be linked [11, 12]. Additional intricacies arise when named
entities are discontinuous or when their spans overlap [13]. For example,
consider the string “accumulation of fats (lipids) called GM 2 gangliosides,”
where entity span “accumulation of GM 2 gangliosides” is discontinuous with a
gap involving outside words. In the example phrase “central pain syndrome,”
both the full three-word string and the middle word “pain” can constitute two
different entities, where the latter entity is fully nested in the longer
3-word entity. Thus far, we have not seen efforts handling these complex
document-level E2ERE settings involving discontinuous and overlapping/nested
entities. In this paper, we address this using the recently introduced RE
dataset called RareDis [14], which focuses on information extraction for rare
diseases and has the complex traits indicated earlier. Although there is
another dataset that focuses on rare diseases at the sentence level [15], we
use RareDis since it operates at the document level.
Over the past decade, neural methods especially those involving contextual
dense word embeddings have supplanted conventional NLP methods that relied on
n-gram statistics. For E2ERE, joint learning neural methods that
simultaneously optimized for NER and RE objectives [16, 17] have gained
popularity over pipeline-based methods that build two separate models for NER
and RE, where the NER model’s output is fed to the RE model. However, the
recent Princeton University Relation Extraction (PURE) framework [18] proposed
an intuitive pipeline method that takes advantage of the so-called typed
“entity markers” to encapsulate entity spans provided as input to
contextualized language models (LMs). The PURE method reignited the relevance
of cleverly designed pipeline methods when compared with joint learning
methods. Simultaneously, sequence-to-sequence models that became popular for
machine translation have been repurposed [19] effectively for E2ERE, where the
encoder-decoder architecture transforms raw text to directly output relations
encoded through so-called “linearization schemas” and a “copy
mechanism” [20]. The state-of-the-art (SoTA) for this paradigm of models is
the Seq2Rel architecture [21] that inherently allows for E2ERE. Finally,
generative pre-trained transformers (GPTs) have gained traction and publicity
(thanks to ChatGPT), especially for zero-shot and few-shot settings [22, 23].
In biomedicine, BioGPT [24] and BioMedLM [25] have been shown to work well for
relation extraction and question answering, among generative decoder-only
language models (LMs), producing SoTA scores on a few datasets.
Thus we identify PURE, Seq2Rel, and BioMedLM as representative models for the
pipeline, sequence-to-sequence, and generative LM paradigms, respectively.
(Although we experimented with BioGPT models, they are smaller than BioMedLM
and were quite inferior, as discussed later; as such, our focus in this
manuscript is on BioMedLM, the latest and largest GPT model exclusively
trained on biomedical literature.) Now the central question is: which of these
approaches works well for the complex document-level E2ERE task involving
discontinuous and overlapping entities manifesting in the RareDis dataset?
Toward answering this, we make the following contributions in this paper.
* •
We explore and provide descriptive statistics of the RareDis dataset and fix
certain formatting/annotation errors in the original dataset (acknowledged by
its creators) to ensure availability for the community for further
benchmarking.
* •
We adapt the PURE pipeline approach to the RareDis dataset since the original
method does not handle discontinuous and nested entities.
* •
We design linearization schemas for the Seq2Rel method and appropriate
supervised prompting strategies for BioMedLM in the context of E2ERE for the
RareDis dataset.
* •
We provide quantitative evaluations of the three models (and associated
variants) and conduct qualitative evaluations through manual error analyses.
We make publicly available the modified RareDis dataset and code for all our
experiments: https://github.com/shashank140195/Raredis
To our knowledge, our effort is the first to handle E2ERE with the RareDis
dataset and also to compare SoTA approaches arising from three different
competing paradigms in the neural RE landscape.
Statement of Significance
Problem: It is not clear which NLP methods work best in practice for
end-to-end relation extraction.
What is already known: Although pipeline methods used to be the norm, recent
literature shows a rise in sequence-to-sequence and decoder-only GPT models
for information extraction. There is also a general tendency to prefer the
fancier latter models, given the excitement in the field around them.
What this paper adds: With the use-case of a rare disease information
extraction task involving discontinuous and overlapping entities, we compare
three different competing paradigms (pipeline, seq2seq, and GPT) for
end-to-end relation extraction. Our findings show that a well-designed
pipeline model is computationally inexpensive and more effective than the
other methods.
## 2 METHODS
### 2.1 The RareDis dataset
The National Institutes of Health (NIH) estimates that around 7,000 rare
diseases impact between 25 and 30 million Americans, which translates to
approximately 1 out of every 10 Americans [26]. Around 95% of known rare
diseases currently lack any treatment options [26]. Because these diseases are
so rare, they can be challenging to diagnose and treat: only around 100 drugs
are available for treating these conditions [27]. The average diagnostic delay
is
around seven years [28]. Many rare diseases are genetic in nature and are
caused by mutations in a single gene. However, because there are thousands of
rare diseases, each with unique symptoms and genetic causes, developing
effective treatments can be a significant challenge. Developing a structured
compendium of information about rare diseases has the potential to help
expedite search, discovery, and hypothesis generation for these conditions.
This necessitates developing NLP models for RE in this setting; toward this
goal, Martínez-deMiguel et al. [14] created an annotated corpus for rare
disease-related information extraction. This resource is based on the database
of articles about rare diseases maintained by the National Organization for
Rare Disorders (https://rarediseases.org/rare-diseases/). The dataset contains
six entity types and six relation types and the annotation process is
described in detail by the authors [14].
Figure 1: Examples of is_a and anaphora relations in the RareDis dataset.
#### Entity and relation types
The six entity types in RareDis are: disease, rare disease, symptom, sign,
anaphor, and skin rare disease, with frequencies shown in the first six rows of
Table 1. There are six relation types (with counts shown in the last six rows
of Table 1): produces (relation between any disease entity and a sign/symptom
produced by that entity), increase_risk_of (relation between a disease entity
and another disease entity where the subject disease increases the likelihood
of suffering from the object disease), is_a (relation between a given disease
and its classification as a more general disease), is_acron (relation between
an acronym and its full or expanded form), is_synon (relation between two
different names designating the same disease) and anaphora (relation of an
anaphor entity with its antecedent entity). Here an anaphor entity refers to
pronouns or pronominal constructs (e.g., “it” or “this disease”) that point to
a named entity that is already mentioned in the preceding context (the
“antecedent” of the anaphora relation). An example is shown in Figure 1.
Type | Training | Dev | Test
---|---|---|---
sign | 2945 | 798 | 528
rare disease | 2533 | 624 | 480
disease | 1369 | 278 | 230
anaphor | 913 | 195 | 151
skin rare disease | 393 | 58 | 45
symptom | 275 | 44 | 24
produces | 3256 | 850 | 556
anaphora | 918 | 195 | 151
is_a | 544 | 149 | 88
increase_risk_of | 161 | 8 | 22
is_acron | 142 | 44 | 34
is_synon | 66 | 14 | 16
Table 1: Statistics of entity types (first six rows) and relation types (last
six rows) in the RareDis corpus.
The dataset contains discontinuous and overlapping/nested entities as
discussed with examples in Section 1; Table 2 sheds light on the relative
frequency of these situations, where “flat” corresponds to continuous entities.
While in both tables in this section we show training, development, and test
set counts, the original dataset consisted of only training and development
sets, as the authors withheld the test set for a future shared task, which has
not happened yet. We split their training dataset into training and
development sets for our experiments, and their development split became our
test split.
Entity category | Training | Dev | Test
---|---|---|---
Flat | 7103 | 1666 | 1212
Discontinuous | 528 | 136 | 103
Overlapped | 720 | 166 | 112
Nested | 77 | 29 | 31
Total | 8428 | 1997 | 1458
Table 2: Counts of entity mentions in the corpus by category (flat, discontinuous, overlapped, and nested).
#### Modifications to the original dataset
While exploring the dataset, we observed some annotation issues that we
confirmed with the creators of the RareDis dataset through email
communication. Next, we describe these issues and, at a high level, how we
fixed them. After fixing the following errors, we created a custom
train/validation/test split of the full dataset and made it available as a
Google Drive link on our GitHub page for this project.
##### Relation argument error
Figure 2 shows an example of how the annotations are provided for each
instance. For this example, we see the entities (T1, …, T9) listed first along
with types, character-based offsets, and lexical spans. Next, relations
between entities are listed (R1, …, R5) along with the relation type and the
arguments (subject and object). Although there are only nine entities, we see
that for the anaphora relation R5, the second argument is T90, with a trailing
0 after the 9. This happened several times: arguments in relations referred to
entity IDs not present in the preceding entity list, almost always with an
extra trailing zero. We safely removed that zero, which fixed all these
errors; they accounted for 9% of the total number of relations. In the example
in Figure 2, the anaphora relation R5 was referring to the bigram
“This disorder”.
Figure 2: An example of the argument error due to an extra trailing zero in
entity IDs. Here, T90 ought to be just T9.
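This fix is mechanical and easily scripted. The following is a minimal sketch of the repair, assuming BRAT-style annotation text in which entity lines start with T-prefixed IDs and relation arguments appear as Arg1:/Arg2: references (as in Figure 2); the helper name is ours, not part of any released tooling.

```python
import re

def fix_trailing_zero_args(ann_text: str) -> str:
    """Repair relation arguments that reference non-existent entity IDs
    carrying an extra trailing zero (e.g., Arg2:T90 when only T1..T9 exist)."""
    # Entity IDs actually defined in the file (lines starting with "T").
    entity_ids = {line.split("\t")[0] for line in ann_text.splitlines()
                  if line.startswith("T")}

    def repair(match: re.Match) -> str:
        arg, eid = match.group(1), match.group(2)
        # Undefined ID whose trailing '0' can be dropped to get a defined one.
        if eid not in entity_ids and eid.endswith("0") and eid[:-1] in entity_ids:
            return f"{arg}:{eid[:-1]}"
        return match.group(0)

    # Relation lines look like: R5<TAB>anaphora Arg1:T3 Arg2:T90
    return re.sub(r"(Arg[12]):(T\d+)", repair, ann_text)
```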
##### Span mismatch error
There were a few occasions (less than 1% of the full dataset) where the
character offsets for entities captured one character more than needed or
missed the last character of a word. We used simple rules to remove the extra
character or add the missing character. For example, in the sentence
“Balantidiasis is a rare infectious disease caused by the single-celled
(protozoan) parasite Balantidium coli,” the bold phrase was annotated as [T24,
DISEASE,1272 1289, infectious diseas] with a missing trailing character ‘e’.
##### Offset order error
For some discontinuous entities where more than one span is part of the full
entity, the order used for the spans was not left to right and we simply
reordered them as such.
As outlined earlier (in Section 1), we experiment with three different SoTA
approaches, each representing a competing paradigm for E2ERE. Each of these
approaches is highly involved, and hence we focus on high-level explanations
of how they work.
### 2.2 The three E2ERE methods
#### Pipeline: The PURE Approach
PURE by Zhong and Chen [18] is a span-based model that has two different
models for the NER and RE parts of the E2ERE system. It improved upon prior
joint modeling approaches even though it trains the NER and RE models
separately. The authors' main argument is that NER and RE need different
representations of tokens because the two tasks rely on different types of
signals to make predictions, and combining the signals can hurt the
performance of both.
Figure 3: Pipeline approach using SODNER and PURE models for end-to-end
relation extraction.
One weakness of PURE is that it does not handle discontinuous entities in its
NER component, while it easily handles flat and nested entities. So we needed
to adapt the PURE approach to the RareDis setting. Since PURE is pipeline-
based, we could simply use a different NER model for identifying discontinuous
entities and retain the PURE model to spot flat and nested entities. Hence, we
use SODNER [13], a model developed exclusively for handling discontinuous
entities. It is also a span-based NER model, and it casts discontinuous NER as
a classification problem that predicts whether entity fragments with gaps
between them ought to be linked to form a new entity. To do this, SODNER uses
dependency parses of the input document to guide a graph convolutional network
(GCN) that obtains enhanced contextual embeddings to link disparate fragments
and form discontinuous entities. Figure 3 shows the schematic of the pipeline
we use. It starts on the left with the SODNER model identifying discontinuous
entities.
Even if SODNER successfully identifies discontinuous entities, PURE’s relation
extraction model cannot handle them. The PURE relation model puts exactly one
start and one end entity marker token around each candidate subject (or
object) entity span. This modified input is passed through the contextual
language model (such as PubMedBERT) and the marker token embeddings are used
to predict the relation type. This is reflected by the purple [S:Disease] and
[$\backslash$S:Disease] tokens on the right side of Figure 3. But SODNER
outputs multiple fragments for discontinuous entities. Rather than change the
PURE relation model architecture, we use the discontinuous entity fragments
and straightforward rules to convert the input sentence to a modified one
where the discontinuous entities are rendered in a continuous format. For
instance, consider the input, “weakness in the muscles of the arms and legs,”
which contains two entities: one flat entity, “weakness in the muscles of the
arms,” and one discontinuous entity formed by the fragments “weakness in the
muscles of the” and “legs.” Both entities have the gold entity type Sign. Our
modified new input will read as: “weakness in the muscles of the arms and
weakness in the muscles of the legs”. This transformed sentence is then input
through the PURE NER model and then through the PURE relation model.
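A minimal sketch of this rewriting is given below; `rewrite_coordination` is a hypothetical helper, the character offsets would come from the predicted entity fragments, and the actual rules cover more coordination patterns than this simple case.

```python
def rewrite_coordination(text: str,
                         flat_span: tuple[int, int],
                         disc_fragments: list[tuple[int, int]]) -> str:
    """Rewrite a coordinated region so a discontinuous entity becomes
    contiguous, duplicating the shared material for each conjunct."""
    flat = text[flat_span[0]:flat_span[1]]
    # Join the fragments of the discontinuous entity into one surface form.
    disc = " ".join(text[b:e] for b, e in disc_fragments)
    region_start = min(flat_span[0], disc_fragments[0][0])
    region_end = max(flat_span[1], disc_fragments[-1][1])
    return text[:region_start] + flat + " and " + disc + text[region_end:]

sent = "weakness in the muscles of the arms and legs"
print(rewrite_coordination(sent, (0, 35), [(0, 30), (40, 44)]))
# -> weakness in the muscles of the arms and weakness in the muscles of the legs
```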
Neither the PURE NER model nor SODNER can handle cases where the same span has
more than one entity type (e.g., a span being both a disease and a sign). This
is a special case of overlapped entities where the overlap is exact, leading
to the same span having two types. Since most relations involving such spans
only use one of the entity types, this has not caused major issues in RE
evaluation.
#### Sequence-to-Sequence: The Seq2Rel Model
The Seq2Rel model [21] uses an encoder-decoder framework to process the
input document and output relations akin to machine translation where the
source language sentence is ingested into the encoder and the target language
sentence is output by the decoder one token at a time. Here the target
sequence is essentially a list of relations. Unlike the machine translation
setting, where the target is a natural language sequence with an inherent
order, relations have no order amongst them. Hence, during training, an order
is imposed on the relations in a document. Special tokens are also
used to represent entity types. For example, the relation R2 in Figure 2
indicates: (Rare disease “Vitamin D Deficiency Rickets”, produces, sign “bone
disease”), where the entity types are in bold. This will be linearized in
Seq2Rel as: Vitamin D Deficiency Rickets @RareDisease@ bone disease @Sign@
@PRODUCES@, where @ENTITY-TYPE@ and @RELATION-TYPE@ are special tokens
indicating entity and relation types, respectively. The @ENTITY-TYPE@ tokens
are preceded by the actual entity spans in the input. If an input does not
contain any relations, a special @NOREL@ is coded as the output. The order
imposed during training is simply the order in which the entities occur in the
document. This is reflected in Figure 2 where relations involving entities
that occur earlier in the document are annotated before relations that involve
entities that occur later. This left-to-right order is followed until all
relations are output, after which a special end-of-sequence token @END@
signals completion. Besides this linearization
schema, a “copy mechanism” [20] is applied to the decoder, restricting it to
generate tokens only from the observed input sequence, unlike the full
vocabulary of the target language in machine translation. This mechanism
enables the decoder to output spans of the input text that correspond to
entities, as well as special tokens representing relation labels that connect
these entities. The Seq2Rel model [21] uses a PubMedBERT model as the encoder
and a long short-term memory (LSTM) network as the decoder.
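To make the schema concrete, here is a minimal sketch of the linearization step (a hypothetical `linearize` helper, not the authors' released implementation), using the relation R2 of Figure 2 as input:

```python
def linearize(relations, entity_order):
    """Turn (subj, subj_type, obj, obj_type, rel_type) tuples into the
    Seq2Rel target string, ordering relations by where their entities
    first occur in the document."""
    if not relations:
        return "@NOREL@"  # special token for instances without relations

    def key(rel):
        return (entity_order.index(rel[0]), entity_order.index(rel[2]))

    parts = [f"{s} @{st}@ {o} @{ot}@ @{rt.upper()}@"
             for s, st, o, ot, rt in sorted(relations, key=key)]
    return " ".join(parts) + " @END@"

print(linearize(
    [("Vitamin D Deficiency Rickets", "RareDisease",
      "bone disease", "Sign", "produces")],
    ["Vitamin D Deficiency Rickets", "bone disease"]))
# -> Vitamin D Deficiency Rickets @RareDisease@ bone disease @Sign@ @PRODUCES@ @END@
```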
#### Generative Pre-trained Transformers: BioMedLM
Generative pre-trained transformers (GPTs) have captured the fascination of
the general public and researchers alike, especially since the introduction of
ChatGPT in December 2022. However, in-context learning and few-shot
capabilities had already surfaced in June 2020, when OpenAI released GPT-3
[23]. Built on the decoder component of the transformer architecture with the
main training objective of autoregressive left-to-right next-token prediction,
these models have excelled at text generation tasks (e.g., summarization). However,
there is a growing interest in assessing their capabilities for language
understanding tasks including relation extraction. BioGPT [24] and BioMedLM
[25] have been pre-trained from scratch on biomedical abstracts from PubMed
and full-text articles from PubMed Central (from the corresponding subset of
the Pile [29]), based on the GPT-2 architecture [22]. In this effort, we focus on
BioMedLM, a 2.7B-parameter model comprising 32 layers, a hidden size of 2560,
and 20 attention heads. BioMedLM is an order of magnitude larger than BioGPT
and nearly twice as large as BioGPTlarge. It has been shown to be superior to
BioGPT models (including in our experiments for this paper, where BioGPT
underperforms by 10-15% in F-score) and to our knowledge is the largest public
GPT-style model for biomedicine. Hence, we only show BioMedLM results in this
manuscript for the sake of clarity and simplicity. Unlike Seq2Rel,
whose sequence generation capabilities are highly constrained to terms
observed in the input, BioMedLM and BioGPT are purely generative, and
supervised fine-tuning involves using appropriate prompts and output
templates. Technically, we could simply use the linearization schemas
introduced for Seq2Rel. However, these generative models produce natural
language statements rather than unnatural-looking templates, and our initial
experiments using Seq2Rel-style output schemas failed. We therefore considered
two types of output schemas here:
* •
rel-is template: This output template is the same as that used by the original
BioGPT paper for E2ERE: “The relation between subject-span and object-span is
relationType.noun,” where relationType.noun is the noun form of the predicate.
With this template, as an example, the output for the gold relation (Wilm’s
tumor, is_a, kidney cancer) is: “The relationship between Wilm’s tumor and
kidney cancer is hyponym”. We can see here that we converted “is a” predicate
to a noun representation “hyponym” in the template and a similar strategy was
followed for all predicates.
* •
natural-lang: We came up with different natural language templates tailored to
each relation type in RareDis. They are fully specified in Table 3, each with
a representative example; a small formatting sketch follows the table.
Relation type | Natural language output template (with an example)
---|---
produces | $ent_{1}Span$ is a $ent_{1}Type$ that produces $ent_{2}Span$, as a $ent_{2}Type$ (Asherman’s syndrome is a rare disease that produces abdominal pain, as a symptom)
anaphora | The term $ent_{2}Span$ is an anaphor that refers back to the entity of the $ent_{1}Type$ $ent_{1}Span$ (The term “it” is an anaphor that refers back to the entity of the disease encephalitis)
is_synon | The $ent_{1}Type$ $ent_{1}Span$ and the $ent_{2}Type$ $ent_{2}Span$ are synonyms (The disease diastrophic dysplasia and the rare disease diastrophic dwarfism are synonyms)
is_acron | The acronym $ent_{1}Span$ stands for $ent_{2}Span$, a $ent_{2}Type$ (The acronym LQTS stands for long QT syndrome, a rare disease)
increases_risk_of | The presence of the $ent_{1}Type$ $ent_{1}Span$ increases the risk of developing the $ent_{2}Type$ $ent_{2}Span$ (The presence of the disease neutropenia increases the risk of developing the disease infections)
is_a | The $ent_{1}Type$ $ent_{1}Span$ is a type of $ent_{2}Span$, a $ent_{2}Type$ (The rare skin disease Bowen disease is a type of skin disorder, a disease)
Table 3: Natural language templates used to encode RareDis relations as BioMedLM outputs; a representative example follows each template in parentheses.
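The sketch below shows how target strings can be assembled from a relation tuple under both schemas; the natural-lang strings mirror Table 3, while for rel-is only the is_a noun form (“hyponym”) is documented above, so the noun table is deliberately left partial.

```python
NATURAL_LANG = {
    "produces": "{s_span} is a {s_type} that produces {o_span}, as a {o_type}",
    "anaphora": ("The term {o_span} is an anaphor that refers back to "
                 "the entity of the {s_type} {s_span}"),
    "is_a": "The {s_type} {s_span} is a type of {o_span}, a {o_type}",
    # ... remaining relation types follow Table 3 analogously
}

REL_IS_NOUN = {"is_a": "hyponym"}  # only this noun form is documented above

def to_target(rel_type, s_span, s_type, o_span, o_type, schema="natural-lang"):
    """Build the supervised fine-tuning target string for one relation."""
    if schema == "rel-is":
        return (f"The relationship between {s_span} and {o_span} "
                f"is {REL_IS_NOUN[rel_type]}.")
    return NATURAL_LANG[rel_type].format(
        s_span=s_span, s_type=s_type, o_span=o_span, o_type=o_type)

print(to_target("is_a", "Wilm's tumor", "rare disease",
                "kidney cancer", "disease", schema="rel-is"))
# -> The relationship between Wilm's tumor and kidney cancer is hyponym.
```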
### 2.3 Training objectives and evaluation metrics
For the SODNER+PURE pipeline model, the training objective is the well-known
cross entropy function for both NER and RE components. Seq2Rel and BioMedLM,
however, produce sequences (based on the schemas and templates selected) that
need to be interpreted back into the triple format (which we accomplish using
regular expressions). Since their outputs are sequences, the training
objective is the well-known auto-regressive language model objective based on
predicting the next token given previously predicted tokens. The loss function
is the average cross-entropy per target word (more details in Chapter 9.7 of
Jurafsky and Martin [30]).
For evaluation, we note that RareDis annotations are at the span level and
hence the same exact relation connecting the same entities can occur multiple
times if it is discussed several times in the document. However, Seq2Rel and
BioMedLM do not keep track of the number of times a relation occurs as they
are generative and do not operate on spans; but the pipeline models output all
connections as they operate at the span level. To ensure fair evaluation, if
the same relation occurs multiple times within an instance, it is collapsed
into a single occurrence. This is natural and harmless because there is no
loss of information if duplicate relations are ignored. Since Seq2Rel and
BioMedLM produce sequences, we use regular expressions on top of the output
templates and schemas to produce the triples we need. The evaluation metrics
are precision, recall, and F1-score, which are standard in RE. For a relation
to be counted as correctly predicted, the subject and object entity types,
their spans, and the relation type all need to exactly match the ground truth
relation.
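As an illustration of this parsing-and-scoring step for the rel-is template, here is a minimal sketch (simplified to untyped triples; the actual evaluation also requires the subject and object entity types to match):

```python
import re

REL_IS_PATTERN = re.compile(r"The relationship between (.+?) and (.+?) is (\w+)\.")

def parse_rel_is(generated: str) -> set:
    """Recover (subject, object, predicate-noun) triples from generated
    rel-is sentences; returning a set collapses duplicate relations."""
    return set(REL_IS_PATTERN.findall(generated))

def micro_prf(gold: set, pred: set) -> tuple:
    """Exact-match micro precision, recall, and F1 over deduplicated triples."""
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```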
## 3 RESULTS AND DISCUSSION
Experiments for the pipeline approach were performed on our in-house cluster
with 32GB GPUs. All experiments for Seq2Rel were performed on Google Colab
Pro+ using an NVIDIA A100-SXM4-40GB GPU with access to high RAM. For Seq2Rel,
we use AllenNLP, an open-source NLP library developed by the Allen Institute
for Artificial Intelligence (AI2). For BioGPT, Fairseq, a sequence modeling
toolkit, is used on Google Colab Pro for training custom models for text
generation tasks. We used Lambda Labs to fine-tune BioMedLM on a single H100
80GB GPU.
Next, we describe model configurations and hyperparameters. Our settings for
learning rate, number of epochs, and other hyperparameters are determined
based on experiments on the validation dataset.
* •
Pipeline (SODNER+PURE): We used a batch size of 8, a learning rate of 1e-3,
and 100 epochs to train the SODNER model for discontinuous entities with a
PubMedBERTbase encoder. For the PURE NER model, we used PubMedBERTbase and
trained for 100 epochs, with a learning rate of 1e-4 and a batch size of 8. We
also experimented with PubMedBERTlarge with the same settings. For the PURE
relation model, we used both PubMedBERTbase and PubMedBERTlarge as encoders
with a learning rate of 1e-5 and trained for 25 epochs with the training batch
size of 8.
* •
Seq2Rel: Training was conducted for 150 epochs, with a learning rate of 2e-5
for the encoder (PubMedBERTbase or PubMedBERTlarge) and 1.21e-4 for the
decoder (LSTM) with a batch size of 2 and a beam size of 3 (for the decoder).
* •
BioMedLM: Despite supervised fine-tuning, it is not uncommon for GPT models to
output strings that were not part of the input. We observed that nearly 3%-7%
of entities output by BioMedLM did not exactly match ground truth spans. Since
we require an exact match for a prediction to be correct, we appended explicit
natural language instructions to the input, directing the model to generate
tokens from the input text: “From the given abstract, find all the entities
and relations among them. Do not generate any token outside the abstract.” We
used a batch size of 1 with gradient_accumulation_steps of 16, a learning rate
of 1e-5, and 30 epochs for BioMedLM (these settings are sketched as a training
configuration after this list).
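One plausible mapping of these BioMedLM settings onto a Hugging Face-style fine-tuning configuration is sketched below (the output path is hypothetical and the actual training script may differ):

```python
from transformers import TrainingArguments

# Illustrative configuration mirroring the BioMedLM settings above.
args = TrainingArguments(
    output_dir="biomedlm-raredis",      # hypothetical output directory
    per_device_train_batch_size=1,      # batch size of 1
    gradient_accumulation_steps=16,     # effective batch size of 16
    learning_rate=1e-5,
    num_train_epochs=30,
)
```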
We also needed some post-processing tricks to handle the idiosyncrasies of the
three different models. As discussed earlier in Section 2.2, for the
pipeline models, since discontinuous entities are not handled natively by the
PURE relation model, we had to transform the inputs to render the
discontinuous entities in a flat fashion before passing them on to the PURE
model. For the Seq2Rel model, due to the WordPiece tokenization in BERT
models, the output sometimes contains extra spaces around hyphens and
brackets. To align such output strings with the input text, as a post-
processing step, we removed these additional spaces, specifically around
hyphens, curved brackets, and forward slashes.
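A minimal sketch of this normalization step (the rules we used are similar in spirit):

```python
import re

def normalize_spaces(text: str) -> str:
    """Remove WordPiece detokenization artifacts: extra spaces around
    hyphens and forward slashes, and inside parentheses."""
    text = re.sub(r"\s*([-/])\s*", r"\1", text)  # "GM - 2" -> "GM-2"
    text = re.sub(r"\(\s+", "(", text)           # "( protozoan" -> "(protozoan"
    text = re.sub(r"\s+\)", ")", text)           # "protozoan )" -> "protozoan)"
    return text
```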
The main results of the comparison using different models are presented in
Table 4. We observe that the SODNER+PURE pipeline (with PubMedBERTbase
encoder) produces the best F1-score of 52.2, which is 5 points more than the
best-performing Seq2Rel model with the PubMedBERTlarge encoder (47.15 F1), and
13 points more than the best-performing BioMedLM model (38.89 F1). The pipeline’s
performance does not increase when using the PubMedBERTlarge model. For
Seq2Rel, using PubMedBERTlarge outperforms a model with PubMedBERTbase (44.53
F1) by 2.5 points, with an increase in both precision and recall. Potentially,
the increased model capacity of PubMedBERTlarge enables it to capture more
complex and subtle relationships between medical terms and concepts. However,
it is not clear why similar gains were not observed with PubMedBERTlarge in
the pipeline.
Method | Configuration | copyInstruct | P | R | F
---|---|---|---|---|---
SODNER + PURE | PubMedBERTbase | NA | 55.99 | 48.89 | 52.20
SODNER + PURE | PubMedBERTlarge | NA | 56.20 | 48.52 | 52.08
Seq2Rel | PubMedBERTbase | NA | 47.60 | 40.90 | 44.53
Seq2Rel | PubMedBERTlarge | NA | 51.46 | 43.51 | 47.15
BioMedLM | rel-is | yes | 40.19 | 29.68 | 34.14
BioMedLM | rel-is | no | 42.14 | 36.10 | 38.89
BioMedLM | natural-lang | yes | 38.64 | 32.81 | 35.49
BioMedLM | natural-lang | no | 44.22 | 33.76 | 38.29
Table 4: Performances (precision P, recall R, and F1-score F) of different
models under different settings on the RareDis dataset.
For BioMedLM, the ‘copyInstruct’ column in Table 4 indicates the additional
input prompt discussed earlier in this section where decoder-only auto-
regressive models are directed to generate tokens observed in the input. The
best performance for BioMedLM is an F1 score of 38.89 using the rel-is
template for prompting the model when copy instructions were not provided.
When copy instructions are not provided, rel-is does slightly better (<1 F1
point), and when copy instructions are provided, natural-lang does a better
job (a gain of 1.35 points). So there appears to be no advantage to using copy
instructions. (However, when using the smaller BioGPT models, the natural
language prompting seemed to perform slightly better than the rel-is
template.) Note that BioMedLM’s best performance is still $\approx 6$ points
lower than Seq2Rel’s best score and 11 points lower than the pipeline score.
Note that BioMedLM is over eight times larger than our best-performing
pipeline model (considering it has three encoders based on the encoder
PubMedBERTbase, which has 110M parameters). However, its low performance
compared to the pipeline is not surprising because GPT models are
autoregressive and do not benefit from language understanding arising from the
bidirectional masked language modeling objective used in BERT models. Although
the original BioMedLM [25] effort did not perform RE, it reports SoTA scores
on biomedical Q&A tasks. The smaller BioGPT models were shown to do better
than BERT models for E2ERE too; hence we repurposed these generative models
for this RE task. BioMedLM, the largest publicly available GPT-based model,
outperformed the BioGPT models [24] by 10–15% in F1 score, so we do not see
the BioGPT results as worthy of reporting in this manuscript. We believe much
larger models (GPT-3, GPT-3.5, GPT-4) ought to be used to fully leverage the
power of generative LMs.
Furthermore, some recent results also show that using GPT-style models to
generate additional training examples to augment the training data may be a
more effective way of using them, rather than fine-tuning them for RE tasks.
Relation type | SODNER+PURE (P / R / F) | Seq2Rel (P / R / F) | BioMedLM (P / R / F)
---|---|---|---
anaphora | 70.40 / 69.84 / 70.11 | 64.60 / 58.00 / 61.08 | 61.26 / 53.96 / 57.38
is_a | 62.67 / 55.29 / 58.75 | 58.67 / 51.76 / 55.00 | 52.77 / 44.70 / 48.40
is_acron | 70.37 / 57.58 / 63.33 | 50.00 / 42.00 / 45.65 | 55.17 / 48.48 / 51.61
produces | 50.21 / 45.09 / 47.51 | 47.48 / 41.13 / 44.00 | 37.20 / 32.82 / 34.87
is_synon | 75.00 / 18.75 / 30.00 | 100.00 / 12.50 / 22.23 | 0.00 / 0.00 / 0.00
increases_risk_of | 50.00 / 4.55 / 8.33 | 11.80 / 9.52 / 10.52 | 0.00 / 0.00 / 0.00
Table 5: Per-relation-type precision (P), recall (R), and F1 (F) of the
best-performing model from each paradigm.
We also wanted to examine scores per relation type in our models to see if
there are any predicates for which we are underperforming more than expected.
From Table 5, we notice that recall is less than 5% for the increases_risk_of
relation type. This is quite poor but not surprising, given that the
prevalence of such relations in the dataset is very small (from Table 1). But
what is very unusual is the F1 of the ‘produces’ relation being less than 50,
when it constitutes over 60% of all relations in the dataset (from Table 1).
Upon deeper investigation, we found that longer object entities generally lead
to NER errors. We checked this more concretely by examining the errors (for
‘produces’) and found that the best-performing pipeline method missed 43% of
the object spans. Thus, a large portion of performance loss is
simply due to the model not being able to predict the object entity span
correctly; especially for long object entities, even missing a single token
can lead to RE errors.
Thus, the overall performance pattern observed for the RareDis dataset is
Pipeline $>$ Seq2Rel $>$ BioMedLM. We wanted to verify this with at least one
other dataset. Considering our prior experiences with the chemical-protein
interaction extraction task [31], we repeated our E2ERE experiments using the
BioCreative Shared Task VI dataset and the results showed the same performance
pattern with pipeline leading to a 69 F1 score, followed by Seq2Rel with 49,
and BioMedLM with 37 points.
## 4 Error Analysis
Before we proceed, we note that many RE errors appear to arise from NER
errors. This can lead to a snowball effect of errors in the RE phase. Consider
a single entity participating in $n$ gold relations. If it is predicted
incorrectly as a partial match, it may potentially lead to $2n$ relation
errors because it can give rise to $n$ false positives (FPs) (because the
relation is predicted with the wrong span) and $n$ false negatives (FNs)
(because the gold relation with the right span is missed). Thus, even a small
proportion of NER errors can lead to a high loss in RE performance. In this
section, we discuss a few error categories that we observed commonly across
models.
* •
Partial matches: When multi-word entities are involved, the relation error is
often due to the model predicting a partial match (a substring or superstring
of a gold span) and this was frequent in our effort. Consider the snippet
“Kienbock disease changes may produce pain…The range of motion may become
restricted”. Here Kienbock disease is the subject of a produces relation with
the gold object span: “the range of motion may become restricted”. However,
the Seq2Rel model predicted “range of motion restricted” as the object span,
leading to both an FP and FN. But common sense tells us that the model
prediction is also correct (and potentially even better) because it removed
the unnecessary “may become” substring. In a different example, when the
relation involved the gold span “neurological disorder,” the model predicted a
superstring “progressive neurological disorder” from the full context:
“Subacute sclerosing panencephalitis (SSPE) is a progressive neurological
disorder.”
* •
Entity type mismatch: Because our evaluation is strict, predicting the entity
spans and relation type correctly, but missing a single entity type can
invalidate the whole relation leading to both an FP and an FN. The models are
often confused between closely related entity types. Rare disease and skin
rare disease were often confused along with the pair sign and symptom.
* •
Issues with discontinuous entities: Discontinuous entities are particularly
tricky and have led to several errors, even when the prediction is not
entirely incorrect, because the model was unable to split an entity
conjunction into its constituent entities. Consider the snippet: “affected infants may exhibit
abnormally long, thin fingers and toes and/or deformed (dysplastic) or absent
nails at birth.” Instead of generating relations with the two gold entities
“abnormally long, thin fingers” and “abnormally long, thin toes”, the model
simply created one relation with “long, thin fingers and toes.”
* •
BioMedLM generations not in the input: In several cases we noticed spans that
were not in the input but were nevertheless closely linked with the gold
entity span’s meaning. For example, for the gold span “muscle twitching”,
BioMedLM predicted “muscle weakness”. It also tried to form meaningful noun
phrases that capture the meaning of longer gold spans. For instance, for the
gold span “ability to speak impaired”, it predicted “difficulty in speaking”.
For the gold span, “progressive weakness of the muscles of the legs” it
outputs “paralysis of the legs”. All these lead to both FPs and FNs,
unfortunately.
* •
Errors due to potential annotation issues: In document-level RE settings, it
is not uncommon for annotators to miss certain relations. But when these are
predicted by a model, they would be considered FPs. Consider the context: “The
symptoms of infectious arthritis depend upon which agent has caused the
infection but symptoms often include fever, chills, general weakness, and
headaches.” Our model predicted that “infectious arthritis” produces “fever”.
However, the gold predictions for this did not have this and instead had the
relation “the infection” (anaphor) produces “fever”. While the gold relation
is correct, we believe what our model extracted is more meaningful. However,
since we missed the anaphor-involved relation, it led to an FN and an FP.
## 5 Conclusion
In this paper, we explored three state-of-the-art representative models for
E2ERE from three competing paradigms: pipelines (SODNER + PURE), sequence-to-
sequence models (Seq2Rel), and generative LMs (BioMedLM). Our evaluations used
a complex dataset (RareDis) involving discontinuous, nested, and overlapping
entities. Even with the advances in Seq2Seq models and generative
transformers, a custom-built pipeline still seems to be the best option based
on our experiments in this paper. The performance gap between Seq2Rel and the
pipeline is not as high as that between BioMedLM and the pipeline. As such,
there could be other datasets where Seq2Rel matches the pipeline methods,
especially in simpler NER scenarios without discontinuous entities. We caution
readers against concluding that more advanced models are unsuitable for this
task, and we do not wish to take away from the few-shot abilities of GPT
models. Also, the generative aspects of GPT models may not be suitable for the
type of strict evaluation imposed here, where an exact match with gold spans
is required. In
the future, this may be mitigated by using vector similarity or edit-distance
metrics to map such phrases to the closest matches of the input. Using
inference-only proprietary large models such as GPT-4 [32] to generate
paraphrases for training instances to create larger augmented training
datasets could also be helpful. However, in the end, a small $\approx$ 200M
parameter pipeline model that can run on consumer desktops may be preferable
for several use-cases even in the current era of excitement over generative
transformers.
## Acknowledgment
This work is supported by the NIH National Library of Medicine through grant
R01LM013240. The content is solely the responsibility of the authors and does
not necessarily represent the official views of the NIH.
## References
* [1] Dietze H, Schroeder M. GoWeb: a semantic search engine for the life science web. BMC bioinformatics. 2009;10:1-13.
* [2] Henry S, McInnes BT. Literature based discovery: models, methods, and trends. Journal of biomedical informatics. 2017;74:20-32.
* [3] Kilicoglu H, Rosemblat G, Fiszman M, Shin D. Broad-coverage biomedical relation extraction with SemRep. BMC bioinformatics. 2020;21:1-28.
* [4] Zeng D, Liu K, Lai S, Zhou G, Zhao J. Relation classification via convolutional deep neural network. In: Proceedings of COLING 2014, the 25th international conference on computational linguistics: technical papers; 2014. p. 2335-44.
* [5] Zhou P, Shi W, Tian J, Qi Z, Li B, Hao H, et al. Attention-based bidirectional long short-term memory networks for relation classification. In: Proceedings of the 54th annual meeting of the association for computational linguistics (volume 2: Short papers); 2016. p. 207-12.
* [6] Kavuluru R, Rios A, Tran T. Extracting drug-drug interactions with word and character-level recurrent neural networks. In: 2017 IEEE International Conference on Healthcare Informatics (ICHI). IEEE; 2017. p. 5-12.
* [7] Miwa M, Bansal M. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). vol. 1; 2016. p. 1105-16.
* [8] Zhang M, Zhang Y, Fu G. End-to-End Neural Relation Extraction with Global Optimization. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing; 2017. p. 1730-40.
* [9] Pawar S, Bhattacharyya P, Palshikar G. End-to-end relation extraction using neural networks and Markov logic networks. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. vol. 1; 2017. p. 818-27.
* [10] Tran T, Kavuluru R. An end-to-end deep learning architecture for extracting protein–protein interactions affected by genetic mutations. Database. 2018:1-13.
* [11] Peng N, Poon H, Quirk C, Toutanova K, Yih Wt. Cross-sentence n-ary relation extraction with graph lstms. Transactions of the Association for Computational Linguistics. 2017;5:101-15.
* [12] Yao Y, Ye D, Li P, Han X, Lin Y, Liu Z, et al. DocRED: A Large-Scale Document-Level Relation Extraction Dataset. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics; 2019. p. 764-77.
* [13] Li F, Lin Z, Zhang M, Ji D. A Span-Based Model for Joint Overlapped and Discontinuous Named Entity Recognition. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers); 2021. p. 4814-28.
* [14] Martínez-deMiguel C, Segura-Bedmar I, Chacón-Solano E, Guerrero-Aspizua S. The RareDis corpus: a corpus annotated with rare diseases, their signs and symptoms. Journal of Biomedical Informatics. 2022;125:103961.
* [15] Fabregat H, Araujo L, Martinez-Romo J. Deep neural models for extracting entities and relationships in the new RDD corpus relating disabilities and rare diseases. Computer methods and programs in biomedicine. 2018;164:121-9.
* [16] Eberts M, Ulges A. Span-Based Joint Entity and Relation Extraction with Transformer Pre-Training. In: ECAI 2020. IOS Press; 2020. p. 2006-13.
* [17] Tran T, Kavuluru R. Neural metric learning for fast end-to-end relation extraction. arXiv preprint arXiv:1905.07458. 2019.
* [18] Zhong Z, Chen D. A Frustratingly Easy Approach for Entity and Relation Extraction. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2021. p. 50-61.
* [19] Nayak T, Ng HT. Effective modeling of encoder-decoder architecture for joint entity and relation extraction. In: Proceedings of the AAAI conference on artificial intelligence. vol. 34; 2020. p. 8528-35.
* [20] Zeng X, Zeng D, He S, Liu K, Zhao J. Extracting relational facts by an end-to-end neural model with copy mechanism. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers); 2018. p. 506-14.
* [21] Giorgi J, Bader G, Wang B. A sequence-to-sequence approach for document-level relation extraction. In: Proceedings of the 21st Workshop on Biomedical Language Processing. Dublin, Ireland: Association for Computational Linguistics; 2022. p. 10-25. Available from: https://aclanthology.org/2022.bionlp-1.2.
* [22] Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I. Language models are unsupervised multitask learners; 2019. Available from: https://insightcivic.s3.us-east-1.amazonaws.com/language-models.pdf.
* [23] Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, et al. Language models are few-shot learners. Advances in neural information processing systems. 2020;33:1877-901.
* [24] Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, et al. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics. 2022;23(6).
* [25] Bolton E, Hall D, Yasunaga M, Lee T, Manning C, Liang P. BioMedLM; 2022. Available from: https://crfm.stanford.edu/2022/12/15/biomedlm.html.
* [26] National Organization for Rare Disorders (NORD). Rare Disease Database Frequently Asked Questions; 2019. Accessed: Month Day, Year. https://rarediseases.org/wp-content/uploads/2019/01/RDD-FAQ-2019.pdf.
* [27] Klimova B, Storek M, Valis M, Kuca K. Global view on rare diseases: a mini review. Current medicinal chemistry. 2017;24(29):3153-8.
* [28] Global Genes. Facts;. Accessed May 1, 2023. https://globalgenes.org/learn/rare-disease-facts/.
* [29] Gao L, Biderman S, Black S, Golding L, Hoppe T, Foster C, et al. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027. 2020.
* [30] Jurafsky D, Martin JH. Speech and Language Processing (3rd Edition); 2023. https://web.stanford.edu/~jurafsky/slp3/.
* [31] Ai X, Kavuluru R. End-to-End Models for Chemical-Protein Interaction Extraction: Better Tokenization and Span-Based Pipeline Strategies. arXiv preprint arXiv:2304.01344. 2023.
* [32] Bubeck S, Chandrasekaran V, Eldan R, Gehrke J, Horvitz E, Kamar E, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712. 2023.
#### 4.1.1 Universal enveloping algebras of Lie algebras
The universal enveloping algebra ${\mathcal{U}}(\mathfrak{g})$ of a Lie
algebra $\mathfrak{g}$ is the quotient of the tensor algebra
$\otimes(\mathfrak{g})$ by the two-sided ideal identifying the commutator of
generators with their Lie bracket, i.e. the associative ideal generated by all
elements of the form
$\hat{X}\otimes\hat{Y}-\hat{Y}\otimes\hat{X}-[\hat{X},\hat{Y}]$ with
$\hat{X},\hat{Y}\in\mathfrak{g}$.
The Poincaré-Birkhoff-Witt theorem asserts that the universal enveloping
algebra ${\mathcal{U}}(\mathfrak{g})$ of the Lie algebra $\mathfrak{g}$ is an
almost-commutative algebra whose associated graded algebra is isomorphic to
the symmetric algebra of $\mathfrak{g}$,
$\text{gr}\,{\mathcal{U}}(\mathfrak{g})\,\cong\,\odot(\mathfrak{g})\,.$ (93)
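Concretely, if $(\hat{X}_{1},\ldots,\hat{X}_{n})$ is an ordered basis of $\mathfrak{g}$, the theorem asserts (in its standard formulation) that the ordered monomials
$\hat{X}_{i_{1}}\circ\cdots\circ\hat{X}_{i_{k}}\,,\qquad i_{1}\leqslant i_{2}\leqslant\cdots\leqslant i_{k}\,,\quad k\in\mathbb{N}\,,$
form a basis of ${\mathcal{U}}(\mathfrak{g})$, while almost-commutativity is the filtration property $[\,{\mathcal{U}}_{p}(\mathfrak{g})\,,\,{\mathcal{U}}_{q}(\mathfrak{g})\,]\subset{\mathcal{U}}_{p+q-1}(\mathfrak{g})$, where ${\mathcal{U}}_{p}(\mathfrak{g})$ denotes the subspace spanned by products of at most $p$ generators.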
Note that the universal enveloping algebra ${\mathcal{U}}(\mathfrak{g})$ of a
Lie algebra $\mathfrak{g}$ is never simple, since the subalgebra
${\mathcal{U}}_{>0}(\mathfrak{g})\subset{\mathcal{U}}(\mathfrak{g})$ of
strictly positive degree is always an associative ideal.
Example (Abelian Lie algebra): Any vector space $V$ can be endowed with a
structure of Lie algebra with the trivial Lie bracket. The universal
enveloping algebra of the Abelian Lie algebra $V$ is isomorphic to the
symmetric algebra of $V$: ${\mathcal{U}}(V)\,\cong\,\odot(V)$ .
Remark: When dealing with Lie-Rinehart algebras $\mathfrak{L}$ in the next
subsection, it will be important to distinguish explicitly the universal
enveloping algebra ${\mathcal{U}}_{\mathbb{K}}(\mathfrak{L})$ of the Lie
algebra $\mathfrak{L}$ over the field $\mathbb{K}$ from the universal
enveloping algebra ${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$ of the Lie-
Rinehart algebra $\mathfrak{L}$ over the commutative algebra $\mathcal{A}$. In
the former case, the Lie-Rinehart algebra is simply seen as a Lie algebra
(over the field $\mathbb{K}$) while, in the latter case, the richer structure
of Lie-Rinehart algebra (over the commutative algebra $\mathcal{A}$) is taken
into account (see Subsection 4.1.3).
#### 4.1.2 Lie-Rinehart algebras and their associated bimodules
To any Lie-Rinehart algebra $\mathfrak{L}$ over a commutative algebra
$\mathcal{A}$, with anchor $\rho$, is associated another Lie-Rinehart algebra
over $\mathcal{A}$: the semidirect sum
${\mathfrak{B}}=\mathfrak{L}\inplus_{\rho}\mathfrak{A}$ of the Lie-Rinehart
algebra $\mathfrak{L}$ and the commutator algebra $\mathfrak{A}$.
Proof: Indeed, the anchor $\rho:\mathfrak{L}\to\mathfrak{der}({\mathcal{A}})$
of any Lie-Rinehart algebra $\mathfrak{L}$ over a commutative algebra
$\mathcal{A}$ is a representation of the Lie algebra $\mathfrak{L}$ on the
associative algebra $\mathcal{A}$. The corresponding Abelian Lie algebra
$\mathfrak{A}$ can be endowed trivially with a structure of $\mathcal{A}$-Lie
algebra (with trivial anchor and bracket). Therefore, the anchor of
$\mathfrak{L}$ provides a representation
$\rho:\mathfrak{L}\to\mathfrak{der}(\mathfrak{A})$ of the Lie-Rinehart algebra
$\mathfrak{L}$ on the $\mathcal{A}$-Lie algebra $\mathfrak{A}$. This allows to
define the semidirect sum
${\mathfrak{B}}=\mathfrak{L}\inplus_{\rho}\mathfrak{A}$. By construction, the
adjoint representation of the subalgebra $\mathfrak{L}\subset\mathfrak{B}$ on
the Abelian ideal ${\mathfrak{A}}$ is defined by the anchor $\rho$ of
$\mathfrak{L}$, i.e. $[\hat{X},\hat{f}]_{{}_{\mathfrak{B}}}:=\hat{X}[f]$ for
any $\hat{X}\in\mathfrak{L}$ and $f\in\mathcal{A}$. The left
$\mathcal{A}$-module structure of ${\mathfrak{B}}$ is defined in the obvious
way $g\cdot(f\oplus\hat{X}):=(g\cdot f)\oplus(g\cdot\hat{X})$. ∎
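Unpacking this construction, the bracket on ${\mathfrak{B}}$ reads explicitly, for $f,g\in\mathcal{A}$ and $\hat{X},\hat{Y}\in\mathfrak{L}$,
$[\,f\oplus\hat{X}\,,\,g\oplus\hat{Y}\,]_{{}_{\mathfrak{B}}}\,=\,\big{(}\,\hat{X}[g]-\hat{Y}[f]\,\big{)}\oplus[\hat{X},\hat{Y}]\,,$
which combines the trivial bracket on $\mathfrak{A}$, the bracket on $\mathfrak{L}$ and the action defined by the anchor.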
What is remarkable is that this Lie-Rinehart algebra
${\mathfrak{B}}={\mathfrak{A}}\niplus\mathfrak{L}$ has a richer structure than
the original Lie-Rinehart algebra $\mathfrak{L}$ in the sense that it is not
only a left $\mathcal{A}$-module but also an $\mathcal{A}$-bimodule, where the
right $\mathcal{A}$-module structure is defined via the action
$(f\oplus\hat{X})\bullet g\,:=\,\big{(}\,f\cdot
g+\hat{X}[g]\,\big{)}\oplus\big{(}\,g\cdot\hat{X}\,\big{)}\,.$ (94)
From this perspective, the anchor of $\mathfrak{L}$ can be seen as the
structure that relates the left and right $\mathcal{A}$-module structures of
${\mathfrak{A}}\niplus\mathfrak{L}$. If the left and right actions are denoted
by the same symbol $\circ$ in operatorial form, then the relation (94) can
be written in the more balanced form
$(\hat{f}\oplus\hat{X})\circ\hat{g}\,:=\,\big{(}\,\hat{f}\circ\hat{g}+\hat{X}[g]\,\big{)}\oplus\big{(}\,\hat{g}\circ\hat{X}\,\big{)}$,
which already suggests its later interpretation in the universal enveloping
algebra as arising from a commutator of an associative product.
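This interpretation can be checked directly: anticipating Subsection 4.1.3, where $\oplus$ becomes a sum and $\circ$ the associative product of the universal enveloping algebra, the commutation relation $\hat{X}\circ\hat{g}-\hat{g}\circ\hat{X}=\hat{X}[g]$ (cf. the compatibility condition (109) below) gives
$(\hat{f}+\hat{X})\circ\hat{g}\,=\,\big{(}\,\hat{f}\circ\hat{g}+\hat{X}[g]\,\big{)}\,+\,\hat{g}\circ\hat{X}\,,$
reproducing the right action (94).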
Example (Lie-Rinehart algebra of smooth vector fields) : In the case when
$\mathcal{A}$ and $\mathfrak{L}$ are respectively the structure algebra
${\mathcal{C}}^{\infty}(M)$ and the Lie algebra ${\mathfrak{X}}(M)$ of vector
fields on a manifold $M$, the associated Lie-Rinehart algebra
${\mathfrak{B}}={\mathfrak{A}}\niplus\mathfrak{L}$ is isomorphic to the Lie-
Rinehart algebra
${\mathfrak{D}}^{1}(M)\cong{\mathcal{C}}^{\infty}(M)\niplus{\mathfrak{X}}(M)$
of first-order differential operators on $M$.
#### 4.1.3 Universal enveloping algebras of Lie-Rinehart algebras
The universal enveloping algebra ${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$
of a Lie-Rinehart algebra $\mathfrak{L}$ over the commutative algebra
$\mathcal{A}$ can be defined (see e.g. [11] and refs therein) as the quotient
of the universal enveloping algebra
${\mathcal{U}}_{\mathbb{K}}({\mathfrak{B}})$ of the Lie algebra
${\mathfrak{B}}=\mathfrak{A}\niplus\mathfrak{L}$ by the two-sided ideal
generated by the left action of $\mathcal{A}$ on ${\mathfrak{B}}$, i.e. by the
associative ideal spanned by elements proportional to
$g\cdot(f\oplus\hat{X})-(g\cdot f)\oplus(g\cdot\hat{X})$ for some
$f,g\in\mathcal{A}$ and $\hat{X}\in\mathfrak{L}$.
Example (Smooth differential operators) : The almost-commutative algebra
${\mathcal{D}}(M)$ of differential operators on $M$ is the universal
enveloping algebra of the Lie-Rinehart algebra ${\mathfrak{X}}(M)$ of vector
fields on $M$, i.e.
${\mathcal{U}}_{C^{\infty}(M)}\big{(}{\mathfrak{X}}(M)\big{)}\cong{\mathcal{D}}(M)\,.$
(95)
Example (Free modules generated by Lie algebras) : Consider the simplest
examples of $\mathcal{A}$-Lie algebras: free ${\mathcal{A}}$-modules generated
by a Lie algebra $\mathfrak{g}$. The universal enveloping algebra of such an
$\mathcal{A}$-Lie algebra $\mathfrak{L}={\mathcal{A}}\otimes\mathfrak{g}$ is
the free ${\mathcal{A}}$-module generated by the universal enveloping algebra
of the Lie algebra $\mathfrak{g}$,
${\mathcal{U}}_{\mathcal{A}}({\mathcal{A}}\otimes\mathfrak{g})\cong{\mathcal{A}}\otimes{\mathcal{U}}_{\mathbb{K}}(\mathfrak{g})\,.$
(96)
Example (Invariant differential operators) : Consider a Klein geometry, i.e.,
a pair made of a Lie group $G$ and a closed Lie subgroup $H\subset G$ such
that the coset space $G/H$ is connected. It defines a principal $H$-bundle $G$
over $G/H$ whose Atiyah algebroid is the vector bundle $\frac{TG}{H}$ over
$G/H$. Its global sections are $H$-invariant vector fields on $G$. They span
the Atiyah algebra $\mathfrak{X}(G)^{H}=\Gamma(TG/H)$. The universal
enveloping algebra of the Atiyah algebra of such a Klein geometry $H\subset G$
is spanned by $H$-invariant differential operators on $G$ [9, Example 4.26]
${\cal U}_{C^{\infty}(G/H)}\big{(}\mathfrak{X}(G)^{H}\big{)}\simeq{\cal
D}(G)^{H}\simeq{\cal U}_{C^{\infty}(G)}\big{(}\mathfrak{X}(G)\big{)}^{H}.$
(97)
Counter-example (Covariant differential operators on a module) : Consider an
$\mathcal{A}$-module V. The tensor algebra
$\otimes_{\mathcal{A}}(\textsc{V})=\oplus_{r\in\mathbb{N}}\otimes^{r}_{\mathcal{A}}\textsc{V}$
is an $\mathcal{A}$-algebra, $\mathbb{N}$-graded by the rank of tensors. A
covariant derivative on an $\mathcal{A}$-module V can be defined equivalently
as a derivation of the tensor algebra $\otimes_{\mathcal{A}}\textsc{V}$
preserving the rank of tensors. Consider the algebra
${\mathcal{D}}_{\mathcal{A}}(\otimes_{\mathcal{A}}\textsc{V})=\oplus_{q\in\mathbb{Z}}{\mathcal{D}}_{q}(\otimes_{\mathcal{A}}\textsc{V})$
of differential operators on the tensor algebra
$\otimes_{\mathcal{A}}(\textsc{V})$. It is $\mathbb{Z}$-graded: an element of
${\mathcal{D}}_{q}(\otimes_{\mathcal{A}}\textsc{V})$ increases the tensor rank
by $q$. A differential operator on the tensor algebra
$\otimes_{\mathcal{A}}(\textsc{V})$ preserving the rank of tensors will be
called a covariant differential operator on the $\mathcal{A}$-module V, since
such operators are, in a sense, higher-order generalisations of covariant
derivatives. The
almost-commutative subalgebra
${\mathcal{D}}_{0}(\otimes_{\mathcal{A}}\textsc{V})$ spanned by all covariant
differential operators on the $\mathcal{A}$-module V will be denoted
${\mathcal{C}D}_{\mathcal{A}}(\textsc{V})$. In general, it is not isomorphic
to the universal enveloping algebra
${\mathcal{U}}_{\mathcal{A}}(\,\mathfrak{cder}_{\mathcal{A}}(\textsc{V})\,)$
of the Atiyah algebra $\mathfrak{cder}_{\mathcal{A}}(\textsc{V})$ of covariant
derivatives [9, Example 4.28]. However, if the first-order covariant
differential operators generate the whole algebra of covariant differential
operators, then it is isomorphic to a quotient of the universal enveloping
algebra.
Remember that the universal enveloping algebra ${\mathcal{U}}(\mathfrak{g})$
of a Lie algebra $\mathfrak{g}$ is never simple. This remains true for the
universal enveloping algebra ${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$ of an
$\mathcal{A}$-Lie algebra $\mathfrak{L}$. However, for a Lie-Rinehart algebra
$\mathfrak{L}$ with non-trivial anchor the universal enveloping algebra
${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$ may be simple.
Example (Lie-Rinehart algebra of polynomial differential operators) : The Weyl
algebra ${\mathcal{D}}(A)$ of polynomial differential operators on the affine
space $A$, modeled on the vector space $V$, is simple although it is
isomorphic to the universal enveloping algebra of the Lie-Rinehart algebra
$\mathfrak{der}(\odot V^{*})$ of polynomial vector fields on $A$,
${\mathcal{U}}_{\odot V^{*}}\big{(}\mathfrak{der}(\odot
V^{*})\,\big{)}\cong{\mathcal{D}}(A)\,.$ (98)
#### 4.1.4 Poincaré-Birkhoff-Witt theorem
In order to identify concretely the universal enveloping algebra of a Lie-
Rinehart algebra, the most important result to know is the generalisation [20]
by Rinehart of the Poincaré-Birkhoff-Witt theorem: if $\mathfrak{L}$ is a
projective left $\mathcal{A}$-module, then the universal enveloping algebra
${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$ is an almost-commutative algebra
whose associated graded algebra is isomorphic to the symmetric algebra of $\mathfrak{L}$
over $\mathcal{A}$,
$\text{gr}\,{\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})\,\cong\,\odot_{\mathcal{A}}(\mathfrak{L})\,.$
(99)
It is important to emphasise that the symmetric algebra
$\odot_{\mathcal{A}}(\mathfrak{L})$ over ${\mathcal{A}}$ is much smaller than
the usual symmetric algebra $\odot_{\mathbb{K}}(\mathfrak{L})$ over
${\mathbb{K}}$, because the former takes into account the full
${\mathcal{A}}$-linearity of the corresponding tensor product (while the
latter only takes into account its ${\mathbb{K}}$-linearity).
Example (Smooth differential operators) : The almost-commutative algebra
${\mathcal{D}}(M)$ of differential operators on $M$ is the universal
enveloping algebra of the Lie-Rinehart algebra ${\mathfrak{X}}(M)$ of vector
fields on $M$. From the generalised Poincaré-Birkhoff-Witt theorem, one
recovers that the classical limit of the almost-commutative algebra
${\mathcal{D}}(M)$ of differential operators on $M$ is isomorphic to the
Schouten algebra ${\mathcal{S}}(M)\cong\Gamma(\odot TM)$ of principal symbols
on $M$, i.e.
$\text{gr}\,{\mathcal{D}}(M)\cong\odot_{{}_{C^{\infty}(M)}}\big{(}{\mathfrak{X}}(M)\big{)}\cong\mathcal{S}(M)\,.$
(100)
Example (Polynomial vs formal differential operators) : The Grothendieck
algebra of differential operators acting on the commutative algebra
$\mathcal{A}=\odot(V^{*})$ of polynomials on the affine space $A$
(respectively, the commutative algebra
$\mathcal{A}=\overline{\odot}(V^{*}):=\odot(V)^{*}$ of formal power series at
the origin of the vector space $V$) is the universal enveloping algebra of the
Lie-Rinehart algebra $\mathfrak{der}({\mathcal{A}})$ of polynomial
(respectively, formal) vector fields $\hat{X}=X^{a}(y)\,\partial_{a}$:
${\mathcal{U}}_{\mathcal{A}}\big{(}\mathfrak{der}({\mathcal{A}})\,\big{)}={\mathcal{D}}({\mathcal{A}})\qquad\text{for}\quad{\mathcal{A}}\,\,=\,\,\text{either}\,\,\odot(V^{*})\,\,\text{or}\,\,\overline{\odot}(V^{*})\,.$
(101)
The Poincaré-Birkhoff-Witt theorem leads to the isomorphism
$\text{gr}\,{\mathcal{D}}({\mathcal{A}})\,\cong\,{\mathcal{A}}\,\otimes\,\odot(V)\qquad\text{for}\quad{\mathcal{A}}\,\,=\,\,\text{either}\,\,\odot(V^{*})\,\,\text{or}\,\,\overline{\odot}(V^{*})\,,$
(102)
since $\mathfrak{der}({\mathcal{A}})\cong{\mathcal{A}}\otimes V$ as vector
spaces. The isomorphism (102) is the property that the commutative algebra
$\text{gr}\,{\mathcal{D}}({\mathcal{A}})$ of symbols
$X=\sum\limits_{r=0}^{k}X^{a_{1}\cdots a_{r}}(y)\,p_{a_{1}}\cdots p_{a_{r}}$
(103)
of differential operators
$\hat{X}=\sum\limits_{r=0}^{k}X^{a_{1}\cdots
a_{r}}(y)\,\partial_{a_{1}}\cdots\partial_{a_{r}}$ (104)
is a free $\mathcal{A}$-module with all monomials $p_{a_{1}}\cdots p_{a_{r}}$
as a holonomic basis, i.e. the corresponding components $X^{a_{1}\cdots
a_{r}}(y)$ are polynomials (respectively, formal power series). In particular,
the symbols of polynomial differential operators are polynomial functions on
the cotangent bundle $T^{*}V=V\oplus V^{*}$, in agreement with the isomorphism
$\odot(V\oplus V^{*})\,\cong\,\odot(V)\,\otimes\,\odot(V^{*})\,.$ (105)
#### 4.1.5 Universality property
An equivalent (but more abstract) definition of the universal enveloping
algebra ${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$ of the Lie-Rinehart
algebra $\mathfrak{L}$ over the commutative algebra $\mathcal{A}$ is by the
following universality property (see e.g. [11, 21]). (Apart from the well-
known constructions of Rinehart [20] and Huebschmann [21], the universal
enveloping algebra of a Lie-Rinehart algebra admits other, equivalent,
realisations; see e.g. [40, 41] and refs therein.)
Let $\mathfrak{L}$ be a Lie-Rinehart algebra over $\mathcal{A}$ with $\cdot$
denoting the left action of $\mathcal{A}$ on $\mathfrak{L}$. Let $\cal U$ be
an associative algebra with product denoted by $\circ$ and with commutator
algebra denoted by $\mathfrak{U}$. If
$a\,:\,{\mathcal{A}}\to{\mathcal{U}}\,:\,f\mapsto a(f)$ (106)
is a morphism of associative algebras and
$\ell\,:\,\mathfrak{L}\to\mathfrak{U}\,:\,\hat{X}\mapsto\ell(\hat{X})$ (107)
is a morphism of Lie algebras, such that they satisfy the compatibility
conditions ($\forall f\in\mathcal{A}$, $\forall\hat{X}\in\mathfrak{L}$):
$\ell(f\cdot\hat{X})\,=\,a(f)\circ\ell(\hat{X})\,,$ (108)
and
$\ell(\hat{X})\,\circ\,a(f)\,-\,a(f)\,\circ\,\ell(\hat{X})\,=\,a\big{(}\,\hat{X}[f]\,\big{)}\,,$
(109)
then there exists a unique extension
$u\,:\,{\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})\to{\mathcal{U}}\,,\qquad
u|_{\mathcal{A}}=a\,,\quad u|_{\mathfrak{L}}=\ell\,,$ (110)
which is a morphism of associative algebras.
An important corollary of this universality property is that there is a one-
to-one correspondence between modules of a Lie-Rinehart algebra
$\mathfrak{L}$ over $\mathcal{A}$ and modules of its universal enveloping
algebra ${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$. In fact, if a vector
space V carries a representation of a Lie-Rinehart algebra $\mathfrak{L}$ over
$\mathcal{A}$, then it is both a left $\mathcal{A}$-module (i.e. there is a
morphism $a:{\mathcal{A}}\to\text{End}_{\mathcal{A}}(\textsc{V})$ of
associative algebras) and a left $\mathfrak{L}$-module (i.e. there is a
morphism $\nabla:\mathfrak{L}\to\mathfrak{cder}_{\mathcal{A}}(\textsc{V})$ of
Lie-Rinehart algebras). Setting $\ell=\nabla$ and
${\mathcal{U}}={\mathcal{D}}_{\mathcal{A}}(\textsc{V})$, one finds that there
exists a unique extension
${\mathcal{U}}(\nabla):{\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})\to{\mathcal{D}}_{\mathcal{A}}(\textsc{V})$
as a morphism of associative algebras. This makes V a left
${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$-module.
Note that the extension of a faithful representation
$\nabla:\mathfrak{L}\hookrightarrow\mathfrak{cder}_{\mathcal{A}}(\textsc{V})$
of a Lie-Rinehart algebra to a representation
${\mathcal{U}}(\nabla):{\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})\to{\mathcal{D}}_{\mathcal{A}}(\textsc{V})$
of the universal enveloping algebra may not be faithful. However, the
representation of the almost-commutative algebra
${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})\,/\,\text{Ker}\,{\mathcal{U}}(\nabla)$
defined as the quotient of the universal enveloping algebra
${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$ by the kernel of
${\mathcal{U}}(\nabla)$ will be faithful by construction. This quotient will
be called the enveloping algebra of the Lie-Rinehart algebra $\mathfrak{L}$
associated to the faithful representation $\nabla$. In this sense, any
faithful representation of a Lie-Rinehart algebra (such as a flat connection)
can be lifted to a faithful representation of the corresponding enveloping
algebra. And, conversely, any (faithful) representation of an almost-
commutative algebra restricts to a (faithful) representation of a Lie-Rinehart
algebra.
#### 4.1.6 Almost-commutative algebras and associative algebroids
Consider an almost-commutative algebra $\cal U$ and let us denote by
$\mathfrak{U}$ its commutator algebra. Recall that the component of degree
zero is a commutative algebra ${\mathcal{U}}_{0}$ and that the component of
degree one is a Lie-Rinehart algebra $\mathfrak{U}_{1}$ over
${\mathcal{U}}_{0}$. Furthermore, the quotient
$\mathfrak{U}_{1}/\mathfrak{U}_{0}\subset\text{gr}\,\mathfrak{U}$ is a Lie-
Rinehart subalgebra of the grade one component of the classical limit
$\text{gr}\,{\mathcal{U}}$. The Lie-Rinehart algebra $\mathfrak{U}_{1}$ over
${\mathcal{U}}_{0}$ is the extension (41) of the Lie-Rinehart subalgebra
$\mathfrak{U}_{1}/\mathfrak{U}_{0}\subset\text{gr}\,\mathfrak{U}$ by the
Abelian Lie-Rinehart subalgebra $\mathfrak{U}_{0}\subset\mathfrak{U}$.
Another corollary of the universality property is that, for any almost-
commutative algebra $\cal U$, there exists a morphism of associative algebras
from the universal enveloping algebra
$\mathcal{U}_{{}_{{\mathcal{U}}_{0}}}(\mathfrak{U}_{1}/\mathfrak{U}_{0})$ of
the Lie-Rinehart algebra $\mathfrak{U}_{1}/\mathfrak{U}_{0}$ over
${\mathcal{U}}_{0}$ to the almost-commutative algebra $\cal U$ (see e.g. [42,
Section 2.1] for more details). In this sense, “almost-commutative” algebras
could be called “associative-Rinehart” algebras, since almost-commutative
algebras are to Lie-Rinehart algebras what associative algebras are to Lie
algebras. (This statement can even be made precise in functorial language:
the “commutator” functor associating a Lie-Rinehart algebra to any almost-
commutative algebra is right-adjoint to the “universal enveloping” functor
associating an almost-commutative algebra to any Lie-Rinehart algebra [42,
Proposition 2.9].) Accordingly, an almost-commutative algebra $\mathcal{U}$
whose degree zero component $\mathcal{U}_{0}$ is the structure algebra
$C^{\infty}(M)$ of a manifold $M$ and such that each component
$\mathcal{U}_{k}$ is locally-free of finite-rank, could be called an
associative algebroid over $M$. In particular, an associative algebroid is the
space of sections of a filtered vector bundle over $M$ with two important
vector sub-bundles: the unit bundle $M\times\mathbb{R}$ at degree zero, and a
Lie algebroid $\mathbb{A}$ at degree one. As argued in the introduction, it is
tempting to speculate that associative algebroids should be the proper arena
for discussing geometrically higher-spin gauge symmetries and connections.
An almost-commutative algebra $\cal U$ generated by its component
${\mathcal{U}}_{1}$ of degree one will be called an enveloping algebra of the
Lie-Rinehart algebra $\mathfrak{U}_{1}/\mathfrak{U}_{0}$ over the commutative
algebra ${\mathcal{U}}_{0}$. Another corollary of the universality property is
that any enveloping algebra $\cal U$ of a Lie-Rinehart algebra $\mathfrak{L}$
over the commutative algebra ${\mathcal{A}}$ is isomorphic to a quotient of
the universal enveloping algebra ${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$
of the Lie-Rinehart algebra $\mathfrak{L}$ over $\mathcal{A}$. In fact, the
morphism ${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})\to\cal U$ of associative
algebras is surjective, since $\cal U$ is generated by its component
${\mathcal{U}}_{1}$ of order one; therefore one has a short exact sequence of
associative algebra morphisms
$0\to{\mathcal{I}}\stackrel{{\scriptstyle
i}}{{\hookrightarrow}}{\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})\stackrel{{\scriptstyle\pi}}{{\twoheadrightarrow}}{\mathcal{U}}\to
0$ (111)
where the associative ideal ${\mathcal{I}}$ of
${\mathcal{U}}_{\mathcal{A}}(\mathfrak{L})$ is the kernel of $\pi$.
### 4.2 Weyl algebra as enveloping algebra of Heisenberg algebra
Let $V$ be a finite-dimensional vector space. Its cotangent bundle
$T^{*}V\cong V\oplus V^{*}$ is endowed with a canonical symplectic two-form
$\Omega$ defined by $\Omega(v\oplus\alpha,w\oplus\beta)=\alpha(w)-\beta(v)$
for all $v,w\in V$ and $\alpha,\beta\in V^{*}$. Conversely, any finite-
dimensional symplectic vector space $W$ admits a choice of polarisation
$W=V\oplus V^{*}$.
#### 4.2.1 Heisenberg group and algebra
Obviously, the vector space $V\oplus V^{*}$ can be seen as an additive Abelian
Lie group. The Heisenberg group $H(V)$ is a nontrivial central extension
$0\to{\mathbb{K}}\,\hookrightarrow\,H(V)\,\twoheadrightarrow\,V\oplus V^{*}\to
0\,.$ (112)
of the Abelian Lie group $V\oplus V^{*}$ by the Abelian Lie group
${\mathbb{K}}$. It is the vector space $V\oplus V^{*}\oplus{\mathbb{K}}$
endowed with the product
$\displaystyle(v,\alpha,t)\cdot(w,\beta,u)=\Big{(}\,v+w\,,\alpha+\beta\,,\,t+u+\alpha(w)-\beta(v)\,\Big{)}\,,$
(113) $\displaystyle\qquad\forall\,v,w\in V\,,\quad\forall\,\alpha,\beta\in
V^{*},\quad\forall\,t,u\in{\mathbb{K}}\,.$
The Heisenberg group $H(V)$ is a non-Abelian Lie group whose center is
${\mathbb{K}}$.
The Heisenberg algebra $\mathfrak{h}(V)$ is the Lie algebra of the Heisenberg
group $H(V)$. It is the vector space $V\oplus V^{*}\oplus{\mathbb{K}}$ endowed
with the Lie bracket
$\displaystyle\big{[}\,(v,\alpha,t)\,,\,(w,\beta,u)\,\big{]}\,=\,\big{(}\,0,0,\,\alpha(w)-\beta(v)\,\big{)}\,,$
(114) $\displaystyle\qquad\forall\,v,w\in V\,,\quad\forall\,\alpha,\beta\in
V^{*},\quad\forall\,t,u\in{\mathbb{K}}\,.$
Given a basis $\{e_{a}\}$ of the vector space $V$, the latter becomes
isomorphic to ${\mathbb{K}}^{n}$ in which case the Heisenberg group
(respectively, algebra) is often denoted by physicists $H_{2n}$ (respectively,
$\mathfrak{h}_{2n}$). Let $\texttt{c}$ denote the central element of $\mathfrak{h}(V)$
corresponding to the unit element $1\in\mathbb{K}$. In the basis
$\{e_{a},e^{*b},\texttt{c}\}$ of $\mathfrak{h}_{2n}$, the only nontrivial
Lie brackets are given by $[e^{*b},e_{a}]=\delta_{a}^{b}\,\texttt{c}$.
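As a quick sanity check, these brackets can be realised by strictly upper-triangular matrices. The following minimal sketch (Python/NumPy, for the hypothetical case $n=1$) verifies $[e^{*},e]=\texttt{c}$ and the centrality of $\texttt{c}$ in this realisation.

```python
# Minimal sketch: the n = 1 Heisenberg algebra via 3x3 matrices
import numpy as np

def E(i, j, n=3):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

e_star, e, c = E(0, 1), E(1, 2), E(0, 2)
bracket = lambda a, b: a @ b - b @ a

assert np.allclose(bracket(e_star, e), c)   # [e*, e] = c
assert np.allclose(bracket(c, e), 0)        # c is central
assert np.allclose(bracket(c, e_star), 0)
print("Heisenberg brackets verified")
```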
#### 4.2.2 Unitary irreducible representations
The theorem of Stone and von Neumann asserts (respectively, implies) that all
unitary irreducible representations of the real Heisenberg group $H(V)$
(respectively, of the real Heisenberg algebra $\mathfrak{h}(V)$ ) which are
not trivial on the center ${\mathbb{R}}$ are unitarily equivalent (up to a
scale, i.e. up to a rescaling of the eigenvalue of the central element).
By Schur’s lemma, all unitary irreducible modules of the Heisenberg algebra
$\mathfrak{h}(V)$ are eigenspaces of the central element c. All unitary
irreducible modules of the Heisenberg algebra $\mathfrak{h}(V)$ for non-
vanishing real eigenvalue look exactly the same, so one may take $1$ as
eigenvalue. This faithful representation of the Lie algebra $\mathfrak{h}(V)$
extends to a representation of its universal enveloping algebra
${\mathcal{U}}\big{(}\,\mathfrak{h}(V)\,\big{)}$ which is not faithful. The
associative ideal
$(\texttt{c}-1)\,{\mathcal{U}}\big{(}\,\mathfrak{h}(V)\,\big{)}$ is the
annihilator of the corresponding unitary irreducible $\mathfrak{h}(V)$-module.
The Weyl algebra ${\mathcal{D}}(A)$ is isomorphic to the quotient
${\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}\,/\,(\texttt{c}-1){\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}$
(115)
of the universal enveloping algebra
${\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}$ of the Heisenberg algebra
$\mathfrak{h}(V)$ by the primitive ideal
$(\texttt{c}-1){\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}$. In other words,
the Weyl algebra is isomorphic to the enveloping algebra of the Heisenberg
algebra associated to one of its representations on a unitary irreducible
module, non-trivial on the centre.
The classical limit $\text{gr}\,{\mathcal{D}}(A)$ of the Weyl algebra (seen as
an almost-commutative algebra) is isomorphic to the Schouten algebra (105) of
polynomial functions on the cotangent space $T^{*}V\cong V\oplus V^{*}$. This
algebra (105) of polynomial symbols is isomorphic to the quotient of the
universal enveloping algebra ${\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}$ of
the Heisenberg algebra by the associative ideal
$\texttt{c}\,{\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}$,
$\odot(V\oplus
V^{*})\cong{\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}\,/\,\texttt{c}\,{\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}\,.$
(116)
In fact, this quotient amounts to taking the classical limit, in which
position and momenta commute with each other.
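The two quotients can be illustrated side by side in a minimal sketch (Python/SymPy): representing the Heisenberg algebra on polynomials with $\texttt{c}\mapsto 1$ yields the Weyl algebra relation $[\partial_{y},y]=1$ of (115), while at the level of symbols ($\texttt{c}\mapsto 0$, as in (116)) position and momentum commute.

```python
# Minimal sketch: c -> 1 gives the Weyl relation, c -> 0 the classical limit
import sympy as sp

y, p = sp.symbols('y p')      # p: the classical symbol of d/dy
u = sp.Function('u')(y)

# quantum side (c -> 1): [d/dy, y] acting on a test function u(y)
print(sp.simplify(sp.diff(y*u, y) - y*sp.diff(u, y)))   # -> u(y), i.e. [d/dy, y] = 1

# classical side (c -> 0): symbols live in the commutative algebra K[y, p]
print(sp.expand(p*y - y*p))                             # -> 0
```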
The many faces of Weyl algebras
Consider an affine space $A$ modeled on a vector space $V$. The Weyl algebra
${\mathcal{D}}(A)$ can be defined in various equivalent ways as:
$\bullet$ the Grothendieck algebra ${\mathcal{D}}(\odot V^{*})$ of the
commutative algebra of polynomial functions on $A$,
$\bullet$ the universal enveloping algebra ${\mathcal{U}}_{\odot
V^{*}}\big{(}\mathfrak{der}(\odot V^{*})\,\big{)}$ of the Lie-Rinehart algebra
of polynomial vector fields on $A$,
$\bullet$ the enveloping algebra
$\frac{{\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}}{(\texttt{c}-1){\mathcal{U}}\big{(}\mathfrak{h}(V)\big{)}}$
of the Heisenberg algebra associated to one of its representations on a
unitary irreducible module non-trivial on the centre.
### 4.3 Universal enveloping algebras of semidirect sums
For the sake of simplicity of the discussion, let us focus first on the
example of Lie algebras over a field $\mathbb{K}$.
The universal enveloping algebra ${\mathcal{U}}(\mathfrak{g})$ of a semidirect
sum
$\mathfrak{g}=\mathfrak{i}\niplus\mathfrak{h}$ (117)
of the Lie ideal $\mathfrak{i}\subset\mathfrak{g}$ and the Lie subalgebra
$\mathfrak{h}\subset\mathfrak{g}$ is isomorphic to the smash product of the
respective universal enveloping algebras ${\mathcal{U}}(\mathfrak{i})$ and
${\mathcal{U}}(\mathfrak{h})$ [43, Subsection 1.7.11],
${\mathcal{U}}(\mathfrak{i}\niplus\mathfrak{h})\,\cong\,{\mathcal{U}}(\mathfrak{i})\rtimes{\mathcal{U}}(\mathfrak{h})\,,$
(118)
where the action of $\mathfrak{h}$ on ${\mathcal{U}}(\mathfrak{i})$ arises via
the Leibniz rule from the representation of $\mathfrak{h}$ on $\mathfrak{i}$.
This result admits a generalisation [44, 45] to the case of a linearly-split
extension $\mathfrak{g}$ of the Lie algebra $\mathfrak{h}$ by the ideal
$\mathfrak{i}$ (in other words, the arrows in the splitting
$0\leftarrow\mathfrak{i}\twoheadleftarrow\mathfrak{g}\hookleftarrow\mathfrak{h}\leftarrow
0$ are morphisms of vector spaces only), in which case the symbol $\niplus$
stands for the “curved” semidirect sum [9, Definition 1.7] while the symbol
$\rtimes$ in (118) stands for the “cross” product [44, Definition 4.1]. The
abstract definitions of the smash and cross products will not be reviewed
here, because they involve concepts from bialgebra theory that are beyond the
scope of the present text (for those interested, see e.g. [46] for a thorough
introduction to bialgebras, Hopf algebras, etc.). Anyway, in order to
understand the meaning of (118), it is enough to appreciate that the
generalised Poincaré-Birkhoff-Witt theorem implies that
$\text{gr}\,{\mathcal{U}}(\mathfrak{i}\niplus\mathfrak{h})\,\,\cong\,\,\odot(\mathfrak{i}\oplus\mathfrak{h})\,\,\cong\,\,\odot(\mathfrak{i})\,\otimes\,\odot(\mathfrak{h})\,,$
(119)
where the associated graded algebra is with respect to both filtrations, i.e.
of ${\mathcal{U}}(\mathfrak{i})$ and of ${\mathcal{U}}(\mathfrak{h})$. More
concretely, there is a natural choice of ordering for
${\mathcal{U}}(\mathfrak{i}\niplus\mathfrak{h})$: the “normal” ordering where
the dependence in $\mathfrak{i}$ is factored on the left while the dependence
on $\mathfrak{h}$ is factored on the right. The product of two normal-ordered
elements of ${\mathcal{U}}(\mathfrak{i}\niplus\mathfrak{h})$ is in general no
longer normal-ordered. Restoring the normal ordering requires recursively
computing commutators of the form
$[{\mathcal{U}}(\mathfrak{h}),{\mathcal{U}}(\mathfrak{i})]$. In some sense,
the abstract notion of smash product is simply a way to formalise the
systematic calculus (use of the Leibniz rule, etc.) involved in the normal
ordering of such expressions.
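As an illustration of this recursive calculus, the following toy sketch (Python; the two-dimensional Lie algebra with $[D,P]=P$, $\mathfrak{i}={\rm span}(P)$ and $\mathfrak{h}={\rm span}(D)$, is an assumption chosen only for simplicity) normal-orders an arbitrary word in $D$ and $P$ by repeatedly applying the rewriting rule $D\circ P = P\circ D + P$.

```python
# Toy normal-ordering calculus in U(i + h) for [D, P] = P
def normal_order(word):
    """Rewrite a word over {'D','P'} into normal-ordered form (P's on the
    left), returning a dict {normal-ordered word: integer coefficient}."""
    for k in range(len(word) - 1):
        if word[k] == 'D' and word[k+1] == 'P':
            out = {}
            for piece in (word[:k] + 'PD' + word[k+2:],   # P D term
                          word[:k] + 'P' + word[k+2:]):   # [D, P] = P term
                for w, c in normal_order(piece).items():
                    out[w] = out.get(w, 0) + c
            return out
    return {word: 1}

print(normal_order('DDP'))   # {'PDD': 1, 'PD': 2, 'P': 1}: D^2 P = P D^2 + 2 P D + P
```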
Example (Direct sum) : The universal enveloping algebra
${\mathcal{U}}(\mathfrak{g})$ of a direct sum
$\mathfrak{g}=\mathfrak{h_{1}}\oplus\mathfrak{h_{2}}$ (120)
of two Lie algebras $\mathfrak{h}_{1}$ and $\mathfrak{h}_{2}$ is isomorphic to
the tensor product of the respective universal enveloping algebras
${\mathcal{U}}(\mathfrak{h}_{1})$ and ${\mathcal{U}}(\mathfrak{h}_{2})$,
${\mathcal{U}}(\mathfrak{h}_{1}\oplus\mathfrak{h}_{2})\,\cong\,{\mathcal{U}}(\mathfrak{h}_{1})\otimes{\mathcal{U}}(\mathfrak{h}_{2})\,.$
(121)
This obvious result corresponds to the isomorphism (118) for the case of a
trivial representation.
Interestingly, the factorisation (118) admits a generalisation for Lie-
Rinehart algebras [9]: for any given curved (respectively, flat) connection,
that is, a linear (respectively, Lie-Rinehart) splitting of a Lie-Rinehart
algebra extension (i.e. a generalised connection), a crossed (resp. smash)
product decomposition of the associated universal enveloping algebra is
provided, and vice versa. As a geometric example for Lie algebroids, the
associative algebra generated by the invariant vector fields on the total
space of a principal bundle is described as a crossed product of the algebra
generated by the vertical ones and the algebra of differential operators on
the base. Such a factorisation can be thought of as an alternative
characterisation of an infinitesimal connection on a principal bundle. Its
interest for higher-spin geometry is that such an algebraic characterisation
might admit natural generalisations adapted to the characterisation of higher-
spin connections, e.g. by relaxing in [9, Theorem 3.10] the condition that the
coproduct (hence the filtration) of the universal enveloping algebra is
preserved.
## Acknowledgments
I would like to thank Damien Calaque for pointing out to me (a long time ago)
the relevance of Lie-Rinehart algebras for properly defining the universal
enveloping algebra of the Lie algebra of vector fields. I am also very grateful to
Niels Kowalzig and Paolo Saracco for our collaboration on the universal
enveloping algebras of Lie-Rinehart algebras, from which I learned so much.
Finally, I acknowledge Thomas Basile for his patient reading and useful
comments on some early version of these notes.
The author acknowledges support of the Institut Henri Poincaré (UAR 839 CNRS-
Sorbonne Université) and LabEx CARMIN (ANR-10-LABX-59-01) during his
participation in the trimester “Higher Structures in Geometry and Mathematical
Physics” held at the Institut Henri Poincaré (April-July 2023).
## References
* [1] K. Mackenzie, Lie groupoids and Lie algebroids in differential geometry (Cambridge University Press, 1987).
* [2] K. Mackenzie, General theory of Lie groupoids and Lie algebroids (Cambridge University Press, 2005).
* [3] I. Moerdijk and J. Mrcun, Introduction to Foliations and Lie Groupoids (Cambridge University Press, 2003)
* [4] M. Crampin and D. Saunders, Cartan geometries and their symmetries: a Lie algebroid approach (Atlantis Press, 2016).
* [5] M. Crainic, R. L. Fernandes, “Lectures on Integrability of Lie Brackets,” Geometry & Topology Monographs 17 (2011) 1 [arXiv:math/0611259].
* [6] M. A. Vasiliev, “Higher spin gauge theories in four-dimensions, three-dimensions, and two-dimensions,” Int. J. Mod. Phys. D 5 (1996) 763 [hep-th/9611024]; “Higher spin gauge theories in various dimensions,” Fortsch. Phys. 52 (2004) 702 [hep-th/0401177]; “Higher spin gauge theories in any dimension,” Comptes Rendus Physique 5 (2004) 1101 [hep-th/0409260];
X. Bekaert, S. Cnockaert, C. Iazeolla and M. A. Vasiliev, “Nonlinear higher
spin theories in various dimensions,” hep-th/0503128;
V. E. Didenko and E. D. Skvortsov, “Elements of Vasiliev theory,”
arXiv:1401.2975 [hep-th];
M. A. Vasiliev, “Higher-spin theory and space-time metamorphoses,” Lect. Notes
Phys. 892 (2015) 227 [arXiv:1404.1948 [hep-th]];
D. Ponomarev, “Basic introduction to higher-spin theories,” Int. J. Theor.
Phys. 62 (2023) 146 [arXiv:2206.15385 [hep-th]].
* [7] D. Sorokin, “Introduction to the classical theory of higher spins,” AIP Conf. Proc. 767 (2005) 172 [hep-th/0405069];
X. Bekaert, N. Boulanger and P. Sundell, “How higher-spin gravity surpasses
the spin two barrier: no-go theorems versus yes-go examples,” Rev. Mod. Phys.
84 (2012) 987 [arXiv:1007.0435 [hep-th]];
R. Rahman, “Higher Spin Theory - Part I,” PoS ModaveVIII (2012) 004
[arXiv:1307.3199 [hep-th]]. R. Rahman and M. Taronna, “From Higher Spins to
Strings: A Primer,” arXiv:1512.07932 [hep-th];
P. Kessel, “The Very Basics of Higher-Spin Theory,” PoS Modave2016 (2017) 001
[arXiv:1702.03694 [hep-th]];
A. Bengtsson, Higher Spin Field Theory (Concepts, Methods and History) Volume
1: Free Theory (De Gruyter, 2020);
X. Bekaert, N. Boulanger, A. Campoleoni, M. Chiodaroli, D. Francia, M.
Grigoriev, E. Sezgin and E. Skvortsov, “Snowmass White Paper: Higher Spin
Gravity and Higher Spin Symmetry,” in J. N. Butler, R. S. Chivukula, M. E.
Peskin (eds) Proceedings: 2021 US Community Study on the Future of Particle
Physics: Snowmass 2021 (SLAC, 2023) [arXiv:2205.01567 [hep-th]].
* [8] R. Argurio, G. Barnich, G. Bonelli and M. Grigoriev (eds), Higher Spin Gauge Theories (International Solvay Institutes, 2004);
L. Brink, M. Henneaux and M. A. Vasiliev (eds), Higher Spin Gauge Theories
(World Scientific, 2017).
* [9] X. Bekaert, N. Kowalzig and P. Saracco, “Universal enveloping algebras of Lie-Rinehart algebras: crossed products, connections, and curvature,” [arXiv:2208.00266 [math.RA]].
* [10] M. Crampin, “Cartan connections and Lie algebroids,” SIGMA 5 (2009) 061 [arXiv:0906.2554 [math.DG]];
J. Attard, J. François, S. Lazzarini and T. Masson, “Cartan Connections and
Atiyah Lie Algebroids,” J. Geom. Phys. 148 (2020) 103541 [arXiv:1904.04915
[math-ph]].
* [11] I. Moerdijk and J. Mrcun, “On the universal enveloping algebra of a Lie algebroid,” Proc. Amer. Math. Soc. 138 (2010) 3135 [arXiv:0801.3929 [math.QA]].
* [12] X. Bekaert, “Geometric tool kit for higher-spin gravity (Part I): An introduction to the geometry of differential operators,” Int. J. Mod. Phys. A 38 (2023) 2330003 [arXiv:2301.08069 [hep-th]].
* [13] J. Nestruev, Smooth Manifolds and Observables (Springer, 2020) [2nd edition, augmented].
* [14] J. J. Rotman, Advanced Modern Algebra: Part 1, Graduate Studies in Mathematics 165 (American Mathematical Society, 2020).
* [15] M. Raynaud and L. Gruson, “Critères de platitude et de projectivité, Techniques de ‘platification’ d’un module,” Invent. Math. 13 (1971) 1.
* [16] R. G. Swan, “Vector Bundles and Projective Modules,” Trans. Am. Math. Soc. 105 (1962) 264.
* [17] J.-P. Serre, “Faisceaux Algébriques Cohérents,” Ann. Math. 61 (1955) 197.
* [18] A. S. Morye, “Note on the Serre‐Swan theorem,” Math. Nachr. 286 (2013) 272 [arXiv:0905.0319 [math.AG]].
* [19] J. Herz, “Pseudo-algèbres de Lie,” C. R. Acad. Sci. Paris 236 (1953) 1935.
* [20] G. Rinehart, “Differential forms for general commutative algebras”, Trans. Amer. Math. Soc. 108 (1963) 195.
* [21] J. Huebschmann, “Poisson cohomology and quantization,” J. für die Reine und Angew. Math. 408 (1990) 57 [arXiv:1303.3903 [math.DG]].
* [22] J. Huebschmann, “On the history of Lie brackets, crossed modules, and Lie-Rinehart algebras,” J. Geom. Mech. 13 (2021) 385 [arXiv:2208.02539 [math.HO]].
* [23] C. Laurent-Gengoux, M. Stiénon and X. Ping, “Poincaré–Birkhoff–Witt isomorphisms and Kapranov dg-manifolds,” Adv. Math. 387 (2021) 107792 [arXiv:1408.2903 [math.DG]].
* [24] J. Pradines, “Théorie de Lie pour les groupoïdes différentiables. Calcul différentiel dans la catégorie des groupoïdes infinitésimaux,” C. R. Acad. Sci. Paris (Série A) 264 (1967) 245.
* [25] A. Douady and M. Lazard, “Espaces fibrés en algèbres de Lie et en groupes,” Invent. Math. 1 (1966) 133.
* [26] Y. Kosmann-Schwarzbach and K. C. H. Mackenzie, “Differential operators and actions of Lie algebroids,” in T. Voronov (Ed.), Quantization, Poisson Brackets and Beyond, Contemporary Mathematics 315 (American Mathematical Society, 2002) 213 [arXiv:math/0209337 [math.DG]].
* [27] P. Dazord, “Groupoïde d’holonomie et géométrie globale,” C.R. Acad. Sci. Paris 324 (1997) 77.
* [28] M. Crainic and R. L. Fernandes, “Lectures on integrability of Lie brackets,” in T. Ratiu, A. Weinstein and N. T. Zung (Eds.), Lectures on Poisson Geometry, Geometry & Topology Monographs 17 (Mathematical Sciences Publishers, 2011) 1.
* [29] M. F. Atiyah, “Complex Analytic Connections in Fibre Bundles,” Trans. Amer. Math. Soc. 85 (1957) 181.
* [30] R. Almeida and P. Molino, “Suites d’Atiyah et feuilletages transversalement complets,” C. R. Acad. Sc. Paris (Série I) 300 (1985) 13.
* [31] A. Grothendieck, “Éléments de géométrie algébrique : IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie,” Publ. Math. IHÉS 32 (1967) 5.
* [32] J. Huebschmann, “Extensions of Lie-Rinehart algebras and the Chern-Weil construction,” in J. McCleary (Ed.), Higher Homotopy Structures in Topology and Mathematical Physics (Festschrift in honour of Jim Stasheff’s 60’th anniversary) Contemporary Mathematics 227 (American Mathematical Society, 1999) 145 [arXiv:dg-ga/9706002].
* [33] J. A. Schouten, Der Ricci-Kalkül – Eine Einführung in die neueren Methoden und Probleme der mehrdimensionalen Differentialgeometrie (Springer, 1924).
* [34] N. Jacobson, “On pseudo-linear transformations,” Proc. Nat. Acad. Sci. 21 (1935) 667.
* [35] R. S. Palais, Foundations of global non-linear analysis (W. A. Benjamin, 1968).
* [36] Y. Kosmann, “On Lie transformation groups and the covariance of differential operators” in M. Cahen and M. Flato (Eds.), Differential geometry and relativity (in honour of André Lichnerowicz on his 60th birthday) Mathematical Physic and Applied Mathematics 3 (Springer, 1976) 75.
* [37] N. Bourbaki, Éléments de mathématique. Algèbre: Chapitre 3 (Masson, 1970) [ Modern edition: N. Bourbaki, Éléments de mathématique. Algèbre: Chapitres 1-3 (Springer, 2007); English translation: Algebra I: Chapters 1-3 (Springer, 2008) ].
* [38] R. L. Fernandes, “Lie Algebroids, Holonomy and Characteristic Classes,” Adv. Math. 170 (2002) 119 [arXiv:math/0007132 [math.DG]].
* [39] F. Kamber and P. Tondeur, Foliated bundles and characteristic classes, Lecture Notes in Mathematics 493 (Springer-Verlag, 1975).
* [40] L. El Kaoutit and P. Saracco, “Topological tensor product of bimodules, complete Hopf algebroids and convolution algebras,” Commun. Contemp. Math. 21 (2019) 1.
* [41] P. Saracco, “Universal enveloping algebras of Lie-Rinehart algebras as a left adjoint functor,” Mediterr. J. Math. 19 (2022) 92.
* [42] X. G. Martínez, Lie-Rinehart Algebras (Master thesis of Universidade de Santiago de Compostela, June 2013).
* [43] J. C. McConnell and J. C. Robson, Noncommutative Noetherian Rings, Graduate Studies in Mathematics 30 (American Mathematical Society, 1987).
* [44] R. J. Blattner, M. Cohen, and S. Montgomery, “Crossed products and inner actions of Hopf algebras,” Trans. Amer. Math. Soc. 298 (1986) 671.
* [45] S. Montgomery, Hopf algebras and their actions on rings, CBMS Regional Conference Series in Mathematics 82 (American Mathematical Society, 1993).
* [46] M. Hazewinkel, N. Gubareni and V. V. Kirichenko, Algebras, Rings and Modules: Lie Algebras and Hopf Algebras, Mathematical Surveys and Monographs 168 (American Mathematical Society, 2010). |
Flat space spinning massive amplitudes from momentum space CFT
Raffaele Marotta$^a$, Kostas Skenderis$^b$ and Mritunjay Verma$^{b,c}$
$^a$Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Napoli,
Complesso Universitario di Monte S. Angelo ed. 6, via Cintia, 80126, Napoli, Italy
$^b$ Mathematical Sciences and STAG Research Centre, University of Southampton,
Highfield, Southampton SO17 1BJ, UK
$^c$ Indian Institute of Technology Indore, Khandwa Road, Simrol, Indore 453552, India
E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
We discuss the flat space limit of AdS using the momentum space representation of CFT correlators.
The flat space limit involves sending the AdS radius and the dimensions of operators dual to massive fields to infinity while also scaling appropriately the sources of the dual operators.
In this limit, $d$-dimensional CFT correlators become $(d+1)$-dimensional scattering amplitudes.
We exemplify our discussion with the computation of the flat-space limit of the CFT 3-point function of a conserved current, a non-conserved charged vector operator and its conjugate. The flat-space limit should yield the scattering amplitude of an Abelian gauge field with two massive vector fields.
This scattering amplitude computes the electromagnetic form factors of the electromagnetic current in a spin-1 state, and these form factors encode the electromagnetic properties of the massive vector field (charge, magnetic moment and quadrupole moment). In terms of the CFT, the flat-space limit amounts to zooming into the infrared region of the triple-$K$ integrals that determine the 3-point function, while also scaling to infinity the order of (some of) the Bessel functions that feature in the triple-$K$ integrals. In this limit the triple-$K$ integral becomes proportional to the energy-preserving delta function, and the flat space limit correctly yields the corresponding flat space scattering amplitude in complete detail.
§ INTRODUCTION
The AdS/CFT correspondence gives a realization in string theory of the holographic principle, providing, at least conceptually, a non-perturbative formulation of string theory on AdS background in terms of a boundary conformal field theory [1, 2, 3]. In its most general formulation, the correspondence is conjectured to be a duality between a quantum gravity theory formulated on a $(d+1)$-dimensional asymptotically locally AdS background (AlAdS) times a compact manifold and a $d$-dimensional quantum field theory located on the boundary of AlAdS [4, 5]. The strong/weak nature of this duality can be exploited to explore the strong-coupling regime of conformal field theories that are dual to a weakly coupled classical bulk theory.
A weakly coupled bulk theory corresponds to the large radius limit of AdS. As the AdS radius approaches infinity, the AdS geometry reduces to the flat space geometry[In the most well understood example of the duality, namely when the bulk type IIB string theory is dual to $\mathcal{N}=4$ SYM, the relation between the AdS radius and the boundary parameters is $L \sim (g_{YM}^2 N)^{1/4}$. In the 't Hooft limit, one simultaneously sends $N$ to infinity and $g_{YM}^2$ to zero keeping $L$ large but fixed. For the flat space limit, one needs to consider the more subtle limit in which we again send $N$ to infinity but we now keep $g_{YM}^2$ fixed so that $L \to \infty$ [9, 8].] and, for consistency, the physics in AdS in this limit should match that of flat space (at least locally). In particular, we could obtain some insight about quantum gravity in flat space by using the
flat-space limit of the AdS/CFT correspondence.
Motivated by this, there has been a body of work since the early days of the AdS/CFT correspondence discussing the flat limit of AdS results, starting from [9, 8, 10, 11]. Due to the AdS/CFT correspondence, the limit should also make sense on the CFT side at the level of CFT correlators, at least for holographic CFTs, and $(d+1)$-dimensional flat space-time should emerge from $d$-dimensional CFT correlators in a suitable limit. However, it was also clear from the very beginning that the limit is subtle, and it has been a challenge to turn the plausible physical picture into a precise and mathematically well-defined limit. The limit has been analyzed in a variety of different formulations and setups: position space [12, 13, 14, 15], Mellin space [16, 18, 17], partial wave expansion [14, 17], momentum space [19, 20, 21, 22, 23, 24], see also [25] for a comparison of the different formulations, and
[27, 29, 30, 26, 31, 32, 28, 33, 34, 35, 36, 37] for further work. One outcome of these works is that the flat space limit is a singular limit. For example, in the momentum space approach of [19, 20], $(d+1)$-dimensional flat space amplitudes involving gluons and gravitons were obtained from the coefficients of singular terms of the flat limit of $d$-dimensional CFT correlators involving the conserved currents and stress-energy tensor, respectively.
The flat-space limit provides a link to flat space holography. There have been different approaches to flat space holography, including celestial holography and Carrollian holography, and connections to the flat-space limit have been discussed, for example, in
[38, 40, 41, 42, 43, 44, 45, 46, 47, 48]. We are not going to discuss these interesting proposals in this paper[We will also not discuss whether the limit exists as a limit of the dual CFT as a theory (c.f. footnote <ref>) or as a limit of the bulk geometry (c.f., for example, [49]).], but we note that a minimal possibility for flat space holography is that it is the flat-space limit of the AdS/CFT, with the flat space results emerging from correlators of standard relativistic CFT in a suitable limit.
Many of the prior works focused on special cases (e.g. scalar 4-point functions computed by Witten diagrams, bulk massless fields, etc.). In this work we aim to provide a formulation that would apply in generality:
any $n$-point function of massless and massive spinning fields with general interactions. We will focus our analysis on the simplest setup that involves most of these ingredients while also being physically interesting: the 3-point function of an abelian gauge field with a massive spin-1 complex Proca field.
Our aim is to obtain the scattering of the photon off a massive vector field (Figure <ref>) by taking a limit of the corresponding process in AdS (Figure <ref>). In flat space this scattering process captures the electromagnetic properties of the massive particle (charge, magnetic and quadrupole moments for a spin one particle) and as such it is interesting in its own right. In particular, our analysis may pave the way to obtaining non-perturbative results about electromagnetic form factors of higher-spin (hadronic) states using holography and CFT results.
3-point functions in CFT are fixed by conformal invariance, up to constants, so this is a case where the result is known non-perturbatively, which allows us to take the limit directly on the CFT side.
On the other hand, to understand what is the precise limit to be taken, it is useful to have a bulk realization in AdS. We will work with Euclidean signature in AdS with flat boundary (AdS in Poincaré coordinates, or more accurately with the boundary conformal structure of AdS represented by a flat metric). We will Fourier transform along the boundary directions and, correspondingly, we will consider the CFT in momentum space.
In the AdS/CFT correspondence, the massive field is dual to a non-conserved operator whereas the gauge field is dual to a conserved current in the boundary theory, so the relevant CFT 3-point function is that of a conserved current with a non-conserved vector operator and its complex conjugate. This 3-point function (in momentum space) was determined in our earlier work [50] by solving the conformal Ward identities, following [51, 52, 53, 54], and it depends on the conformal dimension $\Delta$ of the non-conserved operator, the spacetime dimension $d$ and three parameters, whose values are theory-specific.
In AdS, we work with the most general effective action of the Proca field coupled to an abelian gauge field, including up to three-derivative terms. This action involves three coupling constants: the minimal coupling, and two more couplings that may be associated with the magnetic and quadrupole moments of the massive spin-one field.
This action might be thought of as arising from a compactification of ten- or eleven-dimensional supergravity, where the massive vectors correspond to Kaluza-Klein modes of some higher-dimensional field. The boundary values of the bulk fields act as the sources of the corresponding boundary operators, and the holographically renormalized bulk partition function provides the generating functional of the boundary CFT correlators. We work out the 3-point function using the original GKPW prescription [4, 5] and holographic renormalization [55].
Comparison of the 3-point function computed using the AdS/CFT correspondence with the general CFT 3-point function shows that there is a one-to-one relation between the three arbitrary parameters that appear in the solution of the conformal Ward identities and the three AdS bulk couplings. This relation depends on the AdS radius $L$ and the conformal dimension $\Delta$ of the non-conserved operators and is valid in the regime where the boundary theory is strongly coupled. This explicit matching provides a non-trivial test of the AdS/CFT correspondence for the massive spin-1 field described by a higher derivative effective action.
After computing the above 3-point function, we analyse it in the flat limit, where we send the AdS radius $L$ to infinity. The flat space amplitudes arise from the bulk region where the AdS metric reduces to the flat metric, with vanishing Ricci tensor and Ricci scalar. In the standard Poincaré coordinates (see equation (<ref>)), the Ricci tensor can be expressed in terms of the radial coordinate $z$ as $R_{MN} =-d\, \delta_{MN}/z^2$ ($M, N=0,\dots d$). Therefore, the dominant region in the flat limit corresponds to the deep interior of the AdS background where $z$ is large.
We parametrise this AdS region as $z=L\,e^{\frac{\uptau}{L}}$.
In the flat limit, $\uptau$ is interpreted as Euclidean time.[We work in the Euclidean AdS signature and Wick rotate the radial direction to make it timelike after taking the flat limit.] Further, in this flat region, the AdS isometry algebra becomes the Poincaré algebra through the Inonu-Wigner contraction [56]. In particular, the AdS isometries include scaling and special conformal transformations, and we show how in the flat space limit these isometries disappear and instead we obtain translational invariance in $\uptau$ together with Lorentz transformations that rotate $\uptau$ into the other boundary directions.
[Figure: Penrose diagram of the scattering of a photon $\gamma$ off a massive spin-1 particle $W$ in Minkowski spacetime.]
[Figure: The same process, but now in Euclidean AdS.]
We would like to take the flat space limit in a way that keeps the physics we want to probe. Suppose we want to compute, via a flat-space limit from AdS, the scattering amplitudes for a theory described in flat space by a Lagrangian $L_{\rm flat}[m^2_i, g_j]$ that depends on a set of massless fields, massive fields with masses $m_i^2$ and couplings $g_j$. Then the proposal is to start with the same action now in AdS (with AdS radius $L$) and then consider the flat space limit $L \to \infty$ keeping fixed the masses $m_i^2$ and couplings $g_j$ (in Planck units). Given the standard relation between masses and conformal dimensions, for example $m^2 L^2 = \Delta (\Delta-d)$ for scalar fields (or equation (<ref>) for the case we consider), keeping fixed the mass implies that the conformal dimension must tend to infinity, $\Delta \to \infty$, as $L \to \infty$. The crucial question is then whether AdS amplitudes, or more generally CFT correlators, admit such a limit.
The main building blocks for momentum space CFT 3-point functions are the so-called triple-$K$ integrals [51],
\begin{eqnarray} \label{Intro_3K}
J_{N\{k_1,k_2,k_3\}}(p_1,p_2,p_3)&\equiv&\int_0^\infty dx\,x^{\frac{d}{2}+N-1} \prod_{i=1}^3 p_i^{\Delta_i-\frac{d}{2}+k_i}\,K_{ \Delta_i-\frac{d}{2}+k_i}(x p_i)\, .
\end{eqnarray}
where $p_i$ are the magnitudes of momenta, $p_i = \sqrt{{\bf p}_i^2}$,
$K_{ \Delta_i-\frac{d}{2}+k_i}(x p_i)$ are modified Bessel functions of the second kind and $N$ and $k_i$ are parameters (which are integers in the cases we discuss). In this integral, the $x=0$ region is the UV part of the integral, while $x \to \infty$ corresponds to the IR part. In the AdS computation these integrals arise from the corresponding Witten diagrams, with the Bessel functions being the (momentum-space) bulk-to-boundary propagators and the integral over $x$ originating from the integral over the bulk vertex, with $x$ identified with the AdS radial coordinate. The flat-space limit corresponds to considering the deep interior of AdS, $z \to \infty$, and thus the IR region of the triple-$K$ integral.
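As a sanity check on this definition, for $d=3$ and $\Delta_i=2$ all the orders equal $1/2$, the Bessel functions are elementary, and $J_{1\{0,0,0\}}=(\pi/2)^{3/2}/(p_1+p_2+p_3)$ in closed form. The following minimal numerical sketch (Python/mpmath; the momentum values are arbitrary choices) confirms this.

```python
# Minimal sketch: numerical check of a convergent triple-K integral
import mpmath as mp

d, N = 3, 1
p = [mp.mpf('0.7'), mp.mpf('1.3'), mp.mpf('2.1')]   # arbitrary momenta
nu = [mp.mpf('0.5')] * 3                            # Delta_i - d/2 + k_i = 1/2

def integrand(x):
    val = x**(mp.mpf(d)/2 + N - 1)
    for p_i, nu_i in zip(p, nu):
        val *= p_i**nu_i * mp.besselk(nu_i, x*p_i)
    return val

J_num = mp.quad(integrand, [0, mp.inf])
J_exact = (mp.pi/2)**mp.mpf('1.5') / (p[0] + p[1] + p[2])
print(J_num, J_exact)    # agree to working precision
```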
In the flat-space limit the momenta along the boundary directions become the spatial momenta of the flat-space scattering amplitude, and thus we want to keep fixed ${\bf p}_i$ as $x \to \infty$. In addition, we need to send $\Delta \to \infty$ when the corresponding bulk field is massive.
Thus, the flat-space limit rests (in part) on our ability to take the limit of the triple-$K$ integrals.
For massless fields this involves taking the large-argument limit of a modified Bessel function, while for massive fields we need to take a limit where both the argument and the order of the Bessel function tend to infinity. The former limit is well known, but the latter (called the uniform expansion in the mathematics literature) is less known and we review it in detail in appendix <ref>.
The limits of the Bessel function also tell us how the AdS bulk-to-boundary propagators behave in this limit, and after Wick-rotating to Minkowski spacetime, the answer is that they tend to plane waves,
\begin{equation} \label{K_lim}
{K}_{\Delta-\frac{d}{2}+\ell}(z\,k) \to \frac{1}{\sqrt{Z_\Delta}} e^{-i E t}
\end{equation}
where $t=-i \uptau$ is Minkowski time (with $\uptau=L \log (z/L)$).
$E=\pm \sqrt{k^2+m^2}$ is the energy variable of the flat-space $(d+1)$-momentum vector,
$(E, {\bf k})$, where ${\bf k}$ is the momentum vector in the CFT. In other words, the momentum variable of the CFT directly becomes the spatial part of the momentum variable in flat space and the energy variable is what is dictated by the on-shell condition. Note that the correct on-shell relation for $E$ automatically emerges from the limit. The two signs correspond to whether after Wick-rotation the plane wave corresponds to in- or out-state. The factor $Z_\Delta$ is a renormalization factor. In the cases we discuss, the $Z$-factor tends to infinity for the massless photon and to zero for the massive vector. One would need to renormalize the CFT operators by precisely these factors in order for the flat-space limit to exist. Using (<ref>) in (<ref>) we find that the $triple$-$K$ integral becomes (proportional) to the energy-preserving delta function,
\begin{eqnarray}
\lim_{L \to \infty} J_{N\{k_1,k_2,k_3\}}(p_1,p_2,p_3) \sim \delta(E_1+E_2+E_3)
\end{eqnarray}
where the limit is taken with $\Delta_i/L=m_i$ fixed. Note that the conservation of the spatial momentum is automatic since the momentum space CFT 3-point functions already contain the momentum-preserving delta function, $\delta({\bf k}_1 + {\bf k}_2 + {\bf k}_3)$.
To complete the flat-space limit of the 3-point function one needs to take the limit of the form factors (introduced in equation (<ref>)) and these involve factors of $\Delta$ (which follow from the solution of the conformal Ward identities). These factors are crucial in order to obtain the correct flat space result,
\begin{eqnarray}
\lim_{L \to \infty} \sqrt{Z_{W_1} Z_A Z_{W_3}} \,A_3^{\mu_1\mu_2\mu_3} \;=\; -2 \pi i
\delta(E_1+E_2+E_3)\, {\cal M}_3^{\mu_1\mu_2\mu_3}\, ,
\end{eqnarray}
where $A_3^{\mu_1\mu_2\mu_3}$ is the momentum-space CFT 3-point function and ${\cal M}_3^{\mu_1\mu_2\mu_3}$ is the flat space scattering amplitude.
Together with the 3-point function we also analyse the flat limit of the AdS propagators, with the boundary directions Fourier transformed to momentum space. Again, the flat limit of these propagators corresponds to sending $L$ and $\Delta$ to infinity. An important role is played by the bulk-to-boundary (Btb) propagators of the gauge and Proca fields. These dictate the external leg factors of the fields in the flat limit, which turns out to be crucial for matching the flat space 3-point amplitude with the CFT 3-point function. More generally, the solution of the field equations in AdS properly limits to the corresponding solutions in flat space. The AdS solutions depend on the fields that parametrize their boundary conditions (which play the role of sources in AdS/CFT), and these morph into polarization vectors in the flat space limit.
We also consider the bulk-to-bulk (BtB) propagator of the gauge field. Even though we only need its near-boundary behaviour in computing the 3-point function via holographic renormalisation, we have analysed the flat limit of the full BtB propagator in momentum space. Since this propagator plays the role of Green's function in AdS, we expect it to limit to the Feynman propagator, since the latter also plays the role of Green's function in flat space. We find that this is indeed the case, as expected. However, this analysis gives an interesting insight into the longitudinal part of the propagator. As is common in AdS/CFT, we used the radial/axial gauge where $A_0=0$. In the flat space limit, the transverse part of the gauge BtB propagator matches exactly with the transverse part of the Feynman propagator, while the longitudinal part diverges. This divergence is precisely linked with an additional singularity (an unphysical double pole) that is present in the Feynman propagator in the axial gauge in flat space [57, 58], and our results match these earlier results.
The rest of the paper is organised as follows. In section <ref>, we review results obtained in previous literature: we summarise the expression of the momentum-space CFT 3-point function involving a conserved current and two generic non conserved operators having the same conformal dimension, and we also review results about the flat limit of AdS at the geometric and group algebra level. In section <ref>, we explicitly show how the AdS isometries limit to the Poincaré isometries and how the scaling and special conformal symmetry of the CFT correlators recombine to Poincaré transformations in the large $L$ limit. In section <ref>, we shall introduce the bulk theory involving a gauge field and two charged massive spin-1 fields and derive the boundary CFT 3-point function using this bulk theory and the procedure of holographic renormalisation. This fixes the coefficients appearing in the CFT 3-point function in terms of bulk quantities. In section <ref>, we analyse the flat limit of the BtB propagator of the gauge field and Btb propagators of the gauge and Proca fields. In section <ref>, we consider the flat space limit of the 3-point function and show that it matches with the expected result in the flat space. We end with some discussion in section <ref>.
The paper contains a number of technical computations, which require dealing with many subtle issues. While the techniques and subtleties are all known to the experts, detailed expositions are rare in the literature, and we present a comprehensive analysis in a series of appendices.
Appendix <ref> contains our conventions, and in appendix <ref> we discuss the limiting behaviour of the modified Bessel functions. In particular, we present a self-contained discussion of the uniform expansion of the Bessel function when both the argument and the order of the Bessel function go to infinity. Appendix <ref> contains a derivation of the most general form of the effective action in AdS, which contains up to cubic terms in the gauge and Proca fields, and up to three-derivative terms. This is the starting point for our holographic computation in section <ref>.
In appendix <ref> we compute the bulk-to-boundary and the bulk-to-bulk propagators of the gauge field in axial gauge, and the bulk-to-boundary propagator for the Proca field. Appendix
<ref> contains the computation of the gauge field bulk-to-boundary propagator in Lorenz gauge.
In appendices <ref> and <ref> we work out holographic renormalization for the Proca and gauge field, respectively. The massive spin-1 field corresponds to an irrelevant operator and this requires special attention. Appendix <ref> contains the computation of the corresponding flat space scattering amplitude. Finally, in appendix <ref> we present a self-contained summary of the relation between electromagnetic form factors and couplings in the effective action.
§ REVIEW OF CFT RESULTS
In this section, we summarise the CFT 3-point function involving a conserved current and two non-conserved spin-1 operators in momentum space, following [50]. This will be needed later to compare with the bulk 3-point function of a gauge field and two massive spin-1 Proca fields. The results in [50] are given in an index-free notation where Lorentz indices have been contracted with auxiliary vectors. Here, we state the result in terms of explicit indices, which will be more useful for our purposes.
The desired 3-point correlator was determined from the CFT Ward identities. Extracting the momentum conserving delta function, it can be expressed as
$\mathcal{A}_3^{\mu_1\mu_2\mu_3} \,=\, (2\pi)^d\, \delta^d({\bf p}_1+{\bf p}_2+{\bf p}_3)\, \langle\!\langle\, \mathcal{O}_1^{\mu_1}({\bf p}_1)\, J^{\mu_2}({\bf p}_2)\, \mathcal{O}_3^{\mu_3}({\bf p}_3)\,\rangle\!\rangle\,.$
The operators $\mathcal{O}_1$ and $\mathcal{O}_3$ can have different conformal dimensions, say $\Delta_1$ and $\Delta_3$ respectively. However, in our case, they will correspond to bulk fields with the same mass; hence, we shall take $\Delta_1=\Delta_3=\Delta$. The reduced correlator in (<ref>) can be decomposed into transverse and longitudinal parts as
$\langle\!\langle\, \mathcal{O}_1^{\mu_1}({\bf p}_1)\, J^{\mu_2}({\bf p}_2)\, \mathcal{O}_3^{\mu_3}({\bf p}_3)\,\rangle\!\rangle \,=\, \langle\!\langle\, \mathcal{O}_1^{\mu_1}({\bf p}_1)\, j^{\mu_2}({\bf p}_2)\, \mathcal{O}_3^{\mu_3}({\bf p}_3)\,\rangle\!\rangle \,+\, \frac{p_2^{\mu_2}}{p_2^2}\,\langle\!\langle\, \mathcal{O}_1^{\mu_1}({\bf p}_1)\, p_{2\nu}J^{\nu}({\bf p}_2)\, \mathcal{O}_3^{\mu_3}({\bf p}_3)\,\rangle\!\rangle\,,$
where $j^\mu$ denotes the transverse part of the conserved current
$j^{\mu}({\bf p}_2) \,=\, \pi^{\mu}{}_{\nu}({\bf p}_2)\, J^{\nu}({\bf p}_2)\,,\qquad \pi^{\mu\nu}({\bf p}_2) \,=\,\delta^{\mu\nu}-\frac{p_2^{\mu}p_2^{\nu}}{p_2^2}\,,\qquad p_2^{\mu}\, \pi_{\mu\nu}({\bf p}_2) \,=\,0\,.$
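Since the decomposition below rests on $\pi^{\mu\nu}$ being the transverse projector, here is a minimal check (Python/SymPy, written for three components purely for illustration) of its idempotency and transversality.

```python
# Minimal sketch: pi = delta - p p / p^2 is an idempotent, transverse projector
import sympy as sp

p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
p = sp.Matrix([p1, p2, p3])
pi = sp.eye(3) - (p * p.T) / (p.T * p)[0]

assert (pi*pi - pi).applyfunc(sp.simplify) == sp.zeros(3, 3)   # pi.pi = pi
assert (p.T * pi).applyfunc(sp.simplify) == sp.zeros(1, 3)     # p.pi = 0
print("projector properties verified")
```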
The second term on the right hand side of (<ref>) is the longitudinal contribution and the conservation Ward identity for the symmetry current relates it to the 2-point function of the operators $\mathcal{O}^\mu$.
This relates one of the coefficients of the 3-point function with
the normalization of the 2-point function of $\mathcal{O}^\mu$, as we discuss below.
Focusing on the transverse part, we decompose it into form factors,
$\langle\!\langle\, \mathcal{O}_1^{\mu_1}({\bf p}_1)\, j^{\mu_2}({\bf p}_2)\, \mathcal{O}_3^{\mu_3}({\bf p}_3)\,\rangle\!\rangle \,=\, (\pi\cdot p_1)^{\mu_2}A^{\mu_1\mu_3}+ \pi^{\mu_2\mu_1}B^{\mu_3} +\pi^{\mu_2\mu_3}C^{\mu_1}\,,$
$A^{\mu_1\mu_3} \,=\, A_1\, \delta^{\mu_1\mu_3} +A_2\, p_1^{\mu_1} (p_1+p_2)^{\mu_3}+A_3\, p_2^{\mu_1} (p_1+p_2)^{\mu_3}+A_4\, p_1^{\mu_1}p_2^{\mu_3}+A_5\, p_2^{\mu_1} p_2^{\mu_3}\,,$
$B^{\mu_3} \,=\, B_1\, (p_1+p_2)^{\mu_3}+B_2\, p_2^{\mu_3}\,,$
$C^{\mu_1} \,=\, C_1\, p_1^{\mu_1}+C_2\, p_2^{\mu_1}\,.$
The form factors $A_i , B_k, C_k\ (i=1,...,5, k=1,2)$ depend on the magnitudes of the momenta, $p_j=|{\bf p}_j| = \sqrt{{\bf p}_j^2}\ (j=1, 2, 3)$. In the above expressions
we used the momentum conserving delta function to express $p_3^\mu=-p_1^\mu-p_2^\mu$.[In [51] the momentum conserving delta function was solved differently for different indices,
$\mu_1 \rightarrow {\bf p}_1, {\bf p}_2$, $\mu_2 \rightarrow {\bf p}_2, {\bf p}_3$, $\mu_3 \rightarrow {\bf p}_3, {\bf p}_1$. This results in form factors $\tilde{A}, \tilde{B}, \tilde{C}$ that relate to the ones we use here by
$A_1 = -\tilde{A}_1\,,\quad A_2= \tilde{A}_4-\tilde{A}_2\,,\quad A_3= \tilde{A}_5-\tilde{A}_3\,,\quad A_4= \tilde{A}_4\,,\quad A_5=\tilde{A}_5\,,$
$B_1 = \tilde{B}_2-\tilde{B}_1\,,\quad B_2 = -\tilde{B}_2\,,\quad C_1 = \tilde{C}_1\,,\quad C_2 = \tilde{C}_2\,.$]
As discussed in section 3.5 of [50], the correlator is antisymmetric under the exchange of $(\mu_1, p_1)$ and $(\mu_3, p_3)$, and this implies
\begin{align} \label{symm}
A_i(p_1, p_2, p_3) &= A_i(p_3, p_2, p_1), \quad i=1, 2, 5, \qquad A_3(p_1, p_2, p_3) = -A_4 (p_3, p_2, p_1) \\
B_1(p_1, p_2, p_3) & =C_1(p_3, p_2, p_1), \qquad
B_2(p_1, p_2, p_3) =-C_2(p_3, p_2, p_1)\, . \nonumber
\end{align}
The functions $A_i, B_i$ and $C_i$ are determined by solving the Ward identities, and they are given in terms of triple-$K$ integrals:
$A_1 \,=\, -a_5\, J_{2\{0,1,0\}}+ a_1\,J_{1\{0,0,0\}}\,,$
$A_2 \,=\, -a_5\, J_{3\{-1,2,-1\}}+ a_2\,J_{1\{-1,0,-1\}} +2\, a_4\, J_{2\{-1,1,-1\}}\,,$
$A_3 \,=\, -A_4\,=\, a_5\, J_{3\{0,1,-1\}}- a_4\, J_{2\{0,0,-1\}}\,,$
$A_5 \,=\, a_5\, J_{3\{0,0,0\}}\,,$
$B_1 \,=\, C_1\,=\, -a_5\, J_{2\{0,1,0\}}+ b_1\,J_{1\{0,1,-1\}}+(b_1-b_2)\,J_{1\{1,0,-1\}} +(b_1-b_2 + a_4)\,J_{1\{0,0,0\}}\,,$
$B_2 \,=\, -C_2\,=\,-a_5\, J_{2\{0,0,1\}}+ b_2\,J_{1\{0,0,0\}}\,,$
where $J_{N\{k_1,k_2,k_3\}}$ denote the triple K integrals and are defined by
\begin{eqnarray}
J_{N\{k_1,k_2,k_3\}}(p_1,p_2,p_3)&\equiv&\int_0^\infty dx\,x^{\frac{d}{2}+N-1} \prod_{i=1}^3 p_i^{\Delta_i-\frac{d}{2}+k_i}\,K_{ \Delta_i-\frac{d}{2}+k_i}(x p_i)\, . \label{B.51}
\end{eqnarray}
For more details and useful properties of these integrals, see [51, 52, 59].
Note that the expressions in (<ref>) already satisfy the symmetry constraints (<ref>).
The 3-point function of a conserved current and two arbitrary spin-1 operators with the same conformal dimension $\Delta_1=\Delta_3=\Delta$ is given in terms of only 3 independent parameters. This means that not all the parameters $a_i, b_i$ in (<ref>) are independent. There are relations among the different constants, and three of them are fixed in terms of the remaining three as
$a_1 \,=\, \frac{(d-2)}{\Delta}\,a_5-(\Delta-1)\,a_4 +b_2\,,\qquad a_2 \,=\,\frac{2(d-2)}{\Delta}\,a_5 -(2\Delta+d-4)\,a_4 +\frac{(2\Delta-d)}{(\Delta-1)}\,b_2\,,$
$b_1 \,=\, \frac{(2\Delta-d)}{2(\Delta-1)}\,b_2\,.$
Thus, the 3-point function is parametrised by three independent parameters as expected, and we have chosen $a_4, a_5$ and $b_2$ to be the three independent parameters.
One of these parameters is fixed in terms of the normalisation of the non-conserved operator. Indeed,
the 2-point function of operators $\mathcal{O}_1$ and $\mathcal{O}_3$ is given by [50]
$\langle\!\langle\, \mathcal{O}^{*}_{\mu}({\bf p})\, \mathcal{O}_{\nu}(-{\bf p})\,\rangle\!\rangle \,=\, a_0\left[\, \delta_{\mu\nu} -\left(\frac{2\Delta-d}{\Delta-1}\right)\frac{p_{\mu}p_{\nu}}{p^2}\, \right]p^{2\Delta-d}\,.$
Now, the generating functional of the CFT correlators is given by
$Z[A_{(0)\mu}, \mathcal{W}_{(0)\mu},\mathcal{W}^{*}_{(0)\mu}]\,=\, \int\mathcal{D}\Phi\, \exp\Big[-S_{\rm CFT} -\int d^dx\, \big(\mathcal{J}^{\mu}A_{(0)\mu} + \mathcal{O}^{*\mu}\mathcal{W}_{(0)\mu}+ \mathcal{O}^{\mu}\mathcal{W}^{*}_{(0)\mu}\big)\Big]\,,$
where ${A}^{}_{(0) \mu}, \mathcal{W}^{}_{(0) \mu}$ and $\mathcal{W}^{*}_{(0) \mu}$ are the sources for the CFT operators $\mathcal{J}^\mu, \mathcal{O}^{*\mu} $ and $\mathcal{O}^{\mu}$, respectively. In the AdS/CFT correspondence, these sources are the fields that determine the boundary conditions of the corresponding bulk fields. Demanding invariance of the generating functional under the $U(1)$ transformation, namely
$\delta A_{(0)\mu}(x) \,=\, \partial_{\mu}\lambda(x)\,,\qquad \delta\mathcal{W}_{(0)\mu} \,=\, i g\,\lambda(x)\,\mathcal{W}_{(0)\mu}\,,\qquad \delta\mathcal{W}^{*}_{(0)\mu} \,=\, -i g\,\lambda(x)\,\mathcal{W}^{*}_{(0)\mu}\,,$
we find the conservation Ward identity
\begin{equation}
\partial^\mu \langle \mathcal{J}^\mu(x) \rangle_s
= i g \left(
\mathcal{W}^{}_{(0) \mu}(x) \langle \mathcal{O}^{*\mu}(x) \rangle_s
-\mathcal{W}^{*}_{(0) \mu}(x) \langle \mathcal{O}^{\mu}(x) \rangle_s
\right)\, ,
\end{equation}
where the subscript $s$ indicates that these are identities for expectation values in the presence of sources. Differentiating w.r.t. $\mathcal{W}^{}_{(0) {\mu_1}}(x_1), \mathcal{W}^{}_{(0) \mu_3}(x_3)$ (and renaming $x, \mu \to x_2, \mu_2$), and Fourier transforming to momentum space yields
$\langle\!\langle\, \mathcal{O}^{*\mu_1}({\bf p}_1)\, p_{2\mu_2}J^{\mu_2}({\bf p}_2)\, \mathcal{O}^{\mu_3}({\bf p}_3)\,\rangle\!\rangle \,=\, i g\left(\, \langle\!\langle\, \mathcal{O}^{*\mu_1}(-{\bf p}_3)\, \mathcal{O}^{\mu_3}({\bf p}_3)\,\rangle\!\rangle \,-\, \langle\!\langle\,\mathcal{O}^{*\mu_1}({\bf p}_1)\, \mathcal{O}^{\mu_3}(-{\bf p}_1)\,\rangle\!\rangle\, \right)\,.$
Using this, we find
[50]
\begin{eqnarray}
a_0\;=\; 2^{\frac{d}{2} -4}\, \frac{(d-2\Delta)}{g(d-2)}\,\Gamma \left(\frac{ d-2\Delta}{2}\right)\Gamma \left(\frac{ 2\Delta-d}{2}\right)\,\Gamma\left(\frac{d}{2}\right)\Bigl[(\Delta-1)\left(-a_4+(d-2) a_5\right)+b_2\Bigr]\label{gtr5d}
\end{eqnarray}
Note that this relation involves a new parameter, namely $g$, which enters via the Ward identity. Altogether, the Ward identity introduces one relation between the parameters in the 3-point function and the normalization of the 2-point function, but it also contains an additional parameter (the gauge coupling).
Thus, up to 3-point functions we need a total of three parameters.
Finally, we comment on the divergences appearing in the 3-point function. For integer values of $\Delta$, many of the triple-$K$ integrals appearing in (<ref>) diverge; hence regularisation is required and renormalization may be needed. However, in this paper we consider $\Delta$ to be non-integer. In this case too, some of the triple-$K$ integrals, namely $J_{1\{0,1,-1\}},J_{1\{1,0,-1\}}, J_{1\{-1,1,0\}}$ and $J_{1\{-1,0,1\}}$, are individually divergent. However, the divergences cancel in the combination in which they appear in the 3-point function. The details of this analysis can be found in [50].
§ POINCARÉ SYMMETRY FROM ADS ISOMETRIES
§.§ Flat space limit of AdS
At the geometric level, taking the flat space limit of AdS corresponds to sending $L$ to infinity. The AdS metric in Poincaré coordinates is given by
$ds^2 \,=\,\frac{L^2}{z^2}\left(dz^2+\delta_{\mu\nu}dx^{\mu}dx^{\nu}\right)\,,\qquad x^a \,=\,(z,\, x^{\mu})\,.$
In the limit $L\rightarrow\infty$, taken such that the metric $G_{MN}$ has a (finite) limit,
the Riemann, Ricci and scalar curvatures vanish and one gets a flat geometry (see equation (<ref>)). To analyse this limit efficiently, it is convenient to parametrise the radial coordinate $z$ as [25]
\begin{eqnarray}
\frac{z}{L}=e^{\frac{\uptau}{L}}\qquad;\qquad \uptau\in \; (-\infty,\,\infty)\label{5.43}
\end{eqnarray}
In the large $L$ limit, $\uptau$ becomes the $(d+1)^{\rm th}$ flat space direction. Indeed, in this limit, the AdS metric (<ref>) becomes the flat space metric as
\begin{eqnarray}
ds^2\;=\; (d\uptau)^2+ e^{-\frac{2\uptau}{L}}\,\delta_{\mu\nu} dx^\mu dx^\nu\;=\; \delta_{ab}dx^adx^b+{\cal O}\left(\frac{1}{L}\right) \label{flatmetricfgtr}
\end{eqnarray}
where $a,b=1,\cdots,d+1$ and we have denoted $\uptau$ by $x^{d+1}$ in the second equality. To get to Minkowski space one may additionally Wick rotate $\uptau = -i t$ [Note that the analogous flat space limit of the de Sitter metric directly leads to Minkowski space.].
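As a small cross-check of the curvature statements used earlier (a sketch in Python/SymPy for the illustrative case $d=1$, i.e. Euclidean AdS$_2$, an assumption made only to keep the computation short): the Poincaré metric gives $R_{MN}=-d\,\delta_{MN}/z^2$ and scalar curvature $R=-d(d+1)/L^2$, which indeed vanishes as $L\to\infty$.

```python
# Minimal sketch: curvature of Euclidean AdS_2 in Poincare coordinates
import sympy as sp

z, x, L = sp.symbols('z x L', positive=True)
coords = (z, x)
g = sp.diag(L**2/z**2, L**2/z**2)
ginv = g.inv()

Gamma = [[[sp.simplify(sum(ginv[a, e]*(sp.diff(g[e, b], coords[c])
           + sp.diff(g[e, c], coords[b]) - sp.diff(g[b, c], coords[e]))
           for e in range(2))/2) for c in range(2)] for b in range(2)]
         for a in range(2)]

def Riem(a, b, c, d):   # R^a_{bcd}
    expr = sp.diff(Gamma[a][b][d], coords[c]) - sp.diff(Gamma[a][b][c], coords[d])
    expr += sum(Gamma[a][c][e]*Gamma[e][b][d] - Gamma[a][d][e]*Gamma[e][b][c]
                for e in range(2))
    return expr

Ricci = sp.Matrix(2, 2, lambda b, d: sp.simplify(sum(Riem(a, b, a, d)
                                                     for a in range(2))))
R = sp.simplify(sum(ginv[b, d]*Ricci[b, d] for b in range(2) for d in range(2)))
print(Ricci)   # -> -delta_MN / z**2, i.e. R_MN = -d delta_MN / z^2 with d = 1
print(R)       # -> -2/L**2, vanishing in the flat limit L -> infinity
```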
It is also instructive to see how the Poincaré algebra emerges from the AdS isometry algebra in the flat limit. The isometry algebra of Euclidean AdS$_{d+1}$ is so$(d+1,1)$, which is also the conformal algebra on $\mathbb{R}^d$; it is given by
\begin{eqnarray}
[M_{AB},\, M_{CD}] \;=\; \eta_{BC}M_{AD} - \eta_{AC}M_{BD}+ \eta_{AD}M_{BC}- \eta_{BD}M_{AC}
\end{eqnarray}
where, $\eta_{AB}=(+,\dots,+,\,-)$ and
$$A,B,C,D =1,2,\cdots, d+1,d+2\;\;\equiv\;\; \{a,d+2\}\;\; \equiv\;\; \{\mu, d+1,d+2\}$$
To recast (<ref>) as the conformal algebra, we make the following redefinitions [60]
\begin{eqnarray}
M_{\mu\nu} \;=\; L_{\mu\nu} \quad;\quad M_{d+1,\mu} \;=\; \frac{1}{2}\left(P_\mu+K_\mu\right) \quad;\quad M_{d+2,\mu} \;=\; \frac{1}{2}\left(P_\mu-K_\mu\right) \quad;\quad M_{d+2,d+1} \;=\; D
\end{eqnarray}
With this, the algebra (<ref>) reduces to
\begin{eqnarray}
[L_{\mu\nu},\, L_{\rho\sigma}] &=& \delta_{\nu\rho}L_{\mu\sigma} - \delta_{\mu\rho}L_{\nu\sigma}+ \delta_{\mu\sigma}L_{\nu\rho}- \delta_{\nu\sigma}L_{\mu\rho}\nonumber\\[.1cm]
[L_{\mu\nu},\, P_\rho] &=& \delta_{\nu\rho}P_\mu-\delta_{\mu\rho}P_\nu \qquad;\qquad [L_{\mu\nu},\, K_\rho] \;=\; \delta_{\nu\rho}K_\mu-\delta_{\mu\rho}K_\nu\nonumber\\[.1cm]
[K_\mu,\, P_\nu] &=& 2\delta_{\mu\nu} D - 2L_{\mu\nu} \qquad;\qquad [D,\, P_\mu] \;=\; P_\mu \qquad;\qquad [D,\, K_\mu] \;=\; -K_\mu
\end{eqnarray}
This is the standard conformal algebra: $L_{\mu\nu}, P_\mu, K_\mu, D$ represent the rotation, translation, special conformal transformation and dilatation generators, respectively.
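These redefinitions can be checked mechanically. The following numpy sketch (our own illustration; the explicit matrix realisation $(M_{AB})^C{}_D=\delta^C_A\eta_{BD}-\delta^C_B\eta_{AD}$ is an assumption consistent with (<ref>)) verifies, for instance, $[K_\mu,P_\nu]=2\delta_{\mu\nu}D-2L_{\mu\nu}$ and $[D,P_\mu]=P_\mu$:
\begin{verbatim}
# Sketch: matrix check of the conformal-algebra redefinitions.
# Array indices 0..d-1 are boundary directions; index d plays the
# role of "d+1" and index d+1 the role of "d+2" in the text.
import numpy as np

d = 3                                    # boundary dimension (any value)
n = d + 2
eta = np.diag([1.0] * (d + 1) + [-1.0])  # signature (+,...,+,-)

def M(A, B):
    """(M_AB)^C_D = delta^C_A eta_BD - delta^C_B eta_AD."""
    m = np.zeros((n, n))
    m[A, :] += eta[B, :]
    m[B, :] -= eta[A, :]
    return m

def comm(X, Y):
    return X @ Y - Y @ X

P = lambda mu: M(d, mu) + M(d + 1, mu)   # P = M_{d+1,mu} + M_{d+2,mu}
K = lambda mu: M(d, mu) - M(d + 1, mu)   # K = M_{d+1,mu} - M_{d+2,mu}
D = M(d + 1, d)                          # D = M_{d+2,d+1}

# [K_mu, P_nu] = 2 delta_{mu nu} D - 2 L_{mu nu}, [D, P_mu] = P_mu, ...
assert np.allclose(comm(K(0), P(1)), -2 * M(0, 1))
assert np.allclose(comm(K(0), P(0)), 2 * D)
assert np.allclose(comm(D, P(0)), P(0))
assert np.allclose(comm(D, K(0)), -K(0))
print("conformal algebra relations verified")
\end{verbatim}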
The (Euclidean) AdS isometry algebra (<ref>) reduces to the algebra of the Euclidean group in the flat space limit via the Inönü-Wigner contraction [56]. Upon Wick rotation this becomes the Poincaré algebra, and we will loosely use this terminology even when we work in Euclidean signature.
To see this, we note that upon splitting off the $(d+2)^{\rm th}$ component, the algebra (<ref>) can be written as
\begin{eqnarray}
[M_{ab},\, M_{ce}] &=& \delta_{bc}M_{ae} - \delta_{ac}M_{be}+ \delta_{ae}M_{bc}- \delta_{be}M_{ac}\nonumber\\[.1cm]
[M_{ab},\, M_{c,d+2}] &=& \delta_{bc}M_{a,d+2} - \delta_{ac}M_{b,d+2} \qquad;\qquad [M_{a,d+2},\, M_{b,d+2}] \;=\; M_{ab}
\end{eqnarray}
Now, writing $M_{a,d+2}\equiv L \,\textbf{P}_a$ and taking the limit $L\rightarrow\infty$, the algebra (<ref>) reduces to
\begin{eqnarray}
[M_{ab},\, M_{ce}] &=& \delta_{bc}M_{ae} - \delta_{ac}M_{be}+ \delta_{ae}M_{bc}- \delta_{be}M_{ac}\nonumber\\[.1cm]
[M_{ab},\, \textbf{P}_c] &=& \delta_{bc}\,\textbf{P}_a-\delta_{ac}\,\textbf{P}_b \qquad;\qquad [\textbf{P}_a,\, \textbf{P}_b] \;=\; 0
\end{eqnarray}
This is the standard algebra of the Euclidean group in flat $(d+1)$-dimensional space.
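The contraction itself can be checked in the same matrix representation (continuing the previous sketch, again our own illustration): with $\textbf{P}_a\equiv M_{a,d+2}/L$, the bracket $[\textbf{P}_a,\textbf{P}_b]=M_{ab}/L^2$ is suppressed as $L\to\infty$, while the rotation brackets are $L$-independent:
\begin{verbatim}
# Sketch (continues the previous snippet): Inonu-Wigner contraction.
# bold P_a = M_{a,d+2} / L; array index d+1 plays the role of "d+2".
for Lrad in (10.0, 100.0, 1000.0):
    Pa, Pb = M(0, d + 1) / Lrad, M(1, d + 1) / Lrad
    assert np.allclose(comm(Pa, Pb), M(0, 1) / Lrad**2)
    print(Lrad, np.max(np.abs(comm(Pa, Pb))))   # -> 0 as L grows
\end{verbatim}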
§.§ From AdS to Poincaré
It was mentioned in the introduction that CFT correlators are expected to turn into S-matrix elements in the flat limit. This means that the conformal symmetry should morph into the Poincaré symmetry in the flat limit. In this subsection, we show explicitly how this happens.
We begin by noting that the generator $M_{a,d+2}$ introduced in the previous subsection
becomes the momentum generator in $d+1$ dimensional flat space, up to a rescaling by the AdS radius. From equation (<ref>), this implies that the combination $P_\mu-K_\mu$ of the CFT algebra becomes the momentum component $\textbf{P}_\mu$ (with $\mu =1,2,\cdots,d$), whereas the CFT generator $D$ becomes the momentum component $\textbf{P}_{d+1}$ in the flat limit. Together, they form the flat space momentum in $d+1$ dimensions
\begin{eqnarray}
\textbf{P}_a\;=\; \left(\textbf{P}_\mu,\,\textbf{P}_{d+1}\right) \;\sim\; \left(P_\mu- K_\mu,\, D\right)
\end{eqnarray}
On the other hand, the combination $P_\mu+K_\mu$ of the CFT generators provides $M_{d+1,\mu}$ components of the Lorentz generator in the flat limit, i.e.
\begin{eqnarray}
M_{ab} \;=\; \left(M_{\mu\nu},\,M_{d+1,\mu} \right)\;\sim\; \left(L_{\mu\nu},\, P_\mu+K_\mu\right)
\end{eqnarray}
To see these relations more explicitly at the level of symmetry transformations, we note that the AdS isometry transformations in $(\uptau,x^\mu)$ coordinates are given by [61]
* $\mbox{Rotations and translation of } x^\mu$.
\begin{eqnarray}
\delta x^\mu \;=\; \alpha^\mu_{\;\;\nu}\,x^\nu + b^\mu
\end{eqnarray}
* Scaling of $x^\mu$ and translation of $\uptau$
\begin{eqnarray}
\delta x^\mu \;=\; \lambda\, x^\mu \qquad;\qquad \delta\uptau \;=\; L\,\lambda
\end{eqnarray}
* Special conformal transformation of $(\uptau, x^\mu)$
\begin{eqnarray}
\delta x^\mu \;=\; 2\, (\delta_{\sigma\nu}c^\sigma x^\nu)\,x^\mu-x^2\, c^\mu \qquad;\qquad \delta\uptau \;=\; 2\, L\,(\delta_{\mu\nu}c^\mu x^\nu)
\end{eqnarray}
where, $x^2 \equiv L^2 e^{\f{2\uptau}{L}}+\delta_{\mu\nu}x^\mu x^\nu$.
On the other hand, we have the following isometries in flat space
\begin{eqnarray}
\delta x^\mu &=& \omega^\mu_{\;\;M}\, x^M +a^\mu \;=\; \omega^\mu_{\;\;\nu}\, x^\nu+ \omega^\mu_{\;\;\uptau}\,\uptau+a^\mu\nonumber\\[.1cm]
\delta \uptau &=& \omega^\uptau_{\;\;M}\, x^M +\beta \;=\; \omega^\uptau_{\;\;\mu}\, x^\mu+\beta
\end{eqnarray}
We shall now show how to recover these isometries from the flat limit of AdS and relate the flat space parameters $\omega^\mu_{\;\nu}, \omega^\uptau_{\;\mu}, a^\mu,\beta$ to the AdS isometry parameters $\alpha^\mu_{\;\nu}, b^\mu, c^\mu$ and $\lambda$. We start with the transformation of $\uptau$. From (<ref>), we find that it has the structure of a translation in the limit $L\rightarrow\infty$ if we simultaneously send $\lambda$ to 0, i.e.
\begin{eqnarray}
\beta\;=\; \lim_{\substack{L\rightarrow\infty\\ \lambda\rightarrow 0}} L\, \lambda \qquad\implies\qquad \delta\uptau\;=\; \beta
\end{eqnarray}
We also see that in this limit the
scaling transformation of $x^\mu$ disappears. We now consider the rotation of $\uptau$. From equation (<ref>), we see that it has the correct flat space structure if we define
\begin{eqnarray}
\omega^\uptau_{\;\;\nu} \;\equiv\; \lim_{\substack{L\rightarrow\infty\\ c_\nu\rightarrow 0}}2\,L\,c_\nu \qquad\implies\qquad \delta\uptau\;=\; \omega^\uptau_{\;\;\nu}\,x^\nu
\end{eqnarray}
This completes the analysis for the transformations of $\uptau$. Next, we consider the transformations of $x^\mu$. In the limit $L\rightarrow\infty$ and $c_\nu\rightarrow0$, equation (<ref>) gives
\begin{eqnarray}
\delta x^\mu &=& \lim_{\substack{L\rightarrow\infty\\ c_\nu\rightarrow 0}}\left\{2\, (\delta_{\sigma\nu}c^\sigma x^\nu)\,x^\mu-\left[ L^2 \left(1+\frac{2\uptau}{L}+\frac{2\uptau^2}{L^2}+\cdots\right)+\delta_{\sigma\nu}x^\sigma x^\nu\right] c^\mu\right\}\nonumber\\[.1cm]
&=& \omega^\mu_{\;\;\uptau}\,\uptau \;-\;\lim_{\substack{L\rightarrow\infty\\ c_\nu\rightarrow 0}} L^2\, c^\mu
\end{eqnarray}
where we have ignored the terms which vanish when $L\rightarrow \infty$ or $c_\mu\rightarrow0$.
In writing the last equality, we have used equation (<ref>) and $\omega^\mu_{\;\;\uptau}= -\omega^{\;\;\mu}_\uptau$. Combining the above equation with (<ref>), we find
\begin{eqnarray}
\delta x^\mu\;=\; \alpha^\mu_{\;\;\nu}\,x^\nu+\omega^\mu_{\;\;\uptau}\,\uptau+b^\mu-\lim_{\substack{L\rightarrow\infty\\ c_\nu\rightarrow 0}} L^2\, c^\mu
\end{eqnarray}
Finally, we choose $b^\mu = L^2 c^\mu + a^\mu$, where $a^\mu$ is independent of $L$, so that
the combination $ (b^\mu- L^2 c^\mu) = a^\mu$ has a finite limit giving a finite translation and we recover the expected Poincaré transformation of $x^\mu$, as given in (<ref>), in the flat limit. From the above derivation, we also see that the translation of $x^\mu$ in the flat limit comes from a combination of original translation and special conformal transformation as indicated in (<ref>). Similarly, the rotation of $x^a$ comes from a linear combination of the original rotation and translation of $x^\mu$ and the special conformal transformation of $(x^\mu, \uptau)$ as suggested by equation (<ref>).
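The scalings used above are easy to verify symbolically. The sympy sketch below (our own check, for a single component with the rotation $\alpha^\mu_{\;\nu}$ switched off) builds in $c^\mu=-\omega^\mu_{\;\uptau}/(2L)$ and $b^\mu=L^2c^\mu+a^\mu$ and recovers $\delta x^\mu=\omega^\mu_{\;\uptau}\,\uptau+a^\mu$:
\begin{verbatim}
# Sketch: flat-space limit of the AdS special conformal transformation
# plus translation, acting on x^1 (boundary dimension d = 2 here).
import sympy as sp

L, tau = sp.symbols('L tau', positive=True)
x1, x2, w, a1 = sp.symbols('x1 x2 omega a1')   # w = omega^1_tau

c1 = -w / (2 * L)            # c^1 scaled to zero as 1/L
b1 = L**2 * c1 + a1          # b^1 = L^2 c^1 + a^1

x_sq = L**2 * sp.exp(2 * tau / L) + x1**2 + x2**2   # AdS "x^2"
dx1 = b1 + 2 * (c1 * x1) * x1 - x_sq * c1           # delta x^1

flat = sp.limit(sp.expand(dx1), L, sp.oo)
print(sp.simplify(flat - (w * tau + a1)))           # -> 0
\end{verbatim}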
§ 3-POINT FUNCTION FROM BULK THEORY
§.§ Bulk theory
In this section we derive the CFT 3-point function $\Bigl\langle {\cal O}^{*\mu}\,\mathcal{J}^\tau\,{\cal O}^\nu \Bigr\rangle$
of a U(1) conserved current $\mathcal{J}^\mu$ with a vector operator ${\cal O}^\nu$ charged under the U(1), using AdS/CFT. For this purpose we need a bulk action in AdS whose cubic terms are linear in the gauge field $A_M$ and quadratic in the massive vector field $W_N$. As shown in appendix <ref>, the most general such action in Euclidean signature describing the interaction between a $U(1)$ gauge field and a complex massive spin one field in $d+1$ dimensional curved spacetime, up to three derivative terms, is given by
\begin{eqnarray}
S&=&\!\!\!\!\!\int d^{d+1}x\sqrt{G} \Bigl[-\f{1}{16\pi G_N}(R-2\Lambda)+\frac{1}{4} F^{MN}F_{MN}+\frac{1}{2}W^{*}_{MN} W^{MN} +m^2 W^{*}_M W^M \nonumber\\
&&-ig\,\alpha F^{MN}W^*_MW_N+\,ig\beta F^{MN}\,\Big( \nabla_{M} W^*_P\nabla^PW_{N} -\nabla_{M} W_P\nabla^PW_{N}^*\Big)
\Bigl]\, ,
\label{5.6}
\end{eqnarray}
where $M,\,N,P$ run from $0$ to $d$, $\Lambda$ is the cosmological constant and $F_{MN}=\partial_M A_N - \partial_N A_M$ is the field strength of the gauge field. We have also introduced the field strength of the massive spin 1 field as
\begin{eqnarray}
W_{MN}= D_MW_N-D_NW_M, \qquad D_M= \nabla_M +i g A_M\, ,
\end{eqnarray}
with $\nabla_M$ being the diffeomorphism covariant derivative.
The cubic terms are parametrized by three independent parameters, $g, \alpha, \beta$, matching the number of independent parameters that we found in the CFT analysis. One of them is the gauge coupling constant $g$, which multiplies the terms introduced by minimal coupling. The other two, $\alpha$ and $\beta$, were first introduced in the context of zero cosmological constant and their physical meaning is as follows: $\alpha$ is the gyromagnetic coupling, related to the magnetic moment of the massive vector field $W_M$, and $\beta$ is related to its quadrupole moment, see, e.g., [62, 63, 64, 65, 66] and the discussion in appendix <ref>.
We shall use the above action in an AdS background. Einstein equations imply that the matter fields $A_M$ and $ W_M$ couple to the metric through their energy momentum tensor. This back-reaction can modify the AdS background. However, we shall ignore such back-reaction. The reason is that we are interested in computing the 3-point function in the CFT, so the corresponding sources are only turned on infinitesimally (to implement the operator insertion) and then are turned off. As the bulk energy momentum tensor is quadratic in the fields, one may then neglect the backreaction.
The gauge field equation derived from (<ref>) in the AdS background is given by
\begin{eqnarray}
\nabla_M\,F^{MN}=J^N\qquad\implies\qquad \Bigl(\nabla_M\nabla^M +\frac{d}{L^2} \Bigl)A_N-\nabla_N\nabla_MA^M=J_N\label{ytr5a}
\end{eqnarray}
with the source current given by
\begin{eqnarray}
J^N&=&2i g\Big( W^*_M\nabla^{[M} W^{N]}-W_M\nabla^{[M} W^{*N]}\Big)+2ig\,\alpha\nabla_M\Big( W^{*[M}\,W^{N]}\Big)\nonumber\\
&&-2ig\,\beta \nabla_M\Big( \, \nabla^{[M|} W^*_P\,\nabla^PW^{|N]}-\nabla^{[M|} W_P\,\nabla^PW^{*|N]}\Big)\, ,
\label{B.26aa}
\end{eqnarray}
where the antisymmetrization on the right hand side is only over the indices $M$ and $N$. In writing (<ref>) we have neglected terms with higher powers of the gauge coupling $g$, since in what follows we shall only be interested in the cubic interactions, which are linear in the gauge field. Taking the covariant derivative of both sides of (<ref>), we find that the left hand side vanishes identically, giving the conservation equation $\nabla_MJ^M=0$. It is easy to check that the current given in (<ref>) satisfies this conservation condition on-shell. For explicit calculations, we shall Fourier transform along the boundary directions as
\begin{eqnarray}
T_M(z,\,k)=\int d^d x \;e^{-i k\cdot x}\; T_M(z,\,x),
\end{eqnarray}
where $T_M$ can be any bulk quantity. From now on, we shall work in this Fourier basis.
To proceed further, we need to gauge fix $A_M(z,k)$. We shall work in the axial gauge and in Euclidean signature, setting $A_0(z,k)=0$. For the 3-point function, we shall need the perturbative solution of the gauge field up to first order in the coupling $g$. It is given by (see appendix <ref> for details)
\begin{eqnarray}
A_\mu(z,\,k)=\mathbb{K}_{\mu}^{\;\;\nu}(z,k) A_{(0)\nu}(k) +\,\int dw \sqrt{G} \;{\cal G}_{\mu\nu}(z,\,w;\,k)\,J^\nu(w,\,k)\, , \label{ftr5}
\end{eqnarray}
where $\mathbb{K}_\mu^{\;\;\nu}(z,k)$ and $\mathcal{G}_{\mu\nu}(z,w;k)$ denote the bulk-to-boundary and bulk-to-bulk propagators of the gauge field, respectively. Their expressions are given in equations (<ref>) and (<ref>). The field $A_{(0)\mu}(k)$ denotes the boundary value of the gauge field and $J^\nu(w,k)$ can be obtained from (<ref>) by specialising $N$ to the boundary index $\nu$.
Next, we consider the massive fields. For the 3-point function we are interested in, we only need the free field classical solution for these massive fields. The reason is that we will determine the 3-point function through the backreaction of the massive fields on $A_\mu$, using (<ref>); since the massive field enters quadratically there, higher-order corrections to the massive field will not contribute to the 3-point function of interest.
These can be expressed in terms of the massive spin-1 bulk-to-boundary propagators $\mathcal{K}_{M}^{\;\;\mu}(z;k)$ and $\bar{\mathcal{K}}_{M}^{\;\;\mu}(z;k)$ as
\begin{eqnarray}
W_M(z,\,k)=\mathcal{K}_{M}^{\;\;\mu}(z,\,k)\, w_\mu(k)\quad;\quad W^*_M(z,\,k)=\bar{\mathcal{K}}_{M}^{\;\;\mu}(z,\,k)\, w^*_\mu(k)\label{mass56t}
\end{eqnarray}
The propagators $\mathcal{K}_{M}^{\;\;\mu}(z;k)$ and $\bar{\mathcal{K}}_{M}^{\;\;\mu}(z;k)$ are given in equations (<ref>) and (<ref>), respectively. The $w_\mu$ and $w^*_\nu$ are related to the boundary values of $W_\mu$ and $W^*_\nu$, respectively. Note that we only need to specify the boundary component of the massive fields. The radial component $W_z$ gets fixed in terms of the boundary components.
The bulk fields $W_M$ and $W^*_M$ are dual to the non conserved CFT operators of section <ref>. Their mass $m$ is related to the conformal dimension $\Delta$ of the boundary operators by the relation
\begin{eqnarray}
L^2\,m^2=(\Delta-1)(\Delta +1-d)\label{3.13}
\end{eqnarray}
which follows from equation (<ref>) of appendix <ref>.
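For later use it is convenient to record the explicit root of this quadratic; the small sympy sketch below (our own check) confirms $\Delta=\frac{1}{2}\bigl(d+\sqrt{(d-2)^2+4L^2m^2}\bigr)$ and the large-$L$ behaviour $\Delta\simeq mL+d/2$ used in the flat-space limit:
\begin{verbatim}
# Sketch: solve L^2 m^2 = (Delta - 1)(Delta + 1 - d) for Delta and
# check the large-L behaviour Delta ~ m L + d/2 used later on.
import sympy as sp

d, L, m = sp.symbols('d L m', positive=True)

Delta = (d + sp.sqrt((d - 2)**2 + 4 * L**2 * m**2)) / 2
assert sp.simplify((Delta - 1) * (Delta + 1 - d) - L**2 * m**2) == 0

print(sp.series(Delta, L, sp.oo, 2))   # m*L + d/2 + O(1/L)
\end{verbatim}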
§.§ Three-point function
In this subsection, we use the AdS/CFT correspondence to obtain the 3-point function involving two spin-1 operators and a conserved current in the CFT dual to the bulk theory described above. This 3-point function will be a special case of the 3-point function given in section <ref>. The 3 arbitrary parameters appearing in the CFT result (<ref>) will be fixed in terms of the bulk parameters.
According to the AdS/CFT correspondence, the on-shell bulk partition function $Z_{\rm onshell}$ with given boundary behaviour of the bulk fields is identified with the generating functional of the dual CFT correlation functions [4, 5],
\begin{eqnarray}
Z_{\rm onshell}[\Phi_{(0)}]&=& \Bigl\langle e^{-\int d^d x\, \Phi_{(0)}(x)\,{\cal O}(x)}\Bigl\rangle
\end{eqnarray}
where $\Phi_{(0)}$ denotes the field parametrizing the Dirichlet boundary conditions of the bulk field $\Phi$ which is dual to the CFT operator ${\cal O}$.
In the saddle point approximation, the generator of the connected QFT correlators, denoted by $W[\Phi_{(0)}]$, is given by the on-shell value of the action, namely,
\begin{eqnarray}
W[\Phi_{(0)}] &=&-S_{\rm onshell}
\end{eqnarray}
This is the main ingredient to compute the correlation functions of boundary CFT operators from the bulk action.
To obtain renormalized correlators we still need to holographically renormalize [55]. We regulate the theory by putting the boundary at $z=\epsilon$ and add counterterms to cancel the infinities.
The full renormalized action is obtained by
\begin{eqnarray}
S_{\rm ren}&=&\lim_{\epsilon\rightarrow 0}\;\Bigl( S_{\rm reg}+S_{\rm ct}\Bigl)
\end{eqnarray}
where $S_{\rm reg}$ denotes the regularised action and $S_{\rm ct}$ denotes the counterterms.
The details of the holographic renormalisation for the bulk theory described by the action (<ref>) are given in appendix <ref>. Given the renormalized on-shell action, we can now evaluate the desired 3-point function. The first step is to obtain the exact renormalized 1-point function of the conserved current. It is given by (for details, see appendix <ref>)
\begin{eqnarray}
\langle\mathcal{J}^\mu(k)\rangle \;=\; \lim_{\epsilon\rightarrow 0}\; \frac{1}{\epsilon^{\frac{d}{2}}\sqrt{\gamma}}\, \frac{\delta S_{\rm ren}}{\delta A_\mu(k,\epsilon)}
\end{eqnarray}
where we have used the Fefferman-Graham coordinates,
\begin{eqnarray}
ds^2 \;=\;\frac{L^2\,d\rho^2}{4\rho^2}+\gamma_{\mu\nu}\,dx^\mu dx^\nu\qquad,\qquad \gamma_{\mu\nu} \;=\;\frac{L}{\rho}\,\delta_{\mu\nu}\, .
\end{eqnarray}
Here $\gamma_{\mu\nu}$ is the induced metric at constant $\rho$, and the IR regulating boundary is at $\rho=L\epsilon$.
The CFT 3-point function of the conserved current and two spin-1 operators is obtained by differentiating (<ref>) with respect to the sources of the spin-1 operators. The final result is given by
\begin{eqnarray}
\Bigl\langle\mathcal{O}^{*\mu}(p_1)\,\mathcal{J}^\tau(p_2)\,\mathcal{O}^{\nu}(p_3) \Bigr\rangle\;=\;\delta^{\tau\lambda}\,(2\pi)^d(2\pi)^d\, \frac{\delta^2}{\delta\mathcal{W}_{(0) \mu}(-p_1)\,\delta\mathcal{W}^{*}_{(0) \nu}(-p_3)}\int_0^\infty d\sigma\sqrt{G}\; \mathbb{K}_{\lambda\kappa}(\sigma;\,p_2)\,J^{\kappa}_{(0)}(\sigma,\,p_2)\, ,
\end{eqnarray}
where $\mathcal{W}_{(0) \mu}$ and $\mathcal{W}_{(0) \mu}^*$
denote the fields associated with the boundary conditions of the bulk fields $W_M$ and $W^*_M$, respectively (see (<ref>), (<ref>)). These are the sources of the boundary operators ${\cal O}^*_\mu$ and ${\cal O}_\mu$, respectively. $J^\kappa_{(0)}$ denotes the boundary component of the current (<ref>), but with terms only up to $O(g)$ in the coupling constant. Terms of higher order in $g$ are relevant for bulk calculations of four- and higher-point correlation functions but do not contribute to the 3-point function considered in this section. The source current $J^\kappa_{(0)}$ is a function of the massive fields $W_\mu$ and $W^*_\mu$, whose classical solutions are given in equation (<ref>).
After a long but straightforward calculation and using the definition of triple-K integrals given in (<ref>), the transverse contribution to the 3-point function is obtained to be
\begin{eqnarray}
\llangle[\Bigl] {\cal O}^*_\nu(p_1)\, {\cal J}_\mu(p_2)\,{\cal O}_\rho(p_3)\rrangle[\Bigl]\;\;=\;\; (\pi_2\cdot p_1)_\mu\,{\cal A}_{\nu\rho}+(\pi_2)_{\mu\nu}\,{\cal B}_\rho+ (\pi_2)_{\mu\rho}\,{\cal C}_\nu\label{4.40}
\end{eqnarray}
The form factors ${\cal A}$, ${\cal B}$ and ${\cal C}$ have the same structure as in equations (<ref>) and (<ref>). However, the
coefficients $a_i$ and $b_i$ appearing in (<ref>) are now given in terms of the AdS bulk parameters as
\begin{eqnarray}
a_1&=& gC_0\left[-2+\f{2(d-2)}{L^2}\beta\right]\nonumber\\[.2cm]
a_2&=&gC_0\left[-4+\,\frac{2(d-2)}{\Delta-1}\alpha+\frac{1}{L^2}\,\frac{2(d-2)(2(2-\Delta)+d(\Delta-1))}{(1-\Delta)}\beta\right] \nonumber\\[.2cm]
a_4&=&gC_0\left[\frac{1-\alpha}{\Delta-1}+\frac{{1}}{L^2}\, \frac{2(d-2+\Delta(1-d))}{1-\Delta}\beta\right]\non\\[.2cm]
a_5 &=&gC_0\left[ \f{2\beta}{L^2}\right]\non\\[.2cm]
b_1&=&gC_0\left[\frac{d-2\Delta}{2(\Delta-1)}\Bigl( 1+\alpha-\frac{2\,{\Delta}}{L^2}\,\beta\Bigl)\right]\nonumber\\[.2cm]
b_2&=&gC_0\left[-(1+\alpha)+\,\frac{{2\Delta}}{L^2}\beta \right]
\label{4.42fg}
\end{eqnarray}
where we have defined[The AdS-radius factor $L^{2\Delta-d-1}$ that appears in the definition of $C_0$ has been extracted from the metric factors involved in the integral of three Btb propagators. All the other factors appearing in the definition of $C_0$ collect the overall constants present in equations (<ref>) and (<ref>).]
\begin{eqnarray}
C_0= -\frac{2^{2-\frac{d}{2}}}{\Gamma\left(\frac{d}{2}-1\right)}\,\left[\frac{2^{\frac{d}{2}+1-\Delta}}{\Gamma\left(\Delta -\frac{d}{2}\right)}\right]^2\,L^{2\Delta-d-1}\label{4.42}
\end{eqnarray}
The relations given in equation (<ref>) can be easily seen to be satisfied for the values of $a_4,a_5$ and $b_2$ given above. The AdS/CFT correspondence has fixed the 3 arbitrary parameters in the boundary CFT 3-point function in terms of the bulk coupling parameters.
The CFT 3-point function, reviewed in section <ref>, of one conserved current and two non conserved operators (with the same conformal dimensions) spans a 3-dimensional space. The bulk effective theory likewise has 3 parameters, $g, \alpha$ and $\beta$, which span a 3-dimensional space. The 3 independent parameters on the CFT side were chosen to be $a_4, a_5$ and $b_2$; their expressions in terms of the bulk parameters are given above. We can also invert these relations to express the bulk parameters in terms of the independent boundary CFT parameters as
\begin{align}
g&=-\frac{(\Delta-1)\left(-a_4+(d-2) a_5\right)+b_2}{2 C_0} \label{eq:g}\\
\alpha&=-\frac{(\Delta-1)(-a_4 + d a_5) + 2 a_5 -b_2}{(\Delta-1)\left(-a_4+(d-2) a_5\right)+b_2} \\
\beta&= -\frac{a_5 L^2}{(\Delta-1)\left(-a_4+(d-2) a_5\right)+b_2}
\end{align}
If we had fewer than 3 parameters in the bulk, they would not span the full 3-dimensional CFT space mentioned above. Similarly, if we had more than 3 parameters in the bulk, say coming from higher derivative terms, then the relation between the CFT and bulk parameters would be degenerate.
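The inversion can be verified symbolically; the sympy sketch below (our own check) substitutes the holographic expressions (<ref>) for $a_4$, $a_5$, $b_2$ into the formulas above and confirms that $g$, $\alpha$ and $\beta$ are recovered:
\begin{verbatim}
# Sketch: verify the inversion (a_4, a_5, b_2) -> (g, alpha, beta).
import sympy as sp

d, De, L, g, al, be, C0 = sp.symbols(
    'd Delta L g alpha beta C0', positive=True)

a4 = g*C0*((1 - al)/(De - 1)
           + 2*(d - 2 + De*(1 - d))*be/(L**2*(1 - De)))
a5 = g*C0*2*be/L**2
b2 = g*C0*(-(1 + al) + 2*De*be/L**2)

den = (De - 1)*(-a4 + (d - 2)*a5) + b2     # common denominator

assert sp.simplify(-den/(2*C0) - g) == 0
assert sp.simplify(-((De - 1)*(-a4 + d*a5) + 2*a5 - b2)/den - al) == 0
assert sp.simplify(-a5*L**2/den - be) == 0
print("inversion formulas verified")
\end{verbatim}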
One important point to note is that each coupling in the bulk (minimal, gyromagnetic, quadrupole) is consistent with the boundary CFT 3-point function by itself.
This follows from the fact that the bulk action is AdS invariant for any value of the couplings, and the AdS isometries imply that the contribution of each term in the boundary correlator is a CFT correlator on its own.
Moreover, the matching happens for arbitrary values of these parameters.
The matching of the 3-point function considered here is a non-trivial confirmation of the gauge/gravity correspondence for an effective field theory of charged massive spin-1 and gauge field up to three derivative terms.
§.§ Conservation Ward identity from the bulk
The transverse Ward identity (<ref>) relates the 2-point function to the longitudinal part of the 3-point function involving the divergence of the conserved current. We shall now show that it is consistent with our bulk analysis. The Ward identity (<ref>) is easiest to derive by the procedure of holographic renormalisation. Using (<ref>), we find the 1-point function of the divergence of the current to be (focusing on odd $d$ for now)
\begin{eqnarray}
p_{2\mu}\langle\mathcal{J}^\mu(p_2)\rangle\;=\; -\frac{2}{L}\left(\frac{d}{2} -1\right) \delta^{\mu\nu}\, p_{2\mu}\, A_\nu^{(d -2)}
\end{eqnarray}
where $ A_\nu^{(d -2)}$ appears in the asymptotic expansion of the gauge field (see equation (<ref>)).
Now, up to $O(g)$, the RHS of the above equation in momentum space takes the form (see equation (<ref>))
\begin{eqnarray}
(d-2)\,\delta^{\mu\nu}\, p_\mu A_\nu^{(d-2)}(p)
\;=\;
g\,(2\Delta-d)\int\frac{d^dk}{(2\pi)^d}\; \delta^{\mu\nu}\Bigl[ \mathcal{W}_\mu^{*(0)}(k)\, \mathcal{W}_\nu^{(2\Delta-d)}(p-k)-\mathcal{W}_\mu^{(0)}(k)\, \mathcal{W}_\nu^{*(2\Delta-d)}(p-k) \Bigr]
\end{eqnarray}
where $\mathcal{W}_\mu^{(0)}$ and $\mathcal{W}_\nu^{(2\Delta-d)}$ (and their complex conjugates) are the source and vev parts of the near boundary expansion of the Proca field, as given in equations (<ref>) and (<ref>) respectively.
Now, using the 1-point function (<ref>) and the expressions of $\mathcal{W}_\mu^{(2\Delta-d)}$ (and its complex conjugate) given in (<ref>), we find
\begin{eqnarray}
&&\hspace*{-.9cm}\langle\mathcal{O}^{*\nu}(p_1)\,p_{2\mu}\mathcal{J}^\mu(p_2)\, \mathcal{O}^{\sigma}(p_3)\rangle\nonumber\\[.2cm]
&=& -\frac{1}{L}\,(d-2)\,\delta^{\mu\tau}p_{2\mu}\,\frac{\delta^2 A^{(d-2)}_\tau(p_2)}{\delta\mathcal{W}^{(0)}_\nu(-p_1)\,\delta\mathcal{W}^{*(0)}_\sigma(-p_3)}\,(2\pi)^d(2\pi)^d\nonumber\\[.2cm]
&=& -\frac{1}{L}\, g\,(2\Delta-d)\Biggl[ \left(\frac{p_1}{2}\right)^{2\Delta-d} \frac{\Gamma\left(\frac{d}{2}-\Delta\right) L^{2\Delta-d}}{\Gamma\left(\Delta-\frac{d}{2}\right)} \left(\delta^{\nu\sigma}+ \frac{p_1^\nu p^\sigma_1(d-2\Delta)}{p_1^2(\Delta-1)}\right)\nonumber\\[.2cm]
&&\hspace*{2.4cm}- \left(\frac{p_3}{2}\right)^{2\Delta-d} \frac{\Gamma\left(\frac{d}{2}-\Delta\right) L^{2\Delta-d}}{\Gamma\left(\Delta-\frac{d}{2}\right)} \left(\delta^{\sigma\nu}+ \frac{p_3^\nu p^\sigma_3(d-2\Delta)}{p_3^2(\Delta-1)}\right) \Biggr]\, (2\pi)^d\delta^d(p_1+p_2+p_3)\nonumber\\[.2cm]
&=& g\, \Bigl[\llangle[\Bigl] \mathcal{O}^{*\nu}(-p_3)\,\mathcal{O}^{\sigma}(p_3)\rrangle[\Bigl] - \llangle[\Bigl] \mathcal{O}^{*\nu}(p_1)\,\mathcal{O}^{\sigma}(-p_1)\rrangle[\Bigl] \Bigr]\, (2\pi)^d \delta^d(p_1+p_2+p_3)
\end{eqnarray}
In going to the last equality, we have used the expression for the 2-point function (<ref>) obtained using holographic renormalisation. The above result (<ref>) is exactly the transverse Ward identity (<ref>) that we wanted to show. We can also verify this Ward identity by directly using (<ref>) and contracting it with the momentum of the current $\mathcal{J}^\mu$. In this derivation, we considered the case of odd dimensions and arbitrary $\Delta$. The analysis for even dimension and arbitrary $\Delta$ is similar and yields the same final result (<ref>).
The Ward identity (<ref>) also yields the relation (<ref>) between the CFT 2- and 3-point function coefficients. Using (<ref>), we see that the CFT 2-point function coefficient $a_0$ becomes
\begin{eqnarray}
a_0 \;=\; 2^{d-2\Delta}\,(2\Delta-d)\, \frac{\Gamma\left(\frac{d}{2}-\Delta\right)}{\Gamma\left(\Delta-\frac{d}{2}\right)}\, L^{2\Delta-d-1}
\end{eqnarray}
This agrees exactly with the coefficient appearing in the 2-point function of the Proca field in equation (<ref>), obtained using holographic renormalisation.
§ FLAT SPACE LIMIT OF PROPAGATORS
In this section, we consider the AdS propagators for the gauge and Proca fields and analyse them in the flat space limit. More specifically, we shall consider the bulk-to-bulk (BtB) propagator of the gauge field and the bulk-to-boundary (Btb) propagators of both the gauge and Proca fields. We shall show how the BtB propagator of the gauge field turns into the momentum representation of the gauge Feynman propagator in the flat space limit. The Btb propagators, on the other hand, will turn out to be related to the external leg factors of the corresponding fields in the flat limit.
In section <ref>, we reviewed how the AdS geometry locally reduces to the flat space geometry when the AdS radius $L$ is taken to be large. We introduced the bulk coordinate $\uptau$ via the relation $\frac{z}{L}=e^{\frac{\uptau}{L}}$. The flat metric corresponds to keeping $\f{z}{L}$ of $\mathcal{O}(1)$ and neglecting the $\mathcal{O}(\f{1}{L})$ terms in the AdS metric (see equation (<ref>)). It is clear that in this limit the radial coordinate $z$ is very large. This is consistent with the bulk kinematic region $z\,k\gg 1$ identified in [33] as the bulk region relevant for reproducing the flat space S-matrix. This is also the limit that we shall consider, in this and the next section, for the BtB and Btb propagators and for the 3-point correlator, in order to obtain the corresponding flat space quantities.
§.§ Gauge bulk-to-bulk propagator
The derivation of the bulk-to-bulk propagator of an abelian gauge field in momentum space in the radial/axial gauge $A_0=0$ has been reviewed in appendix <ref>; the result is
\begin{eqnarray}
\mathcal{G}_{\mu\nu}(z,\,w;\,k)\;=\; -\frac{1}{L^{d-3}}\left\{\begin{array}{lr}
(zw)^{\frac{d}{2}-1}\,I_{\frac{d}{2}-1}(k z)\,K_{\frac{d}{2}-1}(k w)\,\pi_{\mu\nu}+\frac{z^{d-2}}{d-2}\,\frac{k_\mu\,k_\nu}{k^2}\,,&\mbox{if}\;\; z< w\\[.4cm]
(zw)^{\frac{d}{2}-1}\,I_{\frac{d}{2}-1}(k w)\,K_{\frac{d}{2}-1}(k z)\,\pi_{\mu\nu}+\frac{w^{d-2}}{d-2}\,\frac{k_\mu\,k_\nu}{k^2}\,,&\mbox{if}\;\; z > w
\end{array}\right.
\end{eqnarray}
For taking the flat space limit, we shall work in the $\uptau$ coordinate introduced in (<ref>) and write
\begin{eqnarray} \label{eq:K_exp}
K_{\frac{d}{2}-1}(z\,k)=K_{\frac{d}{2}-1}(e^{\frac{\uptau_z}{L}}\,k\,L)\qquad;\qquad I_{\frac{d}{2}-1}(w\,k)=I_{\frac{d}{2}-1}(e^{\frac{\uptau_w}{L}}\,k\,L)
\end{eqnarray}
Using the asymptotic expansion of the Bessel function for the large argument given in (<ref>), we find
\begin{eqnarray}
&&K_{\frac{d}{2}-1}(z\,k)= \left(\frac{\pi}{2\,L \,k}\right)^{\frac{1}{2}} e^{-k\,\left(1+\frac{\uptau_z}{L}\right)\,L}\,+{\cal O}\Bigl(\f{1}{L}\Bigl)\;\;;\quad
I_{\frac{d}{2}-1}(w\,k)= \frac{1}{\sqrt{2\,\pi\,L\,k}}e^{k\,\left(1+\frac{\uptau_w}{L}\right)\,L}+{\cal O}\Bigl(\f{1}{L}\Bigl)\non
\label{5.50}
\end{eqnarray}
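These leading large-argument asymptotics, which are independent of the order at this level, are easy to confirm numerically (a scipy sketch of our own; the parameter values are arbitrary):
\begin{verbatim}
# Sketch: large-argument asymptotics of the modified Bessel functions,
# K_nu(x) ~ sqrt(pi/(2x)) e^{-x},  I_nu(x) ~ e^{x}/sqrt(2 pi x).
import numpy as np
from scipy.special import iv, kv

nu = 1.5                      # plays the role of d/2 - 1 (here d = 5)
for x in (10.0, 100.0, 500.0):
    K_asym = np.sqrt(np.pi / (2 * x)) * np.exp(-x)
    I_asym = np.exp(x) / np.sqrt(2 * np.pi * x)
    print(x, kv(nu, x) / K_asym, iv(nu, x) / I_asym)  # ratios -> 1
\end{verbatim}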
With these results, the bulk-to-bulk propagator takes the form
\begin{eqnarray}
\mathcal{G}_{\mu\nu}(z,w;k)\Big|_{L\rightarrow\infty} \;=\; -\left\{\begin{array}{lr}
\frac{e^{-k(\uptau_w -\uptau_z )}}{2k}\, \pi_{\mu\nu}+\left(\frac{L}{d-2}+\uptau_z\right)\frac{k_\mu k_\nu}{k^2}+{\cal O}\left(\frac{1}{L}\right),&\mbox{if}\;\; \uptau_z< \uptau_w\\[.4cm]
\frac{e^{-k(\uptau_z-\uptau_w)}}{2k}\,\pi_{\mu\nu}+\left(\frac{L}{d-2}+\uptau_w\right)\frac{k_\mu k_\nu}{k^2}+{\cal O}\left(\frac{1}{L}\right),&\mbox{if}\;\; \uptau_z > \uptau_w
\end{array}\right.
\end{eqnarray}
To proceed further, we observe that the longitudinal part of the bulk-to-bulk propagator in the flat space limit can be manipulated as[ The same result can be obtained by writing in equation (<ref>) $\uptau_z=\frac{1}{2} (\uptau_z+\uptau_w) +\frac{1}{2} (\uptau_z-\uptau_w)$ and similarly for $\uptau_w$.]
\begin{eqnarray}
&&-\frac{L}{d-2}\left[\left(\frac{z}{L}\right)^{d-2}\Theta(w-z)+\left(\frac{w}{L}\right)^{d-2}\Theta(z-w)\right]\nonumber\\[.2cm]
&&\qquad=\; \frac{L}{2-d}\left[ e^{(d-2)\left(\frac{\uptau_z+\uptau_w}{2L}+\frac{\uptau_z-\uptau_w}{2L}\right)}\, \Theta\bigl(L e^{\frac{\uptau_w}{L}}-L e^{\frac{\uptau_z}{L}}\bigr)+\bigl(z\leftrightarrow w\bigr)\right]\nonumber\\[.2cm]
&&\qquad=\; \left(\frac{L}{2-d} -\frac{\uptau_w+\uptau_z}{2} \right)-\frac{(\uptau_z-\uptau_w)}{2}\,\Theta(\uptau_w-\uptau_z)-\frac{(\uptau_w-\uptau_z)}{2}\,\Theta(\uptau_z-\uptau_w)+{\cal O}\left(\frac{1}{L}\right)
\end{eqnarray}
where we have kept only the leading order terms in $L$. In the limit $L\rightarrow\infty$, the first term in the above expression diverges. We shall shortly connect this divergence with the singularity of the axial gauge propagator in flat space.
The non-translation-invariant piece is a consequence of the divergence. To see this, recall that time translations originate from scaling, $x^{\mu}{}' =e^\lambda x^\mu, \ z' = e^\lambda z$, in the limit $\lambda \to 0$, $L \to \infty$, with $\beta = \lambda L$ fixed, see (<ref>). In momentum space, $q^\mu{}' = e^{-\lambda} q^\mu$, and the arguments of the Bessel functions, $kz$ and $kw$, are invariant under such a rescaling. It follows that
\begin{equation}
\mathcal{G}_{\mu\nu}(e^\lambda z, e^\lambda w;e^{-\lambda} k)
= e^{(d-2) \lambda} \mathcal{G}_{\mu\nu}(z,w;k) \ \Rightarrow \
\delta_\lambda \mathcal{G}_{\mu\nu}(z,w;k) = (d-2) \lambda \mathcal{G}_{\mu\nu}(z,w;k)\, .
\end{equation}
Our computation above shows that the transverse part of the correlator is finite as $L \to \infty$, and thus as $\lambda \to 0$ the transverse part is invariant under time translations,
\begin{equation}
\lim_{L \to\infty, \lambda \to 0} \delta_\lambda \mathcal{G}^{\perp}_{\mu\nu}(z,w;k)
= 0\, .
\end{equation}
On the other hand, the longitudinal part diverges linearly in $L$, and thus
\begin{equation}
\lim_{L \to\infty, \lambda \to 0} \delta_\lambda \mathcal{G}^{||}_{\mu\nu}(z,w;k)
=- \beta\, ,
\end{equation}
since $\lambda L=\beta$ in this limit. This is precisely how the longitudinal part in (<ref>) transforms under $\delta \uptau = \beta$.
Thus, if we remove the divergence, the correlator will also be time-translation invariant.
Ignoring the non-translation invariant part, we have
\begin{eqnarray}
\mathcal{G}^{\rm TI}_{\mu\nu}(z,w;k)\Big|_{L\rightarrow\infty} \;=\; -\left\{\begin{array}{lr}
\frac{e^{-k(\uptau_w -\uptau_z )}}{2k}\, \pi_{\mu\nu}+\left(\frac{L}{d-2}+\frac{\uptau_z-\uptau_w}{2} \right)\frac{k_\mu k_\nu}{k^2}+{\cal O}\left(\frac{1}{L}\right),&\mbox{if}\;\; \uptau_z< \uptau_w\\[.4cm]
\frac{e^{-k(\uptau_z-\uptau_w)}}{2k}\,\pi_{\mu\nu}+\left(\frac{L}{d-2}+\frac{\uptau_w-\uptau_z}{2}\right)\frac{k_\mu k_\nu}{k^2}+{\cal O}\left(\frac{1}{L}\right),&\mbox{if}\;\; \uptau_z > \uptau_w
\end{array}\right.
\end{eqnarray}
where the superscript TI indicates that we kept only the translational invariant part.
[Figure: integration contours in the complex $q^0$ plane. The integrand has single poles at $q^0=\pm|\vec{k}|$ and a double pole at the origin, avoided by small semicircles of radius $\epsilon$. For $x_0<y_0$, we close the contour in the upper half plane and use the blue contour; for $x_0>y_0$, we close the contour in the lower half plane and use the red contour.]
To see how to proceed, let us consider the Feynman propagator of an Abelian gauge field in the axial gauge in flat space. In position space, it is given by [58]
\begin{eqnarray}
\Delta_{ab}(x-y) \;=\; \int\frac{d^{d+1}q}{(2\pi)^{d+1}}\; e^{-iq\cdot(x-y)}\,D_{ab}(q)
\end{eqnarray}
\begin{eqnarray}
D_{ab}(q)=\frac{i}{q^2}\Big\{ g_{ab}-\frac{q_a\, n_b+q_b\,n_a}{q\cdot n} +\frac{q_a\,q_b( n^2 +\xi \,q^2)}{(q\cdot n)^2}\Big\}
\end{eqnarray}
where we work with the mostly-minus Minkowski metric,
and $n_a$ is a constant vector used to impose the gauge condition $n_a\,A^a=0$. The axial temporal gauge is imposed by taking $n_a\equiv (1,\,0,\dots,\,0)$ and $\xi=0$, which gives
\begin{eqnarray}
D_{\mu\nu}(q)=\frac{-i}{q^2}\Bigl[ \delta_{\mu\nu} -\frac{q_\mu\,q_\nu}{q_0^2} \Bigl]\qquad;\qquad D_{\mu 0}=D_{00}=0\, . \label{jkutyghfrt}
\end{eqnarray}
[Figure: the axial gauge propagator in flat space can be regularised by shifting the double pole at the origin along the imaginary axis to $q^0=\pm i\mu$, while the single poles sit at $q^0=\pm E$. This gives the principal value of the integral.]
To compare it with the flat space limit result (<ref>), we need to perform the integration over the $q^0$ component in (<ref>). To perform this integral, we note that the integrand given by (<ref>) has the standard single poles of the propagator at $q^0= \pm |\vec{q}|=\pm E$ (see figure <ref>), and an unphysical double pole at $q^0=0$. The presence of this double pole makes the integration over $q^0$ divergent. We shall compute this divergent part explicitly. To this end, we want to evaluate
\begin{eqnarray}
I \;=\; -i\int\frac{dq^0}{(2\pi)}\; e^{-iq^0(x^0-y^0)}\; \frac{1}{(q^0)^2-|\vec{q}|^2}\left(\delta_{\mu\nu} - \frac{q_\mu q_\nu}{(q^0)^2}\right)
\end{eqnarray}
We can use the standard Feynman prescription for the single poles. However, we need to avoid the double pole at the origin. Thus, for $x_0< y_0$ and $x_0>y_0$, we use the blue and red contours respectively, as shown in Fig. <ref>. Denoting the radius of the small semicircles around the origin by $\epsilon$ and following the standard method, we find that the result of the above integral is given by
\begin{eqnarray}
I \;=\;\left\{\begin{array}{lr}
- \frac{e^{i|\vec{q}|(x_0-y_0)}}{2|\vec{q}|}\, \pi_{\mu\nu}-i\,\frac{q_\mu q_\nu}{|\vec{q}|^2}\left( \frac{1}{\pi\epsilon} +\frac{1}{2}(x_0-y_0) \right),&\mbox{if}\;\; x^0< y^0\\[.4cm]
- \frac{e^{-i|\vec{q}|(x^0-y^0)}}{2|\vec{q}|}\,\pi_{\mu\nu}-i\,\frac{q_\mu q_\nu}{|\vec{q}|^2}\left( \frac{1}{\pi\epsilon} -\frac{1}{2}(x_0-y_0) \right),&\mbox{if}\;\; x^0 > y^0
\end{array}\right.
\end{eqnarray}
Making use of the step theta function, the longitudinal part can be written as
\begin{eqnarray}
\frac{1}{\pi\epsilon} -\left[\frac{1}{2}(x_0-y_0)\, \theta(x_0-y_0)-\frac{1}{2}(x_0-y_0)\, \theta(y_0-x_0)\right]
\end{eqnarray}
This is identical to the longitudinal part of the flat space limit of the bulk-to-bulk propagator of the gauge field given in equation (<ref>), provided we Wick rotate $(x^0,\,y^0)=-i(\uptau_z,\,\uptau_w)$ and identify $\epsilon \sim 1/L$.
In flat space, a standard approach to regularise the axial gauge propagator is to use the principal value (PV) prescription for the double pole, as shown in Fig. <ref> [58]
\begin{eqnarray}
{\rm PV}\left(\frac{1}{(q^0)^2}\right)\;\;=\;\; \frac{1}{2} \left[\frac{1}{(q^0+i\mu)^2}+\frac{1}{(q^0-i\mu)^2}\right]\;;\quad\mu>0
\end{eqnarray}
With this prescription, the double pole at $q^0=0$ is shifted to $q^0=\pm i\mu$ (see Fig. <ref>). We can now use the standard Feynman contour prescription to perform the integration over $q_0$ and then send $\mu \to 0$. This gives the same expression as in (<ref>), except that the terms involving $\f{1}{\epsilon}$ are now absent. Note that different prescriptions for dealing with the double pole lead to a time-translation non-invariant term in the longitudinal part of the propagator [57], as in (<ref>).
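At the level of the double pole, the content of the PV prescription is captured by a simple algebraic identity (a small sympy check of our own):
\begin{verbatim}
# Sketch: the PV combination regularises the double pole,
# (1/2)[(q0 + i mu)^{-2} + (q0 - i mu)^{-2}]
#     = (q0^2 - mu^2)/(q0^2 + mu^2)^2  ->  1/q0^2  as mu -> 0.
import sympy as sp

q0, mu = sp.symbols('q0 mu', positive=True)

pv = sp.simplify((1/(q0 + sp.I*mu)**2 + 1/(q0 - sp.I*mu)**2) / 2)
print(pv)                      # (q0**2 - mu**2)/(q0**2 + mu**2)**2
print(sp.limit(pv, mu, 0))     # 1/q0**2
\end{verbatim}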
Thus, with the understanding that $L \to \infty$ limit is treated in this way, we obtain the final result
\begin{eqnarray}
\mathcal{G}^{\rm FV}_{\mu\nu}(z,w;k)\Big|_{L\rightarrow\infty}\;\; \simeq\;\;\left\{\begin{array}{lr}
-\frac{1}{2k} e^{-k(\uptau_w-\uptau_z)}\pi_{\mu\nu}-\,\frac{k_\mu\,k_\nu}{k^2}\,\frac{(\uptau_z-\uptau_w)}{2}&\mbox{if}\;\; \uptau_z<\uptau_w\\[.4cm]
-\frac{1}{2k} e^{-k(\uptau_z-\uptau_w)}\pi_{\mu\nu}-\,\frac{k_\mu\,k_\nu}{k^2}\, \frac{(\uptau_w-\uptau_z)}{2}&\mbox{if}\;\;\uptau_w<\uptau_z
\end{array}\right.\label{ghtyu}
\end{eqnarray}
where FV stands for ``finite value''.
§.§ Bulk-to-Boundary Propagators
The bulk-to-boundary propagators dictate the external leg factors of the corresponding field in the flat space limit. We start with the gauge field whose bulk-to-boundary propagator is given in equation (<ref>). Its flat space limit is easily obtained by using the asymptotic expansion given in equation (<ref>)
\begin{eqnarray}
{\mathbb K}_{\mu\nu}(e^{\frac{\uptau}{L}}L,\,k)\Big|_{L\rightarrow\infty}\;\;=\;\; \, L^{\frac{d-3}{2}}\,\Biggl[\left(\frac{\pi}{2}\right)^{\frac{1}{2}}\frac{2^{2-\frac{d}{2}}e^{-L \,k}}{\Gamma\left(\frac{d}{2}-1\right)}\,k^{\frac{d-3}{2}} \pi_{\mu\nu} \, e^{-k\,\uptau}+{\cal{O}}\Bigl(\f{1}{L}\Bigl)\Biggl]\;\;+\;\;\frac{k_\mu k_\nu}{k^2}
\end{eqnarray}
Noting (<ref>), the gauge field in the flat limit can be written as
\begin{eqnarray}
A_0=0, \qquad A_\mu^\perp(k)\;\;\simeq\;\;
\pi_{\mu\nu} \frac{1}{\sqrt{Z_A}} A_{(0)}^{\nu}(k) e^{-k\,L} e^{-k\,\uptau}, \qquad A_\mu^{||}(k)\;\;\simeq\;\; -i\frac{k_\mu k_\nu A_{(0)}^\nu(k)}{k^2}
\label{5.59}
\end{eqnarray}
where $A_{(0)}^{\nu}(k)$ is the AdS boundary condition (<ref>), and we have introduced the normalization function $Z_A$, which depends on the AdS radius and the momentum as
\begin{eqnarray}
\frac{1}{\sqrt{Z_A}}=\pi^{\frac{1}{2}}\,k^{\frac{d-3}{2}}\, L^{\frac{d-3}{2}}
\,\frac{2^{\frac{3-d}{2}}}{\Gamma\left(\frac{d-2}{2}\right)}\, .\label{5.60}
\end{eqnarray}
The factor $e^{-k\,L}$ in (<ref>) may be removed by shifting $\uptau$ by $L$. If we leave this factor in (<ref>) it will cancel out in correlators as a consequence of the time translation invariance of the flat space correlators, or (what is the same) because of the energy-conserving delta function. We will see this explicitly in the next section.
The longitudinal part of the gauge field $A_\mu$ is independent of $\uptau$, and thus we set it to zero by a gauge transformation that preserves the axial gauge $A_0=0$. We further define
\begin{equation} \label{eq:scale_a}
a_R^\mu = \frac{1}{\sqrt{Z_A}} A_{(0)}^{\mu}(k)\, .
\end{equation}
The factor $1/\sqrt{Z_A}$ tends to infinity as $L \to \infty$, and thus we need to scale the source $A_{(0)\mu}$ to zero in order for $a_R^\mu$ to be finite.
As the AdS source is arbitrary, one may always arrange for $a_R^\mu$ to be finite in the flat-space limit. Thus the flat-space limit of the gauge field becomes
\begin{equation}
A_a(\uptau, k) = {\cal A}_a e^{-k\,\uptau}, \qquad {\cal A}_a\;\;\equiv \;\;
\,\bigl(0,\,\pi_{\mu\nu} a_R^{\nu}(k)\bigl)\, .\label{ghtyr}
\end{equation}
The ${\cal A}_a$ thus defined satisfies the transversality condition $q^a\,{\cal A}_a=0$, where the $(d+1)$ dimensional null momentum is defined as [19]
\begin{eqnarray}
q^a= (q^0,q^\mu)=(\pm ik,\,k^\mu),\qquad \qquad q^2\equiv \delta_{ab}\,q^a\,q^b=0\, ,\label{5.61}
\end{eqnarray}
with $k$ being the magnitude of $k^\mu$. After Wick rotation to Minkowski spacetime,
$q_M^a=(\pm k, k^\mu)$ and $\uptau=i t$, the factor $e^{-k \uptau}$ becomes plane waves, $e^{\mp i q_M^0 t}$, and the two signs are related to whether the external leg is associated with an in- or out-state. The factor ${\cal A}_a$ encodes the $(d-1)$ polarization vectors of the $(d+1)$-dimensional vector field in flat space. To see this, let us consider a frame in which the momentum of the photon is along the $d$-direction, $q^a=(\pm i k, 0, \ldots, 0, k)$; then
\begin{equation} \label{eq:A_pol}
{\cal A}_a(k) = \sum_{\lambda=1}^{d-1} a^{(\lambda)}(k) \epsilon^{(\lambda)}_a\, ,
\qquad \epsilon^{(\lambda)}_a = (0, \delta^\lambda_i, 0),\quad i=1,\ldots, d-1\, ,
\end{equation}
where $\epsilon^{(\lambda)}_a $ are $(d-1)$ polarisation vectors and $a^{(\lambda)}$
is determined by the AdS boundary condition by $a^{(\lambda)} = a_R^\lambda$. Upon quantization $a^{(\lambda)}$ become the annihilation or creation operators (depending on the signs $\pm$ in $q^a$) of the mode with polarization vector $\epsilon^{(\lambda)}_a$ [
Note that this is similar to what happens in Lorentzian AdS solutions that correspond to CFT excited states. The CFT state may be generated by an Euclidean path integral by turning on a source for a dual operator on the boundary of AdS. Using the real-time AdS formalism of [67, 68] one may obtain the bulk Lorentzian solution corresponding to this state and in this solution the annihilation and creation operators are given in terms of the boundary sources [69, 70]. It turns out the resulting solution is precisely
that of HKLL [71], which is then interpreted as corresponding to a bulk coherent state [69, 70].].
One may check that the polarisation vectors satisfy the expected normalization condition,
\begin{equation}
\delta^{a b} \epsilon^{(\lambda)}_a \epsilon^{(\sigma)}_b = \delta^{\lambda \sigma}, \qquad \lambda, \sigma =1, \ldots, d-1\, ,
\end{equation}
and the expected completeness relation,
\begin{equation}
\sum_{\lambda=1}^{d-1} \epsilon^{(\lambda)}_a \epsilon^{(\lambda)}_b = \delta_{ab}
+ \frac{n^2}{(n\cdot q)^2} q^a q^b -\frac{1}{(n\cdot q)} (n^a q^b + n^b q^a)\, ,
\end{equation}
where $n^a=(1,0, \ldots, 0)$ is the vector imposing the temporal gauge $n^a A_a=0$.
Next, we consider the massive Proca field, whose Btb propagator is given in equation (<ref>). The extension of the above analysis to the massive Proca field is more involved due to the relation among the mass, the AdS radius and the conformal dimension of the dual operator given in equation (<ref>). Due to this relation, a finite mass in the large AdS radius limit requires that $\Delta$ also be taken large, keeping $\Delta/L\simeq m$ finite. This implies that we need to analyse the modified Bessel function appearing in the Btb propagator in the limit of both large argument and large order. This is known as the uniform expansion [72] and is reviewed in appendix <ref>. For the modified Bessel functions appearing in the Proca Btb propagator, the uniform expansion gives (see equation (<ref>))
\begin{eqnarray}
{K}_{\Delta-\frac{d}{2}+\ell}(z\,k)\;=\;\left(\f{\pi}{2\,L}\right)^{\f{1}{2}}(k^2+m^2)^{-\f{1}{4}} \left(\f{k}{m+\sqrt{k^2+m^2}}\right)^{-m\,L-\ell}e^{-\sqrt{k^2+m^2}(L+\uptau)}+{\cal O}\Bigl(\f{1}{L}\Bigl)
\label{ghtry}
\end{eqnarray}
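As a numerical sanity check of this uniform expansion (our own sketch; the parameter values are arbitrary), one can compare scipy's $K_\nu$ at large order and argument with the right-hand side of (<ref>):
\begin{verbatim}
# Sketch: uniform (large order AND large argument) expansion of K,
# with nu = m L + ell and argument z k = k L exp(tau/L).
import numpy as np
from scipy.special import kv

k, m, tau, ell = 1.0, 1.0, 0.3, 0
E = np.sqrt(k**2 + m**2)

for L in (20.0, 50.0, 100.0):
    nu = m * L + ell
    arg = k * L * np.exp(tau / L)
    exact = kv(nu, arg)
    approx = (np.sqrt(np.pi / (2 * L)) * (k**2 + m**2)**(-0.25)
              * (k / (m + E))**(-(m * L + ell)) * np.exp(-E * (L + tau)))
    print(L, exact / approx)   # ratio -> 1 as L grows
\end{verbatim}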
With the expansion (<ref>), the flat space limit of the Proca Btb propagator, or equivalently of the classical solution, can be easily worked out. Here, we note the flat space limit of the classical solutions given in equations (<ref>) and (<ref>)
\begin{align}
W_a(\uptau,\,k) &= {\cal W}_{a}(k)\, e^{-L\sqrt{k^2+m^2}} \,e^{-\sqrt{k^2+m^2}\,\uptau}+O\Bigl(\f{1}{L^{\frac{d-5}{2}}}\Bigl),\nonumber\\
w_R^\mu &= \frac{1}{\sqrt{Z_W}} w_\mu, \quad
{\cal W}_a(k)=\left( i\frac{k_\mu w_R^\mu}{m},\,\tilde{\pi}_{\mu \nu} w_R^\nu\right)\, ,\label{ghtyrgt}
\end{align}
where $w^\mu$ is the AdS boundary condition for the Proca field, see (<ref>). The factor of
$e^{-L\sqrt{k^2+m^2}}$ will cancel out in correlators as a consequence of time translation invariance. The expressions for $Z_W$ and $\tilde\pi_{\mu\nu}$ are
\begin{eqnarray}
\tilde{\pi}_{\mu\nu}&=&\delta_{\mu\nu} +\frac{k_\mu\,k_\nu}{m(m+\sqrt{k^2+m^2})}\, , \\
\frac{1}{\sqrt{Z_W}}&\equiv &
\frac{L^{\frac{d-3}{2}}}{(k^2+m^2)^{\frac{1}{4}}}
\frac{\left((m+\sqrt{m^2+k^2})/2\right)^{mL}}{(m L)^{mL -\frac{1}{2}}}
e^{m L}
\left(1+ {\cal O}\left(\frac{1}{m L}\right)\right)
\label{5.67}
\end{eqnarray}
Notice that $1/\sqrt{Z_W}$ goes to zero as $L\to \infty$, opposite to what happens for $1/\sqrt{Z_A}$, so to keep $w_R^\mu$ finite in the flat-space limit we now need to send the AdS source $w_\mu$ to infinity, which is always possible since $w^\mu$ is arbitrary.
The uplifted Euclidean momentum of the Proca field in $(d+1)$ dimensions in the flat-space limit can be written as
\begin{eqnarray}
q^a=(\pm i\sqrt{k^2+m^2},\,k^\mu)~~,~~q^2= -m^2
\label{5.68}
\end{eqnarray}
After Wick rotation to Minkowski spacetime,
$q_M^a=(\pm \sqrt{k^2+m^2}, k^\mu)$ and $\uptau=i t$, the factor $e^{-\sqrt{k^2+m^2} \uptau}$ becomes plane waves, $e^{\mp i q_M^0 t}$, and the two signs are related to whether the external leg is associated with an in- or out-state.
It is easy to check that the subsidiary condition ${\cal W}^a q_a=0$ is satisfied as expected (where the indices in ${\cal W}^a q_a$ are contracted using the $(d+1)$ dimensional Euclidean metric $\delta_{ab}$). Exactly as in the gauge field case, we can write ${\cal W}^a$ in terms of polarization vectors. Indeed, let $\epsilon_\mu^{(r)}=\delta_\mu^r$, $r=1, \ldots, d$, be the $d$ unit vectors along the boundary directions. Then
\begin{equation} \label{eq: w_pol}
w^\mu_R = \sum_{r=1}^d w^{(r)}(k) \epsilon_\mu^{(r)}\, ,
\end{equation}
i.e.\ $w^{(r)}(k)$ are the Cartesian components of $w^\mu_R$. We now introduce the polarization vectors,
\begin{equation}
\varepsilon_a^{(r)} = \left( i\frac{k^\rho \epsilon_\rho^{(r)}}{m},\,\tilde{\pi}_{\mu}{}^{\nu} \epsilon^{(r)}_\nu \right)\,
\end{equation}
One may check that they satisfy the expected normalization condition,
\begin{equation}
\delta^{a b} \varepsilon^{(r)}_a \varepsilon^{(s)}_b = \delta^{rs}, \qquad r, s =1, \ldots, d\, ,
\end{equation}
and the expected completeness relation,
\begin{equation}
\sum_{r=1}^{d} \varepsilon^{(r)}_a \varepsilon^{(r)}_b = \delta_{ab} + \frac{q_a q_b}{m^2}\, .
\end{equation}
In terms of these,
\begin{equation}
{\cal W}_a(k) =\sum_{r=1}^d w^{(r)}(k) \varepsilon^{(r)}_a\, .
\end{equation}
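Both identities are straightforward to verify numerically; the numpy sketch below (our own check, with an arbitrary boundary momentum and the Euclidean convention $q^a=(i\sqrt{k^2+m^2},\,k^\mu)$) confirms transversality, normalization and completeness:
\begin{verbatim}
# Sketch: transversality, normalization and completeness of the massive
# spin-1 polarisation vectors, q^a = (i sqrt(k^2+m^2), k^mu).
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 1.3
k = rng.normal(size=d)                  # arbitrary boundary momentum
E = np.sqrt(k @ k + m**2)
q = np.concatenate(([1j * E], k))       # (d+1)-momentum, q.q = -m^2

eps = np.zeros((d, d + 1), dtype=complex)
for r in range(d):
    e = np.zeros(d); e[r] = 1.0         # boundary unit vector eps^{(r)}
    eps[r] = np.concatenate(([1j * (k @ e) / m],
                             e + k * (k @ e) / (m * (m + E))))

assert np.allclose(eps @ q, 0)                   # q . eps^{(r)} = 0
assert np.allclose(eps @ eps.T, np.eye(d))       # normalization
assert np.allclose(eps.T @ eps,
                   np.eye(d + 1) + np.outer(q, q) / m**2)  # completeness
print("polarisation identities verified")
\end{verbatim}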
Exactly as in the gauge field case, the field $w_\mu$ that parametrizes the AdS boundary condition has morphed into the creation and annihilation operators (depending on the $\pm$ signs in (<ref>)), which upon quantization give rise to massive modes associated with the corresponding polarization vectors, and the AdS radial dependence has given rise to the expected plane wave behavior.
§ FLAT LIMIT OF 3-POINT FUNCTION
In this section we analyse the flat space limit of the CFT 3-point function between a conserved current and two spin-1 CFT operators computed in section <ref> using the AdS/CFT correspondence, and compare the resulting expression with the 3-point amplitude involving a gauge field and two massive spin-1 Proca fields in flat space. As discussed in the previous section, the sources must be rescaled in order for the limit to be finite, (<ref>), (<ref>); thus (using the chain rule) we expect
\begin{equation} \label{eq:flat_limit}
\lim_{L \to \infty} \sqrt{Z_{W_1} Z_A Z_{W_3}} \,A_3^{\mu_1\mu_2\mu_3} \sim \delta(E_1+E_2+E_3) {\cal M}_3^{\mu_1\mu_2\mu_3}
\end{equation}
where $Z_A$ and $Z_W$ are defined in (<ref>) and (<ref>), respectively,
$A_3^{\mu_1\mu_2\mu_3}$ is the momentum-space AdS 3-point amplitude and ${\cal M}_3^{\mu_1\mu_2\mu_3}$ is the corresponding flat space scattering amplitude. As we are working in momentum space, the momentum conserving delta function is already present, but the energy conserving delta function should emerge in the limit.
§.§ Asymptotic Expression of Triple K Integrals
The 3-point CFT correlator (<ref>) in momentum space is expressed in terms of triple-$K$ integrals. The specific integrals appearing in our correlator take the general form (see equation (<ref>))
\begin{eqnarray}
J_{N\{k_i\}}= \int_0^{\infty} dz \,z^{\frac{d}{2} -1+ N} \,p_1^{\Delta -\frac{d}{2} +k_1} \,K_{\Delta-\frac{d}{2} +k_1} (z\,p_1) \,p_2^{\frac{d}{2} -1+k_2} K_{\frac{d}{2} -1+k_2}(z\,p_2)\,p_3^{\Delta -\frac{d}{2} +k_3} \,K_{\Delta-\frac{d}{2} +k_3} (z\,p_3)\non
\end{eqnarray}
We want to evaluate these integrals in the limit $L,\Delta\rightarrow \infty$ keeping $\f{\Delta}{L}$ fixed. To do this, we use the asymptotic expansions given in equations (<ref>) and (<ref>) to obtain
\begin{eqnarray}
J_{N\{k_i\}}&\simeq& \left(\frac{\pi}{2}\right)^{\frac{3}{2}}\,L^{\frac{d-5}{2}+N}\,\frac{ (m+\sqrt{ p_1^2+m^2})^{mL+k_1} \,p_2^{\frac{d-3}{2}+k_2}\,(m+\sqrt{p_3^2+m^2})^{m\,L+k_3}}{(p_1^2+m^2)^{1/4}\,(p_3^2+m^2)^{1/4}}\nonumber\\[.4cm]
&&\,e^{-L (\sqrt{p_1^2+m^2}+p_2+\sqrt{p_3^2+m^2})}\,\int_{-\infty}^\infty d\uptau\,e^{-\uptau (\sqrt{p_1^2+m^2}+p_2+\sqrt{p_3^2+m^2})}\;\;+\;\;\cdots\label{5.74}
\end{eqnarray}
where $\cdots$ terms denote the terms subleading in $L$ and $\Delta$.
In the flat space limit, $\uptau$ is interpreted as Euclidean time. We want to perform the integration over this variable. To do this, we use equations (<ref>) and (<ref>) and adopt the convention that all momenta are incoming (i.e., we choose the plus sign in (<ref>) and (<ref>)),
writing $p_2=-iq_2^0$ and $-iq_{1,3}^0= \sqrt{p_{1,3}^2+m^2}$.
Substituting these in the integral in (<ref>) gives
\begin{eqnarray}
\int_{-\infty}^\infty d\uptau\,e^{i\uptau (q_1^0+q_2^0+q_3^0)}&=&2\pi\,\delta(q_1^0+q_2^0+q_3^0)
\end{eqnarray}
Thus, we see that the energy conserving delta function, as needed in equation (<ref>) to interpret the flat limit of the $d$-dimensional CFT correlator as an amplitude in the flat space-time with one more dimension, emerges from the integration over the AdS radial direction.
To account for both in-coming and out-going momenta, one may either take all $q^0>0$ and appropriately adjust the signs in the delta function, or use the convention of only plus signs in the delta function and allow $q^0<0$ for out-going momenta. In the remainder we choose the latter convention.
With this, the expression in (<ref>) becomes
\begin{eqnarray}
J_{N\{k_i\}}\simeq (-i)^{\frac{d-5}{2}+k_2} L^{N+\frac{d-5}{2}} \left(\frac{\pi}{2}\right)^{3/2}
\frac{(m-iq_1^0)^{m\,L+k_1}}{\sqrt{q_1^0}}\, (q_2^0)^{\frac{d-3}{2}+k_2} \,\frac{(m-iq_3^0)^{m\,L+k_3}}{\sqrt{q^0_3}}\,(2\pi)\delta(q^0_1+q^0_2 +q^0_3)\nonumber
\end{eqnarray}
where, on the support of the delta function, the exponential factor $e^{iL(q_1^0+q_2^0+q_3^0)}$ has been set to 1.
For comparing with the flat space result, we need to analytically continue the above result to Lorentzian signature. This is achieved by performing the inverse Wick rotation $-iq^0=E$ with $E$ denoting the energy of the particles. This gives
\begin{eqnarray}
J_{N\{k_i\}}&\simeq & \frac{ L^{N-1}}{C_0}\,\frac{(m+ E_1)^{k_1}\,E_2^{k_2}\,(m+ E_3)^{k_3}}{\sqrt{Z_{W_1}\,Z_A\,Z_{W_3}}}\,(2\pi\,i)\,\delta(E_1+E_2 +E_3)\label{jn567}
\end{eqnarray}
where $Z_A$ and $Z_W$ are defined in equations (<ref>) and (<ref>) respectively and $C_0$ is defined in equation (<ref>) (with $\Delta$ replaced by $mL+\f{d}{2}$).
As mentioned in section <ref>, some of the triple-$K$ integrals appearing in the 3-point function are divergent. However, one can show that these divergences come from the $z\rightarrow 0$ end of the integral. Here, we are concerned with the opposite end, $z\rightarrow \infty$. In this region, the integrals are well behaved. Due to this, we do not encounter any issue related to the divergences of triple-$K$ integrals in the flat limit.
Scalar 3-point functions of primary operators are also given in terms of triple-K integrals [51], and our discussion suffices to compute their flat-space limit, yielding the expected answer, i.e. a delta function in energy and momentum.
§.§ CFT Correlator in Flat Limit
We are now ready to take the flat limit of our 3-point function in (<ref>). This is easily done by using (<ref>). Replacing the triple K integrals appearing in the 3-point function by (<ref>) and keeping the leading order terms in $L$, we find after some rearrangement
\begin{eqnarray}
A_3^{\mu_1\mu_2\mu_3}\Bigl|_{L\rightarrow\infty}&=&2\pi i\;\delta(E_1+E_2+E_3)\, \frac{g}{\sqrt{ Z_{W_1}\,Z_A\,Z_{W_3}}}\,{\cal C}^{\mu_1\mu_2\mu_3}
\label{5.71}
\end{eqnarray}
\begin{eqnarray}
&&\hspace*{-.9cm}{\cal C}^{\mu_1\mu_2\mu_3}\non\\
&=& -(1+\alpha) \pi^{\mu_2}_{~\mu}\left[ \left( \eta^{\mu_1\mu} +\frac{p_1^{\mu_1}\,p_1^\mu}{m(E_1+m)}\right)\left(\frac{(p_1+p_2)^{\mu_3}\,p_2}{E_3+m} +p_2^{\mu_3}\right)\right.\nonumber\\[.3cm]
&&\left. +\left( \eta^{\mu\mu_3}+\frac{p_3^{\mu_3}\,p_3^{\mu}}{m(E_3+m)}\right) \left(\frac{p_1^{\mu_1}\,p_2}{E_1+m} -p_2^{\mu_1} \right)\right] -2 p_{1\,\mu}\pi^{\mu\mu_2}\left[ \eta^{\mu_1\mu_3} -\frac{ p_1^{\mu_1}\,p_2^{\mu_3}}{m(E_1+m)}\right.\nonumber\\[.3cm]
&&\left. +\;\;\frac{2\, p_1^{\mu_1} \,(p_1+p_2)^{\mu_3} }{(E_1+m)\,(E_3+m)}\;\;-\;\; \frac{2\,E_2\, p_1^{\mu_1} \,(p_1+p_2)^{\mu_3} }{m(E_1+m)\,(E_3+m)}\;\;+\;\;\frac{ p_2^{\mu_1}\,(p_1+p_2)^{\mu_3}}{m(E_3+m)}\;\right] \nonumber\\[.3cm]
&&+2\beta\, p_{1\,\mu}\pi^{\mu\mu_2} \left[\frac{p_1^{\mu_1}\,E_2}{(E_1+m)}\frac{ (p_1+p_2)^{\mu_3} \,E_2}{(E_3+m)} -\frac{p_2^{\mu_1}\,(p_2+p_1)^{\mu_3}\,E_2}{(E_3+m)}+\frac{ p_1^{\mu_1}\, p_2^{\mu_3}\,p_2}{E_1+m} -p_2^{\mu_1}\,p_2^{\mu_3}\right]
\label{11.212}
\end{eqnarray}
This expression may look complicated, but we shall show in the next subsection that it precisely has the structure to match with the desired flat space 3-point function.
§.§ Matching with Flat Space Result
The expression (<ref>) should be compared with the flat space 3-point amplitude of a $U(1)$ gauge field and two massive spin-1 fields in $d+1$ dimensions at tree level. This has been computed in appendix <ref>, and equation (<ref>) gives the final expression of the flat space amplitude in terms of the $(d+1)$ dimensional polarizations of the external fields. To compare (<ref>) with the result obtained in (<ref>), we need to use the representation of the polarizations suggested by the flat limit of the Btb propagators, as given in equations (<ref>) and (<ref>) for the gauge and Proca field, respectively. In Minkowski signature, they can be written as
\begin{eqnarray}
\varepsilon^W_a=\left( \frac{(p\cdot \varepsilon)}{m},\, \varepsilon_\mu +\frac{(p\cdot \varepsilon)}{m(E+m)}\,p_\mu \right)~~;~~\varepsilon^A_a=(0, \,\pi_{\mu\nu}\epsilon^\nu)\, ,\label{11.216a}
\end{eqnarray}
where $\varepsilon_\mu$ is any of the vectors $\varepsilon_\mu^{(r)}$ introduced in (<ref>)
and $\epsilon^\nu$ is any of the vectors $\epsilon^{(\lambda)}_\nu$ introduced in (<ref>).
Below we shall denote these vectors by $\epsilon_{1\mu}, \epsilon_{2\mu},\epsilon_{3\mu}$, according to the operator with which they are associated, in the order they appear in the correlator.
It is easy to see that these polarization vectors satisfy the condition $p\cdot \varepsilon(p)=0$ with $p^a=(E,\,p^\mu)$ where the inner product now involves the Minkowski metric $\eta_{ab}$. For the above basis of the transverse polarization vectors, we have
\begin{eqnarray}
\varepsilon_1^a\,\varepsilon_{3a}&=&\epsilon_{1\mu}\,\epsilon_{3\nu}
\left[ \eta^{\mu\nu}+ \frac {2\,(p_1+p_2)^\nu \,p_1^\mu}{(E_1+m)(E_3+m)} -\frac{2\,p_2\,(p_1+p_2)^\nu\,p_1^\mu}{m(E_1+m)(E_3+m)} +\frac{(p_1+p_2)^\nu\,p_2^\mu}{m(E_3+m)}-\frac{p_1^\mu\,p_2^\nu}{m(E_1+m)}\right]\, ,\nonumber\\[.3cm]
p_2^a\,\varepsilon_{1a} &=&
\epsilon_{1\mu}\left[p_2^\mu -\frac{p_2\,p_1^\mu}{E_1+m} \right]\qquad;\qquad p_2^a\,\varepsilon_{3a} =
\epsilon_{3\mu}\left[p_2^\mu +\frac{p_2\,(p_1+p_2)^\mu}{E_3+m} \right]
\end{eqnarray}
Using these in equation (<ref>) gives
\begin{eqnarray}
{\cal M}_3& =& \hat g \,\epsilon_{1\mu_1}\,\epsilon_{2\mu_2}\,\epsilon_{3\mu_3}\Bigg[2 p_{1\mu}\,\pi^{\mu\mu_2}\Bigg( \eta^{\mu_1\mu_3}+ \frac {2\,(p_1+p_2)^{\mu_3} \,p_1^{\mu_1}}{(E_1+m)(E_3+m)} -\frac{2\,p_2\,(p_1+p_2)^{\mu_3}\,p_1^{\mu_1}}{m(E_1+m)(E_3+m)} \nonumber\\
&&+\frac{(p_1+p_2)^{\mu_3}\,p_2^{\mu_1}}{m(E_3+m)}-\frac{p_1^{\mu_1}\,p_2^{\mu_3}}{m(E_1+m)}\Bigg)+2\hat{\beta}\, p_{1\mu}\,\pi^{\mu\mu_2} \left(p_2^{\mu_1} -\frac{p_2\,p_1^{\mu_1}}{E_1+m}\right)\left(p_2^{\mu_3}+\frac{ p_2\,(p_1+p_2)^{\mu_3}}{E_3+m}\right)\nonumber\\
&&-\left(1+\hat\alpha\right) \pi^{\mu_2}_{~\mu}\Bigg\{ -\tilde{\pi}_1^{\mu_1\mu}\, \left(p_2^{\mu_3} +\frac{p_2\,(p_1+p_2)^{\mu_3}}{E_3+m}\right)+ \tilde{\pi}_3^{\mu_3\mu}\,\left( p_2^{\mu_1} -\frac{ p_2\,p_1^{\mu_1}}{E_1+m}\right)\Bigg\}\Bigg]
\label{5.83}
\end{eqnarray}
By comparing this $(d+1)$ dimensional amplitude with the $d$-dimensional CFT correlator in the flat limit,
given in equation (<ref>), we see that they match exactly provided we identify the flat space gyromagnetic coupling $\hat\alpha$ and quadrupole coupling $\hat\beta$ with their AdS counterparts $\alpha,\beta$, respectively. Doing this, we find
\begin{eqnarray}
\lim_{L \to \infty} \sqrt{Z_{W_1} Z_A Z_{W_3}} \,A_3^{\mu_1\mu_2\mu_3} \;=\; -2 \pi i
\delta(E_1+E_2+E_3)\, {\cal M}_3^{\mu_1\mu_2\mu_3}\, ,
\end{eqnarray}
Thus the flat space limit of the CFT correlator correctly reproduces the interacting part of the flat-space S-matrix.
§ DISCUSSION
We discussed in this paper the computation of the flat space scattering amplitude of a massive
spin-1 field, its complex conjugate and a $U(1)$ gauge field in $d+1$ dimensions via the flat-space limit of a $d$-dimensional 3-point CFT correlator of a conserved current, a non-conserved vector operator and its complex conjugate. This computation may also be formulated as the flat space limit of a corresponding tree-level AdS amplitude, with the bulk interactions involving both minimal and non-minimal couplings, the latter being the gyromagnetic and quadrupole couplings.
The bulk AdS computation and its agreement with the CFT result is in itself a new test of the AdS/CFT correspondence. We computed the boundary 3-point correlation function following the procedure of holographic renormalization. This fixes the three coefficients appearing in the general CFT 3-point function of a conserved current and two non conserved operators in terms of bulk parameters. One feature of this matching is that each bulk coupling is separately consistent with the expected conformal invariance. This is not surprising, since each bulk coupling is invariant under the AdS isometries by itself. Further, since the matching occurs for arbitrary values of the bulk couplings, conformal symmetry does not impose any restriction on the bulk couplings at the level of the 3-point function, leaving, for example, the AdS gyromagnetic ratio $\alpha$ completely arbitrary. Unitarity and crossing symmetry may impose constraints which fix or restrict the allowed values of $\alpha$, but this would require analysing higher point functions.
The flat-space limit amounts to sending the AdS radius $L$ to infinity while keeping fixed all parameters (masses and coupling constants) that appear in the bulk action. From the CFT perspective, one zooms in on the IR region while sending to infinity the conformal dimension of the operator dual to the massive fields. In this limit, we show that the $d$ dimensional CFT 3-point function matches with a corresponding 3-point scattering amplitude in $d+1$ dimensional flat space.
The flat-space limit turns AdS isometries into Poincaré isometries and classical solutions in AdS to plane wave solutions in flat space, with the fields parametrizing the boundary condition in AdS becoming polarization vectors in flat space.
We also analysed the flat-space limit of the BtB propagator of the gauge field in the axial gauge and explicitly showed that it matches the flat space Feynman propagator in the axial gauge. The longitudinal part of the Feynman propagator in the axial gauge is prescription dependent, and we showed that the principal value prescription in flat space agrees with the translation invariant part of the AdS expression in the flat limit (as one may have anticipated based on earlier flat space analyses). The polarisation vectors of the fields in the flat-space limit are also dictated by the Btb propagators. In particular, the matching of the 3-point function requires matching the flat space polarisation vectors to those that emerge from the flat-space limit of AdS. The conservation of the spatial momenta in the flat-space limit is ensured by working with momentum space CFT. On the other hand, the energy conserving delta function emerges from the triple-$K$ integrals that underlie momentum space CFT 3-point functions. One of the main ingredients for the flat-space limit matching was the uniform expansion of modified Bessel functions, in which both the argument and the order of the modified Bessel functions are taken to be large. This was crucial for taking the limit of the modified Bessel functions associated with the non conserved operators.
The bulk AdS computation was done at tree-level, but the CFT three-point function is fixed non-perturbatively by conformal invariance. This implies that bulk loops in AdS will lead to AdS amplitudes of the same form as at tree-level but with quantum corrected parameters. Moreover, quantum corrections to the flat space gyromagnetic and quadrupole couplings may be obtained directly from the flat-space limit of the corresponding AdS diagrams. The reason is that the Feynman rules map 1-1 in the limit: BtB propagators map to Feynman propagators, Btb propagators map to plane waves, and interaction vertices are kept fixed in the limit. There has been recent progress in setting up loop computations in AdS, see [73] and references therein, and it would be interesting to combine the methods described there with the results we present here in order to obtain explicit loop-level results for flat space scattering amplitudes from AdS.
Note that the matching using the CFT 3-point function is non-perturbative, so if we knew the coefficients of the low-energy effective action non-perturbatively, this would provide a non-perturbative determination of the gyromagnetic and quadrupole couplings. The coefficients in the low-energy effective action in $d+1$ dimensions are linked to coefficients in the low-energy effective action of $10d$ and $11d$ supergravity via compactification, and some of these coefficients may be fixed non-perturbatively using U-duality. It would be interesting to track these relations in detail.
In flat space, we know that the gyromagnetic ratio can take two values, $\alpha=2$ or $\alpha=1$ (see, e.g., [74, 75] for recent works on this). Massive fields charged under gauge fields which arise from the closed string degrees of freedom (such as the graviton or the Kalb-Ramond field) have gyromagnetic ratio 1, whereas massive fields charged under gauge fields arising from open strings have gyromagnetic ratio 2 [76, 75]. Now, the gyromagnetic ratio $\alpha$ appears in the 3-point function. Hence, noting that $\alpha$ is a constant at tree level, the exact matching of the 3-point amplitude implies that its value in AdS should also be 1 or 2. The fixing of the gyromagnetic ratio in AdS will have implications for the bootstrap program in the dual CFT, as the constraints on the bulk coupling will restrict the OPE coefficients in the boundary CFT.
We expect our analysis to extend to higher-point functions. As already noted, the perturbative Feynman rules map 1-1 between AdS and flat space, i.e. for each Witten diagram there is a corresponding flat-space Feynman diagram. Moreover, as we recover translational invariance in the flat-space limit, the energy-conserving delta function should arise from the bulk-to-boundary propagators. It would be interesting to work out the details. The non-perturbative story is less clear but also more interesting. The general CFT $n$-point function of scalar operators in momentum space is known [77, 78] (but the corresponding answer for spinning operators is still missing). It would be interesting to analyze the flat-space limit of the general momentum-space CFT $n$-point functions, starting from the scalar ones.
Another application of our analysis is in the context of higher spin theories. In 4-dimensional flat space, a fully consistent formulation of massive higher spin theories is still missing and is an active area of research (see e.g. the review [80]). Holography allows us to construct the flat-space couplings from the CFT correlators, as we have seen for the massive spin 1 case in this paper. This approach should be promising for constructing consistent massive higher spin theories in flat space.
§ ACKNOWLEDGEMENTS
We are thankful to C. Corianò, P. Di Vecchia, D. Francia, C. Heissenberg, Yue-Zhou Li, S. Lionetti and R. Loganayagam for useful discussions. KS and MV were supported in part by the STFC consolidated grant ST/T000775/1 “New Frontiers in Particle Physics, Cosmology and Gravity”. MV is also supported in part by the “Young Faculty Research Seed Grant Scheme” of IIT Indore.
§ CONVENTIONS AND USEFUL IDENTITIES
In this appendix, we summarise our conventions and note some useful identities which have been used in this work.
We denote the indices corresponding to the $d+1$ dimensional AdS directions by $M,\,N,\,P,\dots$, which run from $0$ to $d$. On the other hand, the $d$ dimensional boundary indices are denoted by Greek letters $\mu,\,\nu,\,\rho,\cdots$, which run from $1$ to $d$. The $d+1$ dimensional flat-space indices are denoted by $a,b,\cdots$, which run from $1$ to $d+1$. The anti-symmetrization of two fields is defined as
\begin{eqnarray}
A_{[M}\,B_{N]}=\frac{1}{2} \Big(A_M\,B_N-A_N\,B_M\Big).
\end{eqnarray}
Throughout this paper, we have worked in Euclidean AdS$_{d+1}$. Only after taking the flat-space limit do we perform a Wick rotation of the radial direction, $z\equiv x_0^E= i x^0$, with $x^0$ the time coordinate. We use the mostly-plus signature convention for the Minkowski metric. The Wick rotation transforms the zero component of a generic vector field $V_M$ in mostly-plus signature as [81, 82]:
\begin{eqnarray}
V^0=iV^E_{0} \qquad ,\qquad {\cal V}_{0\mu}=\partial_0V_\mu-\partial_\mu V_0=i( \partial_0 V_\mu-\partial_\mu V_0^E)
\end{eqnarray}
where $V_M$ can be either a massless or massive vector field.
According to this rule, the square of the field strength of the vector field remains unchanged under the rotation.
The Lorentzian weight $e^{iS_L}$ is transformed into the Euclidean one $e^{-S_E}$, giving the identity $S_E=-iS_L$. The action of a massive complex vector field in mostly-plus signature therefore transforms under the Wick rotation as
\begin{eqnarray}
i S_{L}&=&i \int d^{d+1} x \left[ -\frac{1}{2}\, {\cal V}^\dagger_{MN}\,{\cal V}^{MN}- m^2\, V^\dagger_M\, V^M\right]\nonumber\\
&=& \int d^{d+1} x_E \left[ -\frac{1}{2}\, {\cal V}_{MN}^\dagger\,{\cal V}^{MN} -m^2\, V^\dagger_M\, V^M\right]_E\nonumber
\end{eqnarray}
where we are treating $V_M$ and $V_M^\dagger$ as two independent fields.
Our convention for the Riemann tensor is
\begin{eqnarray}
R^{M}_{~~NPQ}\;=\;\partial_P\Gamma^{M}_{NQ}-\partial_Q\Gamma^{M}_{NP}+\Gamma^{M}_{PL}\Gamma^{L}_{NQ}-\Gamma^{M}_{QL}\Gamma^{L}_{NP}\, ,
\end{eqnarray}
which is consistent with the commutator identity and the AdS curvatures given below.
For any tensor ${\cal T}_{PQ}$, we have
\begin{eqnarray}
&&[\nabla_M,\nabla_N] {\cal T}_{PQ}=-R^L_{~PMN}{\cal T}_{LQ}-R^L_{~QMN}{\cal T}_{PL}\label{A.24}
\end{eqnarray}
The AdS metric in the Poincaré coordinates is given by
\begin{eqnarray}
ds^2=\frac{L^2}{z^2} \big( dz^2+ \delta_{\mu\nu} \, dx^\mu \,dx^\nu\big)\quad;\qquad \sqrt{G} = \left(\f{L}{z}\right)^{d+1}\label{stanpoin54}
\end{eqnarray}
with $L$ being the AdS radius. The Christoffel symbols in these coordinates are
\begin{eqnarray}
\Gamma^z_{zz} = -\frac{1}{z}~~;~~\Gamma_{\mu z}^z=0~~;~~\Gamma^z_{\mu\nu}=\frac{1}{z} \delta_{\mu\nu}
~~;~~\Gamma^\mu_{zz}=0~~;~~\Gamma^\mu_{\nu z}=-\,\frac{\delta^\mu_\nu}{z}~~;~~\Gamma^\mu_{\nu\lambda}=0
\end{eqnarray}
The above equation can be compactly written as
\begin{eqnarray}
\Gamma^M_{NP}=-\frac{1}{z}\left(\delta^M_N\,\delta_{Pz} +\delta^M_P\,\delta_{Nz} -\delta^M_z\,\delta_{NP}\right)=-\frac{z}{L^2}\left(\delta^M_N\,g_{Pz} +\delta^M_P\,g_{Nz} -\delta^M_z\,g_{NP}\right)\label{6.21}
\end{eqnarray}
where $g_{MN}$ denotes the AdS-metric in the Poincaré coordinates.
For the purposes of holographic renormalization, it is convenient to use the Fefferman-Graham (FG) coordinates, which are related to the Poincaré coordinates by $\rho=\frac{z^2}{L}$.[The purpose of keeping the AdS radius $L$ in $\rho=\frac{z^2}{L}$ is to give both $\rho$ and $z$ the dimension of length. ] Thus, in FG coordinates, the metric takes the form
\begin{eqnarray}
ds^2=L^2\frac{d\rho^2}{4\rho^2}+L \frac{\delta_{\mu\nu}\, dx^\mu\,dx^\nu}{\rho} \quad;\qquad \sqrt{G} = \f{1}{2}\left(\f{L}{\rho}\right)^{\f{d+2}{2}}\label{B.26a}
\end{eqnarray}
The Christoffel symbols in these coordinates are given by
\begin{eqnarray}
\Gamma^{\rho}_{\rho\rho}=-\frac{1}{\rho}~~;~~\Gamma^\rho_{\mu\nu}=\f{2}{L}\delta_{\mu\nu}~~;~~\Gamma^\mu_{\rho\rho}=0~~;~~\Gamma^\nu_{\rho\mu}= -\frac{1}{2\rho}\,\delta^\nu_\mu~~;~~\Gamma_{\nu\mu}^\sigma=0
\end{eqnarray}
The Riemann tensor, Ricci tensor and scalar curvature of AdS can be expressed in a coordinate-independent manner as
\begin{eqnarray}
R_{MNPQ}= \frac{G_{MQ}\,G_{NP}-G_{MP}G_{NQ}}{L^2}~~;~~R_{MN}=-\frac{d}{L^2}\,G_{MN}\quad;\quad R=-\frac{d(d+1)}{L^2} \label{geomet56}
\end{eqnarray}
with $G_{MN}$ denoting the AdS metric in the corresponding coordinate system.
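As a quick cross-check of the geometric data above (ours, not part of the paper), the following Python/sympy sketch computes the Christoffel symbols and the scalar curvature directly from the Poincaré metric; the hard-coded value $d=3$ (i.e. AdS$_4$) is an illustrative choice.
\begin{verbatim}
import sympy as sp

d = 3                                    # boundary dimension, i.e. AdS_4
L = sp.symbols('L', positive=True)
z, x1, x2, x3 = sp.symbols('z x1 x2 x3', positive=True)
X = [z, x1, x2, x3]                      # X[0] is the radial coordinate
n = d + 1

g = sp.eye(n)*(L/z)**2                   # ds^2 = (L/z)^2 (dz^2 + dx.dx)
ginv = g.inv()

# Gamma^a_{bc} = (1/2) g^{ae} (d_b g_{ec} + d_c g_{eb} - d_e g_{bc})
Gamma = [[[sum(ginv[a, e]*(sp.diff(g[e, b], X[c]) + sp.diff(g[e, c], X[b])
               - sp.diff(g[b, c], X[e]))/2 for e in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]
assert sp.simplify(Gamma[0][0][0] + 1/z) == 0   # Gamma^z_zz      = -1/z
assert sp.simplify(Gamma[1][1][0] + 1/z) == 0   # Gamma^mu_{mu z} = -1/z
assert sp.simplify(Gamma[0][1][1] - 1/z) == 0   # Gamma^z_{mu mu} =  1/z

# R^a_{bce} = d_c Gamma^a_{be} - d_e Gamma^a_{bc}
#             + Gamma^a_{cf} Gamma^f_{be} - Gamma^a_{ef} Gamma^f_{bc}
def Riem(a, b, c, e):
    r = sp.diff(Gamma[a][b][e], X[c]) - sp.diff(Gamma[a][b][c], X[e])
    r += sum(Gamma[a][c][f]*Gamma[f][b][e] - Gamma[a][e][f]*Gamma[f][b][c]
             for f in range(n))
    return sp.simplify(r)

Ricci = lambda b, e: sum(Riem(a, b, a, e) for a in range(n))
R = sp.simplify(sum(ginv[b, e]*Ricci(b, e) for b in range(n) for e in range(n)))
print(R)                                 # -> -12/L**2 = -d(d+1)/L^2 for d = 3
\end{verbatim}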
§ LIMITING BEHAVIOURS OF MODIFIED BESSEL FUNCTIONS
For the holographic renormalization computation and for taking the flat-space limit, we need the behaviour of the modified Bessel functions in various limits. In this appendix, we review the required results.
§.§ Expansions for large and small arguments
For large arguments, the asymptotic expansions of the modified Bessel functions are given by
\begin{eqnarray}
I_\nu(z) \;\rightarrow\; \frac{e^z}{(2\pi z)^{\frac{1}{2}}}\qquad;\qquad K_\nu(z) \;\rightarrow\; \left(\frac{\pi}{2 z}\right)^{\frac{1}{2}}e^{-z}\qquad;\qquad z\rightarrow\infty\, .
\end{eqnarray}
On the other hand, in the limit $z\rightarrow0$, we have the following leading order approximations
\begin{eqnarray}
I_\nu(z) \;\rightarrow\; \frac{2^{-\nu}}{\Gamma(\nu+1)}\,z^{\nu}\qquad;\qquad K_\nu(z) \;\rightarrow\; 2^{\nu-1}\,\Gamma(\nu)\, z^{-\nu}\qquad;\qquad z\rightarrow0\, .
\end{eqnarray}
In the above equation (<ref>), the approximation for $I_\nu(z)$ is valid for $\nu\not=-1,-2,\cdots$ and the approximation for $K_\nu(z)$ is valid for $\nu >0$. For the holographic renormalization of the Proca field, we shall need the expansion of $K_\nu(z)$ in the limit $z\rightarrow0$ in more detail. For non-integer $\nu$ we have
\begin{eqnarray}
K_\nu(z) \;=\; \frac{\pi}{2}\,\frac{I_{-\nu}(z)-I_\nu(z)}{\sin(\pi\nu)}\qquad;\qquad I_\nu(z) \;=\; \sum_{j=0}^{\infty}\frac{1}{\Gamma(j+1)\,\Gamma(\nu+j+1)}\left(\frac{z}{2}\right)^{\nu+2j}\, ,
\end{eqnarray}
while for positive integer $n$ the expansion reads
\begin{eqnarray}
K_n(x) &=& \frac{1}{2}\left(\frac{x}{2}\right)^{-n}\sum_{j=0}^{n-1} \frac{\Gamma(n-j)}{\Gamma(j+1)}\, (-1)^j \left(\frac{x}{2}\right)^{2j} +(-1)^{n+1} \ln\left(\frac{x}{2}\right)I_n(x)\nonumber\\
&&+\,(-1)^{n}\,\frac{1}{2}\left(\frac{x}{2}\right)^{n}\sum_{j=0}^{\infty} \frac{\psi(j+1)+\psi(n+j+1)}{\Gamma(j+1)\,\Gamma(n+j+1)} \left(\frac{x}{2}\right)^{2j}\, ,
\end{eqnarray}
where
\begin{eqnarray}
\psi(z) \;=\;\sum_{k=1}^{\infty}\left(\frac{1}{k}-\frac{1}{z+k-1}\right)-\gamma
\end{eqnarray}
and $\gamma$ is the Euler–Mascheroni constant.
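As a quick numerical spot-check (ours, not from the paper), the limiting forms quoted above can be compared against scipy's modified Bessel functions:
\begin{verbatim}
import numpy as np
from scipy.special import iv, kv, gamma

nu = 1.5
z = 50.0   # large argument
print(iv(nu, z)/(np.exp(z)/np.sqrt(2*np.pi*z)))      # -> ~1
print(kv(nu, z)/(np.sqrt(np.pi/(2*z))*np.exp(-z)))   # -> ~1

z = 1e-4   # small argument
print(iv(nu, z)/(2**(-nu)*z**nu/gamma(nu + 1)))      # -> ~1
print(kv(nu, z)/(2**(nu - 1)*gamma(nu)*z**(-nu)))    # -> ~1
\end{verbatim}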
§.§ Uniform expansions
The uniform expansion involves taking the argument as well as the order of the modified Bessel function to be large. Here, we review the derivation of such an expansion following [72]. We start by noting that the modified Bessel functions satisfy the differential equation
\begin{eqnarray}
z^2\frac{d^2}{dz^2}F_\nu +z\frac{d}{d z} F_\nu -(z^2+\nu^2) F_\nu\; =\; 0\label{eqrefty6}
\end{eqnarray}
where $F_\nu$ can be $K_\nu(z)$ or $I_\nu(z)$.
Let us start by deriving the asymptotic expansion when $\nu$ is large and $z$ bounded.
To this end, it is convenient to first perform the Liouville-type transformation
\begin{equation}
h_\nu(z) = z^{\frac{1}{2}}\, F_\nu(z)\, ,
\end{equation}
and rewrite the differential equation (<ref>) in the form [72]
\begin{eqnarray}
\frac{d^2}{d z^2} h_\nu(z)\;=\; \Bigl(\nu^2 f(z) +g(z)\Bigl)h_\nu(z), \qquad f(z) =\frac{1}{z^2}, \quad g(z) =1-\frac{1}{4\,z^2}\, . \label{B.92a}
\end{eqnarray}
We can remove the $z$-dependence from the coefficient of $\nu^2$ by a further change of dependent and independent variables,
\begin{eqnarray}
\xi=\int f^{\frac{1}{2}}(z) \,dz \quad;\qquad h_\nu=f^{-\frac{1}{4}} (z) \, H_\nu(\xi)\label{B.93a}
\end{eqnarray}
In terms of them, equation (<ref>) can be expressed as
\begin{eqnarray}
\frac{d^2}{d\xi^2} H_\nu(\xi) = \left(\nu^2+\psi(\xi)\right) H_\nu(\xi),\quad\qquad\psi(\xi) =
\frac{g(z)}{f(z)} -\frac{1}{f^{3/4}(z)}\frac{d^2}{dz^2} \left( \frac{1}{f^{1/4}(z)}\right) \label{B.94a}
\end{eqnarray}
With $\nu$ large and $z$ bounded such that $\nu \gg \psi(\xi)$, the differential equation (<ref>) can be solved perturbatively in $1/\nu$,
\begin{eqnarray} \label{B.95a}
H_\nu( \xi)= e^{-\nu\,\xi}\sum_{s=0}^\infty \frac{A_s(\xi)}{\nu^s } %\qquad;\qquad H_\nu( \xi)= e^{\nu\,\xi}\sum_{s=0}^\infty (-1)^s\frac{A_s(\xi)}{\nu^s } \label{B.95a}
\end{eqnarray}
As (<ref>) is invariant under $\nu \to -\nu$, there is a second asymptotic expansion, which is obtained from (<ref>) by replacing $\nu$ with $-\nu$.
The coefficients $A_s$ in (<ref>) can be determined recursively by plugging the above series expansion in equation (<ref>):
\begin{eqnarray}
2A'_{s+1}\;=\;A''_{s} -\psi(\xi) A_s(\xi)\quad\Longrightarrow \quad A_{s+1}\;\; =\;\; \frac{1}{2} f^{-1/2}(z) \frac{d A_s}{dz} -\frac{1}{2}\int \Lambda(z)\, A_s\,dz\label{zeroder}
\end{eqnarray}
\begin{eqnarray}
\Lambda(z) &=& f^{1/2}(z) \psi(\xi(z))\;\;=\;\;f^{1/2}(z) \left[\frac{g(z)}{f(z)} -f(z)^{-1/2} \left( \frac{5}{16} \frac{(f'(z))^2}{f(z)^2} +\frac{1}{4} \frac{f''(z)}{f(z)}\right)\right]
\end{eqnarray}
Taking $s=-1$ in the differential equation in (<ref>),
we find that $A_0$ must be constant (since $A_{-1}=0$: there are no coefficients of negative order in (<ref>)). One may recursively solve for the higher order coefficients. However, it turns out that the coefficients are, in general, divergent as $z\rightarrow\infty$ for the functions $f(z)$ and $g(z)$ given in equation (<ref>), as explained in [72].
To discuss the case when both $\nu$ and $z$ go to infinity, we rescale $z$ to $\nu z$ in (<ref>) and repeat the analysis. It turns out that one gets the same equation as in (<ref>) but with different $f(z)$ and $g(z)$, namely,
\begin{eqnarray}
\frac{d^2}{d z^2} h_\nu(\nu z)\;=\; \Bigl(\nu^2 f(z) +g(z)\Bigl)h_\nu(\nu z), \;\;\;\;\;\;\;\;\;f(z)= \frac{1+z^2}{z^2}, ~~~~g(z) =-\frac{1}{4\,z^2}
\end{eqnarray}
Assuming $\nu$ to be real and positive (more generally, it suffices that the real part of $\nu$ be positive, $|\arg (\nu)| <\frac{\pi}{2}$), the above expression of $f(z)$ when used in equation (<ref>) gives
\begin{eqnarray}
\xi(z)=(1+z^2)^{1/2} +\ln \frac{z}{1+(1+z^2)^{1/2}}\quad;\qquad h_\nu =\left(\f{z^2}{1+z^2}\right)^{\f{1}{4}}H_\nu(\xi)\label{hyutred}
\end{eqnarray}
In writing the expression of $\xi$, we have set the integration constant to zero. This is allowed because equation (<ref>) is nothing but a change of variable. Finally, we can write a series solution of the modified Bessel function $K_\nu(\nu z)$ by using equation (<ref>) and the relation between $h_\nu(\nu z), H_\nu(\nu z)$ and $K_\nu(\nu z)$
\begin{eqnarray}
K_\nu(\nu z)&=& (\nu\,z)^{-\frac{1}{2}} \,f^{-\frac{1}{4}}\,H_\nu(\xi(z))\;\;=\;\; \frac{1}{\sqrt{\nu}}\,\frac{e^{-\nu\xi(z)}}{(1+z^2)^{\frac{1}{4}}}\, \sum_{s=0}^\infty \frac{A_s}{\nu^s}
\label{B.101a}
\end{eqnarray}
where $\xi(z)$ is given in (<ref>) and the overall factor $1/\sqrt{\nu}$ originates from the rescaling of the $z$-variable discussed before.
Next, we want to find the leading order term of the above series solution. As before, the recursive relation (<ref>) again implies that $A_0$ is constant. To find its value, we make use of the fact that for large $z$, we have [83, 72]
\begin{eqnarray}
K_\nu(\nu z) &\sim& \sqrt{\frac{\pi}{2\,\nu}}\,\frac{e^{-\nu \,z}}{z^{1/2}}\label{expectedgtyuh}
\end{eqnarray}
Now, the expression of $\xi(z)$ given in (<ref>) for large $z$ gives $\xi=z+{\cal O}(\f{1}{z})$. Hence, $ e^{-\nu \xi}\sim e^{- \nu z}$. Thus, the leading order term in (<ref>) for large $z$ becomes
\begin{eqnarray}
K_\nu(\nu z) \;=\; A_0\, \frac{e^{-\nu z}}{(\nu z)^{1/2}}\, .
\end{eqnarray}
Comparing this with the expected result (<ref>), we find $A_0 =\sqrt{\frac{\pi}{2}}$. Using this, we see that the leading order expression for the uniform expansion of the modified Bessel function is given by
\begin{eqnarray}
K_\nu(\nu z)\Big|_{\nu\rightarrow\infty} \;\simeq\; \left(\frac{\pi}{2\nu}\right)^{\frac{1}{2}}\, \frac{e^{-\nu\,\xi(z)}}{(1+z^2)^{\frac{1}{4}}}\qquad;\qquad \xi(z) = (1+z^2)^{\frac{1}{2}} +\ln\left(\frac{z}{1+(1+z^2)^{\frac{1}{2}}}\right)
\end{eqnarray}
A similar analysis yields
\begin{eqnarray}
I_\nu(\nu z)\Big|_{\nu\rightarrow\infty} \;\simeq\; \left(\frac{1}{2\pi\nu}\right)^{\frac{1}{2}}\, \frac{e^{\nu\,\xi(z)}}{(1+z^2)^{\frac{1}{4}}}
\end{eqnarray}
with the same $\xi(z)$ as in equation (<ref>).
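The leading term of this uniform expansion is easy to test numerically; the sketch below (ours, not part of the paper) compares it with scipy's $K_\nu$ for growing order:
\begin{verbatim}
import numpy as np
from scipy.special import kv

def xi(z):
    s = np.sqrt(1.0 + z*z)
    return s + np.log(z/(1.0 + s))

def K_uniform(nu, z):          # leading term of the uniform expansion
    return np.sqrt(np.pi/(2*nu))*np.exp(-nu*xi(z))/(1 + z*z)**0.25

z = 0.7
for nu in (10.0, 50.0, 200.0):
    print(nu, kv(nu, nu*z)/K_uniform(nu, z) - 1)   # error shrinks ~ 1/nu
\end{verbatim}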
§.§ Expansion for $K_{\Delta-\f{d}{2}+\ell}(zk)$
For taking the flat limit, we need to know the expansion of $K_{\Delta-\f{d}{2}+\ell}(zk)$ with $z$ parametrized by $z=Le^{\f{\tau}{L}}$ in the limit $\Delta,L\rightarrow\infty$. Using (<ref>), we find
\begin{eqnarray}
\Delta-\frac{d}{2}+\ell \;=\; \ell+mL\sqrt{1+\frac{(d-2)^2}{4m^2L^2}} \;=\; mL+\ell+O\!\left(\frac{1}{L}\right)\;\equiv\; mL+\beta\, ,
\end{eqnarray}
where $\beta=\ell +O\left(\f{1}{L}\right)$.
We have
\begin{eqnarray}
K_{\Delta-\frac{d}{2}+\ell}(zk) &=& K_{mL+\beta}\!\left(kL+k\tau+O\!\left(\tfrac{1}{L}\right)\right)\nonumber\\
&=& K_{\nu+\beta}(p\,\nu+k\tau)+O\!\left(\tfrac{1}{L}\right)\nonumber\\
&=& K_{\nu+\beta}(p\,\nu)+k\tau\, K'_{\nu+\beta}(p\nu)+\frac{(k\tau)^2}{2}\,K''_{\nu+\beta}(p\nu)+\frac{(k\tau)^3}{3!}\,K'''_{\nu+\beta}(p\nu)+\cdots
\end{eqnarray}
where we have defined $p= k/m$ and $\nu =mL$. The derivatives of the modified Bessel functions can be expressed as linear combinations of modified Bessel functions of different orders, e.g.,
\begin{eqnarray}
\frac{dK_\sigma(x)}{dx} &=& -\frac{1}{2}\left[K_{\sigma-1}(x)+K_{\sigma+1}(x)\right]\, ,\nonumber\\
\frac{d^2K_\sigma(x)}{dx^2} &=& \frac{1}{4}\left[K_{\sigma-2}(x)+2K_{\sigma}(x)+K_{\sigma+2}(x)\right]\, ,\nonumber\\
\frac{d^3K_\sigma(x)}{dx^3} &=& -\frac{1}{8}\left[K_{\sigma-3}(x)+3\bigl(K_{\sigma-1}(x)+K_{\sigma+1}(x)\bigr)+K_{\sigma+3}(x)\right]\, .
\end{eqnarray}
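These identities can be spot-checked numerically, e.g. with scipy (a sketch of ours, not part of the paper):
\begin{verbatim}
from scipy.special import kv, kvp

sigma, x = 2.3, 1.7
print(kvp(sigma, x, 1) + 0.5*(kv(sigma-1, x) + kv(sigma+1, x)))      # -> ~0
print(kvp(sigma, x, 2) - 0.25*(kv(sigma-2, x) + 2*kv(sigma, x)
                               + kv(sigma+2, x)))                    # -> ~0
\end{verbatim}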
Now, using the identity [84]
\begin{eqnarray}
\frac{K_{\nu+\alpha}(\nu z)}{K_\nu(\nu z)} \;=\; \left(\frac{1+\sqrt{1+z^2}}{z}\right)^{\alpha}\left[1-\frac{1-\alpha\sqrt{z^2+1}}{2(1+z^2)}\,\frac{\alpha}{\nu}+O\!\left(\frac{1}{\nu^2}\right)\right]
\end{eqnarray}
and the uniform expansion result for $K_\nu(\nu z)$ reviewed in the previous subsection, we find
\begin{eqnarray}
K_{\Delta-\frac{d}{2}+\ell}(zk) \;=\; \left(\frac{\pi}{2EL}\right)^{\frac{1}{2}} \left(\frac{k}{m+E}\right)^{-mL-\ell} e^{-EL}\left(1-E\tau+\frac{E^2\tau^2}{2}-\frac{E^3\tau^3}{3!}+\cdots\right)\left[1+O\!\left(\frac{1}{L}\right)\right]\, ,
\end{eqnarray}
where $E=\sqrt{k^2+m^2}$.
In the above expression, we have kept only the leading order terms in the expansion in $1/L$. The $O(1/\nu)$ term in (<ref>) is of order $1/L$ and does not contribute to the leading order term. All terms in the series in $E\tau$ in the above expression are of the same order w.r.t. the expansion in $1/L$ and resum to an exponential function. Hence, we get
\begin{eqnarray}
K_{\Delta-\frac{d}{2}+\ell}(zk) \;=\; \left(\frac{\pi}{2EL}\right)^{\frac{1}{2}} \left(\frac{k}{m+E}\right)^{-mL-\ell} e^{-EL-E\tau}\left[1+O\!\left(\frac{1}{L}\right)\right]\, .
\end{eqnarray}
Following a similar analysis and using [84]
\begin{eqnarray}
\frac{I_{\nu+\alpha}(\nu z)}{I_\nu(\nu z)} \;=\; \left(\frac{1+\sqrt{1+z^2}}{z}\right)^{-\alpha}\left[1-\frac{1+\alpha\sqrt{z^2+1}}{2(1+z^2)}\,\frac{\alpha}{\nu}+O\!\left(\frac{1}{\nu^2}\right)\right]\, ,
\end{eqnarray}
we also find
\begin{eqnarray}
I_{\Delta-\frac{d}{2}+\ell}(zk) \;=\; \left(\frac{1}{2\pi EL}\right)^{\frac{1}{2}} \left(\frac{k}{m+E}\right)^{mL+\ell} e^{EL+E\tau}\left[1+O\!\left(\frac{1}{L}\right)\right]\, .
\end{eqnarray}
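As a sanity check (ours, not from the paper), the final formula for $K_{\Delta-\f{d}{2}+\ell}(zk)$ can be tested against an arbitrary-precision evaluation of the Bessel function; the parameter values below are illustrative:
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 50
m, k, tau, ell = mp.mpf(1), mp.mpf('0.8'), mp.mpf('0.3'), 1
E = mp.sqrt(k*k + m*m)

for L in (50, 200, 800):
    L = mp.mpf(L)
    z = L*mp.exp(tau/L)                      # z = L e^{tau/L}
    exact  = mp.besselk(m*L + ell, z*k)
    approx = (mp.sqrt(mp.pi/(2*E*L)) * (k/(m + E))**(-m*L - ell)
              * mp.exp(-E*L - E*tau))
    print(int(L), mp.nstr(exact/approx - 1, 5))  # -> O(1/L), shrinking with L
\end{verbatim}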
§ GENERAL CUBIC ACTION IN ADS FOR GAUGE AND PROCA FIELDS
In this appendix, we construct the general cubic action involving a gauge field and a complex Proca field in AdS$_{d+1}$. There are general group theoretic constructions of cubic interaction terms involving fields of arbitrary spins (see, e.g., [85, 86]). However, for our purposes, it would be sufficient to consider a perturbative effective field theory approach.
If we are working at a fixed order in perturbation theory, we can eliminate those terms in the Lagrangian which are proportional to the lowest order equation of motion. More precisely, we can use field redefinitions to transfer these terms to higher order terms in the perturbative expansion. We start by reviewing this procedure for a general action following [87]. Suppose we have an action $S[\phi]$ involving a generic field $\phi$ in which terms of different orders are parametrised by a parameter $\epsilon$,
\begin{eqnarray}
S[\phi]\;=\; S_0[\phi] +\epsilon\, S_1[\phi]+\epsilon^2 S_2[\phi]+\cdots
\end{eqnarray}
Now, suppose that at $O(\epsilon^n)$, $S_n[\phi]$ includes a term $\mathcal{S}_n[\phi]$ which is proportional to the equation of motion of the lowest order action $S_0[\phi]$, i.e.,
\begin{eqnarray}
\mathcal{S}_n[\phi]\;=\;\int d^dx\, f(x)\,\frac{\delta S_0}{\delta\phi(x)}\, .
\end{eqnarray}
Here $f(x)$ denotes some arbitrary function of the field and its derivatives. We now make the field redefinition
\begin{eqnarray}
\phi(x) \;\rightarrow\; \tilde{\phi}(x) \;=\; \phi(x)-\epsilon^n f(x)\, .
\end{eqnarray}
Under this redefinition, the action (<ref>) becomes
\begin{eqnarray}
S[\phi] \;\rightarrow\; S[\tilde{\phi}] \;=\; S[\phi] -\epsilon^n \int d^dx\, f(x)\,\frac{\delta S_0}{\delta\phi(x)} +O(\epsilon^{n+1})\, .
\end{eqnarray}
The second term on the right hand side cancels $\mathcal{S}_n[\phi]$. This shows that the effect of the field redefinition (<ref>) is to remove the term proportional to the lowest order equation of motion without changing any other term up to $O(\epsilon^{n})$. Thus, if we are working at a fixed order in perturbation theory, we can focus only on those terms which do not involve the lower order equations of motion.
Note that the use of the lowest order equation of motion (instead of the full non-linear equations) in the field redefinition was useful in that the redefinition does not mix different orders in the perturbative expansion. Had we used the full non-linear equations, one would need to keep track of how nonlinearities mix different orders in the $\epsilon$ expansion.
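As a simple illustration of this procedure (a toy example of ours, not taken from the references), consider a single scalar field with
\begin{eqnarray}
S[\phi] \;=\; \int d^dx\left[-\frac{1}{2}(\partial\phi)^2-\frac{1}{2}m^2\phi^2\right] + \epsilon\,\lambda\int d^dx\; \phi^2\left(\Box\phi - m^2\phi\right)\, .
\end{eqnarray}
Since $\delta S_0/\delta\phi = \Box\phi-m^2\phi$, the $O(\epsilon)$ term is exactly of the form (<ref>) with $f=\lambda\phi^2$, and the redefinition $\phi\rightarrow\phi-\epsilon\,\lambda\phi^2$ removes it while generating only $O(\epsilon^2)$ terms; such a vertex therefore cannot contribute to amplitudes at $O(\epsilon)$.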
We can now apply the above procedure to write the cubic action involving a gauge field and a complex Proca field. Gauge invariance implies that the gauge field can appear only through its field strength $F^{MN}$. Further, the complex Proca field is taken to be charged under this gauge field, and the conservation of the charge implies that each term involving the Proca field $W_M$ must also involve its complex conjugate $W^*_M$. Now, the kinetic terms of the action involving the gauge and complex Proca fields are given by
\begin{eqnarray}
\mathcal{L}_{\rm kin} \;=\; \frac{1}{4}\, F^{MN}F_{MN}+\frac{1}{2}\,W^{*}_{MN} W^{MN} +m^2\, W^{*}_M W^M\, ,
\end{eqnarray}
where the indices $M,N$ run from $0$ to $d$ and $F_{MN}$ denotes the field strength of the gauge field $A_M$,
\begin{eqnarray}
F_{MN} \;=\; \nabla_M A_N -\nabla_N A_M \;=\; \partial_M A_N -\partial_N A_M\, .
\end{eqnarray}
We have also introduced $W_{MN} = D_M W_N -D_N W_M$ with
\begin{eqnarray}
D_M W_N \;=\;\nabla_M W_N +ig\, A_M W_N \;=\; \partial_M W_N -\Gamma_{MN}^P\, W_P +ig\, A_M W_N\, .
\end{eqnarray}
This ensures that the kinetic term is invariant under the gauge transformation
\begin{eqnarray}
W_M \rightarrow e^{ig\lambda(x)}\,W_M\, ,\qquad W^*_M \rightarrow e^{-ig\lambda(x)}\,W^*_M\qquad;\qquad A_M \rightarrow A_M-\partial_M \lambda(x)\, .
\end{eqnarray}
The lowest order equations of motion of the gauge and the Proca field follow from the variation of the kinetic terms and are given by
\begin{eqnarray}
\nabla_M F^{MN}=0\qquad;\qquad D_M W^{MN} -m^2\, W^{N} =0\qquad;\qquad D_M W^{*MN} -m^2\, W^{*N} =0\, .
\end{eqnarray}
An important condition on the massive Proca fields can be obtained by taking the divergence of their equations which gives
\begin{eqnarray}
m^2\, D_M W^M \;=\; D_M D_N W^{NM} \;=\; D_{[M}D_{N]} W^{NM} \;=\; \frac{ig}{2}\, F_{MN}W^{MN}\, .
\end{eqnarray}
This shows that the divergence $D_M W^M$ is actually quadratic in the fields, which will be useful below. Another useful set of equations is
\begin{equation} \label{U1_feq}
\nabla_M F^{MN} =0\ \implies\ \Box A^N = \nabla^N(\nabla\cdot A)+R^{NP}A_P \ \implies\
\Box F^{MN} \;%=\; 2R^{PMNQ}F_{PQ} +R^M_{\;\;P}F^{PN} +R^N_{\;\;P}F^{MP}\;
=\; \f{(2d+2)}{L^2}F^{MN}
\end{equation}
where the last equality holds in $AdS$.
Next, we want to write the cubic interaction terms. We shall write down all possible cubic terms and then eliminate the redundant terms using the procedure described above. We shall focus on terms with up to 3 derivatives. At the lowest order in derivatives (i.e. one derivative), there is only one possible term,
\begin{eqnarray}
I_1 \;=\; \frac{i\, a_1}{2}\, F_{MN}\left(W^{*M} W^N-W^{*N} W^M\right)\, .
\end{eqnarray}
An important point to note is that after integration by parts in the above term, its tensor structure matches with one of the terms in $W^{*MN}W_{MN}$. So, naively, it would seem as if we could forget about the $a_1$ term in (<ref>). However, the structure of $W^{*MN}W_{MN}$ follows from the minimal coupling procedure when we promote the global phase invariance to local gauge invariance, while the term involving $a_1$ in (<ref>) is gauge invariant by itself and does not follow from minimal coupling. Hence, its coefficient is independent of the coefficient in the minimal coupling term in $W^{*MN}W_{MN}$. Thus, we must keep the $a_1$ term. The existence of a new gauge invariant term is responsible for the gyromagnetic coupling.
At the level of 3 derivatives, the terms need to be constructed using $F_{MN}, W_M, W^*_M$ and two derivatives $D_M$. Using an integration by parts we can ensure that $D_M$ acts only on the Proca fields. Using these rules, the most general cubic structure involving 3 derivatives can be written as
\begin{eqnarray}
&&F^{MN}\Big[ \left(c_0\,D_M W^*_P\, D^P W_N +c_0^*\,D_M W_P\, D^P W^*_N \right)+\left(c_1\,D_P W^*_M\, D^P W_N +c_1^*\,D_P W_M\, D^P W^*_N\right)\nonumber\\
&&\quad+ \left(c_2\,D_M W^*_P\, D_N W^P +c_2^*\,D_M W_P\, D_N W^{*P} \right)+\left(c_3\,D_P W^{*P}\, D_M W_N+c_3^*\,D_P W^P\, D_M W^*_N\right)\nonumber\\
&&\quad+ \left(c_4\,W^*_M\, D_P D^P W_N+c_4^*\,W_M\, D_P D^P W^*_N\right)+\left(c_5\,W^*_P\, D^P D_M W_N+c_5^*\,W^P\, D_P D_M W^*_N \right)\nonumber\\
&&\quad+ \left(c_6\,W^*_M\, D_N D_P W^P+c_6^*\,W_M\, D_N D_P W^{*P}\right)+\left(c_7\,W^*_P\, D_M D_N W^P+c_7^*\,W_P\, D_M D_N W^{*P}\right)\nonumber\\
&&\quad+ \left(c_8\,W^*_P\, D_M D^P W_N+c_8^*\,W^P\, D_M D_P W^*_N \right)+\left(c_9\,W^*_M\, D_P D_N W^P+c_9^*\,W_M\, D_P D_N W^{*P}\right)\Big]
\end{eqnarray}
The coefficients $c_i$ are in general complex. Now, using integration by parts, the explicit form of the AdS curvature, and the lower order equations of motion (<ref>), (<ref>) and (<ref>), one can show that every term except the first is either higher order in the fields or gives the same structure as the first term in (<ref>) or the term in (<ref>). Hence, we can ignore all terms in (<ref>) except the first one. Further, reality of the action allows the constant $c_0$ to be complex, but an explicit computation shows that the real part of $c_0$ does not contribute to the three-point amplitude on AdS backgrounds (see appendix <ref> for the analogous result on a flat background). Hence, we shall take $c_0$ to be purely imaginary and write $c_0 = i\beta$ with $\beta\in \mathbb{R}$. Thus, we can express the 3 derivative cubic terms in the form
\begin{eqnarray}
ig\, F^{MN}\,\beta\left(\nabla_M W^*_P\, \nabla^P W_N -\nabla_M W_P\, \nabla^P W^*_N\right)\, .
\end{eqnarray}
Thus, the most general cubic Lagrangian involving a gauge field and complex massive spin 1 field takes the form
\begin{eqnarray}
\mathcal{L} \;=\; i g\,F^{MN}\left[-\alpha\, W^*_M W_N +\beta\left(\nabla_M W^*_P\, \nabla^P W_N -\nabla_M W_P\, \nabla^P W^*_N\right)\right]\, .
\end{eqnarray}
We shall work with the above form of cubic interaction terms in this paper.
§ CLASSICAL SOLUTIONS ON ADS BACKGROUND
In this appendix, we summarise the classical solutions of the gauge and Proca fields in AdS background from the perspective of the AdS/CFT correspondence.
§.§ Classical Solution of Gauge Field
In this section, we give some details of the solution of the gauge field equation of motion obtained from the Euclidean action
\begin{eqnarray}
S&=&\!\!\!\!\!\int d^{d+1}x\sqrt{G} \Bigl[\frac{1}{4} F^{MN}F_{MN}+\frac{1}{2}W^{*}_{MN} W^{MN} +m^2 W^{*}_M W^M -ig\,\alpha F^{MN}W^*_MW_N \nonumber\\
&&+\,ig\beta F^{MN}\,\Big( \nabla_{M} W^*_P\nabla^PW_{N} -\nabla_{M} W_P\nabla^PW_{N}^*\Big)
\Bigl]
\label{5.6a}
\end{eqnarray}
The length dimensions of the various quantities appearing in the above action are given by
\begin{eqnarray}
[W_M] = \frac{1-d}{2}\quad;\quad [A_M] = \frac{1-d}{2} \quad;\quad [g] = \frac{d-3}{2}\quad;\quad [\alpha] = 0\quad;\quad [\beta] = 2\, .
\end{eqnarray}
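As a quick consistency check (ours), requiring the gyromagnetic coupling $\int d^{d+1}x\,\sqrt{G}\; ig\,\alpha\, F^{MN}W^*_M W_N$ to be dimensionless (note that $\sqrt{G}$ and the inverse metrics used to raise indices are dimensionless here) reproduces the value of $[g]$ quoted above:
\begin{eqnarray}
\underbrace{(d+1)}_{[d^{d+1}x]} +[g]+[F_{MN}]+[W^*_M]+[W_N] \;=\; (d+1)+\frac{d-3}{2}+\left(\frac{1-d}{2}-1\right)+\frac{1-d}{2}+\frac{1-d}{2}\;=\;0\, .
\end{eqnarray}
The $\beta$ term carries two extra derivatives, which is compensated by $[\beta]=2$.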
The gauge field equation of motion in the AdS background is given in equation (<ref>). In the Poincaré coordinates, the $z$ and $\mu$ components of this equation take the form
\begin{eqnarray}
\f{z^2}{L^2}\,\delta^{\mu\nu}\, k_\mu \,\partial_z\,A_\nu(z,\,k)=i\,J_z(z,\,k)\qquad;\qquad\frac{z^2}{L^2} \partial_z^2 A_\mu+(3-d) \frac{z}{L^2} \partial_zA_\mu-\frac{k^2}{L^2}\,\pi^{\;\;\nu}_\mu A_\nu=J_\mu\label{C.37}
\end{eqnarray}
where $k^2=\delta^{\mu\nu}\,k_\mu\,k_\nu$ and we have introduced the transverse projector
\begin{eqnarray}
\pi_{\mu\nu}\;=\; \delta_{\mu\nu} -\frac{k_\mu k_\nu}{k^2} \qquad;\qquad \delta^{\mu\nu} k_\mu\,\pi_{\nu\sigma}=0 \qquad;\qquad \pi_{\mu\nu}\, \delta^{\nu\tau}\,\pi_{\tau\sigma}=\pi_{\mu\sigma}\, .
\end{eqnarray}
In the following we shall solve the classical equations of motion of the gauge field perturbatively in $g$ as
\begin{eqnarray}
A_\mu(z,\,k)={\cal A}_\mu^{[0]}(z,\,k) +g\,{\cal A}_\mu^{[1]}(z,\,k)\, , \label{ftr5}
\end{eqnarray}
where ${\cal A}_\mu^{[1]}(z,\,k)$ and ${\cal A}_\mu^{[0]}(z,\,k) $ satisfy (<ref>) with and without the source term, respectively. Both ${\cal A}_\mu^{[0]}(z,\,k)$ and ${\cal A}_\mu^{[1]}(z,\,k) $ can easily be obtained in terms of the bulk-to-boundary (Btb) and bulk-to-bulk (BtB) propagators. This will be done below. However, before doing so, we note that for solving the equations of motion, it is convenient to split $A_\mu$ and $J_\mu$ into transverse and longitudinal components as [88]
\begin{eqnarray} \label{decomp_A_J}
A_\mu = A_\mu^\perp+i \,k_\mu\,A^{||} \qquad;\qquad J_\mu = J_\mu^\perp+i \,k_\mu\, J^{||}
\end{eqnarray}
where $A_\mu^\perp = \pi_\mu^{\;\;\nu} A_\nu, \ A^{||} = -i k^\mu A_\mu/k^2$ and similar for $J_\mu^\perp$ and $J^{||}$ (indices are contracted with the flat metric $\delta_{\mu\nu}$).
Using the two equations in (<ref>), the equation of motion for the longitudinal mode is found to be
\begin{eqnarray} \label{cons}
J^{||}= \frac{1}{k^2} \partial_zJ_z+\frac{(1-d)}{k^2} \frac{J_z}{z}\, .
\end{eqnarray}
This is the same as the conservation condition $\nabla_MJ^M=0$ and hence it is identically satisfied. This also shows that the $z$ component of the equation of motion is automatically satisfied provided the current $J_M$ is conserved.
§.§.§ Bulk-to-boundary propagator
Substituting (<ref>) in (<ref>), we find that ${\cal A}_\mu^{[0]}$ satisfies (<ref>) without the source terms $J_\mu$ and $J_z$, since the source term is linear in the coupling $g$. We can solve the resulting homogeneous equation by introducing the bulk-to-boundary (Btb) propagator $\mathbb{K}_{\mu}^{\;\;\nu}(z,k)$ defined by
\begin{eqnarray}
{\cal A}^{[0]}_\mu(z,\,k) \;=\; \mathbb{K}_{\mu}^{~\nu}(z,\,k)\, A_{(0)\nu}(k)\, ,
\end{eqnarray}
where $A_{(0)\nu}(k)$ is the boundary value of the gauge field, i.e.,
\begin{eqnarray}
{\cal A}^{[0]}_\mu(z\rightarrow0,\,k)\;=\;A_{(0)\mu}(k)\, .
\end{eqnarray}
The $\mathbb{K}_{\mu}^{\;\;\nu}(z,k)$ satisfies the differential equation
\begin{eqnarray}
\left(z^2 \partial_z^2 +(3-d)z \partial_z\right)\mathbb{K}_\mu^{~\nu}(z,\,k)-k^2\,\pi^{\;\;\sigma}_\mu\mathbb{K}_\sigma^{~\nu}(z,\,k)=0\, , \label{C.41a}
\end{eqnarray}
with the boundary condition
\begin{eqnarray}
\lim_{z\rightarrow 0} z^{\Delta-d+1}\,\mathbb{ K}_\mu^{~\nu}( z,\,k)=\delta_\mu^\nu\qquad;\qquad \Delta =d-1\, .\label{bound6}
\end{eqnarray}
The solution of (<ref>) is easily obtained by splitting the longitudinal and transverse parts as
\begin{eqnarray}
\mathbb{K}_{\mu}^{~\nu}(z,\, k) \;=\; \mathbb{K}^{\perp}(z,\, k)\, \pi_\mu^{~\nu} +\mathbb{K}^{||}(z,\, k)\,\frac{k_\mu k^\nu}{k^2}\, .
\end{eqnarray}
These longitudinal and transverse components satisfy decoupled differential equations
\begin{eqnarray}
z^2\partial_z^2\mathbb{K}^{\perp}+(3-d)z\,\partial_z\mathbb{K}^{\perp}-z^2k^2\,\mathbb{K}^{\perp} =0 \qquad;\qquad z^2\partial_z^2 \mathbb{K}^{||} +(3-d)z\,\partial_z\mathbb{K}^{||}=0\, .
\end{eqnarray}
These have the solution
\begin{eqnarray}
\mathbb{K}^{\perp} \;=\; c_0(k)\, z^{\frac{d-2}{2}}\,K_{\frac{d}{2}-1}(zk)\, ,\qquad \mathbb{K}^{||}\;=\;c_1(k)\,z^{d-2}+c_2(k)\, .
\end{eqnarray}
Imposing the boundary condition (<ref>), we find
\begin{eqnarray}
c_0(k)\;=\;\frac{2^{2-\frac{d}{2}}}{\Gamma\left(\frac{d}{2}-1\right)}\,k^{\frac{d}{2}-1}\, ,\qquad c_1(k)\;=\;0\, ,\qquad c_2(k)\;=\;1\, .
\end{eqnarray}
Thus, the bulk-to-boundary propagator can be written as
\begin{eqnarray}
\mathbb{K}_{\mu\nu}(z,\,k) \;=\; c_0(k)\,z^{\frac{d-2}{2}}\,K_{\frac{d}{2}-1}(zk)\,\pi_{\mu\nu} + \frac{k_\mu k_\nu}{k^2}\, ,
\end{eqnarray}
where we have lowered the boundary indices using the flat metric $\delta_{\mu\nu}$.
The leading order solution ${\cal A}^{[0]}_{\mu}$ is, thus, given by
\begin{eqnarray}
{\cal A}^{[0]}_\mu(z,\,k) &=& c_0(k)\,z^{\frac{d-2}{2}}\,K_{\frac{d}{2}-1}(zk)\,\pi_\mu^{~\nu}(k)\,A_{(0)\nu}(k) + \frac{k_\mu k^\nu}{k^2}\,A_{(0)\nu}(k)\nonumber\\
&=& {\cal A}_\mu^{[0]\perp}(z,\,k) + i\,k_\mu\,{\cal A}^{[0]||}(z,\,k)\, .
\end{eqnarray}
It is straightforward to verify that the above solution automatically satisfies both the equations in (<ref>) with $J_M=0$.
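As a quick numerical check (ours, not from the paper), one can verify that the transverse solution indeed satisfies its defining equation; the normalisation $c_0(k)$ drops out of the homogeneous equation, and the values of $d$ and $k$ below are illustrative:
\begin{verbatim}
import mpmath as mp

d, k = 4, mp.mpf('1.3')
Kperp = lambda z: z**(mp.mpf(d - 2)/2)*mp.besselk(mp.mpf(d)/2 - 1, k*z)

for z0 in (mp.mpf('0.5'), mp.mpf(2), mp.mpf(7)):
    res = (z0**2*mp.diff(Kperp, z0, 2) + (3 - d)*z0*mp.diff(Kperp, z0)
           - z0**2*k**2*Kperp(z0))
    print(mp.nstr(res, 5))   # -> ~0 at each sample point
\end{verbatim}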
§.§.§ Bulk-to-bulk propagator
The solution of (<ref>) at first order in the gauge coupling constant $g$ can be obtained using the bulk-to-bulk propagator ${\cal G}_{\mu\nu}(z,w;k)$ defined by
\begin{eqnarray}
\left[\left(\frac{z}{L^2}(3-d) \partial_z +\frac{z^2}{L^2}\partial^2_z\right)\delta^{\;\sigma}_\mu -\frac{k^2}{L^2}z^2\pi_\mu^{\;\;\sigma}\right] {\cal G}_{\sigma\nu}(z,\,w;\,k)&=&\frac{G_{\mu \nu}}{\sqrt{G}}\;\delta(z-w)\, , \label{C.41}
\end{eqnarray}
with the boundary conditions at the two ends given by
\begin{eqnarray}
\lim_{z\rightarrow 0}\, z^{\Delta-d+1}\, {\cal G}_{\mu\nu}(z,\,w;\,k)\;=\; 0\, ,\qquad \lim_{z\rightarrow \infty}\, {\cal G}_{\mu\nu}(z,\,w;\,k)\;=\; 0\qquad;\qquad \Delta=d-1\, .
\end{eqnarray}
The solution of the gauge field equation to first order in the gauge coupling can now be expressed as
\begin{eqnarray} \label{amu1}
{\cal A}^{[1]}_\mu(z,\,k)=\int dw \sqrt{G} \,{\cal G}_{\mu\nu}(z,\,w;\,k)\,J^\nu(w,\,k)\, .
\end{eqnarray}
Equation (<ref>) can again be solved by splitting ${\cal G}_{\mu\nu}(z,w;k)$ in the transverse and longitudinal components as
\begin{eqnarray}
{\cal G}_{\mu\nu}(z,\,w;\,k)={\pi}_{\mu\nu} {\cal G}^\perp(z,\,w;\,k)+ \frac{k_\mu k_\nu}{k^2}{\cal G}^\parallel(z,\,w;\,k)\, .
\end{eqnarray}
These components satisfy the equations
\begin{eqnarray}
\left[ \frac{d}{dz} \left(\hat{z}^{3-d}\frac{d}{dz}\right)-\hat{z}^{3-d}k^2
\right]{\cal G}^\perp=\delta(z-w)~~;~~ \left[ \frac{d}{dz} \left(\hat{z}^{3-d}\frac{d}{dz}\right)\right]{\cal G}^\parallel(z,w;k)=\delta(z-w)\, ,\label{N.5}
\end{eqnarray}
where, to simplify the notation, we have introduced $\hat{z}=\frac{z}{L}$.
To solve the two equations in (<ref>), it is useful to recall the Green's function solution of second order inhomogeneous differential equations of the form
\begin{eqnarray}
{\cal L}\;y(z) =f(z)\qquad;\qquad {\cal L} = \frac{d}{dz} \left(p(z) \frac{d}{d z}\right) +q(z)\, , \label{L.1}
\end{eqnarray}
where ${\cal L}$ is a self-adjoint differential operator. The Green's function for this equation is defined by
\begin{eqnarray}
{\cal L}\,G(z,w)=\delta(z-w)\, ,
\end{eqnarray}
and its solution is obtained by following a standard procedure, see e.g., [89]. The general solution, in an interval $(a,b)$, is given by
\begin{eqnarray}
G(z,w)\;=\;\left\{\begin{array}{ll}
A\,y_1(z)\,y_2(w),&\mbox{ for}\;\;z<w\\
A\,y_2(z)\,y_1(w),&\mbox{ for}\;\;z>w\end{array}\right.\label{3319u}
\end{eqnarray}
where $y_1$ and $y_2$ satisfy ${\cal L}\; y_{1}=0={\cal L}\; y_{2}$, with $y_1(z)$ obeying the appropriate boundary condition at $z=a$ and $y_2(z)$ obeying the appropriate boundary condition at $z=b$. The coefficient $A$ is determined by requiring the Green's function to be continuous at $z=w$ but with a discontinuous derivative. This gives
\begin{eqnarray}
A\left[y'_2(w)\,y_1(w)- y'_1(w)\,y_2(w)\right]=\frac{1}{p(w)}\, .
\end{eqnarray}
Following this procedure to solve the two equations in (<ref>), we find that the solutions of the homogeneous equation corresponding to the first equation in (<ref>) are given by modified Bessel functions of the first and second kind as
\begin{eqnarray}
y_1(k,z) \;=\; \hat{z}^{\frac{d}{2}-1}\, I_{\frac{d}{2}-1}(kz)\, ,\qquad y_2(k,z) \;=\; \hat{z}^{\frac{d}{2}-1}\, K_{\frac{d}{2}-1}(kz)\, ,
\end{eqnarray}
where $y_1$ satisfies the boundary condition at $z=0$ (i.e. for $z<w$) and $y_2$ satisfies the boundary condition at $z=\infty$ (i.e. for $z>w$). The constant $A$ in (<ref>) is evaluated to be $A=-L$. Thus, the transverse component $ \mathcal{G}^\perp(z,w;k)$ can be expressed as
\begin{eqnarray}
\mathcal{G}^\perp(z,w;k)\;=\; -L\,\begin{cases}
(\hat{z}\hat{w})^{\f{d}{2}-1}\,I_{\f{d}{2}-1}(k z)\,K_{\f{d}{2}-1}(k w),& \text{for } z< w\\[.3cm]
(\hat{z}\hat{w})^{\f{d}{2}-1}\,I_{\f{d}{2}-1}(k w)\,K_{\f{d}{2}-1}(k z), & \text{for } z > w
\end{cases}
\end{eqnarray}
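The defining properties of this Green's function (continuity at $z=w$ and a derivative jump equal to $1/p(w)$ with $p(z)=\hat{z}^{3-d}$) can be verified numerically; the sketch below (ours, not from the paper) does so for the illustrative values $d=4$, $L=2$:
\begin{verbatim}
import mpmath as mp

d, k, L, w = 4, mp.mpf('1.1'), mp.mpf(2), mp.mpf('1.5')
nu = mp.mpf(d)/2 - 1
zhat = lambda u: u/L

def G_left(z):    # branch of G^perp valid for z < w
    return -L*(zhat(z)*zhat(w))**nu*mp.besseli(nu, k*z)*mp.besselk(nu, k*w)

def G_right(z):   # branch of G^perp valid for z > w
    return -L*(zhat(z)*zhat(w))**nu*mp.besseli(nu, k*w)*mp.besselk(nu, k*z)

print(mp.nstr(G_right(w) - G_left(w), 5))            # continuity -> 0
jump = mp.diff(G_right, w) - mp.diff(G_left, w)      # jump of dG/dz at z = w
print(mp.nstr(jump - zhat(w)**(d - 3), 5))           # -> ~0, i.e. = 1/p(w)
\end{verbatim}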
Following similar steps, the longitudinal component is obtained to be
\begin{eqnarray}
{\cal G}^{\parallel}(z,w;k) \;=\; -\frac{L}{d-2}\,\begin{cases}
\hat{z}^{d-2}, & \text{for } z< w\\[.3cm]
\hat{w}^{d-2}, & \text{for } z > w
\end{cases}
\end{eqnarray}
Combining the transverse and longitudinal parts, the full bulk-to-bulk propagator for the gauge field is obtained to be
\begin{eqnarray}
{\cal G}_{\mu\nu}(z,w;k) \;=\; -L\,\begin{cases}
(\hat{z}\hat{w})^{\f{d}{2}-1}\,I_{\f{d}{2}-1}(k z)\,K_{\f{d}{2}-1}(k w)\,\pi_{\mu\nu}+\f{\hat{z}^{d-2}}{d-2}\,\f{k_\mu k_\nu}{k^2}, & \text{for } z< w\\[.3cm]
(\hat{z}\hat{w})^{\f{d}{2}-1}\,I_{\f{d}{2}-1}(k w)\,K_{\f{d}{2}-1}(k z)\,\pi_{\mu\nu}+\f{\hat{w}^{d-2}}{d-2}\,\f{k_\mu k_\nu}{k^2}, & \text{for } z > w
\end{cases}
\end{eqnarray}
By construction, the bulk-to-bulk propagator satisfies the second equation in (<ref>). Let us now verify that it satisfies the first equation as well. Using (<ref>) we compute
\begin{align}
k^\mu {\cal A}^{[1]}_\mu(z,\,k)
&=\int dw \sqrt{G} k^\mu {\cal G}_{\mu\nu}(z,\,w;\,k)\,J^\nu(w,\,k) \nonumber \\
&=-\frac{L^{2}}{d-2} \int_{0}^{\infty} \frac{dw}{w^{d-1}} \left(\Theta(z-w) w^{d-2} + \Theta(w-z) z^{d-2}\right) k^\mu J_\mu(w,\,k)
\end{align}
where in the second equality we used (<ref>). Using (<ref>) and (<ref>) we find
\begin{equation}
k^\mu J_\mu(w,\,k) = i \left(\partial_w J_w + (1-d) \frac{J_w}{w}\right)
\quad \Rightarrow \quad \frac{k^\mu J_\mu(w,\,k)}{w^{d-1}} = i \partial_w \left(\frac{J_w}{w^{d-1}}\right)\, .
\end{equation}
\begin{align}
k^\mu {\cal A}^{[1]}_\mu(z,\,k)
&= -i \frac{L^{2}}{d-2} \int_{0}^{\infty} dw \left(\Theta(z-w) w^{d-2} + \Theta(w-z) z^{d-2}\right) \partial_w \left(\frac{J_w}{w^{d-1}}\right) \nonumber \\
&=-i \frac{L^{2}}{d-2} \left(\left[ \left(\Theta(z-w) w^{d-2} + \Theta(w-z) z^{d-2}\right) \frac{J_w}{w^{d-1}} \right]_0^\infty\right. \nonumber \\
&\left.\qquad \qquad -\int_{0}^{\infty} dw \left(\delta(w-z) (z^{d-2} - w^{d-2}) -(d-2) w^{d-3} \Theta(z-w)\right) \frac{J_w}{w^{d-1}} \right) \nonumber \\
&=i L^2 \int_{0}^{\infty} dw \Theta(z-w) \frac{J_w}{w^2} \label{kA}
\end{align}
where the vanishing of the boundary term at $w=0$ requires that $J_w$ goes to zero faster than $w$, which is guaranteed by the first of (<ref>) and the boundary conditions in (<ref>). Differentiating (<ref>) w.r.t. $z$ and rearranging yields the first of (<ref>).
In computing the 3-point function, we need the expression of the bulk-to-bulk propagator near the boundary $z\rightarrow0$. In this limit, the expression (<ref>) gives
\begin{eqnarray}
{\cal G}_{\mu\nu}(z\rightarrow0,w;k) &=& -\frac{L}{2^{\frac{d}{2}-1}\,\Gamma\left(\frac{d}{2}\right)}\; k^{\frac{d}{2}-1}\,\left(\hat{z}^2 w\right)^{\frac{d}{2}-1}K_{\frac{d}{2}-1}(k w)\,\pi_{\mu\nu}-\frac{L\, \hat{z}^{d-2}}{d-2}\,\frac{k_\mu k_\nu}{k^2}\nonumber\\
&=& -\frac{L^{3-d}}{(d-2)}\, z^{d-2}\,\mathbb{K}_{\mu\nu}(w,\,k)\, .
\end{eqnarray}
§.§ Classical Solution of Massive Spin-1 Field
In this section, we review the solution of the massive spin-1 field following the approach given in [90]. We are interested in getting the classical solution of the massive field at the leading order in the gauge coupling $g$. As we shall see below, this can be obtained in terms of the bulk-to-boundary propagator of the massive field. The equation of motion of the massive spin-1 field is given by
\begin{eqnarray}
2\nabla_M\nabla^{[M}W^{N]} -m^2 W^N=0+{\cal O}(g)\, . \label{wm67}
\end{eqnarray}
By acting with the covariant derivative $\nabla_N$, we obtain the following subsidiary condition
\begin{eqnarray}
\nabla_MW^M=0+{\cal O}(g)\quad\implies\qquad \delta^{\mu\nu}\partial_\mu W_\nu +\partial_zW_z- \frac{(d-1)}{z} W_z=0+{\cal O}(g)\, . \label{C.42}
\end{eqnarray}
The classical profile of the massive spin-1 fields must satisfy this constraint at the leading order in the gauge coupling expansion.
Fourier transforming the boundary directions and using the subsidiary condition (<ref>), the $z$ component of the equation of motion (<ref>) gives in Poincaré coordinates,
\begin{eqnarray}
&&z^2 \partial_z^2 W_z-(d-1) z \partial_z W_z -k^2 z^2 W_z+ \Bigl(d-1-\,m^2L^2\Bigl)W_z=0\, . \label{wzq1}
\end{eqnarray}
Demanding regularity at $z=\infty$, the above equation has the solution
\begin{eqnarray}
W_z(z,\,k)=c(k)\, z^{\frac{d}{2}}\,K_\beta(z\,k)\qquad;\qquad \beta^2= \frac{(d-2)^2}{4}+m^2L^2 \quad;\quad \beta=\Delta-\frac{d}{2}\, ,\label{B.45}
\end{eqnarray}
where $K_\beta(z\,k)$ is the modified Bessel function of the second kind and $c(k)$ is an arbitrary function.
Similarly, the $\mu$ component of the equation of motion (<ref>) on using (<ref>) gives
\begin{eqnarray}
z^2\partial_z^2W_\mu+(3-d)z\,\partial_zW_\mu-(z^2\,k^2+m^2L^2) W_\mu= 2iz k_\mu W_z=2i\,c(k) \,k_\mu z^{\frac{d}{2}+1}\,K_\beta(z\,k)\, .\label{wmuq1}
\end{eqnarray}
The solution of this equation has a homogeneous and an inhomogeneous part. The inhomogeneous part should be proportional to $k_\mu$. It is easy to see that the above equation has the following solution consistent with the constraint (<ref>)
\begin{eqnarray}
W_\mu(z,\,k) \;=\; \left[\delta_\mu^\nu\, z^{\frac{d-2}{2}}\,K_\beta(kz)+ \frac{k^\nu k_\mu}{k\,(d-\Delta-1)}\, z^{\frac{d}{2}}\, K_{\beta+1}(zk)\right]a_\nu(k)\, .
\end{eqnarray}
For later use, we note that the relation between $c(k)$ and $a_\mu$ following from the constraint (<ref>) is
\begin{eqnarray}
c(k)\left(\frac{d}{2}-\beta-1\right) \;=\; i\,k^\mu a_\mu(k)\, .
\end{eqnarray}
We can obtain the bulk-to-boundary propagator of the massive spin-1 field using the above solution. For this, we need to relate $a_\mu(k)$ to the boundary value of the field $W_\mu(z,k)$. Writing $a_\mu= b_\mu +i \,k_\mu b$ and using the expression of the modified Bessel function in $z\rightarrow0$ limit given in equation (<ref>), we find
\begin{eqnarray}
W_\mu(z,\,k)\Big|_{z\rightarrow0} &\equiv& z^{d-\Delta-1}\,w_\mu(k)\, ,\nonumber\\
w_\mu(k) &=& \frac{1}{2} \left(\frac{k}{2}\right)^{\frac{d}{2}-\Delta} \Gamma\!\left(\Delta-\frac{d}{2}\right)\left[b_\mu+ k_\mu\left(\frac{(\Delta-1)\,b}{(d-\Delta-1)}+\frac{2\,k^\nu b_\nu\,\left(\Delta-\frac{d}{2}\right)}{k^2\,(d-\Delta-1)}\right) \right]\, .
\end{eqnarray}
We can get rid of the term proportional to $k_\mu$ by choosing $b$ to be $ \f{(d-2\Delta)}{(\Delta-1)}\f{k^\nu b_\nu}{k^2}$. This allows us to relate the integration constant to the boundary value of the field. Collecting all results and using Bessel function identities, we can write
\begin{eqnarray}
W_\mu(z,\,k) \;=\; \frac{2\, z^{\frac{d-2}{2}}}{\Gamma\left(\Delta-\frac{d}{2}\right)}\left(\frac{k}{2}\right)^{\Delta-\frac{d}{2}}\left[\delta_\mu^\nu\, K_{\Delta-\frac{d}{2}}(kz)+ \frac{z\,k^\nu k_\mu}{k\,(\Delta-1)}\, K_{\Delta-\frac{d}{2}-1}(zk)\right]w_\nu(k)\, ,
\end{eqnarray}
\begin{eqnarray}
W_z(z,\,k)=i \frac{2^{\frac{d}{2}+1-\Delta }}{\Gamma(\Delta -\frac{d}{2})} \,\f{1}{\Delta-1}k^{\Delta-\frac{d}{2}}\, z^{\frac{d}{2}}\,K_{\Delta -\frac{d}{2}}(z\,k)\, k^\nu\,w_\nu(k)\label{C.124}
\end{eqnarray}
The bulk-to-boundary propagator $\mathcal{K}_M^{~\mu} (z,k)$ for the massive spin-1 field can now be defined by
\begin{eqnarray}
W_M(z,\,k) = \mathcal{K}_M^{~\mu} (z,k)\,w_\mu(k)\qquad;\qquad\lim_{z\rightarrow 0} z^{-d+\Delta +1} \, \mathcal{K}_M^{~\mu} (z,k)=\delta^\mu_M\, .\label{C.11}
\end{eqnarray}
Comparing (<ref>) with (<ref>) and (<ref>), we get
\begin{eqnarray}
\mathcal{K}_\mu^{~\nu}(z,\,k)&=& \frac{ 2^{\frac{d}{2}+1-\Delta}}{\Gamma\left(\Delta-\frac{d}{2}\right)} \,k^{\Delta-\frac{d}{2}} \, z^{\frac{d}{2}-1} \left[\delta_\mu^\nu~K_{\Delta-\frac{d}{2}}(z k)+\frac{k_\mu\,k^\nu}{k}~\frac{z}{\Delta-1}~K_{\Delta-\frac{d}{2}-1}(zk)\right]\, ,\nonumber\\
\mathcal{K}_z^{~\nu}(z,\,k)&=& i\, \frac{ 2^{\frac{d}{2}+1-\Delta}}{\Gamma\left(\Delta-\frac{d}{2}\right)}\,\frac{k^{\Delta-\frac{d}{2}}}{\Delta-1} \, z^{\frac{d}{2}}\,K_{\Delta-\frac{d}{2}}(z k)\,k^\nu\, .
\end{eqnarray}
# A First Look at Cepheids in a SN Ia Host with JWST
Wenlong Yuan Department of Physics & Astronomy, Johns Hopkins University,
Baltimore, MD 21218, USA Adam G. Riess Department of Physics & Astronomy,
Johns Hopkins University, Baltimore, MD 21218, USA Space Telescope Science
Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Stefano Casertano
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218,
USA Lucas M. Macri George P. and Cynthia W. Mitchell Institute for
Fundamental Physics and Astronomy,
Department of Physics and Astronomy, Texas A&M University, College Station, TX
77843, USA
###### Abstract
We report the first look at extragalactic Cepheid variables with the James
Webb Space Telescope, obtained from a serendipitous (to this purpose)
observation of NGC 1365, host of an SN Ia (SN 2012fr), a calibration path used
to measure the Hubble constant. As expected, the high-resolution observations
with NIRCam through F200W show better source separation from line-of-sight
companions than HST images at similar near-infrared wavelengths, the spectral
region that has been used to mitigate the impact of host dust on distance
measurements. Using the standard star P330E as a zeropoint and PSF reference,
we photometered 31 previously-known Cepheids in the JWST field, spanning
$1.15<\log P<1.75$ including 24 Cepheids in the longer period interval of
$1.35<\log P<1.75$. We compared the resultant Period-Luminosity relations to
that of 49 Cepheids in the full period range including 38 in the longer period
range observed with WFC3/IR on HST and transformed to the JWST photometric
system (F200W, Vega). The P-L relations measured with the two space telescopes
are in good agreement, with intercepts (at $\log P=1$) of 25.74 $\pm$0.04 and
25.72 $\pm$0.05 for HST and JWST, respectively. Our baseline result comes from
the longer period range where the Cepheids have higher signal-to-noise ratios
where we find 25.75$\pm 0.05$ and 25.75$\pm 0.06$ mag for HST and JWST,
respectively. We find good consistency between this first JWST measurement and
HST, and no evidence that HST Cepheid photometry is “biased bright” at the
$\sim 0.2$ mag level that would be needed to mitigate the Hubble Tension,
though comparisons from more SN hosts are warranted and anticipated. We expect
future JWST observations to surpass these in quality as they will be optimized
for measuring Cepheids.
## 1 Introduction
Cepheid variables have held a central role in measuring extragalactic
distances for more than a century (Leavitt & Pickering, 1912). They exhibit
several features which make them uniquely suited for this role. Their nature
is well understood as a consequence of the $\kappa$ mechanism, which drives a
periodic overshooting of hydrostatic equilibrium and produces their pulsations
(Eddington, 1927). Their great luminosities, $\sim 10^{5}L_{\odot}$, make them
visible with modern telescopes at many tens of Megaparsecs. The large
amplitude of their variations uniquely identifies them and their periods
standardize their luminosities to a precision of a few percent. They are
ubiquitous in areas of recent star formation, including many hosts of Type Ia
supernovae (which have still greater range). Lastly, hundreds of Cepheids in
the Milky Way are in range of precise parallaxes from the ESA Gaia satellite
to provide a 1% geometric calibration of their fiducial luminosity (Riess et
al., 2022a; Cruz Reyes & Anderson, 2022). For these reasons, Cepheids are the
primary distance indicator most often selected for measuring long-range
distances and the Hubble constant (Riess et al., 2022b, hereafter, R22).
A succession of technological advancements has extended the reach, precision
and accuracy of Cepheid distance estimates at tens of Megaparsecs. One of the
original goals of the Hubble Space Telescope (HST) was to resolve
extragalactic Cepheids, which was achieved in dozens of galaxies within $\sim$
20 Mpc with the Wide Field Planetary Camera 2 (WFPC2) at optical wavelengths
(Freedman et al., 2001; Sandage et al., 2006). HST instruments with greater
sensitivity and higher resolution, ACS and WFC3/UVIS, extended this reach to
$\sim 50$ Mpc and a greater number of nearby SNe Ia and geometric calibrators
(Macri et al., 2006; Riess et al., 2011; Hoffmann et al., 2016).
Given that Cepheids are found in regions of recent star formation, they are
observed through interstellar dust with a mean reddening (in modestly-inclined
spirals, R22) of $E(V-I)\sim 0.3$ mag. Thus, their visible- (0.5$\micron$) and
infrared- (0.8$\micron$) band measurements must account for a mean of $\sim
0.7$ mag and $\sim 0.4$ mag of extinction, respectively, to provide accurate
distance measurements, which in consequence are sensitive to the uncertain
nature of extragalactic reddening laws.
Wide-scale follow-up of Cepheids in the near-infrared (NIR), to mitigate dust
effects, first became practical with WFC3/IR, allowing measurements at
$1.6\micron$ and reducing the mean impact of extinction to $\sim$ 0.1 mag and
the sensitivity to reddening laws (Riess et al., 2011). However, the advantage
of NIR observations over optical bands came with new challenges; at these
wavelengths, the resolution of HST is 2-3 times lower and the background (in
the form of ubiquitous red giants) is an order of magnitude greater. The
result is an increase in the measurement errors (after statistical removal of
the backgrounds measured using artificial stars) which may limit the precision
of distance measurements without a large number ($>$50) of Cepheids in each
host. While Cepheid distance measurements from either the optical or NIR are
in good agreement (R22), a result most likely if both are accurate, the
pursuit of a 1% measurement of the Hubble constant demands ever more stringent
tests of Cepheid photometry.
The newly-launched James Webb Space Telescope (JWST) offers the twin
advantages of angular resolution comparable to that of WFC3/UVIS at visible
wavelengths and an impact of interstellar dust as low as that of WFC3/IR, in
the same observation. JWST observations planned for its first GO cycle have been
designed to take advantage of these capabilities and reobserve Cepheids
previously measured with HST, work which is likely to require years to collect
and thoroughly analyze to fully come to fruition. However, an early
observation with JWST of a SN Ia host previously observed by HST offers a
serendipitous (for this endeavor) and valuable preview.
To be clear in setting expectations for future JWST observations, these
serendipitous observations of the Cepheids in NGC 1365 fall short of
demonstrating the full capability of the observatory for this endeavor. They
are shorter in exposure time by a factor of a few than those planned for this
purpose and they are obtained at nearly twice the wavelength needed to
optimally resolve and reduce the contributions of nearby red giants (i.e., the
background). Notably, they cover a more crowded region along a spiral arm (see
Figure 1) compared to most of those observed by HST. Further, they provide
only a single (i.e., “random”) epoch or phase in each Cepheid light curve,
which adds an additional dispersion of $0.1$ to $0.2$ mag depending on the
amplitude of the Cepheid. Lastly, the state of the JWST calibration data
(e.g., flat fields, dark frames, bias frames, geometric distortion maps,
linearity corrections) is in its first iteration and will improve with time.
Nevertheless, and with these limitations in mind, these observations preview
the enhanced capabilities of JWST over HST and provide meaningful, if
preliminary, quantitative results.
In §2 we describe the details of the JWST observations for NGC 1365, as well
as the data reduction and photometry procedures. We show our results in §3 and
a brief discussion in §4. An appendix provides information about past HST
observations of Cepheids in NGC 1365 for easy reference.
## 2 Observations, data reduction, and photometry
### 2.1 Observations & data reduction
The central region of NGC 1365 was recently observed with JWST NIRCam on 2022
August 13 as part of program GO-2107 (PI: Janice Lee), which aims to study the
star formation activity in 19 nearby galaxies. The NGC 1365 field partially
overlaps with an HST WFPC2 time-series field (GO-5972, PI: Jeremy Mould) where
dozens of Cepheids were discovered (Silbermann et al., 1999; Hoffmann et al.,
2016) and followed up in the NIR (R22). With the Cepheid locations and periods
determined from those HST data, we have an opportunity to photometer and study
these Cepheids in the new JWST observations. In Figure 1 we show the
footprints of the JWST observations as well as archival HST observations and
locations of previously-identified Cepheids. The initial WFPC2 time-series and
WFC3 follow-up targeted a less crowded part of the host off the spiral arms,
but the NIRCam observations targeted the center of the galaxy and primarily
contain Cepheids in a small dense, crowded region. Appendix Figure 4 shows
less-crowded Cepheids imaged by HST that are more similar to those typically
studied in HST fields. Due to the overlap of the two observatories, we can
also directly compare the images and measurements of many of the same Cepheids
in the denser regions of the host.
Figure 1: Observation footprints of NGC 1365 with JWST NIRCam (magenta), HST
WFPC2 (cyan), WFC3/UVIS (green), and WFC3/IR (red) overlaid on a color
composite image from the Dark Energy Survey (DOE/FNAL/DECam/
CTIO/NOIRLab/NSF/AURA). The locations of Cepheids used in this study are
indicated by circles. North is up and east is to the left.
We retrieved JWST observations of NGC 1365 from MAST and processed the raw
data (stage 0) using the JWST Science Calibration Pipeline version 1.6.2.
There are 25 exposures in total, with the short-wavelength channel through the
F200W filter and the long-wavelength channel through the F300M, F335M, and
F360M filters. In this study, we only analyzed the F200W data for their depth
and proximity in wavelength coverage compared to the HST F160W band. The F200W
data consist of eight subfields, with each one covered by approximately one
short-wavelength detector. Only the two east-most subfields contain
previously-identified Cepheids; thus, we excluded the other six from the
analysis. The total exposure times are 1202.52s for both analyzed subfields.
We noticed 1/f noise causing small bias shifts in the calibrated stage 2 data
products (see §2 of Merlin et al., 2022). We corrected them by subtracting the
median value of each row and then each column before the JWST pipeline stage 3
process. Similar to Merlin et al. (2022), we masked all sources when computing
the median values for the row and column subtractions.
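For concreteness, the correction amounts to the following sketch (our illustration, not the actual pipeline code), where `source_mask` marks pixels belonging to detected sources:

```python
import numpy as np

def subtract_one_over_f(data, source_mask):
    """Remove 1/f striping: subtract per-row, then per-column medians,
    computed with sources masked (cf. Merlin et al. 2022, Sec. 2)."""
    img = data.astype(float).copy()
    img -= np.nanmedian(np.where(source_mask, np.nan, img), axis=1, keepdims=True)
    img -= np.nanmedian(np.where(source_mask, np.nan, img), axis=0, keepdims=True)
    return img
```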
We used the WCS in the images to locate the Cepheids based on their HST
positions. We identified a global shift of $\sim 0\farcs 5$ between the HST
and JWST positions and accounted for this to register the images. After this
global shift we found point sources at the expected positions of the Cepheids
to a precision of less than a NIRCam pixel ($0\farcs 031$; see Figure 2). The
HST NIR observations in these spiral arms are under-sampled (even after
drizzling to $0\farcs 08/$pixel resolution) and lack the inherent resolution
of JWST (despite the greater wavelength of those observations).
While the Cepheids were easily apparent in the deeper and higher-resolution
images in F200W, they were hard to discern in the accompanying observations at
longer-wavelengths and through medium-width bands due to their much shorter
exposure times, lower angular resolution and lower throughput of these
filters. As a result we only analyzed the F200W images.
Figure 2: Image cuts of 5 example Cepheids analyzed in this study. Their
locations are indicated by the corresponding colors in Figure 1. The circles
cover a radius of 0$\farcs$375 while the image cuts display 3″ on a side. From
left to right, each row shows one (same) Cepheid in HST F555W, F814W, F160W,
and JWST F200W, where the exposure times are 1410s, 1770s, 3618s, and 1203s,
respectively. The orientation of the image cuts is indicated by the white
compass in the top-left panel.
### 2.2 Photometry
We performed point-spread function (PSF) photometry using a crowded-field
photometry package based on DAOPHOT/ALLSTAR (Stetson, 1987, 1994). We
constructed an empirical model of the PSF using F200W observations of the
standard star P330E (taken on 2022 Aug 29, obs. ID=jw01538o155t002) obtained
in a 160-pixel subarray (using a minimal exposure time to keep the star below
saturation) which included two dithers placed on each of the B-module chips.
We chose not to use the pipeline calibration to obtain the image zeropoints as
they have been found to have limited accuracy (at the time of this writing)
including chip-to-chip offsets (and possible time-dependence between the early
life of the mission and the present, Brammer, 2022; Boyer et al., 2022;
Nardiello et al., 2022). To produce reliable zeropoints for the observation of
NGC 1365 we used the above observations of P330E obtained and combined for
each B-module chip separately to directly calibrate the Cepheids observed in
that chip. We assigned each image of P330E a reference Vega magnitude of 11.42
mag (Rieke et al., 2022). An important advantage of using the Aug 29, 2022
observations of P330E to set the zeropoints for the images of NGC 1365 is that
they were obtained only 2 weeks after the observation of NGC 1365, an interval
during which JWST’s wave front monitoring has shown it to be relatively stable
with modeled photometric variations over the interval of $<$ 0.01 mag (M.
Perrin, 2022 private communication). (We did not make use of aperture
photometry for the Cepheids due to the inability to separate nearby sources as
expected from inspection of Figure 2.)
To avoid a flux bias from the determination of Cepheid positions in HST NIR
images, it is necessary to fix their locations using the uncrowded optical
images (i.e., “forced photometry”, Riess et al., 2009). The algorithm fits the
PSF of the Cepheids at their known, fixed positions, subtracts them from the
images, identifies additional, unresolved sources down to a fixed threshold,
and then simultaneously optimizes the fit to the non-Cepheids (parameters are
x, y and flux) and Cepheids (parameter is flux) to determine the latter’s
flux. We then add “artificial stars” at the same brightness as the Cepheid
(based on the period and iterative fit of the Period-Luminosity relation), and
remeasure these using the same procedure to account for the mean background of
unresolved sources near the position of the Cepheid (i.e., a statistical
crowding correction) and to measure the uncertainty in the Cepheid magnitude.
We also compared our results to the level 3, full-calibrated images produced
by the STScI pipeline and found that the photometry was consistent between the
versions of the images.
Figure 3: Near-infrared Period-Luminosity relations for Cepheids in the range
$1.35<\log P<1.75$ (baseline results) measured with HST and JWST. The JWST
sample (red) includes 24 Cepheids observed in F200W ($2\micron$). The HST
sample includes 38 Cepheids from R22 with F160W magnitudes transformed to
F200W using a color transformation based on their measured $V-I$ colors and
F160W$-$F200W. The inset shows the intercepts of the relations at $\log P=1$.
The solid red curve uses the JWST PSF photometry calibrated to P330E.
## 3 Results
Fixing the slope of the Period-Luminosity relation to the global value of
$-3.30$ determined from the mean of thousands of Cepheids in the MW, LMC, SMC,
M31, NGC 4258 and SN Ia hosts in the NIR (R22), we measured the intercepts at
$\log P=1$.
For our “baseline”, we limited the comparison to a period range of $1.35<\log
P<1.75$ where the Cepheids as measured from both telescopes have strong
signal-to-noise ratios. Below this range the SNR at $F160W=24.5$ (Vega) drops
to $<$10 and above this range Cepheid periods in NGC 1365 are not expected to
be accurate because the original time-series used to find the Cepheids in NGC
1365 spanned only 48 days ($\log P=1.68$), so that a full cycle would not have
been seen. The JWST and HST Cepheid Period-Luminosity relations are shown in
Figure 3.
For JWST with PSF fitting (referenced to P330E) and with 24 Cepheids we find
an intercept of 25.75$\pm 0.06$ (SD=0.36 mag). In Table 2 we provide
intercepts for broader ranges of periods and with and without $\sigma$
clipping.
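For reference, the intercept computation is a simple inverse-variance-weighted mean; the sketch below (ours, not the authors' code) illustrates it on a subset of the Table 1 columns:

```python
import numpy as np

# Illustrative subset of Table 1 (P in days, F200W and sigma in mag;
# sigma already includes the 0.15 mag random-phase term).
P     = np.array([47.24, 48.09, 48.33, 51.34, 51.45, 51.94])
F200W = np.array([22.96, 22.85, 23.41, 23.38, 23.56, 23.56])
sigma = np.array([0.19, 0.33, 0.29, 0.23, 0.18, 0.24])

a = F200W + 3.30*(np.log10(P) - 1.0)   # intercepts with slope fixed to -3.30
w = 1.0/sigma**2
print(np.sum(w*a)/np.sum(w), 1.0/np.sqrt(np.sum(w)))
# the full 24-Cepheid baseline sample yields 25.75 +/- 0.06
```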
To directly compare the HST and JWST Period-Luminosity relations observed at
different, though adjacent, bandpasses, it is necessary to account for their
different wavelength responses. Due to the simple spectral energy
distributions of stars, particularly on the Rayleigh-Jeans tail in the NIR, it
is relatively straightforward to estimate this difference, which is the color
F160W$-$F200W, from another measured color such as F555W$-$F814W. To do this
rigorously we used the PARSEC isochrones (Bressan et al., 2012) for stellar
atmospheres which are provided as calculated for the HST and JWST bandpasses
(using version CMD v3.6, http://stev.oapd.inaf.it/cgi-bin/cmd). We limited
these to a range appropriate for Cepheids: ages of 10 to 100 Myr, $T_{\rm eff}$
of 4000 to 7000 K, initial masses $>2M_{\odot}$, and $\log g<2$.
These stars have a tight locus in the color-color plane of WFC3/UVIS for
F555W$-$F814W vs F160W (HST)$-$F200W (JWST). We fit a second-order polynomial
to the color-color relation, finding
${\it F160W}-{\it F200W}=0.007+0.053\,({\it F555W}-{\it F814W})+0.077\,({\it F555W}-{\it F814W})^{2}$
Table 1: JWST F200W Cepheid Photometry
ID | $P$ | F200W$^{a}$ | $\sigma^{b}$ | R.A.$^{c}$ | Decl. | subfield
---|---|---|---|---|---|---
 | [days] | [mag] | [mag] | [deg] (J2000.0) | [deg] (J2000.0) |
97917 | 24.00 | 24.64 | 0.32 | 53.433156 | -36.158061 | south
60205 | 25.30 | 24.44 | 0.36 | 53.435680 | -36.144208 | south
25668 | 26.13 | 24.38 | 0.51 | 53.432499 | -36.136873 | north
74699 | 26.58 | 24.67 | 0.46 | 53.427052 | -36.156193 | south
40364 | 27.48 | 23.90 | 0.39 | 53.429580 | -36.143996 | south
65664 | 29.34 | 24.74 | 0.35 | 53.431935 | -36.149101 | south
53380 | 30.85 | 24.32 | 0.40 | 53.433946 | -36.143847 | south
100027 | 31.37 | 24.34 | 0.21 | 53.439120 | -36.153423 | south
79315 | 31.46 | 24.15 | 0.31 | 53.432884 | -36.152400 | south
80300 | 32.38 | 24.44 | 0.32 | 53.434414 | -36.151327 | south
94995 | 32.42 | 23.87 | 0.31 | 53.431335 | -36.158716 | south
45761 | 33.03 | 24.45 | 0.44 | 53.430433 | -36.144804 | south
73421 | 33.50 | 23.60 | 0.33 | 53.427504 | -36.155372 | south
61628 | 37.01 | 23.25 | 0.28 | 53.427674 | -36.151793 | south
17203 | 38.12 | 24.24 | 0.43 | 53.430027 | -36.136683 | north
90510 | 39.06 | 23.65 | 0.24 | 53.433311 | -36.155466 | south
77265 | 39.61 | 24.24 | 0.27 | 53.429382 | -36.154908 | south
58983 | 39.67 | 24.32 | 0.29 | 53.427627 | -36.151066 | south
101731 | 47.24 | 22.96 | 0.19 | 53.437542 | -36.155483 | south
8616 | 48.09 | 22.85 | 0.33 | 53.427309 | -36.136727 | north
9712 | 48.33 | 23.41 | 0.29 | 53.426932 | -36.137379 | north
93422 | 51.34 | 23.38 | 0.23 | 53.431778 | -36.157790 | south
94055 | 51.45 | 23.56 | 0.18 | 53.435355 | -36.154809 | south
17544 | 51.94 | 23.56 | 0.24 | 53.430280 | -36.136566 | north
Note. — $a$: These are Vega mag referenced to P330E = 11.42 in F200W. $b$: The
errors are derived from artificial stars and also include a random phase error
in quadrature of 0.15 mag. $c$: Positions are referenced to the WCS of JWST
images processed using JWST pipeline v1.6.2.
Table 2: HST and JWST Intercepts at $\log P=1$ (slope=$-3.30$) for NIR
Cepheids in NGC 1365
Sample | $N$ Cepheids | Period range | F200W Intercept$^{a}$
---|---|---|---
HST WFC3/IR field, baseline | 38 | 1.35 $<\log P<$ 1.75 | 25.754 $\pm$ 0.045
HST WFC3/IR field, extended | 49 | 1.15 $<\log P<$ 1.75 | 25.736 $\pm 0.043$
HST WFC3/IR field, SH0ES R22b | 46 | 15.0 $<P<$ 50.0 | 25.750 $\pm 0.045$
JWST NIRCam field, baseline, PSF | 24 | 1.35 $<\log P<$ 1.75 | 25.752 $\pm$ 0.059
JWST NIRCam field, extended, PSF | 31 | 1.15 $<\log P<$ 1.75 | 25.718 $\pm 0.055$
Note. — $a$: Results from HST measured in F160W and converted to F200W using
${\it F160W}-{\it F200W}=0.007+0.053({\it F555W}-{\it F814W})+0.077({\it
F555W}-{\it F814W})^{2}$. $b$: Same period range and sample used in R22.
The dispersion of the synthetic values around this approximation is 0.007 mag.
The mean Cepheid color of the sample is ${\it F555W}-{\it F814W}=1.08$ mag
(sample SD=0.22 mag), for which the relation gives ${\it F160W}-{\it F200W}=0.15$
mag (sample SD=0.05 mag); however, we computed the individual values for each
Cepheid, as given in the Appendix. We subtract the individual F160W$-$F200W
colors predicted from the optical colors from the measured HST F160W magnitudes to
provide a direct comparison to JWST F200W, as shown in Figure 3.
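For concreteness, a minimal Python sketch (our illustration, not part of the original analysis) of applying this polynomial; the check values come from the Table 3 entry for Cepheid 61628 and the mean sample color quoted above.

```python
def f160w_minus_f200w(v_minus_i):
    """Predicted F160W - F200W color from the second-order polynomial
    above, with v_minus_i = F555W - F814W."""
    return 0.007 + 0.053 * v_minus_i + 0.077 * v_minus_i ** 2

# Cepheid 61628 from Table 3: F160W = 23.79, F555W - F814W = 1.63
color = f160w_minus_f200w(1.63)
print(f"{color:.2f}")                     # 0.30, as listed in Table 3
print(f"{23.79 - color:.2f}")             # predicted F200W, 23.49
print(f"{f160w_minus_f200w(1.08):.2f}")   # 0.15 at the mean sample color
```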
The baseline measurements of the HST intercepts use the F160W magnitudes as
given in R22, the F160W$-$F200W colors as given in the Appendix, and include
38 Cepheids in this period range. To increase the sample for the purpose of
this HST-to-JWST comparison, we added 3 Cepheids with $P=51,51,$ and 52 days
found by Hoffmann et al. (2016); these are only slightly above the $P<50$ day
limit used by R22 but still well below the $1.2\times$ time-span of the
observations necessary for reliable periods. We find an intercept for HST at $\log P=1$ of
25.75$\pm 0.05$ mag and provide intercepts with other period ranges in Table
2.
The inset in Figure 3 compares the intercepts. The agreement between the HST
and JWST intercepts is very good, below 1$\sigma$ in their difference. The
same mean difference is seen when comparing only identical Cepheids, though the
number of Cepheids measured by both is far smaller and thus the comparison is
less significant. The dispersion around the Period-Luminosity relation as
shown in Figure 3 is comparable between HST and JWST, and is likely to be
smaller for optimal JWST observations with multiple epochs, better image
calibration, and the less crowded regions more typically observed with HST.
To a $\sim$0.05 mag level of preliminary accuracy, based on the still limited
characterization of JWST and for this case, we can conclude that past HST NIR
measurements do not appear biased, let alone “biased bright” at the $\sim$0.2
mag level (i.e., by the systematics of past photometry measurements or by
previously unresolved companions), as would be required to mitigate the
“Hubble Tension” in R22 (and then only if such a bias were not also similarly
present in the HST photometry of Cepheids in the geometric anchor, NGC 4258).
## 4 Discussion
The JWST images and measurements of Cepheids in NGC 1365, and their comparison
to those from HST, bode well for the quality of such future measurements. We
reiterate that these observations were not optimized for observing Cepheids
and are far from the best that JWST can do. Optimal observations would be
longer in exposure time, cover multiple passbands to the necessary depth,
include shorter wavelengths for better resolution, include multiple epochs to
reduce the random phase noise, have higher signal-to-noise calibration frames
(flats, darks, bias frames, chip offsets, geometric distortion for locating
Cepheids, etc.) available, and better cover the regions where past HST programs
have found Cepheids and measured their periods.
We also note that it is too early in the life of JWST and NIRCam to identify
and calibrate subtle photometric effects. There is one such effect we are
aware of, the count-rate non-linearity (CRNL), which makes faint objects
appear fainter, though the scale of this effect has been diminishing with
improvements in NIR detector manufacturing and testing used to select the best
chips. Because the level of CRNL has not yet been measured in space for
NIRCam, we did not correct either the NIRCam or the WFC3/IR Cepheid photometry
for this effect, so to first approximation we might expect that CRNL cancels
in the comparisons provided here. For WFC3/IR, CRNL makes the Cepheids in NGC
1365 $\sim 0.03$ mag fainter relative to the flux level of standard stars (Riess
et al., 2009). If the CRNL of NIRCam is $\sim$ half the level of WFC3/IR (our
guess), the error in the comparison will be $\sim 0.015$ mag, negligible at
the precision of this study, but important to calibrate for future, larger
samples. The single-epoch sampling of this JWST observation introduces a
statistical bias of $\sim 0.005$ mag in the Cepheid Period-Luminosity relation
compared to the typical flux-averaged (multi-epoch) observations. This bias is
again negligible for the precision of this study.
Nevertheless, the quantitative comparison of the first JWST Cepheid Period-
Luminosity intercepts presented here is promising, and already significant as
a check on past HST measurements. We expect that the calibration of this
observatory will only improve and mature, leading to future observations that
should provide ever more definitive investigations.
We are indebted to all of those who spent years and even decades bringing JWST
to fruition. We are grateful to the proposers of GO-2107 (PI: Janice Lee) for
making their program non-proprietary, enabling the community to undertake
assorted investigations from these data, including this study. This research
made use of NASA's Astrophysics Data System.
## References
* Boyer et al. (2022) Boyer, M. L., Anderson, J., Gennaro, M., et al. 2022, arXiv e-prints, arXiv:2209.03348. https://arxiv.org/abs/2209.03348
* Brammer (2022) Brammer, G. 2022, grizli, 1.5.0, doi: 10.5281/zenodo.5012699
* Bressan et al. (2012) Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127, doi: 10.1111/j.1365-2966.2012.21948.x
* Cruz Reyes & Anderson (2022) Cruz Reyes, M., & Anderson, R. I. 2022, arXiv e-prints, arXiv:2208.09403. https://arxiv.org/abs/2208.09403
* Eddington (1927) Eddington, A. S. 1927, MNRAS, 87, 539, doi: 10.1093/mnras/87.7.539
* Freedman et al. (2001) Freedman, W. L., Madore, B. F., Gibson, B. K., et al. 2001, ApJ, 553, 47, doi: 10.1086/320638
* Hoffmann et al. (2016) Hoffmann, S. L., Macri, L. M., Riess, A. G., et al. 2016, ApJ, 830, 10, doi: 10.3847/0004-637X/830/1/10
* Leavitt & Pickering (1912) Leavitt, H. S., & Pickering, E. C. 1912, Harvard College Observatory Circular, 173, 1
* Macri et al. (2006) Macri, L. M., Stanek, K. Z., Bersier, D., Greenhill, L. J., & Reid, M. J. 2006, ApJ, 652, 1133, doi: 10.1086/508530
* Merlin et al. (2022) Merlin, E., Bonchi, A., Paris, D., et al. 2022, arXiv e-prints, arXiv:2207.11701. https://arxiv.org/abs/2207.11701
* Nardiello et al. (2022) Nardiello, D., Bedin, L. R., Burgasser, A., et al. 2022, arXiv e-prints, arXiv:2209.06547. https://arxiv.org/abs/2209.06547
* Rieke et al. (2022) Rieke, G. H., Su, K., Sloan, G. C., & Schlawin, E. 2022, AJ, 163, 45, doi: 10.3847/1538-3881/ac3b5d
* Riess et al. (2009) Riess, A. G., Macri, L., Casertano, S., et al. 2009, ApJ, 699, 539, doi: 10.1088/0004-637X/699/1/539
* Riess et al. (2011) —. 2011, ApJ, 730, 119, doi: 10.1088/0004-637X/730/2/119
* Riess et al. (2022a) Riess, A. G., Breuval, L., Yuan, W., et al. 2022a, arXiv e-prints, arXiv:2208.01045. https://arxiv.org/abs/2208.01045
* Riess et al. (2022b) Riess, A. G., Yuan, W., Macri, L. M., et al. 2022b, ApJ, 934, L7, doi: 10.3847/2041-8213/ac5c5b
* Sandage et al. (2006) Sandage, A., Tammann, G. A., Saha, A., et al. 2006, ApJ, 653, 843, doi: 10.1086/508853
* Silbermann et al. (1999) Silbermann, N. A., Harding, P., Ferrarese, L., et al. 1999, ApJ, 515, 1, doi: 10.1086/307002
* Stetson (1987) Stetson, P. B. 1987, PASP, 99, 191, doi: 10.1086/131977
* Stetson (1994) —. 1994, PASP, 106, 250, doi: 10.1086/133378
## Appendix A Cepheid measurements from HST
Table 3: HST F160W Cepheid Photometry
ID | $P$ | F160W | $\sigma$ | F555W$-$F814W | F160W$-$F200W | R.A.$^{a}$ | Decl.
---|---|---|---|---|---|---|---
 | [days] | [mag] | [mag] | [mag] | [mag] | [deg] (J2000.0) |
60205 | 25.16 | 25.03 | 0.54 | 1.18 | 0.18 | 53.435572 | -36.144146
136735 | 25.57 | 24.41 | 0.23 | 0.89 | 0.12 | 53.465135 | -36.152743
43927 | 25.57 | 24.98 | 0.50 | 1.17 | 0.17 | 53.440450 | -36.135135
101154 | 25.94 | 23.89 | 0.73 | 0.98 | 0.13 | 53.426225 | -36.165263
106082 | 26.38 | 24.40 | 0.68 | 0.84 | 0.11 | 53.432670 | -36.161386
74699 | 26.44 | 24.28 | 0.55 | 1.23 | 0.19 | 53.426941 | -36.156136
63449 | 26.81 | 24.39 | 0.40 | 1.35 | 0.22 | 53.445400 | -36.136227
138773 | 26.83 | 24.35 | 0.21 | 1.04 | 0.15 | 53.462525 | -36.157297
101112 | 26.88 | 24.37 | 0.63 | 0.98 | 0.13 | 53.426292 | -36.165183
120972 | 27.30 | 24.46 | 0.31 | 1.04 | 0.15 | 53.443078 | -36.160625
126914 | 27.33 | 24.78 | 0.30 | 0.80 | 0.10 | 53.455397 | -36.153646
40364 | 27.34 | 23.94 | 0.48 | 0.91 | 0.12 | 53.429471 | -36.143937
65336 | 27.79 | 24.71 | 0.38 | 0.89 | 0.12 | 53.446800 | -36.135500
124631 | 29.17 | 24.22 | 0.27 | 1.02 | 0.14 | 53.438970 | -36.166817
130859 | 29.21 | 24.56 | 0.24 | 0.98 | 0.13 | 53.458170 | -36.153817
133465 | 29.29 | 24.44 | 0.23 | 1.34 | 0.22 | 53.460423 | -36.154001
105797 | 30.17 | 23.98 | 0.47 | 1.28 | 0.20 | 53.431618 | -36.162206
106470 | 30.23 | 24.30 | 0.32 | 0.93 | 0.12 | 53.427648 | -36.166054
100027 | 31.20 | 24.11 | 0.29 | 0.66 | 0.08 | 53.439011 | -36.153369
73421 | 33.32 | 24.52 | 0.56 | 0.91 | 0.12 | 53.427399 | -36.155314
122163 | 33.91 | 24.20 | 0.19 | 1.23 | 0.19 | 53.449210 | -36.155957
139368 | 34.28 | 24.64 | 0.18 | 1.09 | 0.16 | 53.459369 | -36.160824
87703 | 34.92 | 23.51 | 0.24 | 1.22 | 0.19 | 53.449343 | -36.140061
117850 | 35.09 | 24.21 | 0.46 | 1.04 | 0.15 | 53.445830 | -36.156138
61628 | 36.80 | 23.79 | 0.52 | 1.63 | 0.30 | 53.427562 | -36.151741
103387 | 36.88 | 24.25 | 0.26 | 0.94 | 0.12 | 53.447790 | -36.146777
80315 | 36.90 | 23.12 | 0.27 | 1.20 | 0.18 | 53.449213 | -36.137887
142648 | 36.94 | 24.32 | 0.14 | 0.93 | 0.12 | 53.457318 | -36.166545
90510 | 38.84 | 23.92 | 0.42 | 1.05 | 0.15 | 53.433201 | -36.155411
128912 | 40.51 | 23.39 | 0.38 | 1.14 | 0.17 | 53.437587 | -36.170953
103704 | 40.63 | 24.24 | 0.28 | 1.43 | 0.24 | 53.440403 | -36.153516
104907 | 42.66 | 23.14 | 0.42 | 1.11 | 0.16 | 53.432245 | -36.161310
123489 | 42.77 | 23.96 | 0.23 | 0.95 | 0.13 | 53.431458 | -36.172785
109560 | 46.44 | 23.80 | 0.20 | 1.54 | 0.27 | 53.447707 | -36.149730
101731 | 46.99 | 23.31 | 0.43 | 0.90 | 0.12 | 53.437427 | -36.155425
93422 | 51.07 | 23.65 | 0.37 | 1.49 | 0.26 | 53.431666 | -36.157737
94055 | 51.17 | 24.00 | 0.29 | 1.35 | 0.22 | 53.435250 | -36.154750
134975 | 52.31 | 23.21 | 0.15 | 0.88 | 0.11 | 53.460099 | -36.155567
Note. — $a$: Positions are referenced to the WCS of HST F160W images processed
using AstroDrizzle v2.2.6.
Figure 4: Same as Figure 2 but for 5 examples in the more typical, lower-
density HST F160W field, where JWST observations are not available. See Figure
1 for location.
# Bounding the edge cover of a hypergraph
Farhad Shahrokhi
Department of Computer Science and Engineering, UNT
P.O. Box 13886, Denton, TX 76203-3886, USA
###### Abstract
Let $H=(V,E)$ be a hypergraph. A subset $C\subseteq E$ is an edge cover,
or a set cover, if $\cup_{e\in C}\\{v|v\in e\\}=V$. A subset of vertices $X$
is independent in $H$ if no two vertices in $X$ are in a common edge. Let $c(H)$
and $\alpha(H)$ denote the cardinalities of a smallest edge cover and a largest
independent set in $H$, respectively. We show that $c(H)\leq{\hat{m}}(H)\alpha(H)$,
where ${\hat{m}}(H)$ is a parameter called the mighty degeneracy of $H$.
Furthermore, we show that the inequality is tight and demonstrate
applications in domination theory.
## 1 Introduction
We assume the reader is familiar with standard graph theory [5], hypergraph
theory [1], [3], domination theory [11], and algorithm analysis [6].
Throughout this paper we denote by $H=(V,E)$ a hypergraph on vertex set $V$
and edge set $E$; thus any $e\in E$ is a subset of $V$. We do not allow
multiple edges in our definition of a hypergraph, unless explicitly stated.
Every hypergraph can be represented by its incidence bipartite graph $B$ whose
vertex set is $V\cup E$: if $x\in V$ and $e\in E$, then $xe$ is an edge in
$B$ provided that $x\in e$. Let $C\subseteq E$; then $C$ is an edge cover, or
a set cover, if $\cup_{e\in C}\\{v|v\in e\\}=V$. A subset of vertices $X$ is
independent in $H$ if no two vertices in $X$ are in a common edge. Let $c(H)$ and
$\alpha(H)$ denote the cardinalities of a smallest edge cover and a
largest independent set in $H$, respectively. It is known that computing
$\alpha(H)$ and $c(H)$ are NP-hard problems [10]. Clearly,
$c(H)\geq\alpha(H)$. Furthermore, it is known that $c(H)$ cannot be bounded
above by a function of $\alpha(H)$ alone. However, an important result in this
area is known. Specifically, it is a consequence of the result in [9] that
$c(H)={\alpha(H)}^{O(2^{v})}$ (1)
where $v$ denotes the VC dimension of $H$ [16]. The design of approximation
algorithms for the edge cover problem has been an active and ongoing research
topic in computer science. A greedy algorithm [7], [13] is known to approximate
$c(H)$ within a factor of $O(\log n)$ of its optimal value. Moreover, there are
examples of hypergraphs showing that the worst-case approximation factor of
$O(\log n)$ cannot be improved [4].
The main result of this paper is to show that
$c(H)\leq{\hat{m}}(H)\alpha(H)$ (2)
where the multiplicative factor ${\hat{m}}(H)$ is a parameter called the
mighty degeneracy of $H$ which we introduce here. Recall that a set
$S\subseteq V$ is a transversal set (hitting set) in the hypergraph $H=(V,E)$,
if every $e\in E$ has a vertex in $S$. A set $M\subseteq E$ is a matching in
$H$, if every two edges in $M$ are disjoint. Let $\tau(H)$ and $\rho(H)$
denote the sizes of a smallest transversal and a largest matching in $H$,
respectively, and note that $\tau(H)\geq\rho(H)$.
A direct consequence of (2) is that
$\tau(H)\leq{\hat{m}}(H^{d})\rho(H)$ (3)
where ${\hat{m}}(H^{d})$ is the mighty degeneracy of the dual hypergraph of
$H$, defined as $H^{d}=(E,V)$.
This paper is organized as follows. In Section Two we introduce some terms and
concepts and set up our notation. In particular, we introduce the strong
degeneracy of a hypergraph, denoted by ${\hat{s}}(H)$, which is an upper bound
on ${\hat{m}}(H)$. In Section Three we derive (2), which is the main result,
and also present a linear time algorithm for computing ${\hat{s}}(H)$. Section
Four contains the applications to the domination theory of graphs. Specifically,
we show that ${\hat{s}}(H)=1$ (and hence ${\hat{m}}(H)=1$) when the underlying
graph $G$ is a tree and $H$ is the so-called closed or open neighborhood
hypergraph of $G$. Consequently, we provide new proofs (and algorithms) for
two classical results in domination theory [14], [15], by showing that in any
tree the size of a smallest dominating (total dominating) set equals the
size of a largest 2-packing (open 2-packing). The results in Section Four are
conveniently derived using the concept of strong degeneracy instead of mighty
degeneracy; generally speaking, however, the former can be much larger than the
latter. In Section Five we give examples of hypergraphs with bounded mighty
degeneracy whose strong degeneracy is a linear function of the number of
vertices. Section Six contains our suggestions for future research.
## 2 Preliminaries
Let $H=(V,E)$, let $S\subseteq V$ and $e\in E$, then $e\cap S$ is the trace of
$e$ on $S$. The restriction of $H$ to $S$, denoted by $H[S]$, is the
hypergraph on vertex set $S$ whose edges are the set of all distinct traces of
edges in $E$ on $S$. $H[S]$ is also referred to as the induced subhypergraph
of $H$ on $S$. In general, a hypergraph $I$ is a subhypergraph of $H$ if it
can be obtained by removing some vertices and some edges from $H$ (when a
vertex set is removed from $H$, the edges of $H$ are updated
accordingly). $S$ is shattered in $H$ if every $X\subseteq S$ is a trace. Thus
if $S$ is shattered, then it has $2^{|S|}$ traces. The Vapnik–Chervonenkis
(VC) dimension of a hypergraph $H$, denoted by $vc(H)$, is the cardinality of
the largest subset of $V$ which is shattered in $H$. Let $H=(V,E)$ and let
$x\in V$. The degree of $x$ denoted by $d_{H}(x)$ is the number of edges that
contain $x$. The strong degree of $x$ in $H$, denoted by $s_{H}(x)$, is the
number of distinct maximal edges that contain $x$. ( An edge is maximal, if it
is not properly contained in another edge.) Let $\delta(H)$ and $s(H)$ denote
the smallest degree and smallest strong degree, respectively, of any vertex in
$H$. The degeneracy and strong degeneracy of $H$, denoted by
${\hat{\delta}}(H)$ and ${\hat{s}}(H)$, respectively, are the largest minimum
degree and largest minimum strong degree of any induced subhypergraph of $H$.
Let $R\subseteq V$. A strong subset of $V$ in $H$ is a nonempty subset of $V$
obtained by removing from $V$ all vertices in $R$, together with all vertices
contained in edges that have nonempty intersection with $R$.
The mighty degeneracy of $H$, denoted by ${\hat{m}}(H)$, is the largest
minimum strong degree of any subhypergraph of $H$ induced on a strong subset
of $V$. Clearly, for any
$x\in V$ one has ${s}_{H}(x)\leq{d}_{H}(x)$ and consequently
${\hat{m}}(H)\leq{\hat{s}}(H)\leq{\hat{\delta}}(H)$ (4)
## 3 Our Greedy Algorithms
Our next result is the main result of this paper.
###### Theorem 3.1.
Let $H=(V,E)$ be a hypergraph. Then there is an edge cover $C$ and an
independent set $X$ in $H$ so that
$|C|\leq{\hat{m}}(H)|X|$ (5)
and consequently
$|C|\leq{\hat{s}}(H)|X|$ (6)
Moreover, $X$ and $C$ can be constructed in $O(|V|+\sum_{e\in E}|e|)$
time.
Proof. Consider the following algorithm.
Initially, set $i\leftarrow 1$, $I\leftarrow H$, $W\leftarrow V$ and $K\leftarrow
E$. While there are vertices in $W$, repeat the following steps: remove the
vertex of minimum strong degree, denoted by $x_{i}$, from $W$; remove the set
of all distinct maximal edges containing $x_{i}$ from $K$; then remove
the set of all vertices contained in these edges from $W$; and finally set
$i\leftarrow i+1$.
Clearly, the algorithm terminates. Now let $t$ be the number of iterations of the
algorithm and, at any iteration $i=1,2,...,t$, let $I_{i}$ denote the
constructed hypergraph (which is induced on a strong subset), and let $W_{i}$
(a strong subset) and $K_{i}$ denote, respectively, the vertices and edges of
this hypergraph. Let $X=\\{x_{1},x_{2},...,x_{t}\\}$ be the set of all
vertices removed from $H$ when the algorithm terminates. Clearly, $X$ is an
independent set in $H$. We denote by $K_{x_{i}}$ the set of all distinct
maximal edges containing the vertex $x_{i}$ in the hypergraph $I_{i}$ at
iteration $i$ of the algorithm, and note that $|K_{x_{i}}|\leq{\hat{m}}(H)$,
since $x_{i}$ is a vertex of minimum strong degree in $I_{i}$. Consequently,
$\sum_{i=1}^{t}|K_{x_{i}}|\leq{\hat{m}}(H)\times t={\hat{m}}(H)\times|X|$ (7)
Now for $i=1,2,...,t$, let $C_{x_{i}}$ be the set of all edges in $H$ obtained
by extending each edge of $K_{x_{i}}$ in $I_{i}$ to an edge in $H$, and let
$C=\cup_{i=1}^{t}C_{x_{i}}$. Clearly, $C$ is an edge cover and, furthermore,
$|C|=|\cup_{i=1}^{t}C_{x_{i}}|$, and therefore the first claim follows from
(7).
To verify the second inequality, note that ${\hat{m}}(H)\leq{\hat{s}}(H)$. We
omit the details of the claims regarding time complexity, which involve
representing $H$ as a bipartite graph. $\Box$
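To make the construction concrete, here is a minimal Python sketch of the greedy procedure in the proof (a direct transcription that recomputes traces each round, not the linear-time implementation claimed in the theorem; it assumes every vertex lies in at least one edge):

```python
def greedy_cover(V, E):
    """Greedy construction from the proof of Theorem 3.1: returns an
    edge cover C and an independent set X with |C| at most the mighty
    degeneracy of H times |X|.  Assumes every vertex lies in some edge."""
    E = [frozenset(e) for e in E]
    W = set(V)
    C, X = [], []
    while W:
        # Induced subhypergraph on W: distinct traces of edges on W,
        # remembering one original edge per trace for the extension step.
        trace_of = {}
        for e in E:
            t = e & W
            if t:
                trace_of.setdefault(t, e)
        maxi = [t for t in trace_of if not any(t < u for u in trace_of)]
        # Vertex of minimum strong degree in the induced subhypergraph.
        x = min(W, key=lambda v: sum(v in t for t in maxi))
        K_x = [t for t in maxi if x in t]    # maximal traces containing x
        X.append(x)
        C.extend(trace_of[t] for t in K_x)   # extend traces back to edges of H
        W -= set().union(*K_x)               # drop all newly covered vertices
    return C, X

V = {1, 2, 3, 4}
E = [{1, 2}, {2, 3}, {3, 4}]
print(greedy_cover(V, E))   # a cover of size 2 and an independent set of size 2
```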
To use Theorem 3.1 we need to know ${\hat{m}}(H)$. Alternatively, we
can use ${\hat{s}}(H)$, which is an upper bound for ${\hat{m}}(H)$. At this
time, we still do not know how to efficiently compute ${\hat{m}}(H)$. We finish
this section by presenting a simple greedy algorithm for computing
${\hat{s}}(H)$, which is similar to the known algorithm for computing the
degeneracy ${\hat{\delta}}(H)$ of $H$. The properties of the output of the
algorithm will be used to prove our results in the next section.
###### Theorem 3.2.
Let $H=(V,E)$ be a hypergraph on $n$ vertices, then ${\hat{s}}(H)$ can be
computed in $O(|V|+\sum_{e\in E}|e|)$ time.
Proof. Consider the following algorithm. For $i=1,2,...,n$, select a vertex
$x_{i}$ of minimum strong degree $s_{i}={s}(H_{i})$ in the induced
subhypergraph $H_{i}=H[V_{i}]$ whose vertex set is
$V_{i}=V-\\{x_{1},x_{2},...,x_{i-1}\\}$ and whose edge set is denoted by
$E_{i}$. Let $s=\max\\{s_{i},i=1,2,...,n\\}$. We claim that ${\hat{s}}(H)=s$.
Note that ${\hat{s}}(H)\geq s$. We will show that ${\hat{s}}(H)\leq s$. Now
let $I=(W,F)$ be an induced subhypergraph of $H$ whose minimum strong degree
equals ${\hat{s}}(H)$ and let $j$, $1\leq j\leq n$, be the smallest integer so
that $x_{j}\in W$. Then $s_{I}(x_{j})\leq s_{j}={s}(H_{j})\leq s$, since
$W\subseteq V_{j}$, and consequently the claim is proved. To verify the
claim on time complexity, one needs to represent $H$ as a bipartite graph
as the input of the algorithm. The details are omitted. $\Box$
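A direct Python transcription of this elimination scheme (quadratic time, for clarity; the linear-time bookkeeping is omitted) is sketched below:

```python
def strong_degeneracy(V, E):
    """Elimination scheme from the proof of Theorem 3.2: repeatedly
    delete a vertex of minimum strong degree in the current induced
    subhypergraph and return the largest minimum strong degree seen."""
    E = [frozenset(e) for e in E]
    W = set(V)
    s_hat = 0
    while W:
        traces = {e & W for e in E if e & W}
        maxi = [t for t in traces if not any(t < u for u in traces)]
        x = min(W, key=lambda v: sum(v in t for t in maxi))
        s_hat = max(s_hat, sum(x in t for t in maxi))
        W.discard(x)
    return s_hat

# Closed neighborhoods of the path 1-2-3-4 as edges: the strong
# degeneracy is 1, consistent with Theorem 4.1 below for trees.
E = [{1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4}]
print(strong_degeneracy({1, 2, 3, 4}, E))   # 1
```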
## 4 Applications in domination theory
For a graph $G$ on vertex set $V$ and $x\in V$, let $N(x)$ denote the open
neighborhood of $x$, that is, the set of all vertices adjacent to $x$, not
including $x$. The closed neighborhood of $x$ is $N[x]=N(x)\cup\\{x\\}$. The
closed (open) neighborhood hypergraph of a graph $G$ is the hypergraph on the
same vertex set as $G$ whose edges are the closed (open) neighborhoods $N[x]$
(respectively $N(x)$) for $x\in V$. A subset of vertices $S$ in $G$ is a
dominating set [11] if for every vertex $x$ in $G$, $N[x]\cap S\neq\emptyset$;
$S$ is a total (or open) dominating set if $N(x)\cap S\neq\emptyset$ for every
vertex $x$. $S$ is a 2-packing (packing) if for any distinct pair $x,y\in S$,
$N[x]$ and $N[y]$ do not intersect; $S$ is an open 2-packing (open packing) if
for any distinct pair $x,y\in S$, $N(x)$ and $N(y)$ do not intersect. Let
$\gamma(G),\gamma^{o}(G),\alpha_{2}(G)$ and $\alpha^{o}_{2}(G)$ denote the
sizes of a smallest dominating set, a smallest total dominating set, a largest
2-packing and a largest open 2-packing, respectively, in $G$. Computing
$\gamma(G),\gamma^{o}(G),\alpha_{2}(G)$ and $\alpha^{o}_{2}(G)$ is known to
be NP-hard. $\gamma(G)$ can be approximated within a factor of $O(\log n)$
of its optimal value in $O(n+m)$ time, where $n$ and $m$ are the
numbers of vertices and edges of $G$; the approximation algorithm arises
from the greedy algorithm for the set cover problem [7], [13]. It is
known that the approximation factor of $O(\log n)$ cannot be improved
asymptotically.
The following observation summarizes basic
properties of neighborhood hypergraphs as they relate to our work.
###### Observation 4.1.
Let $H$ be the closed neighborhood hypergraph of a graph $G$ with vertex set
$V$.
1. (i)
Let $S\subseteq V$, then $S$ is a dominating set in $G$ if and only if $S$ is
an edge cover in $H$.
2. (ii)
Let $S\subseteq V$, then $S$ is a packing in $G$ if and only if $S$ is an
independent set in $H$.
3. (iii)
Let $x\in V$, then $s_{H}(x)\leq\deg(x)+1$, where $\deg(x)$ is the degree of $x$ in
$G$. Consequently, ${\hat{s}}(H)\leq\Delta(G)+1$, where $\Delta(G)$ is the
maximum degree of $G$.
4. (iv)
If $G$ is a tree and $x\in V$ is a leaf, then $s_{H}(x)=1$.
###### Remark 4.1.
Observation 4.1 remains valid if $H$ is the open neighborhood hypergraph of $G$,
with the exception that in item $(iii)$ one has $s_{H}(x)\leq\deg(x)$ and
consequently ${\hat{s}}(H)\leq\Delta(G)$.
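For illustration, the neighborhood hypergraphs can be built from an adjacency list as in the following sketch, which checks the bound of Observation 4.1(iii) on a small tree (reusing `strong_degeneracy` from the sketch in Section Three):

```python
def closed_nbhd_hypergraph(adj):
    """Edges N[x] for every vertex x; adj maps a vertex to its
    set of neighbors."""
    return [frozenset(adj[x]) | {x} for x in adj]

def open_nbhd_hypergraph(adj):
    """Edges N(x) for every vertex x."""
    return [frozenset(adj[x]) for x in adj]

# The path 1-2-3-4 as a graph.  Its closed neighborhood hypergraph has
# strong degeneracy 1 <= Delta(G) + 1, the bound of Observation 4.1(iii).
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
H = closed_nbhd_hypergraph(adj)
Delta = max(len(adj[x]) for x in adj)
print(strong_degeneracy(set(adj), H), Delta + 1)   # 1 3
```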
By the above observation, if we apply the greedy algorithm in Theorem 3.1 to
the neighborhood hypergraph of a graph $G$, we obtain a dominating (total
dominating) set $C$ and a packing (open packing) $X$ so that
$|C|\leq{\hat{s}}(H)|X|$. To determine how small $C$ is, we need to estimate
${\hat{s}}(H)$ for the hypergraph $H$. As stated above, we only know
${\hat{s}}(H)\leq\Delta(G)+1$, where $\Delta(G)$ is the maximum degree of $G$.
For trees one can get a significantly better result.
Let $T$ be a tree and let $T_{1}$ be the tree obtained after removing
all leaves of $T$. Then each leaf in $T_{1}$ is a support vertex in $T$
(attached to a leaf of $T$) and is called a canonical support vertex in $T$.
Next we derive two classical results in domination theory that were first
proved in [14] and [15], respectively.
###### Theorem 4.1.
Let $T$ be a tree on the vertex set $V$ whose closed and open neighborhood
hypergraphs are $H$ and $H^{o}$, respectively. Then, the following hold.
1. (i)
${\hat{s}}(H)={\hat{m}}(H)=1$ and consequently $\gamma(T)=\alpha_{2}(T)$.
2. (ii)
${\hat{s}}(H^{o})={\hat{m}}(H^{o})=1$ and consequently
$\gamma^{o}(T)=\alpha^{o}_{2}(T)$.
Moreover, the dominating and packing sets can be obtained in $O(|V|)$ time.
Proof. We first verify that at each iteration of the greedy algorithm in
Theorem 3.2 a vertex of strong degree one is detected. This shows
${\hat{s}}(H)=1$. We then apply the greedy algorithm in Theorem 3.1 to obtain
the equality of the packing and domination numbers.
To prove the first claim, note that the algorithm in Theorem 3.2 can break
ties arbitrarily. So assume that the algorithm first selects the leaves of $T$,
which, as stated in Observation 4.1, have strong degree one in $H$. Now consider
the execution of the algorithm on the tree $T_{1}$ obtained after removing all
leaves of $T$. If $T_{1}$ is empty we are done, since all selected vertices
have had strong degree one. So assume $T_{1}$ is not empty.
Claim. Let $x$ be a leaf in $T_{1}$; then $s_{I}(x)=1$, where $I$ is the
induced closed neighborhood hypergraph obtained after the removal of all
leaves of $T$.
Proof of claim. Since $x$ is a leaf in $T_{1}$, there is exactly one vertex $z$
adjacent to $x$ in $T_{1}$. Now let $Y\subset V$ be the set of leaves of $T$
adjacent to $x$ (in $T$) and let $N_{I}[y]$ denote the trace of the closed
neighborhood of $y\in Y$ in $I$ after the removal of $y$. Then we have
$N_{I}[y]=\\{x\\}$. Additionally, note that $N_{I}[x]=\\{x,z\\}\subseteq
N_{I}[z]$, since $x\in N_{I}[z]$, and consequently $s_{I}(x)=1$.
Coming back to the proof, now let the algorithm select the leaves of $T_{1}$;
then delete all these leaves and continue the process with the tree obtained
after their removal. This proves ${\hat{s}}(H)=1$ and consequently
${\hat{m}}(H)=1$. Now run the algorithm in Theorem 3.1 on $T$ to prove
$\gamma(T)=\alpha_{2}(T)$.
The proof of the second claim is similar to the first and is omitted. The claim
on the time complexity follows from the running times stated in Theorems 3.1
and 3.2. $\Box$
## 5 The gap between ${\hat{m}}(H)$ and ${\hat{s}}(H)$
In the proof of Theorem 4.1, we were able to effectively use ${\hat{s}}(H)$
instead of ${\hat{m}}(H)$. However, in general this may not be possible since
${\hat{s}}(H)$ can be much larger than ${\hat{m}}(H)$ as demonstrated in the
following.
###### Theorem 5.1.
For any integer $n\geq 3$ there is an $n$-vertex hypergraph $H$ such that
${\hat{m}}(H)=2$ and ${\hat{s}}(H)=n-2$.
Proof. Let $G$ be a graph on vertex set $V=\\{v_{1},v_{2},...,v_{n}\\}$
composed of a clique on vertex set $\\{v_{2},v_{3},...,v_{n}\\}$ together with
the vertex $v_{1}$ (which is not in the clique) adjacent to vertex $v_{2}$
(which is in the clique). Now define a hypergraph $H=(V,E)$ with
$E=\\{N[v_{1}]\\}\cup\\{N(v_{i}):i=2,...,n\\}$. Note that
$s_{H}(v_{1})=2$ (8)
since $N[v_{1}]$ and $N(v_{2})$ are maximal edges of $H$ containing $v_{1}$.
It is also easy to verify that
$s_{H}(v_{2})=n-1\mbox{ and that }s_{H}(v_{i})=n-2\mbox{ for }i=3,4,...,n$ (9)
Next note that the only strong subset of $V$ in $H$ is $V$ itself, and thus
(8) and (9) imply ${\hat{m}}(H)=s_{H}(v_{1})=2$, as claimed for the mighty
degeneracy.
Now consider the induced hypergraph $I$ on vertex set $W=V-\\{v_{1}\\}$, whose
edges are obtained by removing $v_{1}$ from those edges of $H$ that contain
$v_{1}$ (these edges are $N[v_{1}]$ and $N(v_{2})$). One can verify that
$s_{I}(v_{i})=n-2\mbox{ for }i=2,3,...,n$ (10)
which implies ${\hat{s}}(H)=n-2$, as claimed. $\Box$
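The construction is easy to check numerically for small $n$; the following sketch computes the strong degrees in (8) and (9) for $n=6$:

```python
def gap_hypergraph(n):
    """The construction in the proof of Theorem 5.1: a clique on
    v2..vn plus a pendant vertex v1 adjacent to v2; the edges are
    N[v1] and N(v2), ..., N(vn)."""
    clique = set(range(2, n + 1))
    nbrs = {1: {2}, 2: (clique - {2}) | {1}}
    nbrs.update({i: clique - {i} for i in range(3, n + 1)})
    E = [frozenset(nbrs[1] | {1})]                      # N[v1]
    E += [frozenset(nbrs[i]) for i in range(2, n + 1)]  # N(vi), i >= 2
    return set(range(1, n + 1)), E

n = 6
V, E = gap_hypergraph(n)
edges = set(E)
maxi = [e for e in edges if not any(e < f for f in edges)]
for v in sorted(V):
    print(v, sum(v in e for e in maxi))
# v1 has strong degree 2, v2 has n-1, and v3..vn have n-2,
# matching (8) and (9).
```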
## 6 Future Work
This paper contains our preliminary results and we suggest several directions
for future research.
We do not yet know whether ${\hat{m}}(H)$ can be computed in polynomial
time. We suspect that a variation of the algorithm in Theorem 3.1 can
actually compute ${\hat{m}}(H)$, but we have not been able to prove it.
The connections between $vc(H)$ and ${\hat{m}}(H)$ (or ${\hat{s}}(H)$) need
to be explored further. Is it true that one can always be bounded by a
function of the other?
The most recent results for the approximation of $\gamma(G)$ (the domination
number of a graph $G$) in sparse graphs require solving linear programming
relaxations (fractional versions) of the problem and then rounding the
solutions [2], [8]. For a recent survey see [12]. We suspect that a proper
modification of our method in Section Four would give similar results without
the need to actually solve the linear programming problems.
## References
* [1] Berge C.: Theory of Graphs and its Applications. Methuen, London (1962).
* [2] Bansal N., Umboh S. W.: Tight approximation bounds for dominating set on graphs of bounded arboricity. Information Processing Letters (2017), 21-24.
* [3] Bousquet N.: Hitting sets: VC-dimension and Multicuts. Université Montpellier II-Sciences et Techniques du Languedoc (2013).
* [4] Brönnimann H., Goodrich M.T.: Almost Optimal Set Covers in Finite VC-Dimension. Discret. Comput. Geom. 1995, 14, 463–479.
* [5] Chartrand G., Lesniak L., Zhang P.: Graphs and Digraphs; CRC Press: Boca Raton, FL, USA, 2010.
* [6] Cormen T.H., Leiserson C.E., Rivest R.L., Stein C.: Introduction to Algorithms, MIT Press: Cambridge.
* [7] Chvatal V.: A greedy heuristic for the set-covering problem, Mathematics of Operations Research, 4(3):233-235, 1979.
* [8] Dvorak Z. On distance r-dominating and 2r-independent sets in sparse graphs. J. Graph Theory 2017.
* [9] Ding G.L., Seymour P., Winkler P.: Bounding the vertex cover number of a hypergraph, Combinatorica volume 14, pages 23–34 (1994).
* [10] Garey M.R., Johnson D.J.: Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, CA, 1978.
* [11] Haynes T. W., Hedetniemi S., Slater P.: Fundamentals of Domination in Graphs, CRC press, 1988.
* [12] Li J., Potru R., Shahrokhi F.: A Performance Study of Some Approximation Algorithms for Computing a Small Dominating Set in a Graph, Algorithms 13 (12), 339, 2021.
* [13] Lovasz L.: On the Ratio of Optimal Integral and Fractional Covers. Discrete Mathematics, Vol. 13, 1975, 383-390.
* [14] Meir A., Moon J. W.: Relations between packing and covering numbers of a tree, Pacific Journal of Mathematics 61 (1975) 225–233.
* [15] Rall D.F.: Total Domination in Categorical Products of Graphs. Discussiones Mathematicae Graph Theory 25(1-2):35-44, 2005.
* [16] Vapnik V. N., Chervonenkis A.: On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities. Theory of Probability and Its Applications, 16(2), pp. 264-279 (1971).
# Scalable Feature Matching Across Large Data Collections
David Degras
###### Abstract
This paper is concerned with matching feature vectors in a one-to-one fashion
across large collections of datasets. Formulating this task as a
multidimensional assignment problem with decomposable costs (MDADC), we
develop extremely fast algorithms with time complexity linear in the number
$n$ of datasets and space complexity a small fraction of the data size. These
remarkable properties hinge on using the squared Euclidean distance as
dissimilarity function, which can reduce ${n\choose 2}$ matching problems
between pairs of datasets to $n$ problems and enable calculating assignment
costs on the fly. To our knowledge, no other method applicable to the MDADC
possesses these linear scaling and low-storage properties necessary for large-
scale applications. In numerical experiments, the novel algorithms outperform
competing methods and show excellent computational and optimization
performances. An application of feature matching to a large neuroimaging
database is presented. The algorithms of this paper are implemented in the R
package `matchFeat` available at github.com/ddegras/matchFeat.
## 1 Introduction
Matching objects across units (e.g. subjects, digital images, or networks)
based on common descriptor variables is a ubiquitous task in applied science.
This problem, variously known as _object matching_ , _feature matching_ ,
_data association_ , or _assignment problem_ , is at the core of applications
such as resource allocation (Pierskalla, 1968), object tracking (Thornbrue et
al., 2010; Bar-Shalom et al., 2011; Dehghan et al., 2015; Rezatofighi et al.,
2015; Smeulders et al., 2014; Wang et al., 2015), object recognition (Lowe,
2001; Belongie et al., 2002; Conte et al., 2004), navigation systems (Doherty
et al., 2019), image registration (Le Moigne et al., 2011; Ashburner, 2007),
optimization of communication networks (Shalom et al., 2010), connectomics in
neuroscience (Haxby et al., 2011; Vogelstein et al., 2015), and more.
The impetus for this work is a task in functional neuroimaging which consists
in matching collections of biomarkers (more precisely, brain connectivity
measures) between the subjects of a study. The matching process may serve in
data exploration to provide new scientific insights and generate hypotheses.
It can also be a preliminary step in a group analysis to ensure meaningful
comparisons across subjects. Key aspects of the matching problem under study
are that: (i) the number of subjects and/or the number of biomarkers per
subject may be large, posing computational challenges, (ii) for two given
subjects, each biomarker of one subject must be matched to at most one
biomarker of the other (_one-to-one matching_), and (iii) the matching must be
consistent, i.e. transitive across subjects (for example, denoting subjects by
letters and biomarkers by numbers, if A1 is matched to B2 and B2 to C3, then
A1 must be matched to C3). This matching problem is not specific to
neuroimaging and is applicable to the research fields mentioned above. It is
generally relevant to _multilevel_ or _hierarchical_ analyses where outputs of
a certain level of analysis must be matched before becoming inputs at the next
level. This situation typically occurs when the outputs to be matched result
from an unsupervised analysis such as clustering, segmentation, or dimension
reduction.
#### Problem formulation.
The matching problem at the core of this paper is as follows. Given $n$ sets of
vectors in $\mathbb{R}^{p}$ having the same size, say
$\\{x_{11},\ldots,x_{1m}\\},\ldots,\\{x_{n1},\ldots,x_{nm}\\}$, find
permutations $\sigma_{1},\ldots,\sigma_{n}$ of the vector labels
$\\{1,\ldots,m\\}$ that minimize the sum of pairwise squared Euclidean
distances within clusters $\\{x_{1\sigma_{1}(k)},\ldots,x_{n\sigma_{n}(k)}\\}$
($1\leq k\leq m$). Writing $[r]=\\{1,\ldots,r\\}$ for a positive integer $r$
and letting $\mathbf{S}_{m}$ be the set of all permutations of $[m]$, the
problem expresses as
$\min_{\sigma_{1},\ldots,\sigma_{n}\in\mathbf{S}_{m}}\sum_{1\leq i<j\leq
n}\sum_{k=1}^{m}\big{\|}x_{i\sigma_{i}(k)}-x_{j\sigma_{j}(k)}\big{\|}^{2}$ (1)
where $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^{p}$.
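The paper's reference implementation is the R package `matchFeat`; purely as an illustration, here is a minimal Python sketch evaluating the objective of (1), assuming the data are stacked in an array of shape $(n,p,m)$ with feature vectors as columns:

```python
import numpy as np
from itertools import combinations

def matching_objective(X, sigma):
    """Objective of problem (1).  X has shape (n, p, m) with feature
    vectors as columns; sigma has shape (n, m), row i a permutation of
    0..m-1 playing the role of sigma_i."""
    n = X.shape[0]
    Y = np.stack([X[i][:, sigma[i]] for i in range(n)])  # permuted data
    return sum(np.sum((Y[i] - Y[j]) ** 2)
               for i, j in combinations(range(n), 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3, 2))        # n=4 units, p=3, m=2 vectors
sigma = np.tile(np.arange(2), (4, 1))     # identity matching
print(matching_objective(X, sigma))
```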
Problem (1) is a sum-of-squares clustering problem with the constraint that
each cluster must contain exactly one vector from each set
$\\{x_{i1},\ldots,x_{im}\\}$, $i\in[n]$. Identifying the $n$ sets with
statistical units, this constraint guarantees that the obtained clusters
reflect common patterns between units, not within units. For this reason, one-
to-one feature matching is particularly suitable in applications where
variations between units dominate variations within units.
In problem (1), all statistical units have the same number $m$ of vectors. It
is natural to also set to $m$ the number of clusters to partition the vectors
into. In practice however, statistical units may have unbalanced numbers of
observations, say $m_{1},\ldots,m_{n}$. It may also be desirable to group the
observations in an arbitrary number of clusters, say $K$. Accordingly, a more
general version of problem (1) would be, for each $i\in[n]$, to assign vectors
$x_{i1},\ldots,x_{im_{i}}$ to $K$ clusters in a one-to-one fashion so as to
minimize the total sum of pairwise squared Euclidean distances within
clusters. Here, one-to-one means that each unit $i$ can contribute at most one
vector to any cluster: if $m_{i}=K$, each vector from unit $i$ is assigned to
a cluster and each cluster contains exactly one vector from unit $i$; if
$m_{i}<K$, some clusters do not contain vectors from unit $i$; and if
$m_{i}>K$, some vectors from unit $i$ are not assigned to a cluster, i.e. they
are unmatched. The matching problem (1) thus generalizes as
$\min_{s_{1},\ldots,s_{n}}\sum_{1\leq i<j\leq
n}\sum_{k=1}^{K}\sum_{\begin{subarray}{c}q\in[m_{i}],r\in[m_{j}]\\\
s_{i}(q)=s_{j}(r)=k\end{subarray}}\left\|x_{iq}-x_{jr}\right\|^{2}$ (2)
where each $s_{i}$ is a map from the set $[m_{i}]$ of vector labels to the set
$\\{0,\ldots,K\\}$ of cluster labels where, by convention, labels of
unassigned/unmatched vectors are mapped to the cluster label 0. The map
$s_{i}$ is such that $s_{i}(q)=s_{i}(r)$ implies that $q=r$ or
$s_{i}(q)=s_{i}(r)=0$. In other words the restriction of $s_{i}$ to
$[m_{i}]\setminus s_{i}^{-1}(\\{0\\})$ must be an injective map. Problem (1)
is recovered when $m_{1}=\cdots=m_{n}=K:=m$, in which case
$s_{i}=\sigma_{i}^{-1}$ for all $i\in[n]$.
For simplicity, only problem (1) is treated in this paper. However, the
proposed matching methods extend to the general problem (2). In complement to
the model-free problem (1), a probabilistic approach to feature matching based
on Gaussian mixtures is detailed in Section 2.5.
#### Related work.
Problem (1) can be viewed through the prism of combinatorial optimization
problems such as the _minimum weight clique partitioning problem_ in a
complete $n$-partite graph, the _quadratic assignment problem_ (Koopmans and
Beckmann, 1957; Çela, 1998), or the _multidimensional assignment problem_
(MAP) (e.g. Burkard et al., 2009). The MAP formalism is well suited to this
work and is recalled hereafter:
$\min_{\sigma_{1},\ldots,\sigma_{n}\in\mathbf{S}_{m}}\sum_{k=1}^{m}c_{\sigma_{1}(k)\sigma_{2}(k)\cdots\sigma_{n}(k)}$
(3)
where $(c_{a_{1}a_{2}\ldots a_{n}})\in\mathbb{R}^{m^{n}}$ is an
$n$-dimensional array containing the costs of assigning the feature vectors
$x_{1a_{1}},\ldots,x_{na_{n}}$ to the same cluster. Problem (1) is an instance
of the MAP and, more precisely, it is a _multidimensional assignment problem
with decomposable costs_ (MDADC) (e.g. Bandelt et al., 1994, 2004) because its
assignment costs decompose as
$c_{a_{1}a_{2}\ldots a_{n}}=\sum_{1\leq i<j\leq n}d(x_{ia_{i}},x_{ja_{j}})$
(4)
where $d$ is a dissimilarity function. The squared Euclidean distance $d$ used
in (1) enables the development of highly efficient computational methods (see
Section 2). The need for efficient computations comes from the exponential
size $(m!)^{n}$ of the search domain $(\mathbf{S}_{m})^{n}$ and from the NP-
hardness of (1) (when $n\geq 3$) as a generalization of the 3D assignment
problem of Spieksma and Woeginger (1996).
The formidable literature on the MAP, which spans more than five decades and
multiple mathematical fields, will not be reviewed here. The interested reader
may fruitfully consult Burkard et al. (2009) and Pardalos and Pitsoulis
(2000). In fact, given the focus of the present work on computations, a broad
review of the general MAP is not necessary. Indeed, optimization methods for
the MAP (e.g. Karapetyan and Gutin, 2011; Pierskalla, 1968; Poore and Rijavec,
1993; Robertson, 2001) are not computationally efficient for the special case
of MDADC (and in particular (1)), especially if the number $n$ of dimensions
is large. We will therefore only discuss the relevant MDADC literature.
Bandelt et al. (1994) provide simple “hub” and “recursive” heuristics for the
MDADC (3)-(4) along with their approximation ratios (worst-case bounds on the
ratio of a method’s attained objective to the optimal objective value). The
hub heuristic consists in selecting one dimension $i\in[n]$ of the MDADC as a
“hub” and matching all other dimensions to this one, i.e. finding for each
dimension $j\neq i$ the assignment that minimizes the total cost with respect
to $i$. The recursive heuristic starts by permuting the $n$ dimensions of the
problem and then recursively finds the best assignment for the $i$th permuted
dimension with respect to the $(i-1)$ first permuted dimensions
($i=2,\ldots,n$). Bandelt et al. (2004) enhance the heuristic methods of
Bandelt et al. (1994) with local neighborhood search methods that attempt to
improve a solution one or two dimensions at a time. They derive lower bounds
for the minimum cost assignment based on a Lagrangian relaxation of the MDADC.
Collins (2012) also exploits the idea of improving solutions one dimension at
a time in the general MAP (3) through a factorization technique. Kuroki and
Matsui (2009) formulate (1) as the problem of finding a clique cover of an
$n$-partite graph with minimum edge weights. They express the clique cover
problem with various mathematical programs (integer linear, nonconvex
quadratic, integer quadratic, and second order cone) which they tackle
directly or after relaxation. They also provide approximation ratios and
computational complexity bounds for their algorithms. Tauer and Nagi (2013)
and Natu et al. (2020) solve Lagrangian relaxations of the integer linear
program formulation of the MDADC, with an emphasis on efficient parallel
computation in a Map-Reduce framework or with GPUs. They derive tight lower
bounds to control the approximation error of their algorithms.
As an alternative from the multidimensional assignment perspective, problem
(1) can be viewed as an instance of _constrained clustering_ or _semi-
supervised learning_ (Basu et al., 2009; Gancarski et al., 2020). The
constraint that each unit $i\in[n]$ contributes exactly one feature vector to
each cluster can be rephrased as: two vector instances from the same unit
cannot be assigned to the same cluster. This type of constraint, namely that
certain pairs of instances cannot be assigned to the same cluster (“cannot
link” constraint) or that certain pairs must be assigned to the same cluster
(“must link” constraint), is called _equivalence constraints_ and can be
handled by constrained $K$-means algorithms (Wagstaff et al., 2001; Bilenko et
al., 2004; Pelleg and Baras, 2007) or through constrained mixture models
(Shental et al., 2004).
Other tasks related to problem (1) but not directly relevant are object
tracking, with applications in engineering and more recently in computer
vision and artificial intelligence, and image registration, which plays a key
role in image processing, object recognition, and remote sensing. The former
involves a temporal dimension absent from (1) whereas the latter involves many
(and often noisy) features that are not matched one-to-one. Matching problems
also have a long history in statistics and have been a topic of intense
scrutiny in machine learning in recent years (DeGroot and Goel, 1976; Collier
and Dalalyan, 2016; Hsu et al., 2017; Pananjady et al., 2018). However, much
of the research in these fields relevant to (1) deals with the case where
$n=2$ and $m$ is large (asymptotically $m\to\infty$) whereas we are chiefly
interested in situations where $m$ is fixed and $n$ is large ($n\to\infty$).
#### Contributions.
The methods for the MDADC (3)-(4) discussed heretofore are applied in practice
to problems of small size, say $n$ in the single digits or a few tens.
Theoretical considerations as well as numerical experiments from this paper
(see Sections 2-3) and from the literature indicate that these methods cannot
handle large-scale problems with $n$ in the hundreds, thousands or more (at
least, not in a reasonable time on a single computer). As a simple example,
the ${n\choose 2}m^{2}$ costs in (4) are typically calculated and stored
before starting the optimization, but even this preliminary step may exceed
computer memory limits for large $n$ and/or $m$. In response to this
methodological gap, our research aims to develop fast, scalable methods for
matching feature vectors in a one-to-one fashion across a large number of
statistical units. The main contributions of the paper are the following.
1. 1.
We develop very fast algorithms for solving the matching problem (1), that is,
(3)-(4) with $d$ as the squared Euclidean distance. The three main algorithms
(Sections 2.1-2.2-2.3) have iteration complexity $O(nm^{3})$ and only take a
few iterations to converge, meaning that they scale linearly with $n$. In
addition, they calculate assignment costs (4) on the fly and have space
requirements $O(mn+mp)$, a fraction of the data size $mnp$. We also present
initialization methods and a refinement method (pairwise interchange).
Further, we take a probabilistic view of (1) as a constrained Gaussian mixture
model and devise an efficient implementation of the Expectation-Maximization
(EM) algorithm.
2. 2.
We provide a broad review of the diverse methods applicable to (1) (integer
linear programming, various relaxations, constrained clustering) which rarely
appear together in a paper. The novel algorithms are compared to these methods
in numerical experiments and show excellent computation and optimization
performances.
3. 3.
An R package `matchFeat` implementing all the algorithms of the paper is made
available at github.com/ddegras/matchFeat.
4. 4.
The matching problem (1) is applied to a large database of neuroimaging data
to study functional connectivity in the human brain. The data analysis
confirms existing knowledge but also generates new insights, thus
demonstrating the practical usefulness of our approach.
#### Organization of the paper.
Section 2 introduces novel algorithms for the matching problem (1). In Section
3, a numerical study assesses the novel algorithms and competing methods with
respect to computation and optimization performance. Section 4 details an
application of our matching approach to a large neuroimaging database (ABIDE)
relating to autism spectrum disorders. Concluding remarks are gathered in
Section 5 and additional details of the data analysis are provided in the
Appendix.
## 2 Novel algorithms for feature matching
This section introduces novel algorithms for the matching problem (1). The
first four are local search methods that aim to improve existing solutions. At
the end of the section, we discuss initialization techniques for the local
search methods.
### 2.1 $K$-means matching
For a given $n$-uple of permutations
$\sigma=(\sigma_{1},\ldots,\sigma_{n})\in(\mathbf{S}_{m})^{n}$, let
$\overline{X}_{\sigma}$ be the average matrix of the permuted data with
columns $\overline{x}_{\sigma,k}=\frac{1}{n}\sum_{i=1}^{n}x_{i\sigma_{i}(k)}$
for $1\leq k\leq m$. Problem (1) is equivalent to
$\min_{\sigma_{1},\ldots,\sigma_{n}}\sum_{i=1}^{n}\sum_{k=1}^{m}\left\|x_{i\sigma_{i}(k)}-\overline{x}_{\sigma,k}\right\|^{2}$
(5)
The following method adapts the standard $K$-means clustering algorithm
(Lloyd, 1982) to the matching problem (5).
1. 1.
Initialize $\sigma=(\sigma_{1},\ldots,\sigma_{n})$ to some arbitrary value,
for example $\sigma=(\mathrm{Id}_{[m]},\ldots,\mathrm{Id}_{[m]})$. Calculate
the average matrix $\overline{X}_{\sigma}$ and the objective value (5).
2. 2.
Given the average matrix $\overline{X}_{\sigma}$: for $1\leq i\leq n,$ find
the permutation $\sigma_{i}$ that minimizes
$\sum_{k=1}^{m}\|x_{i\sigma_{i}(k)}-\overline{x}_{\sigma,k}\|^{2}$. Update the
solution to $\sigma=(\sigma_{1},\ldots,\sigma_{n})$.
3. 3.
Given $\sigma$: calculate the average matrix $\overline{X}_{\sigma}$ and the
objective value (5). If the objective has not decreased from the previous
iteration, terminate the execution and return $\sigma$. Else go back to step
2.
Steps 2 and 3 above are non-increasing in the objective (5). For this reason,
and due to the finiteness of the search space, the proposed approach converges
in a finite number of iterations. Like the $K$-means, it only finds a local
minimum of (5).
Concerning computations, step 3 can be performed in $O(nm)$ flops. Step 2,
which consists of $n$ separate optimizations, is the computational bottleneck.
Observe that
$\displaystyle\sum_{k=1}^{m}\|x_{i\sigma_{i}(k)}-\overline{x}_{\sigma,k}\|^{2}$
$\displaystyle=\sum_{k=1}^{m}\|x_{i\sigma_{i}(k)}\|^{2}-2\sum_{k=1}^{m}\langle
x_{i\sigma_{i}(k)},\overline{x}_{\sigma,k}\rangle+\sum_{k=1}^{m}\|\overline{x}_{\sigma,k}\|^{2}$
$\displaystyle=\sum_{k=1}^{m}\|x_{ik}\|^{2}-2\sum_{k=1}^{m}\langle
x_{i\sigma_{i}(k)},\overline{x}_{\sigma,k}\rangle+\sum_{k=1}^{m}\|\overline{x}_{\sigma,k}\|^{2}$
where $\langle\cdot,\cdot\rangle$ denotes the Euclidean scalar product. That
is, the minimization of
$\sum_{k=1}^{m}\|x_{i\sigma_{i}(k)}-\overline{x}_{\sigma,k}\|^{2}$ (with
respect to $\sigma_{i}\in\mathbf{S}_{m}$) is equivalent to
$\max_{\sigma_{i}\in\mathbf{S}_{m}}\sum_{k=1}^{m}\langle
x_{i\sigma_{i}(k)},\overline{x}_{\sigma,k}\rangle$ (6)
Problem (6) is an instance of the well-known _linear assignment problem_ (LAP)
(e.g. Burkard et al., 2009, Chap. 4). After calculating the assignment matrix
$A=(\langle\overline{x}_{\sigma,k},x_{il}\rangle)_{1\leq k,l\leq m}$, the LAP
(6) can be solved for example with the Hungarian algorithm (Kuhn, 1955;
Munkres, 1957). Efficient implementations of the Hungarian algorithm have
complexity $O(m^{3})$.
The $K$-means matching algorithm is summarized hereafter. The objective value
in (5) is denoted by $F(\sigma)$.
Algorithm 1 $K$-Means Matching
0: $X_{1},\ldots,X_{n}\in\mathbb{R}^{p\times m}$,
$\sigma=(\sigma_{1},\ldots,\sigma_{n})\in(\mathbf{S}_{m})^{n}$
1:
$\overline{x}_{\sigma,k}\leftarrow(1/n)\sum_{i=1}^{n}x_{i\sigma_{i}(k)}\,(1\leq
k\leq m)$, $F_{new}\leftarrow F(\sigma)$
2: repeat
3: $F_{old}\leftarrow F_{new}$
4: for $i=1,\ldots,n$ do
5: Solve the LAP (6) and call $\sigma_{i}^{+}$ a solution.
6: $\sigma_{i}\leftarrow\sigma_{i}^{+}$
7: end for
8: $\sigma\leftarrow(\sigma_{1},\ldots,\sigma_{n})$
9:
$\overline{x}_{\sigma,k}\leftarrow(1/n)\sum_{i=1}^{n}x_{i\sigma_{i}(k)}\,(1\leq
k\leq m)$, $F_{new}\leftarrow F(\sigma)$
10: until $F_{new}\geq F_{old}$
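A compact Python transcription of Algorithm 1 (an illustrative sketch, not the `matchFeat` implementation) might look as follows; it solves the LAP (6) with SciPy's Hungarian solver and uses the $(n,p,m)$ data layout of the earlier sketch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def kmeans_matching(X, max_iter=100):
    """Sketch of Algorithm 1: alternate the n LAPs (6) with centroid
    updates.  X has shape (n, p, m); returns sigma of shape (n, m)."""
    n, p, m = X.shape
    sigma = np.tile(np.arange(m), (n, 1))   # identity start
    xbar = X.mean(axis=0)
    F_old = np.inf
    for _ in range(max_iter):
        for i in range(n):
            A = xbar.T @ X[i]               # A[k, l] = <xbar_k, x_il>
            _, col = linear_sum_assignment(A, maximize=True)
            sigma[i] = col                  # sigma_i(k) = col[k]
        Y = np.stack([X[i][:, sigma[i]] for i in range(n)])
        xbar = Y.mean(axis=0)
        F_new = np.sum((Y - xbar) ** 2)     # objective (5)
        if F_new >= F_old:                  # no decrease: stop
            break
        F_old = F_new
    return sigma

rng = np.random.default_rng(1)
sigma = kmeans_matching(rng.standard_normal((10, 3, 4)))
```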
###### Remark.
If $p=1$, the matrices $X_{i}$ are row vectors and the $x_{ik}$ are scalars.
In this case, step 2 of the proposed method is extremely simple. Indeed for
each $1\leq i\leq n$, the sum
$\sum_{k=1}^{m}x_{i\sigma_{i}(k)}\overline{x}_{\sigma,k}$ is maximized when
the $x_{ik}$ and $\overline{x}_{\sigma,k}$ are matched by rank. More
precisely, take any $s_{i}\in\mathbf{S}_{m}$ such that
$x_{is_{i}(1)}\leq\cdots\leq x_{is_{i}(m)}$ and any $s\in\mathbf{S}_{m}$ such
that $\overline{x}_{\sigma,s(1)}\leq\ldots\leq\overline{x}_{\sigma,s(m)}$.
Then $\sigma_{i}=s_{i}\circ s^{-1}$ maximizes the sum. In other words, the
optimal permutations $\sigma_{i}$ are simply obtained by sorting the
components of the $X_{i}$ and $\overline{x}_{\sigma}$ (computational
complexity $O(nm\log m)$).
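In code, the rank-matching of this remark reduces to two argsorts; `sort_match_1d` below is a hypothetical helper for illustration:

```python
import numpy as np

def sort_match_1d(x_i, xbar):
    """p = 1 case: sigma_i = s_i o s^{-1}, i.e. scalars matched by rank."""
    s_i = np.argsort(x_i)         # x_i[s_i] is sorted
    s = np.argsort(xbar)          # xbar[s] is sorted
    sigma_i = np.empty_like(s_i)
    sigma_i[s] = s_i              # sigma_i(s(j)) = s_i(j)
    return sigma_i

x_i = np.array([0.3, -1.2, 0.9])
xbar = np.array([1.0, -0.5, 0.1])
print(sort_match_1d(x_i, xbar))   # [2 1 0]: ranks are matched
```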
### 2.2 Block coordinate ascent method
For convenience problem (1) is rewritten here using permutation matrices
$P_{1},\ldots,P_{n}$ instead of permutation functions
$\sigma_{1},\ldots,\sigma_{n}$. Each $P_{i}$ ($1\leq i\leq n$) is a square
matrix with entries in $\\{0,1\\}$ such that each row and each column contains
the value 1 exactly once. Let $\Pi_{m}$ be the set of all $m\times m$
permutation matrices. Problem (1) expresses as the binary quadratic assignment
problem
$\min_{P_{1},\ldots,P_{n}\in\Pi_{m}}\sum_{i=1}^{n}\sum_{j=1}^{n}\left\|X_{i}P_{i}-X_{j}P_{j}\right\|_{F}^{2}$
(7)
where $\|\cdot\|_{F}$ denotes the Frobenius norm ($\|X\|_{F}=\langle
X,X\rangle_{F}^{1/2}=(\mathrm{tr}(X^{\prime}X))^{1/2}$ with
$\operatorname{tr}(\cdot)$ the trace operator). By expanding the squared
Frobenius norm in the objective and noting that column permutations do not
change the Frobenius norm of a matrix, we have
$\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}\left\|X_{i}P_{i}-X_{j}P_{j}\right\|_{F}^{2}$
$\displaystyle=\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\|X_{i}P_{i}\|_{F}^{2}+\|X_{j}P_{j}\|_{F}^{2}-2\langle
X_{i}P_{i},X_{j}P_{j}\rangle_{F}\right)$
$\displaystyle=\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\|X_{i}\|_{F}^{2}+\|X_{j}\|_{F}^{2}\right)-2\bigg{\|}\sum_{i=1}^{n}X_{i}P_{i}\bigg{\|}_{F}^{2}.$
Discarding terms that do not depend on $P_{1},\ldots,P_{n}$, problem (7) is
equivalent to
$\max_{P_{1},\ldots,P_{n}\in\Pi_{m}}\bigg{\|}\sum_{i=1}^{n}X_{i}P_{i}\bigg{\|}_{F}^{2}.$
(8)
The maximization problem (8) can be handled one matrix $P_{i}$ at a time
($1\leq i\leq n$), that is, by _block coordinate ascent_ (BCA, e.g. Wright,
2015). Given a current solution $(\hat{P}_{1},\ldots,\hat{P}_{n})$ and an
index $i$, all matrices $\hat{P}_{j},\,j\neq i$ are fixed and the task at hand
is
$\max_{P_{i}\in\Pi_{m}}\bigg{\|}X_{i}P_{i}+\sum_{\begin{subarray}{c}1\leq
j\leq n\\\ j\neq i\end{subarray}}X_{j}\hat{P}_{j}\bigg{\|}_{F}^{2}$
which, after expansion, is equivalent to the linear assignment problem
$\max_{P_{i}\in\Pi_{m}}\Big{\langle}P_{i},X_{i}^{\prime}\sum_{j\neq
i}X_{j}\hat{P}_{j}\Big{\rangle}_{F}.$ (9)
As mentioned in Section 2.1, (9) can be efficiently solved with the Hungarian
algorithm. The permutation matrix $\hat{P}_{i}$ is then updated to a solution
of (9). This operation is repeated with the index $i$ sweeping through the set
$[n]$ until no further increase in the objective (8) has been achieved in a
full sweep. Given that each update of a $\hat{P}_{i}$ is non-decreasing in the
objective (8) and that the search domain $\Pi_{m}^{n}$ is finite, the
algorithm is guaranteed to converge in a finite number of steps. Popular
methods for sweeping through $[n]$ include the cyclical order (also known as
the Gauss-Seidel rule), random sampling, random permutation of $[n]$, and
greedy selection.
The BCA algorithm is summarized hereafter. The objective function in (8) is
denoted by $F$. For simplicity the sweeping order is taken to be cyclical but
any other sweeping method can be used.
Algorithm 2 Block Coordinate Ascent
0: $X_{1},\ldots,X_{n}\in\mathbb{R}^{p\times m}$,
$P_{1},\ldots,P_{n}\in\Pi_{m}$.
1: $S\leftarrow\sum_{i=1}^{n}X_{i}P_{i}$, $F_{new}\leftarrow\|S\|_{F}^{2}$
2: repeat
3: $F_{old}\leftarrow F_{new}$
4: for $i=1,\ldots,n$ do
5: $S_{i}\leftarrow S-X_{i}P_{i}$
6: Solve the LAP
$\max_{P_{i}\in\Pi_{m}}\big{\langle}P_{i},X_{i}^{\prime}S_{i}\big{\rangle}_{F}$
and call $P_{i}^{+}$ a solution.
7: $P_{i}\leftarrow P_{i}^{+}$, $S\leftarrow S_{i}+X_{i}P_{i}$
8: end for
9: $F_{new}\leftarrow\|S\|_{F}^{2}$
10: until $F_{new}\leq F_{old}$
Algorithm 2 can be viewed as a special case of the local search algorithm LS1
of Bandelt et al. (2004). The LS1 algorithm is more general in that it uses an
arbitrary dissimilarity function $d$ in the MDADC (3)-(4). The computational
price to pay for this generality is that for each block update ($i\in[n]$) the
assignment matrix
$A_{i}=(\sum_{j\in[n]\setminus\\{i\\}}d(x_{j\sigma_{j}(k)},x_{il}))_{1\leq
k,l\leq m}$ must be calculated from scratch in $O(nm^{2})$ flops. Hence the
LS1 method has iteration complexity $O(n^{2}m^{2})$ (one iteration meaning one
sweep through $[n]$) which may be prohibitive for large $n$. In comparison,
the squared Euclidean distance $d=\|\cdot\|^{2}$ employed in the BCA method
enables efficient computation of $A_{i}$ in $O(m^{2})$ complexity by keeping
track of the running sum $\sum_{i=1}^{n}X_{i}P_{i}$ with rank-1 updates.
Accordingly, the BCA method has iteration complexity $O(nm^{3})$ linear in
$n$. A variant of the BCA method using asynchronous parallel updates of the
matrices $\hat{P}_{i}$ (the so-called Jacobi update) can further reduce the
iteration complexity, although convergence properties of this approach are not
clear.
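The following illustrative Python sketch of Algorithm 2 stores permutations as index arrays rather than matrices and makes the running-sum bookkeeping explicit:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bca_matching(X, max_sweeps=100):
    """Sketch of Algorithm 2: block coordinate ascent on (8), with
    permutations stored as index arrays (column k of unit i is
    X[i][:, perm[i, k]]).  The running sum S = sum_i X_i P_i is
    maintained, so each block update costs one O(m^3) LAP plus O(pm)."""
    n, p, m = X.shape
    perm = np.tile(np.arange(m), (n, 1))
    S = X.sum(axis=0)                    # identity permutations
    F_old = -np.inf
    for _ in range(max_sweeps):
        for i in range(n):
            S_i = S - X[i][:, perm[i]]   # leave unit i out
            M = X[i].T @ S_i             # M[r, k] = <x_ir, (S_i) column k>
            r, k = linear_sum_assignment(M, maximize=True)
            perm[i][k] = r               # column k now holds vector r
            S = S_i + X[i][:, perm[i]]   # add unit i back in
        F_new = np.sum(S ** 2)           # objective (8)
        if F_new <= F_old:
            break
        F_old = F_new
    return perm

rng = np.random.default_rng(2)
perm = bca_matching(rng.standard_normal((10, 3, 4)))
```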
### 2.3 Convex relaxation and Frank-Wolfe algorithm
In the previous section, problem (8) was solved one permutation matrix $P_{i}$
at a time while keeping the other $P_{j}$ ($j\neq i$) fixed. As an
alternative, one may relax this problem to the set $\mathcal{D}_{m}$ of doubly
stochastic matrices of dimensions $m\times m$, which is the convex hull of
$\Pi_{m}$. (As a reminder, a doubly stochastic matrix is a square matrix with
elements in $[0,1]$ whose rows and columns all sum to 1.) The relaxed problem
is
$\max_{P_{1},\ldots,P_{n}\in\mathcal{D}_{m}}\Big{\|}\sum_{i=1}^{n}X_{i}P_{i}\Big{\|}_{F}^{2}.$
(10)
Although this relaxation leads to an indefinite program (i.e. maximizing a
convex quadratic form), it is the correct way to relax (7)-(8). In contrast,
directly relaxing (7) (to $\mathcal{D}$) would produce a convex program that
is computationally simpler but does not provide tight bounds (Lyzinski et al.,
2016).
The Frank-Wolfe algorithm (Frank and Wolfe, 1956) is an excellent candidate
for this maximization. Indeed the gradient of (10) is straightforward to
compute. Denoting by $F$ the objective function of (10), the partial
derivatives are simply $\partial F/\partial
P_{i}=X_{i}^{\prime}\sum_{j=1}^{n}X_{j}P_{j}$ ($1\leq i\leq n$). In addition,
the associated linear program
$\max_{Q_{1},\ldots,Q_{n}\in\mathcal{D}_{m}}\sum_{i=1}^{n}\Big{\langle}Q_{i},X_{i}^{\prime}\sum_{j=1}^{n}X_{j}P_{j}\Big{\rangle}_{F}$
(11)
which provides the search direction $(Q_{1},\ldots,Q_{n})$ for the next
algorithm iterate is easily solvable as $n$ separate linear assignment
problems (LAP). Although each LAP is solved over $\mathcal{D}_{m}$, Birkhoff-
von Neumann’s theorem guarantees that a solution can be found in $\Pi_{m}$, a
property referred to as the integrality of assignment polytopes (Birkhoff,
1946; von Neumann, 1953).
Having found the search direction, it remains to select the step size
$\alpha\in[0,1]$. This is often done with a line search:
$\max_{\alpha\in[0,1]}F(P+\alpha(Q-P))$ where $P=(P_{1},\ldots,P_{n})$ and
$Q=(Q_{1},\ldots,Q_{n})$. The expression to maximize is a quadratic polynomial
in $\alpha$ with leading coefficient
$\|\sum_{i=1}^{n}X_{i}(Q_{i}-P_{i})\|_{F}^{2}\geq 0$; being convex in $\alpha$, it attains its maximum over $[0,1]$ at an endpoint, $\alpha=1$ or $\alpha=0$. In the former
case, the algorithm takes a full step in the direction $Q$ whereas in the
latter case, the current solution cannot be improved upon and the algorithm
ends. Interestingly, the iterates generated by the Frank-Wolfe algorithm for
problem (10) stay in $\Pi_{m}$ although in principle, they could also explore
the interior of $\mathcal{D}_{m}$. This is a consequence of the integrality of
the search direction $Q$ and of the line search method for a quadratic
objective, which make the step size $\alpha$ equal to 0 or 1.
Algorithm 3 Frank-Wolfe
0: $X_{1},\ldots,X_{n}\in\mathbb{R}^{p\times m}$,
$P_{1},\ldots,P_{n}\in\mathcal{D}_{m}$
1: $S\leftarrow\sum_{i=1}^{n}X_{i}P_{i}$, $F_{new}\leftarrow\|S\|_{F}^{2}$
2: repeat
3: $S^{\prime}\leftarrow 0,\ F_{old}\leftarrow F_{new}$
4: for $i=1$ to $n$ do
5: Solve the LAP
$\max_{Q_{i}\in\mathcal{D}_{m}}\big{\langle}Q_{i},X_{i}^{\prime}S\big{\rangle}_{F}$
and call $Q_{i}$ a solution.
6: $S^{\prime}\leftarrow S^{\prime}+X_{i}Q_{i}$
7: end for
8: $F_{new}\leftarrow\|S^{\prime}\|_{F}^{2}$
9: if $F_{new}>F_{old}$ then
10: $P_{i}\leftarrow Q_{i}\ (1\leq i\leq n),\ S\leftarrow S^{\prime}$
11: end if
12: until $F_{new}\leq F_{old}$
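Under the same conventions as the BCA sketch above (a list `X` of $p\times m$ matrices and the package `clue`), a minimal illustrative implementation of Algorithm 3 reads:

```r
fw <- function(X, P = replicate(length(X), diag(ncol(X[[1]])), simplify = FALSE)) {
  n <- length(X); m <- ncol(X[[1]])
  S <- Reduce(`+`, Map(`%*%`, X, P))
  F_new <- sum(S^2)
  repeat {
    F_old <- F_new
    Q <- vector("list", n)
    S_new <- 0 * S
    for (i in seq_len(n)) {
      G   <- crossprod(X[[i]], S)       # partial gradient (up to a factor 2)
      sig <- solve_LSAP(G - min(G), maximum = TRUE)
      Q_i <- matrix(0, m, m)
      Q_i[cbind(seq_len(m), as.integer(sig))] <- 1
      Q[[i]] <- Q_i
      S_new <- S_new + X[[i]] %*% Q_i
    }
    F_new <- sum(S_new^2)
    if (F_new > F_old) { P <- Q; S <- S_new } else break  # alpha = 1 or stop
  }
  list(P = P, objective = sum(S^2))
}
```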
### 2.4 Pairwise interchange heuristic
The BCA algorithm of Section 2.2 attempts to improve an existing solution to
(1) one permutation $\sigma_{i}$ at a time. In other words, at each iteration,
it changes all assignments $\sigma^{l}=(\sigma_{1}(l),\ldots,\sigma_{n}(l))$
($1\leq l\leq m$) in a single dimension. Karapetyan and Gutin (2011) call this
approach a _dimensionwise heuristic_. Another strategy called the
_interchange_ or _$k$-exchange heuristic_ is to change a few assignments
(typically, $k=2$ or $k=3$) in all dimensions by element swaps (e.g. Balas and
Saltzman, 1991; Robertson, 2001; Oliveira and Pardalos, 2004). Here we
consider the 2-assignment exchange algorithm (Algorithm 3.4) of Robertson
(2001) for the general MAP (3) and adapt it to problem (1). In this algorithm,
given two assignments, the search for the best interchange is done
exhaustively. This involves accessing as many as $2^{n}-1$ candidate
assignments for element swaps and comparing their costs, which is reasonable
in the general MAP provided that: (i) costs are precalculated, (ii) $n$ is
small, and (iii) candidate assignments for exchange are easily found among all
feasible assignments. However, for moderate to large $n$, and in the context
of problem (1) where assignment costs are not precalculated, the calculation
and exhaustive search of $2^{n}-1$ interchange assignment costs for at least
each of ${m\choose 2}$ candidate pairs of assignments are intractable. We will
show that in problem (1), the pairwise interchange heuristic can be
efficiently solved as a binary quadratic program.
Given a solution $\sigma=(\sigma_{1},\ldots,\sigma_{n})$ to (1) and two
associated assignments $\sigma^{q}$ and $\sigma^{r}$ ($1\leq q<r\leq m$), the
basic problem of pairwise interchange is to improve the objective in (1) by
interchanging elements between these assignments, i.e. by swapping the values
of $\sigma_{i}(q)$ and $\sigma_{i}(r)$ for one or more indices $i\in[n]$.
Formally, the problem is
$\min_{\sigma_{1}^{\ast},\ldots,\sigma_{n}^{\ast}\in\mathbf{S}_{m}}\sum_{1\leq
i<j\leq
n}\sum_{k=1}^{m}\left\|x_{i\sigma_{i}^{\ast}(k)}-x_{j\sigma_{j}^{\ast}(k)}\right\|^{2}$
(12a) under the constraints
$\left\\{\begin{array}[]{l}\sigma_{i}^{\ast}(k)=\sigma_{i}(k),\
k\in[m]\setminus\\{q,r\\}\\\
(\sigma_{i}^{\ast}(q),\sigma_{i}^{\ast}(r))\in\\{(\sigma_{i}(q),\sigma_{i}(r)),(\sigma_{i}(r),\sigma_{i}(q))\\}\end{array}\right.,\quad
1\leq i\leq n.$ (12b)
To fix ideas, assume without loss of generality that $(q,r)=(1,2)$ and
$\sigma_{i}=\mathrm{Id}_{[m]}$ for $1\leq i\leq n$. Problem (12) becomes
$\min_{\sigma^{\ast}_{1},\ldots,\sigma^{\ast}_{n}\in\mathbf{S}_{2}}\sum_{1\leq
i,j\leq
n}\big{\|}x_{i\sigma^{\ast}_{i}(1)}-x_{j\sigma^{\ast}_{j}(1)}\big{\|}^{2}+\sum_{1\leq
i,j\leq
n}\big{\|}x_{i\sigma^{\ast}_{i}(2)}-x_{j\sigma^{\ast}_{j}(2)}\big{\|}^{2}\,.$
(13)
As in the previous sections, the problem can be transformed to
$\max_{\sigma^{\ast}_{1},\ldots,\sigma^{\ast}_{n}\in\mathbf{S}_{2}}\Big{\|}\sum_{i=1}^{n}x_{i\sigma^{\ast}_{i}(1)}\Big{\|}^{2}+\Big{\|}\sum_{i=1}^{n}x_{i\sigma^{\ast}_{i}(2)}\Big{\|}^{2}.$
Replacing the permutations $\sigma^{\ast}_{i}\in\mathbf{S}_{2}$ by binary
variables $c_{i}$, the problem becomes
$\max_{c_{1},\ldots,c_{n}\in\\{0,1\\}}\Big{\|}\sum_{i=1}^{n}(c_{i}x_{i1}+(1-c_{i})x_{i2})\Big{\|}^{2}+\Big{\|}\sum_{i=1}^{n}((1-c_{i})x_{i1}+c_{i}x_{i2})\Big{\|}^{2}$
and, after simple manipulations,
$\max_{c_{1},\ldots,c_{n}\in\\{0,1\\}}\sum_{i,j}c_{i}c_{j}\langle
d_{i},d_{j}\rangle-n\sum_{i}c_{i}\langle d_{i},\bar{d}\rangle$ (14)
where $d_{i}=x_{i1}-x_{i2}$ and $\bar{d}=(1/n)\sum_{i=1}^{n}d_{i}$. This is an
unconstrained binary quadratic program (UBQP) of size $n$ that can be solved
with standard mathematical software (e.g. Cplex, Gurobi, Mosek). Refer to
Kochenberger et al. (2014) for a survey of the UBQP literature.
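For illustration, the UBQP (14) can be set up in a few lines of R; the brute-force enumeration shown below is only meant for clarity and small $n$, and a dedicated UBQP solver would replace it in practice. The matrices `x1` and `x2`, whose $i$th columns hold $x_{i1}$ and $x_{i2}$, are assumed given.

```r
ubqp_swap <- function(x1, x2) {
  n    <- ncol(x1)
  d    <- x1 - x2                             # columns d_i = x_{i1} - x_{i2}
  dbar <- rowMeans(d)
  G    <- crossprod(d)                        # Gram matrix (<d_i, d_j>)
  lin  <- n * as.vector(crossprod(d, dbar))   # linear terms n <d_i, dbar>
  best <- NULL; best_val <- -Inf
  for (code in 0:(2^n - 1)) {                 # all c in {0,1}^n (small n only)
    cc  <- as.integer(intToBits(code))[1:n]
    val <- drop(crossprod(cc, G %*% cc)) - sum(lin * cc)
    if (val > best_val) { best_val <- val; best <- cc }
  }
  list(c = best, value = best_val)
}
```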
Having reduced the basic pairwise interchange problem (12) to the UBQP (14),
we now embed it in Algorithm 3.4 of Robertson (2001), which combines
randomization and greedy selection of interchange pairs. Hereafter $F(\sigma)$
denotes the objective value in (1) and
$\sigma=(\sigma_{1},\ldots,\sigma_{n})\in(\mathbf{S}_{m})^{n}$ is identified
with the assignments $\\{\sigma^{1},\ldots,\sigma^{m}\\}$, where
$\sigma^{l}=(\sigma_{1}(l),\ldots,\sigma_{n}(l))$. The notation
$\mathrm{diag}(\cdot)$ is used for diagonal matrices.
Algorithm 4 Pairwise Interchange with Greedy Selection
0: $X_{1},\ldots,X_{n}\in\mathbb{R}^{p\times m}$,
$\sigma\equiv\\{\sigma^{1},\ldots,\sigma^{m}\\}$
1: $\mathcal{C}\leftarrow\sigma$ {candidate set of assignments for
interchange}
2: while $\mathcal{C}\neq\emptyset$ do
3: $F_{best}\leftarrow F(\sigma)$
4: $\sigma^{+}\leftarrow\emptyset$, $\tau^{+}\leftarrow\emptyset$
5: Select $\sigma^{q}\in\mathcal{C}$
6: for $\sigma^{r}\in\mathcal{C}\setminus\\{\sigma^{q}\\}$ do
7: $d_{i}\leftarrow x_{i\sigma^{q}(i)}-x_{i\sigma^{r}(i)}\ (1\leq i\leq n)$,
$\bar{d}\leftarrow\frac{1}{n}\sum_{i=1}^{n}d_{i}$
8: $Q\leftarrow(\langle d_{i},d_{j}\rangle)_{1\leq i,j\leq n}-\mathrm{diag}(n\langle d_{1},\bar{d}\rangle,\ldots,n\langle d_{n},\bar{d}\rangle)$
9: Solve the UBQP (14) with quadratic matrix $Q$ and call
$(c_{1},\ldots,c_{n})$ a solution.
10: $\tilde{\sigma}^{q}(i)\leftarrow
c_{i}\,\sigma^{q}(i)+(1-c_{i})\,\sigma^{r}(i)\,(1\leq i\leq n)$
11: $\tilde{\sigma}^{r}(i)\leftarrow
c_{i}\,\sigma^{r}(i)+(1-c_{i})\,\sigma^{q}(i)\,(1\leq i\leq n)$
12: $\tilde{F}\leftarrow
F(\sigma\setminus\\{\sigma^{q},\sigma^{r}\\}\cup\\{\tilde{\sigma}^{q},\tilde{\sigma}^{r}\\})$
13: if $\tilde{F}<F_{best}$ then
14: $(\sigma^{+},\tau^{+})\leftarrow(\tilde{\sigma}^{q},\tilde{\sigma}^{r})$
{candidate new pair of assignments}
15: $(\sigma^{-},\tau^{-})\leftarrow(\sigma^{q},\sigma^{r})$ {candidate old
pair of assignments}
16: $F_{best}\leftarrow\tilde{F}$
17: end if
18: end for
19: if $\sigma^{+}\neq\emptyset$ then
20:
$\sigma\leftarrow\sigma\setminus\\{\sigma^{-},\tau^{-}\\}\cup\\{\sigma^{+},\tau^{+}\\}$
{perform interchange}
21: $\mathcal{C}\leftarrow\sigma$ {reset candidate set to all assignments}
22: else
23: $\mathcal{C}\leftarrow\mathcal{C}\setminus\\{\sigma^{q}\\}$ {remove
assignment from candidate set}
24: end if
25: end while
### 2.5 Gaussian mixture approach
The matching problem (1) has a probabilistic interpretation in terms of
mixture models. Let $y_{1},\ldots,y_{m}$ be random vectors in $\mathbb{R}^{p}$
with respective probability distributions
$\mathcal{P}_{1},\ldots,\mathcal{P}_{m}$. Assume that these vectors are only
observable after their labels have been shuffled at random. The random
permutation of labels represents the uncertainty about the correspondence
between observations, say $x_{1},\ldots,x_{m}$, and their underlying
distributions $\mathcal{P}_{1},\ldots,\mathcal{P}_{m}$. For mathematical
convenience, $y_{1},\ldots,y_{m}$ are assumed independent and each
$\mathcal{P}_{k}$ $(1\leq k\leq m)$ is taken as a multivariate normal
distribution $N(\mu_{k},\Sigma_{k})$. The data-generating process can be
summarized as
$\left\\{\begin{array}[]{l}y_{k}\sim N(\mu_{k},\Sigma_{k})\quad(1\leq k\leq
m),\\\ s\textrm{ has a uniform distribution over }\mathbf{S}_{m},\\\
(y_{1},\ldots,y_{m})\textrm{ are mutually independent and independent of
}s,\\\ (x_{1},\ldots,x_{m})=(y_{s(1)},\ldots,y_{s(m)}).\end{array}\right.$
(15)
This can be viewed as a Gaussian mixture model with permutation constraints on
cluster assignments. These constraints can be shifted to the mean and
covariance parameters by concatenating observations: the vector
$x=\mathrm{vec}(x_{1},\ldots,x_{m})$ follows a mixture of $m!$ multivariate
normal distributions $N(\mu_{\sigma},\Sigma_{\sigma})$ in $\mathbb{R}^{mp}$
with equal mixture weights $1/m!$, where
$\mu_{\sigma}=\mathrm{vec}(\mu_{\sigma(1)},\ldots,\mu_{\sigma(m)})$ and
$\Sigma_{\sigma}=\mathrm{diag}(\Sigma_{\sigma(1)},\ldots,\Sigma_{\sigma(m)})$
(block-diagonal matrix) for $\sigma\in\mathbf{S}_{m}$; see also Qiao and Li
(2015). In this form, the theory and methods of Gaussian mixture models are
seen to apply to (15), in particular the consistency and asymptotic normality
of maximum likelihood estimators (McLachlan and Peel, 2000, Chapter 2).
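For concreteness, a single unit can be simulated from model (15) in a few lines of R; identity covariances and arbitrary means are assumed here purely for illustration.

```r
simulate_unit <- function(mu) {          # mu: p x m matrix of means
  p <- nrow(mu); m <- ncol(mu)
  y <- mu + matrix(rnorm(p * m), p, m)   # y_k ~ N(mu_k, I_p), independent
  s <- sample.int(m)                     # s uniform over S_m, independent of y
  y[, s]                                 # observed (x_1,...,x_m) = (y_s(1),...,y_s(m))
}
# e.g. n = 100 units with p = 2 and m = 3:
X <- replicate(100, simulate_unit(cbind(c(0, 0), c(3, 0), c(0, 3))), simplify = FALSE)
```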
###### Remark.
In model (15), the cluster centers
$\\{\overline{x}_{\hat{\sigma},1},\ldots,\overline{x}_{\hat{\sigma},m}\\}$
associated to a global solution
$\hat{\sigma}=(\hat{\sigma}_{1},\ldots,\hat{\sigma}_{n})$ of problem (1) are
_not_ consistent for $\\{\mu_{1},\ldots,\mu_{m}\\}$ as $n\to\infty$. Consider
for example the case where $p=1$, $m=2$ (univariate mixture with two
components), and $\mu_{1}<\mu_{2}$. Then
$\hat{\mu}_{1}=\frac{1}{n}\sum_{i=1}^{n}\min(x_{i1},x_{i2})$ and
$\hat{\mu}_{2}=\frac{1}{n}\sum_{i=1}^{n}\max(x_{i1},x_{i2})$. Accordingly $E(\hat{\mu}_{1})=E(\min(x_{11},x_{12}))<\mu_{1}$ and $E(\hat{\mu}_{2})=E(\max(x_{11},x_{12}))>\mu_{2}$ (these expectations do not depend on $i$), meaning that both estimators are biased and inconsistent.
###### Remark.
The permutation constraints of model (15) can be formulated as _equivalence
constraints_ (see Shental et al., 2004, and Section 1). However, this general
formulation is unlikely to lead to faster or better optimization, just as the
constrained $K$-means approach of Wagstaff et al. (2001), which also handles
equivalence constraints, does not improve upon the specialized $K$-means
Algorithm 1 for problem (1) (see Section 3).
Gaussian mixture models and the Expectation Maximization (EM) algorithm (see
e.g. McLachlan and Peel, 2000; McLachlan and Krishnan, 2008) constitute a
well-known approach to clustering. Here, in view of the matching problem (1),
we propose a computationally efficient EM approach to the Gaussian mixture
model (15). Although in principle, the standard EM algorithm for a Gaussian
mixture model could be applied, the number $m!$ of mixture components and the
potentially high dimension $mp$ of the data in (15) render computations
intractable unless $m$ is very small.
Let $(x_{i1},\ldots,x_{im})$ ($1\leq i\leq n$) be data arising from (15) and
let $s_{1},\ldots,s_{n}$ be associated label permutations. For convenience,
the permutations are expressed in terms of indicator variables $I_{ikl}$
($1\leq i\leq n,\ 1\leq k,l\leq m$): $I_{ikl}=1$ if $x_{ik}=y_{il}$ or
equivalently $s_{i}(k)=l$, $I_{ikl}=0$ otherwise. The $(x_{ik})$ and
$(I_{ikl})$ are the so-called complete data. Call
$\hat{\theta}=\\{(\hat{\mu}_{l},\hat{\Sigma}_{l}):l\in[m]\\}$ the current
estimate of the model parameters of (15) in the EM procedure. The log-
likelihood of the complete data is
$\log
L_{c}=\sum_{i=1}^{n}\sum_{k=1}^{m}\sum_{l=1}^{m}\log\varphi(x_{ik};\hat{\mu}_{l},\hat{\Sigma}_{l})I_{ikl}$
(16)
where
$\varphi(x;\mu,\Sigma)=(2\pi)^{-p/2}|\Sigma|^{-1/2}\exp\big{(}-(x-\mu)^{\prime}\Sigma^{-1}(x-\mu)/2\big{)}$
denotes the multivariate normal density in $\mathbb{R}^{p}$.
#### E step.
The E step of the EM algorithm consists in calculating the expected value of
(16) conditional on the observed data $X_{1},\ldots,X_{n}$ and assuming that
$\theta=\hat{\theta}$. This amounts to deriving, for each $(i,k,l)$, the
quantity
$\displaystyle E_{\hat{\theta}}(I_{ikl}|X_{i})$
$\displaystyle=P_{\hat{\theta}}(I_{ikl}=1|X_{i})$
$\displaystyle=\frac{P_{\hat{\theta}}(X_{i}|I_{ikl}=1)P_{\hat{\theta}}(I_{ikl}=1)}{P_{\hat{\theta}}(X_{i})}$
$\displaystyle=c_{i}P_{\hat{\theta}}(X_{i}|I_{ikl}=1)$
$\displaystyle=c_{i}\sum_{\sigma\in\mathbf{S}_{m}:\sigma(k)=l}P_{\hat{\theta}}\big{(}X_{i}|I_{i1\sigma(1)}=1,\ldots,I_{im\sigma(m)}=1\big{)}$
$\displaystyle\qquad\qquad\qquad\times
P_{\hat{\theta}}\big{(}I_{i1\sigma(1)}=1,\ldots,I_{im\sigma(m)}=1\big{|}I_{ikl}=1\big{)}$
$\displaystyle=\frac{c_{i}}{(m-1)!}\sum_{\sigma\in\mathbf{S}_{m}:\sigma(k)=l}\prod_{r=1}^{m}P_{\hat{\theta}}(x_{ir}|I_{ir\sigma(r)}=1)$
$\displaystyle=\frac{c_{i}}{(m-1)!}\sum_{\sigma\in\mathbf{S}_{m}:\sigma(k)=l}\prod_{r=1}^{m}\varphi\big{(}x_{ir};\hat{\mu}_{\sigma(r)},\hat{\Sigma}_{\sigma(r)}\big{)}\,.$
(17)
Formula (17) can be conveniently expressed with _matrix permanents_. The
permanent of a square matrix $A=(a_{ij})$ of dimension $m\times m$ is defined
as
$\mathrm{per}(A)=\sum_{\sigma\in\mathbf{S}_{m}}\prod_{i=1}^{m}a_{i\sigma(i)}$.
Writing
$A_{i}=(a_{ikl})=(\varphi(x_{ik};\hat{\mu}_{l},\hat{\Sigma}_{l}))\in\mathbb{R}^{m\times
m}$ and $A_{i}^{-(k,l)}=(a_{ik^{\prime}l^{\prime}})_{k^{\prime}\neq
k,l^{\prime}\neq l}\in\mathbb{R}^{(m-1)\times(m-1)}$, (17) reformulates as
$E_{\hat{\theta}}(I_{ikl}|X_{i})=a_{ikl}\,\mathrm{per}(A_{i}^{-(k,l)})/\mathrm{per}(A_{i})$.
The permanent of a matrix has a very similar expression to the Leibniz formula
for determinants, but without the permutation signatures $\pm 1$. It is
however far more expensive to compute: efficient implementations have
complexity $O(2^{m}m^{2})$ (Ryser, 1963) or $O(2^{m}m)$ (Nijenhuis and Wilf,
1978). Stochastic approximation methods running in polynomial time (e.g.
Jerrum et al., 2004; Kuck et al., 2019) and variational bounds (see Uhlmann,
2004, and the references therein) are also available. Given that (17) must be
evaluated for $nm^{2}$ values of $(i,k,l)$, and accounting for the computation
of the matrices $A_{i}$ ($1\leq i\leq n$) (e.g. Press et al., 2007, Chap.
16.1), the E step has overall complexity at least
$O(2^{m}m^{3}n+mp^{3}+m^{2}p^{2}n)$.
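For reference, Ryser's formula $\mathrm{per}(A)=(-1)^{m}\sum_{\emptyset\neq S\subseteq[m]}(-1)^{|S|}\prod_{i=1}^{m}\sum_{j\in S}a_{ij}$ translates directly into R. The sketch below is the plain $O(2^{m}m^{2})$ variant, without the Gray-code ordering of Nijenhuis and Wilf (1978), and is adequate for the small $m$ arising here.

```r
ryser_permanent <- function(A) {
  m <- nrow(A); total <- 0
  for (code in 1:(2^m - 1)) {              # nonempty column subsets S
    S  <- which(as.integer(intToBits(code))[1:m] == 1)
    rs <- rowSums(A[, S, drop = FALSE])    # row sums restricted to S
    total <- total + (-1)^length(S) * prod(rs)
  }
  (-1)^m * total
}
```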
The evaluation of permanents requires precautions to avoid numerical
underflow. Indeed, the density values
$\varphi(x_{ik};\hat{\mu}_{l},\hat{\Sigma}_{l})$ are often very small and
multiplying them in (17) may quickly lead to numerical zeros. Preconditioning
greatly helps in this regard: by the properties of the permanent, multiplying
the rows and columns of $A_{i}$ by nonzero numbers has no effect on (17) as
these multiples cancel out between the numerator
$a_{ikl}\mathrm{per}(A_{i}^{-(k,l)})$ and denominator $\mathrm{per}(A_{i})$.
One can exploit this by alternately rescaling the rows and columns of
$A_{i}$ by their sums. Provided that $A_{i}$ is a positive matrix, this scheme
converges to a doubly stochastic matrix (Sinkhorn, 1964) that in practice
often has at least one “non-small” entry in each row and each column.
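A minimal sketch of this preconditioning scheme is given below; a fixed number of scaling passes is assumed for simplicity, whereas in practice one would iterate until the row and column sums stabilize.

```r
sinkhorn_scale <- function(A, passes = 20L) {   # A assumed (strictly) positive
  for (t in seq_len(passes)) {
    A <- A / rowSums(A)                  # make rows sum to 1
    A <- sweep(A, 2, colSums(A), `/`)    # then make columns sum to 1
  }
  A
}
```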
#### M step.
By standard least squares calculations, the updated estimate
$\theta^{+}=\\{(\mu_{l}^{+},\Sigma_{l}^{+}):1\leq l\leq m\\}$ is
$\begin{split}\mu_{l}^{+}&=\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}P_{\hat{\theta}}(I_{ikl}=1|X_{i})x_{ik}\\\
\Sigma_{l}^{+}&=\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}P_{\hat{\theta}}(I_{ikl}=1|X_{i})(x_{ik}-\mu_{l}^{+})(x_{ik}-\mu_{l}^{+})^{\prime}\end{split}$
(18)
with $P_{\hat{\theta}}(I_{ikl}=1|X_{i})=E_{\hat{\theta}}(I_{ikl}|X_{i})$ given
by (17). The fact that $\sum_{k=1}^{m}P_{\hat{\theta}}(I_{ikl}=1|X_{i})=1$ for
all $(i,l)$ was used to simplify (18). If the variances
$\Sigma_{1},\ldots,\Sigma_{m}$ are assumed equal, their common estimate should
be $\Sigma^{+}=(1/m)\sum_{l=1}^{m}\Sigma_{l}^{+}$.
#### Log-likelihood.
The log-likelihood of the observed data is given by
$\log L(\hat{\theta})=\sum_{i=1}^{n}\log\left(\frac{1}{m!}\sum_{\sigma\in\mathbf{S}_{m}}\prod_{k=1}^{m}\varphi\big{(}x_{ik};\hat{\mu}_{\sigma(k)},\hat{\Sigma}_{\sigma(k)}\big{)}\right).$
(19)
It is simply the sum of the logarithms of the permanents of the matrices
$A_{i}=\big{(}\varphi(x_{ik};\hat{\mu}_{l},\hat{\Sigma}_{l})\big{)}$ defined
earlier. Since these permanents are calculated in the E step, there is
essentially no additional cost to computing the log-likelihood.
The implementation of the EM algorithm for model (15) is sketched in Algorithm
5. The initial covariance matrices $\Sigma_{1},\ldots,\Sigma_{m}$ in this
algorithm should be taken positive definite to avoid degeneracy issues when
evaluating multivariate normal densities. However, the algorithm is easily
extended to handle singular covariance matrices. In practice, stopping
criteria for the EM algorithm are often based on the absolute or relative
increase in log-likelihood between successive iterations.
Algorithm 5 EM for Constrained Gaussian Mixture
0: $X_{1},\ldots,X_{n}\in\mathbb{R}^{p\times m}$,
$\mu_{1},\ldots,\mu_{m}\in\mathbb{R}^{p}$,
$\Sigma_{1},\ldots,\Sigma_{m}\in\mathbb{R}^{p\times p}$
1: $\theta^{0}\leftarrow\\{(\mu_{l},\Sigma_{l}):1\leq l\leq m\\}$
2: for $t=0,1,\ldots$ do
3: Perform Cholesky decomposition $\Sigma_{l}=L_{l}L_{l}^{\prime}$ with
$L_{l}$ lower triangular ($1\leq l\leq m$)
4: for $i=1,\ldots,n$ do
5:
$a_{ikl}\leftarrow(2\pi)^{-p/2}|L_{l}|^{-1}e^{-\|L_{l}^{-1}(x_{ik}-\mu_{l})\|^{2}/2}\
\ (1\leq k,l\leq m)$, $A_{i}\leftarrow(a_{ikl})$
6: for $k=1,\ldots,m$ do
7: for $l=1,\ldots,m$ do
8: Alternately rescale rows and columns of $A_{i}^{-(k,l)}$ to sum to 1
9: Calculate $\mathrm{per}(A_{i}^{-(k,l)})$ with Ryser’s inclusion-exclusion
formula
10: $p_{ikl}\leftarrow a_{ikl}\,\mathrm{per}(A_{i}^{-(k,l)})$
11: end for
12: end for
13: $c_{i}\leftarrow\frac{1}{m}\sum_{k=1}^{m}\sum_{l=1}^{m}p_{ikl}$
14: $w_{ikl}\leftarrow p_{ikl}/c_{i}\ (1\leq k,l\leq m)$ {class membership
probability}
15: end for
16: $\ell^{t}\leftarrow\sum_{i=1}^{n}\log c_{i}$ {log-likelihood up to the additive constant $-n\log m!$}
17: for $l=1,\ldots,m$ do
18: $\mu_{l}\leftarrow\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}w_{ikl}x_{ik}$
19:
$\Sigma_{l}\leftarrow\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m}w_{ikl}(x_{ik}-\mu_{l})(x_{ik}-\mu_{l})^{\prime}$
20: end for
21: $\theta^{t+1}\leftarrow\\{(\mu_{l},\Sigma_{l}):1\leq l\leq m\\}$
22: end for
In statistical problems involving a large number of latent variables such as
(15), the EM algorithm is usefully extended by the so-called _deterministic
annealing EM_ algorithm (DAEM, Ueda and Nakano, 1998). The DAEM is identical
to the EM except that in the E step, the assignment probabilities
$P_{\theta}(I_{ikl}=1|X_{i})$ are raised to a power $\beta\in(0,1]$ and
rescaled to remain valid probabilities. This effectively flattens out the
differences between assignment probabilities, keeping the uncertainty about
cluster/class assignment relatively high. As the number $t$ of iterations
grows, the power $\beta=\beta_{t}$, which represents an inverse temperature
parameter, increases to 1. For $t$ sufficiently large, the DAEM reverts to the EM. In this way the DAEM offers some control over how many iterations are
spent exploring the latent variable space before converging to a set of (often
highly unbalanced) assignment probabilities. In particular, appropriate use of
the DAEM prevents the convergence from happening too fast.
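The tempering step of the DAEM amounts to a one-line modification of the E step. In the notation of Algorithm 5, with `W` denoting the $m\times m$ matrix of membership probabilities $w_{ikl}$ of a unit $i$ (rows indexed by $k$ and summing to 1), a sketch reads:

```r
temper <- function(W, beta) {
  Wb <- W^beta                  # beta < 1 flattens out the differences
  Wb / rowSums(Wb)              # renormalize each row to a probability vector
}
# beta = beta_t is increased towards 1 across iterations,
# e.g. beta_t = min(1, t / t0) for some illustrative burn-in length t0.
```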
### 2.6 Algorithm initialization
The matching methods developed for (1) in the previous sections are local
search procedures. As can be expected, the quality of their solutions largely
depends on their starting points. Several strategies for finding good starting
points are presented hereafter.
_Random initialization._ Utilizing multiple random starting points
$\sigma\in(\mathbf{S}_{m})^{n}$ or $P\in(\Pi_{m})^{n}$ often yields at least
one nearly optimal solution. This strategy is particularly suitable when the
computational cost of optimization is cheap, as is the case with Algorithms
1-2-3.
_Template matching._ Given data matrices
$X_{1},\ldots,X_{n}\in\mathbb{R}^{p\times m}$ and a template matrix
$T\in\mathbb{R}^{p\times m}$, solve the matching problem
$\min_{P_{1},\ldots,P_{n}\in\Pi_{m}}\sum_{i=1}^{n}\left\|X_{i}P_{i}-T\right\|_{F}^{2}.$
(20)
The expediency of template matching comes from the fact that it reduces
${n\choose 2}$ related matching tasks between pairs of data matrices in (1) to
$n$ separate matching tasks between the data and the template. A central
question is: which template to use? Bandelt et al. (1994) propose to either
take a single data matrix as template (_single hub heuristic_), e.g.
$T=X_{1}$, or to examine all data matrices in turn:
$T\in\\{X_{1},\ldots,X_{n}\\}$, and retain the assignment
$P(T)=(P_{1}(T),\ldots,P_{n}(T))$ that yields the lowest value of (1)
(_multiple hub heuristic_). More generally, the template need not be a data
point; it could for example be an estimate of cluster centers based on
previous observations.
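Since $\|X_{i}P_{i}-T\|_{F}^{2}=\|X_{i}\|_{F}^{2}+\|T\|_{F}^{2}-2\langle P_{i},X_{i}^{\prime}T\rangle_{F}$ for any permutation matrix $P_{i}$, problem (20) splits into $n$ independent LAPs. A minimal R sketch under the same conventions as before (the template argument is named `Tpl` to avoid masking R's `T`):

```r
template_match <- function(X, Tpl) {
  lapply(X, function(Xi) {
    m   <- ncol(Xi)
    A   <- crossprod(Xi, Tpl)            # X_i' T
    sig <- solve_LSAP(A - min(A), maximum = TRUE)
    P_i <- matrix(0, m, m)
    P_i[cbind(seq_len(m), as.integer(sig))] <- 1
    P_i
  })
}
```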
_Recursive heuristics._ The recursive heuristics of Bandelt et al. (1994) (see
Section 1) are easily applicable to problem (1). Their algorithm RECUR1 for
example, which is related to the BCA Algorithm 2, is implemented as follows.
The first permutation matrix $P_{1}$ can be selected arbitrarily, say
$P_{1}=I_{m}$. Then for $i=1,\ldots,n-1$, the LAP (9) is changed to
$\max_{P_{i+1}\in\Pi_{m}}\Big{\langle}P_{i+1}\,,\,X_{i+1}^{\prime}\sum_{j=1}^{i}X_{j}P_{j}\Big{\rangle}_{F}\,.$
(21)
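In R, the RECUR1 pass is a one-loop variant of the earlier BCA sketch (illustrative code, same conventions):

```r
recur1 <- function(X) {
  n <- length(X); m <- ncol(X[[1]])
  P <- vector("list", n); P[[1]] <- diag(m)
  S <- X[[1]]                            # running sum of matched matrices
  for (i in seq_len(n - 1)) {
    A   <- crossprod(X[[i + 1]], S)      # X_{i+1}' sum_{j <= i} X_j P_j
    sig <- solve_LSAP(A - min(A), maximum = TRUE)
    P_i <- matrix(0, m, m)
    P_i[cbind(seq_len(m), as.integer(sig))] <- 1
    P[[i + 1]] <- P_i
    S <- S + X[[i + 1]] %*% P_i
  }
  P
}
```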
## 3 Numerical study
This section presents experiments that assess the numerical and computational
performances of the matching methods of Section 2 and other relevant methods
from the literature. Three performance measures are reported: the attained
objective value in the matching problem (1), the Rand index (Rand, 1971) for
evaluating agreement between matchings and data labels, and the computation
time.
#### Simulation setup.
The simulations are based on handwritten digits data available on the UCI
machine learning repository (archive.ics.uci.edu). Unlike classification
problems, the task at hand is to match collections of digits without using
label information. The data are normalized bitmaps of handwritten digits.
After downsampling, images of dimensions $8\times 8$ are obtained with integer
elements in $\\{0,\ldots,16\\}$. The training data used for the simulations
contain 3823 images contributed by 30 people, with about 380 examples for each
digit $0,\ldots,9$. A principal component analysis (PCA) is carried out
separately for each of the $m=10$ digit classes (after vectorizing the
$8\times 8$ input matrices) and the first 25 PCs are retained for each class, which represents at least 95% of the class variance. Artificial data are then
generated according to the model
$x_{ik}=\sum_{r=1}^{25}\xi_{ikr}\phi_{kr}+\varepsilon_{ik}$ for $1\leq i\leq
n$ and $1\leq k\leq m$, where the $\phi_{kr}$ are PC vectors of length $p=64$
and the $\xi_{ikr}$ are independent normal random variables with mean zero and
standard deviation given by the PCA. A small amount of Gaussian white noise
$\varepsilon$ with standard deviation 2.5 is added to the simulated data,
which corresponds to 10% of the standard deviation of the original data. The
number $n$ of statistical units varies in
$\\{5,10,20,30,40,50,75,100,200,500,1000\\}$. For each value of $n$, the
simulation is replicated 100 times. The simulations are run in the R
programming environment. Code for the simulations and the R package matchFeat
implementing all methods of this paper are available at
github.com/ddegras/matchFeat.
#### Matching methods.
The methods of Section 2 are combined in three steps: initialization, main
algorithm, and optional post-processing. Four initializations are considered:
identity matrix (ID), 100 random starting points (R100), multiple-hub
heuristic (HUB), and recursive heuristic (REC). A fifth initialization, which clusters data vectors by their digit labels (LBL), is also examined as a
benchmark. This initialization is infeasible in practice; it may also not
minimize (1) although it is often nearly optimal. The main algorithms are
$K$-means matching (KM), block coordinate ascent (BCA), and the Frank-Wolfe
method (FW). The pairwise interchange algorithm (2X) and EM algorithm for
constrained Gaussian mixture (EM) are used for post-processing only as they
were seen to perform poorly on their own (that is, with any of the proposed
initializations) in preliminary experiments. The simulations also comprise
matching methods representative of the literature:
* -
_Integer linear program (ILP)._ The standard ILP formulation of the MDADC
(3)-(4) (e.g. Kuroki and Matsui, 2009; Tauer and Nagi, 2013) involves
${n\choose 2}m^{2}$ binary variables (the number of edges in a complete
$n$-partite graph with $m$ nodes in each subgraph), $n(n-1)m$ equality
constraints and ${n\choose 3}m^{3}$ inequality constraints (so-called triangle
or clique constraints). Another formulation of the ILP expresses the triangle
constraints with reference to one of the $n$ subgraphs, thereby reducing their
number to ${n\choose 2}m^{3}$.
* -
_ILP relaxation and integer quadratic program (IQP)._ Two of the methods in
Kuroki and Matsui (2009) are considered: the first consists in dropping the
triangle constraints, solving ${n\choose 2}$ separate assignment problems, and
recovering a proper solution with multiple-hub heuristics. The second
expresses the triangle constraints with reference to one of the $n$ subgraphs
as in the above ILP, and formulates the objective function only in terms of
those edges starting from and arriving to the reference subgraph. This reduces
the number of optimization variables to ${n\choose 2}m^{2}$ but transforms the
linear program into a quadratic one.
* -
_Constrained $K$-means._ The COP-KMEANS (Wagstaff et al., 2001), MPCK-MEANS
(Bilenko et al., 2004), LCVQE (Pelleg and Baras, 2007), and CCLS (Hiep et al., 2016) algorithms all handle equivalence constraints and can thus be applied to (1). They are conveniently implemented in the R package conclust of the latter authors. COP-KMEANS and CCLS treat equivalence constraints as hard
constraints and thus exactly solve (1). MPCK-MEANS and LCVQE handle
equivalence constraints as soft constraints (in addition, MPCK-MEANS
incorporates metric learning) and thus approximately solve (1).
Going forward, these methods will be referred to as ILP, KUR-ILP, KUR-IQP,
COP-KM, MPC-KM, LCVQE, and CCLS. While they are applicable to the sum-of-
squares matching problem (1), these methods are not geared towards it and
should not be expected to outperform the methods of this paper. Lagrangian
heuristics (e.g. Tauer and Nagi, 2013; Natu et al., 2020) are not included in
the simulations because their efficient implementation requires computer
clusters and/or specialized computing architecture, whereas the focus of this
paper is on methods executable on a single machine.
###### Remark.
Initial attempts were made to obtain lower bounds on the global minimum in (1)
using a relaxation method of Bandelt et al. (2004). However, the resulting
bounds are far too small, a fact already noted by these authors in the case of
non-Euclidean distances $d$ (recall that in (1), $d$ is the _squared_
Euclidean distance).
#### Results.
_Optimization accuracy._ To facilitate comparisons, we discuss the relative
error of each method averaged across 100 replications for each $n$. The
relative error of a method is defined as the ratio of its attained objective
value in (1) to the minimum objective value across all methods, minus 1. Full
results are available in Table 1. Hereafter and in the table, methods are
listed by order of best performance.
Method | $n=5$ | $n=10$ | $n=20$ | $n=30$ | $n=40$ | $n=50$ | $n=75$ | $n=100$ | $n=200$ | $n=500$ | $n=1000$
---|---|---|---|---|---|---|---|---|---|---|---
R100-BCA | 2E-11 (1E-10) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)
R100-BCA-2X | 2E-11 (1E-10) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | | |
KUR-IQP | 2E-11 (1E-10) | | | | | | | | | |
ILP | 0 (0) | 3E-5 (3E-4) | | | | | | | | |
LBL-BCA | 2E-3 (4E-3) | 1E-3 (3E-3) | 4E-4 (1E-3) | 3E-4 (1E-3) | 1E-4 (4E-4) | 2E-4 (6E-4) | 5E-5 (3E-4) | 2E-5 (1E-4) | 3E-6 (3E-5) | 3E-8 (1E-7) | 1E-8 (6E-8)
LBL-BCA-2X | 2E-3 (3E-3) | 8E-4 (2E-3) | 2E-4 (5E-4) | 1E-4 (4E-4) | 6E-5 (2E-4) | 1E-4 (4E-4) | 4E-5 (2E-4) | | | |
HUB-BCA-2X | 1E-3 (2E-3) | 1E-3 (3E-3) | 4E-4 (1E-3) | 2E-4 (1E-3) | 2E-4 (6E-4) | 3E-4 (9E-4) | 6E-5 (3E-4) | | | |
HUB-BCA | 1E-3 (3E-3) | 2E-3 (3E-3) | 8E-4 (2E-3) | 3E-4 (1E-3) | 5E-4 (2E-3) | 5E-4 (1E-3) | 2E-4 (1E-3) | 1E-4 (7E-4) | 1E-4 (6E-4) | 2E-5 (2E-4) | 1E-8 (5E-8)
LBL-FW-2X | 4E-3 (6E-3) | 2E-3 (3E-3) | 5E-4 (9E-4) | 2E-4 (4E-4) | 2E-4 (4E-4) | 2E-4 (4E-4) | 9E-5 (2E-4) | | | |
REC-BCA-2X | 3E-3 (5E-3) | 2E-3 (5E-3) | 8E-4 (3E-3) | 6E-4 (2E-3) | 3E-4 (8E-4) | 2E-4 (7E-4) | 8E-5 (3E-4) | | | |
LBL-KM-2X | 4E-3 (5E-3) | 2E-3 (3E-3) | 5E-4 (9E-4) | 2E-4 (4E-4) | 2E-4 (4E-4) | 2E-4 (4E-4) | 9E-5 (2E-4) | | | |
ID-BCA-2X | 3E-3 (6E-3) | 2E-3 (3E-3) | 1E-3 (3E-3) | 6E-4 (2E-3) | 4E-4 (1E-3) | 4E-4 (1E-3) | 1E-4 (5E-4) | | | |
R100-FW-2X | 7E-3 (9E-3) | 3E-3 (4E-3) | 2E-4 (5E-4) | 4E-5 (1E-4) | 2E-5 (7E-5) | 3E-6 (1E-5) | 3E-6 (2E-5) | 2E-6 (6E-6) | | |
REC-BCA | 4E-3 (7E-3) | 4E-3 (8E-3) | 1E-3 (3E-3) | 1E-3 (3E-3) | 8E-4 (2E-3) | 6E-4 (2E-3) | 5E-4 (2E-3) | 1E-4 (5E-4) | 2E-4 (8E-4) | 3E-4 (1E-3) | 7E-5 (7E-4)
R100-KM-2X | 9E-3 (1E-2) | 3E-3 (4E-3) | 2E-4 (5E-4) | 4E-5 (1E-4) | 2E-5 (7E-5) | 3E-6 (1E-5) | 3E-6 (2E-5) | 2E-6 (6E-6) | | |
ID-BCA | 5E-3 (9E-3) | 5E-3 (8E-3) | 3E-3 (6E-3) | 2E-3 (4E-3) | 8E-4 (2E-3) | 9E-4 (2E-3) | 5E-4 (2E-3) | 6E-4 (1E-3) | 3E-4 (1E-3) | 4E-4 (1E-3) | 1E-8 (5E-8)
HUB-FW-2X | 4E-3 (6E-3) | 4E-3 (6E-3) | 1E-3 (1E-3) | 8E-4 (1E-3) | 6E-4 (8E-4) | 4E-4 (8E-4) | 2E-4 (4E-4) | | | |
HUB-KM-2X | 5E-3 (6E-3) | 5E-3 (6E-3) | 1E-3 (2E-3) | 8E-4 (1E-3) | 6E-4 (8E-4) | 4E-4 (8E-4) | 2E-4 (4E-4) | | | |
REC-KM-2X | 6E-3 (9E-3) | 5E-3 (6E-3) | 2E-3 (4E-3) | 1E-3 (3E-3) | 9E-4 (2E-3) | 4E-4 (1E-3) | 2E-4 (5E-4) | | | |
REC-FW-2X | 6E-3 (8E-3) | 5E-3 (6E-3) | 3E-3 (6E-3) | 1E-3 (3E-3) | 9E-4 (2E-3) | 4E-4 (1E-3) | 2E-4 (5E-4) | | | |
LBL-FW | 2E-2 (2E-2) | 8E-3 (7E-3) | 2E-3 (2E-3) | 1E-3 (1E-3) | 7E-4 (7E-4) | 5E-4 (8E-4) | 3E-4 (5E-4) | 9E-5 (1E-4) | 2E-5 (4E-5) | 3E-6 (4E-6) | 8E-7 (7E-7)
LBL-KM | 2E-2 (2E-2) | 8E-3 (7E-3) | 2E-3 (2E-3) | 1E-3 (1E-3) | 7E-4 (7E-4) | 5E-4 (8E-4) | 3E-4 (5E-4) | 9E-5 (1E-4) | 2E-5 (4E-5) | 3E-6 (4E-6) | 8E-7 (7E-7)
ID-KM-2X | 9E-3 (1E-2) | 7E-3 (9E-3) | 4E-3 (8E-3) | 2E-3 (5E-3) | 2E-3 (5E-3) | 1E-3 (3E-3) | 7E-4 (3E-3) | | | |
ID-FW-2X | 1E-2 (1E-2) | 6E-3 (8E-3) | 4E-3 (8E-3) | 2E-3 (3E-3) | 1E-3 (3E-3) | 1E-3 (4E-3) | 8E-4 (3E-3) | | | |
LBL | 2E-2 (2E-2) | 1E-2 (9E-3) | 5E-3 (3E-3) | 3E-3 (2E-3) | 3E-3 (2E-3) | 3E-3 (2E-3) | 2E-3 (1E-3) | 2E-3 (9E-4) | 2E-3 (6E-4) | 2E-3 (4E-4) | 2E-3 (3E-4)
HUB-KM | 3E-2 (1E-2) | 2E-2 (1E-2) | 6E-3 (5E-3) | 3E-3 (3E-3) | 2E-3 (3E-3) | 1E-3 (2E-3) | 9E-4 (2E-3) | 3E-4 (9E-4) | 2E-4 (7E-4) | 2E-5 (2E-4) | 8E-7 (8E-7)
HUB-FW | 3E-2 (1E-2) | 2E-2 (1E-2) | 6E-3 (5E-3) | 3E-3 (3E-3) | 2E-3 (3E-3) | 1E-3 (2E-3) | 9E-4 (2E-3) | 3E-4 (9E-4) | 2E-4 (7E-4) | 2E-5 (2E-4) | 8E-7 (8E-7)
REC-KM | 2E-2 (2E-2) | 2E-2 (1E-2) | 1E-2 (1E-2) | 5E-3 (6E-3) | 3E-3 (4E-3) | 3E-3 (6E-3) | 1E-3 (3E-3) | 9E-4 (3E-3) | 5E-4 (1E-3) | 5E-4 (2E-3) | 1E-4 (9E-4)
REC-FW | 2E-2 (2E-2) | 2E-2 (1E-2) | 1E-2 (1E-2) | 5E-3 (6E-3) | 3E-3 (4E-3) | 3E-3 (6E-3) | 1E-3 (3E-3) | 9E-4 (3E-3) | 5E-4 (1E-3) | 5E-4 (2E-3) | 1E-4 (9E-4)
2X | 1E-2 (1E-2) | 7E-3 (7E-3) | 5E-3 (5E-3) | 4E-3 (4E-3) | | | | | | |
R100-FW | 9E-2 (3E-2) | 1E-2 (7E-3) | 7E-4 (1E-3) | 1E-4 (2E-4) | 8E-5 (1E-4) | 3E-5 (5E-5) | 1E-5 (3E-5) | 7E-6 (1E-5) | 2E-6 (4E-6) | 3E-7 (5E-7) | 1E-7 (2E-7)
R100-KM | 9E-2 (3E-2) | 1E-2 (7E-3) | 7E-4 (1E-3) | 1E-4 (2E-4) | 8E-5 (1E-4) | 3E-5 (5E-5) | 1E-5 (3E-5) | 7E-6 (1E-5) | 2E-6 (4E-6) | 3E-7 (5E-7) | 1E-7 (2E-7)
REC | 2E-2 (2E-2) | 2E-2 (2E-2) | 2E-2 (2E-2) | 2E-2 (1E-2) | 2E-2 (1E-2) | 2E-2 (1E-2) | 2E-2 (1E-2) | 1E-2 (1E-2) | 9E-3 (8E-3) | 5E-3 (5E-3) | 4E-3 (4E-3)
ID-KM | 3E-1 (9E-2) | 9E-2 (5E-2) | 3E-2 (2E-2) | 1E-2 (1E-2) | 5E-3 (9E-3) | 5E-3 (9E-3) | 3E-3 (6E-3) | 2E-3 (5E-3) | 1E-3 (2E-3) | 5E-4 (2E-3) | 3E-4 (1E-3)
ID-FW | 3E-1 (9E-2) | 9E-2 (5E-2) | 3E-2 (2E-2) | 1E-2 (1E-2) | 5E-3 (9E-3) | 5E-3 (9E-3) | 3E-3 (6E-3) | 2E-3 (5E-3) | 1E-3 (2E-3) | 5E-4 (2E-3) | 3E-4 (1E-3)
HUB | 3E-2 (1E-2) | 3E-2 (1E-2) | 4E-2 (9E-3) | 4E-2 (9E-3) | 4E-2 (8E-3) | 5E-2 (7E-3) | 4E-2 (7E-3) | 4E-2 (6E-3) | 4E-2 (6E-3) | 4E-2 (5E-3) | 4E-2 (4E-3)
KUR-ILP | 3E-2 (1E-2) | 3E-2 (1E-2) | 4E-2 (9E-3) | 4E-2 (9E-3) | 4E-2 (8E-3) | 5E-2 (7E-3) | 4E-2 (7E-3) | 4E-2 (6E-3) | 4E-2 (6E-3) | 4E-2 (5E-3) | 4E-2 (4E-3)
COP-KM | 2E-1 (6E-2) | 1E-1 (5E-2) | 8E-2 (3E-2) | 7E-2 (3E-2) | 6E-2 (2E-2) | 6E-2 (2E-2) | 5E-2 (1E-2) | 5E-2 (1E-2) | | |
MPC-KM | 3E-1 (7E-2) | 2E-1 (7E-2) | 1E-1 (4E-2) | 8E-2 (3E-2) | 8E-2 (2E-2) | 7E-2 (3E-2) | 7E-2 (2E-2) | 7E-2 (2E-2) | | |
EM | 5E-3 (9E-3) | 5E-3 (8E-3) | 3E-3 (6E-3) | 2E-3 (4E-3) | 8E-4 (2E-3) | 9E-4 (2E-3) | 5E-1 (1E-2) | 5E-1 (1E-2) | 5E-1 (7E-3) | |
LCVQE | 3E-1 (7E-2) | 3E-1 (6E-2) | 2E-1 (6E-2) | 2E-1 (6E-2) | 2E-1 (5E-2) | 2E-1 (5E-2) | 2E-1 (6E-2) | 2E-1 (6E-2) | 2E-1 (5E-2) | 2E-1 (6E-2) | 2E-1 (5E-2)
CCLS | 4E-2 (3E-2) | 7E-2 (3E-2) | 1E-1 (5E-2) | 3E-1 (6E-2) | 3E-1 (4E-2) | 3E-1 (3E-2) | 4E-1 (3E-2) | 4E-1 (3E-2) | 4E-1 (2E-2) | |
R100 | 4E-1 (4E-2) | 4E-1 (3E-2) | 5E-1 (2E-2) | 5E-1 (2E-2) | 5E-1 (1E-2) | 5E-1 (1E-2) | 5E-1 (1E-2) | 5E-1 (1E-2) | 5E-1 (7E-3) | 5E-1 (5E-3) | 5E-1 (3E-3)
ID | 5E-1 (6E-2) | 5E-1 (4E-2) | 5E-1 (2E-2) | 5E-1 (2E-2) | 5E-1 (2E-2) | 5E-1 (1E-2) | 5E-1 (1E-2) | 5E-1 (1E-2) | 5E-1 (7E-3) | 5E-1 (5E-3) | 5E-1 (3E-3)
Table 1: Simulations: optimization performance in the matching problem (1).
The relative error (average across 100 replications) is displayed with
standard deviation in parentheses. From top to bottom of the table: best to
worst performance. Missing values are due to execution timeout (running time
$>300s$).
R100-BCA is the best method for each $n$, attaining the best objective value
in virtually every replication. For small values $n\in\\{5,10\\}$, ILP and
KUR-IQP also achieve best performance. The next best methods are LBL-BCA-2X,
HUB-BCA-2X, LBL-BCA, and HUB-BCA, with a relative error decreasing from order
$10^{-3}$ for $n=5$ to order $10^{-4}$ or $10^{-5}$ for $n=100$. Recall that
the LBL initialization is an oracle of sorts since data labels are typically
not available in matching problems. The other combinations of methods of this
paper yield slightly higher yet comparable relative error that goes roughly
from order $10^{-2}$ for $n=5$ to the range $(10^{-6},10^{-4})$ for $n=100$.
As can be expected, the ID and REC initializations yield slightly worse
performance whereas R100 provides the best results. BCA is less sensitive to
the initialization methods than FW and KM. EM, which is initialized with ID-
BCA, gives reasonable results for $n\leq 50$ (relative error of order
$10^{-3}$) although it does not improve upon BCA. For $n>50$ however its
performance with respect to (1) severely deteriorates and its relative error
climbs to about 0.4.
Among the competitor methods, KUR-ILP has the best performance, with a
relative error of order $10^{-2}$ across values of $n$. COP-KM and MPC-KM have
relative errors that decrease from order $10^{-1}$ for small $n$ to order
$10^{-2}$ for $n=100$. LCVQE has a slowly decreasing relative error that goes
from 0.3 for $n=5$ to 0.2 for $n=100$. CCLS sees its relative error increase
from order $10^{-2}$ for small $n$ to 0.4 for $n=100$.
_Rand index._ The Rand index (RI) is a measure of agreement between two
partitions of a set; it is suitable for matching problems which produce
clusters and not individual label predictions. Here the data partition
produced by a matching method is compared to the partition induced by the data
classes, i.e. their underlying digits in $\\{0,\ldots,9\\}$. While the goal of
matching is to produce homogeneous data clusters and not to maximize agreement
between the produced clusters and some underlying class/LBL-induced clusters,
these two goals are aligned in the simulations because data vectors generated
by a same digit class tend to be much closer to each other than to vectors
generated by other digit classes.
Given a set $D$ of size $n$ and two partitions $X$ and $Y$ of $D$ into
clusters, the RI is defined as the ratio $(a+b)/{n\choose 2}$, where $a$ is
the number of pairs of elements in $D$ that are in a same cluster both in $X$
and $Y$, and $b$ is the number of pairs of elements in $D$ that are in
different clusters both in $X$ and $Y$. This can be interpreted as the
fraction of correct decisions to assign two elements of $D$ either to the same
cluster or to different ones.
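This definition translates directly into R; in the sketch below, `x` and `y` are integer vectors giving, for each element of $D$, its cluster in the two partitions.

```r
rand_index <- function(x, y) {
  n <- length(x)
  same_x <- outer(x, x, `==`)          # pairs together in partition X
  same_y <- outer(y, y, `==`)          # pairs together in partition Y
  agree  <- (same_x == same_y)         # pairs treated alike in both partitions
  sum(agree[upper.tri(agree)]) / choose(n, 2)
}
```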
The RI of each method (averaged across 100 replications) is displayed in
Figure 1 as a function of $n$. Values closer to 1 indicate better agreement
between matching outputs and data labels (digits). For BCA, FW, and KM, the RI
starts from a baseline in the range $[0.92,0.96]$, reaches 0.99 around
$n=100$, and then stays at this level for $n>100$. The REC initialization has
a RI that increases from 0.94 for $n=5$ to 0.98 for $n=1000$. For COP-KM, MPC-
KM, LCVQE, KUR-ILP, and HUB, the RI slowly increases from about 0.9 to 0.95
with $n$. R100 and ID are two initializations nearly or full independent of
the data labels, which are randomly shuffled. They are tantamount to random
guessing and their baseline RI of 0.82 matches its theoretical expectation
($1-(2m-2)/m^{2}$). EM and CCLS show a RI that rapid decreases at or below
random guessing levels, in accord with their modest performance in the
optimization (1).
Figure 1: Rand index versus sample size $n$ (average across 100 replications).
_Running time._ The running times of the algorithms are displayed in Figure 2.
During the simulations, algorithms were given 300 seconds to complete
execution, after which they were interrupted. Accordingly any value 300s on
the figure (often largely) underestimates the actual computation time. The
algorithms can be divided in two groups: those who can solve (1) for $n=1000$
in 100s or far less, and those that time out (execution time over 300s) for
$n\leq 500$ or far less. They are described below by order of decreasing
speed.
Figure 2: Running time versus sample size $n$ (average across 100
replications).
BCA, FW, and KM are the fastest methods with running times of order $10^{-3}$
to $10^{-1}$ seconds across values of $n$. For $n=1000$, they are one order of
magnitude faster than the next best method (LCVQE). The HUB and REC
initializations, although slower than arbitrary starting points like identity
or random permutations, enable overall faster computations because good
starting points reduce the number of iterations required for the main
algorithm to converge. Completion of 100 runs of BCA, FW, or KM based on the
R100 initialization takes between 200 and 250 times the execution of a single
run based on HUB or REC (instead of roughly 100). This is because the latter
heuristics find good starting points whereas some (or many) of the 100 random
starting points will be bad and require many iterations for the main algorithm
to converge. KUR-ILP enjoys the same speed as BCA, FW, and KM for small
$n$ but its running time appears to scale polynomially with $n$. LCVQE appears
to scale linearly with $n$ but with a much larger multiplicative constant than
BCA, FW, and KM. Its running time is of order $10^{-2}$s for $n=5$ and 1s for
$n=75$. The running time of CCLS grows steeply with $n$ and exceeds the 300s
limit for $n\geq 500$. MPC-KM, COP-KM and EM are very slow, at least in their
R implementation, and they time out (i.e. their execution times exceed 300s)
for $n\geq 200$. Their computational load seems to grow exponentially with
$n$. In the case of the EM, the computational bottleneck is the evaluation of
matrix permanents. ILP, KUR-IQP and 2X are by far the slowest methods in the
simulations. The first two stall and time out as soon as $n$ exceeds a few
units, although they produce good results when $n\leq 5$. The computational
load of 2X scales exponentially with $n$ (average computation time 110s for
$n=30$); it is much higher when using the ID, R100, REC, and HUB
initializations than when applied as a post-processing step following, say,
the BCA method.
_Summary of simulations._
* -
BCA is the fastest and most accurate of all studied methods. It provides
excellent accuracy when initialized with REC or HUB. For best accuracy, the
R100 initialization should be used at the cost of increased yet still
manageable computations.
* -
BCA, KM, and FW are overall extremely fast and can handle datasets of size
$n=10^{3}$ and up without difficulty. KM and FW are slightly less accurate
than BCA in terms of optimization performance (relative error between
$10^{-3}$ and $10^{-4}$) and Rand index.
* -
2X is computationally costly and fairly inaccurate when used on its own, i.e.
with an arbitrary starting point. It largely improves the accuracy of KM and
FW solutions but not of BCA solutions. It is mostly beneficial for small to moderate $n$.
* -
HUB and REC are not sufficiently accurate to be used on their own but they
provide good starting points to more sophisticated matching methods. HUB uses
data more extensively than REC and yields slightly better performance.
* -
For moderate to large $n$, EM shows poor performance in both computations (due
to the evaluations of matrix permanents) and optimization. Its performance is
satisfactory for $n\leq 50$, possibly because of the BCA initialization.
* -
ILP and KUR-IQP are only computationally feasible in very small samples
($n\leq 10$ or so). In this setup they virtually always find the global
minimum of (1).
* -
KUR-ILP is relatively fast (it solves (1) for $n=1000$ in 50s) but not highly
accurate (relative error between 3% and 5%). LCVQE is both faster and far less
accurate: it solves (1) for $n=1000$ in 13s but has relative error in
$(0.2,0.3)$ for all values of $n$.
* -
COP-KM and MPC-KM have very similar profiles in computation time and
optimization accuracy. Their relative error goes from 0.2-0.3 for $n=5$ to
0.05-0.06 for $n=100$. They are not able to handle large datasets (at least
not in their R implementation) as their computations stall for $n\geq 200$.
CCLS only performs reasonably well for $n\leq 10$. Its Rand index and relative
error deteriorate quickly as $n$ increases and its computations time out for
$n\geq 500$.
## 4 Application to fMRI data
In this section we harness the matching problem (1) and its proposed solutions
to analyze resting-state functional magnetic resonance imaging (rs-fMRI) data,
the goal being to explore the dynamic functional connectivity (DFC) of the
brain. In short, functional connectivity (FC) relates to the integration of
brain activity, that is, how distant brain regions coordinate their activity
to function as a whole. The dynamic nature of FC, in particular its dependence
on factors such as task-related activity, psychological state, and cognitive
processes, is well established in neuroimaging research (e.g. Chang and
Glover, 2010; Handwerker et al., 2012; Hutchison et al., 2013).
The present analysis aims to extract measures of DFC from individual subject
data and match these measures across subjects to uncover common patterns and
salient features. The data under consideration are part of the ABIDE
preprocessed data (Craddock et al., 2013), a large corpus of rs-fMRI
measurements recorded from subjects diagnosed with autism spectrum disorder
and from control subjects. These data and detailed descriptions are available
at preprocessed-connectomes-project.org/abide/. We selected the following
preprocessing options: Connectome Computation System (CCS) pipeline, spatial
averaging over 116 regions of interest (ROI) defined by the AAL brain atlas,
bandpass temporal filtering, no global signal regression. For simplicity, we
only used data from control subjects and discarded data that did not pass
all quality control tests. This resulted in $n=308$ subjects with fMRI time
series of average length about 200 scans (SD=62).
#### Subject-level analysis.
Vector autoregressive (VAR) models are widely used to assess FC in fMRI data
(Valdés-Sosa et al., 2005; Friston et al., 2013; Ting et al., 2017). Here we
represent the fMRI time series of a subject by a piecewise VAR model of order
1:
$y_{t}=A_{t}y_{t-1}+b_{t}+\varepsilon_{t}\qquad(1\leq t\leq T)$ (22)
where $y_{t}$ is an fMRI measurement vector of length 116, $A_{t}$ an unknown
regression matrix encoding FC dynamics, $b_{t}$ an unknown baseline vector,
and $\varepsilon_{t}$ a random noise vector with multivariate normal
distribution $N(0,Q_{t})$. The $A_{t}$ are assumed sparse, reflecting the fact
that only a small number of ROIs at time $t-1$ are predictive of ROI activity
at time $t$. The model parameters $(A_{t},b_{t},Q_{t})$ are assumed piecewise
constant with few change points, indicating that FC states persist for some
time (say, between 5 and 50 scans) before the brain switches to a different FC
state.
For each subject, the task at hand is to simultaneously detect change points
in (22) and estimate $(A_{t},b_{t})$ over the associated time segments.
($Q_{t}$ is of secondary importance here and can be ignored). The sparse group
fused lasso (SGFL) approach of Degras (2020) is designed for this purpose. To
simplify the task of determining a suitable range for the SGFL regularization
parameters and calculating regularization paths, we employ the two-step
procedure of this paper. The first step detects change points via the group
fused lasso (e.g. Bleakley and Vert, 2011); the second step recovers sparse
estimates of the $A_{t}$ separately on each segment via the standard lasso
(Tibshirani, 1996).
After fitting the regularization paths, a single lasso estimate
$(\hat{A}_{t},\hat{b}_{t})$ is selected for each segment by the Akaike
Information Criterion. Among all generated model segmentations, we retain the
one with the most segments satisfying the following criteria: (i) _length_: the segment must have at least 5 scans, (ii) _goodness of fit_: the lasso fit must have a deviance ratio of at least 0.3, and (iii) _distinctness_: the
parameter estimate $\hat{A}_{t}$ for the segment must have at least 10%
relative difference with estimates of other selected segments. To facilitate
interpretation and remove noisy components, at most 10 segments are retained per subject.
#### Group-level analysis.
Following the subject-level analysis, a set of change points and associated
model parameter estimates is available for each subject, say
$\\{(\hat{A}_{ik},\hat{b}_{ik},\hat{T}_{ik}):1\leq k\leq m_{i}\\}$ with
$\hat{T}_{ik}$ the $k$th change point and $m_{i}$ the number of segments for
the $i$th subject ($1\leq i\leq n$). The regression matrices $\hat{A}_{ik}$
provide informative FC measures and could in principle be used for group-level
comparisons. They are however highly sparse and matching them using the
squared Euclidean distance of problems (1)-(2) does not produce sensible
results. We thus calculate the empirical correlation matrices on each segment
$\\{\hat{T}_{ik},\ldots,\hat{T}_{i(k+1)}-1\\}$ and take them as inputs for the
group analysis. See e.g. Wang et al. (2014) for a review of common FC
measures in neuroimaging. After discarding correlation matrices based on short
segments (10 scans or less, for increased estimation accuracy) and extracting
the lower halves of the remaining matrices, we obtain a set $\\{x_{ik}:1\leq
i\leq 306,1\leq k\leq m_{i}\\}$ of 1801 correlation vectors of size
$p=116\times 115/2=6670$. The number $m_{i}$ of vectors per subject varies in
the range $[1,10]$ with an average of 5.88 (SD=1.77). The unbalanced matching
problem (2) is then solved for $K\in\\{10,20,\ldots,100\\}$ using a
generalized version of the BCA Algorithm 2. Based on the inspection of the
cluster centers and cluster sizes, we retain the matching based on $K=100$
clusters. With this choice, cluster sizes are in the range $[12,28]$
(mean=18.01, SD=4.16). Smaller values of $K$, say $K\geq 50$, would be equally
fine for data exploration. $K$ should however not be too small so as to avoid
large clusters in which fine details of FC are averaged out in the cluster
center and only large-scale features remain.
Figure 3 displays the 100 resulting cluster centers, i.e. the average
correlation matrices of the clusters. For easier visualization and
interpretation, the ROI-level correlations are aggregated into six well
established _resting state networks_ (RSN): the attentional network (26 ROIs),
auditory network (6 ROIs), default mode network (32 ROIs), sensorimotor
network (12 ROIs), subcortical network (8 ROIs), and visual network (14 ROIs).
A list of the ROI names and associated RSNs is given in Appendix A. Note that
some ROIs do not belong to any known functional networks while others are
recruited in two networks. The visual network and auditory network have strong
intracorrelation (0.59 and 0.64 on average across cluster centers,
respectively, not including TPOsup in the auditory network). The subcortical
network and sensorimotor network show moderate internal correlation (0.51 on
average each). The default mode and attentional networks comprise more ROIs
and are usually less correlated (0.36 and 0.40 on average, respectively). The
hippocampus (HIP), parahippocampal gyrus (PHG), and amygdala (AMYG) cluster
together fairly strongly (average correlation 0.53). Applying community
detection algorithms to each cluster center with the R package `igraph`, we
noticed that ROIs from the visual network are virtually always in the same
community; the same holds true for the subcortical network. The strongest
correlations found between RSNs are the following: auditory–sensorimotor (0.38
on average across clusters), attentional–default mode (0.36),
attentional–sensorimotor (0.36), and sensorimotor–visual (0.35).
Figure 3: rs-fMRI data analysis. Each column represents the center of a
cluster of matched features, that is, (half) a correlation matrix averaged
across cluster members (subjects) and across ROIs of resting state networks.
ATN: attentional network, AUD: auditory network, DMN: default mode network,
SMT: sensorimotor network, SUB: subcortical network, VIS: visual network.
A remarkable feature (not visible in Figure 3) is the strong positive
correlation between the Rolandic Operculum (ROL) and the regions PUT
(subcortical network), PoCG, PreCG, and SMG (sensorimotor), and HES, STG
(auditory) (between 0.42 and 0.67). In addition, CB9.R, VERMIS10, CB10.R,
PCG.L, VERMIS9 exhibit consistent negative correlation (or at least lower
average correlation) with most other ROIs. In particular, CB9.R (cerebellum)
has 36.5% of negative correlations with other ROIs whereas the overall
proportion of negative correlations in the 100 average correlation matrices is
only 10.6%.
Figure 4 shows interesting examples of average correlation matrices (cluster
centers) at the ROI level. Cluster 5 shows strong positive correlation within
the auditory, subcortical, and visual networks, and in the small groups (HIP,
PHG, AMYG), (CRUS1, CRUS2), and (CB3–CB6, VERMIS1–VERMIS7). ROL has moderate
to strong negative correlation with CRUS1, CRUS2 and regions from the
subcortical network (dark blue stripe towards the top and left) and strong
positive correlation with PoCG, SMG (sensorimotor) and HES, STG (auditory).
The auditory and sensorimotor networks have moderate to strong positive
correlation. Cluster 14 shows clear blocking structure along the diagonal
(correlation within RSN) as well as anticorrelation patterns between CAU, PUT,
PAL, THA (subcortical) and ROL, PoCG (sensorimotor), PCL (sensorimotor); and
between PCG (default mode) and PreCG (sensorimotor), ROL, PoCG (sensorimotor),
PCL (sensorimotor). Community detection reveals three large and heterogeneous
communities (sizes 43, 40, 36). Cluster 19 displays moderate to strong
negative correlation (-0.55,-0.25) between IPL, SMG, ROL, CB10.R on the one
hand and about 40 other ROIs on the other. The alternating clear and dark
lines in cluster 27 reveal lateralized anticorrelation patterns between ROIs
in the attentional network on the left side of the brain and most other ROIs
in the brain. Cluster 42 shows two roughly uncorrelated blocks, a very large
one with strong intracorrelation and a smaller one (CRUS, CB, VERMIS) with
weaker intracorrelation. Cluster 88 displays a checkered correlation structure
with strong anticorrelation between (CRUS, CB, VERMIS) and the rest of the
brain.
Figure 4: rs-fMRI data analysis. Examples of cluster centers (average
correlation matrices) derived from matching individual correlation matrices
across subjects. Each displayed matrix corresponds to a cluster of 14 to 23
subjects.
_Summary of the data analysis._ The data analysis has established that the
matching approach (1)-(2) provides scientifically meaningful insights into DFC
at the group level. By inspecting the cluster centers (average correlation
matrices) produced by the matching process, one recovers large-scale patterns
consistent with neuroscientific knowledge. For example, known resting state
networks are clearly reflected in the blocking structure of the cluster
centers (see Figure 4). But the cluster centers can also generate new insights
and hypotheses. For example, the Heschl gyrus (HES) is not systematically
included in the auditory network but, according to our analysis, it should be.
Similarly, the ROI TPOsup (temporal lobe: superior temporal gyrus), although
it is near to or part of the auditory cortex, has shown only weak correlation
with the other ROI of the auditory network, Superior temporal gyrus (STG).
These elements may lead to a more nuanced understanding of the auditory
network. Other remarkable findings include the strong anticorrelations found
between the Rolandic operculum (ROL), the cerebellum (CER) and the vermis
(VERMIS) on the one hand and (a large part of) the rest of the brain on the
other. Importantly, by design, each of the clusters formed by the matching
process highlights commonalities _between_ subjects and not _within_ subjects.
This is in contrast with unconstrained clustering methods (e.g. $K$-means
clustering) whose clusters may consist of (vectors from) a small number of subjects, or even a single subject in extreme cases.
## 5 Discussion
We have sought to efficiently match feature vectors in a one-to-one fashion
across large collections of datasets or statistical units. In applications
where statistical units are matched in pairs, this task is conveniently framed
as a multidimensional assignment problem with decomposable costs (MDADC).
Taking the squared Euclidean distance as dissimilarity function in the MDADC
enables tremendous computational speedup by transforming ${n\choose 2}$
related matching problems between all pairs of datasets into $n$ separate
matching problems between each dataset and a template. Leveraging this idea,
we have developed extremely fast algorithms whose computational complexity
scales linearly with $n$. These algorithms do not require precalculating and
storing assignment costs, which may be infeasible in large-scale applications.
Instead, they efficiently calculate assignment costs on the fly. To our
knowledge, no other available method for solving the MDADC possesses either of
these linear-scaling and storage-free properties, which are necessary for
large-scale applications.
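As a concrete illustration of this reformulation, the following minimal sketch (ours, not the paper's exact implementation; the data shapes, the template choice, and all names are illustrative assumptions) solves one linear assignment problem (LAP) per dataset against a common template, computing squared Euclidean costs on the fly:

```python
# Illustrative sketch: n separate LAPs against a template instead of
# n-choose-2 pairwise matchings; overall cost grows linearly with n.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_to_template(X, template):
    """Align each dataset's m feature vectors (rows) to the template rows."""
    perms = []
    for Xi in X:
        # m x m squared Euclidean assignment costs, computed on the fly
        # and discarded after use: nothing is precalculated or stored.
        cost = ((Xi[:, None, :] - template[None, :, :]) ** 2).sum(axis=2)
        _, cols = linear_sum_assignment(cost)  # row k maps to template row cols[k]
        perms.append(cols)
    return perms

rng = np.random.default_rng(0)
n, m, p = 100, 20, 10                      # illustrative sizes
X = [rng.normal(size=(m, p)) for _ in range(n)]
template = np.mean(X, axis=0)              # e.g., current cluster centers
permutations = match_to_template(X, template)
```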
Our proposed algorithms rely on various optimization techniques such as
$K$-means clustering, block coordinate ascent (BCA), convex relaxation, the
Frank-Wolfe algorithm, and a pairwise interchange heuristic. We have also taken
a probabilistic view of (1) leading to a constrained Gaussian mixture model
and associated EM algorithm. Altogether the proposed algorithms form a panel
that covers most types of approach to the MDADC found in the literature. (As
discussed earlier, we have not considered Lagrangian relaxation methods as
they require computer clusters and/or GPUs for efficient large-scale
implementation.) These algorithms extend or specialize existing approaches in
a nontrivial way. For example, the BCA and 2-exchange algorithms, which are
specialized versions of existing algorithms, scale linearly with $n$ and are
amenable to large-scale applications whereas the more general algorithms are
not.
The numerical study has shown the excellent performances of the three main
algorithms: $K$-means matching, BCA, and Frank-Wolfe, with respect to
computation and optimization. In particular, these algorithms largely
outperform all competitors and can handle very large collections of data. The
BCA algorithm shows slightly better performance than the other two. The
pairwise interchange heuristic can push these methods to near optimality,
albeit at a hefty computational price. The EM algorithm displayed fairly
poor performance throughout the study. Upon inspection, the poor optimization
results came from the fact that the algorithm was “too sure” about the
allocation probabilities (of data vectors to classes) which were almost
invariably calculated as 0 or 1. This in turn may arise from the (relatively)
high dimension of the data, the short tails of the normal distribution, and/or
errors in covariance estimation. Using the deterministic annealing EM and/or
random starting points did not fix the issue. Solutions for improving the EM
optimization may be to impose a diagonal structure on covariance estimates or
to consider (mixtures of) distributions with heavier tails such as
multivariate $t$-distributions. The computational slowness of the EM could be
remedied by calculating a small fixed number of the most likely allocations rather
than computing them all through matrix permanents.
The analysis of the ABIDE preprocessed fMRI data has shown the strong
potential of the proposed feature matching approach for exploring neuroimaging
biomarkers and producing interpretable clusters at the group level. A key
characteristic of one-to-one feature matching is that, unlike unsupervised
clustering, it is guaranteed to produce “representative” clusters that reflect
variations between subjects and not within. While feature matching was
employed in our analysis for data exploration, this technique could also be
used in a more principled way as a preliminary step to disentangle association
ambiguities between biomarkers and/or to stratify subjects into small,
homogeneous groups prior to a group-level analysis. Such a matching-based
approach could, for example, be compared to the consensus clustering strategy
of Rasero et al. (2019).
#### Possible extensions and future work.
* •
_Weighted (squared) Euclidean distance._ The squared Euclidean distance in (1)
can easily be generalized to a weighted squared Euclidean distance
$\|x\|_{W}^{2}=x^{\prime}Wx$ with $W\in\mathbb{R}^{p\times p}$ a positive
semi-definite matrix. Decomposing $W$ as $L^{\prime}L$ (e.g. by Cholesky
decomposition), it suffices to premultiply each matrix $X_{i}$ by $L$ to
formulate an equivalent problem (1) using the unweighted (squared) Euclidean
distance (see the first sketch after this list).
* •
_Alternative dissimilarity measures._ Although the squared Euclidean distance
for $d$ in the general MDADC problem (3)-(4) enables extremely fast and
scalable algorithms with low memory footprint, it may not adequately capture
relevant differences between feature vectors in some applications. If the
Euclidean distance $\|\cdot\|_{2}$ or the Manhattan distance $\|\cdot\|_{1}$,
for example, is a more sensible choice for $d$, a reasonable approach would be
to use an objective function based on the ($nm$) distances between feature
vectors and their cluster centers instead of one based on the distances
between all ${n\choose 2}m$ pairs of matched vectors. In this case, the
$K$-means matching Algorithm 1 can be adapted as follows. The assignment step
remains the same: given cluster centers $c_{1},\ldots,c_{m}$, the feature
vectors of each unit $i\in[n]$ are assigned to clusters by minimizing the LAP
with assignment matrix $A_{i}=(d(x_{ik},c_{l}))_{1\leq k,l\leq m}$. The
updating step for the cluster centers proceeds by calculating $m$ geometric
medians if $d=\|\cdot\|_{2}$, or $mp$ univariate medians if $d=\|\cdot\|_{1}$.
Both these tasks can be accomplished in near linear time and, as in the case
$d=\|\cdot\|_{2}^{2}$, no distance needs to be pre-calculated and stored.
Accordingly, the modified objective function and modified $K$-means matching
algorithm still enable time complexity linear in $n$ and low space
requirements (see the second sketch after this list). (The other algorithms of
Section 2 do not extend quite so nicely as they fundamentally rely on the
scalar product and separability properties that underlie $\|\cdot\|_{2}^{2}$.)
* •
Gaining theoretical understanding of the optimization properties of the
algorithms of this paper, for example by establishing deterministic or
probabilistic bounds on their performances, might explain the very good
performance observed and/or give insights into worst-case performance in
difficult instances (e.g. Gutin et al., 2008). Also, obtaining tight lower
bounds through suitable Lagrangian relaxations would be desirable in practice.
* •
The rich structure of problem (1) may make it possible to easily construct
instances in which the global minimum and optimal assignment are known (see
e.g. Drugan, 2015, for related work on quadratic assignment problems). This
would of course be useful for benchmarking methods.
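For the weighted squared Euclidean distance of the first bullet above, a minimal sketch of the pre-transformation could look as follows (illustrative only; it assumes $W$ is positive definite so that a Cholesky factor exists, and all names are ours):

```python
# Reduce the weighted problem to the unweighted one: with W = L'L,
# ||L x - L y||^2 = (x - y)' W (x - y), so premultiplying every feature
# vector by L makes the plain squared Euclidean distance equivalent.
import numpy as np

def to_unweighted(X_list, W):
    # np.linalg.cholesky returns lower-triangular C with W = C C';
    # taking L = C' gives the factorization W = L'L used in the text.
    L = np.linalg.cholesky(W).T
    # Rows x_ik become L x_ik (assumes W positive definite; a symmetric
    # eigendecomposition would cover the semi-definite case).
    return [Xi @ L.T for Xi in X_list]
```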
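For the alternative dissimilarities of the second bullet above, a sketch of the adapted $K$-means matching iteration with $d=\|\cdot\|_{1}$ might read as below (again illustrative: the initialization, iteration count, and names are assumptions, not the paper's Algorithm 1):

```python
# Adapted K-means matching for d = ||.||_1: LAP assignment against the
# current centers, then center updates via mp univariate medians.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def kmeans_matching_l1(X, centers, n_iter=10):
    X = np.asarray(X)                        # shape (n, m, p)
    n, m, p = X.shape                        # n units, m vectors each, dim p
    aligned = np.empty_like(X)
    for _ in range(n_iter):
        for i in range(n):                   # assignment step: one LAP per unit
            cost = cdist(X[i], centers, metric="cityblock")
            _, cols = linear_sum_assignment(cost)
            aligned[i, cols] = X[i]          # vector k of unit i -> cluster cols[k]
        centers = np.median(aligned, axis=0)  # update step: mp univariate medians
    return centers, aligned
```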
### Acknowledgments
The author thanks Vince Lyzinski for early discussions that led to the convex
relaxation/Frank-Wolfe approach of Section 2.3. He also acknowledges his
student Yiming Chen for his assistance in the literature search and the
numerical study.
## References
* Ashburner [2007] John Ashburner. A fast diffeomorphic image registration algorithm. _NeuroImage_ , 38(1):95 – 113, 2007.
* Balas and Saltzman [1991] Egon Balas and Matthew J. Saltzman. An algorithm for the three-index assignment problem. _Oper. Res._ , 39(1):150–161, 1991.
* Bandelt et al. [2004] H.-J. Bandelt, A. Maas, and F. C. R. Spieksma. Local search heuristics for multi-index assignment problems with decomposable costs. _J. Oper. Res. Soc._ , 55(7):694–704, 2004.
* Bandelt et al. [1994] Hans-Jürgen Bandelt, Yves Crama, and Frits C. R. Spieksma. Approximation algorithms for multi-dimensional assignment problems with decomposable costs. _Discrete Appl. Math._ , 49(1-3):25–50, 1994.
* Bar-Shalom et al. [2011] Y. Bar-Shalom, P.K. Willett, and X. Tian. _Tracking and Data Fusion: A Handbook of Algorithms_. YBS Publishing, 2011.
* Basu et al. [2009] Sugato Basu, Ian Davidson, and Kiri L. Wagstaff, editors. _Constrained clustering: Advances in algorithms, theory, and applications_. Chapman & Hall/CRC Data Mining and Knowledge Discovery Series. CRC Press, 2009.
* Belongie et al. [2002] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. _IEEE Trans. Pattern Anal. Mach. Intell._ , 24(4):509–522, 2002.
* Bilenko et al. [2004] Mikhail Bilenko, Sugato Basu, and Raymond J. Mooney. Integrating constraints and metric learning in semi-supervised clustering. In _Machine Learning, Proceedings of the Twenty-first International Conference (ICML 2004)_ , volume 69. ACM, 2004.
* Birkhoff [1946] Garrett Birkhoff. Tres observaciones sobre el algebra lineal [Three observations on linear algebra]. _Univ. Nac. Tucumán. Revista A._ , 5:147–151, 1946.
* Bleakley and Vert [2011] Kevin Bleakley and Jean-Philippe Vert. The group fused lasso for multiple change-point detection. Technical Report hal-00602121, 2011.
* Burkard et al. [2009] Rainer Burkard, Mauro Dell’Amico, and Silvano Martello. _Assignment problems_. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2009.
* Çela [1998] Eranda Çela. _The quadratic assignment problem_ , volume 1 of _Combinatorial Optimization_. Kluwer Academic Publishers, Dordrecht, 1998.
* Chang and Glover [2010] C. Chang and G. H. Glover. Time-frequency dynamics of resting-state brain connectivity measured with fMRI. _Neuroimage_ , 50(1):81–98, Mar 2010.
* Collier and Dalalyan [2016] Olivier Collier and Arnak S. Dalalyan. Minimax rates in permutation estimation for feature matching. _J. Mach. Learn. Res._ , 17(6):1–31, 2016.
* Collins [2012] Robert T. Collins. Multitarget data association with higher-order motion models. In _2012 IEEE Conference on Computer Vision and Pattern Recognition_ , pages 1744–1751, 2012.
* Conte et al. [2004] D. Conte, P. Foggia, C. Sansone, and M. Vento. Thirty years of graph matching in pattern recognition. _Int. J. Pattern Recognit. Artif. Intell._ , 18:265–298, 2004.
* Craddock et al. [2013] Cameron Craddock, Yassine Benhajali, Carlton Chu, Francois Chouinard, Alan Evans, András Jakab, Budhachandra Khundrakpam, John Lewis, Qingyang Li, Michael Milham, Chaogan Yan, and Pierre Bellec. The neuro bureau preprocessing initiative: open sharing of preprocessed neuroimaging data and derivatives. _Frontiers in Neuroinformatics_ , 7, 2013.
* Degras [2020] David Degras. Sparse group fused lasso for model segmentation: a hybrid approach. _Adv. Data Anal. Classif._ , 2020. doi: https://doi.org/10.1007/s11634-020-00424-5.
* DeGroot and Goel [1976] Morris H. DeGroot and Prem K. Goel. The matching problem for multivariate normal data. _Sankhyā Ser. B_ , 38(1):14–29, 1976.
* Dehghan et al. [2015] A. Dehghan, S. M. Assari, and M. Shah. GMMCP tracker: Globally optimal generalized maximum multi clique problem for multiple object tracking. In _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 4091–4099, 2015.
* Doherty et al. [2019] K. Doherty, D. Fourie, and J. Leonard. Multimodal semantic SLAM with probabilistic data association. In _2019 International Conference on Robotics and Automation (ICRA)_ , pages 2419–2425, 2019.
* Drugan [2015] Mădălina M. Drugan. Generating QAP instances with known optimum solution and additively decomposable cost function. _J. Comb. Optim._ , 30(4):1138–1172, 2015.
* Frank and Wolfe [1956] Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. _Naval Research Logistics Quarterly_ , 3(1-2):95–110, 1956.
* Friston et al. [2013] Karl Friston, Rosalyn Moran, and Anil K Seth. Analysing connectivity with granger causality and dynamic causal modelling. _Current Opinion in Neurobiology_ , 23(2):172 – 178, 2013. Macrocircuits.
* Gancarski et al. [2020] P. Gancarski, T-B-H. Dao, B. Crémilleux, G. Forestier, and T. Lampert. Constrained clustering: Current and new trends. In _A Guided Tour of AI Research_ , pages 447–484. Springer, 2020.
* Gutin et al. [2008] Gregory Gutin, Boris Goldengorin, and Jing Huang. Worst case analysis of max-regret, greedy and other heuristics for multidimensional assignment and traveling salesman problems. _Journal of heuristics_ , 14(2):169–181, 2008\.
* Handwerker et al. [2012] D. A. Handwerker, V. Roopchansingh, J. Gonzalez-Castillo, and P. A. Bandettini. Periodic changes in fMRI connectivity. _Neuroimage_ , 63(3):1712–1719, Nov 2012.
* Haxby et al. [2011] James V. Haxby, J. Swaroop Guntupalli, Andrew C. Connolly, Yaroslav O. Halchenko, Bryan R. Conroy, M. Ida Gobbini, Michael Hanke, and Peter J. Ramadge. A common, high-dimensional model of the representational space in human ventral temporal cortex. _Neuron_ , 72(2):404–416, 2011.
* Hiep et al. [2016] Tran Khanh Hiep, Nguyen Minh Duc, and Bui Quoc Trung. Local search approach for the pairwise constrained clustering problem. In _Proceedings of the Seventh Symposium on Information and Communication Technology_ , SoICT ’16, pages 115–122. ACM, 2016.
* Hsu et al. [2017] Daniel Hsu, Kevin Shi, and Xiaorui Sun. Linear regression without correspondence. In _Advances in Neural Information Processing Systems_ , volume 30, pages 1531–1540. Curran Associates, Inc., 2017.
* Hutchison et al. [2013] R. M. Hutchison, T. Womelsdorf, E. A. Allen, P. A. Bandettini, V. D. Calhoun, M. Corbetta, S. Della Penna, J. H. Duyn, G. H. Glover, J. Gonzalez-Castillo, D. A. Handwerker, S. Keilholz, V. Kiviniemi, D. A. Leopold, F. de Pasquale, O. Sporns, M. Walter, and C. Chang. Dynamic functional connectivity: promise, issues, and interpretations. _Neuroimage_ , 80:360–378, Oct 2013.
* Jerrum et al. [2004] Mark Jerrum, Alistair Sinclair, and Eric Vigoda. A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries. _J. ACM_ , 51(4):671–697, 2004.
* Karapetyan and Gutin [2011] Daniel Karapetyan and Gregory Gutin. Local search heuristics for the multidimensional assignment problem. _Journal of Heuristics_ , 17(3):201–249, 2011.
* Kochenberger et al. [2014] Gary Kochenberger, Jin-Kao Hao, Fred Glover, Mark Lewis, Zhipeng Lü, Haibo Wang, and Yang Wang. The unconstrained binary quadratic programming problem: a survey. _J. Comb. Optim._ , 28(1):58–81, 2014.
* Koopmans and Beckmann [1957] Tjalling C. Koopmans and Martin Beckmann. Assignment problems and the location of economic activities. _Econometrica_ , 25:53–76, 1957.
* Kuck et al. [2019] Jonathan Kuck, Tri Dao, Hamid Rezatofighi, Ashish Sabharwal, and Stefano Ermon. Approximating the permanent by sampling from adaptive partitions. In _Advances in Neural Information Processing Systems_ , volume 32, pages 8860–8871. Curran Associates, Inc., 2019.
* Kuhn [1955] H. W. Kuhn. The Hungarian method for the assignment problem. _Naval Research Logistics Quarterly_ , 2(1-2):83–97, 1955.
* Kuroki and Matsui [2009] Yusuke Kuroki and Tomomi Matsui. An approximation algorithm for multidimensional assignment problems minimizing the sum of squared errors. _Discrete Appl. Math._ , 157(9):2124–2135, 2009\.
* Le Moigne et al. [2011] J. Le Moigne, N.S. Netanyahu, and R.D. Eastman. _Image Registration for Remote Sensing_. Cambridge University Press, 2011.
* Lloyd [1982] Stuart P. Lloyd. Least squares quantization in PCM. _IEEE Trans. Inform. Theory_ , 28(2):129–137, 1982.
* Lowe [2001] D. G. Lowe. Local feature view clustering for 3D object recognition. In _Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001_ , volume 1, pages I–I, 2001.
* Lyzinski et al. [2016] V. Lyzinski, D. E. Fishkind, M. Fiori, J. T. Vogelstein, C. E. Priebe, and G. Sapiro. Graph matching: Relax at your own risk. _IEEE Trans. Pattern Anal. Mach. Intell._ , 38(1):60–73, 2016.
* McLachlan and Peel [2000] Geoffrey McLachlan and David Peel. _Finite mixture models_. Wiley Series in Probability and Statistics: Applied Probability and Statistics. Wiley-Interscience, New York, 2000.
* McLachlan and Krishnan [2008] Geoffrey J. McLachlan and Thriyambakam Krishnan. _The EM algorithm and extensions_. Wiley-Interscience [John Wiley & Sons], Hoboken, NJ, second edition, 2008\.
* Munkres [1957] James Munkres. Algorithms for the assignment and transportation problems. _J. Soc. Indust. Appl. Math._ , 5:32–38, 1957.
* Natu et al. [2020] Shardul Natu, Ketan Date, and Rakesh Nagi. GPU-accelerated Lagrangian heuristic for multidimensional assignment problems with decomposable costs. _Parallel Computing_ , page 102666, 2020.
* Nijenhuis and Wilf [1978] Albert Nijenhuis and Herbert S. Wilf. _Combinatorial algorithms_. Academic Press, Inc., second edition, 1978.
* Oliveira and Pardalos [2004] Carlos A. S. Oliveira and Panos M. Pardalos. Randomized parallel algorithms for the multidimensional assignment problem. _Appl. Numer. Math._ , 49(1):117–133, 2004. doi: 10.1016/j.apnum.2003.11.014.
* Pananjady et al. [2018] A. Pananjady, M. J. Wainwright, and T. A. Courtade. Linear regression with shuffled data: Statistical and computational limits of permutation recovery. _IEEE Transactions on Information Theory_ , 64(5):3286–3300, 2018.
* Pardalos and Pitsoulis [2000] Panos M. Pardalos and Leonidas S. Pitsoulis, editors. _Nonlinear assignment problems_ , volume 7 of _Combinatorial Optimization_. Kluwer Academic Publishers, Dordrecht, 2000.
* Pelleg and Baras [2007] Dan Pelleg and Dorit Baras. _K_ -means with large and noisy constraint sets. In _ECML 2007_ , pages 674–682. Springer, 2007.
* Pierskalla [1968] William P. Pierskalla. The multidimensional assignment problem. _Operations Research_ , 16(2):422–431, 1968.
* Poore and Rijavec [1993] Aubrey P. Poore and Nenad Rijavec. A Lagrangian relaxation algorithm for multidimensional assignment problems arising from multitarget tracking. _SIAM J. Optim._ , 3(3):544–563, 1993.
* Press et al. [2007] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. _Numerical Recipes 3rd Edition: The Art of Scientific Computing_. Cambridge University Press, 2007.
* Qiao and Li [2015] Mu Qiao and Jia Li. Gaussian mixture models with component means constrained in pre-selected subspaces, 2015.
* Rand [1971] William M. Rand. Objective criteria for the evaluation of clustering methods. _J. Amer. Statist. Assoc._ , 66(336):846–850, 1971.
* Rasero et al. [2019] Javier Rasero, Ibai Diez, Jesus M. Cortes, Daniele Marinazzo, and Sebastiano Stramaglia. Connectome sorting by consensus clustering increases separability in group neuroimaging studies. _Network Neuroscience_ , 3(2):325–343, 2019.
* Rezatofighi et al. [2015] S. H. Rezatofighi, A. Milan, Z. Zhang, Q. Shi, A. Dick, and I. Reid. Joint probabilistic data association revisited. In _2015 IEEE International Conference on Computer Vision (ICCV)_ , pages 3047–3055, 2015.
* Robertson [2001] Alexander J. Robertson. A set of greedy randomized adaptive local search procedure (GRASP) implementations for the multidimensional assignment problem. _Computational Optimization and Applications_ , 19(2):145–164, 2001.
* Ryser [1963] Herbert J. Ryser. _Combinatorial mathematics_ , volume 14 of _The Carus Mathematical Monographs_. Wiley and Sons, Inc., New York, 1963.
* Shalom et al. [2010] Mordechai Shalom, Prudence W. H. Wong, and Shmuel Zaks. On-line maximum matching in complete multipartite graphs with implications to the minimum ADM problem on a star topology. In _Structural information and communication complexity_ , volume 5869 of _Lecture Notes in Comput. Sci._ , pages 281–294. Springer, Berlin, 2010.
* Shental et al. [2004] Noam Shental, Aharon Bar-hillel, Tomer Hertz, and Daphna Weinshall. Computing gaussian mixture models with EM using equivalence constraints. In _Advances in Neural Information Processing Systems_ , volume 16, pages 465–472. MIT Press, 2004.
* Sinkhorn [1964] Richard Sinkhorn. A relationship between arbitrary positive matrices and doubly stochastic matrices. _Ann. Math. Statist._ , 35:876–879, 1964.
* Smeulders et al. [2014] A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah. Visual tracking: An experimental survey. _IEEE Trans. Pattern Anal. Mach. Intell._ , 36(7):1442–1468, 2014.
* Spieksma and Woeginger [1996] F.C.R. Spieksma and G.J. Woeginger. Geometric three-dimensional assignment problems. _European Journal of Operational Research_ , 91(3):611–618, 1996.
* Tauer and Nagi [2013] Gregory Tauer and Rakesh Nagi. A map-reduce Lagrangian heuristic for multidimensional assignment problems with decomposable costs. _Parallel Computing_ , 39(11):653–668, 2013.
* Thornbrue et al. [2010] James R. Thornbrue, J. Nate Knight, and Benjamin J. Slocumb. Association ambiguity management in mixed data dimension tracking problems. In _Signal and Data Processing of Small Targets 2010_ , volume 7698, pages 255–266. International Society for Optics and Photonics, SPIE, 2010.
* Tibshirani [1996] Robert Tibshirani. Regression shrinkage and selection via the lasso. _J. Roy. Statist. Soc. Ser. B_ , 58(1):267–288, 1996.
* Ting et al. [2017] C. M. Ting, H. Ombao, S. B. Samdin, and S. H. Salleh. Estimating dynamic connectivity states in fMRI using regime-switching factor models. _IEEE Transactions on Medical Imaging_ , PP(99):1–1, 2017. ISSN 0278-0062.
* Ueda and Nakano [1998] Naonori Ueda and Ryohei Nakano. Deterministic annealing EM algorithm. _Neural Netw._ , 11(2):271–282, 1998.
* Uhlmann [2004] Jeffrey K. Uhlmann. Matrix permanent inequalities for approximating joint assignment matrices in tracking systems. _J. Franklin Inst._ , 341(7):569–593, 2004.
* Valdés-Sosa et al. [2005] Pedro A. Valdés-Sosa, Jose M. Sánchez-Bornot, Agustín Lage-Castellanos, Mayrim Vega-Hernández, Jorge Bosch-Bayard, Lester Melie-García, and Erick Canales-Rodríguez. Estimating brain functional connectivity with sparse multivariate autoregression. _Philosophical Transactions: Biological Sciences_ , 360(1457):969–981, 2005.
* Vogelstein et al. [2015] Joshua T. Vogelstein, John M. Conroy, Vince Lyzinski, Louis J. Podrazik, Steven G. Kratzer, Eric T. Harley, Donniell E. Fishkind, R. Jacob Vogelstein, and Carey E. Priebe. Fast approximate quadratic programming for graph matching. _PLOS ONE_ , 10(4):1–17, 04 2015.
* von Neumann [1953] John von Neumann. A certain zero-sum two-person game equivalent to the optimal assignment problem. In _Contributions to the theory of games, vol. 2_ , Annals of Mathematics Studies, no. 28, pages 5–12. Princeton University Press, Princeton, N. J., 1953.
* Wagstaff et al. [2001] Kiri Wagstaff, Claire Cardie, Seth Rogers, and Stefan Schrödl. Constrained $k$-means clustering with background knowledge. In _ICML ’01_ , pages 577–584, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
* Wang et al. [2014] H. E. Wang, C. G. Benar, P. P. Quilichini, K. J. Friston, V. K. Jirsa, and C. Bernard. A systematic framework for functional connectivity measures. _Front. Neurosci._ , 8:405, 2014.
* Wang et al. [2015] L. Wang, T. Liu, G. Wang, K. L. Chan, and Q. Yang. Video tracking using learned hierarchical features. _IEEE Trans. Image Proc._ , 24(4):1424–1435, 2015\.
* Wright [2015] Stephen J. Wright. Coordinate descent algorithms. _Math. Program._ , 151(1, Ser. B):3–34, 2015\.
## Appendix A Brain regions of interest in fMRI data analysis
Label | Name | Abbrv | Label | Name | Abbrv
---|---|---|---|---|---
| Subcortical network | | Default mode network
71 | L Caudate nucleus | CAU.L | 5 | L Superior frontal gyrus, orbital | ORBsup.L
72 | R Caudate nucleus | CAU.R | 6 | R Superior frontal gyrus, orbital | ORBsup.R
73 | L Putamen | PUT.L | 7 | L Middle frontal gyrus | MFG.L
74 | R Putamen | PUT.R | 8 | R Middle frontal gyrus | MFG.R
75 | L Pallidum | PAL.L | 15 | L Inferior frontal gyrus, orbital | ORBinf.L
76 | R Pallidum | PAL.R | 16 | R Inferior frontal gyrus, orbital | ORBinf.R
77 | L Thalamus | THA.L | 23 | L Superior frontal gyrus, medial | SFGmed.L
78 | R Thalamus | THA.R | 24 | R Superior frontal gyrus, medial | SFGmed.R
| Auditory network | 25 | L Superior frontal gyrus, medial orbital | ORBsupmed.L
79 | L Heschl gyrus | HES.L | 26 | R Superior frontal gyrus, medial orbital | ORBsupmed.R
80 | R Heschl gyrus | HES.R | 31 | L Cingulate gyrus, anterior part | ACG.L
81 | L Superior temporal gyrus | STG.L | 32 | R Cingulate gyrus, anterior part | ACG.R
82 | R Superior temporal gyrus | STG.R | 33 | L Cingulate gyrus, mid part | DCG.L
83 | L Temporal pole: superior temporal gyrus | TPOsup.L | 34 | R Cingulate gyrus, mid part | DCG.R
84 | R Temporal pole: superior temporal gyrus | TPOsup.R | 35 | L Cingulate gyrus, posterior part | PCG.L
| Sensorimotor network | 36 | R Cingulate gyrus, posterior part | PCG.R
1 | L Precentral gyrus | PreCG.L | 37 | L Hippocampus | HIP.L
2 | R Precentral gyrus | PreCG.R | 38 | R Hippocampus | HIP.R
19 | L Supplementary motor area | SMA.L | 39 | L Parahippocampus | PHG.L
20 | R Supplementary motor area | SMA.R | 40 | R Parahippocampus | PHG.R
57 | L Postcentral gyrus | PoCG.L | 61 | L Inferior parietal gyrus | IPL.L
58 | R Postcentral gyrus | PoCG.R | 62 | R Inferior parietal gyrus | IPL.R
59 | L Superior parietal gyrus | SPG.L | 65 | L Angular gyrus | ANG.L
60 | R Superior parietal gyrus | SPG.R | 66 | R Angular gyrus | ANG.R
63 | L Supramarginal gyrus | SMG.L | 67 | L Precuneus | PCUN.L
64 | R Supramarginal gyrus | SMG.R | 68 | R Precuneus | PCUN.R
69 | L Paracentral lobule | PCL.L | 85 | L Middle temporal gyrus | MTG.L
70 | R Paracentral lobule | PCL.R | 86 | R Middle temporal gyrus | MTG.R
| Visual network | 87 | L Temporal pole: middle temporal gyrus | TPOmid.L
43 | L Calcarine fissure and surrounding cortex | CAL.L | 88 | R Temporal pole: middle temporal gyrus | TPOmid.R
44 | R Calcarine fissure and surrounding cortex | CAL.R | 89 | L Inferior temporal gyrus | ITG.L
45 | L Cuneus | CUN.L | 90 | R Inferior temporal gyrus | ITG.R
46 | R Cuneus | CUN.R | | Unclassified
47 | L Lingual gyrus | LING.L | 17 | L Rolandic operculum | ROL.L
48 | R Lingual gyrus | LING.R | 18 | R Rolandic operculum | ROL.R
49 | L Superior occipital lobe | SOG.L | 21 | L Olfactory cortex | OLF.L
50 | R Superior occipital lobe | SOG.R | 22 | R Olfactory cortex | OLF.R
51 | L Middle occipital lobe | MOG.L | 27 | L Gyrus rectus | REC.L
52 | R Middle occipital lobe | MOG.R | 28 | R Gyrus rectus | REC.R
53 | L Inferior occipital lobe | IOG.L | 41 | L Amygdala | AMYG.L
54 | R Inferior occipital lobe | IOG.R | 42 | R Amygdala | AMYG.R
55 | L Fusiform gyrus | FFG.L | 91 | L Cerebellum crus 1 | CRUS1.L
56 | R Fusiform gyrus | FFG.R | 92 | R Cerebellum crus 1 | CRUS1.R
| Attentional network | 93 | L Cerebellum crus 2 | CRUS2.L
3 | L Superior frontal gyrus, dorsolateral | SFGdor.L | 94 | R Cerebellum crus 2 | CRUS2.R
4 | R Superior frontal gyrus, dorsolateral | SFGdor.R | 95 | L Cerebellum 3 | CB3.L
5 | L Superior frontal gyrus, orbital | ORBsup.L | 96 | R Cerebellum 3 | CB3.R
6 | R Superior frontal gyrus, orbital | ORBsup.R | 97 | L Cerebellum 4 5 | CB4_5.L
7 | L Middle frontal gyrus | MFG.L | 98 | R Cerebellum 4 5 | CB4_5.R
8 | R Middle frontal gyrus | MFG.R | 99 | L Cerebellum 6 | CB6.L
9 | L Middle frontal gyrus, orbital | ORBmid.L | 100 | R Cerebellum 6 | CB6.R
10 | R Middle frontal gyrus, orbital | ORBmid.R | 101 | L Cerebellum 7 | CB7b.L
11 | L Inferior frontal gyrus, opercular | IFGoperc.L | 102 | R Cerebellum 7 | CB7b.R
12 | R Inferior frontal gyrus, opercular | IFGoperc.R | 103 | L Cerebellum 8 | CB8.L
13 | L Inferior frontal gyrus, triangular | IFGtriang.L | 104 | R Cerebellum 8 | CB8.R
14 | R Inferior frontal gyrus, triangular | IFGtriang.R | 105 | L Cerebellum 9 | CB9.L
15 | L Inferior frontal gyrus, orbital | ORBinf.L | 106 | R Cerebellum 9 | CB9.R
16 | R Inferior frontal gyrus, orbital | ORBinf.R | 107 | L Cerebellum 10 | CB10.L
29 | L Insula | INS.L | 108 | R Cerebellum 10 | CB10.R
30 | R Insula | INS.R | 109 | Vermis 1 2 | VERMIS1_2
59 | L Superior parietal gyrus | SPG.L | 110 | Vermis 3 | VERMIS3
60 | R Superior parietal gyrus | SPG.R | 111 | Vermis 4 5 | VERMIS4_5
61 | L Inferior parietal gyrus | IPL.L | 112 | Vermis 6 | VERMIS6
62 | R Inferior parietal gyrus | IPL.R | 113 | Vermis 7 | VERMIS7
83 | L Temporal pole: superior temporal gyrus | TPOsup.L | 114 | Vermis 8 | VERMIS8
84 | R Temporal pole: superior temporal gyrus | TPOsup.R | 115 | Vermis 9 | VERMIS9
85 | L Middle temporal gyrus | MTG.L | 116 | Vermis 10 | VERMIS10
86 | R Middle temporal gyrus | MTG.R | | |
89 | L Inferior temporal gyrus | ITG.L | | |
90 | R Inferior temporal gyrus | ITG.R | | |
Table 2: rs-fMRI data analysis. Regions of interest (ROIs) as defined by the
AAL brain atlas and resting state networks (RSN).
# Does anti-Unruh effect assist quantum entanglement and coherence?
Shu-Min Wu1 (Email: <EMAIL_ADDRESS>), Xiao-Wei Teng1, Jin-Xuan Li1, Hao-Sheng
Zeng2 (Email: <EMAIL_ADDRESS>), Tonghua <EMAIL_ADDRESS>3 (corresponding author)
1 Department of Physics, Liaoning Normal University, Dalian 116029, China
2 Department of Physics, Hunan Normal University, Changsha 410081, China
3 School of Physics and Optoelectronic Engineering, Yangtze University,
Jingzhou 434023, China
###### Abstract
In this paper, we use the concepts of quantum entanglement and coherence to
analyze the Unruh and anti-Unruh effects based on the model of Unruh-DeWitt
detector. For the first time, we find that (i) the Unruh effect reduces
quantum entanglement but enhances quantum coherence; (ii) the anti-Unruh
effect enhances quantum entanglement but reduces quantum coherence. This
surprising result refutes the notion that the Unruh effect can only destroy
quantum entanglement and coherence simultaneously, and that the anti-Unruh effect can
only protect quantum resources. Consequently, it opens up a new source for
discovering experimental evidence supporting the existence of the Unruh and
anti-Unruh effects.
###### pacs:
04.70.Dy, 03.65.Ud,04.62.+v
## I Introduction
Quantum entanglement plays an important role in quantum information science.
It is a necessary ingredient for various computational tasks, such as quantum
remote control, quantum teleportation and quantum communication L1 ; L2 ; L3 .
Much progress has been made in understanding the behavior of quantum
entanglement in various settings, such as sudden death and sudden birth,
and the degradation or enhancement of quantum entanglement L4 ; L5 ; L6 ; L7 ; L8
; L9 ; L10 . On the other hand, as a broader concept, quantum coherence is
also a physical resource in quantum technologies, optical experiments and
biological systems L11 ; L12 ; L13 ; L14 ; L15 . Much work has been done
on how the environment influences quantum coherence and how to protect it
L16 ; L17 ; L18 ; L19 . Quantum entanglement and coherence are closely related
to each other. Generally speaking, quantum coherence is a necessary condition
for quantum entanglement. Despite considerable efforts dedicated to
investigating the relationship between quantum entanglement and coherence L20
; AA2 ; L21 ; L22 ; AA ; AA1 , several challenges still remain unresolved.
The Unruh effect, first proposed by Unruh in 1976 L23 ; L24 , stands as a
crucial prediction of quantum field theory. An observer undergoing uniform
acceleration through the Minkowski vacuum will detect a thermal bath of
particles of a free quantum field, with a temperature proportional to
the acceleration. On the other hand, Hawking discovered that black holes can
emit thermal radiation, a phenomenon subsequently named Hawking radiation L25
. According to the equivalence principle, the investigation of the Unruh
effect holds great significance for studying Hawking radiation and its
associated issues, including thermodynamics and the problem of information
loss L26 ; L27 ; L28 . Generally, both the Unruh effect and Hawking radiation
have been found to reduce quantum entanglement and coherence L29 ; L30 ;
L31 ; L32 ; L33 ; L34 ; L35 ; L36 ; SMW1 ; SMW2 ; SMW3 ; SMW4 ; SMW5 ; SMW6 ;
SMW7 .
Recent research has suggested the existence of the anti-Unruh effect, wherein,
under specific conditions, the acceleration effect can also cool down a
detector L37 ; L38 ; L39 ; qsc1 ; qsc2 . The concept of anti-Hawking radiation
was also proposed L40 . For the global free models in the full space, the
anti-Unruh effect cannot be detected physically. In order to observe the anti-
Unruh effect, the semiclassical Unruh-DeWitt (UDW) detector model is usually
employed, which consists of a two-level atom interacting locally with the
vacuum field and avoids the physically infeasible detection of global models.
The UDW detector model is commonly realized in experiments, often within an
optical cavity of finite length. The results have demonstrated that the anti-
Unruh effect enhances quantum entanglement for an initially entangled
bipartite state L38 ; L39 . However, the influence of the anti-Unruh effect on
quantum coherence remains unclear. Additionally, it raises the question of
whether the Unruh and anti-Unruh effects exert similar influences on both
quantum entanglement and coherence within the UDW detector model. In previous
studies L29 ; L30 ; L31 ; L32 ; L33 ; L34 ; L35 ; L36 ; SMW1 ; SMW2 ; SMW3 ;
SMW4 ; SMW5 ; SMW6 ; SMW7 , both the free field model and the UDW model did
not take boundary conditions into account. However, in our paper, we consider
the boundary conditions in the UDW model. Through the investigation of these
models, we may derive some intriguing conclusions, particularly highlighting
how quantum correlations and coherence may exhibit distinctive properties
under the influence of acceleration effects.
In this paper, we study the influence of acceleration effect on quantum
entanglement and coherence. Assume that a spin qubit and a two-level atom are
initially in an entangled pure state. The atom is then injected into a vacuum
cavity with finite length and moves at a uniform acceleration along the length
direction of the cavity. The atom plays the role of a detector which can
detect the thermal radiation due to the acceleration effect. We want to know
how the acceleration effect affects quantum entanglement and coherence. The
underlying motivation is to uncover novel aspects of the Unruh effect and
anti-Unruh effect, contributing to a more comprehensive understanding of these
phenomena.
The paper is organized as follows. In Sec. II, we briefly introduce the UDW
detector model. In Sec. III, we study the influence of acceleration effect on
quantum entanglement and coherence based on the UDW detector model. The last
section is devoted to the brief conclusion.
## II Unruh-DeWitt model
Let us first briefly recall the UDW model and the concept of anti-Unruh
effect. The UDW model consists of a two-level atom (detector) interacting
locally with a massless field $\phi(x(\tau))$ along the trajectory $x(\tau)$
with $\tau$ the detector’s proper time L37 . The detector has ground state
$|g\rangle$ and excited state $|e\rangle$, which are separated by an energy
gap $\Omega$. Suppose that the detector moves in a flat static cylinder with
spatial circumference $L>0$. This cylinder topology imposes periodic
boundary conditions, which are relevant to laboratory systems such as
closed optical cavities, superconducting circuits coupled to periodic
microwave guides, and optical-fibre loops L41 ; L42 ; L43 .
In the interaction picture, the UDW Hamiltonian that describes the interaction
between the detector and the field $\phi(x(\tau))$ is
$\displaystyle
H_{I}=\lambda\chi(\tau/\sigma)(e^{i\Omega\tau}\sigma^{+}+e^{-i\Omega\tau}\sigma^{-})\phi(x(\tau)),$
(1)
where $\lambda$ is the coupling strength that is assumed to be weak,
$\sigma^{\pm}$ denote the ladder operators of detector, and
$\chi(\tau/\sigma)$ is the switching function which controls the duration of
interaction via the parameter $\sigma$. The most natural choice for the
switching function is the Gaussian function
$\displaystyle\chi(\tau/\sigma)=e^{-\tau^{2}/2\sigma^{2}}.$ (2)
For weak coupling, the unitary evolution of the total quantum system is given
by L37
$\displaystyle U$ $\displaystyle=$
$\displaystyle\mathbb{I}+U^{(1)}+\mathcal{O}(\lambda^{2})=\mathbb{I}-i\int
d\tau H_{I}(\tau)+\mathcal{O}(\lambda^{2})$ $\displaystyle=$
$\displaystyle-i\lambda\sum_{m}(I_{+,m}{a}_{m}^{\dagger}\sigma^{+}+I_{-,m}{a}_{m}^{\dagger}\sigma^{-}+\text{H.c.})+\mathcal{O}(\lambda^{2}),$
where $m$ denotes the mode of the scalar field with annihilation and creation
operators $a_{m}$ and $a_{m}^{\dagger}$ satisfying $a_{m}|0\rangle=0$ and
$a_{m}^{\dagger}|0\rangle=|1_{m}\rangle$. The
sum over $m$ takes discrete values due to the periodic boundary condition
$k=2\pi m/L$, and $I_{\pm,m}$ can be written as
$\displaystyle I_{\pm,m}=\int_{-\infty}^{\infty}\chi(\tau/\sigma)e^{\pm
i\Omega\tau+\frac{2\pi
i}{L}[|m|t(\tau)-mx(\tau)]}\frac{d\tau}{\sqrt{4\pi|m|}}.$ (4)
Within the first-order approximation and in the interaction picture, this
evolution can be expressed as L38 ; L39
$\displaystyle U|g\rangle|0\rangle$ $\displaystyle=$ $\displaystyle
C_{0}(|g\rangle|0\rangle-i\eta_{0}|e\rangle|1_{m}\rangle),$ $\displaystyle
U|e\rangle|0\rangle$ $\displaystyle=$ $\displaystyle
C_{1}(|e\rangle|0\rangle+i\eta_{1}|g\rangle|1_{m}\rangle),$ (5)
where $C_{0}$ and $C_{1}$ are the normalization factors. In this paper, we
assume that the accelerated trajectory of the detector is
$t(\tau)=a^{-1}\sinh(a\tau)$ and $x(\tau)=a^{-1}[\cosh(a\tau)-1]$ with $a$
being the proper acceleration. Denoting $\eta_{0}=\lambda\sum_{m}I_{+,m}$
and $\eta_{1}=\lambda\sum_{m}I_{-,m}$, Eq.(II) can be rewritten as
$\displaystyle U|g\rangle|0\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{1+|\eta_{0}|^{2}}}(|g\rangle|0\rangle-i\eta_{0}|e\rangle|1_{m}\rangle),$
$\displaystyle U|e\rangle|0\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{1+|\eta_{1}|^{2}}}(|e\rangle|0\rangle+i\eta_{1}|g\rangle|1_{m}\rangle).$
If the detector is initially in its ground state, then the excitation
probability is given by
$\displaystyle P=\sum_{m\neq 0}|\langle
1,e|U^{(1)}|0,g\rangle|^{2}=\lambda^{2}\sum_{m\neq 0}|I_{+,m}|^{2}.$ (6)
From Eq.(6) we can see that the transition probability is dependent on the
concrete parameters, such as the length of cavity $L$, the energy gap
$\Omega$, and the interaction time scale $\sigma$. In particular, the
transition probability may decrease with the growth of acceleration when the
interaction timescale $\sigma$ is much smaller than the reciprocal of the
energy gap, $\Omega^{-1}$. In other words, the detector is not warmed up but
cooled down. This counterintuitive effect is called the anti-Unruh effect.
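As an illustration of Eqs. (4) and (6), the excitation probability can be estimated numerically along the accelerated trajectory. The sketch below is ours; the quadrature grid, the mode cutoff m_max, and the parameter values are illustrative assumptions (natural units):

```python
# Numerical sketch of I_{+,m} (Eq. (4)) and the transition probability
# P (Eq. (6)) for the Gaussian switching function of Eq. (2).
import numpy as np

def I_plus(m, a, Omega, sigma, L, tau=np.linspace(-8.0, 8.0, 4001)):
    t = np.sinh(a * tau) / a                  # t(tau) = a^-1 sinh(a tau)
    x = (np.cosh(a * tau) - 1.0) / a          # x(tau) = a^-1 [cosh(a tau) - 1]
    chi = np.exp(-tau**2 / (2.0 * sigma**2))  # Gaussian switching function
    phase = Omega * tau + (2.0 * np.pi / L) * (abs(m) * t - m * x)
    integrand = chi * np.exp(1j * phase) / np.sqrt(4.0 * np.pi * abs(m))
    return np.trapz(integrand, tau)

def transition_probability(a, Omega, sigma, L, lam=0.01, m_max=50):
    modes = [m for m in range(-m_max, m_max + 1) if m != 0]
    return lam**2 * sum(abs(I_plus(m, a, Omega, sigma, L))**2 for m in modes)

# Whether P decreases with a (the anti-Unruh regime, sigma << 1/Omega)
# can then be checked directly, e.g.:
for a in (0.5, 1.0, 2.0):
    print(a, transition_probability(a, Omega=0.05, sigma=0.2, L=200.0))
```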
## III Quantum entanglement and coherence for the Unruh-DeWitt model
Quantum entanglement and coherence are two important quantities for describing
quantum states. They are closely related to each other but also have their own
characteristics. In the case of two-qubit systems, quantum entanglement can be
effectively described by the concurrence L44 ; L45
$\displaystyle
E(\rho_{AB})=\max\\{0,\sqrt{\lambda_{1}}-\sqrt{\lambda_{2}}-\sqrt{\lambda_{3}}-\sqrt{\lambda_{4}}\\},$
(7)
where $\lambda_{i}$ are the eigenvalues of the matrix
$\rho_{AB}[(\sigma_{y}\otimes\sigma_{y})\rho_{AB}^{*}(\sigma_{y}\otimes\sigma_{y})]$
in decreasing order. On the other hand, quantum entanglement can also be
measured by the logarithmic negativity $N(\rho_{AB})$, which is defined as L29
$\displaystyle N(\rho_{AB})=\log_{2}||\rho_{AB}^{T_{A}}||,$ (8)
where $||\rho_{AB}^{T_{A}}||$ is the sum of the absolute values of the
eigenvalues of the partial transpose of density matrix $\rho_{AB}$ with
respect to subsystem $A$.
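For reference, Eqs. (7) and (8) can be evaluated directly for any two-qubit density matrix; the following is a minimal sketch of ours (the basis ordering $|ab\rangle$ and the function names are assumptions):

```python
# Concurrence (Eq. (7)) and logarithmic negativity (Eq. (8)) for a 4x4
# two-qubit density matrix rho in the product basis |ab>.
import numpy as np

SY = np.array([[0.0, -1.0j], [1.0j, 0.0]])  # Pauli sigma_y

def concurrence(rho):
    flip = np.kron(SY, SY)
    R = rho @ flip @ rho.conj() @ flip
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]  # decreasing
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def log_negativity(rho):
    # Partial transpose on subsystem A: swap the two "A" indices.
    rho_ta = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    # Trace norm of a Hermitian matrix = sum of |eigenvalues|.
    return np.log2(np.abs(np.linalg.eigvalsh(rho_ta)).sum())
```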
There are several methods to quantify the coherence of quantum states, among
which the $l_{1}$ norm of coherence is perhaps the simplest and most
intuitive. In a given reference basis, the $l_{1}$ norm of coherence is
defined as the sum of absolute value of all the off-diagonal elements of the
system density matrix L46
$C_{l_{1}}(\rho_{AB})=\sum_{{i\neq j}}|\rho_{i,j}|.$ (9)
One can also quantify quantum coherence by the relative entropy of coherence
(REC) which is given by
$C_{\rm
REC}\left(\rho_{AB}\right)=S\left({\rho_{\rm{diag}}}\right)-S\left(\rho_{AB}\right),$
(10)
where $S(\rho_{AB})$ denotes the von Neumann entropy of quantum state
$\rho_{AB}$, and $\rho_{\rm diag}$ denotes the state obtained from $\rho_{AB}$
by deleting all off-diagonal elements. In this paper, we employ quantum
entanglement and coherence to study the characteristics of both the Unruh
effect and anti-Unruh effect.
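Both coherence measures of Eqs. (9) and (10) are likewise straightforward to compute in the chosen reference basis; a minimal sketch (ours, using the computational basis):

```python
# l1 norm of coherence (Eq. (9)) and relative entropy of coherence
# (Eq. (10)) for a density matrix rho in the reference basis.
import numpy as np

def l1_coherence(rho):
    # Sum of absolute values of the off-diagonal elements.
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def von_neumann_entropy(p):
    p = p[p > 1e-12]                 # drop (numerically) zero eigenvalues
    return float(-(p * np.log2(p)).sum())

def rec_coherence(rho):
    populations = np.real(np.diag(rho))              # spectrum of rho_diag
    spectrum = np.clip(np.linalg.eigvalsh(rho), 0.0, 1.0)
    return von_neumann_entropy(populations) - von_neumann_entropy(spectrum)
```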
Consider a UDW detector, which is initially entangled with a spin qubit with
spin up $|\uparrow\rangle$ and spin down $|\downarrow\rangle$. The detector is
placed in a vacuum cavity with length $L$. The initial state of the whole
system takes the form
$\displaystyle|\psi\rangle_{qDC}=(\alpha|\uparrow_{q}\rangle|g_{D}\rangle+\beta|\downarrow_{q}\rangle|e_{D}\rangle)|0_{C}\rangle,$
(11)
where the real coefficients $\alpha$ and $\beta$ satisfy
$\alpha^{2}+\beta^{2}=1$, and $|0_{C}\rangle$ denotes the vacuum state of the
cavity field. For convenience of description, we use the subscripts $q$, $D$,
and $C$ to denote the qubit, detector, and cavity field, respectively. Now we
let the detector move with a uniform acceleration $a$ in the cavity.
According to Eq.(II), the state of the whole system becomes
$\displaystyle|\psi\rangle_{q\bar{D}\bar{C}}$ $\displaystyle=$
$\displaystyle\frac{\alpha}{\sqrt{1+|\eta_{0}|^{2}}}|\uparrow_{q}\rangle|g_{D}\rangle|0_{C}\rangle-\frac{i\alpha\eta_{0}}{\sqrt{1+|\eta_{0}|^{2}}}|\uparrow_{q}\rangle|e_{D}\rangle|1_{C}\rangle$
$\displaystyle+$
$\displaystyle\frac{\beta}{\sqrt{1+|\eta_{1}|^{2}}}|\downarrow_{q}\rangle|e_{D}\rangle|0_{C}\rangle+\frac{i\beta\eta_{1}}{\sqrt{1+|\eta_{1}|^{2}}}|\downarrow_{q}\rangle|g_{D}\rangle|1_{C}\rangle.$
Here the symbol “bar” above D and C denotes that the states for the detector
and cavity field are observed in the noninertial frame determined by the
accelerated detector. Eq.(III) implies that the vacuum state in the inertial
frame is no longer a vacuum when observed in the noninertial frame. In other words,
the UDW detector would detect the production of particles in the vacuum
cavity. In the following, we will study the change of quantum entanglement and
coherence induced by the acceleration effect.
Let us first calculate quantum entanglement and coherence between qubit and
detector. Tracing over the cavity field modes in Eq.(III), we obtain the
density operator between qubit and detector as
$\displaystyle\rho_{q\bar{D}}$ $\displaystyle=$
$\displaystyle\frac{\alpha^{2}}{1+|\eta_{0}|^{2}}|\uparrow_{q}g_{D}\rangle\langle\uparrow_{q}g_{D}|+\frac{\alpha^{2}|\eta_{0}|^{2}}{1+|\eta_{0}|^{2}}|\uparrow_{q}e_{D}\rangle\langle\uparrow_{q}e_{D}|$
$\displaystyle+$
$\displaystyle\frac{\beta^{2}}{1+|\eta_{1}|^{2}}|\downarrow_{q}e_{D}\rangle\langle\downarrow_{q}e_{D}|+\frac{\beta^{2}|\eta_{1}|^{2}}{1+|\eta_{1}|^{2}}|\downarrow_{q}g_{D}\rangle\langle\downarrow_{q}g_{D}|$
$\displaystyle+$
$\displaystyle\frac{\alpha\beta}{\sqrt{(1+|\eta_{0}|^{2})(1+|\eta_{1}|^{2})}}(|\uparrow_{q}g_{D}\rangle\langle\downarrow_{q}e_{D}|+|\downarrow_{q}e_{D}\rangle\langle\uparrow_{q}g_{D}|)$
$\displaystyle-$
$\displaystyle\frac{\alpha\beta}{\sqrt{(1+|\eta_{0}|^{2})(1+|\eta_{1}|^{2})}}(\eta_{0}\eta_{1}^{*}|\uparrow_{q}e_{D}\rangle\langle\downarrow_{q}g_{D}|+\eta_{0}^{*}\eta_{1}|\downarrow_{q}g_{D}\rangle\langle\uparrow_{q}e_{D}|).$
Now the system consists of two objects, the inertial qubit and the accelerated
detector. Employing Eq.(7), we obtain the concurrence $E(\rho_{q\bar{D}})$
between qubit and detector as
$\displaystyle
E(\rho_{q\bar{D}})=\max\bigg{\\{}0,\frac{2\alpha\beta(1-|\eta_{0}||\eta_{1}|)}{\sqrt{(1+|\eta_{0}|^{2})(1+|\eta_{1}|^{2})}},\frac{2\alpha\beta(|\eta_{0}||\eta_{1}|-1)}{\sqrt{(1+|\eta_{0}|^{2})(1+|\eta_{1}|^{2})}}\bigg{\\}}.$
(14)
We can also use Eq.(8) to get the logarithmic negativity $N(\rho_{q\bar{D}})$
$\displaystyle N(\rho_{q\bar{D}})$ $\displaystyle=$
$\displaystyle\log_{2}\bigg{[}\sqrt{\bigg{(}\frac{\alpha^{2}|\eta_{0}|^{2}}{1+|\eta_{0}|^{2}}-\frac{\beta^{2}|\eta_{1}|^{2}}{1+|\eta_{1}|^{2}}\bigg{)}^{2}+\frac{4\alpha^{2}\beta^{2}}{(1+|\eta_{0}|^{2})(1+|\eta_{1}|^{2})}}$
$\displaystyle+$
$\displaystyle\frac{\alpha^{2}}{1+|\eta_{0}|^{2}}+\frac{\beta^{2}}{1+|\eta_{1}|^{2}}\bigg{]}.$
It is shown that the concurrence $E(\rho_{q\bar{D}})$ and the logarithmic
negativity $N(\rho_{q\bar{D}})$ depend not only on the initial parameters
$\alpha$ and $\beta$, but also on the acceleration $a$, the length of cavity
$L$, the energy gap $\Omega$ and the interaction time scale $\sigma$, i.e.,
both the acceleration effect and the setup’s parameters can affect quantum
entanglement.
Figure 1: The transition probability $P$ (left column), concurrence
$E(\rho_{q\bar{D}})$ (middle column), and logarithmic negativity
$N(\rho_{q\bar{D}})$ (right column) as functions of acceleration $a$ for
different energy gaps $\Omega$. The rest parameters are chosen as
$\alpha=\frac{1}{\sqrt{2}}$, $L=200$ and $\sigma=0.2$.
In Fig.1, we plot the transition probability $P$ (left column), concurrence
$E(\rho_{q\bar{D}})$ (middle column), and logarithmic negativity
$N(\rho_{q\bar{D}})$ (right column) as functions of acceleration $a$ for
different energy gaps $\Omega$, with other parameters chosen as
$\alpha=\frac{1}{\sqrt{2}}$, $L=200$, and $\sigma=0.2$. It is shown that for
the smaller energy gap, i.e., $\Omega=0.05$, quantum entanglement increases
and the transition probability of the detector decreases as the acceleration
$a$ grows, meaning that the anti-Unruh effect enhances quantum
entanglement between qubit and detector. On the other hand, for the larger
energy gap, i.e., $\Omega=5$, quantum entanglement decreases and the
transition probability of the detector increases with the growth of acceleration,
showing that the Unruh effect reduces quantum entanglement between qubit and
detector. Note that the initial entanglement at zero acceleration ($a=0$) is
not equal to one, which is caused by the interaction between the detector and
the vacuum field in the finite cavity. The limited cavity space and finite interaction time
$\sigma$ between the cavity field and detector lead to $\eta_{0}\neq 0$ and
$\eta_{1}\neq 0$, so that the initial entanglement between qubit and detector
degrades. As soon as the detector enters the cavity, it is coupled with the
vacuum field in the cavity, and the entanglement degradation takes place.
Besides quantum entanglement, we also study the change of quantum coherence
between qubit and detector. According to Eq.(9), the $l_{1}$ norm of coherence
$C(\rho_{q\bar{D}})$ for the system of qubit and detector reads
$\displaystyle
C_{l_{1}}(\rho_{q\bar{D}})=\frac{2\alpha\beta(1+|\eta_{0}||\eta_{1}|)}{\sqrt{(1+|\eta_{0}|^{2})(1+|\eta_{1}|^{2})}}.$
(16)
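Since Eqs. (14) and (16) are closed forms in $\alpha$, $|\eta_{0}|$, and $|\eta_{1}|$, how they vary with the product $|\eta_{0}||\eta_{1}|$ is easy to explore numerically; a small sketch (ours, with illustrative parameter values):

```python
# Closed forms of Eq. (14) (concurrence) and Eq. (16) (l1 coherence)
# between qubit and detector, as functions of alpha, |eta0|, |eta1|.
import numpy as np

def E_qD(alpha, eta0, eta1):
    beta = np.sqrt(1.0 - alpha**2)
    denom = np.sqrt((1.0 + abs(eta0)**2) * (1.0 + abs(eta1)**2))
    # max{0, 2ab(1 - |e0||e1|)/denom, 2ab(|e0||e1| - 1)/denom} = 2ab|1 - |e0||e1||/denom
    return 2.0 * alpha * beta * abs(1.0 - abs(eta0) * abs(eta1)) / denom

def C_l1_qD(alpha, eta0, eta1):
    beta = np.sqrt(1.0 - alpha**2)
    denom = np.sqrt((1.0 + abs(eta0)**2) * (1.0 + abs(eta1)**2))
    return 2.0 * alpha * beta * (1.0 + abs(eta0) * abs(eta1)) / denom

for e in (0.0, 0.2, 0.5):            # illustrative values |eta0| = |eta1| = e
    print(e, E_qD(1 / np.sqrt(2), e, e), C_l1_qD(1 / np.sqrt(2), e, e))
```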
Next, we study the acceleration effect on the REC $C_{REC}(\rho_{q\bar{D}})$.
For this purpose, we need to calculate the eigenvalues of the density matrix
of Eq.(III). The density matrix has two non-zero eigenvalues
$\displaystyle\lambda_{1}(\rho_{q\bar{D}})$ $\displaystyle=$
$\displaystyle\frac{\alpha^{2}}{1+|\eta_{0}|^{2}}+\frac{\beta^{2}}{1+|\eta_{1}|^{2}},$
$\displaystyle\lambda_{2}(\rho_{q\bar{D}})$ $\displaystyle=$
$\displaystyle\frac{\alpha^{2}|\eta_{0}|^{2}}{1+|\eta_{0}|^{2}}+\frac{\beta^{2}|\eta_{1}|^{2}}{1+|\eta_{1}|^{2}}.$
Thus, the REC of state $\rho_{q\bar{D}}$ becomes
$\displaystyle
C_{REC}(\rho_{q\bar{D}})=\sum_{i=1}^{2}\lambda_{i}(\rho_{q\bar{D}})\log_{2}\lambda_{i}(\rho_{q\bar{D}})-\sum_{j}\beta_{j}(\rho_{q\bar{D}})\log_{2}\beta_{j}(\rho_{q\bar{D}}),$
(17)
where $\beta_{j}(\rho_{q\bar{D}})$ are the diagonal elements of
$\rho_{q\bar{D}}$ of Eq.(III). Note that Eq.(III) is an X state which has no
single-party (local) coherence, whether for the qubit or for the detector.
Thus the coherences $C_{l_{1}}(\rho_{q\bar{D}})$ and $C_{REC}(\rho_{q\bar{D}})$
are actually a kind of genuine bipartite coherence between the qubit and the
detector. According to the viewpoint of recent research AA , this genuine
bipartite coherence is actually a kind of quantum correlation between the
relevant subsystems.
Figure 2: The $l_{1}$ norm of coherence $C(\rho_{q\bar{D}})$ (left column) and
the REC $C_{REC}(\rho_{q\bar{D}})$ (right column) as functions of acceleration
$a$ for different energy gaps $\Omega$. The parameters are chosen as
$\alpha=\frac{1}{\sqrt{2}}$, $L=200$, $\sigma=0.2$, $A=0.999999$, and
$B=0.9999988$.
For different energy gaps, we plot the $l_{1}$ norm of coherence
$C(\rho_{q\bar{D}})$ and the REC $C_{REC}(\rho_{q\bar{D}})$ as functions of
acceleration $a$ in Fig.2, where the parameters are chosen the same as
in Fig.1. For the smaller energy gap, i.e., $\Omega=0.05$, quantum coherence
$C(\rho_{q\bar{D}})$ changes very slowly with acceleration: it decreases with
acceleration $a$, meaning that the anti-Unruh effect reduces quantum coherence
between qubit and detector. For the larger energy gap, i.e., $\Omega=5$, we
see that quantum coherence $C(\rho_{q\bar{D}})$ increases with acceleration
$a$, meaning that
the Unruh effect enhances quantum coherence between qubit and detector.
From above discussions, we see that the Unruh effect and anti-Unruh effect
play completely opposite roles: the Unruh effect reduces quantum entanglement
between qubit and detector but enhances their quantum coherence; while the
anti-Unruh effect enhances quantum entanglement between qubit and detector but
reduces their quantum coherence. This discovery represents a novel outcome.
Previous research showed that both quantum entanglement and coherence degrade
under the influence of the Unruh effect L29 ; L30 ; L31 ; L32
; L33 ; L34 ; L35 ; L36 ; SMW1 ; SMW2 ; SMW3 ; SMW4 ; SMW5 ; SMW6 ; SMW7 ,
which is obviously different from our results. The reason for the difference
is that the previous papers mainly consider free models in the full space and
a Unruh-DeWitt model without boundary conditions, while we consider a
Unruh-DeWitt model in a cavity with boundary conditions. When $a=0$, the
entanglement and coherence are generally less than their initial values, owing to the
limited cavity space and finite interaction time between the cavity field and
detector L37 ; L38 ; L39 ; qsc1 ; qsc2 . Physically, we can understand this
phenomenon as a transfer of entanglement and coherence: as the detector enters
the cavity, it interacts with the cavity mode, partially transferring
entanglement and coherence from the detector to the cavity mode, thereby
reducing the entanglement and coherence between the qubit and the detector. An
increase in acceleration may extract or transfer the entanglement and
coherence shared between the detector and the cavity mode back to the
qubit-detector pair.
## IV Conclusions
In conclusion, we have studied the influence of the acceleration effect on
quantum entanglement and coherence between a qubit and a relativistic detector
based on the Unruh-DeWitt model. Depending on the chosen parameters, one can observe
phenomena associated with both the Unruh and anti-Unruh effects. The Unruh
effect reduces the entanglement between qubit and detector but increases their
quantum coherence, challenging the notion that the Unruh effect is uniformly
detrimental to quantum resources. By contrast, the anti-Unruh effect increases
their entanglement and reduces the coherence, indicating that the anti-Unruh
effect may not always be advantageous for quantum resources. These opposite
changes of quantum entanglement and coherence under the same processes point
to an essential difference between them. Furthermore, previous studies have
indicated that the Unruh effect consistently leads to a simultaneous reduction
in both entanglement and coherence L29 ; L30 ; L31 ; L32 ; L33 ; L34 ; L35 ;
L36 ; SMW1 ; SMW2 ; SMW3 ; SMW4 ; SMW5 ; SMW6 ; SMW7 . The reason for the
different results is that the previous articles mainly consider free models in
the full space and a Unruh-DeWitt model without boundary conditions, while we
consider a Unruh-DeWitt model in a cavity with boundary conditions.
These results overturn conventional perceptions of the Unruh and anti-Unruh
effects, simultaneously providing an unexpected source of experimental
evidence for them.
###### Acknowledgements.
The authors would like to thank Wentao Liu for helpful discussions. This work
is supported by the National Natural Science Foundation of China (Grant No.
12205133) and by Grants No. LJKQZ20222315 and No. JYTMS20231051.
## References
* (1) The Physics of Quantum Information, edited by D. Bouwmeester, A. Ekert, A. Zeilinger (Springer-Verlag, Berlin, 2000).
* (2) C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993).
* (3) S. F. Huelga, M. B. Plenio, and J. A. Vaccaro, Phys. Rev. A 65, 042316 (2002).
* (4) Y. S. Weinstein, Phys. Rev. A 80, 022310 (2009).
* (5) M. I. Shaukat, E. V. Castro, and H. Terças, Phys. Rev. A 98, 022319 (2018).
* (6) J. Wang and J. Jing, Phys. Rev. A 82, 032324 (2010).
* (7) S. M. Wu and H. S. Zeng, Class. Quantum Grav. 37, 115003 (2020).
* (8) J. Hu and H. Yu, Phys. Rev. A 91, 012327 (2015).
* (9) L. Mazzola, S. Maniscalco, J. Piilo, K. A. Suominen, and B. M. Garraway, Phys. Rev. A 79, 042302 (2009).
* (10) B. Zhang, S. You, and M. Lu, Phys. Rev. A 101, 032335 (2020).
* (11) A. J. Leggett, Prog. Theor. Phys. Suppl. 69, 80 (1980).
* (12) B. Schumacher, M. D. Westmoreland, Phys. Rev. Lett. 80, 5695 (1998).
* (13) S. E. Barnes, R. Ballou, B. Barbara, J. Strelen, Phys. Rev. Lett. 79, 289 (1997).
* (14) A. Streltsov, G. Adesso, M. B. Plenio, Rev. Mod. Phys. 89, 041003 (2017).
* (15) M. Horodecki and J. Oppenheim, Nat. Commun. 4, 2059 (2013).
* (16) T. R. Bromley, M. Cianciaruso, and G. Adesso, Phys. Rev. Lett. 114, 210401 (2015).
* (17) A. Canaguier-Durand and R. Carminati, Phys. Rev. A 93, 033836 (2016).
* (18) C. M. Kropf, Phys. Rev. Research 2, 033311 (2020).
* (19) Z. Zhang, H. Fu, and J. Wang, Phys. Rev. B 95, 144306 (2017).
* (20) K. Chuan Tan, H. Kwon, C. Y. Park, and H. Jeong, Phys. Rev. A 94, 022329 (2016).
* (21) Y. Dai, Y. Dong, Z. Xu, W. You, C. Zhang, and O. Gühne, Phys. Rev. Appl. 13, 054022 (2020).
* (22) K. C. Tan and H. Jeong, Phys. Rev. Lett. 121, 220401 (2018).
* (23) J. X. Hou, S. Y. Liu, X. H. Wang, and W. L. Yang, Phys. Rev. A 96, 042324 (2017).
* (24) C. Radhakrishnan, M. Parthasarathy, S. Jambulingam, and T. Byrnes, Phys. Rev. Lett. 116, 150504 (2016).
* (25) S. Kim, L. Li, A. Kumar, and J. Wu, Phys. Rev. A 98, 022306 (2018).
* (26) W. G. Unruh, Phys. Rev. D 14, 870 (1976).
* (27) L. C. B. Crispino, A. Higuchi, and G. E. A. Matsas, Rev. Mod. Phys. 80, 787 (2008).
* (28) S. W. Hawking, Nature (London) 248, 30 (1974).
* (29) L. Bombelli, R. K. Koul, J. Lee, and R. D. Sorkin, Phys. Rev. D 34, 373 (1986).
* (30) S.W. Hawking, Commun. Math. Phys. 43, 199 (1975).
* (31) H. Terashima, Phys. Rev. D 61, 104016 (2000).
* (32) I. Fuentes-Schuller, and R. B. Mann, Phys. Rev. Lett. 95,120404 (2005).
* (33) P. M. Alsing, I. Fuentes-Schuller, R. B. Mann, and T. E. Tessier, Phys. Rev. A 74, 032326 (2006).
* (34) G. Adesso, I. Fuentes-Schuller, and M. Ericsson, Phys. Rev. A 76, 062112 (2007).
* (35) S. M. Wu, H. S. Zeng, Eur. Phys. J. C 82, 4 (2022).
* (36) S. Xu, X. k. Song, J. d. Shi, and L. Ye, Phys. Rev. D 89, 065022 (2014).
* (37) S. M. Wu, Y. T. Cai, W. J. Peng, H. S. Zeng, Eur. Phys. J. C 82, 412 (2022).
* (38) J. Wang, Z. Tian, J. Jing, and H. Fan, Phys. Rev. A 93, 062105 (2016).
* (39) S. M. Wu, H. S. Zeng, and H. M. Cao, Class. Quantum Grav. 38, 185007 (2021).
* (40) L. C. Céleri, A. G. S. Landulfo, R. M. Serra, and G. E. A. Matsas, Phys. Rev. A 81, 062130 (2010).
* (41) W. C. Qiang, G. H. Sun, Q. Dong, and S. H. Dong, Phys. Rev. A 98, 022320 (2018).
* (42) S. Bhattacharya, N. Joshi, Phys. Rev. D 105, 065007 (2022).
* (43) M. R. Hwang, D. Park, and E. Jung, Phys. Rev. A 83, 012111 (2011).
* (44) Q. Liu, S. M. Wu, C. Wen, J. Wang, Sci. China-Phys. Mech. Astron. 66, 120413 (2023).
* (45) A. G. S. Landulfo and G. E. A. Matsas, Phys. Rev. A 80, 032315 (2009).
* (46) S. Harikrishnan, S. Jambulingam, P. P. Rohde, C. Radhakrishnan, Phys. Rev. A 105, 052403 (2022).
* (47) W.G. Brenna, R.B. Mann, E. Martín-Martínez, Phys. Lett. B 757, 307 (2016).
* (48) T. Li, B. Zhang, L. You, Phys. Rev. D 97, 045005 (2018).
* (49) Y. Pan and B. Zhang, Phys. Rev. A 101, 062111 (2020).
* (50) S. M. Wu, H. S. Zeng, T. Liu, New J. Phys. 24, 073004 (2022).
* (51) X. Ming, Eur. Phys. J. C 83, 1166 (2023).
* (52) L. J. Henderson, R. A. Hennigar, R. B. Mann, A. R. H. Smithe, J. Zhang, Phys. Lett. B 809, 135732 (2020).
* (53) H. Tsuchida, Opt. Lett. 15, 640 (1990).
* (54) B. Peropadre, P. Forn-Díaz, E. Solano, J. J. García-Ripoll, Phys. Rev. Lett. 105, 023601 (2010).
* (55) C. M. Wilson, G. Johansson, A. Pourkabirian, M. Simoen, J.R. Johansson, T. Duty, F. Nori, P. Delsing, Nature 479, 376 (2011).
* (56) W. K. Wootters, Phys. Rev. Lett. 80, 2245 (1998).
* (57) V. Coffman, J. Kundu, and W. K. Wootters, Phys. Rev. A 61, 052306 (2000).
* (58) T. Baumgratz, M. Cramer, and M. B. Plenio, Phys. Rev. Lett. 113, 140401 (2014).
# Beyond Accuracy: ROI-driven Data Analytics of Empirical Data
Gouri Deshpande and Guenther Ruhe
Department of Computer Science, University of Calgary
###### Abstract
Background: Unprecedented access to data has created a remarkable opportunity
to analyze, understand, and optimize investigation approaches in almost all
areas of (Empirical) Software Engineering. However, data analytics is time-
and effort-consuming, thus expensive, and not automatically valuable.
Objective: This vision paper demonstrates that it is crucial to consider
Return-on-Investment (ROI) when performing Data Analytics. Decisions such as
“How much analytics is needed?” are hard to answer. ROI can guide decision
support on the What?, How?, and How Much? of analytics for a given problem.
Method: The proposed conceptual framework is validated through two empirical
studies that focus on requirements dependencies extraction in the Mozilla
Firefox project. The two case studies are (i) Evaluation of fine-tuned BERT
against Naive Bayes and Random Forest machine learners for binary dependency
classification and (ii) Active Learning against passive Learning (random
sampling) for REQUIRES dependency extraction. For both cases, the analysis
investment (cost) is estimated and the achievable benefit from DA is
predicted, to determine a break-even point of the investigation.
Results: For the first study, fine-tuned BERT performed better than Random
Forest provided that more than 40% of the training data is available. For the
second, Active Learning achieved higher F1 accuracy within fewer iterations
and a higher ROI than the Baseline (a Random-sampling-based RF classifier). In
both studies, the break-even point indicated how much analysis would likely
pay off for the invested effort.
Conclusions: Decisions for the depth and breadth of DA of empirical data
should not be made solely based on the accuracy measures. Since ROI-driven
Data Analytics provides a simple yet effective direction to discover when to
stop further investigation while considering the cost and value of the various
types of analysis, it helps to avoid over-analyzing empirical data.
keywords: Data Analytics, Return-on-Investment, Requirements Engineering,
Dependency extraction, BERT, Mozilla
## 1 Introduction
Return-on-Investment (ROI) is of great interest in engineering and business
for arriving at decisions. This is true in Software Engineering (SE) as well.
For example, Silverio et al. [16] evaluated cost-benefit analysis for the
adoption of software reference architectures for optimizing architectural
decision-making. Cleland-Huang et al. [8] studied a heterogeneous solution for
improving the ROI of requirements traceability. The recent data explosion in
the form of big data, together with advances in Machine Learning (ML), has
made questions about the efficiency and effectiveness of these processes ever
more relevant. In this paper, we present a retrospective evaluation, from an
ROI perspective, of two empirical studies taken from the field of requirements
dependency analysis.
Data Analytics in SE (also called “Software Analytics” by Bird et al. [6]) is
a term widely used, sometimes with slightly different meanings. We subsume
all efforts devoted to collecting, cleaning, preparing, classifying, analyzing
data, and interpreting the results as Data Analytics (DA). In SE, the goal of
DA is to provide better insights into some aspects of the software development
life-cycle, which could facilitate some form of understanding, monitoring, or
improvement of processes, products or projects.
Figure 1: Break-even point from cost-benefit analysis of technology
investment.
SE is uncertain in various ways. SE is highly human-centric, and processes are
not strictly repeatable. The goals and constraints of software development
change dynamically. Experimentation and DA are inherently arduous under such
circumstances. A saying widely attributed to Aristotle [2] states: “It is the
mark of an educated mind to rest satisfied with the degree of precision which
the nature of the subject admits and not to seek exactness where only an
approximation is possible”. Figure 1 shows a typical ROI (cost-benefit) curve
of technology usage. Following some phase of increase, the curve reaches
saturation, so, beyond that point, further investment does not pay off. We
contemplate that a similar behaviour holds true for applying DA.
Our research hypothesis is that ROI-driven DA helps to determine the break-
even point of investment and thus optimizes resources spent in this process.
Paper structure: Section 2 discusses related work. The problem formulation is
detailed in Section 3. Section 4 explains the empirical ROI investigation
approach for the two problems, and Section 5 presents the results. A
discussion of the applicability of the results is elaborated in Section 6.
Finally, Section 7 provides an outlook on future research.
## 2 Related work
### 2.1 ROI Analysis in Software Engineering
Evaluating the profitability of expenditure helps to measure success over a
period of time and thus takes the guesswork out of concrete decision-making.
For instance, Erdogmus et al. [15] analyzed the ROI of quality investment to
bring its importance into perspective and posed important questions: “We
generally want to increase a software product's quality because fixing
existing software takes valuable time away from developing new software. But
how much investment in software quality is desirable? When should we invest,
and where?”
Begel & Zimmermann [3] composed a set of 145 questions, based on a survey with
more than 200 developers and testers, that are considered relevant for DA at
Microsoft. One of the questions, “How important is it to have a software DA
team answer this question?”, expected an answer on a five-point scale (from
Essential to I don’t understand). Although the study provides a sneak peek
into the development and testing environments at Microsoft, it does not place
any emphasis on ROI. Essentially, we speculate that the ROI aspect was
softened into asking for perceived subjective importance through this
question.
Boehm et al. [7] presented quantitative results on the ROI of Systems
Engineering based on the analysis of the 161 software projects in the COCOMO
II database. Van Solingen [29] analyzed the ROI of software process
improvement and took a macro perspective to evaluate corporate programs
targeting the improvement of organizational maturity. Ferrari et al. [17]
studied the ROI of text mining and showed that it has not only a tangible
impact in terms of ROI but also intangible benefits: many benefits of the
investment in a knowledge management solution are not directly translated into
returns, yet they must be considered in the judgment process to integrate the
financial perspective of the analysis with non-financial ones.
Ruhe and Nayebi [23] proposed the Analytics Design Sheet as a means to sketch
the skeleton of the main components of the DA process. The four-quadrant
template provides direction to brainstorm candidate DA methods and techniques
in response to the problem statement and the data available. In its nature,
the sheet is qualitative. ROI analysis goes further and adds a quantitative
perspective for outlining DA.
### 2.2 Empirical Analysis for Requirements Dependency Extraction
The extraction of dependencies among requirements is an active field of SE
research. The practical importance of the topic was confirmed by our survey
[13]. More than 80% of the participants agreed or strongly agreed that (i)
dependency type extraction is difficult in practice, (ii) dependency
information has implications on maintenance, and (iii) ignoring dependencies
has a significant ill impact on project success.
In the recent past, many empirical studies have explored diverse computational
methods that used natural language processing (NLP) [10] [24], semi-supervised
technique [11], hybrid techniques [12] and deep learning [18]. However, none
of the approaches considered ROI to decide among techniques and the depth and
breadth of their execution level.
## 3 Conceptual Framework for ROI-driven Data Analytics
Different models exist that provide guidance to perform DA. Wieringa [30]
provides a checklist for what he calls the design cycle and the empirical
cycle. In this study, we use the term Scoping for defining the problem and the
analysis objectives. Scoping also means defining the boundaries that help to
exclude non-essential parts of the investigation. Analysis of the projected
Return-on-Investment (ROI) serves as an input for scoping.
### 3.1 Research Question
DA follows a resource- and computation-intensive process whose data gathering
and processing components constitute a non-trivial proportion of the total
research cost. Thus, it is essential to account for these when computing the
overall cost-benefit and optimizing it further.
Our aim is to look retrospectively at DA for empirical studies (i.e., studies
already conducted in the past). In particular, we are interested in
Requirements Dependency Analysis (RDA) based studies. Through this research,
we define and validate the principal concepts needed for ROI-driven DA. Our
research question is:
RQ: What are the benefits of ROI-driven Data Analytics in the studies focusing
on Requirements Dependency Analysis?
Justification: As for any investment, it is most important to know how much is
enough. There is no incentive to invest in analytics just for the sake of
performing some analysis. Although one cannot claim exactness from this, it is
worthwhile to get some form of guidance on where (which techniques) and how
far (how much of it) one should go. To make the analysis concrete, we have
selected RDA as the area of our specific investigations.
Table 1: Parameters used for ROI computation

 | Symbol | Meaning | Unit
---|---|---|---
Cost | $C_{dg}$ | Data gathering time | Minutes
 | $C_{pp}$ | Pre-processing time | Minutes
 | $C_{e}$ | Evaluation time | Minutes
 | $C_{l}$ | Labeling time | Minutes
 | $C_{resource}$ | Human resource cost | $ per hour
Benefit | $B_{reward}$ | Value per TP | $
 | $B_{penalty}$ | Penalty per FN | $
 | $BF1_{iteration}$ | F1 difference between iterations | Number
 | $PValue$ | Projected value per 1% F1 improvement | $
Others | $H$ | #Human resources | Number
 | $N_{train}$ | Size of the training set | Number
 | $N_{test}$ | Size of the test set | Number
 | $N$ | $N_{train}$ + $N_{test}$ | Number
### 3.2 Cost Factors
Data processing is an umbrella term combining data collection ($C_{dg}$),
pre-processing ($C_{pp}$), and labeling ($C_{l}$), each of which is a cost
component. However, not all costs are fixed; some vary based on the solution
approach used to tackle a given decision problem. For example, supervised
Machine Learning (ML) requires a large amount of annotated data to begin with,
whereas Active Learning acquires these annotations iteratively over time until
a stopping condition for the classification operation is reached [25].
Additionally, there is a cost associated with modeling and evaluation
($C_{e}$).
### 3.3 Value Factors
The value returns or “benefits” are defined based on the needs of the decision
problem. In the context of dependency extraction, the benefit could be modeled
in terms of the ability of the ML model to identify a larger number of
dependencies correctly (higher # of True Positives TP: $B_{reward}$) while
limiting misclassification (reduced # of False Negatives FN: $B_{penalty}$).
Alternatively, the benefit can be determined from the net value ($PValue$) of
the change in accuracy ($BF1_{iteration}$) in every iteration, especially when
using Active Learning. Table 1 lists the relevant cost and benefit components
and their corresponding units. These will be utilized to compute the $ROI$
later for the two different problems in Section 4.4.
### 3.4 ROI
To determine the ROI, we follow the simplest form of its calculation, relating
the difference between $Benefit$ and $Cost$ to the amount of $Cost$. $Cost$ is
measured as human effort in person hours and converted into dollars, while
$Benefit$ is expressed directly in dollars.
$ROI=(Benefit-Cost)/Cost$ (1)
Costa et al. [9] distinguished the “hard ROI” from the “soft ROI”. The former
refers to the direct additional revenue generated and cost savings; the latter
to improved productivity, customer satisfaction, technological leadership, and
efficiencies.
## 4 ROI of Techniques for Requirements Dependency Analysis
We have selected the area of requirements dependency analysis (RDA) to
illustrate and initially validate the conceptual framework above. In what
follows, we introduce the key terms needed to formulate two Empirical Analysis
Studies, called EAS 1 and EAS 2.
### 4.1 Problem statement
Following are the definitions of the dependency types used to state the two
studies. For a set of requirements $R$ and a pair of requirements
$(r,s)\in R\times R$:
* 1)
An INDEPENDENT relationship is defined as the absence of any form of
relationship between a pair of requirements.
* 2)
A DEPENDENT relationship is defined as the complement of INDEPENDENT, i.e.,
there exists at least one dependency type (such as REQUIRES, SIMILAR, OR, AND,
XOR, value synergy, effort synergy, etc.) between $r$ and $s$.
* 3)
REQUIRES is a special form of DEPENDENT relationship. If $r$ requires $s$, or
$s$ requires $r$, then $r$ and $s$ are in a REQUIRES relationship.
* 4)
OTHER type of dependency is when $(r,s)$ is DEPENDENT and the dependency type
is not REQUIRES (could be any of the other dependency types mentioned in (2))
1. Problem 1 -
Binary requirements dependency extraction: For a given set $R$ of requirements
and their textual descriptions, the binary requirements dependency extraction
problem aims to classify each pair $(r,s)\in R\times R$ as DEPENDENT or
INDEPENDENT.
2. Problem 2 -
Specific requirements dependency extraction of the type REQUIRES: For a given
set $R$ of requirements and their textual descriptions, the REQUIRES
dependency extraction problem aims to classify, for each pair
$(r,s)\in R\times R$, whether they are in a $REQUIRES$ relationship.
### 4.2 Empirical Analysis Studies (EAS)
In this section, we formulate two Empirical Analysis Studies, EAS 1 and EAS 2,
to investigate the two problems explained above. We aim to analyze and compare
Bidirectional Encoder Representations from Transformers (BERT), and Active
Learning (AL), both proven to be of interest in general and pre-evaluated for
their applicability to the stated problems, with traditional ML. For the two
studies, we examine the (F1) accuracy and the ROI of the whole process of DA.
EAS 1: We compare two supervised classification algorithms, Naive Bayes (NB)
and Random Forest (RF), both successfully and prominently used for text
classification [19] in the past, with a fine-tuned BERT model [14]. The
analysis was performed for an incrementally growing training set size to
capture its impact on F1 accuracy and ROI.
BERT (Bidirectional Encoder Representations from Transformers) [14] is a
recent technique published by researchers from Google. BERT applies
bidirectional training of the Transformer, a popular attention model, to
language modeling, and achieves state-of-the-art results on many NLP tasks. In
this study scenario, we explore the question, “How does fine-tuned BERT
compare with traditional algorithms on an economic scale?”, by weighing the
models’ effectiveness against the incurred ROI.
EAS 2: Random sampling (Passive Learning) randomly selects a training set;
this approach is referred to as Baseline in the rest of the paper. Active
Learning selects the most informative instances using sampling techniques such
as MinMargin and LeastConfidence [25]. We compare Baseline with AL, using RF
as the classifier for this scenario. The analysis was done by adding a few
training samples in every iteration while concurrently classifying the
unlabeled instances.
Active Learning (AL) is a ML method that guides a selection of the instances
to be labeled by an oracle (e.g., human domain expert or a program) [25].
While this mechanism has been proven to positively address the question, “Can
machines learn with fewer labeled training instances if they are allowed to
ask questions?” [26], through this exploration we try to answer the question,
“Can machines learn more economically if they are allowed to ask questions?”
### 4.3 Data
The online bug tracking system Bugzilla [20] is widely used in open-source
software development. New requirements are logged into these systems in the
form of issue reports [27, 4], which help software developers track them for
effective implementation [28], testing, and release planning. In Bugzilla,
feature requests are a specific type of issue that is typically tagged as
“enhancement” [21]. We retrieved these feature requests or requirements from
Firefox and exported all related fields such as Title, Type, Priority,
Product, Depends_on, and See_also.
Data collection: Collecting data from Bugzilla was a substantial effort that
was carried out in multiple rounds. We collected 3,704 enhancements from
Firefox using the REST API through a Python script, such that each enhancement
considered for retrieval depends on at least one other in the dataset. The
data spanned from 08/05/2001 to 09/08/2019.
Data preparation: The complete data was analyzed to eliminate special
characters and numbers. Then dependent requirement pairs were created based on
the depends_on (interpreted as REQUIRES dependency) field information for each
one of the enhancements. Requirements with no dependency between them were
paired to generate INDEPENDENT class dataset. Further, sentence pairs that had
fewer than three words in them were filtered out resulting in 3,373 REQUIRES,
219 OTHER and 21,358 INDEPENDENT pairs.
Pre-processing and feature extraction: The data was first processed to
eliminate stop words and then lemmatized following the traditional NLP
pipeline [1]. For supervised and AL ML, we used the Bag Of Words (BOW) [22]
feature extraction method, which groups textual elements as tokens. For
applying BERT, we retained sentence pairs in their original form (without stop
word removal and lemmatization).
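As a small illustration of this step, the sketch below (ours, not from the study's tooling) builds BOW token counts with scikit-learn; the two requirement texts are invented examples rather than Firefox data.

```python
# Minimal sketch of the BOW feature-extraction step using scikit-learn.
# The two requirement texts below are invented examples, not Firefox data.
from sklearn.feature_extraction.text import CountVectorizer

pair = ["add keyboard shortcut for tab switching",
        "tab switching requires keyboard focus handling"]
vectorizer = CountVectorizer(stop_words="english")  # stop-word removal, as in the pipeline
features = vectorizer.fit_transform(pair)           # token-count matrix, one row per text
print(vectorizer.get_feature_names_out())
print(features.toarray())
```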
Classifiers: For both NB and RF, the data was split into train and test
(80:20) and balanced between classes. Also, hyper-parameter tuning was
performed and the results for 10-fold cross-validation were computed, followed
by testing (on unseen data).
To fine-tune the BERT model, we used NextSentencePrediction
(https://huggingface.co/transformers/model_doc/bert.html#bertfornextsentenceprediction),
a pre-trained BERT model for sentence-pair classification, and further
fine-tuned it on the RDA-specific dataset using a Tesla K80 GPU on Google
Colab (https://colab.research.google.com/).
### 4.4 ROI Modeling
#### 4.4.1 EAS 1
Classification algorithms such as RF and NB have been explored in NLP-based SE
problems. These algorithms are driven to a great extent by feature extraction,
which can influence their effectiveness on classification outcomes. However,
feature extraction is problem-specific and incurs substantial cost as well as
the need for domain expertise.
On the other hand, BERT eliminates the need for feature extraction since it is
a language model based on deep learning. BERT, pre-trained on a large text
corpus, can be fine-tuned on specific tasks by providing only a small amount
of domain-specific data.
In this empirical analysis, we conducted classification by utilizing a
fraction of the whole dataset for training, with a small fixed test set. This
was repeated while gradually increasing the training fraction, and the results
were captured.
During every classification, $Cost$ and $Benefit$ were computed using various
parameters explained in Table 1. $Cost$ is the sum of the data processing
costs ($(C_{dg}+C_{pp}+C_{e}+C_{l})/60$) (in hours) for a fraction (N%) of
training set. This is further translated into dollar cost based on hourly
charges ($C_{resource}$) of $H$ human resources.
$Cost=N\%*\frac{(C_{dg}+C_{pp}+C_{e}+C_{l})}{60}*H*C_{resource}$ (2)
$Benefit$ computations for RDA assume a reward ($B_{reward}$) for identifying
dependent requirements (TP) while penalizing ($B_{penalty}$) instances falsely
identified as independent (FN).
$Benefit=TP*B_{reward}-FN*B_{penalty}$ (3)
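For concreteness, here is a minimal Python sketch (ours, not part of the study's tooling) that chains Equations (1)-(3) with the Table 2 settings; the TP/FN counts and the 20% training fraction in the usage line are hypothetical, and interpreting $N\%$ as a fraction of the $N=4{,}586$ sample pairs is our assumption.

```python
# Minimal sketch of the EAS 1 ROI computation (Equations 1-3).
# TP/FN counts are hypothetical inputs that would come from the
# confusion matrix of a trained classifier.

def eas1_roi(frac_train, tp, fn, n=4586, c_fixed=1.0, c_label=0.5,
             rate=400.0, humans=1, reward=500.0, penalty=500.0):
    """ROI of one classification run using a fraction of the training data."""
    hours = frac_train * n * (c_fixed + c_label) / 60.0  # processing + labeling
    cost = hours * humans * rate                         # Equation (2), in dollars
    benefit = tp * reward - fn * penalty                 # Equation (3)
    return (benefit - cost) / cost                       # Equation (1)

# Example (hypothetical): 20% of the data, 60 true positives, 10 false negatives.
print(eas1_roi(0.20, tp=60, fn=10))
```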
Table 2: Parameter settings for the two empirical analysis scenarios

Parameter | Value
---|---
$C_{fixed}=C_{dg}+C_{pp}+C_{e}$ | 1 min/sample
$C_{l}$ | 0.5 min/sample
$C_{resource}$ | $400/hr
$H$ | 1
$N$ | 4,586
$B_{reward}$ | $500/TP
$B_{penalty}$ | $500/FN
$BF1_{iteration}$ | $=F_{cur}-F_{prev}$
$PValue$ | $10,000 per 1% F1 improvement
#### 4.4.2 EAS 2
In this empirical analysis, we compared AL with a traditional
random-sampling-based classification (Baseline) using the RF ML algorithm.
Beginning with 60 training samples of each class (REQUIRES, INDEPENDENT and
OTHER), we developed multi-class classifiers for both AL and Baseline for this
empirical study scenario. In each iteration, AL used the MinMargin sampling
technique (which performed well compared to Least Confidence and Entropy,
hence its use in this study) to identify the 20 most uncertain instances
(requirement pairs) for the oracle to label; tests were performed with
#samples = 10, 15, and 20, and here we discuss the results for #samples = 20.
Baseline randomly selected 20 instances and added them to the training set
along with their labels, thus keeping the two approaches comparable across all
20 iterations. Since the data is already labeled, for AL we pretend the
instances are unlabeled until queried and labeled by a simulated oracle.
The $Cost$ is determined by first computing the total processing time in
person hours, covering data processing ($C_{fixed}=C_{dg}+C_{pp}+C_{e}$) and
labeling ($C_{l}$) of the training set ($N_{train}$), plus the data processing
cost ($C_{fixed}$) for the test set. This is then translated into a dollar
cost ($C_{total}$) based on the hourly charges ($C_{resource}$) of $H$ human
resources.
$Cost=\frac{N_{train}*(C_{fixed}+C_{l})+N_{test}*C_{fixed}}{60}$
$C_{total}=Cost*H*C_{resource}$ (4)
Likewise, $Benefit$ is defined as the monetary value associated with a 1%
improvement in F1 score ($BF1_{iteration}$) between subsequent iterations.
$Benefit=BF1_{iteration}*PValue$ (5)
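Analogously, here is a minimal sketch (ours) of the per-iteration bookkeeping from Equations (1), (4) and (5); the sample counts and F1 values in the usage line are hypothetical, and converting $BF1_{iteration}$ to percentage points before applying $PValue$ is our reading of Table 2.

```python
# Minimal sketch of the per-iteration EAS 2 ROI (Equations 1, 4, 5).

def eas2_roi(n_train, n_test, f1_cur, f1_prev, c_fixed=1.0, c_label=0.5,
             rate=400.0, humans=1, pvalue=10_000.0):
    hours = (n_train * (c_fixed + c_label) + n_test * c_fixed) / 60.0
    cost_total = hours * humans * rate               # Equation (4)
    benefit = (f1_cur - f1_prev) * 100 * pvalue      # Equation (5), per 1% of F1
    return (benefit - cost_total) / cost_total       # Equation (1)

# Example (hypothetical): 240 labeled training pairs, 1,000 test pairs,
# F1 improving from 0.58 to 0.60 in the current iteration.
print(eas2_roi(240, 1000, f1_cur=0.60, f1_prev=0.58))
```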
## 5 Results
Figure 2: F1 score plot for NB, RF and BERT trained over increasing training
set size, F1 improves, but plateaus beyond a certain point
In the real world, cost and benefit values are hard to obtain and are uncertain.
All the results presented in this section are based on the parameter settings
given in Table 2. The settings reflect practical experience but are not taken
from a specific data collection procedure. We claim that the principal
arguments made in our paper are independent of these settings.
### 5.1 EAS 1
(a) F1 vs ROI for Random Forest
(b) F1 vs ROI for Fine tuned BERT
Figure 3: Empirical Analysis Scenario 1 (EAS 1)
Figure 2 provides the “accuracy only view” and shows that F1 gradually
increases with the increasing training size for the three ML algorithms: NB,
RF, and BERT. However, all three ML algorithms reach saturation at larger
training set sizes. While BERT performed exceptionally well once the training
set size exceeded 42%, it would have been ideal to pre-determine “How much
training is enough?”. Thus, we selected the top two classifiers from Figure 2,
BERT and RF, applied the monetary values (Table 2) for the various cost and
benefit factors defined in Table 1, and computed the ROI.
Figure 3(a) and 3(b) show the results for RF and BERT, respectively. The ROI
behaviour is not monotonic and peaks in both cases. RF classification achieved
its highest ROI with just 20% of the training set at an accuracy of F1 = 0.7,
whereas its highest F1 value of 0.75 came with the lowest ROI of 4.7.
For RF classification and applying ROI arguments, learning can be stopped with
20% of the training set.
Turning to BERT classification, the best ROI-driven results, F1 = 0.84 with an
ROI of 8.43, were achieved with a 60% training set. Although F1 rose to 0.9
with a 70% training set size, the ROI dropped to 7.27. At the recommended 20%
training set size, the ROI has a local optimum. BERT generally performs well
on F1; however, whether it is worth the ROI needs to be explored.
For training set sizes of at least 40% of the size of the whole set, BERT
performed better than RF in terms of both accuracy and ROI.
### 5.2 EAS 2
We analyzed the ROI for Baseline against AL for classifying the REQUIRES
class. The results are shown in Figure 4(a) and Figure 4(b). Similar to EAS 1,
we applied the values from Table 2 and equations (4) and (5) to compute cost
and benefit at every iteration for both the approaches. For the Baseline
approach, ROI peaked at 3.2 and F1 = 0.6, in the very 2nd iteration. Onwards,
ROI drastically decreased which indicated lesser value for increasing training
set by random sampling (Baseline) method.
Similar behavior was observed for the AL approach, shown in Figure 4(b). The
peak here came after three iterations, with ROI = 4.5 and F1 = 0.8.
Both Baseline and AL showed the best ROI performance in the early iterations.
Higher F1 accuracy needs additional human resources and reduces the ROI.
(a) F1 vs ROI for Baseline
(b) F1 vs ROI for AL
Figure 4: Empirical Analysis Scenario 2 (EAS2)
## 6 Discussion
For the problem of RDA, we explored the potential value of ROI-driven
decisions. When chasing higher accuracy, there is a risk of over-analyzing
empirical data, in the sense that the value added by the increased accuracy
does not justify the additional effort needed to achieve it.
What does a high or low ROI mean for DA? If available, a high ROI ratio
indicates that a substantial benefit can be expected from following the
recommendations derived from DA. Assuming that the ROI-driven suggestions are
implemented, even small improvements on decision problems with high impact can
justify the effort invested. Analysis of effort and benefit, targeting high
ROI, also implies simplicity first: advanced methods are needed, but their
practical application is hard to justify if a similar type of insight could be
reached by a much simpler analysis, e.g., descriptive statistics.
What is the risk of ignoring ROI analysis? The calculation of ROI is based on
value and effort estimates and thus only provides an approximation. In all
types of exploratory data analysis, the emphasis is mainly on creating new
research hypotheses or validating existing assumptions; in these cases, the
notion of ROI is not the primary concern. Also, the estimates of value and
effort are highly context-dependent; hence, the ROI might only serve as a soft
recommendation. On the other hand, whenever the ROI can be determined as a
reasonable estimate, even using intervals of best- and worst-case performance,
ignoring it means potentially wasting effort on analysis that does not pay off
the investment made. For EAS 1, if the training set size was limited to 30%,
RF could be considered the better choice over BERT. However, with the
possibility of increasing the training set size, the BERT approach could be
favored.
## 7 Conclusions and Future Work
In this vision paper, we proposed to complement Data Analytics of empirical
studies with ROI analysis to avoid over-analyzing data. To validate the need,
we performed an analysis of the accepted papers of the ESEM conferences
between 2015 and 2019 and found that 51 out of 190 papers (27%) addressed some
form of DA. Among them, 39% included some consideration of cost, value, or
benefit; however, none of them directly explored or discussed ROI or used
cost-benefit analysis to decide the degree of DA needed. From a
decision-making perspective, selecting one out of many techniques, and for the
selected technique deciding when to terminate the analysis, amounts to
enlarging the scope from one to two criteria.
Beyond accuracy, which reflects the benefit, it is essential to look into the
investment as well. Examining the different aspects of accuracy is essential,
but it does not provide the full picture when effort consumption and impact
are ignored. Effort estimation is well studied; however, the prediction of
value [5] has not been explored as much. Even rough estimates may help to
decide how much further investment into DA is reasonable. To make this agenda
successful, economic, business, and social concepts need to be taken into
account, apart from just the technical aspects.
## Acknowledgement
We thank Atharva Naik and Venessa Chan for useful comments. This work is
supported by the Natural Sciences and Engineering Research Council of Canada,
Discovery Grant RGPIN-2017-03948.
## References
* [1] A. Arellano, E. Zontek-Carney, and M. A. Austin. Frameworks for natural language processing of textual requirements. International Journal On Advances in Systems and Measurements, 8:230–240, 2015.
* [2] J. Barnes et al. The Nicomachean Ethics. Penguin, 2004.
* [3] A. Begel and T. Zimmermann. Analyze this! 145 questions for data scientists in software engineering. In ICSE, pages 12–23, 2014.
* [4] T. Bhowmik and S. Reddivari. Resolution trend of just-in-time requirements in open source software development. In 2015 IEEE Workshop on Just-In-Time Requirements Engineering (JITRE), pages 17–20. IEEE, 2015.
* [5] S. Biffl, A. Aurum, B. Boehm, H. Erdogmus, and P. Grünbacher. Value-based software engineering. Springer Science & Business Media, 2006.
* [6] C. Bird, T. Menzies, and T. Zimmermann. The art and science of analyzing software data. Elsevier, 2015.
* [7] B. Boehm et al. The roi of systems engineering: Some quantitative results for software-intensive systems. Systems Engineering, 11(3):221–234, 2008.
* [8] J. Cleland-Huang, G. Zemont, and W. Lukasik. A heterogeneous solution for improving the return on investment of requirements traceability. In Proc. 12th IEEE Requirements Engineering Conference, 2004., pages 230–239. IEEE, 2004.
* [9] A. Costa et al. Intraoral hard and soft tissue depths for temporary anchorage devices. In Seminars in orthodontics, volume 11, pages 10–15. Elsevier, 2005.
* [10] J. Dag et al. A feasibility study of automated natural language requirements analysis in market-driven development. RE, 7(1):20–33, 2002.
* [11] G. Deshpande, C. Arora, and G. Ruhe. Data-driven elicitation and optimization of dependencies between requirements. Proc. RE, 2019.
* [12] G. Deshpande et al. Requirements dependency extraction by integrating active learning with ontology-based retrieval. Proc. RE, 2020.
* [13] G. Deshpande and G. Ruhe. Survey: Elicitation and maintenance of requirements dependencies, Dec 2020.
* [14] J. Devlin et al. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* [15] H. Erdogmus, J. Favaro, and W. Strigel. Return on investment. IEEE Software, 21(3):18–22, 2004.
* [16] S. Fernández et al. Rearm: a reuse-based economic model for software reference architectures. In Int. Conf. on Software Reuse, pages 97–112. Springer, 2013.
* [17] M. Ferrari et al. Roi in text mining projects. WIT Transactions on State-of-the-art in Science and Engineering, 17, 2005.
* [18] J. Guo et al. Semantically enhanced software traceability using deep learning techniques. In 2017 IEEE/ACM 39th ICSE, pages 3–14. IEEE, 2017.
* [19] C. Manning et al. Introduction to information retrieval. Natural Language Engineering, 16(1):100–103, 2010.
* [20] Mozilla.org. Bugzilla: bug-tracking system, Apr 2020.
* [21] Mozilla.org. Userguide/bugfields, Apr 2020.
* [22] J. Ramos et al. Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learning, volume 242, pages 133–142. New Jersey, USA, 2003.
* [23] G. Ruhe and M. Nayebi. What counts is decisions, not numbers — toward an analytics design sheet. In Perspectives on Data Science for SE, pages 111–114. Elsevier, 2016.
* [24] R. Samer, M. Stettinger, M. Atas, A. Felfernig, G. Ruhe, and G. Deshpande. New approaches to the identification of dependencies between requirements. In 31st Conference on Tools with Artificial Intelligence, ICTAI ’19. ACM, 2019.
* [25] B. Settles. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
* [26] B. Settles. From theories to queries: Active learning in practice. In Active Learning and Experimental Design workshop colocated with AISTATS 2010, pages 1–18, 2011.
* [27] L. Shi et al. Understanding feature requests by leveraging fuzzy method and linguistic analysis. In Proc. of the 32nd Conf. on ASE, pages 440–450. IEEE Press, 2017.
* [28] Y. Shin, J. H. Hayes, and J. Cleland-Huang. Guidelines for benchmarking automated software traceability techniques. In Proceedings of the 8th Int. Symposium on Software and Systems Traceability, pages 61–67. IEEE Press, 2015.
* [29] R. Van Solingen. Measuring the roi of software process improvement. IEEE software, 21(3):32–38, 2004.
* [30] R. J. Wieringa. Design science methodology for information systems and software engineering. Springer, 2014.
# POSTER: spaceQUIC: Securing Communication in Computationally Constrained
Spacecraft
Joshua Smailes, Razvan David, Sebastian Köhler, Simon Birnbach and Ivan Martinovic
University of Oxford
###### Abstract.
Recent years have seen a rapid increase in the number of CubeSats and other
small satellites in orbit – these have highly constrained computational and
communication resources, but still require robust secure communication to
operate effectively. The QUIC transport layer protocol is designed to provide
efficient communication with cryptography guarantees built-in, with a
particular focus on networks with high latency and packet loss. In this work
we provide spaceQUIC, a proof of concept implementation of QUIC for NASA’s
“core Flight System” satellite operating system, and assess its performance.
† Both authors contributed equally to this work.
## 1. Motivation
Alongside a general upward trend in the number of satellites in orbit, there
has been a marked recent increase in the number of small satellites in space.
This increase has been driven by a number of factors, including a growing
availability of cheap Commercial Off-The-Shelf (COTS) components, satellite
ride-sharing (in which smaller satellites are launched alongside a larger
payload), and the rise in popularity of the CubeSat, a standardized design
enabling cheap ride-sharing.
Another significant factor has been software availability – access to
open-source operating systems and libraries allows operators to focus on
building payload-specific functionality, reducing redundant mission
development effort.
core Flight System (cFS) is a popular open-source satellite operating system
built by NASA from historical missions, and is used in many ongoing and
planned missions (1). It is actively maintained by NASA and
the open-source community surrounding the project, and is easily extensible
through the addition of libraries or apps to support specific payloads or
ancillary functions. It is built on top of an Operating System Abstraction
Layer (OSAL), making it easy to port to new hardware platforms, alongside
those for which it is already supported. For these reasons it is a popular
choice of satellite operating system and is used in a wide range of CubeSat
missions.
One key challenge in space systems is performant secure communication – the
vast majority of communication occurs over radio signals which are subject to
significant path loss, atmospheric noise, and multipath distortion. This
problem is exacerbated in CubeSats, which often use omnidirectional antennas
due to an inability to orient themselves, or antennas with lower gain due to
their limited size. As a result, much of these satellites’ communication is
low throughput and subject to data loss or corruption.
The TCP transport layer protocol is rarely used – its congestion control
algorithm assumes that packet loss is caused by a congested link and waits for
retransmission. These assumptions do not apply to point-to-point satellite
communications, and result in a significant decrease in throughput. Instead,
datagram-oriented protocols like UDP or the Space Packet Protocol (SPP) are
used. These are secured using symmetric cryptography through the Space Data
Link Security (SDLS) protocol, implemented in the CryptoLib cFS library
(2). This provides fewer security guarantees than asymmetric
cryptography, but requires less computational overhead.
The QUIC transport layer protocol, introduced in 2012, addresses these
concerns (iyengarQUIC2021, 3). The connection establishment process is highly
streamlined, and in many cases data can start being sent with 0 round trips of
setup. The protocol also provides improved congestion control and recovery
from losses, resulting in significantly better throughput over lossy and noisy
connections. Furthermore, security is built into the protocol, providing all
the security guarantees of an asymmetric cryptosystem with at most one network
round-trip time of setup. These factors make QUIC highly attractive in the
context of lossy satellite connections – existing work has shown that
combining QUIC with performance enhancing proxies can provide better
performance and security over satellite internet connections (4). However,
there is currently no way to leverage the benefits of QUIC when
in direct communication with satellites.
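To illustrate how little connection setup QUIC demands on the ground segment, here is a minimal client sketch using the third-party Python aioquic library; spaceQUIC itself is C code inside cFS, so the host, port, payload, and the use of aioquic are illustrative assumptions rather than part of spaceQUIC's API.

```python
# Minimal sketch: sending a command over QUIC from a ground-station tool.
# Uses the third-party aioquic library (an assumption; spaceQUIC itself is
# C code inside cFS). Only illustrates QUIC's low connection-setup cost.
import asyncio
import ssl
from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def send_command(host: str, port: int, payload: bytes) -> bytes:
    config = QuicConfiguration(is_client=True)
    config.verify_mode = ssl.CERT_NONE  # illustrative; a mission would pin certificates
    async with connect(host, port, configuration=config) as client:
        reader, writer = await client.create_stream()  # one bidirectional stream
        writer.write(payload)
        writer.write_eof()
        return await reader.read()

# asyncio.run(send_command("groundstation.example", 4433, b"NOOP"))
```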
### 1.1. Contributions
In this work we introduce the spaceQUIC library, implementing QUIC
functionality on the cFS satellite operating system. This brings the
additional performance and resilience of QUIC to space missions, alongside the
increased security of asymmetric cryptography. This library can be used as a
replacement for CryptoLib, the existing library providing asymmetric
cryptography through the SDLS protocol.
We provide a high-level overview of the cFS architecture and describe how
spaceQUIC fits into this model. We also explain how to extend spaceQUIC to
work with real-world missions.
All code has been made open source under the Apache 2.0 license, and can be
found at https://github.com/ssloxford/spaceQUIC. For ease of setup, we also
provide an instance of cFS preconfigured to use spaceQUIC, both as source code
and a Docker container.
## 2. Architecture
Figure 1. The overall architecture of a space system running cFS using
spaceQUIC.
Figure 1 shows the structure of a cFS system using spaceQUIC. Thanks to the
modular structure of cFS, both central cFS functionality, as well as mission-
specific applications and libraries, are left unchanged. All communication
occurs over a central software bus, with applications exchanging data through
a publish/subscribe message passing system. This means the underlying
communication stack is abstracted away from most of the system, with data sent
only via the software bus.
spaceQUIC is provided as a cFS library, giving access to the required QUIC
functionality. This library is used by modified Command Ingest (CI) and
Telemetry Output (TO) applications – these are provided as part of the
standard cFS system, and are used for processing commands, and sending
telemetry and housekeeping data back to the ground system. The provided “lab”
versions of these applications send data directly over the network, and are
modified on a per-mission basis to support that mission’s radio hardware.
Existing CI and TO applications can also be modified to support spaceQUIC,
replacing calls to CryptoLib or other security/networking libraries.
The spaceQUIC library supports two implementations of SSL/TLS: OpenSSL and
WolfSSL (5). WolfSSL is designed for use in embedded devices and
optimized to minimize resource usage, making it ideal for small satellites.
## 3. Performance
Table 1. Overall memory usage of cFS under each security configuration.
Security | Peak heap usage (kB) | Peak RSS usage (MB)
---|---|---
None | 84.5 | 6.7
SDLS | 89.5 | 8.3
QUIC (OpenSSL) | 583.3 | 13.3
QUIC (WolfSSL) | 344.8 | 9.5
Figure 2. Encryption and decryption times for SDLS and QUIC.
In this section we assess the performance and resource usage of spaceQUIC to
demonstrate its usefulness in embedded contexts.
All experiments were performed on a Dell laptop with an Intel i7-8750H CPU and
16GB of DDR4 memory, limited to a single thread. Embedded hardware comparable
to onboard satellite hardware was not available, so we focus on relative
performance.
Table 1 shows the memory usage of cFS under each configuration, looking at
both peak heap usage and Resident Set Size (RSS) of the process. We see that
QUIC uses significantly more heap space than SDLS, but when using WolfSSL
overall memory usage is only slightly increased. It is likely that there would
be no memory usage problems running QUIC on all but the most computationally
constrained spacecraft.
We also measured execution time, seen in Figure 2. From these results we
observe that QUIC is $1.5$ to $2.5$ times slower than SDLS – this is
unsurprising due to the greater requirements of asymmetric cryptography, but
further testing on embedded hardware is needed.
Further testing is also required to measure performance when latency and
packet loss are high – due to the protocol’s design, spaceQUIC is likely to
perform well in these scenarios.
## References
* (1) David McComas, Susanne Strege and Jonathan Wilmot “Core Flight System (cFS) a Low Cost Solution for SmallSats” In _Annual Small Satellite Conference_ , 2015
* (2) CCSDS “Space Data Link Security Protocol”, 2015
* (3) Jana Iyengar and Martin Thomson “QUIC: A UDP-Based Multiplexed and Secure Transport”, 2021 DOI: 10.17487/RFC9000
* (4) James Pavur, Martin Strohmeier, Vincent Lenders and Ivan Martinovic “QPEP: An Actionable Approach to Secure and Performant Broadband From Geostationary Orbit” In _Proceedings 2021 Network and Distributed System Security Symposium_ Virtual: Internet Society, 2021 DOI: 10.14722/ndss.2021.24074
* (5) wolfSSL “wolfSSL”, 2023 URL: https://www.wolfssl.com/
# Robust and efficient change point detection using novel multivariate rank-
energy GoF test
###### Abstract
In this paper, we use and further develop upon a recently proposed
multivariate, distribution-free Goodness-of-Fit (GoF) test based on the theory
of Optimal Transport (OT) called the Rank Energy ($\mathsf{RE}$) [1], for non-
parametric and unsupervised Change Point Detection (CPD) in multivariate time
series data. We show that directly using $\mathsf{RE}$ leads to high
sensitivity to very small changes in distributions (causing high false alarms)
and it requires large sample complexity and huge computational cost. To
alleviate these drawbacks, we propose a new GoF test statistic called as soft-
Rank Energy ($\mathsf{sRE}$) that is based on entropy regularized OT and
employ it towards CPD. We discuss the advantages of using $\mathsf{sRE}$ over
$\mathsf{RE}$ and demonstrate that the proposed $\mathsf{sRE}$ based CPD
outperforms all the existing methods in terms of Area Under the Curve (AUC)
and F1-score on real and synthetic data sets.
©2022 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
Index Terms— multivariate rank, rank energy, soft rank energy, optimal
transport, Halton points.
## 1 Introduction
A significant part of multivariate time series analysis deals with detecting
unknown abrupt changes in a temporal signal that represent transitions from
one state to another. This problem, commonly referred to as change point
detection (CPD), has been extensively studied in statistics and machine
learning and is found in many real-world applications including
and machine learning and is found in many real-world applications including
the analysis of physiological [2], financial [3] and sensor data [4].
Statistical CPD methods can be categorized according to different criteria,
e.g., univariate vs. multivariate, parametric vs. nonparametric. Parametric or
model-based methods assume that parameters of the underlying time series data
distribution are either known or can be learned from data [5, 6]. Parametric
methods are advantageous if either of these assumptions holds true. When the
distributions are unknown or learning the parameters from data is difficult,
nonparametric methods are desirable. Approximation of divergence based on
direct density-ratio estimation [7] and the integral probability metric e.g.,
maximum mean discrepancy (MMD) [8] based two-sample multivariate Goodness-of-
fit (GoF) test have been proposed in [9, 10] as the nonparametric ways to
detect change points. Some other nonparametric GoF tests that have been used
for CPD such as Kolmogorov-Smirnov (KS), and Cramer-von-Mises [11], are
univariate in nature. Recently [12] used the univariate Wasserstein two-sample
test (W2T) based on the theory of OT in one-dimension [13] for CPD. However,
one of the drawbacks of this method is that it projects the data onto several
one-dimensional directions and uses the average statistic, which can lead to a
loss of detection power.
Main contributions: In this work, we propose CPD using a recently developed
statistic known as the Rank Energy ($\mathsf{RE}$) [1]. What makes
$\mathsf{RE}$ attractive for CPD is that it is a multivariate rank-based and
distribution-free GoF test [1], where the notion of rank is derived by
leveraging the theory of Optimal Transport (OT) in high dimensions.
However, as we outline in more detail subsequently and as borne out from
simulation results, directly using the sample version of $\mathsf{RE}$ for CPD
has some drawbacks, namely, high sensitivity to small changes (leading to high
false alarm rates), high computational complexity, and large sample
complexity. To alleviate these shortcomings, we propose a new statistic,
called soft-Rank Energy ($\mathsf{sRE}$) that leverages the computational and
sample efficient entropy regularized OT [14] and exploit it for CPD. We
demonstrate the advantages of using $\mathsf{sRE}$ over $\mathsf{RE}$. We also
evaluate the performances of $\mathsf{RE}$ and $\mathsf{sRE}$ on both toy and
real datasets and compare them with the existing state-of-the-art (SOTA)
methods.
The rest of the paper is organized as follows. In Section 3 we provide the
necessary background on the multivariate $\mathsf{RE}$ and highlight the pros
and cons of using $\mathsf{RE}$ for CPD. In Section 4, we introduce the sample
version of $\mathsf{sRE}$ and employ it for CPD. In Section 5 we show improved
AUC and F1-scores for CPD on real datasets compared to the state-of-the-art.
## 2 Problem set-up
Notation: We use bold-math capital letters $\bm{X}$ for multivariate random
variables, bold-face capital letters $\mathbf{X}$ for matrices and maps,
lower-case bold-math $\bm{x}$ for vectors. We denote by
$\mathcal{P}(\mathbb{R}^{d})$ the set of probability measures on
$\mathbb{R}^{d}$. The rest of the notation is standard and should be clear
from the context.
Given a time series $\bm{Z}[t]\in\mathbb{R}^{d}$, $t=1,2,\dots$, the data
consists of distinct segments
$[0,\tau_{1}],\,[\tau_{1}+1,\tau_{2}],\dots,[\tau_{k-1}+1,\tau_{k}]$
with $\tau_{1}<\tau_{2}<\dots<\tau_{k}$. Samples within each segment,
$\bm{Z}[t],\,t\in[\tau_{i-1}+1,\tau_{i}]$, are assumed to be i.i.d. and to
originate from an unknown distribution. In general, the distributions in two
adjoining segments are considered to be different, whereas two distant
segments may have similar distributions. The primary objective of a
nonparametric CPD method is to detect the change points
$\tau_{1},\tau_{2},\dots,\tau_{k}$ without any prior knowledge of or
assumptions on the set of underlying distributions of the distinct time
segments.
A sliding window two-sample GoF test: A common framework to detect change
points is the sliding-window approach [9]. Given a window size $n$ on each
side of a possible change point, an offline, unsupervised sliding-window-based
CPD method takes two adjacent time segments
$\{\bm{Z}[t-n],\bm{Z}[t-n+1],\dots,\bm{Z}[t-1]\}\sim\mu_{X}$ and
$\{\bm{Z}[t],\bm{Z}[t+1],\dots,\bm{Z}[t+n-1]\}\sim\mu_{Y}$ and carries out a
two-sample GoF test at each $t=1,2,\dots$
For CPD, using the sliding window GoF test, a detection range $\delta$ is
utilized. A time point $t$ is declared as a change point if the statistic
$\sigma(t)$ at this point is the local maximum within $\delta$ and above a
threshold $\eta$. In general, $\eta$ is specific to the statistical test and
is calculated from the distribution of the statistic under the null, given a
confidence level. The sliding-window based CPD procedure is described
in Algorithm 1. In this context, our main contributions in this paper are to
apply the recently proposed Rank-Energy ($\mathsf{RE}$) [1] as the GoF test
for CPD, highlight its main properties and shortcomings and then propose a new
test that improves upon this GoF test.
Algorithm 1 Sliding-window based CPD employing GoF test
1:$\bm{Z}[t]$: data, window size: $n$, threshold: $\eta$.
2:for each $t:n:(T-n)$ do
3: $\mathbf{X}[t]=\{\bm{Z}[t-n],\dots,\bm{Z}[t-1]\}$
4: $\mathbf{Y}[t]=\{\bm{Z}[t],\dots,\bm{Z}[t+n-1]\}$
5: $\sigma(t)\leftarrow\text{GoF-statistic}\big(\mathbf{X}[t],\mathbf{Y}[t]\big)$
6: end for
7: Output: $\{\tau_{1},\tau_{2},\dots\}\leftarrow\{t \,|\, \sigma(t)>\eta \text{ and } \sigma(t) \text{ is a local maximum within } \delta\}$
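A minimal NumPy sketch of Algorithm 1 (ours), with the GoF statistic passed in as a function so that either $\mathsf{RE}$ or $\mathsf{sRE}$ can be plugged in; the local-maximum check implements the detection-range rule described above.

```python
# Minimal sketch of Algorithm 1: sliding-window CPD with a pluggable
# two-sample GoF statistic `gof_stat` (e.g., sample RE or sRE).
import numpy as np

def sliding_window_cpd(Z, n, eta, delta, gof_stat):
    """Z: (T, d) array; n: window size; eta: threshold; delta: detection range."""
    T = Z.shape[0]
    sigma = np.full(T, -np.inf)
    for t in range(n, T - n):
        sigma[t] = gof_stat(Z[t - n:t], Z[t:t + n])   # adjacent windows around t
    # t is a change point if sigma[t] exceeds eta and is a local max within delta
    return [t for t in range(n, T - n)
            if sigma[t] > eta
            and sigma[t] == sigma[max(0, t - delta):t + delta + 1].max()]
```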
## 3 Background: Optimal Transport (OT) Based Multivariate Rank Energy Test
Optimal Transport, in it’s most well-studied setting [15, 14], aims to find
$\mathbf{T}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$\- a map that pushes a
source distribution $\mu\in\mathcal{P}(\mathbb{R}^{d})$ to a target
distribution $\nu\in\mathcal{P}(\mathbb{R}^{d})$ with a minimal expected
squared Euclidean cost. That is, given two multivariate random variables,
$\bm{X}\in\mathbb{R}^{d}\sim\mu$, and $\bm{Y}\in\mathbb{R}^{d}\sim\nu$, OT
finds a map $\mathbf{T}$ that solves
$\displaystyle\inf_{\mathbf{T}}\int\|\bm{x}-\mathbf{T}(\bm{x})\|^{2}\,d\mu(\bm{x})\;\;\text{subject to}\;\;\mathbf{T}(\bm{X})\sim\nu,$ (1)
where $\|\cdot\|$ denotes the standard Euclidean norm in $\mathbb{R}^{d}$.
Note that if $\mathbf{T}(\bm{X})\sim\nu$ when $\bm{X}\sim\mu$, we write
$\nu=\mathbf{T}_{\\#}\mu$. In this case, the measure $\nu$ is referred to as
the push-forward of measure $\mu$ under the mapping $\mathbf{T}$.
When $d=1$, it is known that the optimal map is
$\mathbf{T}=\mathsf{F}_{\nu}^{-1}\circ\mathsf{F}_{\mu}$, where
$\mathsf{F}_{\mu}$ and $\mathsf{F}_{\nu}$ are the (cumulative) distribution
functions for $\mu$ and $\nu$, respectively [15, 14]. If the target measure
$\nu=\mathsf{U}[0,1]$ is a uniform distribution on the line, then
$\mathbf{T}=\mathsf{F}_{\mu}$, which is similar to the rank function in $1$-d.
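To see this 1-d correspondence concretely, here is a tiny NumPy check (ours): the empirical CDF evaluated at the samples reproduces the usual normalized ranks.

```python
# Tiny illustration (ours): in 1-d, the empirical CDF evaluated at the samples
# equals the normalized ranks, matching T = F_mu when the target is U[0,1].
import numpy as np

x = np.array([0.3, -1.2, 2.5, 0.7])
ecdf_at_x = (x[None, :] <= x[:, None]).mean(axis=1)  # F_n(x_i) = #{j: x_j <= x_i}/n
ranks = np.argsort(np.argsort(x)) + 1                # ordinary ranks 1..n
print(ecdf_at_x, ranks / len(x))                     # identical for distinct samples
```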
Extending this insight to the multivariate case, the notion of rank has been
developed based on the following landmark result in OT theory.
###### Theorem 1 (McCann [16]).
Assume $\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})$ are absolutely continuous
measures. Then there exist transport maps $\mathbf{R}(\cdot)$ and
$\mathbf{Q}(\cdot)$ that are gradients of real-valued $d$-variate convex
functions such that $\mathbf{R}_{\#}\mu=\nu$ and $\mathbf{Q}_{\#}\nu=\mu$;
moreover, $\mathbf{R}$ and $\mathbf{Q}$ are unique and satisfy
$\mathbf{R}\circ\mathbf{Q}(\bm{x})=\bm{x}$ and
$\mathbf{Q}\circ\mathbf{R}(\bm{y})=\bm{y}$.
In particular, the fact that the gradients of convex functions are monotone
maps [16] has led the authors in [1, 17] to define $\mathbf{R}$ and
$\mathbf{Q}$ as the multivariate rank and quantile map respectively under
appropriate selection of the target measure $\nu$.
Specific to this work, the authors in [1] use the uniform measure on the unit
cube in $\mathbb{R}^{d}$ as the target measure and, developed the rank energy
statistic as a GoF measure between distributions, whose sample version is
stated below.
Sample Multivariate Rank Energy [1]: Given two sets of i.i.d. samples
$\{\bm{X}_{1},\dots,\bm{X}_{m}\}\sim\mu_{X}\in\mathcal{P}(\mathbb{R}^{d})$
and
$\{\bm{Y}_{1},\dots,\bm{Y}_{n}\}\sim\mu_{Y}\in\mathcal{P}(\mathbb{R}^{d})$,
let $\mu^{\bm{X}}_{m}=\frac{1}{m}\sum_{i=1}^{m}\delta_{\bm{X}_{i}}$ and
$\mu^{\bm{Y}}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\bm{Y}_{i}}$ denote the
empirical measures. A set of fixed Halton sequences [18], which mimic randomly
chosen vectors in the unit cube in $\mathbb{R}^{d}$, denoted
$\mathcal{H}_{m+n}^{d}:=\{\bm{h}_{1}^{d},\dots,\bm{h}^{d}_{m+n}\}\subset[0,1]^{d}$
with empirical measure
$\nu_{m,n}^{\mathbf{H}}=(m+n)^{-1}\sum_{i=1}^{m+n}\delta_{\bm{h}_{i}}$, is
taken as the set of target points.
A joint empirical map $\mathbf{\widehat{R}}_{m,n}$ is computed between the
joint empirical measure
$\mu_{m,n}^{\bm{X},\bm{Y}}:=(m+n)^{-1}(m\mu^{\bm{X}}_{m}+n\mu^{\bm{Y}}_{n})$
and $\nu_{m,n}^{\mathbf{H}}$ by solving the following discrete OT problem,
$\displaystyle\mathbf{\widehat{P}}=\arg\min_{\mathbf{P}\in\Pi}\sum_{i,j=1}^{m+n}\mathbf{C}_{i,j}\mathbf{P}_{i,j},$ (2)
where $\mathbf{C}_{i,j}$ is the squared Euclidean distance, and
$\Pi=\{\mathbf{P}:\mathbf{P}\bm{1}=\frac{1}{m+n}\bm{1},\;\bm{1}^{\top}\mathbf{P}=\frac{1}{m+n}\bm{1}^{\top}\}$.
The above formulation is also known as the Kantorovich relaxation [14]. Now,
for any $\bm{X}_{i}$, one obtains a map as
$\widehat{\mathbf{R}}_{m,n}(\bm{X}_{i})=\bm{h}_{\sigma(i)}$, where
$\sigma(i)$ is the non-zero index in the $i$-th row of $\widehat{\mathbf{P}}$.
Given the ranks corresponding to the $\bm{X}_{i}$’s and $\bm{Y}_{j}$’s, the
sample rank energy [1] is defined as,
$\displaystyle\mathsf{RE}:=\frac{2}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\|\widehat{\mathbf{R}}_{m,n}(\bm{X}_{i})-\widehat{\mathbf{R}}_{m,n}(\bm{Y}_{j})\|-\frac{1}{m^{2}}\sum_{i,j=1}^{m}\|\widehat{\mathbf{R}}_{m,n}(\bm{X}_{i})-\widehat{\mathbf{R}}_{m,n}(\bm{X}_{j})\|-\frac{1}{n^{2}}\sum_{i,j=1}^{n}\|\widehat{\mathbf{R}}_{m,n}(\bm{Y}_{i})-\widehat{\mathbf{R}}_{m,n}(\bm{Y}_{j})\|.$ (3)
The null hypothesis $H_{0}$ is rejected if $mn(m+n)^{-1}\mathsf{RE}$ is
greater than the threshold and accepted otherwise. As shown in [1],
$\mathsf{RE}$ is distribution-free under the null for fixed sample size, a
property that is desirable for selecting an optimal universal threshold to
reject the null hypothesis, $H_{0}:\mu_{\bm{X}}=\mu_{\bm{Y}}$.
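Here is a minimal sketch (ours) of the sample $\mathsf{RE}$: because the pooled empirical measure and the Halton measure each put uniform mass on $m+n$ points, the discrete OT problem (2) reduces to a linear assignment, solved here with SciPy; the unscrambled Halton points mirror the fixed sequence described above.

```python
# Minimal sketch of the sample rank energy (Equation 3). The uniform-weight
# OT problem (2) between m+n pooled samples and m+n Halton points is an
# assignment problem, solved exactly by linear_sum_assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
from scipy.stats import qmc

def rank_energy(X, Y):
    m, n = len(X), len(Y)
    H = qmc.Halton(d=X.shape[1], scramble=False).random(m + n)  # fixed targets
    Z = np.vstack([X, Y])
    _, cols = linear_sum_assignment(cdist(Z, H, "sqeuclidean"))
    RX, RY = H[cols[:m]], H[cols[m:]]                 # empirical rank maps
    return (2.0 / (m * n) * cdist(RX, RY).sum()
            - cdist(RX, RX).sum() / m**2
            - cdist(RY, RY).sum() / n**2)
```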
We now note several shortcomings of directly using $\mathsf{RE}$ for CPD.
* •
Sensitivity: As shown in Figure 1, $\mathsf{RE}$ is particularly sensitive to
small shifts in the mean and changes in the covariance. This characteristic
may be useful in applications where it is
required to detect any tiny changes. However, in many real-world datasets,
these tiny changes may not be labeled as the true change points. Hence using
$\mathsf{RE}$ in CPD may lead to the detection of many false positives and
deteriorate the overall performance.
* •
Sample Complexity: Curse of dimensionality - in the general case, the
estimation error of the sample rank map scales as $O(n^{-1/d})$ with dimension
$d$ [19], so reliable estimation requires many samples.
* •
Computational complexity: Being a linear program, the computational complexity
of the OT plan for sample $\mathsf{RE}$ scales as $\mathcal{O}(n^{3}\log n)$,
for a given sample size $n$, which is expensive.
To alleviate these issues, in the next section, we introduce the sample soft-
Rank Energy that leverages the properties of entropy regularized optimal
transport [14].
Fig. 1: $\mathsf{RE}$ and $\mathsf{sRE}$ with $\varepsilon=\{0.001,1,5\}$
statistics (right axis) produced on a toy dataset using a sliding-window CPD
approach with window size $n=250$. The dataset consists of 5 different
$3$-dimensional Gaussian distributions
$\mathcal{N}(\bm{\mu}_{3},\sigma\mathbf{I}_{3})$, with zero baselines on both
ends; $\mathbf{I}_{3}$ denotes the identity matrix. $\mathsf{RE}$
($\varepsilon=0$) and $\mathsf{sRE}$ ($\varepsilon=0.001$) detect the tiny
changes between the baseline and $\mathcal{N}(0_{3},0.001\mathbf{I}_{3})$ on
both sides, whereas $\mathsf{sRE}$ with $\varepsilon=\{1,5\}$ does not label
these points as change points.
## 4 Proposed Sample Multivariate Soft Rank Energy
Consider two sets of i.i.d. samples $\{\bm{X}_{1},\dots,\bm{X}_{m}\}\sim\mu_{\bm{X}}\in\mathcal{P}(\mathbb{R}^{d})$ and $\{\bm{Y}_{1},\dots,\bm{Y}_{n}\}\sim\mu_{\bm{Y}}\in\mathcal{P}(\mathbb{R}^{d})$. To compute the soft ranks, an entropy-regularized OT problem with regularizer $\varepsilon$ is solved via the Sinkhorn algorithm [14] between the joint empirical source measure $\mu_{m,n}^{\bm{X},\bm{Y}}$ and the reference measure $\nu_{m,n}^{\mathbf{H}}$,
$\displaystyle\mathbf{\widehat{P}}^{\varepsilon}=\arg\min_{\mathbf{P}\in\Pi}\sum_{i,j=1}^{m+n}\mathbf{C}_{i,j}\mathbf{P}_{i,j}-\varepsilon H(\mathbf{P}),$ (4)
where $\mathbf{C}_{i,j}$ is the squared Euclidean distance, $\varepsilon>0$, $\Pi=\{\mathbf{P}:\mathbf{P}\bm{1}=\frac{1}{m+n}\bm{1},\ \bm{1}^{\top}\mathbf{P}=\frac{1}{m+n}\bm{1}^{\top}\}$, and $H(\mathbf{P})=-\sum_{i,j}\mathbf{P}_{i,j}\log\mathbf{P}_{i,j}$ is the entropy functional. $\mathbf{\widehat{P}}^{\varepsilon}$ is a diffused optimal plan, where the degree of diffusion increases with $\varepsilon$.
Soft ranks are then obtained as follows. We compute a _row-normalized_ plan $\mathbf{\bar{P}}^{\varepsilon}$ via $\mathbf{\bar{P}}_{i,j}^{\varepsilon}=\mathbf{\widehat{P}}_{i,j}^{\varepsilon}/\sum_{j^{\prime}=1}^{m+n}\mathbf{\widehat{P}}_{i,j^{\prime}}^{\varepsilon}$. Now, for any $\bm{X}_{i}$, one obtains the soft rank via
$\displaystyle\widehat{\mathbf{R}}^{s,\varepsilon}(\bm{X}_{i})=\sum_{j=1}^{m+n}\mathbf{\bar{P}}^{\varepsilon}_{i,j}\bm{h}_{j}=\mathbb{E}_{\mathbf{\widehat{P}}^{\varepsilon}}[\bm{h}_{j}|\bm{X}_{i}].$ (5)
In other words, the soft ranks are the conditional expectations of the Halton points under the joint distribution $\mathbf{\widehat{P}}^{\varepsilon}$ given the source samples.
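A minimal sketch of the soft-rank computation, again assuming the POT library's `ot.sinkhorn` and SciPy's Halton generator (implementation choices, not part of the paper):

```python
import numpy as np
import ot
from scipy.stats import qmc

def soft_ranks(X, Y, eps=1.0):
    """Soft rank map (5): row-normalize the entropic plan (4) and take the
    conditional expectation of the Halton target points."""
    Z = np.vstack([X, Y])
    N, d = Z.shape
    H = qmc.Halton(d=d, scramble=False).random(N)
    C = ot.dist(Z, H, metric="sqeuclidean")
    w = np.full(N, 1.0 / N)
    P = ot.sinkhorn(w, w, C, reg=eps)            # diffused plan, Eq. (4)
    P_bar = P / P.sum(axis=1, keepdims=True)     # row-normalized plan
    R = P_bar @ H                                # soft ranks, Eq. (5)
    return R[:len(X)], R[len(X):]
```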
Given the soft ranks corresponding to $\bm{X}_{i}$’s and $\bm{Y}_{j}$’s,
sample soft rank energy is defined using the same formulation as in Equation
(3):
$\displaystyle\mathsf{sRE}:=\frac{2}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\|\widehat{\mathbf{R}}^{s,\varepsilon}(\bm{X}_{i})-\widehat{\mathbf{R}}^{s,\varepsilon}(\bm{Y}_{j})\|-\frac{1}{m^{2}}\sum_{i,j=1}^{m}\|\widehat{\mathbf{R}}^{s,\varepsilon}(\bm{X}_{i})-\widehat{\mathbf{R}}^{s,\varepsilon}(\bm{X}_{j})\|-\frac{1}{n^{2}}\sum_{i,j=1}^{n}\|\widehat{\mathbf{R}}^{s,\varepsilon}(\bm{Y}_{i})-\widehat{\mathbf{R}}^{s,\varepsilon}(\bm{Y}_{j})\|.$ (6)
The null hypothesis $H_{0}$ is rejected if $mn(m+n)^{-1}\mathsf{sRE}$ exceeds the threshold and accepted otherwise. We note the following result relating $\mathsf{sRE}$ and $\mathsf{RE}$.
###### Proposition 1.
The soft rank energy $\mathsf{sRE}$ converges to the rank energy $\mathsf{RE}$ as $\varepsilon\rightarrow 0$.
###### Proof.
The unique minimizer $\mathbf{\widehat{P}}^{\varepsilon}$ of Equation (4) converges, as $\varepsilon\rightarrow 0$, to the optimal solution $\mathbf{\widehat{P}}$ of Equation (2) with cost function $\mathbf{C}_{i,j}=\|\bm{X}_{i}-\bm{h}_{j}\|^{2}$ [20]:
$\displaystyle\mathbf{\widehat{P}}^{\varepsilon}\rightharpoonup\mathbf{\widehat{P}},$ (7)
where $\rightharpoonup$ denotes convergence w.r.t. the weak topology. Let $\bar{\mathbf{P}}$ denote the row-normalized $\mathbf{\widehat{P}}$. Equation (7) then implies $\mathbf{\bar{P}}^{\varepsilon}\rightharpoonup\mathbf{\bar{P}}$ and $\widehat{\mathbf{R}}^{s,\varepsilon}(\bm{X}_{i})\rightarrow\widehat{\mathbf{R}}_{m,n}(\bm{X}_{i})$ as $\varepsilon\rightarrow 0$. Therefore, $\lim_{\varepsilon\rightarrow 0}\mathsf{sRE}=\mathsf{RE}$.
∎
We note the following properties of $\mathsf{sRE}$ that help alleviate the
issues in directly using $\mathsf{RE}$ for CPD.
* •
Proposition 1 implies that $\mathsf{sRE}$ is nearly distribution-free for small enough $\varepsilon$.
* •
Sensitivity: As shown in Figure 1, $\mathsf{sRE}$ remains sensitive to small changes for $\varepsilon=0.001$. For $\varepsilon=1$, the sensitivity decreases, yet $\mathsf{sRE}$ still generates visible peaks at all the change points except the transition between the baseline and the Gaussian distribution with tiny covariance. With $\varepsilon=5$, $\mathsf{sRE}$ shows the least sensitivity, with barely visible peaks at the change points. The entropic regularization parameter thus controls the degree of sensitivity and can be adjusted to limit false alarms. A good choice for CPD is an $\varepsilon$ for which $\mathsf{sRE}$ is neither too sensitive nor totally unresponsive to changes.
* •
Sample complexity: Under mild assumptions, namely sub-Gaussianity of the measures, the estimation of entropic optimal transport does not suffer from the curse of dimensionality for a sufficiently large $\varepsilon$ [19].
* •
Computational complexity: For a sample size $n$, the computational complexity
of entropic optimal transport is
$\mathcal{O}(\varepsilon^{-2}n^{2}\log\,n\|\mathbf{C}\|_{\infty}^{2})$ [14].
The smaller the $\varepsilon$, the costlier the computation.
Method | CP-AUC | | | CP-F1 | |
---|---|---|---|---|---|---
| HASC-PAC2016 | HASC2011 | Beedance | HASC-PAC2016 | HASC2011 | Beedance
W2T (Rank-Quantile), $d=1$ [12] | 0.689 | 0.576 | 0.721 | 0.748 | 0.824 | 0.742
M-stat (IPM-based), $d\geq 1$ [9] | 0.658 | 0.585 | 0.727 | 0.713 | 0.770 | 0.725
RE (Rank-Rank), $d\geq 1$ [1] | 0.718 | 0.529 | 0.694 | 0.631 | 0.643 | 0.672
sRE (soft Rank-Rank) [This paper], $d\geq 1$ | 0.747 | 0.670 | 0.739 | 0.785 | 0.796 | 0.745
Table 1: Comparison between the proposed method and the related state of the art.
## 5 Numerical Experiments & Simulations
Experimental Setup: We compare the performances of our methods with two other
existing algorithms, the univariate distribution-free Wasserstein two-sample
test (W2T) [12] based CPD and the multivariate M-Statistic (MStat) [9] based
CPD that uses Maximum-Mean Discrepancy (MMD) [8] for measuring GoF. The
Gaussian kernel with unit variance is used to compute MStat.
For a fair comparison, we apply the optimal matched filter proposed in [12] on
W2T and MStat that improves the performances of these methods significantly.
Note that no smoothing filter was applied to the $\mathsf{RE}$ and $\mathsf{sRE}$ statistics.
The hyperparameters we use in the CPD algorithm are the entropic regularizer
$\varepsilon$, and the detection threshold $\eta$ to compute the F1 score. To
compare the methods on an equal footing, we use the same window size $n$ and
detection range $\delta$ for all the methods. The optimal $\eta$,
$\varepsilon$ selected for the proposed methods, window size $n$, and the
detection range $\delta$ along with the specifications of the used datasets
can be found in Table 2. It is worthwhile to note that, since the Beedance
dataset has a comparatively shorter sequence, we padded it with a zero
sequence of length $n$ on both ends.
| HASC2011[21] | HASC-PAC2016[21] | Beedance[22]
---|---|---|---
domain | $\mathbb{R}^{3}$ | $\mathbb{R}^{3}$ | $\mathbb{R}^{3}$
$\#$ subjects | 2 | 10 | 6
$\#$ actions | 6 | 6 | 3
$\#$ CP | 65 | 13 | 19
$n$ | 500 | 500 | 50
$\varepsilon$ | 2 | 1 | 1
$\delta$ | 250 | 250 | 10
$\eta$ | 0.52 | 0.52 | 0.25
Table 2: $\#$ denotes a total number and CP stands for change points.
Result on real data: Table 1 compares the proposed methods with MStat [9] and W2T [12]. The proposed methods yield robust CPD results under the AUC metric. $\mathsf{RE}$ attains a higher AUC than W2T and MStat on the HASC-PAC2016 dataset but fails to outperform them on the HASC2011 and Beedance datasets: its high sensitivity to tiny changes produces many false alarms, which explains the lower AUC on these datasets. On the other hand, $\mathsf{sRE}$ outperforms all the methods on all three datasets in terms of the AUC score. Notably, we observe a significant improvement in the AUC of W2T and MStat on the Beedance dataset after the inclusion of zero-padding on both ends.
Although $\mathsf{RE}$ shows comparable results under the AUC metric, we observe lower F1-scores compared to W2T and MStat on all three datasets. Since we did not apply any filter to smooth the $\mathsf{RE}$ statistics, several spurious maxima exist outside the detection range $\delta$ on both sides of the peaks. Moreover, $\mathsf{RE}$ also produces many false alarms because of its high sensitivity to small changes. As a result, $\mathsf{RE}$ achieves slightly lower F1-scores on all three datasets. On the other hand, $\mathsf{sRE}$ achieves higher or comparable F1-scores on all three datasets: on HASC-PAC2016 and Beedance it achieves the highest F1-score, while W2T-based CPD achieves the maximum F1-score on HASC2011, where $\mathsf{sRE}$ performs comparably.
We also compare $\mathsf{sRE}$ to KL-CPD [23], a kernel-based semi-supervised CPD method trained with a deep generative model. KL-CPD achieves AUCs of 0.677 and 0.649 on the Beedance and HASC2011 datasets, respectively, which is clearly lower than the AUC scores obtained by the proposed $\mathsf{sRE}$.
## 6 Conclusion and Future Work
In this paper, we employ recently developed multivariate GoF statistics to detect change points in an unsupervised, offline manner. We also propose a new statistic that depends on a regularization parameter controlling the degree of sensitivity. With an appropriate regularizer, we have shown that our proposed statistic lowers the false positive rate and hence outperforms the state of the art in CPD under the AUC and F1-score metrics. Future work will investigate theoretical properties of $\mathsf{sRE}$ and explain the smoothing effect in CPD as a function of the entropic regularization.
## 7 Acknowledgement
This research was sponsored by the U.S. Army DEVCOM Soldier Center, and was
accomplished under Cooperative Agreement Number W911QY-19-2-0003. The views
and conclusions contained in this document are those of the authors and should
not be interpreted as representing the official policies, either expressed or
implied, of the U.S. Army DEVCOM Soldier Center, or the U.S. Government. The
U. S. Government is authorized to reproduce and distribute reprints for
Government purposes notwithstanding any copyright notation hereon.
We also acknowledge support from the U.S. National Science Foundation under
award HDR-1934553 for the Tufts T-TRIPODS Institute. Shuchin Aeron is also
supported in part by NSF CCF:1553075, NSF RAISE 1931978, NSF ERC planning
1937057, and AFOSR FA9550-18-1-0465.
## References
* [1] Nabarun Deb and Bodhisattva Sen, “Multivariate rank-based distribution-free nonparametric testing using measure transportation,” Journal of the American Statistical Association, pp. 1–45, 2021.
* [2] Jin-Peng Qi, Qing Zhang, Ying Zhu, and Jie Qi, “A novel method for fast change-point detection on simulated time series and electrocardiogram data,” PLoS ONE, vol. 9, no. 4, pp. e93365, 2014.
* [3] Carlos M Carvalho and Hedibert F Lopes, “Simulation-based sequential analysis of Markov switching stochastic volatility models,” Computational Statistics & Data Analysis, vol. 51, no. 9, pp. 4526–4542, 2007.
* [4] Ting He, Shai Ben-David, and Lang Tong, “Nonparametric change detection and estimation in large-scale sensor networks,” IEEE Transactions on Signal Processing, vol. 54, no. 4, pp. 1204–1217, 2006.
* [5] Faicel Chamroukhi, Samer Mohammed, Dorra Trabelsi, Latifa Oukhellou, and Yacine Amirat, “Joint segmentation of multivariate time series with hidden process regression for human activity recognition,” Neurocomputing, vol. 120, pp. 633–644, 2013.
* [6] Wei-Han Lee, Jorge Ortiz, Bongjun Ko, and Ruby Lee, “Time series segmentation through automatic feature learning,” arXiv preprint arXiv:1801.05394, 2018.
* [7] Takafumi Kanamori, Shohei Hido, and Masashi Sugiyama, “A least-squares approach to direct importance estimation,” The Journal of Machine Learning Research, vol. 10, pp. 1391–1445, 2009.
* [8] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola, “A kernel two-sample test,” The Journal of Machine Learning Research, vol. 13, no. 1, pp. 723–773, 2012.
* [9] Shuang Li, Yao Xie, Hanjun Dai, and Le Song, “Scan $B$-statistic for kernel change-point detection,” arXiv preprint arXiv:1507.01279, 2015.
* [10] Song Liu, Makoto Yamada, Nigel Collier, and Masashi Sugiyama, “Change-point detection in time-series data by relative density-ratio estimation,” Neural Networks, vol. 43, pp. 72–83, 2013.
* [11] Douglas M Hawkins and Qiqi Deng, “A nonparametric change-point control chart,” Journal of Quality Technology, vol. 42, no. 2, pp. 165–173, 2010.
* [12] Kevin C Cheng, Shuchin Aeron, Michael C Hughes, Erika Hussey, and Eric L Miller, “Optimal transport based change point detection and time series segment clustering,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 6034–6038.
* [13] Aaditya Ramdas, Nicolás García Trillos, and Marco Cuturi, “On Wasserstein two-sample testing and related families of nonparametric tests,” Entropy, vol. 19, no. 2, pp. 47, 2017.
* [14] Gabriel Peyré, Marco Cuturi, et al., “Computational optimal transport: With applications to data science,” Foundations and Trends® in Machine Learning, vol. 11, no. 5-6, pp. 355–607, 2019.
* [15] Filippo Santambrogio, “Optimal transport for applied mathematicians,” Birkhäuser, NY, vol. 55, no. 58-63, pp. 94, 2015.
* [16] Robert J McCann et al., “Existence and uniqueness of monotone measure-preserving maps,” Duke Mathematical Journal, vol. 80, no. 2, pp. 309–324, 1995.
* [17] Marc Hallin et al., “On distribution and quantile functions, ranks and signs in $\mathbb{R}^{d}$,” ECARES Working Papers, 2017.
* [18] Hongmei Chi, Michael Mascagni, and Tony Warnock, “On the optimal Halton sequence,” Mathematics and Computers in Simulation, vol. 70, no. 1, pp. 9–21, 2005.
* [19] Aude Genevay, Lénaic Chizat, Francis Bach, Marco Cuturi, and Gabriel Peyré, “Sample complexity of sinkhorn divergences,” in The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 1574–1583.
* [20] Guillaume Carlier, Vincent Duval, Gabriel Peyré, and Bernhard Schmitzer, “Convergence of entropic schemes for optimal transport and gradient flows,” SIAM Journal on Mathematical Analysis, vol. 49, no. 2, pp. 1385–1418, 2017.
* [21] Haruyuki Ichino, Katsuhiko Kaji, Ken Sakurada, Kei Hiroi, and Nobuo Kawaguchi, “HASC-PAC2016: Large scale human pedestrian activity corpus and its baseline recognition,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, 2016, pp. 705–714.
* [22] Sang Min Oh, James M Rehg, Tucker Balch, and Frank Dellaert, “Learning and inferring motion patterns using parametric segmental switching linear dynamic systems,” International Journal of Computer Vision, vol. 77, no. 1, pp. 103–124, 2008.
* [23] Wei-Cheng Chang, Chun-Liang Li, Yiming Yang, and Barnabás Póczos, “Kernel change-point detection with auxiliary deep generative models,” arXiv preprint arXiv:1901.06077, 2019.
# Improving Zero-Shot Multilingual Translation with
Universal Representations and Cross-Mappings
Shuhao Gu1,2, Yang Feng1,2
1 Key Laboratory of Intelligent Information Processing,
Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
2 University of Chinese Academy of Sciences
<EMAIL_ADDRESS>∗Corresponding author: Yang Feng.
Reproducible code: https://github.com/ictnlp/Zero-MNMT.
###### Abstract
Many-to-many multilingual neural machine translation models can translate between language pairs unseen during training, i.e., perform zero-shot translation. Improving zero-shot translation requires the model to learn universal representations and cross-mapping relationships in order to transfer the knowledge learned on the supervised directions to the zero-shot directions. In this work, we propose the state mover’s distance, based on optimal transport theory, to model the difference between the representations output by the encoder. Then, we bridge the gap between
the semantic-equivalent representations of different languages at the token
level by minimizing the proposed distance to learn universal representations.
Besides, we propose an agreement-based training scheme, which can help the
model make consistent predictions based on the semantic-equivalent sentences
to learn universal cross-mapping relationships for all translation directions.
The experimental results on diverse multilingual datasets show that our method consistently improves over the baseline system and other contrast methods. The analysis proves that our method can better align the semantic space and improve the prediction consistency.
## 1 Introduction
The many-to-many multilingual neural machine translation (NMT) Ha et al.
(2016); Firat et al. (2016); Johnson et al. (2017); Gu et al. (2018); Fan et
al. (2020); Zhang et al. (2020a) model can support multiple translation
directions in a single model. The shared encoder encodes the input sentence to
the semantic space, and then the shared decoder decodes from the space to
generate the translation of the target language. This paradigm allows the
model to translate between language pairs unseen during training, i.e., zero-
shot translation. Zero-shot translation can improve the inference efficiency
and make the model require less bilingual training data. Performing zero-shot
translation requires universal representations to encode the language-agnostic
features and cross-mapping relationships that can map the semantic-equivalent
sentences of different languages to the particular space of the target
language. In this way, the model can transfer the knowledge learned in the
supervised translation directions to the zero-shot translation directions.
However, the existing model structure and training scheme cannot ensure universal representations and cross-mappings, for lack of explicit constraints. Specifically, the encoder may map different languages to
different semantic subspaces, and the decoder may learn different mapping
relationships for different source languages, especially when the model
possesses high capacity.
Many researchers have attempted to solve this problem. Pham et al.
(2019) propose to compress the output of the encoder into a consistent number
of states to only encode the language-independent features. Arivazhagan et al.
(2019) add a regularizing loss to maximize the similarities between the
sentence representations of the source and target sentences. Pan et al. (2021)
propose contrastive learning schemes to minimize the sentence representation
gap of similar sentences and maximize that of irrelevant sentences. All the
above work tries to minimize the representation discrepancies of different languages at the sentence level, which brings two problems for NMT. Firstly, these works usually obtain the sentence-level representation of the encoder output by max-pooling or averaging, which may ignore the sentence length, the word alignment relationship, and other token-level information. Secondly, regularizing the sentence representation mismatches the working paradigm of the NMT model, because the decoder directly performs cross-attention on the whole state sequence rather than on the sentence representation. Besides, all the above work focuses on the encoder side and cannot help the decoder learn a universal mapping relationship.
Given the above, we propose a method to learn the universal representations
and cross-mappings to improve the zero-shot translation performance. Based on
the optimal transport theory, we propose state mover’s distance (SMD) to model
the differences of two state sequences at the token level. To map the
semantic-equivalent sentences from different languages to the same place of
the semantic space, we add an auxiliary loss to minimize the SMD of the source
and target sentences. Besides, we propose an agreement-based training scheme
to learn universal mapping relationships for the translation directions with
the same target language. We mix up the source and target sentences to obtain a pseudo sentence. Then, the decoder makes predictions separately conditioned on this pseudo sentence and the corresponding source or target sentence. We try to improve the prediction consistency by minimizing the KL divergence of the two output distributions. The experimental results on diverse multilingual datasets show that our method brings 2–3 BLEU improvements over the strong baseline system and consistently outperforms other contrast methods. The
analysis proves that our method can better align the semantic space and
improve the prediction consistency.
## 2 Background
In this section, we will give a brief introduction to the Transformer Vaswani
et al. (2017) model and the many-to-many multilingual translation.
### 2.1 The transformer
We denote the input sequence of symbols as $\mathbf{x}=(x_{1},\ldots,x_{n_{x}})$ and the ground-truth sequence as $\mathbf{y}=(y_{1},\ldots,y_{n_{y}})$. The
transformer model is based on the encoder-decoder architecture. The encoder is
composed of $\mathnormal{N}$ identical layers. Each layer has two sublayers.
The first is a multi-head self-attention sublayer, and the second is a fully
connected feed-forward network. Both of the sublayers are followed by a
residual connection operation and a layer normalization operation. The input
sequence $\mathbf{x}$ will be first converted to a sequence of vectors. Then,
this sequence of vectors will be fed into the encoder, and the output of the
$\mathnormal{N}$-th layer will be taken as source state sequences. We denote
it as $\mathbf{H}_{\mathbf{x}}$. The decoder is also composed of
$\mathnormal{N}$ identical layers. In addition to the same kind of two
sublayers in each encoder layer, the cross-attention sublayer is inserted
between them, which performs multi-head attention over the output of the
encoder. We can get the predicted probability of the $k$-th target word conditioned on the source sentence and the $k-1$ previous target words. The
model is optimized by minimizing a cross-entropy loss of the ground-truth
sequence with teacher forcing:
$\mathcal{L}_{CE}=-\frac{1}{n_{y}}\sum_{k=1}^{n_{y}}\log
p(y_{k}|\mathbf{y}_{<k},\mathbf{x};\theta),$ (1)
where $n_{y}$ is the length of the target sentence and $\theta$ denotes the
model parameters.
### 2.2 Multilingual Translation
We define $L=\{l_{1},\ldots,l_{M}\}$ as the collection of $M$ languages involved in the training phase. Following Johnson et al. (2017), we
share all the model parameters for all the languages. Following Liu et al.
(2020), we add a particular language id token at the beginning of the source
and target sentences, respectively, to indicate the language.
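As a concrete illustration of this convention (the tag format below is hypothetical; only the idea of prepending language id tokens follows Liu et al. (2020)):

```python
def add_language_tags(src_tokens, tgt_tokens, src_lang, tgt_lang):
    """Prepend language id tokens to both sides, e.g. a De->It pair becomes
    ['__de__', ...] and ['__it__', ...] (tag format is hypothetical)."""
    return ([f"__{src_lang}__"] + src_tokens,
            [f"__{tgt_lang}__"] + tgt_tokens)
```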
Figure 1: The training scheme of our method. $\mathbf{x}$ and $\mathbf{y}$ denote a pair of translations; $\mathbf{H_{x}}$ and $\mathbf{H_{y}}$ denote the corresponding state sequences. $\mathbf{z}$ is the pseudo sentence obtained by mixing up $\mathbf{x}$ and $\mathbf{y}$. ’Dec’ denotes the decoder; there is only one decoder in the model. ’stop-grad’ denotes the stop-gradient operation during back-propagation. $\mathcal{L}_{CE}$, $\mathcal{L}_{OT}$, and $\mathcal{L}_{AT}$ denote the cross-entropy loss, the optimal transport loss, and the agreement-based training loss.
## 3 Method
The main idea of our method is to help the encoder output universal
representations for all the languages and help the decoder map the semantic-
equivalent representation from different languages to the target language’s
space. We propose two approaches to fulfill this goal. The first is to
directly bridge the gap between the state sequences that carry the same
semantics. The second is to force the decoder to make consistent predictions
based on the semantic-equivalent sentences. Figure 1 shows the overall
training scheme.
### 3.1 Optimal Transport
Earth Mover’s Distance Based on optimal transport theory Villani (2009); Peyré et al. (2019), the earth mover’s distance (EMD) measures the minimum cost of transporting the probability mass of one distribution to another. Assume two probability distributions $\mu$ and $\mu^{\prime}$ defined as:
$\begin{split}&\mu=\{(\mathbf{w}_{i},m_{i})\}_{i=1}^{n},\quad s.t.\;\sum_{i}m_{i}=1;\\ &\mu^{\prime}=\{(\mathbf{w}^{\prime}_{j},m^{\prime}_{j})\}_{j=1}^{n^{\prime}},\quad s.t.\;\sum_{j}m^{\prime}_{j}=1,\end{split}$ (2)
where each data point $\mathbf{w}_{i}\in\mathbb{R}^{d}$ has a probability mass $m_{i}>0$, and $\mu$ contains $n$ data points. We define a cost function $c(\mathbf{w}_{i},\mathbf{w}^{\prime}_{j})$ that determines the per-unit transport cost between two points $\mathbf{w}_{i}$ and $\mathbf{w}^{\prime}_{j}$.
Given the above, the EMD is defined as:
$\begin{split}\mathcal{D}(\mu,\mu^{\prime})&=\min_{\mathbf{T}\geq 0}\sum_{i,j}\mathbf{T}_{ij}c(\mathbf{w}_{i},\mathbf{w}^{\prime}_{j})\\ s.t.\quad&\sum_{j=1}^{n^{\prime}}\mathbf{T}_{ij}=m_{i},\;\forall i\in\{1,\ldots,n\},\\ &\sum_{i=1}^{n}\mathbf{T}_{ij}=m^{\prime}_{j},\;\forall j\in\{1,\ldots,n^{\prime}\},\end{split}$ (3)
where $\mathbf{T}_{ij}$ denotes the mass transported from $\mathbf{w}_{i}$ to $\mathbf{w}^{\prime}_{j}$.
State Mover’s Distance Following EMD, we define the state mover’s distance (SMD) to measure the minimum ’travel cost’ between two state sequences. Given a pair of translations $\mathbf{x}=(x_{1},\ldots,x_{n_{x}})$ and $\mathbf{y}=(y_{1},\ldots,y_{n_{y}})$, we obtain their corresponding state sequences after feeding them to the encoder, denoted as:
$\begin{split}&\mathbf{H}_{\mathbf{x}}=(\mathbf{h}_{1},\ldots,\mathbf{h}_{i},\ldots,\mathbf{h}_{n_{x}}),\\ &\mathbf{H}_{\mathbf{y}}=(\mathbf{h}^{\prime}_{1},\ldots,\mathbf{h}^{\prime}_{j},\ldots,\mathbf{h}^{\prime}_{n_{y}}),\end{split}$ (4)
where $n_{x}$ and $n_{y}$ denote the lengths of the source and target sentences. We can regard $\mathbf{H_{x}}$ as a discrete distribution on the space $\mathbb{R}^{d}$, with probability mass only at each specific point $\mathbf{h}_{i}$. Several previous studies Schakel and Wilson (2015); Yokoi et al. (2020) have confirmed that the embedding norm is related to word importance: important words have larger norms. We observe that the state vectors have a similar property: the state vectors of essential words, such as content words and medium-frequency words, have larger norms than those of unimportant ones, such as function words and high-frequency words. Therefore, we propose to use the normalized vector norm as the probability mass for each state point:
$m_{i}=\frac{|\mathbf{h}_{i}|}{\sum_{i}|\mathbf{h}_{i}|},\quad m^{\prime}_{j}=\frac{|\mathbf{h}^{\prime}_{j}|}{\sum_{j}|\mathbf{h}^{\prime}_{j}|},$ (5)
where $|\cdot|$ denotes the norm of a vector.
Given the above, we can convert the state sequences to distributions:
$\begin{split}&\mu_{\mathbf{x}}^{\mathbf{H}}=\Big\{\Big(\mathbf{h}_{i},\frac{|\mathbf{h}_{i}|}{\sum_{i}|\mathbf{h}_{i}|}\Big)\Big\}_{i=1}^{n_{x}},\\ &\mu_{\mathbf{y}}^{\mathbf{H}}=\Big\{\Big(\mathbf{h}^{\prime}_{j},\frac{|\mathbf{h}^{\prime}_{j}|}{\sum_{j}|\mathbf{h}^{\prime}_{j}|}\Big)\Big\}_{j=1}^{n_{y}}.\end{split}$ (6)
Then, the SMD is formally defined as follows:
$\begin{split}\mathcal{D}(\mu_{\mathbf{x}}^{\mathbf{H}},\mu_{\mathbf{y}}^{\mathbf{H}})&=\min_{\mathbf{T}\geq 0}\sum_{i,j}\mathbf{T}_{ij}c(\mathbf{h}_{i},\mathbf{h}^{\prime}_{j}),\\ s.t.\quad&\sum_{j=1}^{n_{y}}\mathbf{T}_{ij}=\frac{|\mathbf{h}_{i}|}{\sum_{i}|\mathbf{h}_{i}|},\;\forall i\in\{1,\ldots,n_{x}\},\\ &\sum_{i=1}^{n_{x}}\mathbf{T}_{ij}=\frac{|\mathbf{h}^{\prime}_{j}|}{\sum_{j}|\mathbf{h}^{\prime}_{j}|},\;\forall j\in\{1,\ldots,n_{y}\}.\end{split}$ (7)
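For illustration, a minimal sketch of evaluating (7) on two encoder state sequences, assuming PyTorch tensors (without gradient tracking) and the POT library's exact solver; the training objective actually uses the relaxation introduced below:

```python
import ot      # POT: Python Optimal Transport -- an assumed solver choice
import torch

def smd(Hx, Hy):
    """State mover's distance (7): masses are normalized state-vector norms,
    the cost is the Euclidean distance between states."""
    a = Hx.norm(dim=-1); a = (a / a.sum()).numpy()   # masses m_i
    b = Hy.norm(dim=-1); b = (b / b.sum()).numpy()   # masses m'_j
    C = torch.cdist(Hx, Hy, p=2)                     # costs c(h_i, h'_j)
    P = ot.emd(a, b, C.numpy())                      # optimal plan T
    return (torch.from_numpy(P).to(C.dtype) * C).sum()
```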
As illustrated before, we want the decoder to make consistent predictions conditioned on equivalent state sequences. Considering that both the vector norm and the direction affect the cross-attention results of the decoder, we use the Euclidean distance as the cost function. We did not use a cosine-similarity-based metric because it only considers the vector direction. The proposed SMD is a fully unsupervised method to align the contextual representations of two semantic-equivalent sentences.
Approximation of SMD The exact computation of SMD is a linear programming problem whose complexity typically grows faster than $O(n^{3})$, which would slow down training greatly. We can obtain a relaxed bound of SMD by removing one of the two constraints. Following Kusner et al. (2015), we remove the second constraint:
$\begin{split}\mathcal{D}^{*}(\mu_{\mathbf{x}}^{\mathbf{H}},\mu_{\mathbf{y}}^{\mathbf{H}})&=\min_{\mathbf{T}\geq 0}\sum_{i,j}\mathbf{T}_{ij}c(\mathbf{h}_{i},\mathbf{h}^{\prime}_{j}),\\ s.t.\quad&\sum_{j=1}^{n_{y}}\mathbf{T}_{ij}=\frac{|\mathbf{h}_{i}|}{\sum_{i}|\mathbf{h}_{i}|},\;\forall i\in\{1,\ldots,n_{x}\}.\end{split}$ (8)
This approximation yields a lower bound on the exact SMD: any plan satisfying both constraints remains feasible after dropping the second one, so the relaxed minimum cannot exceed the exact minimum. Under the relaxation, the optimal solution for each state vector $\mathbf{h}_{i}$ is to move all of its probability mass to the most similar state vector $\mathbf{h}^{\prime}_{j}$. The approximation therefore also enables many-to-one alignment relationships during training. We have also tried approximation algorithms that yield a more accurate estimate of SMD, e.g., the Sinkhorn algorithm Cuturi (2013) and IPOT Xie et al. (2020). However, we did not observe consistent improvements in our preliminary experiments, and these algorithms also slow down the training speed significantly.
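Under the relaxation (8), the minimization decouples per source state: each $\mathbf{h}_{i}$ sends all of its mass to its nearest $\mathbf{h}^{\prime}_{j}$, so the bound reduces to a differentiable nearest-neighbor expression. A sketch in PyTorch (an illustration, not the paper's code):

```python
import torch

def smd_relaxed(Hx, Hy):
    """Relaxed lower bound (8): every source state moves all of its mass to
    the closest target state (many-to-one alignment); fully differentiable."""
    a = Hx.norm(dim=-1)
    a = a / a.sum()                          # masses m_i
    C = torch.cdist(Hx, Hy, p=2)             # Euclidean costs
    return (a * C.min(dim=1).values).sum()   # sum_i m_i * min_j c(h_i, h'_j)
```

The symmetric loss (9) then amounts to `0.5 * (smd_relaxed(Hx, Hy) + smd_relaxed(Hy, Hx))`.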
Objective Function We define a symmetrical loss to minimize the SMD of both
sides:
$\mathcal{L}_{OT}=\frac{1}{2}\left(\mathcal{D}^{*}(\mu_{\mathbf{x}}^{\mathbf{H}},\mu_{\mathbf{y}}^{\mathbf{H}})+\mathcal{D}^{*}(\mu_{\mathbf{y}}^{\mathbf{H}},\mu_{\mathbf{x}}^{\mathbf{H}})\right).$
(9)
### 3.2 Agreement-based Training
Theoretical Analysis In zero-shot translation, the decoder should map the
semantic representations from different languages to the target language
space, even if it has never seen the translation directions during training.
This requires the model to make consistent predictions based on semantic-equivalent sentences, whatever the input language is. To improve the prediction consistency of the model, we propose an agreement-based training method. Because the source sentence $\mathbf{x}$ and the target sentence $\mathbf{y}$ are semantically equivalent, the probability of predicting any other sentence $\mathbf{z}$ based on either of them should, in theory, always be equal:
$p(\mathbf{z}|\mathbf{x})=p(\mathbf{z}|\mathbf{y}).$ (10)
Specifically, the predicted probabilities of the $k$-th target word conditioned on the first $k-1$ words of $\mathbf{z}$ and on the source or target sentence are equal:
$p(z_{k}|\mathbf{z}_{<k},\mathbf{x};\theta)=p(z_{k}|\mathbf{z}_{<k},\mathbf{y};\theta),$ (11)
where $\theta$ denotes the model parameters. Optimizing Equation 11 can not
only help the encoder produce universal semantic representations but also help
the decoder map different source languages to the particular target language
space indicated by $\mathbf{z}$.
Mixup for $\mathbf{z}$ Although Equation 11 is theoretically attractive, the choice of the sentence $\mathbf{z}$ has a significant influence on the above optimization. If we use a random sentence as $\mathbf{z}$, unrelated to $\mathbf{x}$ and $\mathbf{y}$, the prediction makes no sense and the model learns nothing helpful. If we use either $\mathbf{x}$ or $\mathbf{y}$ directly, information leaks into one side of Equation 11; as a result, the prediction difficulty of the two sides differs significantly, and it is hard for one side to catch up with the other. Given the above, we need an intermediate sentence that lies "between" $\mathbf{x}$ and $\mathbf{y}$. Inspired by the success of the mixup technique in NLP Zhang et al. (2020b); Cheng et al. (2021), we generate a pseudo sentence by hard mixup of $\mathbf{x}$ and $\mathbf{y}$ at the token level. We truncate the longer of $\mathbf{x}$ and $\mathbf{y}$ to make them equal in length. Since the two sentences are translation pairs, their lengths are usually close, so truncating will not significantly shorten the longer sentence and will not encourage the decoder to learn shorter outputs. We denote the truncated sentences as $\mathbf{x}^{\prime}$ and $\mathbf{y}^{\prime}$, and their length as $n^{\prime}$. Then we can generate $\mathbf{z}$ as:
$\mathbf{z}=\mathbf{g}\odot\mathbf{x}^{\prime}+(1-\mathbf{g})\odot\mathbf{y}^{\prime},$
(12)
where $\mathbf{g}\in\{0,1\}^{n^{\prime}}$ and $\odot$ denotes the element-wise product. Each element of $\mathbf{g}$ is sampled from Bernoulli$(\lambda)$, where the parameter $\lambda$ is sampled from Beta$(\alpha,\beta)$, and $\alpha$ and $\beta$ are two hyperparameters. The language tag in $\mathbf{z}$, which determines the translation direction, comes from either $\mathbf{x}$ or $\mathbf{y}$.
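A minimal sketch of generating $\mathbf{z}$ from token-id sequences (the Beta/Bernoulli sampling follows Equation 12; the handling of the leading language tag is simplified):

```python
import numpy as np

def mixup_sentence(x_ids, y_ids, alpha=6.0, beta=3.0):
    """Generate the pseudo sentence z of Equation (12) by token-level hard
    mixup of two id sequences (numpy arrays)."""
    n = min(len(x_ids), len(y_ids))            # truncate to equal length
    lam = np.random.beta(alpha, beta)          # lambda ~ Beta(alpha, beta)
    g = np.random.binomial(1, lam, size=n)     # g_k ~ Bernoulli(lambda)
    z = np.where(g == 1, x_ids[:n], y_ids[:n])
    z[0] = x_ids[0] if np.random.rand() < 0.5 else y_ids[0]  # language tag from x or y
    return z
```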
Objective Function Similar to Equation 9, we define another symmetrical loss
based on the KL divergence of the model prediction distributions:
$\begin{split}\mathcal{L}_{AT}=\frac{1}{2n^{\prime}}\sum_{k=1}^{n^{\prime}}\Big(&KL\left(p(z_{k}|\mathbf{z}_{<k},\mathbf{H_{x}})||p(z_{k}|\mathbf{z}_{<k},\mathbf{H_{y}})\right)\\ +\,&KL\left(p(z_{k}|\mathbf{z}_{<k},\mathbf{H_{y}})||p(z_{k}|\mathbf{z}_{<k},\mathbf{H_{x}})\right)\Big).\end{split}$ (13)
We omit the model parameters for convenience.
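For illustration, the symmetric KL of (13) given the decoder logits for $\mathbf{z}$ under the two encodings; a PyTorch sketch, not the actual fairseq implementation:

```python
import torch.nn.functional as F

def agreement_loss(logits_x, logits_y):
    """Symmetric KL of Equation (13) between the two prediction distributions
    for z, conditioned on H_x and on H_y; logits have shape (n', vocab)."""
    log_px = F.log_softmax(logits_x, dim=-1)
    log_py = F.log_softmax(logits_y, dim=-1)
    kl_xy = F.kl_div(log_py, log_px, log_target=True, reduction="batchmean")  # KL(p_x || p_y)
    kl_yx = F.kl_div(log_px, log_py, log_target=True, reduction="batchmean")  # KL(p_y || p_x)
    return 0.5 * (kl_xy + kl_yx)
```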
### 3.3 The Final Loss
The final loss consists of three parts, the cross entropy loss (Equation 1),
the optimal transport loss based on SMD (Equation 9) and the KL divergence
loss for the agreement-based training (Equation 13):
$\mathcal{L}=\mathcal{L}_{CE}+\gamma_{1}|\mathbf{x}|\mathcal{L}_{OT}+\gamma_{2}\mathcal{L}_{AT}$
(14)
where $\gamma_{1}$ and $\gamma_{2}$ are two hyperparameters that control the contributions of the two regularization loss terms. Since $\mathcal{L}_{OT}$ is calculated at the sentence level and the other two losses are calculated at the token level, we multiply $\mathcal{L}_{OT}$ by the averaged sequence length $|\mathbf{x}|$. Among these three losses, the first term dominates the parameter updates of the model and mostly determines the model performance; the two regularization loss terms only slightly modify the directions of the gradients. Because the first loss term does not depend on $\mathbf{H_{y}}$, we apply the stop-gradient operation to $\mathbf{H_{y}}$ (Figure 1), which means that gradients do not pass through $\mathbf{H_{y}}$ to the encoder.
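Putting the pieces together, one training step under (14) might look as follows, composing the sketches above; `model.encode`, `model.decode`, `model.cross_entropy`, and the $\gamma$ values are hypothetical placeholders, not the paper's actual interfaces or tuned settings:

```python
def training_step(model, x, y, gamma1=0.5, gamma2=0.01):
    """One step of the proposed scheme (Figure 1); a sketch only, reusing
    the helpers sketched above. gamma1/gamma2 are illustrative values."""
    Hx = model.encode(x)
    Hy = model.encode(y).detach()                 # stop-gradient on H_y
    z = mixup_sentence(x, y)                      # pseudo sentence, Eq. (12)
    l_ce = model.cross_entropy(Hx, y)             # Eq. (1)
    l_ot = 0.5 * (smd_relaxed(Hx, Hy) + smd_relaxed(Hy, Hx))           # Eq. (9)
    l_at = agreement_loss(model.decode(z, Hx), model.decode(z, Hy))    # Eq. (13)
    return l_ce + gamma1 * len(x) * l_ot + gamma2 * l_at               # Eq. (14)
```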
Dataset | Language Pairs | Size
---|---|---
IWSLT | En$\leftrightarrow${De, It, Nl, Ro} | 1.79M
IWSLT-b | Nl$\leftrightarrow$De$\leftrightarrow$En$\leftrightarrow$It$\leftrightarrow$Ro | 1.79M
PC-6 | En$\leftrightarrow${Kk, Tr, Ro, Cs, Ru} | 7.9M
OPUS-7 | En$\leftrightarrow${De, Fr, Nl, Ru, Zh, Ar} | 11.6M
Table 1: The statistics of our datasets.
## 4 Experiments
### 4.1 Data Preparation
We conduct experiments on the following multilingual datasets: IWSLT17, PC-6,
and OPUS-7. The brief statistics of the training set are in Table 1. We put
more details in the appendix.
IWSLT17 Cettolo et al. (2017) We simulate two scenarios. The first (IWSLT) is
English-pivot, where we only retain the parallel sentences from/to English.
The second (IWSLT-b) has a chain of pivots, where two languages are connected
by a chain of pivot languages. Each translation direction has about 0.22M
sentence pairs. Both of the two scenarios have eight supervised translation
directions and twelve zero-shot translation directions. We use the official
validation and test sets.
PC-6 The PC-6 dataset is extracted from the PC-32 corpus Lin et al. (2020).
The data amount of different language pairs is unbalanced, ranging from 0.12M
to 1.84M. This dataset has ten supervised and twenty zero-shot translation
directions. We use the validation and test sets collected from WMT16–19 for the supervised directions. The zero-shot validation and test sets are extracted from WikiMatrix Schwenk et al. (2021), each containing about 1K–2K sentence pairs.
OPUS-7 The OPUS-7 dataset is extracted from the OPUS-100 corpus Zhang et al.
(2020a). The language pairs come from different language families and have
significant differences. This dataset has twelve supervised translation
directions and thirty zero-shot translation directions. We use the standard
validation and test sets released by Zhang et al. (2020a). We concatenate the
zero-shot test sets with the same target language for convenience.
We use the Stanford word segmenter Tseng et al. (2005); Monroe et al. (2014)
to segment Arabic and Chinese, and the Moses toolkit Koehn et al. (2007) to
tokenize other languages. Besides, 32K merge operations are performed to learn BPE Sennrich et al. (2016).
IWSLT | De-It | De-Nl | De-Ro | It-Ro | It-Nl | Nl-Ro | Zero Avg. | Sup. Avg.
---|---|---|---|---|---|---|---|---
Model | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$
ZS | 15.64 | 15.28 | 18.46 | 18.14 | 14.42 | 14.98 | 17.91 | 20.14 | 18.16 | 18.79 | 15.81 | 16.41 | 17.01 | 30.62
SRA | 16.44 | 16.45 | 18.44 | 19.15 | 15.07 | 15.83 | 18.52 | 21.52 | 19.3 | 19.1 | 16.83 | 17.66 | 17.85 | 30.41
SF | 16.34 | 15.77 | 18.37 | 18.16 | 14.74 | 15.25 | 18.54 | 21.64 | 18.6 | 19.18 | 16.09 | 16.94 | 17.46 | 30.5
CL | 17.37 | 16.58 | 19.69 | 19.5 | 15.51 | 16.25 | 18.91 | 22.58 | 18.78 | 20.02 | 17.27 | 17.91 | 18.36 | 30.39
DisPos | 16.62 | 15.64 | 19.64 | 18.78 | 15.07 | 15.96 | 18.67 | 21.56 | 19.01 | 20.15 | 16.46 | 18.18 | 17.97 | 30.49
DT | 16.82 | 15.81 | 18.74 | 18.64 | 15.12 | 16.32 | 18.70 | 22.13 | 18.92 | 19.29 | 16.21 | 18.22 | 17.91 | 30.51
TGP | 16.77 | 18.51 | 14.58 | 17.12 | 16.84 | 16.88 | 19.42 | 19.25 | 20.01 | 19.04 | 21.67 | 18.43 | 18.21 | 30.66
LMP | 16.87 | 18.44 | 15.05 | 16.66 | 16.20 | 16.12 | 19.04 | 19.05 | 19.35 | 18.68 | 22.17 | 17.97 | 17.96 | 30.52
PivT | 18.31 | 17.9 | 19.99 | 19.33 | 15.54 | 17.45 | 19.77 | 22.97 | 21.43 | 21.44 | 17.57 | 19.82 | 19.29 | -
ZS+OT | 17.35 | 17.08 | 19.77 | 19.05 | 15.66 | 16.17 | 19.71 | 22.32 | 20.18 | 20.57 | 16.87 | 18.09 | 18.56 | 30.42
ZS+AT | 16.37 | 15.84 | 19.11 | 18.41 | 14.85 | 15.59 | 18.37 | 21.09 | 18.77 | 19.4 | 15.86 | 17.46 | 17.59 | 30.55
Ours | 17.53 | 17.03 | 19.94 | 19.67 | 15.61 | 16.57 | 19.23 | 22.42 | 20.05 | 20.23 | 17.05 | 18.64 | 18.66 | 30.52
IWSLT-b | De-It | En-Nl | De-Ro | En-Ro | It-Nl | Nl-Ro | Zero Avg. | Sup. Avg.
---|---|---|---|---|---|---|---|---
Model | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$
ZS | 17.79 | 17.3 | 25.48 | 30.99 | 15.65 | 17.28 | 21.7 | 30.14 | 20.79 | 21.02 | 15.74 | 17.28 | 20.93 | 30.46
SRA | 18.09 | 18.05 | 26.52 | 31.15 | 15.8 | 17.43 | 22.24 | 30.19 | 20.35 | 20.65 | 16.39 | 17.83 | 21.22 | 30.29
SF | 18.25 | 17.61 | 26 | 31.28 | 16.06 | 17.51 | 22.43 | 30.51 | 20.67 | 20.82 | 16.2 | 17.24 | 21.21 | 30.35
CL | 18.49 | 18.29 | 26.88 | 31.46 | 15.71 | 17.23 | 23.01 | 30.78 | 20.62 | 20.8 | 16.58 | 18.17 | 21.5 | 30.28
DisPos | 17.98 | 17.35 | 26.26 | 31.13 | 15.75 | 18.07 | 22.95 | 30.45 | 21.02 | 20.58 | 16.38 | 18.28 | 21.35 | 29.89
TGP | 18.22 | 18.69 | 26.62 | 30.96 | 15.57 | 17.26 | 23.21 | 30.22 | 20.62 | 20.38 | 16.58 | 17.65 | 21.33 | 30.33
LMP | 18.36 | 18.83 | 27.2 | 30.5 | 16.05 | 17.05 | 23.99 | 29.38 | 20.57 | 19.83 | 16.72 | 17.56 | 21.33 | 30.37
PivT | 18.38 | 19.08 | 27.3 | 28.02 | 15 | 16.35 | 23.72 | 28.72 | 20.34 | 19.45 | 15.7 | 16.8 | 20.74 | -
ZS+OT | 18.09 | 18.06 | 26.6 | 31.69 | 15.76 | 17.19 | 23.46 | 30.99 | 20.31 | 20.86 | 16.92 | 18.05 | 21.49 | 30.37
ZS+AT | 18.23 | 17.51 | 26.24 | 31.12 | 16.19 | 17.5 | 22.64 | 30.33 | 20.72 | 20.59 | 16.29 | 17.64 | 21.25 | 30.39
Ours | 18.41 | 18.05 | 27.39 | 31.36 | 16.15 | 17.48 | 23.22 | 30.9 | 20.68 | 20.82 | 17.03 | 18.29 | 21.64 | 30.33
PC-6 | x$\rightarrow$Kk | x$\rightarrow$Tr | x$\rightarrow$Ro | x$\rightarrow$Cs | x$\rightarrow$Ru | Zero Avg. | Sup. Avg. | OPUS-7 | x$\rightarrow$De | x$\rightarrow$Fr | x$\rightarrow$Nl | x$\rightarrow$Ru | x$\rightarrow$Zh | x$\rightarrow$Ar | Zero Avg. | Sup. Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
ZS | 5.87 | 9.29 | 14.23 | 13.55 | 16.83 | 11.95 | 21.73 | ZS | 13.58 | 22.63 | 17.96 | 15.42 | 29.78 | 21.58 | 20.15 | 34.2
SRA | 5.90 | 10.09 | 17.36 | 15.85 | 19.31 | 13.68 | 21.66 | SRA | 17.04 | 26.12 | 19.29 | 20.9 | 31.99 | 22.01 | 22.89 | 33.97
SF | 4.76 | 9.95 | 17.77 | 15.83 | 20.10 | 13.68 | 21.64 | SF | 15.99 | 25.2 | 18.2 | 20.85 | 31.65 | 21.5 | 22.23 | 33.99
CL | 6.07 | 10.72 | 17.96 | 16.14 | 21.58 | 14.49 | 21.54 | CL | 17.41 | 26.19 | 19.66 | 21.1 | 32.52 | 21.69 | 23.09 | 33.86
DisPos | 6.60 | 10.14 | 15.47 | 15.89 | 18.70 | 12.51 | 21.45 | DisPos | 15.95 | 25.36 | 18.86 | 19.75 | 31.34 | 22.08 | 22.22 | 34.12
DT | 6.92 | 10.49 | 17.37 | 15.63 | 21.74 | 14.43 | 21.61 | DT | 14.97 | 23.95 | 18.10 | 18.91 | 29.65 | 20.68 | 21.04 | 34.03
TGP | 7.33 | 10.98 | 20.63 | 13.81 | 21.21 | 14.79 | 21.58 | TGP | 16.86 | 25.65 | 18.99 | 20.83 | 32.47 | 21.47 | 22.71 | 34.18
LMP | 4.45 | 8.50 | 16.42 | 15.25 | 19.28 | 12.78 | 21.71 | LMP | 14.65 | 23.94 | 18.36 | 19.02 | 30.58 | 20.99 | 21.26 | 34.07
PivT | 4.29 | 10.59 | 19.23 | 17.22 | 21.65 | 14.58 | - | PivT | 17.97 | 28.37 | 19.76 | 22.97 | 34.08 | 23.74 | 24.48 | -
ZS+OT | 6.22 | 11.08 | 18.74 | 16.86 | 22.61 | 15.1 | 21.6 | ZS+OT | 17.56 | 26.70 | 19.54 | 21.88 | 32.42 | 22.48 | 23.43 | 34.02
ZS+AT | 6.04 | 10.74 | 17.92 | 15.69 | 20.63 | 14.2 | 21.72 | ZS+AT | 16.78 | 25.89 | 18.93 | 21.21 | 32.02 | 21.72 | 22.75 | 34.1
Ours | 6.58 | 11.44 | 18.55 | 17.11 | 22.77 | 15.29 | 21.68 | Ours | 17.60 | 26.74 | 19.68 | 21.91 | 32.63 | 23.24 | 23.63 | 34.17
Table 2: The overall BLEU scores on the test sets. "Zero Avg." and "Sup. Avg."
denote the average BLEU scores on the zero-shot and supervised directions. The
"x" in the third table denotes all languages except for the target language.
The highest scores are marked in bold for all models except for the "PivT"
system in each column.
### 4.2 Systems
We use the open-source toolkit called Fairseq-py Ott et al. (2019) as our
Transformer system. We implement the following systems:
• Zero-Shot (ZS) The baseline system, which is trained only with the cross-entropy loss (Equation 1). The model is then tested directly on the zero-shot test sets.
• Pivot Translation (PivT) Cheng et al. (2017) The same translation model as
ZS. The model first translates the source language to the pivot language and
then generates the target language.
• Sentence Representation Alignment (SRA) Arivazhagan et al. (2019) This method adds a regularization loss to minimize the discrepancy between the source and target sentence representations:
$\mathcal{L}=\mathcal{L}_{CE}+\gamma Dis(Enc(s),Enc(t)),$ (15)
where ’Dis’ denotes the distance function and ’Enc($\cdot$)’ denotes the sentence representation. We use the averaged sentence representation and the Euclidean distance function because we find they work better. We vary the hyperparameter $\gamma$ from $0.1$ to $1$ to tune the performance.
• Softmax Forcing (SF) Pham et al. (2019) This method enables the decoder to generate the target sentence from itself by adding an extra loss:
$\mathcal{L}_{SF}=\gamma\sum_{k=1}^{n_{y}}KL(p(y_{k}|\mathbf{y}_{<k},\mathbf{x})||p(y_{k}|\mathbf{y}_{<k},\mathbf{y})).$ (16)
The $\gamma$ is tuned as in the ’SRA’ system.
• Contrastive Learning (CL) Pan et al. (2021) This method adds an extra
contrastive loss to minimize the representation gap of similar sentences and
maximize that of irrelevant sentences:
$\mathcal{L}_{CL}=-\gamma\log\frac{e^{sim^{+}(\mathcal{R}(s),\mathcal{R}(t))/\tau}}{\sum_{w}e^{sim^{-}(\mathcal{R}(s),\mathcal{R}(w))/\tau}},$
(17)
where $+$ and $-$ denote positive and negative sample pairs, and $\mathcal{R}(\cdot)$ denotes the averaged state representations. We set $\tau$ to 0.1 as suggested in the paper and tune $\gamma$ as in the ’SRA’ system.
• Disentangling Positional Information (DisPos) Liu et al. (2021) This method
removes the residual connections in a middle layer of the encoder to get the
language-agnostic representations.
• Denoising Training (DT) Wang et al. (2021) This method introduces a denoising auto-encoder objective during training.
• Target Gradient Projection (TGP) Yang et al. (2021b) This method projects the training gradient so that it does not conflict with the oracle gradient computed on a small amount of direct parallel data.
• Language Model Pre-training (LMP) Gu et al. (2019) This method strengthens
the decoder language model prior to machine translation training.
The following systems are implemented based on our method:
• ZS+OT We only add the optimal transport loss (Equation 9) during training. We vary the hyperparameter $\gamma_{1}$ from $0.1$ to $1$ and find that it consistently improves the performance regardless of the value of $\gamma_{1}$. The detailed results and the final hyperparameter settings are given in the appendix.
• ZS+AT We only add the agreement-based training loss (Equation 13) during
training. The $\alpha$ and $\beta$ in the beta distribution are set as $6$ and
$3$, respectively. We vary the hyperparameter $\gamma_{2}$ from ${10}^{-4}$ to
$0.1$.
• ZS+OT+AT (Ours) The model is trained with the complete objective function
(Equation 14). The hyperparameters are set according to the searched results
of the above two systems and are listed in the appendix.
Implementation Details All the systems strictly follow the base model configuration of Vaswani et al. (2017). We employ the Adam optimizer with $\beta_{1}=0.9$ and $\beta_{2}=0.98$, the inverse square root learning rate scheduler, $warmup\_steps=4000$, and $lr=0.0007$. We set the dropout to 0.3 for the IWSLT datasets and 0.1 for the PC-6 and OPUS-7 datasets. All the systems are trained on 4 RTX3090 GPUs with an update frequency of 2 and at most 4096 tokens per GPU. For the IWSLT datasets, we first pretrain the model with the cross-entropy loss (Equation 1) for 20K steps and then continue training with the proposed loss terms for 80K steps. For the PC-6 and OPUS-7 datasets, the pre-training and continual-training steps are both 100K.
(a) ZS
(b) SRA
(c) CL
(d) Ours
Figure 2: The visualization of sentence representations after dimension reduction on the IWSLT three-way-parallel test sets. Blue denotes German, orange denotes Italian, and green denotes Dutch.
### 4.3 Main Results
All the results (including the intermediate results of the ’PivT’ system) are generated with beam size $5$ and length penalty $\alpha=0.6$. The translation quality is evaluated using case-sensitive BLEU Papineni et al. (2002) with the SacreBLEU tool Post (2018). We report tokenized BLEU for Arabic, char-based BLEU for Chinese, and detokenized BLEU for the other languages (signature: BLEU+case.mixed+numrefs.1+smooth.exp+tok.{13a,none,zh}+version.1.5.1). The main results are shown in Table 2. For display convenience, we report the BLEU averaged over directions with the same target language on the PC-6 and OPUS-7 datasets; the detailed results are in the appendix.
The ’Ours’ system significantly improves over the ’ZS’ baseline system and outperforms the other zero-shot-based systems on all datasets. The two proposed methods, OT and AT, both help the model learn universal representations and cross-mappings, so each of them can improve the model performance independently. The two methods also complement each other and further improve the performance when combined. Besides, the ’Ours’ system can even exceed the ’PivT’ system when the distant language pairs in IWSLT-b or the low-resource language pairs in PC-6 bring severe error accumulation problems. We also compare the training speed and put the results in the appendix.
IWSLT | x-De | x-It | x-Nl | Avg.
---|---|---|---|---
ZS | 21.5 | 20.79 | 19.99 | 20.76
SRA | 21.79 | 21.92 | 20.67 | 21.46
CL | 23.47 | 21.52 | 21.09 | 22.03
Ours | 23.6 | 23.33 | 21.48 | 22.80
Table 3: The pair-wise BLEU on the IWSLT three-way-parallel test sets.
## 5 Analysis
In this section, we try to understand how our method improves the zero-shot
translation.
### 5.1 Sentence Representation Visualization
To verify whether our method can better align different languages’ semantic
space, we visualize each model’s encoder output with the IWSLT test sets. We
first select three languages: German, Italian, and Dutch. Then we filter out
the overlapped sentences of the three languages from the corresponding test
sets and create a new three-way-parallel test set. Next, we feed all the
sentences to the encoder of each model and average the encoder output to get
the sentence representation. Last, we apply dimension reduction to the
representation with t-SNE Van der Maaten and Hinton (2008). The visualization
result in Figure 2(a) shows that the ’ZS’ system cannot align the three languages well, which partly confirms our assumption that the conventional MNMT model cannot learn universal representations for all languages. In contrast, the ’Ours’ system (d) draws the representations closer and achieves results comparable to the ’CL’ system (c) without requiring large amounts of negative instances to contrast. The visualization results confirm that our method can learn good universal representations for different languages.
### 5.2 Inspecting Prediction Consistency
To verify whether our method can help map the semantic representations from different languages to the same place in the target language’s space, we inspect the prediction consistency of the models when they are fed synonymous sentences from different languages. Precisely, we measure the pair-wise BLEU on the IWSLT three-way-parallel test set introduced above. We choose one language as the target language, e.g., German, and then translate the other two languages, e.g., Italian and Dutch, into the target language. After obtaining these two translation files, we use one file as the reference and the other as the translation to calculate BLEU, and then we swap the roles of the two files to calculate BLEU again. We average the two BLEU scores to get the pair-wise BLEU. The results in Table 3 show that our method achieves higher scores, which proves that it improves the prediction consistency.
System | IWSLT | IWSLT-b | PC-6 | OPUS-7
---|---|---|---|---
ZS | 93.2% | 93.72% | 87.93% | 74.1%
SRA | 93.9% | 93.88% | 91.54% | 85.83%
CL | 93.97% | 93.96% | 91.76% | 86.23%
Ours | 94.03% | 94.06% | 93.24% | 86.75%
Table 4: The target language prediction accuracy.
### 5.3 Inspecting Spurious Correlations
Zero-shot translation usually suffers from spurious correlations captured in the supervised directions, which means that the model overfits the mapping relationship from the input language to the output language observed in the training set Gu et al. (2019). This problem often causes the off-target phenomenon, where the model generates translations in the wrong target language. To check whether our method can alleviate this phenomenon, we use the Langdetect toolkit (https://github.com/Mimino666/langdetect) to identify the target language and calculate the prediction accuracy as $1-n_{off-target}/n_{total}$. We also compare our method with the ’SRA’ and ’CL’ methods. The results are shown in Table 4. The ’ZS’ baseline system achieves high prediction accuracy on the IWSLT dataset, but the performance begins to decline as the amount of data becomes unbalanced and the languages become more unrelated. On all the datasets, our method achieves higher prediction accuracy and outperforms all the contrast methods. We can conclude from the results that our method reduces the spurious correlations captured by the model.
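For reference, the accuracy computation amounts to the following sketch (the `detect` call is langdetect's standard API; language codes such as 'de' are illustrative):

```python
from langdetect import detect

def target_language_accuracy(hypotheses, tgt_lang):
    """Fraction of outputs detected as the intended target language,
    i.e. 1 - n_off_target / n_total (Table 4)."""
    n_off = sum(detect(h) != tgt_lang for h in hypotheses)
    return 1.0 - n_off / len(hypotheses)
```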
## 6 Related Work
Recent work on zero-shot translation can be divided into two categories. The
first category helps the encoder produce language-agnostic features via extra
regularization loss or training tasks. Pham et al. (2019) propose to compress
the output of the encoder into a consistent number of states. Arivazhagan et
al. (2019) maximize the cosine similarities between the averaged
representations of the source and target sentences. Pan et al. (2021) and Wei
et al. (2021) propose contrastive learning schemes to minimize the averaged
sentence representation gap of similar sentences and maximize that of
irrelevant sentences. Compared with their methods, we directly bridge the gap
between two state sequences, which alleviates the mismatch problem of sentence
representation. Ji et al. (2020) leverage explicit alignment information by
external aligner tool or additional attention layer to obtain the aligned
words for masking, and then they let the model predict the masked words based
on the surrounding words. Compared with this work, our method aligns the whole state sequences of different languages, not just single words. Liu
et al. (2021) remove the residual connections in a middle layer of the encoder
to release the positional correspondence to input tokens. Wang et al. (2021)
introduce a denoising auto-encoder objective to improve the translation
accuracy. Yang et al. (2021b) leverage an auxiliary target language prediction
task to retain information about the target languages. Z. et al. (2022) use optimal transport theory to improve low-resource neural machine translation. Compared with these works, our method introduces explicit constraints on the semantic representations.
The second category extends the training data by generating pseudo sentence
pairs or utilizing monolingual data. Gu et al. (2019) apply decoder pre-
training and back-translation to improve the zero-shot ability. Al-Shedivat
and Parikh (2019) first translate the source and target languages to a third
language and then make consistent predictions based on this pseudo sentence.
Zhang et al. (2020a) propose random online back translation to enforce the
translation of unseen training language pairs. Chen et al. (2021) fuse the
pretrained multilingual model to the NMT model. Compared with these works, our
method does not need additional data or additional time to generate a pseudo corpus. If necessary, our method can also be combined with these works to further improve the zero-shot performance of the model. Yang et al. (2021a) propose to substitute some fragments of the source language with their counterpart translations to obtain code-switched sentences. Compared to this work, our agreement-based method mixes up the translation pairs to generate the pseudo sentence used as the decoder input and then helps the model make consistent predictions.
## 7 Conclusion
In this work, we focus on improving the zero-shot ability of multilingual
neural machine translation. To reduce the discrepancy of the encoder output,
we propose the state mover’s distance based on the optimal transport theory
and directly minimize the distance during training. We also propose an
agreement-based training method to help the decoder make consistent
predictions based on the semantic-equivalent sentences. The experimental
results show that our method can get consistent improvements on diverse
multilingual datasets. Further analysis shows that our method can better align
the semantic space, improve the prediction consistency, and reduce the
spurious correlations.
## Limitations
Although our method can improve the performance in the zero-shot translation directions, it has limited benefits for the supervised translation performance. On the one hand, the vanilla MNMT model is already able to learn a lot of language-shared knowledge. On the other hand, the language-specific knowledge learned by the model also helps it achieve good translation performance in the supervised translation directions. Therefore, the benefit of our method for the supervised translation performance is limited. Besides, some reviewers pointed out that our method degrades the supervised translation performance according to the results of the main experiments. This is because we select the checkpoints based on the performance on the zero-shot validation sets, which may cause a slight decline in the performance of the supervised directions. If we select checkpoints based on the supervised validation sets, our method can improve the zero-shot performance without degrading the BLEU of the supervised directions.
## Acknowledgements
We thank all the anonymous reviewers for their insightful and valuable
comments.
## References
* Al-Shedivat and Parikh (2019) Maruan Al-Shedivat and Ankur P. Parikh. 2019. Consistency by agreement in zero-shot neural machine translation. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)_ , pages 1184–1197.
* Arivazhagan et al. (2019) Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019. The missing ingredient in zero-shot neural machine translation. _CoRR_ , abs/1903.07091.
* Cettolo et al. (2017) Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Niehues Jan, Stüker Sebastian, Sudoh Katsuitho, Yoshino Koichiro, and Federmann Christian. 2017. Overview of the iwslt 2017 evaluation campaign. In _International Workshop on Spoken Language Translation_ , pages 2–14.
* Chen et al. (2021) Guanhua Chen, Shuming Ma, Yun Chen, Li Dong, Dongdong Zhang, Jia Pan, Wenping Wang, and Furu Wei. 2021. Zero-shot cross-lingual transfer of neural machine translation with multilingual pretrained encoders. In _EMNLP 2021_.
* Cheng et al. (2021) Yong Cheng, Wei Wang, Lu Jiang, and Wolfgang Macherey. 2021. Self-supervised and supervised joint training for resource-rich machine translation. In _Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event_ , pages 1825–1835.
* Cheng et al. (2017) Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot-based neural machine translation. In _Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017_ , pages 3974–3980.
* Cuturi (2013) Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. _Advances in neural information processing systems_ , 26:2292–2300.
* Fan et al. (2020) Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Beyond english-centric multilingual machine translation. _CoRR_ , abs/2010.11125.
* Firat et al. (2016) Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In _NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016_ , pages 866–875.
* Gu et al. (2018) Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O. K. Li. 2018. Universal neural machine translation for extremely low resource languages. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers)_ , pages 344–354.
* Gu et al. (2019) Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O. K. Li. 2019. Improved zero-shot neural machine translation via ignoring spurious correlations. In _Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers_ , pages 1258–1268.
* Ha et al. (2016) Thanh-Le Ha, Jan Niehues, and Alexander H. Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. _CoRR_ , abs/1611.04798.
* Ji et al. (2020) Baijun Ji, Zhirui Zhang, Xiangyu Duan, Min Zhang, Boxing Chen, and Weihua Luo. 2020\. Cross-lingual pre-training based transfer for zero-shot neural machine translation. In _AAAI 2020, New York, NY, USA, February 7-12, 2020_ , pages 115–122.
* Johnson et al. (2017) Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. _Trans. Assoc. Comput. Linguistics_ , 5:339–351.
* Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In _ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic_.
* Kusner et al. (2015) Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In _International conference on machine learning_ , pages 957–966. PMLR.
* Lin et al. (2020) Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and Lei Li. 2020. Pre-training multilingual neural machine translation by leveraging alignment information. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020_ , pages 2649–2663.
* Liu et al. (2021) Danni Liu, Jan Niehues, James Cross, Francisco Guzmán, and Xian Li. 2021. Improving zero-shot translation by disentangling positional information. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021_ , pages 1259–1273.
* Liu et al. (2020) Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. _Trans. Assoc. Comput. Linguistics_ , 8:726–742.
* Monroe et al. (2014) Will Monroe, Spence Green, and Christopher D. Manning. 2014. Word segmentation of informal arabic with domain adaptation. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 2: Short Papers_ , pages 206–211.
* Ott et al. (2019) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Demonstrations_ , pages 48–53.
* Pan et al. (2021) Xiao Pan, Mingxuan Wang, Liwei Wu, and Lei Li. 2021. Contrastive learning for many-to-many multilingual neural machine translation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021_ , pages 244–258.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA_ , pages 311–318.
* Peyré et al. (2019) Gabriel Peyré, Marco Cuturi, et al. 2019. Computational optimal transport: With applications to data science. _Foundations and Trends® in Machine Learning_ , 11(5-6):355–607.
* Pham et al. (2019) Ngoc-Quan Pham, Jan Niehues, Thanh-Le Ha, and Alexander H. Waibel. 2019. Improving zero-shot translation with language-independent constraints. In _Proceedings of the Fourth Conference on Machine Translation, WMT 2019, Florence, Italy, August 1-2, 2019 - Volume 1: Research Papers_ , pages 13–23.
* Post (2018) Matt Post. 2018. A call for clarity in reporting BLEU scores. In _Proceedings of the Third Conference on Machine Translation: Research Papers_ , pages 186–191, Belgium, Brussels. Association for Computational Linguistics.
* Schakel and Wilson (2015) Adriaan M. J. Schakel and Benjamin J. Wilson. 2015. Measuring word significance using distributed representations of words. _CoRR_ , abs/1508.02297.
* Schwenk et al. (2021) Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2021. Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021_ , pages 1351–1361.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers_.
* Tseng et al. (2005) Huihsin Tseng, Pi-Chuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher D. Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In _Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2005, Jeju Island, Korea, 14-15, 2005_.
* Van der Maaten and Hinton (2008) Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. _Journal of machine learning research_ , 9(11).
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems_ , pages 5998–6008.
* Villani (2009) Cédric Villani. 2009. _Optimal transport: old and new_ , volume 338. Springer.
* Wang et al. (2021) Weizhi Wang, Zhirui Zhang, Yichao Du, Boxing Chen, Jun Xie, and Weihua Luo. 2021\. Rethinking zero-shot neural machine translation: From a perspective of latent variables. In _EMNLP 2021, Findings_.
* Wei et al. (2021) Xiangpeng Wei, Rongxiang Weng, Yue Hu, Luxi Xing, Heng Yu, and Weihua Luo. 2021\. On learning universal representations across languages. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_.
* Xie et al. (2020) Yujia Xie, Xiangfeng Wang, Ruijia Wang, and Hongyuan Zha. 2020. A fast proximal point method for computing exact wasserstein distance. In _Uncertainty in Artificial Intelligence_ , pages 433–453. PMLR.
* Yang et al. (2021a) Jian Yang, Yuwei Yin, Shuming Ma, Haoyang Huang, Dongdong Zhang, Zhoujun Li, and Furu Wei. 2021a. Multilingual agreement for multilingual neural machine translation. In _ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021_ , pages 233–239.
* Yang et al. (2021b) Yilin Yang, Akiko Eriguchi, Alexandre Muzio, Prasad Tadepalli, Stefan Lee, and Hany Hassan. 2021b. Improving multilingual translation by representation and gradient regularization. In _EMNLP 2021, Long Paper_.
* Yokoi et al. (2020) Sho Yokoi, Ryo Takahashi, Reina Akama, Jun Suzuki, and Kentaro Inui. 2020. Word rotator’s distance. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020_ , pages 2944–2960.
* Z. et al. (2022) Yang Z., Fang Q., and Y. Feng. 2022. Low-resource neural machine translation with cross-modal alignment. In _EMNLP 2022 Main Conference Long Paper_.
* Zhang et al. (2020a) Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020a. Improving massively multilingual neural machine translation and zero-shot translation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_ , pages 1628–1639.
* Zhang et al. (2020b) Rongzhi Zhang, Yue Yu, and Chao Zhang. 2020b. Seqmix: Augmenting active sequence labeling via sequence mixup. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020_ , pages 8566–8579.
## Appendix A Appendix
PC-6 | Cs-Kk | | Kk-Ru | | Ro-Ru | | Tr-Ro | | Cs-Ro | | Cs-Ru |
---|---|---|---|---|---|---|---|---|---|---|---|---
$\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$
PivT | 1.77 | 2.55 | 11.37 | 10.51 | 32.86 | 28.1 | 20.03 | 14.47 | 25.47 | 25.7 | 27.05 | 24.26
ZS | 2.07 | 2.69 | 15.61 | 15.7 | 20.65 | 20.6 | 13.44 | 11.69 | 19.82 | 18.81 | 20.19 | 20.26
SRA | 2.15 | 2.5 | 17.03 | 16.86 | 25.37 | 25.66 | 17.35 | 14.6 | 23.62 | 23.91 | 22.68 | 21.6
CL | 1.99 | 2.68 | 16.48 | 16.49 | 29.28 | 26.8 | 17.82 | 15.66 | 23.87 | 23.42 | 27.05 | 24.29
DisPos | 2.24 | 2.74 | 17.14 | 17.95 | 21.87 | 23.47 | 14.73 | 13.52 | 20.42 | 19.96 | 27.18 | 25.7
DT | 2.2 | 2.87 | 19.23 | 18.88 | 28.05 | 25.88 | 17.82 | 14.41 | 22.29 | 22.3 | 26.29 | 23.98
TLP | 2.01 | 2.82 | 14.59 | 13.01 | 28.41 | 25.88 | 18.53 | 13.25 | 23.11 | 22.54 | 25.24 | 22.74
ZS+OT | 2.16 | 3.02 | 18.12 | 16.35 | 30.71 | 27.84 | 19.18 | 15.63 | 24.44 | 24.17 | 27.18 | 25.71
ZS+AT | 2.06 | 2.82 | 15.8 | 16.54 | 28.01 | 26.37 | 19.25 | 15.59 | 22.63 | 22.55 | 24.6 | 23.27
Ours | 2.2 | 3.08 | 18.3 | 17.91 | 30.59 | 27.73 | 19.66 | 16.16 | 23.58 | 24.49 | 27.22 | 25.66
PC-6 | Cs-Tr | | Kk-Ro | | Kk-Tr | | Ru-Tr | | Zero
---|---|---|---|---|---|---|---|---|---
$\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | Avg.
PivT | 13.37 | 16.36 | 3.3 | 2.75 | 2.91 | 2.11 | 11.59 | 15.31 | 14.58
ZS | 11 | 12.44 | 3.06 | 3.26 | 3.81 | 2.44 | 10.66 | 10.88 | 11.95
SRA | 12.32 | 15.37 | 2.82 | 2.72 | 2.7 | 1.87 | 10.72 | 12.17 | 13.68
CL | 12.02 | 14.16 | 3.33 | 3.34 | 3.49 | 2.44 | 11.72 | 13.52 | 14.49
DisPos | 12.8 | 15.26 | 3.25 | 3.17 | 3.9 | 3.09 | 10.22 | 8.56 | 12.51
DT | 11.62 | 13.38 | 3.49 | 3.37 | 3.96 | 3.24 | 11.97 | 13.38 | 14.43
TLP | 11.96 | 13.98 | 3.33 | 2.98 | 3.65 | 2.98 | 12.02 | 13.5 | 13.83
ZS+OT | 12.83 | 14.54 | 3.5 | 3.11 | 3.94 | 3.26 | 11.92 | 14.44 | 15.1
ZS+AT | 11.99 | 14.11 | 3.41 | 3.03 | 3.46 | 2.54 | 11.9 | 14.11 | 14.2
Ours | 12.85 | 15.21 | 3.24 | 3.18 | 3.95 | 3.04 | 12.81 | 14.96 | 15.29
Table 5: The results of each zero-shot translation direction on the PC-6
corpus. The notations denote the same meaning as in Table 2.
### A.1 PC-6 Data
PC-6 | Size
---|---
En-Kk | 0.12M
En-Tr | 0.39M
En-Ro | 0.77M
En-Cs | 0.82M
En-Ru | 1.84M
Table 6: The statistics about the PC-6 corpus.
The detailed statistics about the PC-6 corpus are shown in Table 6.
### A.2 Experiments Results on PC-6
The detailed results on the PC-6 corpus are shown in Table 5.
| $\gamma_{1}$ | $\gamma_{2}$
---|---|---
IWSLT | 0.4 | 0.001
IWSLT-b | 0.2 | 0.002
PC-6 | 0.2 | 0.003
OPUS-7 | 0.3 | 0.01
Table 7: The hyperparameters $\gamma_{1}$ and $\gamma_{2}$ on each dataset.
$\alpha$ | $\beta$ | zero Avg.
---|---|---
1 | 1 | 17.23
6 | 2 | 17.44
6 | 3 | 17.59
6 | 4 | 17.5
Table 8: The averaged BLEU with different $\alpha$ and $\beta$ for the ’ZS+AT’ system.
| kwps | ratio
---|---|---
ZS | 199 | 1
SRA | 118 | 0.59
SF | 61 | 0.31
CL | 94 | 0.47
ZS+OT | 125 | 0.63
ZS+AT | 61 | 0.31
Ours | 58 | 0.29
Table 9: The training speed on the IWSLT dataset.
### A.3 Hyperparameters
$\gamma_{1}$ and $\gamma_{2}$. The hyperparameters $\gamma_{1}$ and $\gamma_{2}$
in Equation 14 are set as in Table 7.
$\alpha$ and $\beta$. We tried several combinations of $\alpha$ and $\beta$
and report the averaged BLEU in Table 8. Under the optimal setting
($\alpha=6,\beta=3$), the probability expectation that the words of the pseudo
sentence $\mathbf{z}$ come from the source sentence $\mathbf{x}$ is $0.67$ and
from the target sentence $\mathbf{y}$ is $0.33$.
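As a sanity check (our own arithmetic, under the assumption that the mixing ratio $\lambda$ of the pseudo sentence is drawn from a $\mathrm{Beta}(\alpha,\beta)$ distribution, as in standard mixup), these expectations follow directly: $\mathbb{E}[\lambda]=\alpha/(\alpha+\beta)=6/9\approx 0.67$ for the source side, and $1-\mathbb{E}[\lambda]=\beta/(\alpha+\beta)=3/9\approx 0.33$ for the target side.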
### A.4 Training Speed
We test the training speed of all the systems. All the speeds are measured as
kilo-words per second (kwps) and tested in parallel on 4 RTX3090 GPUs with the
same max token and update frequency. We also report the speed ratios of
different systems compared with the speed of the ZS system. The results are
shown in Table 9. The results show that our ’ZS+OT’ system is faster than the
’SRA’ and ’CL’ systems while achieving better performance. The ’ZS+AT’ system is
much slower because it needs three complete forward passes.
Affiliations of Parosh Aziz Abdulla, Mohamed Faouzi Atig, Radu Ciobanu, Richard
Mayr and Patrick Totzke, respectively: Uppsala University, Sweden; Uppsala
University, Sweden; University of Edinburgh, UK; University of Edinburgh, UK;
University of Edinburgh, UK (https://orcid.org/0000-0001-5274-8190). This work
was supported by the EPSRC, grant EP/M027651/1.
29th International Conference on Concurrency Theory (CONCUR 2018), September
4–7, 2018, Beijing, China; editors: Sven Schewe and Lijun Zhang; volume 118,
article 6.
# Universal Safety for Timed Petri Nets is PSPACE-complete
Parosh Aziz Abdulla, Mohamed Faouzi Atig, Radu Ciobanu, Richard Mayr
and Patrick Totzke
###### Abstract.
A timed network consists of an arbitrary number of initially identical 1-clock
timed automata, interacting via hand-shake communication. In this setting
there is no unique central controller, since all automata are initially
identical. We consider the universal safety problem for such controller-less
timed networks, i.e., verifying that a bad event (enabling some given
transition) is impossible regardless of the size of the network.
This universal safety problem is dual to the existential coverability problem
for timed-arc Petri nets, i.e., does there exist a number $m$ of tokens, such
that starting with $m$ tokens in a given place, and none in the other places,
some given transition is eventually enabled.
We show that these problems are PSPACE-complete.
###### Key words and phrases:
timed networks, safety checking, Petri nets, coverability
###### 1991 Mathematics Subject Classification:
Theory of computation → Timed and hybrid models
## 1\. Introduction
#### Background.
Timed-arc Petri nets (TPN) [4, 16, 3, 8, 13] are an extension of Petri nets
where each token carries one real-valued clock and transitions are guarded by
inequality constraints where the clock values are compared to integer bounds
(via strict or non-strict inequalities). The known models differ slightly in
what clock values newly created tokens can have, i.e., whether newly created
tokens can inherit the clock value of some input token of the transition, or
whether newly created tokens always have clock value zero. We consider the
former, more general, case.
Decision problems associated with the reachability analysis of (extended)
Petri nets include _Reachability_ (can a given marking reach another given
marking?) and _Coverability_ (can a given marking ultimately enable a given
transition?).
While Reachability is undecidable for all these TPN models [15], Coverability
is decidable using the well-quasi ordering approach of [1, 10] and complete
for the hyper-Ackermannian complexity class $F_{\omega^{\omega^{\omega}}}$
[12]. With respect to Coverability, TPN are equivalent [7] to (linearly
ordered) data nets [14].
The _Existential Coverability_ problem for TPN asks, for a given place $p$ and
transition $t$, whether there exists a number $m$ such that the marking
$M(m)\overset{\text{\tiny def}}{=}m\cdot\\{(p,\bm{0})\\}$ ultimately enables
$t$. Here, $M(m)$ contains exactly $m$ tokens on place $p$ with all clocks set
to zero and _no other tokens_. This problem corresponds to checking safety
properties in distributed networks of arbitrarily many (namely $m$) initially
identical timed processes that communicate by handshake. A negative answer
certifies that the ‘bad event’ of transition $t$ can never happen regardless
of the number $m$ of processes, i.e., the network is safe for any size. Thus
by checking existential coverability, one solves the dual problem of
_Universal Safety_. (Note that the $m$ timed tokens/processes are only
initially identical. They can develop differently due to non-determinacy in
the transitions.)
The corresponding problem for timed networks studied in [2] does not allow the
dynamic creation of new timed processes (unlike the TPN model which can
increase the number of timed tokens), but considers multiple clocks per
process (unlike our TPN with one clock per token).
The TPN model above corresponds to a distributed network without a central
controller, since initially there are no tokens on other places that could be
used to simulate one. Adding a central controller would make _Existential
Coverability_ polynomially inter-reducible with normal _Coverability_ and thus
complete for $F_{\omega^{\omega^{\omega}}}$ [12] (and even undecidable for
$>1$ clocks per token [2]).
Aminof et al. [6] study the model checking problem of $\omega$-regular
properties for the controller-less model and in particular claim an
$\mathsf{EXPSPACE}$ upper bound for checking universal safety. However, their
result only holds for discrete time (integer-valued clocks) and they do not
provide a matching lower bound.
#### Our contribution.
We show that _Existential Coverability_ (and thus universal safety) is
decidable and $\mathsf{PSPACE}$-complete. This positively resolves an open
question from [2] regarding the decidability of universal safety in the
controller-less networks. Moreover, a symbolic representation of the set of
coverable configurations can be computed (using exponential space).
The $\mathsf{PSPACE}$ lower bound is shown by a reduction from the iterated
monotone Boolean circuit problem. (It does not follow directly from the
$\mathsf{PSPACE}$-completeness of the reachability problem in timed automata
of [5], due to the lack of a central controller.)
The main ideas for the $\mathsf{PSPACE}$ upper bound are as follows. First we
provide a logspace reduction of the Existential Coverability problem for TPN
to the corresponding problem for a syntactic subclass, non-consuming TPN. Then
we perform an abstraction of the real-valued clocks, similar to the one used
in [3]. Clock values are split into integer parts and fractional parts. The
integer parts of the clocks can be abstracted into a finite domain, since the
transition guards cannot distinguish between values above the maximal constant
that appears in the system. The fractional parts of the clock values that
occur in a marking are ordered sequentially. Then every marking can be
abstracted into a string where all the tokens with the $i$-th fractional clock
value are encoded in the $i$-th symbol in the string. Since token
multiplicities do not matter for existential coverability, the alphabet from
which these strings are built is finite. The primary difficulty is that the
length of these strings can grow dynamically as the system evolves, i.e., the
space of these strings is still infinite for a given TPN. We perform a forward
exploration of the space of reachable strings. By using an acceleration
technique, we can effectively construct a symbolic representation of the set
of reachable strings in terms of finitely many regular expressions. Finally,
we can check existential coverability by using this symbolic representation.
## 2\. Timed Petri Nets
We use $\mathbb{N}$ and ${\mathbb{R}}_{\geq 0}$ to denote the sets of
nonnegative integers and reals, respectively. For $n\in\mathbb{N}$ we write
$[{n}]$ for the set $\mathopen{}\mathclose{{}\left\\{0,\ldots,n}\right\\}$.
For a set ${\it A}$, we use ${{\it A}}^{*}$ to denote the set of words, i.e.
finite sequences, over ${\it A}$, and write $\varepsilon$ for the empty word.
If $R$ is a regular expression over ${\it A}$ then
$\mathcal{L}\mathopen{}\mathclose{{}\left(R}\right)\subseteq{\it A}^{*}$
denotes its language.
A _multiset_ over a set $X$ is a function $M:X\to\mathbb{N}$. The set
${X}^{\oplus}$ of all (finitely supported) multisets over $X$ is partially
ordered pointwise (by $\leq$). The multiset union of
$M,M^{\prime}\in{X}^{\oplus}$ is $(M\oplus M^{\prime})\in{X}^{\oplus}$ with
$(M\oplus M^{\prime})(\alpha)\overset{\text{\tiny
def}}{=}M(\alpha)+M^{\prime}(\alpha)$ for all $\alpha\in X$. If $M\geq
M^{\prime}$ then the multiset difference $(M\ominus M^{\prime})$ is the unique
$M^{\prime\prime}\in{X}^{\oplus}$ with $M=M^{\prime}\oplus M^{\prime\prime}$.
We will use a monomial representation and write for example
$(\alpha+\beta^{3})$ for the multiset $(\alpha\mapsto 1,\beta\mapsto 3)$. For
a multiset $M$ and a number $m\in\mathbb{N}$ we let $m\cdot M$ denote the
$m$-fold multiset sum of $M$. We further lift this to sets of numbers and
multisets in the obvious fashion, so that in particular $\mathbb{N}\cdot
S\overset{\text{\tiny def}}{=}\\{n\cdot M\mid n\in\mathbb{N},M\in S\\}$.
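Throughout, multisets are combined with $\oplus$, $\ominus$ and $m\cdot M$. For readers who prefer code, here is a minimal sketch (ours, not part of the paper) of these operations using Python's collections.Counter:

```python
# Multisets as collections.Counter: finitely supported maps X -> N.
from collections import Counter

def mplus(M, N):
    """Multiset union M (+) N."""
    R = Counter(M)
    R.update(N)
    return R

def mminus(M, N):
    """Multiset difference M (-) N, defined only when N <= M pointwise."""
    assert all(M[a] >= n for a, n in N.items())
    R = Counter(M)
    R.subtract(N)
    return +R          # unary + drops zero entries

def mscale(m, M):
    """The m-fold multiset sum m . M."""
    return Counter({a: m * n for a, n in M.items()})

M = Counter({'alpha': 1, 'beta': 3})   # the monomial (alpha + beta^3)
assert mplus(M, M) == mscale(2, M)
assert mminus(mplus(M, M), M) == M
```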
_Timed Petri nets_ are place/transition nets where each token carries a real
value, sometimes called its _clock value_ or _age_. Transition firing depends
on there being sufficiently many tokens whose value is in a specified
interval. All tokens produced by a transition either have age $0$, or inherit
the age of an input-token of the transition. To model time passing, all token
ages can advance simultaneously by the same (real-valued) amount.
###### Definition 1 (TPN).
A _timed Petri net_ (TPN)
$\mathcal{N}=(P,T,\mathit{Var},G,\mathit{Pre},\mathit{Post})$ consists of
finite sets of _places_ $P$, _transitions_ $T$ and _variables_ $\mathit{Var}$,
as well as functions $G,\mathit{Pre},\mathit{Post}$ defining transition
_guards_, _pre-_ and _postconditions_, as follows.
For every transition $t\in T$, the guard $G(t)$ maps variables to (open, half-
open or closed) intervals with endpoints in $\mathbb{N}\cup\\{\infty\\}$,
restricting which values variables may take. All numbers are encoded in unary.
The precondition $\mathit{Pre}(t)$ is a finite multiset over
$(P\times\mathit{Var})$. Let $\mathit{Var}(t)\subseteq\mathit{Var}$ be the
subset of variables appearing positively in $\mathit{Pre}(t)$. The
postcondition $\mathit{Post}(t)$ is then a finite multiset over
$(P\times(\\{0\\}\cup\mathit{Var}(t)))$, specifying the locations and clock
values of produced tokens. Here, the symbolic clock value is either $0$
(demanding a reset to age $0$), or a variable that appeared already in the
precondition.
A _marking_ is a finite multiset over $P\times{\mathbb{R}}_{\geq 0}$.
###### Example 2.
The picture below shows a place/transition representation of a TPN with four
places and one transition. $\mathit{Var}(t)=\\{x,y\\}$,
$\mathit{Pre}(t)=(p,x)^{2}+(q,y)$, $G(t)(x)=[0,5]$, $G(t)(y)=]1,2]$ and
$\mathit{Post}(t)=(r,y)^{3}+(s,0)$.
(Figure: transition $t$ with guards $0\leq x\leq 5$ and $1<y\leq 2$; input arcs labeled $x^{2}$ from $p$ and $y$ from $q$; output arcs labeled $y^{3}$ to $r$ and $0$ to $s$.)
The transition $t$ consumes two tokens from place $p$, both of which have the
same clock value $x$ (where $0\leq x\leq 5$) and one token from place $q$ with
clock value $y$ (where $1<y\leq 2$). It produces three tokens on place $r$ who
all have the same clock value $y$ (where $y$ comes from the clock value of the
token read from $q$), and another token with value $0$ on place $s$.
There are two different binary step relations on markings: _discrete_ steps
$\longrightarrow_{t}$ which fire a transition $t$ as specified by the
relations $G,\mathit{Pre}$, and $\mathit{Post}$, and _time passing_ steps
$\longrightarrow_{d}$ for durations $d\in{\mathbb{R}}_{\geq 0}$, which simply
increment all clocks by $d$.
###### Definition 3 (Discrete Steps).
For a transition $t\in T$ and a variable evaluation
$\pi:\mathit{Var}\to{\mathbb{R}}_{\geq 0}$, we say that _$\pi$ satisfies
$G(t)$_ if $\pi(x)\in G(t)(x)$ holds for all $x\in\mathit{Var}$. By lifting
$\pi$ to multisets over $(P\times\mathit{Var})$ (respectively, to multisets
over $(P\times(\\{0\\}\cup\mathit{Var}))$ with $\pi(0)=0$) in the canonical
way, such an evaluation translates preconditions $\mathit{Pre}(t)$ and
$\mathit{Post}(t)$ into markings $\pi(\mathit{Pre}(t))$ and
$\pi(\mathit{Post}(t))$, where for all $p\in P$ and $c\in{\mathbb{R}}_{\geq
0}$,
$\displaystyle\pi(\mathit{Pre}(t))(p,c)\overset{\text{\tiny
def}}{=}\sum_{\pi(v)=c}\mathit{Pre}(t)(p,v)\qquad\text{and}\qquad\pi(\mathit{Post}(t))(p,c)\overset{\text{\tiny
def}}{=}\sum_{\pi(v)={c}}\mathit{Post}(t)(p,{v}).$
A transition $t\in T$ is called _enabled_ in marking $M$, if there exists an
evaluation $\pi$ that satisfies $G(t)$ and such that $\pi(\mathit{Pre}(t))\leq
M$. In this case, there is a discrete step $M\longrightarrow_{t}M^{\prime}$
from marking $M$ to $M^{\prime}$, defined as
$M^{\prime}=M\ominus\pi(\mathit{Pre}(t))\oplus\pi(\mathit{Post}(t)).$
###### Definition 4 (Time Steps).
Let $M$ be a marking and $d\in{\mathbb{R}}_{\geq 0}$. There is a time step
$M\longrightarrow_{d}M^{\prime}$ to the marking $M^{\prime}$ with
$M^{\prime}(p,{c})\overset{\text{\tiny def}}{=}M(p,{c}-{d})$ for ${c}\geq{d}$,
and $M^{\prime}(p,{c})\overset{\text{\tiny def}}{=}0$, otherwise. We also
refer to $M^{\prime}$ as $(M+d)$.
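To make Definitions 3 and 4 concrete, here is a small brute-force sketch (ours, purely illustrative) that fires the transition $t$ of Example 2; the naive enumeration of candidate variable values works because every variable in $\mathit{Pre}(t)$ must match the age of some token already in $M$:

```python
# A brute-force sketch (ours) of Definitions 3 and 4 for the net of Example 2.
# Markings are Counters over (place, age) pairs; ages are floats.
from collections import Counter
from itertools import product

pre   = Counter({('p', 'x'): 2, ('q', 'y'): 1})            # Pre(t)
post  = Counter({('r', 'y'): 3, ('s', 0): 1})              # Post(t); 0 = reset
guard = {'x': lambda a: 0 <= a <= 5, 'y': lambda a: 1 < a <= 2}

def image(rel, pi):
    """Translate Pre(t)/Post(t) into a marking under evaluation pi (pi(0)=0)."""
    out = Counter()
    for (place, v), k in rel.items():
        out[(place, pi.get(v, 0))] += k
    return out

def discrete_step(M):
    """Return some M -->_t M' if t is enabled in M, else None (Definition 3)."""
    # Candidate values for each variable: ages of tokens on matching places.
    cands = {v: {age for (pl, age) in M
                 if any(p == pl and w == v for (p, w) in pre)} for v in guard}
    for vals in product(*(cands[v] for v in guard)):
        pi = dict(zip(guard, vals))
        if all(g(pi[v]) for v, g in guard.items()):
            need = image(pre, pi)
            if all(M[tok] >= k for tok, k in need.items()):
                return M - need + image(post, pi)
    return None

def time_step(M, d):
    """M -->_d (M + d): advance every token age by d (Definition 4)."""
    return Counter({(pl, age + d): k for (pl, age), k in M.items()})

M = Counter({('p', 0.0): 2, ('q', 0.5): 1})
M = time_step(M, 1.0)       # the q-token now has age 1.5, satisfying G(t)(y)
print(discrete_step(M))     # Counter({('r', 1.5): 3, ('s', 0): 1})
```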
We write $\xrightarrow{}_{\textit{Time}}$ for the union of all timed steps,
$\xrightarrow{}_{\textit{Disc}}$ for the union of all discrete steps and
simply $\xrightarrow{}$ for
$\xrightarrow{}_{\textit{Disc}}\cup\xrightarrow{}_{\textit{Time}}{}$. The
transitive and reflexive closure of $\xrightarrow{}$ is $\xrightarrow{*}$.
$\mathit{Cover}\mathopen{}\mathclose{{}\left(M}\right)$ denotes the set of
markings $M^{\prime}$ for which there is an $M^{\prime\prime}\geq M^{\prime}$
with $M\xrightarrow{*}M^{\prime\prime}$.
We are interested in the _existential coverability problem_ ($\exists$COVER
for short), as follows.
Input: A TPN, an initial place $p$ and a transition $t$.
Question: Does there exist
$M\in\mathit{Cover}\mathopen{}\mathclose{{}\left(\mathbb{N}\cdot\\{(p,{0})\\}}\right)$
that enables $t$?
We show that this problem is $\mathsf{PSPACE}$-complete. Both lower and upper
bound will be shown (w.l.o.g., see Lemma 8) for the syntactic subclass of
_non-consuming_ TPN, defined as follows.
###### Definition 5.
A _timed Petri net_ $(P,T,\mathit{Var},G,\mathit{Pre},\mathit{Post})$ is _non-
consuming_ if for all $t\in T$, $p\in P$ and $x\in\mathit{Var}$ it holds that
both 1) $\mathit{Pre}(t)(p,x)\leq 1$, and 2)
$\mathit{Pre}(t)\leq\mathit{Post}(t)$.
In a non-consuming TPN, token multiplicities are irrelevant for discrete
transitions. Intuitively, having one token $(p,{c})$ is equivalent to having
an inexhaustible supply of such tokens.
The first condition is merely syntactic convenience. It asks that each
transition takes at most one token from each place. The second condition in
Definition 5 implies that for each discrete step
$M\longrightarrow_{t}M^{\prime}$ we have $M^{\prime}\geq M$. Therefore, once a
token $(p,{c})$ is present on a place $p$, it will stay there unchanged
(unless time passes), and it will enable transitions with $(p,{c})$ in their
precondition.
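This suggests a fixpoint view (a sketch of ours, in which discrete_successors is an assumed helper returning the tokens addable by one enabled transition): treating markings as token sets, discrete firing saturates after finitely many rounds, since new tokens carry either age $0$ or an age already present in the marking.

```python
# Sketch (ours): saturation of a non-consuming TPN marking under discrete steps.
def saturate(M, discrete_successors):
    """M: set of (place, age) tokens; discrete_successors(M): addable tokens."""
    M = set(M)
    while True:
        new = discrete_successors(M) - M
        if not new:            # no transition adds anything: fixpoint reached
            return M
        M |= new
```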
Wherever possible, we will therefore from now on allow ourselves to use
set notation for markings, that is, simply treat markings
$M\in{(P\times{\mathbb{R}}_{\geq 0})}^{\oplus}$ as sets
$M\subseteq(P\times{\mathbb{R}}_{\geq 0})$.
## 3\. Lower Bound
$\mathsf{PSPACE}$-hardness of $\exists$COVER does not follow directly from the
$\mathsf{PSPACE}$-completeness of the reachability problem in timed automata
of [5]. The non-consuming property of our TPN makes it impossible to fully
implement the control-state of a timed automaton. Instead our proof uses
multiple timed tokens and a reduction from the iterated monotone Boolean
circuit problem [11].
A depth-1 monotone Boolean circuit is a function
$F:\\{0,1\\}^{n}\to\\{0,1\\}^{n}$ represented by $n$ constraints: For every
$0\leq i<n$ there is a constraint of the form $i^{\prime}=j\otimes k,$ where
$0\leq j,k<n$ and $\otimes\in\\{\wedge,\vee\\}$, which expresses how the next
value of bit $i$ depends on the current values of bits $j$ and $k$. For every
bitvector $\bm{v}\in\\{0,1\\}^{n}$, the function $F$ then satisfies
$F(\bm{v})[i]\overset{\text{\tiny def}}{=}\bm{v}[j]\otimes\bm{v}[k]$. It is
$\mathsf{PSPACE}$-complete to check whether for a given vector
$\bm{v}\in\\{0,1\\}^{n}$ there exists a number $m\in\mathbb{N}$ such that
$F^{m}(\bm{v})[0]=1$.
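For intuition, a brute-force reference check (ours; it uses exponential time and space, whereas the problem itself is solvable in polynomial space) iterates $F$ and stops at the first repetition, which must occur within $2^{n}$ steps:

```python
# Sketch (ours): does F^m(v)[0] = 1 for some m, by following the orbit of v?
def exists_m(constraints, v):
    """constraints[i] = (j, k, op), op in {'and', 'or'}; v: tuple of bits."""
    seen = set()
    while v not in seen:       # the orbit of v over {0,1}^n must cycle
        if v[0] == 1:
            return True
        seen.add(v)
        v = tuple((v[j] & v[k]) if op == 'and' else (v[j] | v[k])
                  for (j, k, op) in constraints)
    return False

# F: bit 0' = 0 and 1, bit 1' = 0 or 1; starting from v = (0, 1):
print(exists_m([(0, 1, 'and'), (0, 1, 'or')], (0, 1)))  # False: bit 0 stays 0
```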
Towards a lower bound for $\exists$COVER (Theorem 7) we construct a non-
consuming TPN as follows, for a given circuit. The main idea is to simulate
circuit constraints by transitions that read tokens of age $1$ (encoding
$\bm{v}$) and produce fresh ones of age $0$ (encoding $F(\bm{v})$), and let time pass
by one unit to enter the next round.
Figure 1. The transitions $i.B,i.R$ and $i.L$ that simulate the update of bit
$i$ according to constraint $i^{\prime}=j\land k$. All transitions demand that
incoming tokens are of age exactly $1$ and only tokens of age $0$ are
produced.
For every bit $0\leq i<n$, the net contains two places $\mathit{True}_{i}$ and
$\mathit{False}_{i}$. A marking $M_{\bm{v}}\subseteq P\times{\mathbb{R}}_{\geq 0}$
is an _encoding_ of a vector $\bm{v}\in\\{0,1\\}^{n}$ if for every $0\leq i<n$
the following hold.
1. (1)
$(\mathit{True}_{i},0)\in M_{\bm{v}}\iff\bm{v}[i]=1$.
2. (2)
$(\mathit{False}_{i},0)\in M_{\bm{v}}\iff\bm{v}[i]=0$.
3. (3)
If $(p,c)\in M_{\bm{v}}$ then $c=0$ or $c\geq 1$.
Note that in particular one cannot have both $(\mathit{True}_{i},0)$ and
$(\mathit{False}_{i},0)$ in $M_{\bm{v}}$. For every constraint
$i^{\prime}=j\land k$ we introduce three transitions, $i.L,i.R$, and $i.B$,
where
$\displaystyle\mathit{Pre}(i.B)$ $\displaystyle\overset{\text{\tiny
def}}{=}{(\mathit{True}_{j},x)+(\mathit{True}_{k},y)}$
$\displaystyle\mathit{Post}(i.B)\overset{\text{\tiny
def}}{=}\mathit{Pre}(i.B)+{(\mathit{True}_{i},0)}$
$\displaystyle\mathit{Pre}(i.L)$ $\displaystyle\overset{\text{\tiny
def}}{=}{(\mathit{False}_{j},x)}$
$\displaystyle\mathit{Post}(i.L)\overset{\text{\tiny
def}}{=}\mathit{Pre}(i.L)+(\mathit{False}_{i},0)$
$\displaystyle\mathit{Pre}(i.R)$ $\displaystyle\overset{\text{\tiny
def}}{=}{(\mathit{False}_{k},x)}$
$\displaystyle\mathit{Post}(i.R)\overset{\text{\tiny
def}}{=}\mathit{Pre}(i.R)+(\mathit{False}_{i},0)$
and the guard for all transitions is $G(x)=G(y)=[1,1]$. See Figure 1 for an
illustration. For disjunctions $i^{\prime}=j\lor k$ the transitions are
defined analogously, with $\mathit{True}$ and $\mathit{False}$ inverted. The
correctness proof of our construction rests on the following simple
observation.
###### Lemma 6.
If $F(\bm{v})=\bm{v}^{\prime}$ then for every encoding $M_{\bm{v}}$ of
$\bm{v}$, there exists an encoding $M_{\bm{v^{\prime}}}$ of $\bm{v}^{\prime}$
such that
$M_{\bm{v}}\longrightarrow_{1}\xrightarrow{*}_{\textit{Disc}}M_{\bm{v}^{\prime}}$.
Conversely, if
$M_{\bm{v}}\longrightarrow_{1}\xrightarrow{*}_{\textit{Disc}}M_{\bm{v}^{\prime}}$
for encodings $M_{\bm{v}}$ and $M_{\bm{v^{\prime}}}$ of $\bm{v}$ and
$\bm{v}^{\prime}$ respectively, then $F(\bm{v})=\bm{v^{\prime}}$.
###### Proof.
For the first part, we construct a sequence
$M_{0}\xrightarrow{}_{\textit{Disc}}M_{1}\xrightarrow{}_{\textit{Disc}}\dots\xrightarrow{}_{\textit{Disc}}M_{n-1}$
where $M_{0}\overset{\text{\tiny def}}{=}(M_{\bm{v}}+1)$ and every step
$M_{i-1}\xrightarrow{}_{\textit{Disc}}M_{i}$ adds tokens simulating the $i$th
constraint of $F$. Since the TPN is non-consuming, we will have that
$M_{i}\geq(M_{\bm{v}}+1)$, for all $i<n$. Consider now constraint
$i^{\prime}$, and assume w.l.o.g. that $i^{\prime}=j\land k$ (the other case
is analogous). There are two cases depending on $\bm{v^{\prime}}[i]$.
1. (1)
Case $\bm{v^{\prime}}[i]=1$. By our assumption that
$F(\bm{v})=\bm{v^{\prime}}$ we know that $\bm{v}[j]=1$ and $\bm{v}[k]=1$. So
$(\mathit{True}_{j},1)\in(M_{\bm{v}}+1)\leq M_{i-1}$ and
$(\mathit{True}_{k},1)\in(M_{\bm{v}}+1)\leq M_{i-1}$. By construction of the
net, there is a transition $i.B$ with
$\mathit{Pre}(i.B)={(\mathit{True}_{j},1)+(\mathit{True}_{k},1)}$ and
$\mathit{Post}(i.B)=\mathit{Pre}(i.B)+{(\mathit{True}_{i},0)}$. This justifies
step $M_{i-1}\longrightarrow_{i.B}M_{i}$ and therefore that $(\mathit{True}_{i},0)\in
M_{i}\leq M_{n-1}$. Also notice that no marking reachable from $M_{0}$ using
only discrete steps can contain the token $(\mathit{False}_{i},0)$. This is
because these can only be produced by transitions requiring either
$(\mathit{False}_{j},1)$ or $(\mathit{False}_{k},1)$, which are not contained
in $M_{0}$ by assumption that $M_{\bm{v}}$ encodes $\bm{v}$. Therefore
$(\mathit{False}_{i},0)\notin M_{n-1}$.
2. (2)
Case $\bm{v^{\prime}}[i]=0$. W.l.o.g., $\bm{v}[j]=0$. Therefore,
$(\mathit{False}_{j},1)\in(M_{\bm{v}}+1)\leq M_{i-1}$. By construction of the
net, there exists transition $i.L$ with
$\mathit{Pre}(i.L)={(\mathit{False}_{j},1)}$ and
$\mathit{Post}(i.L)=\mathit{Pre}(i.L)+{(\mathit{False}_{i},0)}$. This
justifies the step $M_{i-1}\longrightarrow_{i.L}M_{i}$, with $(\mathit{False}_{i},0)\in
M_{i}\leq M_{n-1}$. Notice again that no marking reachable from $M_{0}$ using
only discrete steps can contain the token $(\mathit{True}_{i},0)$. This is
because these can only be produced by transitions $i.B$, requiring both
$(\mathit{True}_{j},1),(\mathit{True}_{k},1)\in M_{0}$, contradicting our
assumptions. Hence, $(\mathit{True}_{i},0)\notin M_{n-1}$.
We conclude that the constructed marking $M_{n-1}$ is an encoding of
$\bm{v^{\prime}}$.
For the other part of the claim, assume that there exist markings $M_{\bm{v}}$
and $M_{\bm{v^{\prime}}}$ which are encodings of vectors $\bm{v}$ and
$\bm{v^{\prime}}$, respectively, with
$M_{\bm{v}}\longrightarrow_{1}\xrightarrow{*}_{\textit{Disc}}M_{\bm{v^{\prime}}}$.
We will show that $F(\bm{v})=\bm{v}^{\prime}$. Recall that
$F(\bm{v})[i]\overset{\text{\tiny def}}{=}\bm{v}[j]\otimes\bm{v}[k]$, where
$0\leq j,k<n$ and $\otimes\in\\{\wedge,\vee\\}$. We will show for each $i<n$
that $\bm{v}^{\prime}[i]=\bm{v}[j]\otimes\bm{v}[k]$. Again, consider the
constraint $i^{\prime}$, and assume w.l.o.g. that $i^{\prime}=j\land k$ (the
other case is analogous). There are two cases.
1. (1)
Case $\bm{v^{\prime}}[i]=1$. By definition of a marking encoding, we have that
$(\mathit{True}_{i},0)\in M_{\bm{v^{\prime}}}$. By construction, there is a transition
$i.B$ with $\mathit{Pre}(i.B)={(\mathit{True}_{j},1)+(\mathit{True}_{k},1)}$
and $\mathit{Post}(i.B)=\mathit{Pre}(i.B)+{(\mathit{True}_{i},0)}$. By
assumption, it holds that
$(M_{\bm{v}}+1)\xrightarrow{*}_{\textit{Disc}}M_{\bm{v^{\prime}}}$, where
$M_{\bm{v}}\longrightarrow_{1}(M_{\bm{v}}+1)$. Since the token
$(\mathit{True}_{i},0)$ can only be produced by transition $i.B$, we must have
$(\mathit{True}_{j},1)\in(M_{\bm{v}}+1)$ and
$(\mathit{True}_{k},1)\in(M_{\bm{v}}+1)$. Hence, we have that $\bm{v}[j]=1$
and $\bm{v}[k]=1$, and therefore that
$F(\bm{v})[i]=\bm{v^{\prime}}[i]=\bm{v}[j]\land\bm{v}[k]$.
2. (2)
Case $\bm{v^{\prime}}[i]=0$. Then $(\mathit{False}_{i},0)\in M_{\bm{v^{\prime}}}$ and,
since this token can only be produced by transitions $i.L$ or $i.R$, either
$(\mathit{False}_{j},1)\in(M_{\bm{v}}+1)$ or
$(\mathit{False}_{k},1)\in(M_{\bm{v}}+1)$.
Therefore $(\mathit{False}_{j},0)\in(M_{\bm{v}})$ or
$(\mathit{False}_{k},0)\in(M_{\bm{v}})$ and because $M_{\bm{v}}$ is an
encoding of $\bm{v}$, this means that either $\bm{v}[j]=0$ or $\bm{v}[k]=0$.
Therefore, $F(\bm{v})[i]=\bm{v}[j]\land\bm{v}[k]=0=\bm{v^{\prime}}[i]$. ∎
###### Theorem 7.
$\exists$COVER is $\mathsf{PSPACE}$-hard for non-consuming TPN.
###### Proof.
For a given monotone Boolean circuit, define a non-consuming TPN as above. By
induction on $m\in\mathbb{N}$ using Lemma 6, we derive that there exists
$m\in\mathbb{N}$ with $F^{m}(\bm{v})=\bm{v}^{\prime}$ and
$\bm{v}^{\prime}[0]=1$ if, and only if, there exists encodings $M_{\bm{v}}$ of
$\bm{v}$ and $M_{\bm{v^{\prime}}}$ of $\bm{v^{\prime}}$, with
$M_{\bm{v}}\xrightarrow{*}M_{\bm{v}^{\prime}}$. Moreover, if there is a
marking $M$ such that $M_{\bm{v}}\xrightarrow{*}M$ and $0\in{\it frac}(M)$,
i.e., $M$ contains a token whose age has fractional part $0$, then $M\leq M_{\bm{v^{\prime}}}$ for
some encoding $M_{\bm{v}^{\prime}}$ of a vector
$\bm{v^{\prime}}=F^{m}(\bm{v})$. This means that it suffices to add one
transition $t$ with $\mathit{Pre}(t)=(\mathit{True}_{0},0)$ whose enabledness
witnesses the existence of a reachable encoding $M_{\bm{v}^{\prime}}$
containing a token $(\mathit{True}_{0},0)$. By the properties above, there
exists $m\in\mathbb{N}$ with $F^{m}(\bm{v})=\bm{v}^{\prime}$ and
$\bm{v}^{\prime}[0]=1$ iff
$M_{\bm{v}}\xrightarrow{*}M_{\bm{v}^{\prime}}\xrightarrow{t}$. ∎
This lower bound holds even for discrete-time TPN (e.g., [9]), because the proof
uses only timed steps with duration $d=1$.
## 4\. Upper Bound
We start by observing that we can restrict ourselves, without loss of
generality, to non-consuming TPN (Definition 5) for showing the upper bound.
Intuitively, since we start with an arbitrarily high number of tokens anyway,
it does not matter how many of them are consumed by transitions during the
computation, since some always remain.
###### Lemma 8.
The $\exists$COVER problem for TPN logspace-reduces to the $\exists$COVER
problem for non-consuming TPN. That is, for every TPN $\mathcal{N}$ and for
every place $p$ and transition $t$ of $\mathcal{N}$, one can construct, using
logarithmic space, a non-consuming TPN $\mathcal{N}^{\prime}$ together with a
place $p^{\prime}$ and transition $t^{\prime}$ of $\mathcal{N}^{\prime}$, so
that there exists
$M\in\mathit{Cover}_{\mathcal{N}}\mathopen{}\mathclose{{}\left(\mathbb{N}\cdot\\{(p,{0})\\}}\right)$
enabling $t$ in $\mathcal{N}$ if and only if there exists
$M^{\prime}\in\mathit{Cover}_{\mathcal{N}^{\prime}}\mathopen{}\mathclose{{}\left(\mathbb{N}\cdot\\{(p^{\prime},0)\\}}\right)$
that enables $t^{\prime}$ in $\mathcal{N}^{\prime}$.
###### Proof.
First notice that the first condition in Definition 5, which asks that every
transition takes at most one token from each place, is merely a syntactic
convenience. A net satisfying this condition can be constructed by adding a
few extra places and intermediate transitions to first distribute tokens to
those extra places for the original transition to consume.
So let us assume, w.l.o.g., that $\mathcal{N}$ satisfies this condition and let
$\mathcal{N}^{\prime}$ be the non-consuming variant derived from $\mathcal{N}$
where, for all transitions $t\in T$,
$\mathit{Post}_{\mathcal{N}^{\prime}}(t)\overset{\text{\tiny
def}}{=}\mathit{Post}_{\mathcal{N}}(t)\oplus\mathit{Pre}_{\mathcal{N}}(t)$.
Notice that then, for every discrete step $M\longrightarrow_{t}M^{\prime}$ we
have that $M\leq M^{\prime}$. We prove the following claim.
###### Claim 8.1.
_For every place $p$ and transition $t$ of $\mathcal{N}$ there exists
$M\in\mathit{Cover}_{\mathcal{N}}(\mathbb{N}\cdot\\{(p,{0})\\})$ enabling $t$
in $\mathcal{N}$ if, and only if there exists
$M^{\prime}\in\mathit{Cover}_{\mathcal{N}^{\prime}}(\mathbb{N}\cdot\\{(p,0)\\})$
that enables $t$ in $\mathcal{N}^{\prime}$. _
The “$\mathcal{N}\to\mathcal{N}^{\prime}$” direction follows from the
observation that the pointwise ordering $\leq$ on markings is a simulation:
If $M\xrightarrow{}N$ and $M^{\prime}\geq M$ then there exists an
$N^{\prime}\geq N$ with $M^{\prime}\xrightarrow{}N^{\prime}$. For the other
direction, suppose there exists a witnessing path
$m\cdot\\{(p,{0})\\}\leavevmode\nobreak\ =\leavevmode\nobreak\
M_{0}\xrightarrow{}M_{1}\xrightarrow{}M_{2}\xrightarrow{}\cdots\xrightarrow{}M_{k}\xrightarrow{t}$
of length $k$ in $\mathcal{N}^{\prime}$. We can inductively derive a
witnessing path in $\mathcal{N}$ backwards, again using the fact that $\leq$
is a simulation. First note that if $M^{\prime}$ enables $t$, then every
$m^{\prime}\cdot M^{\prime}$ with $m^{\prime}>0$ enables $t$ (in both nets).
Suppose $M_{i}\xrightarrow{\rho}$ is a path of length $(k-i)$ that ends in a
$t$-transition. By the simulation property, there is such a path from every
$m\cdot M_{i}$, $m>0$. Further, there must exist markings
$M^{\prime}_{i-1}\in\,\downarrow(\mathbb{N}\cdot M_{i-1})$ and
$M^{\prime}_{i}\in\,\downarrow(\mathbb{N}\cdot M_{i})$ such that
$M^{\prime}_{i-1}\xrightarrow{}M^{\prime}_{i}$. It suffices to pick
$M^{\prime}_{i-1}\overset{\text{\tiny def}}{=}B\cdot M_{i-1}$, where
$B\in\mathbb{N}$ is the maximal cardinality of any multiset $\mathit{Pre}(t)$
(This number is itself bounded by $\lvert
P\rvert\cdot\lvert\mathit{Var}\rvert$ by our assumption on $\mathit{Pre}(t)$).
We conclude that in $\mathcal{N}$ there is a path ending in a $t$-transition
and starting in marking $(B\cdot k)\cdot M_{0}$, which is in
$\mathbb{N}\cdot\\{(p,{0})\\}$. ∎
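The transformation at the heart of this proof is a one-liner; a sketch (ours), reusing the Counter encoding from Section 2:

```python
# Sketch (ours) of the transformation in the proof of Lemma 8: the
# non-consuming variant returns every consumed token, so Post'(t) >= Pre(t)
# and every discrete step only grows the marking.
from collections import Counter

def make_nonconsuming(pre, post):
    """pre[t], post[t]: Counters over (place, symbolic age) pairs."""
    return {t: post[t] + pre[t] for t in pre}   # Post'(t) := Post(t) (+) Pre(t)

pre  = {'t': Counter({('p', 'x'): 1, ('q', 'y'): 1})}
post = {'t': Counter({('r', 'y'): 3, ('s', 0): 1})}
print(make_nonconsuming(pre, post)['t'])
# -> (r,y) with multiplicity 3, plus one copy each of (s,0), (p,x), (q,y)
```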
### 4.1. Region Abstraction
We recall a constraint system called regions defined for timed automata [5].
The version for TPN used here is similar to the one in [3].
Consider a fixed, non-consuming TPN
$\mathcal{N}=(P,T,\mathit{Var},G,\mathit{Pre},\mathit{Post})$. Let
$c_{\mathit{max}}$ be the largest finite value appearing in transition guards
$G$. Since different tokens with age $>c_{\mathit{max}}$ cannot be
distinguished by transition guards, we consider only token ages below or equal
to $c_{\mathit{max}}$ and treat the integer parts of older tokens as equal to
$c_{\mathit{max}}+1$. Let ${\it int}(c)\overset{\text{\tiny
def}}{=}\min\\{c_{\mathit{max}}+1,\lfloor{c}\rfloor\\}$ and ${\it
frac}(c)\overset{\text{\tiny def}}{=}c-\lfloor{c}\rfloor$ for a real value
$c\in{\mathbb{R}}_{\geq 0}$. We will work with an abstraction of TPN markings
as words over the alphabet $\Sigma\overset{\text{\tiny
def}}{=}2^{P\times[{c_{\mathit{max}}+1}]}$. Each symbol $X\in\Sigma$
represents the places and integer ages of tokens for a particular fractional
value.
###### Definition 9.
Let $M\subseteq P\times{\mathbb{R}}_{\geq 0}$ be a marking and let ${\it
frac}(M)\overset{\text{\tiny def}}{=}\\{{\it frac}(c)\mid(p,c)\in M\\}$ be the
set of fractional clock values that appear in $M$.
Let $S\subset[0,1[$ be a finite set of real numbers with $0\in S$ and ${\it
frac}(M)\subseteq S$ and let $f_{0},f_{1},\dots,f_{n}$, be an enumeration of
$S$ so that $f_{i-1}<f_{i}$ for all $i\leq n$. The _$S$ -abstraction_ of $M$
is
$\mathit{abs}_{S}(M)\overset{\text{\tiny def}}{=}x_{0}x_{1}\dots
x_{n}\in\Sigma^{*}$
where $x_{i}\overset{\text{\tiny def}}{=}\\{(p,{\it int}(c))\mid(p,c)\in
M\land{\it frac}(c)=f_{i}\\}$ for all $i\leq n$. We simply write
$\mathit{abs}(M)$ for the shortest abstraction, i.e. with respect to
$S=\\{0\\}\cup{\it frac}(M)$.
###### Example 10.
The abstraction of marking $M=\\{(p,2.1),(q,2.2),(p,5.1),(q,5.1)\\}$ is
$\mathit{abs}(M)=\emptyset\leavevmode\nobreak\
\\{(p,2),(p,5),(q,5)\\}\leavevmode\nobreak\ \\{(q,2)\\}$. The first symbol is
$\emptyset$, because $M$ contains no token with an integer age (i.e., no token
whose age has fractional part $0$). The second and third symbols represent
sets of tokens with fractional values $0.1$ and $0.2$, respectively.
Clocks with integer values play a special role in the behavior of TPN, because
the constants in the transition guards are integers. Thus we always include
the fractional part $0$ in the set $S$ in Definition 9.
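A small sketch (ours) of the abstraction of Definition 9, reproducing Example 10; the rounding of fractional parts merely guards against floating-point noise, and cmax stands for $c_{\mathit{max}}$:

```python
# Sketch (ours) of abs(M): one alphabet symbol per fractional clock value.
from collections import defaultdict

def abstraction(M, cmax):
    """M: set of (place, age) pairs. Returns the word abs(M) over Sigma."""
    by_frac = defaultdict(set)
    for p, c in M:
        f = round(c - int(c), 9)                 # fractional part, de-noised
        by_frac[f].add((p, min(cmax + 1, int(c))))
    S = sorted({0.0} | set(by_frac))             # always include fraction 0
    return [frozenset(by_frac[f]) for f in S]

M = {('p', 2.1), ('q', 2.2), ('p', 5.1), ('q', 5.1)}
print(abstraction(M, 5))
# [frozenset(), frozenset({('p',2),('p',5),('q',5)}), frozenset({('q',2)})]
```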
We use a special kind of regular expression over $\Sigma$ to represent
coverable sets of TPN markings as follows.
###### Definition 11.
A regular expression $E$ over $\Sigma$ represents the downward-closed set of
TPN markings covered by one that has an abstraction in the language of $E$:
$[\\![E]\\!]\overset{\text{\tiny def}}{=}\\{N\mid\exists M\exists
S.\leavevmode\nobreak\ M\geq
N\land\mathit{abs}_{S}(M)\in\mathcal{L}\mathopen{}\mathclose{{}\left(E}\right)\\}.$
An expression is _simple_ if it is of the form $E=x_{0}x_{1}\dots x_{k}$ where
for all $i\leq k$ either $x_{i}\in\Sigma$ or $x_{i}={y_{i}}^{*}$ for some
$y_{i}\in\Sigma$. In the latter case we say that $x_{i}$ _carries a star_.
That is, a simple expression is free of Boolean combinators and uses only
concatenation and Kleene star. We will write $\hat{x}_{i}$ to denote the
symbol in $\Sigma$ at position $i$: it is $x_{i}$ if $x_{i}\in\Sigma$ and
$y_{i}$ otherwise.
###### Remark 12.
Notice that for all simple expressions $\alpha,\beta$ so that
$\lvert\alpha\rvert>0$, we have that
$[\\![\alpha\emptyset\beta]\\!]=[\\![\alpha\beta]\\!]$. However, unless
$\alpha$ has length $0$ or is of the form $\alpha=\emptyset\alpha^{\prime}$,
we have $[\\![\emptyset\alpha]\\!]\neq[\\![\alpha]\\!]$. This is because a
marking $M$ that contains a token $(p,c)$ with ${\it frac}(c)=0$ has the
property that all abstractions $\mathit{abs}_{S}(M)=x_{0}\dots x_{k}$ of $M$
have $x_{0}\neq\emptyset$.
The following lemmas express the effect of TPN transitions at the level of the
region abstraction. Lemmas 13 and 15 state that the maximal firing of discrete
transitions (the relation $\xrightarrow{*}_{\textit{Disc}}$) is computable and
monotone. Lemmas 16 and 17 state how to represent timed-step successor
markings.
###### Lemma 13.
For every non-consuming TPN $\mathcal{N}$ there are polynomial time computable
functions $f:\Sigma\times\Sigma\times\Sigma\to\Sigma$ and
$g:\Sigma\times\Sigma\times\Sigma\to\Sigma$ with the following properties.
1. (1)
$f$ and $g$ are monotone (w.r.t. subset ordering) in each argument.
2. (2)
$f(\alpha,\beta,x)\supseteq x$ and $g(\alpha,\beta,x)\supseteq x$ for all
$\alpha,\beta,x\in\Sigma$.
3. (3)
Suppose that $E=x_{0}x_{1}\dots x_{k}$ is a simple expression,
$\alpha\overset{\text{\tiny def}}{=}x_{0}$ and $\beta\overset{\text{\tiny
def}}{=}\bigcup_{i>0}\hat{x}_{i}$, and
$E^{\prime}=x^{\prime}_{0}x^{\prime}_{1}\dots x^{\prime}_{k}$ is the derived
expression defined by conditions:
1. (a)
$x_{0}^{\prime}\overset{\text{\tiny def}}{=}f(\alpha,\beta,x_{0})$,
2. (b)
$x_{i}^{\prime}\overset{\text{\tiny def}}{=}g(\alpha,\beta,\hat{x}_{i})^{*}$
for $i>0$,
3. (c)
$x_{i}^{\prime}$ carries a star iff $x_{i}$ does.
Then $[\\![E^{\prime}]\\!]=\\{M^{\prime\prime}\mid\exists M\in[\\![E]\\!]\;\exists
M^{\prime}.\;M\xrightarrow{*}_{\textit{Disc}}M^{\prime}\geq M^{\prime\prime}\\}$.
A proof of this statement is in the appendix. It is essentially due to the
monotonicity of discrete transition firing in TPN and the fact that
iteratively firing transitions must saturate due to the non-consuming
semantics. We first prove it only for star-free expressions $E$ in condition 3
(Lemma 25) and then generalize to all simple expressions by induction.
###### Definition 14.
We will write $\mathit{SAT}(E)\overset{\text{\tiny def}}{=}E^{\prime}$ for the
successor expression $E^{\prime}$ of $E$ guaranteed by Lemma 13. I.e.,
$\mathit{SAT}(E)$ is the saturation of $E$ by maximally firing discrete
transitions.
Notice that by definition it holds that
$[\\![E]\\!]\subseteq[\\![\mathit{SAT}(E)]\\!]\subseteq\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![E]\\!]}\right)$,
and consequently also that
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\mathit{SAT}(E)]\\!]}\right)=\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![E]\\!]}\right)$.
###### Lemma 15.
Suppose that $X=x_{0}x_{1}\dots x_{k}$ is a simple expression of length $k+1$
with $\mathit{SAT}(X)=x^{\prime}_{0}x^{\prime}_{1}\dots x^{\prime}_{k}$ and
$x_{0},x^{\prime}_{0}\in\Sigma$. Let
$Y=y_{0}\alpha_{1}y_{1}\alpha_{2}\dots\alpha_{k}y_{k}$ be a simple expression
with
$\mathit{SAT}(Y)=y^{\prime}_{0}\alpha^{\prime}_{1}y^{\prime}_{1}\alpha^{\prime}_{2}\dots\alpha^{\prime}_{k}y^{\prime}_{k}$
and $y_{0},y^{\prime}_{0}\in\Sigma$.
If $\hat{x}_{i}\subseteq\hat{y}_{i}$ for all $i\leq k$ then
$\hat{x}^{\prime}_{i}\subseteq\hat{y}^{\prime}_{i}$ for all $i\leq k$.
###### Proof.
The assumption of the lemma provides that $\alpha_{x}\overset{\text{\tiny
def}}{=}x_{0}\subseteq\alpha_{y}\overset{\text{\tiny def}}{=}y_{0}$ and
$\beta_{x}\overset{\text{\tiny def}}{=}\bigcup_{k\geq
i>0}\hat{x}_{i}\subseteq\beta_{y}\overset{\text{\tiny def}}{=}\bigcup_{k\geq
i>0}\hat{y}_{i}$. Therefore, by Item 1 of Lemma 13, we get that
$x^{\prime}_{0}=f(\alpha_{x},\beta_{x},x_{0})\quad\subseteq\quad
f(\alpha_{y},\beta_{y},y_{0})=y^{\prime}_{0}$
and similarly, for all $k\geq i\geq 0$, that
$\hat{x}^{\prime}_{i}=g(\alpha_{x},\beta_{x},\hat{x}_{i})\leavevmode\nobreak\
\subseteq\leavevmode\nobreak\
g(\alpha_{y},\beta_{y},\hat{y}_{i})=\hat{y}^{\prime}_{i}.$ ∎
For $x\in\Sigma$ we write $(x+1)\overset{\text{\tiny def}}{=}\\{(p,{\it
int}(n+1))\mid(p,n)\in x\\}$ for the symbol where token ages are incremented
by $1$.
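A corresponding one-line sketch (ours) of this shifted symbol:

```python
# Sketch (ours) of (x+1): integer ages advance by one, capped at cmax + 1
# exactly as in the definition of int(c).
def shift(x, cmax):
    return frozenset((p, min(cmax + 1, n + 1)) for (p, n) in x)

assert shift(frozenset({('q', 2), ('q', 6)}), 5) == frozenset({('q', 3), ('q', 6)})
```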
###### Lemma 16.
$[\\![\emptyset E]\\!]=\\{M^{\prime}\mid\exists M\in[\\![E]\\!]\land
M\longrightarrow_{d}M^{\prime}\land d<1-\max({\it frac}(M))\\}$.
###### Proof.
_“ $\supseteq$”_: Suppose that $M$ is a non-empty marking in $[\\![E]\\!]$,
$d<1-\max({\it frac}(M))$ and $M\longrightarrow_{d}M^{\prime}$. The assumption
on $d$ implies that for every token $(p,c)\in M$ we have ${\it int}(c)={\it
int}(c+d)$. In other words, the integral part of the token age remained the
same. Therefore $(p,{\it int}(c))=(p,{\it int}(c+d))\in M^{\prime}$. Also from
the assumption on $d$ we get that
${\it frac}(M^{\prime})=\\{x+d\mid x\in{\it frac}(M)\\}.$
Recall that $\mathit{abs}(M)=\mathit{abs}_{S}(M)$ and
$\mathit{abs}(M^{\prime})=\mathit{abs}_{S^{\prime}}(M^{\prime})$ for the sets
$S\overset{\text{\tiny def}}{=}\\{0\\}\cup{\it frac}(M)$ and
$S^{\prime}\overset{\text{\tiny def}}{=}\\{0\\}\cup{\it frac}(M^{\prime})$.
Clearly, $0\notin{\it frac}(M^{\prime})$. There are two cases:
1. (1)
$0\in{\it frac}(M)$. Then
$\mathit{abs}(M^{\prime})=\emptyset\mathit{abs}(M)\in\mathcal{L}\mathopen{}\mathclose{{}\left(\emptyset
E}\right)$, and consequently, $M^{\prime}\in[\\![\emptyset E]\\!]$.
2. (2)
$0\notin{\it frac}(M)$. Then
$\mathit{abs}(M^{\prime})=\mathit{abs}(M)=\emptyset
w\in\mathcal{L}\mathopen{}\mathclose{{}\left(E}\right)$. Suppose that
$E=x_{0}\alpha$, i.e., $E$ has $x_{0}\in\Sigma$ as its leftmost symbol, and
$w\in\mathcal{L}\mathopen{}\mathclose{{}\left(\alpha}\right)$. If
$x_{0}=\emptyset$ then $[\\![E]\\!]=[\\![\emptyset E]\\!]$ and thus
$\mathit{abs}(M^{\prime})\in[\\![\emptyset E]\\!]$. Otherwise, if
$x_{0}\neq\emptyset$ then
$x_{0}w\in\mathcal{L}\mathopen{}\mathclose{{}\left(E}\right)$ and
$x_{0}w=\mathit{abs}(M^{\prime\prime})$ for some marking $M^{\prime\prime}\geq
M^{\prime}$. So again, $M^{\prime}\in[\\![\emptyset E]\\!]$.
_“ $\subseteq$”_: W.l.o.g., pick a non-empty marking
$M^{\prime}\in[\\![\emptyset E]\\!]$. If $E$ has $\emptyset$ as its leftmost
symbol, then $[\\![\emptyset E]\\!]=[\\![E]\\!]$ and the claim follows using
$d=0$, since then $M^{\prime}\in[\\![E]\\!]$. So suppose that $E$ does not
start with $\emptyset$. Note that by Definition 9, there are no tokens in the
marking $M^{\prime}$ whose clocks have fractional value zero. Let
$d\overset{\text{\tiny def}}{=}\min({\it frac}(M^{\prime}))$
be the minimal fractional clock value among the tokens of $M^{\prime}$ and
based on this, define $M\overset{\text{\tiny def}}{=}\\{(p,c-d)\mid(p,c)\in
M^{\prime}\\}$. By construction of $M$ we get $M\longrightarrow_{d}M^{\prime}$
and also that $\max({\it frac}(M))=\max({\it frac}(M^{\prime}))-d<1-d$.
Therefore $d<1-\max({\it frac}(M))$. Finally, observe that ${\it
frac}(M)=\\{x-d\mid x\in{\it frac}(M^{\prime})\\}$ and $0\in{\it frac}(M)$. It
follows that $\mathit{abs}(M^{\prime})=\emptyset\mathit{abs}(M)$ and therefore
that $\mathit{abs}(M)\in\mathcal{L}\mathopen{}\mathclose{{}\left(E}\right)$
and $M\in[\\![E]\\!]$. This means that $M^{\prime}$ is included in the set on
the right in the claim. ∎
###### Lemma 17.
Let $\alpha z$ be a simple expression where $\hat{z}=z\in\Sigma$ (the
rightmost symbol is not starred). Then, $[\\![(z+1)\alpha]\\!]$ contains a
marking $N$ if, and only if, there exist markings $N^{\prime}\geq N$ and $M$,
and a set $S\subseteq[0,1[$ so that
1. (1)
$\lvert S\rvert=\lvert\alpha z\rvert$
2. (2)
$\mathit{abs}_{S}(M)\in\mathcal{L}\mathopen{}\mathclose{{}\left(\alpha
z}\right)$
3. (3)
$M\longrightarrow_{d}N^{\prime}$ for $d=1-\max(S)$.
###### Proof.
Suppose that there are markings $N,N^{\prime},M$, a set $S\subseteq[0,1[$ and
$d\in{\mathbb{R}}_{\geq 0}$ such that conditions 1 to 3 are satisfied. Let
$S^{\prime}\overset{\text{\tiny def}}{=}\\{0\\}\cup\\{s+d\mid s\in
S\setminus\\{d\\}\\}$. Then, $\lvert S^{\prime}\rvert=\lvert S\rvert$ and
$\mathit{abs}_{S^{\prime}}(N^{\prime})\in\mathcal{L}\mathopen{}\mathclose{{}\left((z+1)\alpha}\right)$,
which witnesses that $N\in[\\![(z+1)\alpha]\\!]$.
Conversely, let $N\in[\\![(z+1)\alpha]\\!]$ be a non-empty marking. If
$\lvert\alpha\rvert=0$, then $N\in[\\![(z+1)]\\!]$ and so
$\mathit{abs}_{S}(N)\in\mathcal{L}\mathopen{}\mathclose{{}\left((z+1)}\right)$
for $S\overset{\text{\tiny def}}{=}{\it frac}(N)=\\{0\\}$. This means that
$M\longrightarrow_{1}N=(M+1)$ for a marking $M$ with
$\mathit{abs}_{S}(M)\in\mathcal{L}\mathopen{}\mathclose{{}\left(z}\right)=\mathcal{L}\mathopen{}\mathclose{{}\left(\alpha
z}\right)$.
If $\lvert\alpha\rvert>0$, pick some marking $N^{\prime}\geq N$ and set
$S^{\prime}$ so that $\mathit{abs}_{S^{\prime}}(N^{\prime})=(z+1)w$, for some
word $w\in\mathcal{L}\mathopen{}\mathclose{{}\left(\alpha}\right)$. Then we
must have that $\lvert S^{\prime}\rvert=\lvert(z+1)\alpha\rvert>1$ and so
$d\overset{\text{\tiny def}}{=}\min(S^{\prime}\setminus\\{0\\})$ exists. Let
$S\overset{\text{\tiny def}}{=}\\{s-d\mid s\in S^{\prime}\setminus\\{0\\}\\}\cup\\{1-d\\}$ and
$M$ be the unique marking with $M\longrightarrow_{d}N^{\prime}$. Notice that
$1-d=\max(S)$. It follows that
$\mathit{abs}_{S}(M)=wz\in\mathcal{L}\mathopen{}\mathclose{{}\left(\alpha
z}\right)$. ∎
We will often use the following simple fact, which is a direct consequence of
Lemma 17.
###### Corollary 18.
$[\\![(z+1)\alpha]\\!]\subseteq\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\alpha
z]\\!]}\right)$.
Finally, the following lemma will be the basis for our exploration algorithm.
###### Lemma 19.
Let $\alpha x_{0}^{*}$ be a simple expression with $\mathit{SAT}(\alpha
x_{0}^{*})=\alpha x_{0}^{*}$. Then
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\alpha
x_{0}^{*}]\\!]}\right)=[\\![\alpha
x_{0}^{*}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![(x_{0}+1)\alpha
x_{0}^{*}]\\!]}\right)$.
###### Proof.
For the right to left inclusion notice that $[\\![\alpha
x_{0}^{*}]\\!]\subseteq\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\alpha
x_{0}^{*}]\\!]}\right)$ trivially holds. For the rest, we have
$[\\![(x_{0}+1)\alpha
x_{0}^{*}]\\!]\subseteq\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\alpha
x_{0}^{*}]\\!]}\right)$ by Corollary 18, and therefore
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![(x_{0}+1)\alpha
x_{0}^{*}]\\!]}\right)\leavevmode\nobreak\ \subseteq\leavevmode\nobreak\
\mathit{Cover}\mathopen{}\mathclose{{}\left(\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\alpha
x_{0}^{*}]\\!]}\right)}\right)=\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\alpha
x_{0}^{*}]\\!]}\right)$. For the left to right inclusion, we equivalently show
that
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\alpha
x_{0}^{*}]\\!]}\right)\setminus[\\![\alpha
x_{0}^{*}]\\!]\subseteq\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![(x_{0}+1)\alpha
x_{0}^{*}]\\!]}\right)$ (1)
Using the assumption that $\mathit{SAT}(\alpha x_{0}^{*})=\alpha x_{0}^{*}$,
every marking in the set on the left is coverable from $[\\![\alpha
x_{0}^{*}]\\!]$ by a sequence that starts with a (short) time step. The set is
therefore contained in
$\mathit{Cover}\mathopen{}\mathclose{{}\left(\\{N_{1}\mid\exists
N_{0}\in[\\![\alpha x_{0}^{*}]\\!]\land N_{0}\longrightarrow_{d}N_{1}\land
0<d<1-\max({\it frac}(N_{0}))\\}}\right).$
By Lemma 16 and because $[\\![\emptyset\alpha]\\!]\subseteq[\\![X\alpha]\\!]$
for all $X\in\Sigma$ and $\alpha\in\Sigma^{*}$, we conclude that indeed,
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\alpha
x_{0}^{*}]\\!]}\right)\setminus[\\![\alpha x_{0}^{*}]\\!]\leavevmode\nobreak\
\subseteq\leavevmode\nobreak\
\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\emptyset\alpha
x_{0}^{*}]\\!]}\right)\subseteq\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![(x_{0}+1)\alpha
x_{0}^{*}]\\!]}\right)$. ∎
### 4.2. Acceleration
We propose an acceleration procedure based on unfolding expressions according
to Lemma 19 (interleaved with saturation steps to guarantee its premise) and
introducing new Kleene stars to keep the length of intermediate expressions
bounded. This procedure (depicted in Algorithm 1) is used to characterize an
initial subset of the coverability set.
Require: a simple expression $S_{0}=x_{1}x_{0}^{*}$ (of length 2 and with last
symbol starred)
Ensure: simple expressions $S_{1},S_{i}$ and $R$, of lengths 2, 4, and 2,
respectively.
1:$S_{1}\overset{\text{\tiny def}}{=}x_{1}^{1}(x_{0}^{1})^{*}=\mathit{SAT}(x_{1}x_{0}^{*})$
2:$S_{2}\overset{\text{\tiny def}}{=}x_{2}^{2}x_{1}^{2}(x_{0}^{2})^{*}=\mathit{SAT}((x_{0}^{1}+1)S_{1})$
3:$S_{3}\overset{\text{\tiny def}}{=}x_{3}^{3}x_{2}^{3}x_{1}^{3}(x_{0}^{3})^{*}=\mathit{SAT}((x_{0}^{2}+1)S_{2})$
4:$i\leftarrow 3$
5:repeat
6: $x_{i+1}^{i+1}x_{i}^{i+1}x_{i-1}^{i+1}x_{1}^{i+1}(x_{0}^{i+1})^{*}\overset{\text{\tiny def}}{=}\mathit{SAT}((x_{0}^{i}+1)S_{i})$
7: $S_{i+1}\overset{\text{\tiny def}}{=}x_{i+1}^{i+1}(x_{i}^{i+1})^{*}x_{1}^{i+1}(x_{0}^{i+1})^{*}$
8: $i\leftarrow i+1$
9:until $S_{i}=S_{i-1}$
10:$R\overset{\text{\tiny def}}{=}(x_{1}^{i}+1)(x_{i-1}^{i})^{*}$
11:return $S_{1},S_{i},R$
Algorithm 1 Accelerate
[Figure 2 (diagram omitted): starting from $x_{1}x_{0}^{*}$, the depicted run produces $S_{1}=\mathit{SAT}(x_{1}x_{0}^{*})=x_{1}^{1}(x_{0}^{1})^{*}$, $(x_{0}^{1}+1)S_{1}$, $S_{2}=\mathit{SAT}((x_{0}^{1}+1)S_{1})=x_{2}^{2}x_{1}^{2}(x_{0}^{2})^{*}$, $(x_{0}^{2}+1)S_{2}$, $S_{3}=\mathit{SAT}((x_{0}^{2}+1)S_{2})=x_{3}^{3}x_{2}^{3}x_{1}^{3}(x_{0}^{3})^{*}$, $(x_{0}^{3}+1)S_{3}$, $\mathit{SAT}((x_{0}^{3}+1)S_{3})=x_{4}^{4}x_{3}^{4}x_{2}^{4}x_{1}^{4}(x_{0}^{4})^{*}$, $S_{4}=x_{4}^{4}(x_{3}^{4})^{*}x_{1}^{4}(x_{0}^{4})^{*}$, $(x_{0}^{4}+1)S_{4}$, $\mathit{SAT}((x_{0}^{4}+1)S_{4})=x_{5}^{5}x_{4}^{5}(x_{3}^{5})^{*}x_{1}^{5}(x_{0}^{5})^{*}$, $S_{5}=x_{5}^{5}(x_{4}^{5})^{*}x_{1}^{5}(x_{0}^{5})^{*}$, and so on, via lines 1, 2, 3, 6, and 7 of the algorithm.]
Figure 2. A Run of Algorithm 1 (initial steps). The column on the left
indicates the line of code, the middle depicts the current expression and the
column on the right recalls its origin. Gray bars indicate that the respective
symbols are equal. Arrows denote (set) inclusion between symbols. The gray
vertical arrows indicate inclusions due to saturation (Lemma 13), as claimed
in item 1 of Lemma 20. Red and blue arrows indicate derived inclusions (as
stated in Lemma 20).
Given a length-2 simple expression $S_{0}$ where the rightmost symbol is
starred, the algorithm will first saturate (Definition 14, in line 1), and
then alternatingly rotate a copy of the rightmost symbol (Lemma 17), and
saturate the result (see lines 2, 3, 6). Since each such round extends the
length of the expression by one, we additionally collapse them (in line 7) by
adding an extra Kleene star to the symbol at the second position. The crucial
observation for the correctness of this procedure is that the subsumption step
in line 7 does not change the cover sets of the respective expressions.
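The following minimal sketch mirrors the control flow of Algorithm 1 above, assuming $\mathit{SAT}$ (Definition 14) and the `+1' operation on symbols are supplied by the region abstraction; the list encoding of expressions and all names are illustrative assumptions rather than the paper's implementation.

```python
from typing import Callable, FrozenSet, List, Tuple

Sym = FrozenSet                  # a symbol: a set of (place, clock) pairs
Expr = List[Tuple[Sym, bool]]    # simple expression: (symbol, is_starred)

def accelerate(S0: Expr, SAT: Callable[[Expr], Expr],
               plus_one: Callable[[Sym], Sym]) -> Tuple[Expr, Expr, Expr]:
    def rotate(E: Expr) -> Expr:
        # prepend a copy of the last (starred) symbol advanced by one
        # (Lemma 17), then saturate (Definition 14)
        return SAT([(plus_one(E[-1][0]), False)] + E)

    S1 = SAT(S0)                 # line 1: S_1, length 2
    S = rotate(S1)               # line 2: S_2, length 3
    S = rotate(S)                # line 3: S_3, length 4
    prev = None
    while S != prev:             # lines 5-9: until S_i = S_{i-1}
        prev = S
        T = rotate(S)            # line 6: a length-5 expression
        # line 7: collapse to length 4 -- star the second symbol and
        # drop the subsumed third one
        S = [T[0], (T[1][0], True), T[3], T[4]]
    R = [(plus_one(S[2][0]), False), (S[1][0], True)]  # line 10
    return S1, S, R              # line 11
```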
Observe that Algorithm 1 is well defined because the $\mathit{SAT}(S_{i})$ are
computable by Lemma 13. Termination is guaranteed by the following simple
observation.
###### Lemma 20.
Let $x_{j}^{i}\in\Sigma$ be the symbols computed by Algorithm 1. Then
1. (1)
$x_{j}^{i+1}\supseteq x_{j}^{i}$, for all $i>j\geq 0$.
2. (2)
$x_{i}^{i}\supseteq x_{i-1}^{i-1}$ and $x_{i}^{i+1}\supseteq x_{i-1}^{i}$, for
all $i\geq 3$.
###### Proof.
The first item is guaranteed by Point 2 of Lemma 13. In particular this means
that $x_{0}^{i+1}\supseteq x_{0}^{i}$ and therefore that
$(x_{0}^{i+1}+1)\supseteq(x_{0}^{i}+1)$ for all $i\geq 0$ (indicated as red
arrows in Figure 2). The second item now follows from this observation by
Lemma 15. ∎
###### Lemma 21 (Termination).
Algorithm 1 terminates with $i\leq 4\cdot\lvert
P\rvert\cdot(c_{\mathit{max}}+1)$.
###### Proof.
From Lemma 20 we deduce that for all $i\geq 2$, the expression $S_{i+1}$ is
point-wise larger than or equal to $S_{i}$ with respect to the subset ordering
on symbols. The claim now follows from the observation that all expressions
$S_{i\geq 3}$ have length $4$ and that every symbol $x_{i}\in\Sigma$ can only
increase at most $\lvert P\rvert\cdot(c_{\mathit{max}}+1)$ times. ∎
###### Lemma 22 (Correctness).
Let $S_{1},S_{\ell},R$ be the expressions computed by Algorithm 1
applied to the simple expression $x_{1}x_{0}^{*}$. Then
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![x_{1}x_{0}^{*}]\\!]}\right)=[\\![S_{1}]\\!]\cup[\\![S_{\ell}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![R]\\!]}\right)$.
###### Proof.
Let $S_{1},\ldots,S_{\ell}$ denote the expressions defined in lines 1, 2, 3, and
7 of the algorithm. That is, $\ell$ is the least index $i$ such that
$S_{i+1}=S_{i}$. We define a sequence $E_{i}$ of expressions inductively,
starting with $E_{1}\overset{\text{\tiny def}}{=}S_{1}$ and if
$E_{i}=e_{i}^{i}e_{i-1}^{i}\dots e_{0}^{i}$, we let
$E_{i+1}\overset{\text{\tiny
def}}{=}e_{i+1}^{i+1}e_{i}^{i+1}e_{i-1}^{i+1}\dots
e_{0}^{i+1}\overset{\text{\tiny
def}}{=}\mathit{SAT}((\hat{e}_{0}^{i}+1)E_{i})$. Here, the subscript
indicates the position of a symbol and not the iteration. This is the sequence of
expressions resulting from unfolding Lemma 19, interleaved with saturation
steps, just as in line 6 of the algorithm. That is, the expressions $E_{i}$ are
_not_ collapsed (line 7) and instead grow in length with $i$. Still,
$E_{1}=S_{1}$, $E_{2}=S_{2}$ and $E_{3}=S_{3}$, but $E_{4}\neq S_{4}$, because
the latter is the result of applying the subsumption step of line $7$ in our
algorithm. Notice that
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![x_{1}x_{0}^{*}]\\!]}\right)=\mathopen{}\mathclose{{}\left(\bigcup_{k-1\geq
i\geq
1}[\\![E_{i}]\\!]}\right)\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![E_{k}]\\!]}\right)$
holds for all $k\in\mathbb{N}$. We will use that
$\bigcup_{i\geq 2}[\\![E_{i}]\\!]=\bigcup_{i\geq
2}[\\![S_{i}]\\!]=[\\![S_{\ell}]\\!].$ (2)
We start by observing that for all $i,j\in\mathbb{N}$ it holds that
$e_{j}^{i}=x_{j}^{i}$. For $i\leq 3$ this holds trivially by definition of
$E_{i}=S_{i}$. For larger $i$, this can be seen by induction using Lemma 13.
Towards the first equality in Equation 2, let $S_{i}^{j}$ be the expression
resulting from $S_{i}=x_{i}^{i}({x_{i-1}^{i}})^{*}x_{1}^{i}({x_{0}^{i}})^{*}$
by unfolding the first star $j$ times. That is, $S_{i}^{j}\overset{\text{\tiny
def}}{=}x_{i}^{i}({x_{i-1}^{i}})^{(j)}x_{1}^{i}(x_{0}^{i})^{*}$, where the
superscript $(j)$ denotes $j$-fold concatenation. Clearly,
$[\\![S_{i}]\\!]=\bigcup_{j\geq 0}[\\![S_{i}^{j}]\\!]$ and so the
$\supseteq$-direction of the first equality in Equation 2 follows by
$\displaystyle[\\![S_{i}^{j}]\\!]=[\\![x_{i}^{i}({x_{i-1}^{i}})^{(j)}x_{1}^{i}(x_{0}^{i})^{*}]\\!]$
$\displaystyle\subseteq[\\![x_{i+j}^{i+j}\big{(}{x_{i+j-1}^{i+j}}{x_{i+j-2}^{i+j}}\ldots{x_{i}^{i+j}}\big{)}x_{1}^{i+j}(x_{0}^{i+j})^{*}]\\!]$
$\displaystyle\subseteq[\\![x_{i+j}^{i+j}\big{(}{x_{i+j-1}^{i+j}}{x_{i+j-2}^{i+j}}\ldots{x_{i}^{i+j}}\big{)}\big{(}{x_{i-1}^{i+j}}\ldots{x_{2}^{i+j}}\big{)}x_{1}^{i+j}(x_{0}^{i+j})^{*}]\\!]$
$\displaystyle=[\\![E_{i+j}]\\!],$
where the first inclusion is due to Lemma 20. The same helps for the other
direction:
$[\\![E_{i}]\\!]=[\\![x_{i}^{i}x_{i-1}^{i}x_{i-2}^{i}\dots
x_{2}^{i}x_{1}^{i}(x_{0}^{i})^{*}]\\!]\subseteq[\\![x_{i}^{i}{(x_{i-1}^{i})}^{(i-2)}x_{1}^{i}(x_{0}^{i})^{*}]\\!]=[\\![S_{i}^{i-2}]\\!]\subseteq[\\![S_{i}]\\!],$
(3)
which completes the proof of the first equality in Equation 2. The second
equality holds because $[\\![S_{i}]\\!]\subseteq[\\![S_{i+1}]\\!]$ for all
$i\geq 2$, by Lemma 20, and by definition of $S_{\ell}=S_{\ell+1}$. As a next
step we show that
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![S_{\ell}]\\!]}\right)=[\\![S_{\ell}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![R]\\!]}\right)$
(4)
First observe that
$[\\![R]\\!]=[\\![(x_{1}^{\ell}+1){(x_{\ell-1}^{\ell})}^{*}]\\!]=[\\![(x_{1}^{\ell}+1)x_{\ell}^{\ell}{(x_{\ell-1}^{\ell})}^{*}]\\!]$
and consequently,
$\displaystyle\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![R]\\!]}\right)$
$\displaystyle=\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![(x_{1}^{\ell}+1)x_{\ell}^{\ell}{(x_{\ell-1}^{\ell})}^{*}]\\!]}\right)$
$\displaystyle\subseteq\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![x_{\ell}^{\ell}{(x_{\ell-1}^{\ell})}^{*}x_{1}^{\ell}]\\!]}\right)$
$\displaystyle\subseteq\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![x_{\ell}^{\ell}{(x_{\ell-1}^{\ell})}^{*}x_{1}^{\ell}{(x_{0}^{\ell})}^{*}]\\!]}\right)=\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![S_{\ell}]\\!]}\right)$
where the first inclusion follows from Corollary 18 and the second because
$\mathcal{L}\mathopen{}\mathclose{{}\left(x_{\ell}^{\ell}{(x_{\ell-1}^{\ell})}^{*}x_{1}^{\ell}}\right)\subseteq\mathcal{L}\mathopen{}\mathclose{{}\left(x_{\ell}^{\ell}{(x_{\ell-1}^{\ell})}^{*}x_{1}^{\ell}{(x_{0}^{\ell})}^{*}}\right)$.
For the left to right inclusion in Equation 4, consider a marking
$M\in\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![S_{\ell}]\\!]}\right)\setminus[\\![S_{\ell}]\\!]$.
We show that
$M\in\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![R]\\!]}\right)$. Recall
that $\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![S_{\ell}]\\!]}\right)$
consists of all those markings $M$ so that there exists a finite path
$M_{0}\xrightarrow{*}_{\textit{Disc}}M^{\prime}_{0}\xrightarrow{d_{1}}_{\textit{Time}}M_{1}\xrightarrow{*}_{\textit{Disc}}M^{\prime}_{1}\xrightarrow{d_{2}}_{\textit{Time}}M_{2}\dots
M^{\prime}_{k-1}\xrightarrow{*}_{\textit{Disc}}M_{k}$
alternating between timed and (sequences of) discrete transition steps, with
$M_{0}\in[\\![S_{\ell}]\\!]$, $M_{k}\geq M$ and all $d_{i}\leq\max({\it
frac}(M^{\prime}_{i}))$.
By our choice of $M$, there must be a first marking in the sequence that
is not a member of $[\\![S_{\ell}]\\!]$. Since
$[\\![\mathit{SAT}(S_{\ell})]\\!]=[\\![S_{\ell}]\\!]$, we can assume an index
$i>0$ such that $M_{i}\notin[\\![S_{\ell}]\\!]$ but
$M^{\prime}_{i-1}\in[\\![S_{\ell}]\\!]$; that is, the step that takes us out of
$[\\![S_{\ell}]\\!]$ is a timed step.
Because $[\\![S_{\ell}]\\!]=\bigcup_{i\geq 2}[\\![S_{i}]\\!]$, it must hold
that
$M^{\prime}_{i-1}\in[\\![S_{j}]\\!]=[\\![x_{j}^{j}(x_{j-1}^{j})^{*}x_{1}^{j}(x_{0}^{j})^{*}]\\!]$
for some index $j\geq 2$. We claim that it already holds that
$M^{\prime}_{i-1}\in[\\![x_{j}^{j}{(x_{j-1}^{j})}^{*}x_{1}^{j}]\\!].$ (5)
Suppose not. If $d_{i}<\max({\it frac}(M^{\prime}_{i-1}))$ then
$M_{i}\in[\\![\emptyset S_{j}]\\!]\subseteq[\\![S_{j}]\\!]$ by Lemma 16,
contradiction. Otherwise, if $d_{i}=\max({\it frac}(M^{\prime}_{i-1}))$,
notice that every abstraction
$\mathit{abs}_{S}(M^{\prime}_{i-1})\in\mathcal{L}\mathopen{}\mathclose{{}\left(S_{j}}\right)$
must have $\lvert S\rvert=4$. So by Lemma 17,
$M_{i}\in[\\![(x_{0}^{j}+1)S_{j}]\\!]$. But then again
$[\\![(x_{0}^{j}+1)S_{j}]\\!]\subseteq[\\![\mathit{SAT}((x_{0}^{j}+1)S_{j})]\\!]\subseteq[\\![S_{j+1}]\\!],$
(6)
contradicting our assumption that $M_{i}\notin[\\![S_{\ell}]\\!]$. Therefore
Equation 5 holds. By Lemma 17 we derive that
$M_{i}\in[\\![(x_{1}^{j}+1)x_{j}^{j}(x_{j-1}^{j})^{*}]\\!]=[\\![(x_{1}^{j}+1)(x_{j-1}^{j})^{*}]\\!]\subseteq[\\![(x_{1}^{\ell}+1)(x_{\ell-1}^{\ell})^{*}]\\!]=[\\![R]\\!]$.
This concludes the proof of Equation 4.
Notice that by Lemma 19 we have that
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![x_{1}x_{0}^{*}]\\!]}\right)=[\\![\mathit{SAT}(x_{1}x_{0}^{*})]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\mathit{SAT}(x_{1}x_{0}^{*})]\\!]}\right)=[\\![S_{1}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![S_{1}]\\!]}\right).$
(7)
Analogously, we get for every $i\geq 1$ that
$\displaystyle\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![E_{i}]\\!]}\right)=[\\![\mathit{SAT}(E_{i})]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![\mathit{SAT}((x^{i}_{0}+1)E_{i})]\\!]}\right)=[\\![E_{i}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![E_{i+1}]\\!]}\right)$
(8)
This uses Lemma 19 and the fact that $\mathit{SAT}(E_{i})=E_{i}$ by
construction. Using Equation 8 and that
$[\\![E_{i}]\\!]\subseteq[\\![E_{i+1}]\\!]$ for $i\geq 2$, we deduce
$\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![S_{1}]\\!]}\right)=\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![E_{1}]\\!]}\right)=[\\![E_{1}]\\!]\cup\mathopen{}\mathclose{{}\left(\bigcup_{i\geq
2}\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![E_{i}]\\!]}\right)}\right).$
(9)
Finally, we can conclude the desired result as follows.
$\displaystyle\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![x_{1}x_{0}^{*}]\\!]}\right)$
$\displaystyle\overset{\text{\tiny(\ref{eq:acc:EisSunr1})}}{=}[\\![S_{1}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![S_{1}]\\!]}\right)\overset{\text{\tiny(\ref{eq:acc:3})}}{=}[\\![S_{1}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left(\bigcup_{i\geq
2}[\\![E_{i}]\\!]}\right)$
$\displaystyle\overset{\text{\tiny(\ref{eq:acc:EisS})}}{=}[\\![S_{1}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![S_{\ell}]\\!]}\right)$
$\displaystyle\overset{\text{\tiny(\ref{eq:acc:Rconnection})}}{=}[\\![S_{1}]\\!]\cup[\\![S_{\ell}]\\!]\cup\mathit{Cover}\mathopen{}\mathclose{{}\left([\\![R]\\!]}\right)\qed$
### 4.3. Main Result
The following theorem summarizes our main claims regarding the $\exists$COVER
problem.
###### Theorem 23.
Consider an instance of $\exists$COVER with ${\cal
N}=(P,T,\mathit{Var},G,\mathit{Pre},\mathit{Post})$ a non-consuming TPN where
$c_{\mathit{max}}$ is the largest constant appearing in the transition guards
$G$ encoded in unary, and let $p$ be an initial place and $t$ be a transition.
1. (1)
The number of different simple expressions of length $m$ is
$B(m)\overset{\text{\tiny def}}{=}2^{(\lvert
P\rvert\cdot(c_{\mathit{max}}+2)\cdot m)+m}$.
2. (2)
It is possible to compute a symbolic representation of the set of markings
coverable from some marking in the initial set $\mathbb{N}\cdot\\{(p,{0})\\}$,
as a finite set of simple expressions. I.e., one can compute simple
expressions $S_{1},\dots,S_{\ell}$ s.t. $\bigcup_{1\leq
i\leq\ell}[\\![S_{i}]\\!]=\mathit{Cover}\mathopen{}\mathclose{{}\left(\mathbb{N}\cdot\\{(p,{0})\\}}\right)$
and where $\ell\leq 3\cdot B(2)$. Each of the $S_{i}$ has length either $2$ or
$4$.
3. (3)
Checking if there exists
$M\in\mathit{Cover}\mathopen{}\mathclose{{}\left(\mathbb{N}\cdot\\{(p,0)\\}}\right)$
with $M\longrightarrow_{t}$ can be done in $\mathcal{O}(\lvert P\rvert\cdot
c_{\mathit{max}})$ deterministic space.
###### Proof.
For Item 1 note that a simple expression is described by a word where some
symbols have a Kleene star. There are $\lvert\Sigma\rvert^{m}$ different words
of length $m$ and $2^{m}$ possibilities to attach stars to symbols. Since the
alphabet is $\Sigma\overset{\text{\tiny
def}}{=}2^{P\times[{c_{\mathit{max}}+1}]}$ and
$\lvert[{c_{\mathit{max}}+1}]\rvert=c_{\mathit{max}}+2$, the result follows.
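As a quick sanity check of this count, a toy instance (parameters are ours):

```python
def B(m: int, P: int, c_max: int) -> int:
    """Number of simple expressions of length m: |Sigma|^m words times
    2^m star patterns, with |Sigma| = 2^(|P| * (c_max + 2))."""
    return 2 ** (P * (c_max + 2) * m + m)

# two places and c_max = 1: B(2) = 2^(2*3*2 + 2) = 2^14
assert B(2, P=2, c_max=1) == (2 ** (2 * 3)) ** 2 * 2 ** 2 == 2 ** 14
```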
Towards Item 2, we can assume w.l.o.g. that our TPN is non-consuming by Lemma
8, and thus the region abstraction introduced in Section 4.1 applies. In
particular, the initial set of markings $\mathbb{N}\cdot\\{(p,{0})\\}$ is
represented exactly by the expression $S_{0}\overset{\text{\tiny
def}}{=}\\{(p,0)\\}\emptyset^{*}$ where $\emptyset\in\Sigma$ is the symbol
corresponding to the empty set. That is, we have
$[\\![S_{0}]\\!]=\mathbb{N}\cdot\\{(p,{0})\\}$ and thus
$\mathit{Cover}([\\![S_{0}]\\!])=\mathit{Cover}(\mathbb{N}\cdot\\{(p,{0})\\})$.
The claimed expressions $S_{i}$ are the result of iterating Algorithm 1 until
a previously seen expression is revisited. Starting at $i=0$ and
$S_{0}\overset{\text{\tiny def}}{=}\\{(p,0)\\}\emptyset^{*}$, each round will
set $S_{i+1},S_{i+2}$ and $S_{i+3}$ to the result of applying Algorithm 1 to
$S_{i}$, and increment $i$ to $i+3$.
Notice that then all $S_{i}$ are simple expressions of length $2$ or $4$ and
that in particular, all expressions with index divisible by $3$ are of the
form $ab^{*}$ for $a,b\in\Sigma$. Therefore after at most $B(2)$ iterations,
an expression $S_{\ell}$ is revisited (with $\ell\leq 3B(2)$). Finally, an
induction using Lemma 22 provides that $\bigcup_{1\leq
i\leq\ell}[\\![S_{i}]\\!]=\mathit{Cover}\mathopen{}\mathclose{{}\left(\mathbb{N}\cdot\\{(p,{0})\\}}\right)$.
Towards Item 3, we modify the above algorithm for the $\exists$COVER problem
using the sliding-window technique. The algorithm is the same as above except
that, instead of recording all the expressions $S_{1},\dots,S_{\ell}$, we only
store the most recent ones and use them to decide whether the transition $t$
is enabled. If the index $i$ reaches the maximal value of $3\cdot B(2)$, we
return unsuccessfully.
The bounded index counter uses $\mathcal{O}(\log(B(2)))$ space; Algorithm 1
uses space $\mathcal{O}(\log(B(5)))$ because it stores only simple expressions
of length $\leq 5$. The space required to store the three expressions
resulting from each application of Algorithm 1 is
$\mathcal{O}(3\cdot\log(B(4)))$. For every encountered simple expression we
can check in logarithmic space whether the transition $t$ is enabled by some
marking in its denotation. Altogether the space used by our new algorithm is
bounded by $\mathcal{O}(\log(B(5)))$. By Item 1, this is
$\mathcal{O}(|P|\cdot(c_{\mathit{max}}+2))=\mathcal{O}(\lvert P\rvert\cdot
c_{\mathit{max}})$. ∎
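A sketch of the sliding-window driver just described, with `accelerate` (Algorithm 1) and the logspace enabledness check `enabled_by` as assumed black boxes:

```python
def exists_cover(S0, accelerate, enabled_by, bound):
    """S0: the length-2 initial expression; bound = 3 * B(2). Only the
    output of the latest call to Algorithm 1 is kept in memory, and the
    bounded counter stands in for explicit cycle detection."""
    S = S0
    for _ in range(bound):
        S1, Sl, R = accelerate(S)          # one application of Algorithm 1
        if any(enabled_by(E) for E in (S1, Sl, R)):
            return True                    # t enabled by a coverable marking
        S = R                              # continue from the length-2 remainder
    return False                           # counter exhausted: answer is no
```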
###### Corollary 24.
The $\exists$COVER problem for TPN is $\mathsf{PSPACE}$-complete.
###### Proof.
The $\mathsf{PSPACE}$ lower bound was shown in Theorem 7. The upper bound
follows from Lemma 8 and Item 3 of Theorem 23. ∎
## 5\. Conclusion and Future Work
We have shown that _Existential Coverability_ (and its dual of universal
safety) is $\mathsf{PSPACE}$-complete for TPN with one real-valued clock per
token. This implies the same complexity for checking safety of arbitrarily
large timed networks without a central controller. The absence of a central
controller makes a big difference, since the corresponding problem _with_ a
central controller is complete for $F_{\omega^{\omega^{\omega}}}$ [12].
It remains an open question whether these positive results for the controller-
less case can be generalized to multiple real-valued clocks per token. In the
case _with_ a controller, safety becomes undecidable already for two clocks
per token [2].
Another question is whether our results can be extended to more general
versions of timed Petri nets. In our version, clock values are either
inherited, advanced as time passes, or reset to zero. However, other versions
of TPN allow the creation of output-tokens with new non-deterministically
chosen non-zero clock values, e.g., the timed Petri nets of [3, 4] and the
read-arc timed Petri nets of [8].
## References
* [1] Parosh Aziz Abdulla, Karlis Čerāns, Bengt Jonsson, and Yih-Kuen Tsay. Algorithmic analysis of programs with well quasi-ordered domains. Information and Computation, 160(1–2):109–127, 2000.
* [2] Parosh Aziz Abdulla, Johann Deneux, and Pritha Mahata. Multi-clock timed networks. In Annual IEEE Symposium on Logic in Computer Science (LICS), pages 345–354, 2004.
* [3] Parosh Aziz Abdulla, Pritha Mahata, and Richard Mayr. Dense-timed Petri nets: Checking Zenoness, token liveness and boundedness. Logical Methods in Computer Science, 3(1), 2007.
* [4] Parosh Aziz Abdulla and Aletta Nylén. Timed Petri nets and BQOs. In International Conference on Application and Theory of Petri Nets (ICATPN), volume 2075 of LNCS, pages 53–70. Springer, 2001.
* [5] R. Alur and D. L. Dill. A theory of timed automata. Theoretical Computer Science, 126(2):183–235, 1994.
* [6] Benjamin Aminof, Sasha Rubin, Florian Zuleger, and Francesco Spegni. Liveness of parameterized timed networks. In International Colloquium on Automata, Languages and Programming (ICALP), volume 9135 of LNCS, 2015.
* [7] Rémi Bonnet, Alain Finkel, Serge Haddad, and Fernando Rosa-Velardo. Comparing Petri data nets and timed Petri nets. Technical Report LSV-10-23, LSV Cachan, 2010.
* [8] Patricia Bouyer, Serge Haddad, and Pierre-Alain Reynier. Timed Petri nets and timed automata: On the discriminating power of Zeno sequences. In International Colloquium on Automata, Languages and Programming (ICALP), pages 420–431. Springer, 2006.
* [9] David de Frutos Escrig, Valentín Valero Ruiz, and Olga Marroquín Alonso. Decidability of properties of timed-arc Petri nets. In International Conference on Application and Theory of Petri Nets (ICATPN), volume 1825 of LNCS, pages 187–206. Springer, 2000.
* [10] Alain Finkel and Philippe Schnoebelen. Well-structured transition systems everywhere! Theoretical Computer Science, 256(1–2):63–92, 2001.
* [11] Eric Goles, Pedro Montealegre, Ville Salo, and Ilkka Törmä. PSPACE-completeness of majority automata networks. Theoretical Computer Science, 609(1):118 – 128, 2016.
* [12] Serge Haddad, Sylvain Schmitz, and Philippe Schnoebelen. The ordinal recursive complexity of timed-arc Petri nets, data nets, and other enriched nets. In Annual IEEE Symposium on Logic in Computer Science (LICS), pages 355–364, 2012.
* [13] Lasse Jacobsen, Morten Jacobsen, Mikael H. Møller, and Jiří Srba. Verification of timed-arc Petri nets. In International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM), volume 6543 of LNCS, pages 46–72, 2011.
* [14] Ranko Lazić, Tom Newcomb, Joël Ouaknine, A.W. Roscoe, and James Worrell. Nets with tokens which carry data. Fundamenta Informaticae, 88(3):251–274, 2008.
* [15] Valentin Valero Ruiz, Fernando Cuartero Gomez, and David de Frutos Escrig. On non-decidability of reachability for timed-arc Petri nets. In International Workshop on Petri Nets and Performance Models. IEEE Computer Society, 1999.
* [16] Jiří Srba. Timed-arc Petri nets vs. networks of timed automata. In International Conference on Application and Theory of Petri Nets (ICATPN), volume 3536 of LNCS, pages 385–402. Springer, 2005.
## Appendix A Proof of Lemma 13
###### Lemma 25.
For every non-consuming TPN $\mathcal{N}$ there are polynomial time computable
functions $f:\Sigma\times\Sigma\times\Sigma\to\Sigma$ and
$g:\Sigma\times\Sigma\times\Sigma\to\Sigma$ with the following properties.
1. (1)
$f$ and $g$ are monotone (w.r.t. subset ordering) in each argument.
2. (2)
$f(\alpha,\beta,x)\supseteq x$ and $g(\alpha,\beta,x)\supseteq x$ for all
$\alpha,\beta,x\in\Sigma$.
3. (3)
For every word $w=x_{0}x_{1}\dots x_{k}$ over $\Sigma$,
$\alpha\overset{\text{\tiny def}}{=}x_{0}$ and $\beta\overset{\text{\tiny
def}}{=}\bigcup_{i>0}x_{i}$, and
$w^{\prime}\overset{\text{\tiny
def}}{=}f(\alpha,\beta,x_{0})g(\alpha,\beta,x_{1})\dots g(\alpha,\beta,x_{k})$
we have $[\\![w^{\prime}]\\!]=\\{M^{\prime\prime}\mid\exists
M\in[\\![w]\\!]\land M\xrightarrow{*}_{\textit{Disc}}M^{\prime}\geq
M^{\prime\prime}\\}$.
###### Proof.
(Sketch). It suffices to show the existence of such functions $f_{t}$ and
$g_{t}$ for individual transitions $t\in T$ and $\longrightarrow_{t}$ instead
of $\xrightarrow{*}_{\textit{Disc}}$. The functions $f$ and $g$ can then be
obtained by iterated applications of $f_{t}$ and $g_{t}$ (for all transitions
$t$) until convergence. (In addition to expanding $x$, the results of each
application $f_{t}$ and $g_{t}$ are also added to $\alpha$ and $\beta$,
respectively.) This works, because the functions $f_{t}$ and $g_{t}$ are
monotone and operate on the finite domain/range $\Sigma$. Since we have a
polynomial number of transitions, and each symbol in $\Sigma$ can increase (by
strict subset ordering) at most $\lvert P\rvert\cdot(c_{\mathit{max}}+1)$
times, the number of iterations is polynomial. Moreover, the properties of
Item 1, Item 2 and Item 3 carry over directly from $f_{t}$ and $g_{t}$ to $f$
and $g$, respectively.
Now we consider the definitions and properties of the functions $f_{t}$ and
$g_{t}$ for a particular transition $t$. Given a variable evaluation
$\pi:\mathit{Var}\to{\mathbb{R}}_{\geq 0}$, we define the functions $\pi_{0}$
and $\pi_{>0}$ from sets over $(P\times\mathit{Var})$ to sets over
$(P\times\mathbb{N})$ as follows. Intuitively, they cover the parts of the
assignment $\pi$ with zero/nonzero fractional values, respectively. Let
$\pi_{0}(S)\overset{\text{\tiny def}}{=}\\{(p,c)\,|\,(p,y)\in S\ \wedge\
\pi(y)=c\in\mathbb{N}\\}$ and $\pi_{>0}(S)\overset{\text{\tiny
def}}{=}\\{(p,c)\,|\,(p,y)\in S\ \wedge\ \lfloor\pi(y)\rfloor=c\ \wedge\ {\it
frac}(\pi(y))>0\\}$. The definitions are lifted to multisets in the
straightforward way.
Now let $t$ be a transition. We say that $(\alpha,\beta)$ enables $t$ iff
$\exists\pi:\mathit{Var}\to{\mathbb{R}}_{\geq 0}$ such that $\pi(y)\in
G(t)(y)$ for all variables $y$ and $\pi_{0}(\mathit{Pre}(t))\subseteq\alpha$
and $\pi_{>0}(\mathit{Pre}(t))\subseteq\beta$. Thus if
$\mathit{abs}(M)=x_{0}x_{1}\dots x_{n}$ then $M$ enables $t$ iff
$(x_{0},\bigcup_{i>0}x_{i})$ enables $t$, since all transition guards in
$G(t)$ are intervals bounded by integers (i.e., $t$ cannot distinguish between
different nonzero fractional values). Moreover, enabledness can be checked in
polynomial time (choose integers for the part in $\alpha$ and rationals with
fractional part $1/2$ for the part in $\beta$).
In the case where $(\alpha,\beta)$ does not enable $t$ we just let
$g_{t}(\alpha,\beta,x)\overset{\text{\tiny def}}{=}x$ and
$f_{t}(\alpha,\beta,x)\overset{\text{\tiny def}}{=}x$. The conditions above
are trivially satisfied in this case.
In the case where $(\alpha,\beta)$ enables $t$, let
$g_{t}(\alpha,\beta,x)\overset{\text{\tiny def}}{=}x\cup\gamma$ where $\gamma$
is defined as follows. We have $(p,c)\in\gamma$ iff there is a
$(p,y)\in\mathit{Post}(t)$ and $(q,y)\in\mathit{Pre}(t)$ such that $(q,c)\in
x$. Similarly, let $f_{t}(\alpha,\beta,x)\overset{\text{\tiny
def}}{=}x\cup\gamma$ where $\gamma$ is defined as follows. We have
$(p,c)\in\gamma$ iff either (1) there is a $(p,y)\in\mathit{Post}(t)$ and
$(q,y)\in\mathit{Pre}(t)$ such that $(q,c)\in x$, or (2) $c=0$ and there is a
$(p,0)\in\mathit{Post}(t)$. All these conditions can be checked in polynomial
time. Item 1 and Item 2 follow directly from the definition.
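The fixpoint iteration that builds $f$ and $g$ from the per-transition $f_{t},g_{t}$ can be sketched as follows, with `enabled`, `gamma_f`, and `gamma_g` standing in for the enabledness test and the two $\gamma$ constructions above; the encoding (symbols as frozensets of (place, clock value) pairs) is a simplification, not the paper's.

```python
def saturate_symbols(x0, xs, transitions, enabled, gamma_f, gamma_g):
    """x0: leftmost symbol (integral clock values); xs: the remaining
    symbols of the word. Applies all f_t, g_t until nothing grows."""
    while True:
        alpha = x0
        beta = frozenset().union(*xs) if xs else frozenset()
        new_x0, new_xs = x0, list(xs)
        for t in transitions:
            if not enabled(alpha, beta, t):
                continue                  # f_t and g_t act as the identity
            new_x0 = new_x0 | gamma_f(alpha, beta, new_x0, t)
            new_xs = [x | gamma_g(alpha, beta, x, t) for x in new_xs]
        if new_x0 == x0 and new_xs == list(xs):
            return x0, new_xs             # fixpoint reached; termination holds
        x0, xs = new_x0, new_xs           # since symbols only grow within Sigma
```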
Towards Item 3, we show
$[\\![w^{\prime}]\\!]\supseteq\\{M^{\prime\prime}\mid\exists
M\in[\\![w]\\!]\land M\longrightarrow_{t}M^{\prime}\geq M^{\prime\prime}\\}$.
(The proof of the reverse inclusion $\subseteq$ is similar.) Let
$w=x_{0}x_{1}\dots x_{k}$, $\alpha\overset{\text{\tiny def}}{=}x_{0}$,
$\beta\overset{\text{\tiny def}}{=}\bigcup_{i>0}x_{i}$ such that
$(\alpha,\beta)$ enables $t$ and $w^{\prime}\overset{\text{\tiny
def}}{=}f_{t}(\alpha,\beta,x_{0})g_{t}(\alpha,\beta,x_{1})\dots
g_{t}(\alpha,\beta,x_{k})$. If $M\in[\\![w]\\!]$ and
$M\longrightarrow_{t}M^{\prime}$ then $M^{\prime}\geq M$ since $\mathcal{N}$
is non-consuming. We show that every additional token $(p,u)\in
M^{\prime}\ominus M$ is included in $[\\![w^{\prime}]\\!]$. (This implies the
inclusion above, since $M^{\prime}\ominus M\geq M^{\prime\prime}\ominus M$.)
For every additional token $(p,u)\in M^{\prime}\ominus M$ there are two cases.
* •
Assume ${\it frac}(u)>0$. Then the token $(p,u)$ must have inherited its clock
value from some token $(q,u)\in M$ via a variable $y$ specified in the
Pre/Post of $t$ (since discrete transitions cannot create new fractional parts
of clock values). This case is covered by $\gamma$ in the definition of
$g_{t}$ above. In particular, if $(q,u)\in M$ was abstracted to $x_{i}$ in $w$
then $(p,u)\in M^{\prime}$ is abstracted to $g_{t}(\alpha,\beta,x_{i})$ in
$w^{\prime}$.
* •
Assume ${\it frac}(u)=0$. Then there are two cases. In the first case the
token $(p,u)$ inherited its clock value from some token $(q,u)\in M$ via a
variable $y$ specified in the Pre/Post of $t$. This case is covered by part
(1) of $\gamma$ in the definition of $f_{t}$ above. In particular, $(q,u)\in
M$ was abstracted to $x_{0}$ in $w$, because ${\it frac}(u)=0$. Thus $(p,u)\in
M^{\prime}$ is abstracted to $f_{t}(\alpha,\beta,x_{0})$ in $w^{\prime}$. In
the second case the token $(p,u)$ got its clock value via a clock-reset to
zero. This case is covered by part (2) of $\gamma$ in the definition of
$f_{t}$ above. In particular, in this case we must have $u=0$, and $(p,0)\in
M^{\prime}$ was abstracted to $f_{t}(\alpha,\beta,x_{0})$ in $w^{\prime}$.
It follows that $\mathit{abs}(M^{\prime})\leq w^{\prime}$, i.e., by the
ordering on symbols in $\Sigma$, every letter in $\mathit{abs}(M^{\prime})$ is
smaller than the corresponding letter in $w^{\prime}$. Thus
$M^{\prime}\in[\\![w^{\prime}]\\!]$. Since $M^{\prime}\geq M^{\prime\prime}$
and $[\\![w^{\prime}]\\!]$ is downward closed, we also have
$M^{\prime\prime}\in[\\![w^{\prime}]\\!]$ as required. ∎
###### Lemma 13 (restated).
###### Proof.
Let $f$ and $g$ be the functions from Lemma 25, which immediately yields Item
1 and Item 2. Towards Item 3, consider all words $w$ in $\mathcal{L}(E)$ that
contain each starred symbol in $E$ at least once. (The other cases are
irrelevant for $[\\![E]\\!]$ since they are subsumed by monotonicity.) For
each such word $w$, the $\alpha,\beta$ derived from $w$ in Lemma 25 are the
same as the $\alpha,\beta$ derived from $E$ in Item 3. If $x_{i}$ in $E$
carries a star then $w$ contains a corresponding nonempty subsequence
$x_{i}\dots x_{i}$. We apply Lemma 25 to each such $w$ to obtain the
corresponding $w^{\prime}$. The word $w^{\prime}$ then contains the
corresponding subsequence $g(\alpha,\beta,x_{i})\dots g(\alpha,\beta,x_{i})$.
Let $E^{\prime}$ then be defined as in Item 3, i.e., by applying functions to
the symbols and keeping the stars at the same symbols as in $E$. By Lemma 25,
this is computable in polynomial time. We have
$\mathcal{L}(E^{\prime})=\bigcup_{w\in\mathcal{L}(E)}\\{w^{\prime}\\}$. Thus
$[\\![E^{\prime}]\\!]=\bigcup_{w\in\mathcal{L}(E)}[\\![w^{\prime}]\\!]=\bigcup_{w\in\mathcal{L}(E)}\\{M^{\prime\prime}\mid\exists
M\in[\\![w]\\!]\land M\xrightarrow{*}_{\textit{Disc}}M^{\prime}\geq
M^{\prime\prime}\\}=\\{M^{\prime\prime}\mid\exists M\in[\\![E]\\!]\land
M\xrightarrow{*}_{\textit{Disc}}M^{\prime}\geq M^{\prime\prime}\\}$ for Item 3
as required. ∎
# Distributed Bootstrap for Simultaneous Inference Under High Dimensionality
Yang Yu
Department of Statistics
Purdue University
West Lafayette, IN 47907, USA Shih-Kang Chao
Department of Statistics
University of Missouri
Columbia, MO 65211, USA Guang Cheng
Department of Statistics
University of California, Los Angeles
Los Angeles, CA 90095, USA Part of this manuscript was completed while Cheng
was at Purdue.
###### Abstract
We propose a distributed bootstrap method for simultaneous inference on high-
dimensional massive data that are stored and processed with many machines. The
method produces an $\ell_{\infty}$-norm confidence region based on a
communication-efficient de-biased lasso, and we propose an efficient cross-
validation approach to tune the method at every iteration. We theoretically
prove a lower bound on the number of communication rounds $\tau_{\min}$ that
warrants the statistical accuracy and efficiency. Furthermore, $\tau_{\min}$
only increases logarithmically with the number of workers and the intrinsic
dimensionality, while nearly invariant to the nominal dimensionality. We test
our theory by extensive simulation studies, and a variable screening task on a
semi-synthetic dataset based on the US Airline On-Time Performance dataset.
The code to reproduce the numerical results is available at GitHub:
https://github.com/skchao74/Distributed-bootstrap.
Keywords: Distributed Learning, High-dimensional Inference, Multiplier
Bootstrap, Simultaneous Inference, De-biased Lasso
## 1 Introduction
Modern massive datasets with enormous sample size and tremendous
dimensionality are usually impossible to process with a single machine.
For remedy, a master-worker architecture is often adopted, e.g., Hadoop (Singh
and Kaur, 2014), which operates on a cluster of nodes for data storage and
processing, where the master node also contains a portion of the data; see
Figure 1. An inherent problem of this architecture is that inter-node
communication can be over a thousand times slower than intra-node computation
due to the inter-node communication protocol, which unfortunately always comes
with significant overhead (Lan et al., 2018; Fan et al., 2019a). Hence,
communication efficiency is usually a top concern for algorithm development in
distributed learning.
Figure 1: Master-worker architecture for storing and processing distributed
data.
Classical statistical methods are usually not communication-efficient as some
of them require hundreds or even thousands of passes over the entire dataset. In
the last few years, active research has greatly advanced our ability to
perform distributed statistical optimization and inference in, e.g., maximum
likelihood estimation (Zhang et al., 2012; Li et al., 2013; Chen and Xie,
2014; Battey et al., 2018; Jordan et al., 2019; Huang and Huo, 2019; Chen et
al., 2018; Zhu et al., 2020), Lasso (Lee et al., 2017; Wang et al., 2017; Wang
and Zhang, 2017), partially linear models (Zhao et al., 2016), nonstandard
regression (Shi et al., 2018; Banerjee et al., 2019), quantile regression
(Volgushev et al., 2019; Chen et al., 2019), principal component analysis (Fan
et al., 2019b; Chen et al., 2020), just to name a few. However, solutions for
many other problems in the distributed framework, for example the statistical
inference for high-dimensional models, are still elusive.
Simultaneous inference for high-dimensional statistical models has been widely
considered in many applications where datasets can be handled with a
standalone computer (Cai and Sun, 2017), and many recent papers focus on
bootstrap as an effective way to implement simultaneous inference (Dezeure et
al., 2017; Zhang and Cheng, 2017; Belloni et al., 2018, 2019; Yu et al.,
2020a). These existing methods typically use the well-celebrated de-biased
Lasso (van de Geer et al., 2014; Zhang and Zhang, 2014; Javanmard and
Montanari, 2014a, b), where the de-biased score results from the KKT condition
of the Lasso optimization problem. However, de-biased Lasso is not directly
applicable in a distributed computational framework. For one thing, the
implementation of de-biased Lasso requires expensive subroutines such as
nodewise Lasso (van de Geer et al., 2014), which has to be replaced by a more
communication-efficient method. For another, the quality of the de-biased
score, which is essential to the validity of the bootstrap, is generally worse
in a distributed computational framework than that in a centralized
computational framework. In particular, it is heavily biased so the asymptotic
normality fails. However, it can possibly be improved with sufficient rounds
of communication between the master and worker nodes. The bootstrap validity
therefore critically hinges on the interplay between the dimensionality of the
model and the sparsity level, as well as the rounds of communication, the
number of worker nodes and the size of local sample that are specific to the
distributed computational framework.
In this paper, we tackle the challenges discussed above and propose a
communication-efficient simultaneous inference method for high-dimensional
models. The main component at the core of our method is a novel way to improve
the quality of the de-biased score with a carefully selected number of rounds
of communication while relaxing the constraint on the number of machines. Our
method is motivated by Wang et al. (2017), who proposed an iterative procedure
for computing the estimator but no statistical inference was provided. Note
that the de-biased Lasso has been applied by Lee et al. (2017) to obtain a
communication-efficient $\sqrt{N}$-consistent estimator, but their method
restricts the number of worker nodes to be less than the local sample size.
Next, we apply communication-efficient multiplier bootstrap methods k-grad and
n+k-1-grad, which were originally proposed in Yu et al. (2020b) for low-
dimensional models. These bootstrap methods avoid repeatedly refitting the
models and relax the constraint on the number of machines that plagues the
challenge in implementation is that cross-validation, which is a popular
method for selecting tuning parameters, usually requires multiple passes of
the entire dataset and is typically inefficient in the distributed
computational framework. We propose a new cross-validation that only requires
the master node for implementation without needing to communicate with the
worker nodes.
Our theoretical study focuses on the explicit lower bounds on the rounds of
communication that warrant the validity of the bootstrap method for high-
dimensional generalized linear models; see Section 3.1 for an overview. In
short, the greater the number of worker nodes and/or the intrinsic
dimensionality, the greater the rounds of communication required for the
bootstrap validity. The bootstrap validity and efficiency are corroborated by
an extensive simulation study.
We further demonstrate the merit of our method on variable screening with a
semi-synthetic dataset, based on the large-scale US Airline On-Time
Performance dataset. By performing a pilot study on an independently sampled
subset of data, we take four key explanatory variables for flight delay, which
correspond to the dummy variables of the four years after the September 11
attacks. On another independently sampled subset of data, we combine the dummy
variables of the four years with artificial high-dimensional spurious
variables to create a design matrix. We perform our method on this artificial
dataset, and find that the relevant variables are correctly identified as the
number of iterations increases. In particular, we visualize the effect of these
four years by confidence intervals.
We go beyond our previous publication Yu et al. (2020b) in two major aspects:
(1) In this paper we focus on high-dimensional models. In particular, the
dimensionality of the model can exceed the sample size in each computing node.
We handle high dimensionality using $\ell_{1}$ penalization, and consider de-
biased Lasso under the distributed computational framework. (2) We tune the
$\ell_{1}$ penalized problem with a carefully designed cross-validation
method, which can be applied under distributed computational framework.
The rest of the paper is organized as follows. In Section 2, we introduce the
problem formulation of distributed high-dimensional simultaneous inference and
present the main bootstrap algorithm as well as the cross-validation algorithm
for hyperparameter tuning. Theoretical guarantees of bootstrap validity for
high-dimensional (generalized) linear models are provided in Section 3.
Section 4 presents simulation results that corroborate our theoretical
findings. Section 5 showcases an application on variable screening for high-
dimensional logistic regression with a big real dataset using our new method.
Finally, Section 6 concludes the paper. Technical details are in Appendices.
The proofs of the theoretical results are in Supplementary Material. The code
to reproduce the numerical results is in GitHub:
https://github.com/skchao74/Distributed-bootstrap.
Notations. We denote the $\ell_{p}$-norm ($p\geq 1$) of any vector
$v=(v_{1},\dots,v_{n})$ by $\|v\|_{p}=(\sum_{i=1}^{n}|v_{i}|^{p})^{1/p}$ and
$\|v\|_{\infty}=\max_{1\leq i\leq n}|v_{i}|$. The induced $p$-norm and the
max-norm of any matrix $M\in\mathbb{R}^{m\times n}$ (with element $M_{ij}$ at
$i$-th row and $j$-th column) are denoted by
$\left|\\!\left|\\!\left|{M}\right|\\!\right|\\!\right|_{p}=\sup_{x\in\mathbb{R}^{n};\|x\|_{p}=1}\|Mx\|_{p}$
and $\left|\\!\left|\\!\left|{M}\right|\\!\right|\\!\right|_{\max}=\max_{1\leq
i\leq m;1\leq j\leq n}|M_{i,j}|$. We write $a\lesssim b$ if $a=O(b)$, and
$a\ll b$ if $a=o(b)$.
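For instance, these norms can be checked numerically on a tiny example (illustrative only):

```python
import numpy as np

v = np.array([3.0, -4.0])
M = np.array([[1.0, -2.0],
              [0.0,  1.0]])
assert np.linalg.norm(v, 1) == 7.0        # ||v||_1
assert np.linalg.norm(v, np.inf) == 4.0   # ||v||_infinity
assert np.linalg.norm(M, 1) == 3.0        # |||M|||_1 (max column sum)
assert np.abs(M).max() == 2.0             # |||M|||_max
```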
## 2 Distributed Bootstrap for High-Dimensional Simultaneous Inference
In this section, we introduce the distributed computational framework and
present a novel bootstrap algorithm for high-dimensional simultaneous
inference under this framework. A communication-efficient cross-validation
method is proposed for tuning.
### 2.1 Distributed Computation Framework
Suppose data $\\{Z_{i}\\}_{i=1}^{N}$ are i.i.d., and $\mathcal{L}(\theta;Z)$
is a twice-differentiable convex loss function arising from a statistical
model, where $\theta=(\theta_{1},\dots,\theta_{d})\in\mathbb{R}^{d}$. Suppose
that the parameter of interest $\theta^{\ast}$ is the minimizer of an expected
loss:
$\theta^{\ast}=\operatorname*{\arg\min}_{\theta\in\mathbb{R}^{d}}\mathcal{L}^{\ast}(\theta),\mbox{
where $\mathcal{L}^{\ast}(\theta):\,=\mathbb{E}_{Z}[\mathcal{L}(\theta;Z)]$}.$
We consider a high-dimensional setting where $d>N$ is possible, and
$\theta^{\ast}$ is sparse, i.e., the support of $\theta^{\ast}$ is small.
We consider a distributed computation framework, in which the entire data are
stored distributedly in $k$ machines, and each machine has data size $n$.
Denote by $\\{Z_{ij}\\}_{i=1,\dots,n;j=1,\dots,k}$ the entire data, where
$Z_{ij}$ is $i$-th datum on the $j$-th machine $\mathcal{M}_{j}$, and $N=nk$.
Without loss of generality, assume that the first machine $\mathcal{M}_{1}$ is
the master node; see Figure 1. Define the local and global loss functions as
$\displaystyle\begin{split}\mbox{global loss:
}\mathcal{L}_{N}(\theta)&=\frac{1}{k}\sum_{j=1}^{k}\mathcal{L}_{j}(\theta),\quad\mbox{where}\\\
\mbox{local loss:
}\mathcal{L}_{j}(\theta)&=\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}(\theta;Z_{ij}),\quad
j=1,\dots,k.\end{split}$ (1)
A great computational overhead occurs when the master and worker nodes
communicate. In order to circumvent the overhead, the rounds of communications
between the master and worker nodes should be minimized, and the algorithms
with reduced communication overheads are “communication-efficient”.
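To fix ideas, the following toy simulation sets up the layout in (1) for the squared loss and performs one communication round in which the master averages the local gradients; the variable names are illustrative and not from the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 100, 10, 20                      # local size, nodes, dimension
theta_star = np.zeros(d); theta_star[:3] = 1.0
X = rng.standard_normal((k, n, d))         # X[j] lives on node M_j
y = X @ theta_star + 0.1 * rng.standard_normal((k, n))

def local_grad(j, theta):
    """Gradient of the local loss L_j at theta, computed on node j."""
    return X[j].T @ (X[j] @ theta - y[j]) / n

# one communication round: the workers send their local gradients to the
# master, which averages them into the gradient of the global loss L_N
theta = np.zeros(d)
grad_N = np.mean([local_grad(j, theta) for j in range(k)], axis=0)
```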
### 2.2 High-Dimensional Simultaneous Inference
In this paper, we focus on the simultaneous confidence region for
$\theta^{\ast}$ in a high-dimensional model, which is one of the effective
ways for variable selection and inference that are immune to the well-known
multiple testing problem. In particular, given an estimator $\widehat{\theta}$
that is $\sqrt{N}$-consistent, simultaneous confidence intervals can be found
with confidence $\alpha$, for large $\alpha\in(0,1)$, by finding the quantile
$\displaystyle c(\alpha)$
$\displaystyle:\,=\inf\\{t\in\mathbb{R}:P(\widehat{T}\leq
t)\geq\alpha\\},\quad\text{where}$ (2) $\displaystyle\widehat{T}$
$\displaystyle:\,=\big{\|}\sqrt{N}\big{(}\widehat{\theta}-\theta^{\ast}\big{)}\big{\|}_{\infty},$
(3)
and $\widehat{\theta}$ may be computed through the de-biased Lasso (van de
Geer et al., 2014; Zhang and Zhang, 2014; Javanmard and Montanari, 2014a, b):
$\displaystyle\widehat{\theta}=\widehat{\theta}_{Lasso}-\widehat{\Theta}\nabla\mathcal{L}_{N}(\widehat{\theta}_{Lasso}),$
(4)
where
$\widehat{\theta}_{Lasso}=\operatorname*{\arg\min}_{\theta\in\mathbb{R}^{d}}\mathcal{L}_{N}(\theta)+\lambda\|\theta\|_{1}$
is the Lasso estimator with some hyperparameter $\lambda>0$,
$\widehat{\Theta}$ is a surrogate inverse Hessian matrix and
$\mathcal{L}_{N}(\theta)=N^{-1}\sum_{i=1}^{N}\mathcal{L}(\theta;Z_{i})$ is the
empirical loss.
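As a concrete low-dimensional illustration of the de-biasing step (4) (a sketch, not the paper's implementation): for the linear model with squared loss, $\widehat{\Theta}$ can simply be the inverse empirical Gram matrix when $d<N$, while the nodewise Lasso surrogate replaces it in the truly high-dimensional case. Any Lasso solver would do; scikit-learn's is used here.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
N, d = 500, 20
theta_star = np.zeros(d); theta_star[:3] = 1.0
X = rng.standard_normal((N, d))
y = X @ theta_star + 0.1 * rng.standard_normal(N)

theta_lasso = Lasso(alpha=0.05, fit_intercept=False).fit(X, y).coef_
grad = X.T @ (X @ theta_lasso - y) / N           # gradient of the empirical loss
Theta_hat = np.linalg.inv(X.T @ X / N)           # surrogate inverse Hessian
theta_debiased = theta_lasso - Theta_hat @ grad  # de-biased estimator (4)
```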
Implementing the simultaneous inference based on $\widehat{\theta}$ and
$\widehat{T}$ in distributed computational framework inevitably faces some
computational challenges. Firstly, computing $\widehat{\theta}$ usually
involves some iterative optimization routines that can accumulate a large
communication overhead without careful engineering. Next, some bootstrap
methods have been proposed for estimating $c(\alpha)$, e.g., the multiplier
bootstrap (Zhang and Cheng, 2017), but they cannot be straightforwardly
implemented within a distributed computational framework due to excessive
resampling and communication. Even though some communication-efficient
bootstrap methods have been proposed, e.g., Kleiner et al. (2014); Sengupta et
al. (2016); Yu et al. (2020b), they either require a large number of machines
or are inapplicable to high-dimensional models.
Because of the above-mentioned difficulties, inference based on $\widehat{T}$
is inapplicable in the distributed computational framework and is regarded as
an “oracle” in this paper. Our goal is to provide a method that is
communication-efficient while entertaining the same statistical accuracy as
that based on the oracle $\widehat{T}$.
### 2.3 High-Dimensional Distributed Bootstrap
In order to adapt (4) to the distributed computational setting, we first need
to find a good substitute $\widetilde{\theta}$ for $\widehat{\theta}_{Lasso}$
that is communication-efficient, while noting that standard algorithms for
Lasso are not communication-efficient. Fortunately, $\widetilde{\theta}$ can
be computed by the communication-efficient surrogate likelihood (CSL)
algorithm with the $\ell_{1}$-norm regularization (Wang et al., 2017; Jordan
et al., 2019), which iteratively generates a sequence of estimators
$\widetilde{\theta}^{(t)}$ with regularization parameters $\lambda^{(t)}$ at
each iteration $t=0,\dots,\tau-1$. See Remark 1 for model tuning and Lines
2-17 of Algorithm 1 for the exact implementation. Under regularity conditions,
if $t$ is sufficiently large, it is warranted that $\widetilde{\theta}$ is
close to $\widehat{\theta}_{Lasso}$.
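For concreteness, the following sketch implements one $\ell_{1}$-regularized CSL update (Line 13 of Algorithm 1) for the squared loss via proximal gradient descent; the solver choice, step size, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def csl_step(X1, y1, grad_N, theta_prev, lam, iters=500):
    """Minimize L_1(theta) - theta^T (grad L_1(theta_prev) - grad_N)
    + lam * ||theta||_1 on the master data (X1, y1), squared loss."""
    n = X1.shape[0]
    grad_1 = X1.T @ (X1 @ theta_prev - y1) / n   # local gradient at theta_prev
    h = grad_1 - grad_N                          # surrogate-loss shift
    lr = n / np.linalg.norm(X1, 2) ** 2          # 1 / Lipschitz constant
    theta = theta_prev.copy()
    for _ in range(iters):
        g = X1.T @ (X1 @ theta - y1) / n - h     # gradient of the smooth part
        z = theta - lr * g
        theta = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # prox of l1
    return theta
```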
Typical algorithms for computing $\widehat{\Theta}$, e.g., the nodewise Lasso
(van de Geer et al., 2014), cannot be extended straightforwardly to the
distributed computational framework due to the same issue of communication
inefficiency. We overcome this by performing the nodewise Lasso using only
$\mathcal{M}_{1}$ without accessing the entire dataset. This simple approach
does not sacrifice accuracy as long as a sufficient amount of communication
brings $\widetilde{\theta}$ sufficiently close to $\theta^{*}$.
Lastly, given the surrogate estimators $\widetilde{\theta}$ for
$\widehat{\theta}_{Lasso}$ and $\widetilde{\Theta}$ for $\widehat{\Theta}$, we
estimate the asymptotic quantile $c(\alpha)$ of $\widehat{T}$ by bootstrapping
$\|\widetilde{\Theta}\sqrt{N}\nabla\mathcal{L}_{N}(\widetilde{\theta})\|_{\infty}$
using the k-grad or n+k-1-grad bootstrap originally proposed by Yu et al.
(2020b) for low-dimensional models. However, the number of communication
rounds between master and worker nodes has to be carefully fine-tuned for
high-dimensional models. In particular, the k-grad algorithm computes
$\displaystyle\overline{W}^{(b)}:\,=\bigg{\|}\underbrace{-\widetilde{\Theta}\frac{1}{\sqrt{k}}\sum_{j=1}^{k}\epsilon_{j}^{(b)}\sqrt{n}(\mathbf{g}_{j}-\bar{\mathbf{g}})}_{=:\overline{A}}\bigg{\|}_{\infty},$
(5)
where $\epsilon_{j}^{(b)}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)$
independent from the data,
$\mathbf{g}_{j}=\nabla\mathcal{L}_{j}(\widetilde{\theta})$ and
$\bar{\mathbf{g}}=k^{-1}\sum_{j=1}^{k}\mathbf{g}_{j}$. However, it is known
that k-grad does not perform well when $k$ is small (Yu et al., 2020b). The
improved algorithm n+k-1-grad computes
$\displaystyle\begin{split}\widetilde{W}^{(b)}:\,=\bigg{\|}&\underbrace{-\widetilde{\Theta}\frac{1}{\sqrt{n+k-1}}\bigg{(}\sum_{i=1}^{n}\epsilon_{i1}^{(b)}(\mathbf{g}_{i1}-\bar{\mathbf{g}})+\sum_{j=2}^{k}\epsilon_{j}^{(b)}\sqrt{n}(\mathbf{g}_{j}-\bar{\mathbf{g}})\bigg{)}}_{=:\widetilde{A}}\bigg{\|}_{\infty},\end{split}$
(6)
where $\epsilon_{i1}^{(b)}$ and $\epsilon_{j}^{(b)}$ are i.i.d.
$\mathcal{N}(0,1)$ multipliers, and
$\mathbf{g}_{i1}=\nabla\mathcal{L}(\widetilde{\theta};Z_{i1})$ is based on a
single datum $Z_{i1}$ in the master. The key advantage of k-grad or n+k-1-grad
is that once the master has the gradients $\mathbf{g}_{j}$ from the worker
nodes, the quantile of $\\{{\overline{W}}^{(b)}\\}_{b=1}^{B}$ can be computed
in the master node only, without needing to communicate with worker nodes. See
Algorithm 3 in the Appendix for the pseudocode of k-grad and n+k-1-grad.
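A minimal sketch of the k-grad computation (5), run entirely on the master once the local gradients have been collected (function and variable names are ours):

```python
import numpy as np

def k_grad_quantile(Theta, G, n, alpha=0.95, B=500, seed=0):
    """Theta: (d, d) surrogate inverse Hessian; G: (k, d) array whose rows
    are the local gradients g_j at theta_tilde; n: local sample size.
    Returns the bootstrap estimate of the quantile c(alpha)."""
    rng = np.random.default_rng(seed)
    k = G.shape[0]
    centered = np.sqrt(n) * (G - G.mean(axis=0))   # sqrt(n) * (g_j - g_bar)
    W = np.empty(B)
    for b in range(B):
        eps = rng.standard_normal(k)               # i.i.d. N(0,1) multipliers
        A = -Theta @ (centered.T @ eps) / np.sqrt(k)
        W[b] = np.abs(A).max()                     # ell-infinity norm
    return np.quantile(W, alpha)
```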
Algorithm 1 k-grad/n+k-1-grad with de-biased $\ell_{1}$-CSL estimator
1:Require: $\tau\geq 1$ rounds of communication; hyperparameters
$\\{\lambda^{(t)}\\}_{t=0}^{\tau-1}$, nodewise Lasso procedure
Node$(\cdot,\cdot)$ with hyperparameters $\\{\lambda_{l}\\}_{l=1}^{d}$ (see
Section B)
2:$\widetilde{\theta}^{(0)}\leftarrow\operatorname*{\arg\min}_{\theta}\mathcal{L}_{1}(\theta)+\lambda^{(0)}\|\theta\|_{1}$
at $\mathcal{M}_{1}$
3:Compute $\widetilde{\Theta}$ by running
Node$(\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)}),\\{\lambda_{l}\\}_{l=1}^{d})$
at $\mathcal{M}_{1}$
4:for $t=1,\ldots,\tau$ do
5: Transmit $\widetilde{\theta}^{(t-1)}$ to $\\{\mathcal{M}_{j}\\}_{j=2}^{k}$
6: Compute $\nabla\mathcal{L}_{1}(\widetilde{\theta}^{(t-1)})$ at
$\mathcal{M}_{1}$
7: for $j=2,\ldots,k$ do
8: Compute $\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(t-1)})$ at
$\mathcal{M}_{j}$
9: Transmit $\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(t-1)})$ to
$\mathcal{M}_{1}$
10: end for
11: $\nabla\mathcal{L}_{N}(\widetilde{\theta}^{(t-1)})\leftarrow
k^{-1}\sum_{j=1}^{k}\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(t-1)})$ at
$\mathcal{M}_{1}$
12: if $t<\tau$ then
13:
$\widetilde{\theta}^{(t)}\leftarrow\operatorname*{\arg\min}_{\theta}\mathcal{L}_{1}(\theta)-\theta^{\top}\left(\nabla\mathcal{L}_{1}(\widetilde{\theta}^{(t-1)})-\nabla\mathcal{L}_{N}(\widetilde{\theta}^{(t-1)})\right)+\lambda^{(t)}\|\theta\|_{1}$
at $\mathcal{M}_{1}$
14: else
15:
$\widetilde{\theta}^{(\tau)}\leftarrow\widetilde{\theta}^{(\tau-1)}-\widetilde{\Theta}\nabla\mathcal{L}_{N}(\widetilde{\theta}^{(\tau-1)})$
at $\mathcal{M}_{1}$
16: end if
17:end for
18:Run DistBoots$(\text{`{k-grad}' or `{n+k-1-grad}'},\widetilde{\theta}=\widetilde{\theta}^{(\tau)},\\{\mathbf{g}_{j}=\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(\tau-1)})\\}_{j=1}^{k},\widetilde{\Theta}=\widetilde{\Theta})$ at $\mathcal{M}_{1}$
Algorithm 1 presents the complete statistical inference procedure. There are
two key innovative steps in Algorithm 1 that facilitate the statistical
inference for high-dimensional models with a big dataset. First, we introduce
de-biased Lasso in distributed inference, which goes beyond high dimensional
model estimation considered in Jordan et al. (2019); Wang et al. (2017).
Second, we use nodewise Lasso to provide a sparse estimation of the high-
dimensional inverse Hessian matrix instead of the empirical Hessian used in Yu
et al. (2020b).
Algorithm 1 achieves high computational efficiency for two reasons.
First, we initialize Algorithm 1 with a warm start, namely the Lasso estimator
fitted on the data in the master node, which provides
a good initializer. Second, because the nodewise Lasso is computationally
expensive, we perform it only once at the very beginning and freeze it through
the iterations of the algorithm without updating it.
The number of iterations $\tau$ in Algorithm 1 steers the trade-off between
statistical accuracy and communication efficiency. In particular, a larger
$\tau$ leads to a more accurate coverage of the simultaneous confidence
interval, but it also induces a higher communication cost. Therefore, studying
the minimal $\tau$ that warrants the bootstrap accuracy is crucial, which is
done in Section 3.
###### Remark 1
Two groups of hyperparameters need to be chosen in Algorithm 1:
$\\{\lambda^{(t)}\\}_{t=0}^{\tau-1}$ for regularization in CSL estimation, and
$\\{\lambda_{l}\\}_{l=1}^{d}$ for regularization in nodewise Lasso (see
Algorithm 4). In Section 2.4, we propose a cross-validation method for tuning
$\\{\lambda^{(t)}\\}_{t=0}^{\tau-1}$. As to $\\{\lambda_{l}\\}_{l=1}^{d}$,
while van de Geer et al. (2014) suggests choosing the same value for all
$\lambda_{l}$ by cross-validation, a potentially better way may be to allow
$\lambda_{l}$ to be different across $l$ and select each $\lambda_{l}$ via
cross-validation for the corresponding nodewise Lasso, which is the approach
we take for a distributed variable screening task in Section 5.
###### Remark 2
There exist other options than CSL for $\widetilde{\theta}$ such as the
averaging de-biased estimator (Lee et al., 2017), but an additional round of
communication may be needed to compute the local gradients. More importantly,
their method may be inaccurate when $n<k$.
### 2.4 Communication-Efficient Cross-Validation
We propose a communication-efficient cross-validation method for tuning the
hyperparameters $\\{\lambda^{(t)}\\}_{t=0}^{\tau-1}$ in Algorithm 1. Wang et
al. (2017) proposes to hold out a validation set on each node for selecting
$\lambda^{(t)}$. However, this method requires fitting the model for each
candidate value of $\lambda^{(t)}$, which uses the same communication cost as
the complete CSL estimation procedure.
We propose a communication-efficient $K$-fold cross-validation method that
chooses $\lambda^{(t)}$ for the CSL estimation at every iteration $t$. At
iteration $t$, the master uses the gradients already communicated from the
worker nodes at iteration $t-1$. Hence, the cross-validation needs only the
master node, which circumvents costly communication between the master and the
worker nodes.
Specifically, notice that the surrogate loss (see Line 13 in Algorithm 1) is
constructed using $n$ observations $\mathcal{Z}=\\{Z_{i1}\\}_{i=1}^{n}$ in the
master node and $k-1$ gradients
$\mathcal{G}=\\{\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(t-1)})\\}_{j=2}^{k}$
from the worker nodes. We then create $K$ (approximately) equal-size
partitions to both $\mathcal{Z}$ and $\mathcal{G}$. The objective function for
training is formed using $K-1$ partitions of $\mathcal{Z}$ and $\mathcal{G}$.
In terms of the measure of fit, instead of computing the original likelihood
or loss, we calculate the unregularized surrogate loss using the last
partition of $\mathcal{Z}$ and $\mathcal{G}$, still in the master node. See
Algorithm 2 for the pseudocode.
Algorithm 2 Distributed $K$-fold cross-validation for $t$-step CSL
1:Require: $(t-1)$-step CSL estimate $\widetilde{\theta}^{(t-1)}$, set
$\Lambda$ of candidate values for $\lambda^{(t)}$, partition of master data
$\mathcal{Z}=\bigcup_{q=1}^{K}\mathcal{Z}_{q}$, partition of worker gradients
$\mathcal{G}=\bigcup_{q=1}^{K}\mathcal{G}_{q}$
2:for $q=1,\dots,K$ do
3: $\mathcal{Z}_{train}\leftarrow\bigcup_{r\neq q}\mathcal{Z}_{r}$;
$\mathcal{Z}_{test}\leftarrow\mathcal{Z}_{q}$
4: $\mathcal{G}_{train}\leftarrow\bigcup_{r\neq q}\mathcal{G}_{r}$;
$\mathcal{G}_{test}\leftarrow\mathcal{G}_{q}$
5:
$g_{1,train}\leftarrow\text{Avg}_{Z\in\mathcal{Z}_{train}}\Big{(}\nabla\mathcal{L}(\widetilde{\theta}^{(t-1)};Z)\Big{)}$;
$g_{1,test}\leftarrow\text{Avg}_{Z\in\mathcal{Z}_{test}}\Big{(}\nabla\mathcal{L}(\widetilde{\theta}^{(t-1)};Z)\Big{)}$
6:
$\bar{g}_{train}\leftarrow\text{Avg}_{g\in\\{g_{1,train}\\}\cup\mathcal{G}_{train}}(g)$;
$\bar{g}_{test}\leftarrow\text{Avg}_{g\in\\{g_{1,test}\\}\cup\mathcal{G}_{test}}(g)$
7: for $\lambda\in\Lambda$ do
8:
$\beta\leftarrow\operatorname*{\arg\min}_{\theta}\text{Avg}_{Z\in\mathcal{Z}_{train}}\big{(}\mathcal{L}(\theta;Z)\big{)}-\theta^{\top}\left(g_{1,train}-\bar{g}_{train}\right)+\lambda\|\theta\|_{1}$
9:
$Loss(\lambda,q)\leftarrow\text{Avg}_{Z\in\mathcal{Z}_{test}}\big{(}\mathcal{L}(\beta;Z)\big{)}-\beta^{\top}\left(g_{1,test}-\bar{g}_{test}\right)$
10: end for
11:end for
12:Return
$\lambda^{(t)}=\operatorname*{\arg\min}_{\lambda\in\Lambda}K^{-1}\sum_{q=1}^{K}Loss(\lambda,q)$
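To make Algorithm 2 concrete, below is a minimal Python sketch specialized to the least-squares loss. The helper `surrogate_lasso` (a plain ISTA solver for the penalized surrogate objective) and all function names are illustrative assumptions, not the implementation accompanying the paper.

```python
import numpy as np

def soft_threshold(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def surrogate_lasso(X, y, v, lam, n_iter=500):
    # Minimize (1/2m)||y - X theta||^2 - theta^T v + lam*||theta||_1 via ISTA.
    m, d = X.shape
    step = m / np.linalg.norm(X, 2) ** 2           # 1 / Lipschitz constant
    theta = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y) / m - v       # gradient of the smooth part
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

def distributed_cv(X1, y1, theta_prev, worker_grads, Lambda, K=5):
    # Algorithm 2 for the least-squares loss; runs entirely on the master node.
    # worker_grads: the k-1 gradients already communicated at iteration t-1
    # (the paper uses K' = min{k-1, 5} folds so no gradient fold is empty).
    n = len(y1)
    z_folds = np.array_split(np.arange(n), K)
    g_folds = np.array_split(np.arange(len(worker_grads)), K)
    G = np.asarray(worker_grads)
    cv_loss = {lam: 0.0 for lam in Lambda}
    for q in range(K):
        z_tr = np.concatenate([z_folds[r] for r in range(K) if r != q])
        g_tr = np.concatenate([g_folds[r] for r in range(K) if r != q])
        Xtr, ytr = X1[z_tr], y1[z_tr]
        Xte, yte = X1[z_folds[q]], y1[z_folds[q]]
        g1_tr = Xtr.T @ (Xtr @ theta_prev - ytr) / len(ytr)   # local gradient
        g1_te = Xte.T @ (Xte @ theta_prev - yte) / len(yte)
        gbar_tr = np.vstack([g1_tr, G[g_tr]]).mean(axis=0)
        gbar_te = np.vstack([g1_te, G[g_folds[q]]]).mean(axis=0)
        for lam in Lambda:
            beta = surrogate_lasso(Xtr, ytr, g1_tr - gbar_tr, lam)
            fit = 0.5 * np.mean((yte - Xte @ beta) ** 2) - beta @ (g1_te - gbar_te)
            cv_loss[lam] += fit / K                # accumulate Loss(lambda, q)
    return min(cv_loss, key=cv_loss.get)           # line 12 of Algorithm 2
```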
## 3 Theoretical Analysis
Section 3.1 provides an overview of the theoretical results. Sections 3.2 and
3.3 present the rigorous statements for linear models and generalized linear
models (GLMs), respectively.
### 3.1 An Overview of Theoretical Results
As discussed in Section 2.3, $\tau$ has to be large enough to ensure the
bootstrap accuracy, yet a large $\tau$ also incurs a high communication cost.
Hence, our main goal is to pin down the minimal number of iterations
$\tau_{\min}$ (communication rounds) sufficient for the bootstrap validity of
Algorithm 1.
An overview of the theoretical results is provided in Figure 2.
As an overall trend in Figure 2, $\tau_{\min}$ increases logarithmically in
$k$ and decreases in $n$ for both k-grad and n+k-1-grad in (generalized)
linear models; in addition, $\tau_{\min}$ increases logarithmically in
$\overline{s}$, where $\overline{s}$ is the maximum of the sparsity of the
true coefficient vector and that of the inverse population Hessian matrix, to
be formally defined later.
Comparing the left and right panels of Figure 2 under a fixed tuple
$(n,k,\overline{s})$, the $\tau_{\min}$ for k-grad is always greater than or
equal to that for n+k-1-grad, which indicates a greater communication
efficiency of n+k-1-grad. For very small $k$, n+k-1-grad can still provably
work, while k-grad cannot. In particular, $\tau_{\min}=1$ suffices for certain
instances of n+k-1-grad but is always too small for k-grad.
Regarding the comparison between high-dimensional sparse linear models (top
panels) and GLMs (bottom panels), GLMs typically require a greater $n$ than
sparse linear models, which ensures that the error between
$\widetilde{\theta}^{(t)}$ and $\theta^{\ast}$ decreases in a short transient
phase; see Section C in the Appendix for details.
Figure 2: Illustration of Theorems 3-11. Gray regions are where the bootstrap
validity is not warranted by our theory; the remaining area is colored blue
with lightness varying according to the lower bound on the number of
iterations $\tau$. Here $\gamma_{n}=\log_{d}n$, $\gamma_{k}=\log_{d}k$, and
$\gamma_{\bar{s}}=\log_{d}\bar{s}$ are the orders of the local sample size
$n$, the number of machines $k$, and the sparsity $\bar{s}$.
### 3.2 Linear Model
Suppose that $N$ i.i.d. observations are generated by a linear model
$y=x^{\top}\theta^{\ast}+e$ with an unknown coefficient vector
$\theta^{\ast}\in\mathbb{R}^{d}$, covariate random vector
$x\in\mathbb{R}^{d}$, and noise $e\in\mathbb{R}$ independent of $x$ with zero
mean and variance $\sigma^{2}$. We consider the least-squares loss
$\mathcal{L}(\theta;z)=\mathcal{L}(\theta;x,y)=(y-x^{\top}\theta)^{2}/2$.
We impose the following assumptions on the linear model.
* •
(L1) $x$ is sub-Gaussian, i.e.,
$\sup_{\|w\|_{2}\leq
1}\mathbb{E}\big{[}\exp((w^{\top}x)^{2}/L^{2})\big{]}=O(1),$
for some absolute constant $L>0$. Moreover,
$1/\lambda_{\tiny{\min}}(\Sigma)\leq\mu$ for some absolute constant $\mu>0$,
where $\Sigma=\mathbb{E}[xx^{\top}]$.
* •
(L2) $e$ is sub-Gaussian, i.e.,
$\mathbb{E}\big{[}\exp(e^{2}/L^{\prime 2})\big{]}=O(1),$
for some absolute constant $L^{\prime}>0$. Moreover, $\sigma>0$ is an absolute
constant.
* •
(L3) $\theta^{\ast}$ and $\Theta_{l,\cdot}$ are sparse for $l=1,\dots,d$, where
$\Theta:\,=\Sigma^{-1}=\mathbb{E}[xx^{\top}]^{-1}$. Specifically, we denote by
$S:\,=\\{l:\theta^{\ast}_{l}\neq 0\\}$ the active set of covariates and its
cardinality by $s_{0}:\,=|S|$. Also, we define $s_{l}:\,=|\\{l^{\prime}\neq
l:\Theta_{l,l^{\prime}}\neq 0\\}|$, $s^{*}:\,=\max_{l}s_{l}$, and
$\overline{s}=s_{0}\vee s^{*}$.
Assumption (L1) ensures a restricted eigenvalue condition when
$n\gtrsim\bar{s}\log d$ by Rudelson and Zhou (2013). Under these assumptions,
we first investigate the theoretical properties of Algorithm 1, in which
k-grad is applied with the de-biased $\ell_{1}$-CSL estimator after $\tau$
rounds of communication.
Define
$\displaystyle T$
$\displaystyle:\,=\big{\|}\sqrt{N}\big{(}\widetilde{\theta}^{(\tau)}-\theta^{\ast}\big{)}\big{\|}_{\infty},$
(7)
where $\widetilde{\theta}^{(\tau)}$ is an output of Algorithm 1.
###### Theorem 3 (k-grad, sparse linear model)
Suppose (L1)-(L3) hold, and that we run Algorithm 1 with the k-grad method
in linear models. Let
$\displaystyle\lambda_{l}\asymp\sqrt{\frac{\log
d}{n}}\quad\text{and}\quad\lambda^{(t)}\asymp\sqrt{\frac{\log
d}{nk}}+\sqrt{\frac{\log d}{n}}\bigg{(}s_{0}\sqrt{\frac{\log
d}{n}}\bigg{)}^{t},$ (8)
for $l=1,\dots,d$ and $t=0,\dots,\tau-1$. Assume $n=d^{\gamma_{n}}$,
$k=d^{\gamma_{k}}$, $\overline{s}=d^{\gamma_{s}}$ for some constants
$\gamma_{n},\gamma_{k},\gamma_{s}>0$. If $\gamma_{n}>3\gamma_{s}$,
$\gamma_{k}>3\gamma_{s}$, and $\tau\geq\tau_{\min}$, where
$\displaystyle\tau_{\min}=1+\left\lfloor\max\left\\{\frac{\gamma_{k}+\gamma_{s}}{\gamma_{n}-2\gamma_{s}},1+\frac{3\gamma_{s}}{\gamma_{n}-2\gamma_{s}}\right\\}\right\rfloor,$
then for $T$ defined in (7), we have
$\displaystyle\sup_{\alpha\in(0,1)}|P(T\leq
c_{\overline{W}}(\alpha))-\alpha|=o(1),$ (9)
where
$c_{\overline{W}}(\alpha):\,=\inf\\{t\in\mathbb{R}:P_{\epsilon}(\overline{W}\leq
t)\geq\alpha\\}$, in which $\overline{W}$ is the k-grad bootstrap statistic
with the same distribution as $\overline{W}^{(b)}$ in (5) and $P_{\epsilon}$
denotes the probability with respect to the randomness from the multipliers.
In addition, (9) also holds if $T$ is replaced by $\widehat{T}$ defined in
(3).
Theorem 3 warrants the bootstrap validity for the simultaneous confidence
intervals produced by Algorithm 1 with k-grad. Furthermore, it also
suggests that the bootstrap quantile can approximate the quantile of the
oracle statistic $T$; that is, our distributed bootstrap procedure is as
statistically efficient as the oracle centralized method.
Next, we show that the bootstrap validity and efficiency established for
k-grad also hold for n+k-1-grad in Algorithm 1.
###### Theorem 4 (n+k-1-grad, sparse linear model)
Suppose (L1)-(L3) hold, and that we run Algorithm 1 with the n+k-1-grad
method. Let $\lambda_{l}$ and $\lambda^{(t)}$ be as in (8) for $l=1,\dots,d$
and $t=0,\dots,\tau-1$. Assume $n=d^{\gamma_{n}}$, $k=d^{\gamma_{k}}$,
$\overline{s}=d^{\gamma_{s}}$ for some constants
$\gamma_{n},\gamma_{k},\gamma_{s}>0$. If $\gamma_{n}>3\gamma_{s}$,
$\gamma_{n}+\gamma_{k}>4\gamma_{s}$, and $\tau\geq\tau_{\min}$, where
$\displaystyle\tau_{\min}=1+\left\lfloor\frac{(\gamma_{k}\vee\gamma_{s})+\gamma_{s}}{\gamma_{n}-2\gamma_{s}}\right\rfloor,$
then for $T$ defined in (7), we have
$\displaystyle\sup_{\alpha\in(0,1)}|P(T\leq
c_{\widetilde{W}}(\alpha))-\alpha|=o(1),$ (10)
where
$c_{\widetilde{W}}(\alpha):\,=\inf\\{t\in\mathbb{R}:P_{\epsilon}(\widetilde{W}\leq
t)\geq\alpha\\},$
in which $\widetilde{W}$ is the n+k-1-grad bootstrap statistic with the same
distribution as $\widetilde{W}^{(b)}$ in (6) and $P_{\epsilon}$ denotes the
probability with respect to the randomness from the multipliers.
In addition, (10) also holds if $T$ is replaced by $\widehat{T}$ defined in
(3).
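As a concrete reading of the two theorems, the following sketch evaluates the $\tau_{\min}$ formulas of Theorems 3 and 4; the exponent tuples are arbitrary illustrations.

```python
from math import floor

def tau_min_kgrad(gn, gk, gs):
    # tau_min from Theorem 3; valid only when gamma_n > 3*gamma_s, gamma_k > 3*gamma_s
    if not (gn > 3 * gs and gk > 3 * gs):
        return None
    return 1 + floor(max((gk + gs) / (gn - 2 * gs), 1 + 3 * gs / (gn - 2 * gs)))

def tau_min_nk1grad(gn, gk, gs):
    # tau_min from Theorem 4; valid only when gamma_n > 3*gamma_s, gamma_n + gamma_k > 4*gamma_s
    if not (gn > 3 * gs and gn + gk > 4 * gs):
        return None
    return 1 + floor((max(gk, gs) + gs) / (gn - 2 * gs))

for gn, gk, gs in [(2.0, 1.0, 0.25), (1.5, 1.2, 0.3), (3.0, 0.2, 0.25)]:
    print((gn, gk, gs), tau_min_kgrad(gn, gk, gs), tau_min_nk1grad(gn, gk, gs))
# e.g. (2.0, 1.0, 0.25) gives tau_min = 2 for k-grad but 1 for n+k-1-grad,
# and (3.0, 0.2, 0.25) is covered by n+k-1-grad only (k too small for k-grad).
```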
Note by Theorem 2.4 of van de Geer et al. (2014) that $\widehat{T}$ is well
approximated by
$\|\widehat{\Theta}\sqrt{N}\nabla\mathcal{L}_{N}(\theta^{\ast})\|_{\infty}$,
which is further approximated by the $\ell_{\infty}$-norm of the oracle score
$A=-\Theta\frac{1}{\sqrt{N}}\sum_{i=1}^{n}\sum_{j=1}^{k}\nabla\mathcal{L}(\theta^{\ast};Z_{ij}),$
given that $\widehat{\Theta}$ only deviates from $\Theta$ up to order
$O_{P}(s^{*}(\log d)^{1/2}N^{-1/2})$ in $\ell_{\infty}$-norm. To look more
closely at the efficiency of k-grad and n+k-1-grad, we compare the covariance
of $A$ with the conditional covariances of $\overline{A}$ (for k-grad, defined
in (5)) and $\widetilde{A}$ (for n+k-1-grad, defined in (6)). In particular,
conditioning on the data $Z_{ij}$, we have
$\displaystyle\left|\\!\left|\\!\left|{\operatorname{cov}_{\epsilon}(\overline{A})-\operatorname{cov}(A)}\right|\\!\right|\\!\right|_{\max}\leq$
$\displaystyle
s^{*}\|\widetilde{\theta}^{(\tau-1)}-\theta^{\ast}\|_{1}+ns^{*}\|\widetilde{\theta}^{(\tau-1)}-\theta^{\ast}\|_{1}^{2}$
$\displaystyle+O_{P}\bigg{(}\sqrt{\frac{{s^{*}}^{2}}{k}}+\sqrt{\frac{s^{*}}{n}}\bigg{)},$
(11)
$\displaystyle\left|\\!\left|\\!\left|{\operatorname{cov}_{\epsilon}(\widetilde{A})-\operatorname{cov}(A)}\right|\\!\right|\\!\right|_{\max}\leq$
$\displaystyle
s^{*}\|\widetilde{\theta}^{(\tau-1)}-\theta^{\ast}\|_{1}+(n\wedge
k)s^{*}\|\widetilde{\theta}^{(\tau-1)}-\theta^{\ast}\|_{1}^{2}$
$\displaystyle+O_{P}\bigg{(}\sqrt{\frac{{s^{*}}^{2}}{n+k}}+\sqrt{\frac{s^{*}}{n}}\bigg{)},$
(12)
up to some logarithmic terms in $d$, $n$, or $k$. Overall, n+k-1-grad in (12)
has a smaller error term than k-grad in (11). In particular, k-grad requires
both $n$ and $k$ to be large, while n+k-1-grad requires a large $n$ but not
necessarily a large $k$. In addition, $\tau=1$ could be enough for
n+k-1-grad, but not for k-grad. To see this, if
$\|\widetilde{\theta}^{(0)}-\theta^{\ast}\|_{1}$ is of order
$O_{P}(s^{*}/\sqrt{n})$, the right-hand side of (11) can grow with $s^{*}$,
while the error in (12) still shrinks to zero as long as $k\ll n$.
###### Remark 5
Note in both Theorems 3 and 4 that the expression for $\tau_{\min}$ does not
depend on $d$, because the direct effect of $d$ enters only through an
iterated logarithmic term $\log\log d$, which is dominated by
$\log\overline{s}\asymp\log d$.
###### Remark 6
The rates of $\\{\lambda^{(t)}\\}_{t=0}^{\tau-1}$ and
$\\{\lambda_{l}\\}_{l=1}^{d}$ in Theorems 3 and 4 are motivated by those in
Wang et al. (2017) and van de Geer et al. (2014), which, unfortunately,
involve unspecified constants and are thus not directly usable in practice. We
therefore provide a practically useful cross-validation method in Section 2.4.
###### Remark 7
The main result (Theorem 2.2) in Zhang and Cheng (2017) can be seen as a
justification of the multiplier bootstrap for high-dimensional linear models
with data processed in a centralized manner. Theorem 4 complements it by
justifying a distributed multiplier bootstrap with at least one round of
communication ($\tau\geq 1$).
###### Remark 8
A rate of $\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\overline{W}}(\alpha))-\alpha\right|$ may be shown to be polynomial in $n$
and $k$ with a more careful analysis, which is faster than the order obtained
by the extreme value distribution approach (Chernozhukov et al., 2013; Zhang
and Cheng, 2017) that is at best logarithmic.
###### Remark 9
We have not addressed whether the conditions on $\tau_{\min}$ in Theorems 3
and 4 can be improved in a minimax sense; this is left for future research. On
the other hand, we remark that the total communication cost of our algorithm
is of order $\Omega(\tau_{\min}kd)$, because in each iteration we communicate
$d$-dimensional vectors between the master node and the $k-1$ worker nodes,
and $\tau_{\min}$ grows only logarithmically with $k$. Our order matches those
in the existing communication-efficient statistical inference literature,
e.g., Jordan et al. (2019); Wang et al. (2017).
### 3.3 Generalized Linear Model
In this section, we consider GLMs, which generate i.i.d. observations
$(x,y)\in\mathbb{R}^{d}\times\mathbb{R}$. We assume that the loss function
$\mathcal{L}$ takes the form $\mathcal{L}(\theta;z)=g(y,x^{\top}\theta)$ for
$\theta,x\in\mathbb{R}^{d}$ and $y\in\mathbb{R}$ with
$g:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$, where $g(a,b)$ is three times
differentiable with respect to $b$; we denote $\frac{\partial}{\partial
b}g(a,b)$, $\left(\frac{\partial}{\partial b}\right)^{2}g(a,b)$, and
$\left(\frac{\partial}{\partial b}\right)^{3}g(a,b)$ by $g^{\prime}(a,b)$,
$g^{\prime\prime}(a,b)$, and $g^{\prime\prime\prime}(a,b)$, respectively. We
let $\theta^{\ast}$ be the unique minimizer of the expected loss
$\mathcal{L}^{\ast}(\theta)$.
We let $X_{1}\in\mathbb{R}^{n\times d}$ be the design matrix in the master
node $\mathcal{M}_{1}$ and $X_{1}^{*}:\,=P^{*}X_{1}$ be the weighted design
matrix with a diagonal $P^{*}\in\mathbb{R}^{n\times n}$ with elements
$\\{g^{\prime\prime}(y_{i1},x_{i1}^{\top}\theta^{\ast})^{1/2}\\}_{i=1,\dots,n}$.
We further let $(X_{1}^{*})_{-l}\varphi^{*}_{l}$ be the $L_{2}$ projection of
$(X_{1}^{*})_{l}$ on $(X_{1}^{*})_{-l}$, for $l=1,\dots,d$. Equivalently, for
$l=1,\dots,d$, we define
$\varphi^{*}_{l}:\,=\operatorname*{\arg\min}_{\varphi\in\mathbb{R}^{d-1}}\mathbb{E}[\|(X_{1}^{*})_{l}-(X_{1}^{*})_{-l}\varphi\|_{2}^{2}]$.
We impose the following assumptions on the GLM.
* •
(G1) For some $\Delta>0$ and $\Delta^{\prime}>0$ such that
$|x^{\top}\theta^{\ast}|\leq\Delta^{\prime}$,
$\displaystyle\sup_{|b|\vee|b^{\prime}|\leq\Delta+\Delta^{\prime}}\sup_{a}\frac{|g^{\prime\prime}(a,b)-g^{\prime\prime}(a,b^{\prime})|}{|b-b^{\prime}|}\leq 1,\quad\max_{|b_{0}|\leq\Delta}\sup_{a}|g^{\prime}(a,b_{0})|=O(1),\quad\text{and}\quad\max_{|b|\leq\Delta+\Delta^{\prime}}\sup_{a}|g^{\prime\prime}(a,b)|=O(1).$
* •
(G2) $\|x\|_{\infty}=O(1)$. Moreover, $x^{\top}\theta^{\ast}=O(1)$ and
$\max_{l}\big{|}g^{\prime\prime}(y,x^{\top}\theta^{\ast})^{1/2}x_{-l}^{\top}\varphi^{*}_{l}\big{|}=O(1)$,
where $x_{-l}$ consists of all but the $l$-th coordinate of $x$.
* •
(G3) The smallest and largest eigenvalues of
$\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})$ and
$\mathbb{E}\left[\nabla\mathcal{L}(\theta^{\ast};Z)\nabla\mathcal{L}(\theta^{\ast};Z)^{\top}\right]$
are bounded away from zero and from infinity, respectively.
* •
(G4) For some constant $L>0$,
$\max_{l}\max_{q=1,2}\mathbb{E}[|\mathbf{h}_{l}^{2+q}|/L^{q}]+\mathbb{E}[\exp(|\mathbf{h}_{l}|/L)]=O(1),\quad\text{or}$
$\max_{l}\max_{q=1,2}\mathbb{E}[|\mathbf{h}_{l}^{2+q}|/L^{q}]+\mathbb{E}[(\max_{l}|\mathbf{h}_{l}|/L)^{4}]=O(1),$
where
$\mathbf{h}=\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)$
and $\mathbf{h}_{l}$ is the $l$-th coordinate.
* •
(G5) $\theta^{\ast}$ and $\Theta_{l,\cdot}$ are sparse, where the inverse
population Hessian matrix
$\Theta:\,=\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}$, i.e.,
$S:\,=\\{l:\theta^{\ast}_{l}\neq 0\\}$, $s_{0}:\,=|S|$,
$s_{l}:\,=|\\{l^{\prime}\neq l:\Theta_{l,l^{\prime}}\neq 0\\}|$,
$s^{*}:\,=\max_{l}s_{l}$, and $\overline{s}=s_{0}\vee s^{*}$.
Assumption (G1) imposes smoothness conditions on the loss function, which
are satisfied by, for example, logistic regression. In particular, logistic
regression has $g(a,b)=-ab+\log(1+\exp(b))$, and it is easy to see that
$|g^{\prime}(a,b)|\leq 2$, $|g^{\prime\prime}(a,b)|\leq 1$, and
$|g^{\prime\prime\prime}(a,b)|\leq 1$. Assumption (G2) imposes the
boundedness conditions required for the validity of the nodewise Lasso
(Algorithm 4; van de Geer et al. (2014)) in the master node. Assumption (G3)
is standard in the GLM literature. Assumption (G4) is required for proving the
validity of the multiplier bootstrap (Chernozhukov et al., 2013).
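For concreteness, a small numerical check of the logistic-regression bounds just cited, under the convention $y\in\\{0,1\\}$ (this check is illustrative and not part of the paper's code):

```python
import numpy as np

def g(a, b):      # logistic loss: g(a, b) = -a*b + log(1 + exp(b))
    return -a * b + np.logaddexp(0.0, b)

def g1(a, b):     # dg/db = -a + sigmoid(b)
    return -a + 1.0 / (1.0 + np.exp(-b))

def g2(b):        # d^2 g/db^2 = s(1 - s) with s = sigmoid(b); at most 1/4
    s = 1.0 / (1.0 + np.exp(-b))
    return s * (1.0 - s)

def g3(b):        # d^3 g/db^3 = s(1 - s)(1 - 2s); magnitude at most ~0.1
    s = 1.0 / (1.0 + np.exp(-b))
    return s * (1.0 - s) * (1.0 - 2.0 * s)

b = np.linspace(-20, 20, 100001)
assert np.all(np.abs(g1(1.0, b)) <= 2) and np.all(np.abs(g1(0.0, b)) <= 2)
assert np.all(np.abs(g2(b)) <= 1) and np.all(np.abs(g3(b)) <= 1)
```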
Analogously to Theorems 3 and 4, which establish the distributed bootstrap
validity and efficiency of Algorithm 1 using k-grad/n+k-1-grad for linear
models, we now extend these results to high-dimensional de-biased GLMs. See
Figure 2 for a comparison between the results for high-dimensional linear
models and GLMs.
###### Theorem 10 (k-grad, sparse GLM)
Suppose (G1)-(G5) hold, and that we run Algorithm 1 with the k-grad method
in GLMs. Let $\lambda_{l}\asymp\sqrt{\log d/n}$ for $l=1,\dots,d$, and
$\lambda^{(t)}$ be as
$\displaystyle\lambda^{(t)}\asymp\begin{cases}\sqrt{\frac{\log
d}{nk}}+\frac{1}{s_{0}^{2}}\Big{(}s_{0}^{2}\sqrt{\frac{\log
d}{n}}\Big{)}^{2^{t}},&t\leq\tau_{0},\\\ \sqrt{\frac{\log
d}{nk}}+\frac{1}{s_{0}^{2}}\Big{(}s_{0}^{2}\sqrt{\frac{\log
d}{n}}\Big{)}^{2^{\tau_{0}}}\Big{(}s_{0}\sqrt{\frac{\log
d}{n}}\Big{)}^{t-\tau_{0}},&t>\tau_{0},\end{cases}$ (13)
for $t=0,\dots,\tau-1$, where
$\displaystyle\tau_{0}=1+\left\lfloor\log_{2}\frac{\gamma_{n}-2\gamma_{s}}{\gamma_{n}-4\gamma_{s}}\right\rfloor.$
(14)
Assume $n=d^{\gamma_{n}}$, $k=d^{\gamma_{k}}$, $\overline{s}=d^{\gamma_{s}}$
for some constants $\gamma_{n},\gamma_{k},\gamma_{s}>0$. If
$\gamma_{n}>5\gamma_{s}$, $\gamma_{k}>3\gamma_{s}$, and $\tau\geq\tau_{\min}$,
where
$\displaystyle\tau_{\min}=\max\left\\{\tau_{0}+\left\lfloor\frac{\gamma_{k}+\gamma_{s}}{\gamma_{n}-2\gamma_{s}}+\nu_{0}\right\rfloor,2+\left\lfloor\log_{2}\frac{\gamma_{n}-\gamma_{s}}{\gamma_{n}-4\gamma_{s}}\right\rfloor\right\\},$
$\displaystyle\nu_{0}=2-\frac{2^{\tau_{0}}(\gamma_{n}-4\gamma_{s})}{\gamma_{n}-2\gamma_{s}}\in(0,1],$
(15)
then we have (9). In addition, (9) also holds if $T$ is replaced by
$\widehat{T}$ defined in (3).
The $\tau_{0}$ in (14) is the number of preliminary communication rounds
needed for the CSL estimator to pass through the region far from
$\theta^{\ast}$. As $\overline{s}$ grows, the time spent in this region can
increase. However, when $n$ is large, e.g., $n\gg\overline{s}^{6}$, the loss
function is better behaved, so the number of preliminary communication rounds
can reduce to $\tau_{0}=1$. See Section C in the Appendix for more details.
###### Theorem 11 (n+k-1-grad, sparse GLM)
Suppose (G1)-(G5) hold, and that we run Algorithm 1 with the n+k-1-grad
method in GLMs. Let $\lambda_{l}\asymp\sqrt{\log d/n}$ for $l=1,\dots,d$, and
$\lambda^{(t)}$ be as in (13) for $t=0,\dots,\tau-1$. Assume
$n=d^{\gamma_{n}}$, $k=d^{\gamma_{k}}$, $\overline{s}=d^{\gamma_{s}}$ for some
constants $\gamma_{n},\gamma_{k},\gamma_{s}>0$. If $\gamma_{n}>5\gamma_{s}$
and $\tau\geq\tau_{\min}$, where
$\displaystyle\tau_{\min}=\begin{cases}\max\left\\{2+\left\lfloor\log_{2}\frac{\gamma_{k}+\gamma_{s}}{\gamma_{n}-4\gamma_{s}}\right\rfloor,1\right\\},&\text{if}\quad\gamma_{k}\leq\gamma_{n}-3\gamma_{s},\\\
\tau_{0}+\left\lfloor\frac{\gamma_{k}+\gamma_{s}}{\gamma_{n}-2\gamma_{s}}+\nu_{0}\right\rfloor,&\text{otherwise},\end{cases}$
with $\tau_{0}$ and $\nu_{0}$ defined in (14) and (15), respectively, then we
have (10). In addition, (10) also holds if $T$ is replaced by $\widehat{T}$
defined in (3).
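As in the linear case, the $\tau_{0}$, $\nu_{0}$, and $\tau_{\min}$ formulas in (14)-(15) and Theorems 10-11 are easy to evaluate numerically; a sketch with illustrative exponents:

```python
from math import floor, log2

def glm_tau_min(gn, gk, gs, method="n+k-1-grad"):
    # tau_0 and nu_0 from (14)-(15); requires gamma_n > 5*gamma_s
    # (and gamma_k > 3*gamma_s for k-grad).
    if gn <= 5 * gs or (method == "k-grad" and gk <= 3 * gs):
        return None
    tau0 = 1 + floor(log2((gn - 2 * gs) / (gn - 4 * gs)))
    nu0 = 2 - 2 ** tau0 * (gn - 4 * gs) / (gn - 2 * gs)
    if method == "k-grad":                             # Theorem 10
        return max(tau0 + floor((gk + gs) / (gn - 2 * gs) + nu0),
                   2 + floor(log2((gn - gs) / (gn - 4 * gs))))
    if gk <= gn - 3 * gs:                              # Theorem 11, first case
        return max(2 + floor(log2((gk + gs) / (gn - 4 * gs))), 1)
    return tau0 + floor((gk + gs) / (gn - 2 * gs) + nu0)

# e.g. gamma_n = 3, gamma_k = 1, gamma_s = 0.25 yields 2 for k-grad, 1 otherwise
print(glm_tau_min(3.0, 1.0, 0.25, "k-grad"), glm_tau_min(3.0, 1.0, 0.25))
```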
###### Remark 12
The selection of $\\{\lambda_{l}\\}_{l=1}^{d}$ in Theorems 10 and 11 is
motivated by van de Geer et al. (2014), while
$\\{\lambda^{(t)}\\}_{t=0}^{\tau-1}$ is motivated by Wang et al. (2017) and
Jordan et al. (2019). We perform a more careful analysis of the two phases of
model tuning as in (13).
## 4 Simulation Studies
We demonstrate the merits of our methods using synthetic data in this section.
The code to reproduce the simulation experiments, results, and plots is
available at GitHub: https://github.com/skchao74/Distributed-bootstrap.
We consider a Gaussian linear model and a logistic regression model. We fix
total sample size $N=2^{14}$ and the dimension $d=2^{10}$, and choose the
number of machines $k$ from $\\{2^{2},2^{3},\dots,2^{6}\\}$. The true
coefficient $\theta^{\ast}$ is a $d$-dimensional vector whose first
$s_{0}$ coordinates are 1 and the rest are 0, where $s_{0}\in\\{2^{2},2^{4}\\}$
for the linear model and $s_{0}\in\\{2^{1},2^{3}\\}$ for the GLM. We generate
covariate vector $x$ independently from $\mathcal{N}(0,\Sigma)$, while
considering two different specifications for $\Sigma$:
* •
Toeplitz: $\Sigma_{l,l^{\prime}}=0.9^{|l-l^{\prime}|}$;
* •
Equi-correlation: $\Sigma_{l,l^{\prime}}=0.8$ for all $l\neq l^{\prime}$,
$\Sigma_{l,l}=1$ for all $l$.
For the linear model, we generate the model noise independently from
$\mathcal{N}(0,1)$; for the GLM, we obtain i.i.d. responses from
$y\sim\text{Ber}(1/(1+\exp[-x^{\top}\theta^{\ast}]))$. For each choice of
$s_{0}$ and $k$, we run Algorithm 1 with k-grad and n+k-1-grad on $1{,}000$
independently generated datasets, and compute the empirical coverage
probability and the average width based on the results from these $1{,}000$
replications. At each replication, we draw $B=500$ bootstrap samples, from
which we calculate the $95\%$ empirical quantile to further obtain the $95\%$
simultaneous confidence interval.
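The data-generating process just described can be reproduced by a short sketch (one dataset, Toeplitz design; the seed and the reshaping convention for distributing rows to nodes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, k, s0 = 2**14, 2**10, 2**4, 2**2
n = N // k                                   # local sample size per node

theta_star = np.zeros(d)
theta_star[:s0] = 1.0                        # first s0 coordinates equal 1

# Toeplitz design: Sigma_{l,l'} = 0.9^{|l-l'|} (equi-correlation is analogous)
Sigma = 0.9 ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
X = rng.standard_normal((N, d)) @ np.linalg.cholesky(Sigma).T

y_lin = X @ theta_star + rng.standard_normal(N)                  # linear model
y_glm = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ theta_star)))   # logistic GLM

# conceptually distribute the data: node j holds rows j*n : (j+1)*n
X_nodes, y_nodes = X.reshape(k, n, d), y_lin.reshape(k, n)
```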
For the $\ell_{1}$-CSL computation, we choose the initial $\lambda^{(0)}$ by a
local $K$-fold cross-validation, where $K=10$ for linear regression and $K=5$
for logistic regression. For each iteration $t$, $\lambda^{(t)}$ is selected
by Algorithm 2 in Section 2.4 with $K^{\prime}$ folds with
$K^{\prime}=\min\\{k-1,5\\}$, which ensures that each partition of worker
gradients is non-empty when $k$ is small. For an efficient implementation of
the nodewise Lasso, we select a single $\bar{\lambda}$ in every simulation
repetition and set $\lambda_{l}=\bar{\lambda}$ for all $l$. Specifically, for
each simulated dataset, we set
$\bar{\lambda}=10^{-1}\sum_{l=1}^{10}\hat{\lambda}_{l}$, where each
$\hat{\lambda}_{l}$ is obtained by a cross-validation of the nodewise Lasso
regression of the $l$-th variable on the remaining variables. Since the
variables are homogeneous, these $\hat{\lambda}_{l}$'s differ only by random
variation, which averaging alleviates.
The computation of the oracle width starts with fixing $(N,d,s_{0})$ and
generating $500$ independent datasets. For each dataset, we compute the
centralized de-biased Lasso estimator $\widehat{\theta}$ as in (4). The oracle
width is defined as two times the $95\%$ empirical quantile of
$\|\widehat{\theta}-\theta^{\ast}\|_{\infty}$ of the 500 samples. The average
widths are compared against the oracle widths by taking the ratio of the two.
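The oracle-width computation reduces to a quantile of sup-norm errors; a sketch, where `debiased_lasso` is a hypothetical placeholder for the centralized estimator in (4):

```python
import numpy as np

def oracle_width(datasets, theta_star, alpha=0.05):
    # datasets: iterable of (X, y) pairs, each a full centralized dataset;
    # `debiased_lasso` is a hypothetical placeholder for the centralized
    # de-biased Lasso estimator in (4) -- it is not implemented here.
    errs = [np.max(np.abs(debiased_lasso(X, y) - theta_star))
            for X, y in datasets]
    return 2.0 * np.quantile(errs, 1.0 - alpha)   # two times the 95% quantile
```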
The empirical coverage probabilities and the average width ratios of k-grad
and n+k-1-grad are displayed for the linear model in Figures 3 (Toeplitz
design) and 4 (equi-correlation design), and for the logistic regression in
Figures 5 (Toeplitz design) and 6 (equi-correlation design), respectively.
Note that an increase in $k$ implies a decrease in $n$, given the fixed $N$.
For small $k$, k-grad tends to over-cover, whereas n+k-1-grad has more
accurate coverage. By contrast, the coverage of both algorithms falls when $k$
gets too large (or $n$ gets too small), since the estimator
$\widetilde{\theta}^{(\tau)}$ deviates from $\widehat{\theta}$ and the width
deviates from the oracle width, which reflects the discussion of (11) and
(12). Moreover, as $s_{0}=\|\theta^{\ast}\|_{0}$ increases, it becomes harder
for both algorithms to achieve the accurate $95\%$ coverage, and both
algorithms start to fail at a smaller $k$ (or larger $n$), which stems from
the fact that the bootstrap cannot accurately approximate the variance of the
asymptotic distribution, as shown in (11) and (12). Nevertheless, raising the
number of iterations improves the coverage, which verifies our theory. We also
observe an under-coverage of our bootstrap method in both the linear
regression and the logistic regression as $k$ first increases. This is due to
the loss of accuracy in estimating the inverse Hessian matrix using only the
data in the master node as $k$ increases (or $n$ decreases).
Figure 3: Empirical coverage probability (left axis, solid lines) and average
width (right axis, dashed lines) of simultaneous confidence intervals by
k-grad and n+k-1-grad in sparse linear regression with Toeplitz design and
varying sparsity. Black solid line represents the $95\%$ nominal level and
black dashed line represents 1 on the right $y$-axis.
Figure 4: Empirical coverage probability (left axis, solid lines) and average
width (right axis, dashed lines) of simultaneous confidence intervals by
k-grad and n+k-1-grad in sparse linear regression with equi-correlation design
and varying sparsity. Black solid line represents the $95\%$ nominal level and
black dashed line represents 1 on the right $y$-axis.
Figure 5: Empirical coverage probability (left axis, solid lines) and average
width (right axis, dashed lines) of simultaneous confidence intervals by
k-grad and n+k-1-grad in sparse logistic regression with Toeplitz design and
varying sparsity. Black solid line represents the $95\%$ nominal level and
black dashed line represents 1 on the right $y$-axis.
Figure 6: Empirical coverage probability (left axis, solid lines) and average
width (right axis, dashed lines) of simultaneous confidence intervals by
k-grad and n+k-1-grad in sparse logistic regression with equi-correlation
design and varying sparsity. Black solid line represents the $95\%$ nominal
level and black dashed line represents 1 on the right $y$-axis.
## 5 Variable Screening with Distributed Simultaneous Inference
Having demonstrated the performance of our method on purely synthetic data
from sparse models in the last section, in this section we artificially
create spurious variables and mix them with variables obtained from a real
big dataset. We check whether our method can successfully select the relevant
variables, i.e., those associated with the response variable in the real
dataset. The code to retrieve the data and reproduce the analyses, results,
and plots is available at GitHub:
https://github.com/skchao74/Distributed-bootstrap.
### 5.1 Data
The US Airline On-Time Performance dataset (DVN, 2008), available at
http://stat-computing.org/dataexpo/2009, consists of flight arrival and
departure details for all commercial flights within the US from 1987 to 2008.
Given the high dimensionality after the dummy transformation and the huge
sample size of the entire dataset, the most efficient way to process the data
is a distributed computational system, with the sample size on each worker
node likely smaller than the dimension. Our goal here is to uncover
statistically significant independent variables associated with flight delay.
We use variables Year, Month, DayOfWeek, CRSDepTime, CRSArrTime,
UniqueCarrier, Origin, Dest, and ArrDelay in our model; descriptions are
deferred to Appendix (Section D).
The response variable is labeled by $1$ to denote a delay if ArrDelay is
greater than zero, and by $0$ otherwise. The rest of the variables are treated
as categorical explanatory variables and are converted into dummy variables;
refer to Appendix (Section E) for the details of the dummy variable creation.
This results in a total of $203$ predictors. The total sample size is 113.9
million observations. We randomly sample a dataset $\mathcal{D}_{1}$ of
$N=500{,}000$ observations, and conceptually distribute them across
$k=1{,}000$ nodes such that each node receives $n=500$ observations. We
randomly sample another dataset $\mathcal{D}_{2}$ of $N=500{,}000$
observations for a pilot study to select relevant variables, where
$\mathcal{D}_{1}\cap\mathcal{D}_{2}=\emptyset$.
### 5.2 An Artificial Design Matrix and Variable Screening
In the first stage, we perform a preliminary study that informs us of some
seemingly relevant variables to include in an artificial design matrix, which
will be used to demonstrate the variable screening performance of our method
in the second stage. Note that the purpose of this stage is only to
preliminarily discover possibly relevant variables, rather than to select
variables in a fully rigorous manner. We perform a logistic regression in a
centralized manner, with intercept and without regularization, using the $N$
observations in $\mathcal{D}_{2}$. Standard Wald tests reveal that $144$ out
of the $203$ slopes are significantly non-zero ($p$-values less than $0.05$).
The four predictors with the smallest $p$-values correspond to the dummy
variables of years 2001–2004, and their coefficients are all negative, which
suggests a lower likelihood of flight delays in those years. This interesting
finding matches a previous study showing that the September 11 terrorist
attacks negatively impacted US airline demand (Ito and Lee, 2005), which led
to fewer flights and less congestion. In addition, the Notice of Market-based
Actions to Relieve Airport Congestion and Delay (Docket No. OST-2001-9849),
issued by the Department of Transportation on August 21, 2001, may also have
alleviated US airline delays.
To construct the artificial design matrix, we group the $4$ predictors with
the smallest $p$-values mentioned above together with the intercept, so the
number of relevant columns is $5$. Given $d$, we artificially create $d-5$
binary and real valued variables by first sampling rows from
$\mathcal{N}(0,\mathcal{C}_{d-5})$, where $\mathcal{C}_{d-5}$ is a Toeplitz
matrix ($(\mathcal{C}_{d-5})_{l,l^{\prime}}=0.5^{|l-l^{\prime}|}$), and then
converting half of the columns to either 0 or 1 by their signs. Then, we
combine these $d-5$ spurious columns with a column of intercept and the $4$
columns in $\mathcal{D}_{1}$ that are associated with the selected relevant
variables to obtain an artificial design matrix.
In the second stage, using the artificial design matrix with the binary
response vector from ArrDelay in $\mathcal{D}_{1}$, we test whether our
distributed bootstrap with n+k-1-grad (Algorithm 1) can screen out the
artificially created spurious variables. Recall that $\mathcal{D}_{1}$ and
$\mathcal{D}_{2}$ are disjoint, with $\mathcal{D}_{2}$ used in the first
stage for the preliminary study. For model tuning, we select $\lambda^{(0)}$
by a local $10$-fold cross-validation; for each $t\geq 1$, $\lambda^{(t)}$ is
chosen by running the distributed $10$-fold cross-validation in Algorithm 2.
We select each $\lambda_{l}$ by performing a $10$-fold cross-validation for
the nodewise Lasso of each variable. The entire procedure is repeated for each
dimensionality $d\in\\{200,500,1{,}000\\}$.
The left panel of Figure 7 plots the number of significant variables against
the number of iterations $\tau$, broken down into the number intersecting with
the relevant variables (solid lines) and the number intersecting with the
spurious variables (dashed lines). First, all $4$ relevant variables are found
significant at all iterations. For the spurious variables, we see that with
$\tau=1$, the distributed bootstrap falsely detects one of them. However, as
the number of iterations increases, fewer spurious variables are detected,
until none is detected. We also see that $2$ iterations ($\tau=2$) for
$d=500,1{,}000$ and $3$ iterations ($\tau=3$) for $d=200$ are sufficient,
which empirically verifies that our method is not very sensitive to the
nominal dimension $d$.
As an illustration that is potentially useful in practice, the confidence
intervals computed with the simultaneous quantile for the $4$ important slopes
under $d=1{,}000$ and $\tau=2$ are plotted in the right panel of Figure 7. It
can be seen that flights in years 2002 and 2003 were relatively less likely
to be delayed, which matches the decreased air traffic in the aftermath of the
September 11 terrorist attacks.
Figure 7: The left panel shows the number of significant variables uncovered
by the simultaneous confidence intervals among the $4$ relevant variables and
among the $d-5$ spurious variables for $d=200,500,1{,}000$. The right panel
shows the simultaneous confidence intervals of the $4$ relevant variables for
$d=1{,}000$ and $\tau=2$.
## 6 Conclusion
We propose a distributed bootstrap method for high-dimensional simultaneous
inference based on the de-biased $\ell_{1}$-CSL estimator, as well as a
distributed cross-validation method for hyperparameter tuning. The bootstrap
validity and oracle efficiency are rigorously studied, and the merits are
further shown via simulation studies of coverage probability and efficiency,
and a practical example of variable screening.
Acknowledgments
Shih-Kang Chao would like to acknowledge the financial support from the
Research Council of the University of Missouri. Guang Cheng would like to
acknowledge support from the National Science Foundation (NSF – SCALE MoDL
(2134209)).
## Appendix A Pseudocode for k-grad and n+k-1-grad
Algorithm 3
DistBoots$(\text{method},\widetilde{\theta},\\{\mathbf{g}_{j}\\}_{j=1}^{k},\widetilde{\Theta})$:
only need the master node $\mathcal{M}_{1}$
1:Require: local gradient $\mathbf{g}_{j}$ and estimate $\widetilde{\Theta}$
of inverse Hessian obtained at $\mathcal{M}_{1}$
2:$\bar{\mathbf{g}}\leftarrow k^{-1}\sum_{j=1}^{k}\mathbf{g}_{j}$
3:for $b=1,\ldots,B$ do
4: if method=‘k-grad’ then
5: Draw
$\epsilon_{1}^{(b)},\ldots,\epsilon_{k}^{(b)}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)$
and compute $W^{(b)}$ by (5)
6: else if method=‘n+k-1-grad’ then
7: Draw
$\epsilon_{11}^{(b)},\ldots,\epsilon_{n1}^{(b)},\epsilon_{2}^{(b)},\ldots,\epsilon_{k}^{(b)}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)$
and compute $W^{(b)}$ by (6)
8: end if
9:end for
10:Compute the quantile $c_{W}(\alpha)$ of $\\{W^{(1)},\dots,W^{(B)}\\}$ for
$\alpha\in(0,1)$
11:Return $\widetilde{\theta}_{l}\pm N^{-1/2}c_{W}(\alpha)$, $l=1,\dots,d$
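A minimal Python sketch of the k-grad branch of Algorithm 3. Since (5) is not reproduced here, the sup-norm form of the bootstrap statistic used below is an assumption for illustration, not a verbatim transcription:

```python
import numpy as np

def dist_boots_kgrad(theta, grads, Theta, n, B=500, alpha=0.05, rng=None):
    # theta: (d,) center of the intervals; grads: (k, d) node-level gradients
    # g_j; Theta: (d, d) estimated inverse Hessian; n: local sample size.
    # Assumed form of the k-grad statistic (the exact expression is in (5)):
    #   W = || Theta @ k^{-1/2} sum_j eps_j sqrt(n) (g_j - gbar) ||_inf.
    rng = rng or np.random.default_rng()
    k, d = grads.shape
    centered = grads - grads.mean(axis=0)
    W = np.empty(B)
    for b in range(B):
        eps = rng.standard_normal(k)               # multiplier draws (line 5)
        W[b] = np.max(np.abs(Theta @ (np.sqrt(n / k) * (eps @ centered))))
    c = np.quantile(W, 1.0 - alpha)                # c_W(alpha) (line 10)
    half = c / np.sqrt(n * k)                      # N = n * k
    return theta - half, theta + half              # simultaneous CIs (line 11)
```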
###### Remark 13
Although in Algorithm 3 the same $\widetilde{\theta}$ is used for the center
of the confidence intervals and for evaluating the gradients $\mathbf{g}_{j}$,
allowing them to differ (as in Algorithm 1) can save one round of
communication. For example, we can use $\widetilde{\theta}^{(\tau)}$ for the
center of the confidence intervals, while the gradients are evaluated at
$\widetilde{\theta}^{(\tau-1)}$.
## Appendix B Nodewise Lasso
In Algorithm 4, we state the nodewise Lasso method for constructing an
approximate inverse Hessian matrix, as used in Section 3.1.1 of van de Geer et
al. (2014), which we apply in Algorithm 1. We define the components of
$\widehat{\gamma}_{l}$ as
$\widehat{\gamma}_{l}=\\{\widehat{\gamma}_{l,l^{\prime}};l^{\prime}=1,\dots,d,l^{\prime}\neq
l\\}$. We denote by $\widehat{M}_{l,-l}$ the $l$-th row of $\widehat{M}$
without the diagonal element $(l,l)$, and by $\widehat{M}_{-l,-l}$ the
submatrix without the $l$-th row and $l$-th column.
Algorithm 4 Node($\widehat{M}$)
1:Require: sample Hessian matrix $\widehat{M}\in\mathbb{R}^{d\times d}$,
hyperparameters $\\{\lambda_{l}\\}_{l=1}^{d}$
2:for $l=1,\ldots,d$ do
3: Compute
$\widehat{\gamma}_{l}=\operatorname*{\arg\min}_{\gamma\in\mathbb{R}^{d-1}}\widehat{M}_{l,l}-2\widehat{M}_{l,-l}\gamma+\gamma^{\top}\widehat{M}_{-l,-l}\gamma+2\lambda_{l}\|\gamma\|_{1}$
4: Compute
$\widehat{\tau}_{l}^{2}=\widehat{M}_{l,l}-\widehat{M}_{l,-l}\widehat{\gamma}_{l}$
5:end for
6:Construct $\widehat{M^{-1}}$ as
$\widehat{M^{-1}}=\begin{pmatrix}\widehat{\tau}_{1}^{-2}&0&\dots&0\\\
0&\widehat{\tau}_{2}^{-2}&\dots&0\\\ \vdots&\vdots&\ddots&\vdots\\\
0&0&\dots&\widehat{\tau}_{d}^{-2}\end{pmatrix}\begin{pmatrix}1&-\widehat{\gamma}_{1,2}&\dots&-\widehat{\gamma}_{1,d}\\\
-\widehat{\gamma}_{2,1}&1&\dots&-\widehat{\gamma}_{2,d}\\\
\vdots&\vdots&\ddots&\vdots\\\
-\widehat{\gamma}_{d,1}&-\widehat{\gamma}_{d,2}&\dots&1\end{pmatrix}.$
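A direct, unoptimized sketch of Algorithm 4, solving each line-3 problem by proximal gradient descent; the ISTA solver and the iteration count are illustrative choices, not the paper's implementation:

```python
import numpy as np

def soft_threshold(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def nodewise_lasso(M, lambdas, n_iter=500):
    # M: (d, d) sample Hessian; lambdas: length-d penalties lambda_l.
    # Solves each line-3 problem of Algorithm 4 by ISTA.
    d = M.shape[0]
    Gamma = np.eye(d)                    # row l will hold the (1, -gamma_l) pattern
    tau2 = np.empty(d)
    for l in range(d):
        idx = np.delete(np.arange(d), l)
        A, b = M[np.ix_(idx, idx)], M[l, idx]
        step = 0.5 / np.linalg.norm(A, 2)          # 1/L with L = 2*||A||_2
        gamma = np.zeros(d - 1)
        for _ in range(n_iter):
            grad = 2.0 * (A @ gamma - b)           # gradient of the quadratic part
            gamma = soft_threshold(gamma - step * grad, 2.0 * lambdas[l] * step)
        tau2[l] = M[l, l] - b @ gamma              # hat{tau}_l^2 (line 4)
        Gamma[l, idx] = -gamma
    return Gamma / tau2[:, None]                   # diag(tau^{-2}) @ Gamma (line 6)
```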
###### Remark 14
Throughout this paper, we fix the choice of the nodewise Lasso in Algorithm 1
for computing an approximate inverse Hessian matrix. In practice, various
alternative approaches (e.g., Zhang and Zhang (2014); Javanmard and Montanari
(2014a)) can be chosen, weighing estimation accuracy against computational
efficiency.
## Appendix C CSL Estimator for GLMs
For the $\ell_{1}$-penalized CSL estimator of generalized linear models,
Theorem 3.3 of Wang et al. (2017) states that
$\displaystyle\big{\|}\widetilde{\theta}^{(t+1)}-\theta^{\ast}\big{\|}_{1}\lesssim
s_{0}\sqrt{\frac{\log d}{N}}+s_{0}\sqrt{\frac{\log
d}{n}}\big{\|}\widetilde{\theta}^{(t)}-\theta^{\ast}\big{\|}_{1}+Ms_{0}\big{\|}\widetilde{\theta}^{(t)}-\theta^{\ast}\big{\|}_{1}^{2},$
(16)
where $M\geq 0$ is a Lipschitz constant of $g^{\prime\prime}$, which exists
by Assumption (G1). In linear models, $g(a,b)=(a-b)^{2}/2$ and
$g^{\prime\prime}$ is constant, so $M=0$ and the CSL estimator converges
linearly to $\theta^{\ast}$ at rate $s_{0}(\log d)^{1/2}n^{-1/2}$ until it
reaches the floor given by the first term, which is also the rate of the
centralized (oracle) estimator. For GLMs, however, $M>0$ and the third term
can be dominant when $t$ is small. For example, when $t=0$, given that
$\|\widetilde{\theta}^{(0)}-\theta^{\ast}\|_{1}\lesssim s_{0}(\log
d)^{1/2}n^{-1/2}$, it is easy to see that the third term is always $s_{0}$
times larger than the second term (up to a constant), and a larger $n$ is
required to ensure that the third term is smaller than
$\big{\|}\widetilde{\theta}^{(t)}-\theta^{\ast}\big{\|}_{1}$, so that the
error shrinks. However, when $t$ is sufficiently large, this dominance
reverses. The threshold is given by the $\tau_{0}$ in (14), and this implies
three phases of convergence: when $t\leq\tau_{0}$, the third term dominates
and the convergence is quadratic; when $t>\tau_{0}$, the second term dominates
the third and linear convergence kicks in; finally, when $t$ is sufficiently
large, the first term dominates. Our analysis complements that of Wang et al.
(2017), in whose Corollary 3.7 it is simply assumed that the second term
dominates the third.
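The three phases can be seen in a toy iteration of the recursion (16), treating the inequality as an equality and setting all hidden constants to one (the parameter values are arbitrary and purely illustrative):

```python
import numpy as np

# Toy instance of the recursion (16) with all hidden constants set to 1.
s0, d, n, k = 8, 1000, 10**5, 50
N = n * k
a = s0 * np.sqrt(np.log(d) / N)   # first term: the oracle-rate floor
b = s0 * np.sqrt(np.log(d) / n)   # contraction factor of the linear term
M = 1.0                           # Lipschitz constant of g'' (GLM case)

err = b                           # initial error of order s0*sqrt(log d / n)
for t in range(12):
    quad, lin = M * s0 * err ** 2, b * err
    phase = "quadratic" if quad > lin else ("linear" if lin > a else "floor")
    err = a + lin + quad
    print(t, round(err, 5), phase)
```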
## Appendix D Variable Descriptions
We use the following variables in our model for the semi-synthetic study in
Section 5:
* •
Year: from 1987 to 2008,
* •
Month: from 1 to 12,
* •
DayOfWeek: from 1 (Monday) to 7 (Sunday),
* •
CRSDepTime: scheduled departure time (in four digits, first two representing
hour, last two representing minute),
* •
CRSArrTime: scheduled arrival time (in the same format as above),
* •
UniqueCarrier: unique carrier code,
* •
Origin: origin (in IATA airport code),
* •
Dest: destination (in IATA airport code),
* •
ArrDelay: arrival delay (in minutes). Positive value means there is a delay.
The complete variable information can be found at
http://stat-computing.org/dataexpo/2009/the-data.html.
## Appendix E Creation of Dummy Variables
We categorize CRSDepTime and CRSArrTime into $24$ one-hour time intervals
(e.g., 1420 is converted to 14 to represent the interval [14:00,15:00]), and
then treat Year, Month, DayOfWeek, CRSDepTime, CRSArrTime, UniqueCarrier,
Origin, and Dest as nominal predictors. The nominal predictors are encoded as
dummy variables of appropriate dimensions, with all low-count categories
merged into “others”; either “others” or the smallest ordinal value is treated
as the baseline.
To ensure that none of the columns of the design matrix on the master node is
completely zero so that the nodewise Lasso can be computed, we create the
dummy variables using only the observations in the master node on the dataset
$\mathcal{D}_{1}$. Specifically, for variables UniqueCarrier, Origin, and
Dest, we keep the top categories that make up $90\%$ of the data in the master
node on $\mathcal{D}_{1}$; the remaining categories are merged into “others”
and treated as the baseline. For CRSDepTime and CRSArrTime, we merge the time
intervals 23:00-6:00 and 1:00-7:00 respectively (due to their low counts) and
use them as baseline. For Year, Month, and DayOfWeek, we treat year 1987,
January, and Monday as baseline respectively.
## Appendix F Extension to Heteroscedastic Error Across Machines
As suggested by the associate editor, we consider here an extension to a more
challenging scenario for linear models where the data across machines have
heteroscedastic errors. In this scenario, Algorithm 2 no longer applies, as it
relies on the homogeneity of the data across machines. We provide a new
Algorithm 5 by exploiting the multiplier bootstrap idea underlying the “High-
Dimensional Metrics” package (HDM, Chernozhukov et al. (2016)).
Algorithm 5 Simultaneous inference for distributed data with
heteroscedasticity
1:Require: $\tau\geq 1$ rounds of communication; nodewise Lasso procedure
Node$(\cdot,\cdot)$ with hyperparameters $\\{\lambda_{l}\\}_{l=1}^{d}$,
theoretical constant $c$
2:$\widetilde{\theta}^{(0)}\leftarrow\operatorname*{\arg\min}_{\theta}\mathcal{L}_{1}(\theta)+\lambda^{(0)}\|\theta\|_{1}$
at $\mathcal{M}_{1}$, where $\lambda^{(0)}$ is chosen by cross-validation
using the data at $\mathcal{M}_{1}$
3:Compute $\widetilde{\Theta}$ by running
Node$(\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)}),\\{\lambda_{l}\\}_{l=1}^{d})$
at $\mathcal{M}_{1}$
4:for $t=1,\ldots,\tau$ do
5: Transmit $\widetilde{\theta}^{(t-1)}$ to $\\{\mathcal{M}_{j}\\}_{j=2}^{k}$
6: Compute $\nabla\mathcal{L}_{1}(\widetilde{\theta}^{(t-1)})$ and
$\psi_{1}^{(t-1)}=n^{-1}\sum_{i=1}^{n}\nabla\mathcal{L}(\widetilde{\theta}^{(t-1)};(x_{i1},y_{i1}))^{2}$
at $\mathcal{M}_{1}$
7: for $j=2,\ldots,k$ do
8: Compute $\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(t-1)})$ and
$\psi_{j}^{(t-1)}=n^{-1}\sum_{i=1}^{n}\nabla\mathcal{L}(\widetilde{\theta}^{(t-1)};(x_{ij},y_{ij}))^{2}$
at $\mathcal{M}_{j}$
9: Transmit $\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(t-1)})$ and
$\psi_{j}^{(t-1)}$ to $\mathcal{M}_{1}$
10: end for
11: $\nabla\mathcal{L}_{N}(\widetilde{\theta}^{(t-1)})\leftarrow
k^{-1}\sum_{j=1}^{k}\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(t-1)})$ at
$\mathcal{M}_{1}$
12: if $t<\tau$ then
13: for $b=1,\ldots,B$ do
14: Draw
$\epsilon_{1}^{(b)},\ldots,\epsilon_{k}^{(b)}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)$
15: $\Lambda_{b}^{(t)}\leftarrow
ck^{-1}\|\sum_{j=1}^{k}\epsilon_{j}^{(b)}\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(t-1)})\|_{\infty}$
16: end for
17: $\lambda^{(t)}\leftarrow 90\%$ quantile of
$\\{\Lambda_{1}^{(t)},\dots,\Lambda_{B}^{(t)}\\}$
18: for $l=1,\ldots,d$ do
19:
$\Psi_{l}^{(t)}\leftarrow\sqrt{k^{-1}\sum_{j=1}^{k}(\psi_{j}^{(t-1)})_{l}}$
20: end for
21: $\Psi^{(t)}\leftarrow\text{diag}(\Psi_{1}^{(t)},\dots,\Psi_{d}^{(t)})$
22:
$\widetilde{\theta}^{(t)}\leftarrow\operatorname*{\arg\min}_{\theta}\mathcal{L}_{1}(\theta)-\theta^{\top}\left(\nabla\mathcal{L}_{1}(\widetilde{\theta}^{(t-1)})-\nabla\mathcal{L}_{N}(\widetilde{\theta}^{(t-1)})\right)+\lambda^{(t)}\|\Psi^{(t)}\theta\|_{1}$
at $\mathcal{M}_{1}$
23: else
24:
$\widetilde{\theta}^{(\tau)}\leftarrow\widetilde{\theta}^{(\tau-1)}-\widetilde{\Theta}\nabla\mathcal{L}_{N}(\widetilde{\theta}^{(\tau-1)})$
at $\mathcal{M}_{1}$
25: end if
26:end for
27:Run DistBoots$(\text{`{k-grad}' or `{n+k-1-grad}'},\widetilde{\theta}=\widetilde{\theta}^{(\tau)},\\{\mathbf{g}_{j}=\nabla\mathcal{L}_{j}(\widetilde{\theta}^{(\tau-1)})\\}_{j=1}^{k},\widetilde{\Theta}=\widetilde{\Theta})$ at $\mathcal{M}_{1}$
In Algorithm 5, we select the regularization parameters
$\\{\lambda^{(t)}\\}_{t=1}^{\tau-1}$ in lines 13-17 by integrating the idea of
Spindler et al. (2016). In addition, we handle heteroscedasticity via the
data-driven penalty loadings $\Psi^{(t)}$ in lines 18-22.
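These two ingredients of Algorithm 5 are easy to isolate in code; a sketch of the bootstrap tuning (lines 13-17) and the penalty loadings (lines 18-21), with function names chosen here for illustration:

```python
import numpy as np

def hdm_lambda(node_grads, c=1.0, B=500, level=0.90, rng=None):
    # node_grads: (k, d) gradients at the current iterate (lines 6-10).
    # Lambda_b = c * k^{-1} * || sum_j eps_j grad_j ||_inf (line 15);
    # lambda^{(t)} is the 90% quantile of the B draws (line 17).
    rng = rng or np.random.default_rng()
    k = node_grads.shape[0]
    stats = [c / k * np.max(np.abs(rng.standard_normal(k) @ node_grads))
             for _ in range(B)]
    return np.quantile(stats, level)

def penalty_loadings(psi):
    # psi: (k, d) node-level averages of squared gradients psi_j^{(t-1)};
    # returns the diagonal of Psi^{(t)} (lines 18-21).
    return np.sqrt(psi.mean(axis=0))
```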
Under heteroscedasticity, we expect k-grad in Algorithm 3 to remain valid
because it treats each machine equally as an independent data point. However,
n+k-1-grad may no longer provide accurate coverage: each single data point in
the first machine is treated as equally important as the average of the entire
data on machine $j$ for $j=2,\dots,k$, so the variance contribution from the
first machine could dominate, and the n+k-1-grad bootstrap could fail to
precisely approximate the variance of the target empirical distribution. A
careful theoretical study is left for future research.
The empirical performance of Algorithm 5 is verified by a simulation study
based on a heteroscedastic Gaussian linear model. We fix total sample size
$N=2^{14}$ and the dimension $d=2^{10}$, and choose the number of machines $k$
from $\\{2^{2},2^{3},\dots,2^{6}\\}$. The true coefficient $\theta^{\ast}$ is
a $d$-dimensional vector whose first $s_{0}$ coordinates are 1 and the rest
are 0, where $s_{0}\in\\{2^{2},2^{4}\\}$. We generate covariate vector $x$
independently from $\mathcal{N}(0,\Sigma)$, where $\Sigma$ is a Toeplitz
matrix with $\Sigma_{l,l^{\prime}}=0.9^{|l-l^{\prime}|}$. We introduce
heteroscedasticity across machines by first independently generating the model
noise from $\mathcal{N}(0,1)$ for all data in the master node
$\mathcal{M}_{1}$. Next, we generate model noise for each data point $i$ in
worker node $\mathcal{M}_{j}$ ($j=2,\dots,k$) independently from
$\mathcal{N}(0,\sigma_{j}^{2}+\omega_{ij})$, where the node level variance
$\sigma_{j}^{2}$ is generated independently from $\text{Unif}(2,3)$ and the
idiosyncratic variance $\omega_{ij}$ is generated independently from
$\text{Unif}(-0.2,0.2)$. For each choice of $s_{0}$ and $k$, we run Algorithm
5 with k-grad and n+k-1-grad on $1{,}000$ independently generated datasets,
and compute the empirical coverage probability and the average width based on
the results from these $1{,}000$ replications. At each replication, we draw
$B=500$ bootstrap samples, from which we calculate the $95\%$ empirical
quantile to further obtain the $95\%$ simultaneous confidence interval. For
tuning the nodewise Lasso, we use the same approach as in the main text. The
computation of the oracle width starts with fixing $(N,d,s_{0},k)$ and
generating $500$ independent datasets. For each dataset, we compute the
centralized de-biased Lasso estimator $\widehat{\theta}$. The oracle width is
defined as two times the $95\%$ empirical quantile of
$\|\widehat{\theta}-\theta^{\ast}\|_{\infty}$ of the 500 samples.
Figure 8 shows the coverage probability and the efficiency, in the form of
relative widths, of Algorithm 5. As expected, the coverage of the simultaneous
confidence intervals improves as the iterations proceed, using the new data-
driven parameter tuning and heteroscedasticity-adapted regularization. The
k-grad performs much better than the n+k-1-grad, which essentially fails: the
coverage probability of n+k-1-grad is nearly zero in all cases. The failure of
n+k-1-grad is due to the fact that it over-weights the data in the master node
$\mathcal{M}_{1}$, which leads to an under-estimation of the variance in the
other nodes, whereas k-grad weighs each node equally.
Comparing Figure 8 and Figure 9, we observe that our algorithm is generally
robust to the selection of $c$, as it performs similarly for $c=0.5$ and
$c=1$. However, we note that $c=0.5$ could be too small to stabilize the
algorithm: the optimization solver in line 22 fails to converge in about $2\%$
of the replications, and the divergent runs are not included in Figure 8. This
suggests that the penalty at $c=0.5$ may be so small that it leads to an ill-
conditioned objective function. After increasing $c$ from $0.5$ to $1$, the
optimization solver converges stably in all replications.
Figure 8: Under $c=0.5$, empirical coverage probability (left axis, solid
lines) and average relative width (right axis, dashed lines) of simultaneous
confidence intervals by k-grad and n+k-1-grad in sparse linear regression with
Toeplitz design and varying sparsity. Black solid line represents the $95\%$
nominal level and black dashed line represents 1 on the right $y$-axis.
Figure 9: Under $c=1$, empirical coverage probability (left axis, solid
lines) and average relative width (right axis, dashed lines) of simultaneous
confidence intervals by k-grad and n+k-1-grad in sparse linear regression with
Toeplitz design and varying sparsity. Black solid line represents the $95\%$
nominal level and black dashed line represents 1 on the right $y$-axis.
## References
* DVN (2008) Data Expo 2009: Airline on time data, 2008. URL https://doi.org/10.7910/DVN/HG7NV7.
* Banerjee et al. (2019) Moulinath Banerjee, Cecile Durot, Bodhisattva Sen, et al. Divide and conquer in nonstandard problems and the super-efficiency phenomenon. _The Annals of Statistics_ , 47(2):720–757, 2019.
* Battey et al. (2018) Heather Battey, Jianqing Fan, Han Liu, Junwei Lu, and Ziwei Zhu. Distributed estimation and inference with statistical guarantees. _Annals of Statistics_ , 46(3):1352–1382, 2018.
* Belloni et al. (2018) Alexandre Belloni, Victor Chernozhukov, Denis Chetverikov, and Ying Wei. Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework. _Ann. Statist._ , 46(6B):3643–3675, 12 2018. doi: 10.1214/17-AOS1671.
* Belloni et al. (2019) Alexandre Belloni, Victor Chernozhukov, and Kengo Kato. Valid post-selection inference in high-dimensional approximately sparse quantile regression models. _Journal of the American Statistical Association_ , 114(526):749–758, 2019. doi: 10.1080/01621459.2018.1442339.
* Cai and Sun (2017) T Tony Cai and Wenguang Sun. Large-scale global and simultaneous inference: Estimation and testing in very high dimensions. _Annual Review of Economics_ , 9:411–439, 2017.
* Chen et al. (2018) Xi Chen, Weidong Liu, and Yichen Zhang. First-order newton-type estimator for distributed estimation and inference. _arXiv preprint arXiv:1811.11368_ , 2018.
* Chen et al. (2019) Xi Chen, Weidong Liu, and Yichen Zhang. Quantile regression under memory constraint. _Ann. Statist._ , 47(6):3244–3273, 12 2019. doi: 10.1214/18-AOS1777.
* Chen et al. (2020) Xi Chen, Jason D Lee, He Li, and Yun Yang. Distributed estimation for principal component analysis: a gap-free approach. _arXiv preprint arXiv:2004.02336_ , 2020.
* Chen and Xie (2014) Xueying Chen and Min-ge Xie. A split-and-conquer approach for analysis of extraordinarily large data. _Statistica Sinica_ , pages 1655–1684, 2014.
* Chernozhukov et al. (2013) Victor Chernozhukov, Denis Chetverikov, Kengo Kato, et al. Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors. _The Annals of Statistics_ , 41(6):2786–2819, 2013.
* Chernozhukov et al. (2016) Victor Chernozhukov, Chris Hansen, and Martin Spindler. hdm: High-dimensional metrics. _The R Journal_ , 8(2):185–199, 2016.
* Dezeure et al. (2017) Ruben Dezeure, Peter Bühlmann, and Cun-Hui Zhang. High-dimensional simultaneous inference with the bootstrap. _Test_ , 26(4):685–719, 2017.
* Fan et al. (2019a) Jianqing Fan, Yongyi Guo, and Kaizheng Wang. Communication-efficient accurate statistical estimation. _arXiv preprint arXiv:1906.04870_ , 2019a.
* Fan et al. (2019b) Jianqing Fan, Dong Wang, Kaizheng Wang, and Ziwei Zhu. Distributed estimation of principal eigenspaces. _Annals of statistics_ , 47(6):3009, 2019b.
* Huang and Huo (2019) Cheng Huang and Xiaoming Huo. A distributed one-step estimator. _Mathematical Programming_ , 174(1-2):41–76, 2019.
* Ito and Lee (2005) Harumi Ito and Darin Lee. Assessing the impact of the september 11 terrorist attacks on us airline demand. _Journal of Economics and Business_ , 57(1):75–95, 2005.
* Javanmard and Montanari (2014a) Adel Javanmard and Andrea Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. _The Journal of Machine Learning Research_ , 15(1):2869–2909, 2014a.
* Javanmard and Montanari (2014b) Adel Javanmard and Andrea Montanari. Hypothesis testing in high-dimensional regression under the gaussian random design model: Asymptotic theory. _IEEE Transactions on Information Theory_ , 60(10):6522–6554, 2014b.
* Jordan et al. (2019) Michael I Jordan, Jason D Lee, and Yun Yang. Communication-efficient distributed statistical inference. _Journal of the American Statistical Association_ , 114(526):668–681, 2019.
* Kleiner et al. (2014) Ariel Kleiner, Ameet Talwalkar, Purnamrita Sarkar, and Michael I Jordan. A scalable bootstrap for massive data. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 76(4):795–816, 2014.
* Lan et al. (2018) Guanghui Lan, Soomin Lee, and Yi Zhou. Communication-efficient algorithms for decentralized and stochastic optimization. _Mathematical Programming_ , pages 1–48, 2018.
* Lee et al. (2017) Jason D Lee, Qiang Liu, Yuekai Sun, and Jonathan E Taylor. Communication-efficient sparse regression. _The Journal of Machine Learning Research_ , 18(1):115–144, 2017.
* Li et al. (2013) Runze Li, Dennis KJ Lin, and Bing Li. Statistical inference in massive data sets. _Applied Stochastic Models in Business and Industry_ , 29(5):399–409, 2013.
* Rudelson and Zhou (2013) M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. _IEEE Transactions on Information Theory_ , 59(6):3434–3447, 2013. doi: 10.1109/TIT.2013.2243201.
* Sengupta et al. (2016) Srijan Sengupta, Stanislav Volgushev, and Xiaofeng Shao. A subsampled double bootstrap for massive data. _Journal of the American Statistical Association_ , 111(515):1222–1232, 2016.
* Shi et al. (2018) Chengchun Shi, Wenbin Lu, and Rui Song. A massive data framework for m-estimators with cubic-rate. _Journal of the American Statistical Association_ , 113(524):1698–1709, 2018.
* Singh and Kaur (2014) Kamalpreet Singh and Ravinder Kaur. Hadoop: addressing challenges of big data. In _2014 IEEE International Advance Computing Conference (IACC)_ , pages 686–689. IEEE, 2014.
* Spindler et al. (2016) Martin Spindler, Victor Chernozhukov, and Christian Hansen. hdm: High-dimensional metrics. _R package version 0.1.0_ , available at http://CRAN.R-project.org/package=hdm, 2016.
* van de Geer et al. (2014) Sara van de Geer, Peter Bühlmann, Ya’acov Ritov, Ruben Dezeure, et al. On asymptotically optimal confidence regions and tests for high-dimensional models. _The Annals of Statistics_ , 42(3):1166–1202, 2014.
* Volgushev et al. (2019) Stanislav Volgushev, Shih-Kang Chao, Guang Cheng, et al. Distributed inference for quantile regression processes. _The Annals of Statistics_ , 47(3):1634–1662, 2019.
* Wang and Zhang (2017) Jialei Wang and Tong Zhang. Improved optimization of finite sums with minibatch stochastic variance reduced proximal iterations. _arXiv preprint arXiv:1706.07001_ , 2017.
* Wang et al. (2017) Jialei Wang, Mladen Kolar, Nathan Srebro, and Tong Zhang. Efficient distributed learning with sparsity. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , pages 3636–3645. JMLR. org, 2017.
* Yu et al. (2020a) Ming Yu, Varun Gupta, and Mladen Kolar. Simultaneous inference for pairwise graphical models with generalized score matching. _Journal of Machine Learning Research_ , 21(91):1–51, 2020a.
* Yu et al. (2020b) Yang Yu, Shih-Kang Chao, and Guang Cheng. Simultaneous inference for massive data: Distributed bootstrap. In _International Conference on Machine Learning_ , pages 10892–10901. PMLR, 2020b.
* Zhang and Zhang (2014) Cun-Hui Zhang and Stephanie S Zhang. Confidence intervals for low dimensional parameters in high dimensional linear models. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 76(1):217–242, 2014.
* Zhang and Cheng (2017) Xianyang Zhang and Guang Cheng. Simultaneous inference for high-dimensional linear models. _Journal of the American Statistical Association_ , 112(518):757–768, 2017.
* Zhang et al. (2012) Yuchen Zhang, Martin J Wainwright, and John C Duchi. Communication-efficient algorithms for statistical optimization. In _Advances in Neural Information Processing Systems_ , pages 1502–1510, 2012.
* Zhao et al. (2016) Tianqi Zhao, Guang Cheng, and Han Liu. A partially linear framework for massive heterogeneous data. _Annals of Statistics_ , 44(4):1400, 2016.
* Zhu et al. (2020) Xuening Zhu, Feng Li, and Hansheng Wang. Least squares approximation for a distributed system. _arXiv preprint arXiv:1908.04904_ , 2020.
SUPPLEMENTARY MATERIAL
## 1. Proofs of Main Results
To simplify notation, in the proofs we write
$\bar{\theta}=\widetilde{\theta}^{(\tau-1)}$, where
$\widetilde{\theta}^{(\tau-1)}$ is the $\ell_{1}$-penalized estimator at
iteration $\tau-1$ output by Algorithm 1, and we write
$\widetilde{\theta}=\widetilde{\theta}^{(\tau)}$ for the final output of
Algorithm 1.
Proof of Theorem 3. We apply Theorem 3 of Wang et al. (2017), whose
Assumption 2 is inherited from our Assumption (L1), and obtain that if $n\gg
s_{0}^{2}\log d$,
$\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}=\left\|\widetilde{\theta}^{(\tau-1)}-\theta^{\ast}\right\|_{1}=O_{P}\left(s_{0}\sqrt{\frac{\log
d}{N}}+\left(s_{0}\sqrt{\frac{\log d}{n}}\right)^{\tau}\right).$
Then, by Lemma 15, we have that $\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\overline{W}}(\alpha))-\alpha\right|=o(1),$ as long as
$n\gg{s^{*}}^{2}\log^{3+\kappa}d+{s^{*}}\log^{5+\kappa}d+s_{0}^{2}\log d$,
$k\gg{s^{*}}^{2}\log^{5+\kappa}d$, and
$s_{0}\sqrt{\frac{\log d}{N}}+\left(s_{0}\sqrt{\frac{\log
d}{n}}\right)^{\tau}\ll\min\left\\{\frac{1}{\sqrt{k{s^{*}}}\log^{1+\kappa}d},\frac{1}{\sqrt{n{s^{*}}}\log^{1+\kappa}d}\right\\}.$
These conditions hold if
$n\gg({s^{*}}^{2}+{s^{*}}s_{0}^{2})\log^{3+\kappa}d+{s^{*}}\log^{5+\kappa}d$,
$k\gg{s^{*}}s_{0}^{2}\log^{3+\kappa}d+{s^{*}}^{2}\log^{5+\kappa}d$, and
$\tau>\max\left\\{\frac{\log k+\log{{s^{*}}}+\log(C\log^{2+\kappa}d)}{\log
n-\log(s_{0}^{2})-\log\log
d},1+\frac{\log{{s^{*}}}+\log(s_{0}^{2})+\log(C\log^{3+\kappa}d)}{\log
n-\log(s_{0}^{2})-\log\log d}\right\\}.$
If $n=d^{\gamma_{n}}$, $k=d^{\gamma_{k}}$, $\overline{s}=s_{0}\vee
s^{*}=d^{\gamma_{s}}$ for some constants $\gamma_{n}$, $\gamma_{k}$, and
$\gamma_{s}$, then a sufficient condition is $\gamma_{n}>3\gamma_{s}$,
$\gamma_{k}>3\gamma_{s}$, and
$\tau\geq
1+\left\lfloor\max\left\\{\frac{\gamma_{k}+\gamma_{s}}{\gamma_{n}-2\gamma_{s}},1+\frac{3\gamma_{s}}{\gamma_{n}-2\gamma_{s}}\right\\}\right\rfloor.$
$\blacksquare$
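To make the floor formula above concrete, the following sketch (an added illustration with hypothetical parameter values, not part of the original proof) computes the minimal sufficient number of iterations $\tau$ under the scaling $n=d^{\gamma_{n}}$, $k=d^{\gamma_{k}}$, $s_{0}\vee s^{*}=d^{\gamma_{s}}$:

```python
import math

def min_tau_theorem3(gamma_n: float, gamma_k: float, gamma_s: float) -> int:
    # Sufficient condition from the proof of Theorem 3:
    # tau >= 1 + floor(max{(g_k+g_s)/(g_n-2g_s), 1 + 3g_s/(g_n-2g_s)}),
    # valid when gamma_n > 3*gamma_s and gamma_k > 3*gamma_s.
    if not (gamma_n > 3 * gamma_s and gamma_k > 3 * gamma_s):
        raise ValueError("need gamma_n > 3*gamma_s and gamma_k > 3*gamma_s")
    a = (gamma_k + gamma_s) / (gamma_n - 2 * gamma_s)
    b = 1 + 3 * gamma_s / (gamma_n - 2 * gamma_s)
    return 1 + math.floor(max(a, b))

# Hypothetical regime: n = d^0.6, k = d^0.3, sparsity level d^0.05.
print(min_tau_theorem3(0.6, 0.3, 0.05))  # -> 2
```

In this hypothetical regime, two iterations of Algorithm 1 already satisfy the condition.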
Proof of Theorem 4. Similarly to the proof of Theorem 3, applying Theorem 3
of Wang et al. (2017) and Lemma 16, we have that
$\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\widetilde{W}}(\alpha))-\alpha\right|=o(1),$ as long as
$n\gg{s^{*}}^{2}\log^{3+\kappa}d+{s^{*}}\log^{5+\kappa}d+s_{0}^{2}\log d$,
$n+k\gg{s^{*}}^{2}\log^{5+\kappa}d$, and
$s_{0}\sqrt{\frac{\log d}{N}}+\left(s_{0}\sqrt{\frac{\log
d}{n}}\right)^{\tau}\ll\min\left\\{\frac{1}{\sqrt{k{s^{*}}}\log^{1+\kappa}d},\frac{1}{{s^{*}}\sqrt{\log((n+k)d)}\log^{2+\kappa}d}\right\\}.$
These conditions hold if
$n\gg({s^{*}}^{2}+{s^{*}}s_{0}^{2})\log^{3+\kappa}d+{s^{*}}\log^{5+\kappa}d$,
$n+k\gg{s^{*}}^{2}\log^{5+\kappa}d$,
$nk\gg{s^{*}}^{2}s_{0}^{2}\log^{5+\kappa}d$, and
$\tau>\max\left\\{\frac{\log k+\log{{s^{*}}}+\log(C\log^{2+\kappa}d)}{\log
n-\log(s_{0}^{2})-\log\log
d},\frac{\log{{s^{*}}^{2}}+\log\log((n+k)d)+\log(C\log^{4+\kappa}d)}{\log
n-\log(s_{0}^{2})-\log\log d}\right\\}.$
If $n=d^{\gamma_{n}}$, $k=d^{\gamma_{k}}$, $\overline{s}=s_{0}\vee
s^{*}=d^{\gamma_{s}}$ for some constants $\gamma_{n}$, $\gamma_{k}$, and
$\gamma_{s}$, then a sufficient condition is $\gamma_{n}>3\gamma_{s}$,
$\gamma_{n}+\gamma_{k}>4\gamma_{s}$, and
$\tau\geq
1+\left\lfloor\frac{(\gamma_{k}\vee\gamma_{s})+\gamma_{s}}{\gamma_{n}-2\gamma_{s}}\right\rfloor.$
$\blacksquare$
Proof of Theorem 10. We apply Theorem 6 of Wang et al. (2017), where their
Assumption 2 is inherited from the corresponding assumption in Section 3.3, and obtain that if $n\gg
s_{0}^{4}\log d$,
$\displaystyle\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}$
$\displaystyle=\left\|\widetilde{\theta}^{(\tau-1)}-\theta^{\ast}\right\|_{1}=\begin{cases}O_{P}\left(s_{0}\sqrt{\frac{\log
d}{N}}+\frac{1}{s_{0}}\left(s_{0}^{2}\sqrt{\frac{\log
d}{n}}\right)^{2^{\tau-1}}\right),&\tau\leq\tau_{0}+1,\\\
O_{P}\left(s_{0}\sqrt{\frac{\log
d}{N}}+\frac{1}{s_{0}}\left(s_{0}^{2}\sqrt{\frac{\log
d}{n}}\right)^{2^{\tau_{0}}}\left(s_{0}\sqrt{\frac{\log
d}{n}}\right)^{\tau-\tau_{0}-1}\right),&\tau>\tau_{0}+1,\end{cases}$
where $\tau_{0}$ is the smallest integer $t$ such that
$\left(s_{0}^{2}\sqrt{\frac{\log d}{n}}\right)^{2^{t}}\lesssim
s_{0}\sqrt{\frac{\log d}{n}},$
that is,
$\tau_{0}=\left\lceil\log_{2}\left(\frac{\log n-\log(s_{0}^{2})-\log(C\log
d)}{\log n-\log(s_{0}^{4})-\log\log d}\right)\right\rceil.$
Then, by Lemma 17, we have that $\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\overline{W}}(\alpha))-\alpha\right|=o(1),$ as long as
$n\gg(s_{0}^{2}+{s^{*}}^{2})\log^{3+\kappa}d+(s_{0}+{s^{*}})\log^{5+\kappa}d+s_{0}^{4}\log
d$, $k\gg{s^{*}}^{2}\log^{5+\kappa}d$, and
$\displaystyle s_{0}\sqrt{\frac{\log
d}{N}}+\frac{1}{s_{0}}\left(s_{0}^{2}\sqrt{\frac{\log
d}{n}}\right)^{2^{\tau-1}}\ll\min\left\\{\frac{1}{\sqrt{k{s^{*}}}s_{0}\log^{1+\kappa}d},\frac{1}{\sqrt{n{s^{*}}}\log^{1+\kappa}d}\right\\},$
if $\tau\leq\tau_{0}+1$, and
$\displaystyle s_{0}\sqrt{\frac{\log
d}{N}}+\frac{1}{s_{0}}\left(s_{0}^{2}\sqrt{\frac{\log
d}{n}}\right)^{2^{\tau_{0}}}\left(s_{0}\sqrt{\frac{\log
d}{n}}\right)^{\tau-\tau_{0}-1}\ll\min\left\\{\frac{1}{\sqrt{k{s^{*}}}s_{0}\log^{1+\kappa}d},\frac{1}{\sqrt{n{s^{*}}}\log^{1+\kappa}d}\right\\},$
if $\tau>\tau_{0}+1$.
If $n=d^{\gamma_{n}}$, $k=d^{\gamma_{k}}$, $\overline{s}=s_{0}\vee
s^{*}=d^{\gamma_{s}}$ for some constants $\gamma_{n}$, $\gamma_{k}$, and
$\gamma_{s}$, then a sufficient condition is $\gamma_{n}>5\gamma_{s}$,
$\gamma_{k}>3\gamma_{s}$, and
$\displaystyle\tau$ $\displaystyle\geq
1+\left\lfloor\max\left\\{1+\log_{2}\frac{\gamma_{n}-\gamma_{s}}{\gamma_{n}-4\gamma_{s}},\tau_{0}+1+\frac{\gamma_{k}+(4\cdot
2^{\tau_{0}}+1)\gamma_{s}-2^{\tau_{0}}\gamma_{n}}{\gamma_{n}-2\gamma_{s}}\right\\}\right\rfloor$
$\displaystyle=\left\lfloor\max\left\\{2+\log_{2}\frac{\gamma_{n}-\gamma_{s}}{\gamma_{n}-4\gamma_{s}},\tau_{0}+2+\frac{\gamma_{k}+(4\cdot
2^{\tau_{0}}+1)\gamma_{s}-2^{\tau_{0}}\gamma_{n}}{\gamma_{n}-2\gamma_{s}}\right\\}\right\rfloor$
$\displaystyle=\left\lfloor\max\left\\{2+\log_{2}\frac{\gamma_{n}-\gamma_{s}}{\gamma_{n}-4\gamma_{s}},\tau_{0}+\frac{\gamma_{k}+\gamma_{s}}{\gamma_{n}-2\gamma_{s}}+\nu_{0}\right\\}\right\rfloor$
$\displaystyle=\max\left\\{\tau_{0}+\left\lfloor\frac{\gamma_{k}+\gamma_{s}}{\gamma_{n}-2\gamma_{s}}+\nu_{0}\right\rfloor,2+\left\lfloor\log_{2}\frac{\gamma_{n}-\gamma_{s}}{\gamma_{n}-4\gamma_{s}}\right\rfloor\right\\},$
where
$\tau_{0}=1+\left\lfloor\log_{2}\frac{\gamma_{n}-2\gamma_{s}}{\gamma_{n}-4\gamma_{s}}\right\rfloor,\quad\nu_{0}=2-\frac{2^{\tau_{0}}(\gamma_{n}-4\gamma_{s})}{\gamma_{n}-2\gamma_{s}}\in(0,1].$
$\blacksquare$
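Analogously, $\tau_{0}$, $\nu_{0}$, and the minimal sufficient $\tau$ for Theorem 10 can be evaluated mechanically; the sketch below (an added illustration with hypothetical exponents, not part of the proof) encodes the closed forms just displayed:

```python
import math

def min_tau_theorem10(gamma_n: float, gamma_k: float, gamma_s: float) -> int:
    # Encodes the closed forms above; requires gamma_n > 5*gamma_s
    # and gamma_k > 3*gamma_s.
    assert gamma_n > 5 * gamma_s and gamma_k > 3 * gamma_s
    r = (gamma_n - 2 * gamma_s) / (gamma_n - 4 * gamma_s)
    tau0 = 1 + math.floor(math.log2(r))
    nu0 = 2 - 2 ** tau0 / r   # = 2 - 2^{tau0} (g_n - 4 g_s) / (g_n - 2 g_s)
    return max(
        tau0 + math.floor((gamma_k + gamma_s) / (gamma_n - 2 * gamma_s) + nu0),
        2 + math.floor(math.log2((gamma_n - gamma_s) / (gamma_n - 4 * gamma_s))),
    )

print(min_tau_theorem10(0.6, 0.3, 0.05))  # -> 2 in this hypothetical regime
```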
Proof of Theorem 11. Similarly to the proof of Theorem 4, applying Theorem 3
of Wang et al. (2017) and Lemma 18, we have that
$\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\overline{W}}(\alpha))-\alpha\right|=o(1),$ as long as
$n\gg(s_{0}+{s^{*}})\log^{5+\kappa}d+(s_{0}^{2}+{s^{*}}^{2})\log^{3+\kappa}d$,
$n+k\gg{s^{*}}^{2}\log^{5+\kappa}d$, and
$\displaystyle s_{0}\sqrt{\frac{\log
d}{N}}+\frac{1}{s_{0}}\left(s_{0}^{2}\sqrt{\frac{\log
d}{n}}\right)^{2^{\tau-1}}$
$\displaystyle\ll\min\left\\{\frac{n+k}{{s^{*}}\left(n+k\sqrt{\log
d}+k^{3/4}\log^{3/4}d\right)\log^{2+\kappa}d},\frac{1}{\sqrt{k{s^{*}}}s_{0}\log^{1+\kappa}d},\frac{1}{\left(nk{s^{*}}\log^{1+\kappa}d\right)^{1/4}}\right\\},$
if $\tau\leq\tau_{0}+1$, and
$\displaystyle s_{0}\sqrt{\frac{\log
d}{N}}+\frac{1}{s_{0}}\left(s_{0}^{2}\sqrt{\frac{\log
d}{n}}\right)^{2^{\tau_{0}}}\left(s_{0}\sqrt{\frac{\log
d}{n}}\right)^{\tau-\tau_{0}-1}$
$\displaystyle\ll\min\left\\{\frac{n+k}{{s^{*}}\left(n+k\sqrt{\log
d}+k^{3/4}\log^{3/4}d\right)\log^{2+\kappa}d},\frac{1}{\sqrt{k{s^{*}}}s_{0}\log^{1+\kappa}d},\frac{1}{\left(nk{s^{*}}\log^{1+\kappa}d\right)^{1/4}}\right\\},$
if $\tau>\tau_{0}+1$, where
$\tau_{0}=\left\lceil\log_{2}\left(\frac{\log n-\log(s_{0}^{2})-\log(C\log
d)}{\log n-\log(s_{0}^{4})-\log\log d}\right)\right\rceil.$
Let $\overline{s}=s_{0}\vee s^{*}$. If $n=\overline{s}^{\gamma_{n}}$,
$k=\overline{s}^{\gamma_{k}}$, and $d=\overline{s}^{\gamma_{d}}$ for some
constants $\gamma_{n}$, $\gamma_{k}$, and $\gamma_{d}$, then a sufficient
condition is $\gamma_{n}>5$ and, if $\tau\leq\tau_{0}+1$,
$\tau\geq\max\left\\{2+\left\lfloor\log_{2}\frac{\gamma_{k}+1}{\gamma_{n}-4}\right\rfloor,1\right\\},$
and, if $\tau>\tau_{0}+1$,
$\displaystyle\tau$ $\displaystyle\geq
1+\left\lfloor\tau_{0}+1+\frac{\gamma_{k}+4\cdot
2^{\tau_{0}}+1-2^{\tau_{0}}\gamma_{n}}{\gamma_{n}-2}\right\rfloor$
$\displaystyle=\left\lfloor\tau_{0}+2+\frac{\gamma_{k}+4\cdot
2^{\tau_{0}}+1-2^{\tau_{0}}\gamma_{n}}{\gamma_{n}-2}\right\rfloor$
$\displaystyle=\left\lfloor\tau_{0}+\frac{\gamma_{k}+1}{\gamma_{n}-2}+\nu_{0}\right\rfloor$
$\displaystyle=\tau_{0}+\left\lfloor\frac{\gamma_{k}+1}{\gamma_{n}-2}+\nu_{0}\right\rfloor,$
where
$\tau_{0}=1+\left\lfloor\log_{2}\frac{\gamma_{n}-2}{\gamma_{n}-4}\right\rfloor,\quad\nu_{0}=2-\frac{2^{\tau_{0}}(\gamma_{n}-4)}{\gamma_{n}-2}\in(0,1].$
$\blacksquare$
## 2\. Technical Lemmas
###### Lemma 15 (k-grad)
In the sparse linear model, under the assumptions of Section 3.2, if
$n\gg{s^{*}}^{2}\log^{3+\kappa}d+{s^{*}}\log^{5+\kappa}d$,
$k\gg{s^{*}}^{2}\log^{5+\kappa}d$, and
$\displaystyle\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}\ll\min\left\\{\frac{1}{\sqrt{k{s^{*}}}\log^{1+\kappa}d},\frac{1}{\sqrt{n{s^{*}}}\log^{1+\kappa}d}\right\\},$
for some $\kappa>0$, then we have that
$\displaystyle\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\overline{W}}(\alpha))-\alpha\right|$ $\displaystyle=o(1),\quad\text{and}$
(17) $\displaystyle\sup_{\alpha\in(0,1)}\left|P(\widehat{T}\leq
c_{\overline{W}}(\alpha))-\alpha\right|$ $\displaystyle=o(1).$ (18)
Proof of Lemma 15. As noted by Zhang and Cheng (2017), since
$\|\sqrt{N}(\widetilde{\theta}-\theta^{\ast})\|_{\infty}=\max_{l}\sqrt{N}|\widetilde{\theta}_{l}-\theta^{\ast}_{l}|=\sqrt{N}\max_{l}\big{(}(\widetilde{\theta}_{l}-\theta^{\ast}_{l})\vee(\theta^{\ast}_{l}-\widetilde{\theta}_{l})\big{)}$,
the arguments for the bootstrap consistency result with
$\displaystyle T$
$\displaystyle=\max_{l}\sqrt{N}(\widetilde{\theta}-\theta^{\ast})_{l}\quad\text{and}$
(19) $\displaystyle\widehat{T}$
$\displaystyle=\max_{l}\sqrt{N}(\widehat{\theta}-\theta^{\ast})_{l}$ (20)
imply the bootstrap consistency result for
$T=\|\sqrt{N}(\widetilde{\theta}-\theta^{\ast})\|_{\infty}$ and
$\widehat{T}=\|\sqrt{N}(\widehat{\theta}-\theta^{\ast})\|_{\infty}$. Hence,
from now on, we redefine $T$ and $\widehat{T}$ as (19) and (20). Define an
oracle multiplier bootstrap statistic as
$\displaystyle W^{*}:\,=\max_{1\leq l\leq
d}-\frac{1}{\sqrt{N}}\sum_{i=1}^{n}\sum_{j=1}^{k}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z_{ij})\right)_{l}\epsilon_{ij}^{*},$
(21)
where $\\{\epsilon_{ij}^{*}\\}_{i=1,\dots,n;j=1,\dots,k}$ are $N$ independent
standard Gaussian variables, also independent of the entire dataset. The proof
consists of two steps; the first step is to show that $W^{*}$ achieves
bootstrap consistency, i.e., $\sup_{\alpha\in(0,1)}|P(T\leq
c_{W^{*}}(\alpha))-\alpha|$ converges to $0$, where
$c_{W^{*}}(\alpha)=\inf\\{t\in\mathbb{R}:P_{\epsilon}(W^{*}\leq
t)\geq\alpha\\},$ and the second step is to show the bootstrap consistency of
our proposed bootstrap statistic by showing that the quantiles of $\overline{W}$ and $W^{*}$
are close.
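For intuition, the oracle statistic $W^{*}$ and its conditional quantile can be simulated directly. The sketch below is an added illustration, not part of the proof: it takes $\Theta=I_{d}$ as a simplifying assumption and approximates the quantile $c_{W^{*}}(\alpha)$ empirically over Monte Carlo draws of $\epsilon^{*}$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 2000, 50
X = rng.normal(size=(N, d))      # design with Sigma = I_d, so Theta = I_d
e = rng.normal(size=N)           # Gaussian noise
scores = X * e[:, None]          # rows: Theta x_i e_i, the influence terms in (21)

# Monte Carlo draws of W* = max_l -N^{-1/2} sum_i (score_i)_l eps_i*,
# with eps* standard Gaussian and independent of the data.
eps = rng.normal(size=(500, N))
W_star = np.max(-(eps @ scores) / np.sqrt(N), axis=1)

c_95 = np.quantile(W_star, 0.95)  # empirical surrogate for c_{W*}(0.95)
print(round(float(c_95), 2))
```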
Note that
$\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)=\mathbb{E}[xx^{\top}]^{-1}x(x^{\top}\theta^{\ast}-y)=\Theta
xe$ and
$\mathbb{E}\left[\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)\right)\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)\right)^{\top}\right]=\Theta\mathbb{E}\left[xx^{\top}e^{2}\right]\Theta=\sigma^{2}\Theta\Sigma\Theta=\sigma^{2}\Theta.$
Then, under the assumptions of Section 3.2,
$\displaystyle\min_{l}\mathbb{E}\left[\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)\right)_{l}^{2}\right]=\sigma^{2}\min_{l}\Theta_{l,l}\geq\sigma^{2}\lambda_{\tiny{\min}}(\Theta)=\frac{\sigma^{2}}{\lambda_{\tiny{\max}}(\Sigma)},$
(22)
is bounded away from zero. Under the assumptions of Section 3.2, $x$ is sub-Gaussian, that
is, $w^{\top}x$ is sub-Gaussian with uniformly bounded $\psi_{2}$-norm for all
$w\in S^{d-1}$. To show $w^{\top}\Theta x$ is also sub-Gaussian with uniformly
bounded $\psi_{2}$-norm, we write it as
$w^{\top}\Theta x=(\Theta w)^{\top}x=\left\|\Theta
w\right\|_{2}\left(\frac{\Theta w}{\left\|\Theta
w\right\|_{2}}\right)^{\top}x.$
Since $\Theta w/\left\|\Theta w\right\|_{2}\in S^{d-1}$, we have that
$\left(\Theta w/\left\|\Theta w\right\|_{2}\right)^{\top}x$ is sub-Gaussian with
$O(1)$ $\psi_{2}$-norm, and hence, $w^{\top}\Theta x$ is sub-Gaussian with
$O(\left\|\Theta
w\right\|_{2})=O(\lambda_{\tiny{\max}}(\Theta))=O(\lambda_{\tiny{\min}}(\Sigma)^{-1})=O(1)$
$\psi_{2}$-norm, under the assumptions of Section 3.2. Since $e$ is also sub-Gaussian
under the same assumptions and is independent of $w^{\top}\Theta x$, we have
that $w^{\top}\Theta xe$ is sub-exponential with uniformly bounded
$\psi_{1}$-norm for all $w\in S^{d-1}$, and also, all
$\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)\right)_{l}$
are sub-exponential with uniformly bounded $\psi_{1}$-norm. Combining this
with (22), we have verified Assumption (E.1) of Chernozhukov et al. (2013) for
$\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)$.
Define
$\displaystyle T_{0}:\,=\max_{1\leq l\leq
d}-\sqrt{N}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right)_{l},$
(23)
which is a Bahadur representation of $T$. Under the condition
$\log^{7}(dN)/N\lesssim N^{-c}$ for some constant $c>0$, which holds if
$N\gtrsim\log^{7+\kappa}d$ for some $\kappa>0$, applying Theorem 3.2 and
Corollary 2.1 of Chernozhukov et al. (2013), we obtain that for some constant
$c>0$ and for every $v,\zeta>0$,
$\displaystyle\begin{split}\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{W^{*}}(\alpha))-\alpha\right|&\lesssim
N^{-c}+v^{1/3}\left(1\vee\log\frac{d}{v}\right)^{2/3}+P\left(\left|\\!\left|\\!\left|{\widehat{\Omega}-\Omega_{0}}\right|\\!\right|\\!\right|_{\max}>v\right)\\\
&\quad+\zeta\sqrt{1\vee\log\frac{d}{\zeta}}+P\left(|T-T_{0}|>\zeta\right),\end{split}$
(24)
where
$\displaystyle\begin{split}\widehat{\Omega}&:\,=\operatorname{cov}_{\epsilon}\left(-\frac{1}{\sqrt{N}}\sum_{i=1}^{n}\sum_{j=1}^{k}\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z_{ij})\epsilon_{ij}^{*}\right)\\\
&=\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\left(\frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{k}\nabla\mathcal{L}(\theta^{\ast};Z_{ij})\nabla\mathcal{L}(\theta^{\ast};Z_{ij})^{\top}\right)\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1},\quad\text{and}\end{split}$
(25) $\displaystyle\Omega_{0}$
$\displaystyle:\,=\operatorname{cov}\left(-\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)\right)=\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\mathbb{E}\left[\nabla\mathcal{L}(\theta^{\ast};Z)\nabla\mathcal{L}(\theta^{\ast};Z)^{\top}\right]\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}.$
(26)
To show the quantiles of $\overline{W}$ and $W^{*}$ are close, we first have
that for any $\omega$ such that $\alpha+\omega,\alpha-\omega\in(0,1)$,
$\displaystyle P(\\{T\leq c_{\overline{W}}(\alpha)\\}\ominus\\{T\leq
c_{W^{*}}(\alpha)\\})$ $\displaystyle\leq 2P(c_{W^{*}}(\alpha-\omega)<T\leq
c_{W^{*}}(\alpha+\omega))+P(c_{W^{*}}(\alpha-\omega)>c_{\overline{W}}(\alpha))+P(c_{\overline{W}}(\alpha)>c_{W^{*}}(\alpha+\omega)),$
where $\ominus$ denotes symmetric difference. Following the arguments in the
proof of Lemma 3.2 of Chernozhukov et al. (2013), we have that
$P(c_{\overline{W}}(\alpha)>c_{W^{*}}(\alpha+\pi(u)))\leq
P\left(\left|\\!\left|\\!\left|{\overline{\Omega}-\widehat{\Omega}}\right|\\!\right|\\!\right|_{\max}>u\right),\quad\text{and}$
$P(c_{W^{*}}(\alpha-\pi(u))>c_{\overline{W}}(\alpha))\leq
P\left(\left|\\!\left|\\!\left|{\overline{\Omega}-\widehat{\Omega}}\right|\\!\right|\\!\right|_{\max}>u\right),$
where $\pi(u):\,=u^{1/3}\left(1\vee\log(d/u)\right)^{2/3}$ and
$\displaystyle\begin{split}\overline{\Omega}&:\,=\operatorname{cov}_{\epsilon}\left(-\frac{1}{\sqrt{k}}\sum_{j=1}^{k}\widetilde{\Theta}\sqrt{n}\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\epsilon_{j}\right)\\\
&=\widetilde{\Theta}\left(\frac{1}{k}\sum_{j=1}^{k}n\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)^{\top}\right)\widetilde{\Theta}^{\top}.\end{split}$
(27)
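As an illustration of (27) (an added sketch; the array names and shapes are assumptions for exposition, not the paper's implementation), multiplier draws whose $\epsilon$-covariance equals $\overline{\Omega}$ use one Gaussian weight per machine:

```python
import numpy as np

def k_grad_draws(grads_local, grad_global, Theta_tilde, n, n_boot=500, seed=1):
    """grads_local: (k, d) per-machine gradients at theta_bar;
    grad_global: (d,) full-sample gradient at theta_bar; Theta_tilde: (d, d)."""
    rng = np.random.default_rng(seed)
    k = grads_local.shape[0]
    rows = np.sqrt(n) * (grads_local - grad_global)   # sqrt(n)(grad_j - grad_N)
    proj = rows @ Theta_tilde.T                       # Theta~ applied to each row
    eps = rng.normal(size=(n_boot, k))                # one weight per machine
    return np.max(-(eps @ proj) / np.sqrt(k), axis=1)
```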
By letting $\omega=\pi(u)$, we have that
$\displaystyle P(\\{T\leq c_{\overline{W}}(\alpha)\\}\ominus\\{T\leq
c_{W^{*}}(\alpha)\\})$ $\displaystyle\leq 2P(c_{W^{*}}(\alpha-\pi(u))<T\leq
c_{W^{*}}(\alpha+\pi(u)))+P(c_{W^{*}}(\alpha-\pi(u))>c_{\overline{W}}(\alpha))+P(c_{\overline{W}}(\alpha)>c_{W^{*}}(\alpha+\pi(u)))$
$\displaystyle\leq 2P(c_{W^{*}}(\alpha-\pi(u))<T\leq
c_{W^{*}}(\alpha+\pi(u)))+2P\left(\left|\\!\left|\\!\left|{\overline{\Omega}-\widehat{\Omega}}\right|\\!\right|\\!\right|_{\max}>u\right),$
where by (24),
$\displaystyle P(c_{W^{*}}(\alpha-\pi(u))<T\leq c_{W^{*}}(\alpha+\pi(u)))$
$\displaystyle=P(T\leq c_{W^{*}}(\alpha+\pi(u)))-P(T\leq
c_{W^{*}}(\alpha-\pi(u)))$
$\displaystyle\lesssim\pi(u)+N^{-c}+\zeta\sqrt{1\vee\log\frac{d}{\zeta}}+P\left(|T-T_{0}|>\zeta\right),$
and then,
$\displaystyle\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\overline{W}}(\alpha))-\alpha\right|$ $\displaystyle\lesssim
N^{-c}+v^{1/3}\left(1\vee\log\frac{d}{v}\right)^{2/3}+P\left(\left|\\!\left|\\!\left|{\widehat{\Omega}-\Omega_{0}}\right|\\!\right|\\!\right|_{\max}>v\right)$
$\displaystyle\quad+\zeta\sqrt{1\vee\log\frac{d}{\zeta}}+P\left(|T-T_{0}|>\zeta\right)+u^{1/3}\left(1\vee\log\frac{d}{u}\right)^{2/3}+P\left(\left|\\!\left|\\!\left|{\overline{\Omega}-\widehat{\Omega}}\right|\\!\right|\\!\right|_{\max}>u\right).$
(28)
Applying Lemmas 19, 24, and 23, we have that there exist some $\zeta,u,v>0$
such that
$\displaystyle\zeta\sqrt{1\vee\log\frac{d}{\zeta}}$
$\displaystyle+P\left(|T-T_{0}|>\zeta\right)=o(1),\quad\text{and}$ (29)
$\displaystyle u^{1/3}\left(1\vee\log\frac{d}{u}\right)^{2/3}$
$\displaystyle+P\left(\left|\\!\left|\\!\left|{\overline{\Omega}-\widehat{\Omega}}\right|\\!\right|\\!\right|_{\max}>u\right)=o(1),\quad\text{and}$
(30) $\displaystyle v^{1/3}\left(1\vee\log\frac{d}{v}\right)^{2/3}$
$\displaystyle+P\left(\left|\\!\left|\\!\left|{\widehat{\Omega}-\Omega_{0}}\right|\\!\right|\\!\right|_{\max}>v\right)=o(1),$
(31)
and hence, after simplifying the conditions, we obtain the first result in the
lemma. To obtain the second result, we use Lemma 20, which yields
$\displaystyle\xi\sqrt{1\vee\log\frac{d}{\xi}}+P\left(|\widehat{T}-T_{0}|>\xi\right)=o(1).$
(32)
$\blacksquare$
###### Lemma 16 (n+k-1-grad)
In the sparse linear model, under the assumptions of Section 3.2, if
$n\gg{s^{*}}^{2}\log^{3+\kappa}d+{s^{*}}\log^{5+\kappa}d$,
$n+k\gg{s^{*}}^{2}\log^{5+\kappa}d$, $nk\gtrsim\log^{7+\kappa}d$, and
$\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}\ll\min\left\\{\frac{1}{\sqrt{k{s^{*}}}\log^{1+\kappa}d},\frac{1}{{s^{*}}\sqrt{\log((n+k)d)}\log^{2+\kappa}d}\right\\},$
for some $\kappa>0$, then we have that
$\displaystyle\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\widetilde{W}}(\alpha))-\alpha\right|$ $\displaystyle=o(1),\quad\text{and}$
(33) $\displaystyle\sup_{\alpha\in(0,1)}\left|P(\widehat{T}\leq
c_{\widetilde{W}}(\alpha))-\alpha\right|$ $\displaystyle=o(1).$ (34)
Proof of Lemma 16. By the argument in the proof of Lemma 15, we have that
$\displaystyle\sup_{\alpha\in(0,1)}\left|P(T\leq
c_{\widetilde{W}}(\alpha))-\alpha\right|$ $\displaystyle\lesssim
N^{-c}+v^{1/3}\left(1\vee\log\frac{d}{v}\right)^{2/3}+P\left(\left|\\!\left|\\!\left|{\widehat{\Omega}-\Omega_{0}}\right|\\!\right|\\!\right|_{\max}>v\right)$
$\displaystyle\quad+\zeta\sqrt{1\vee\log\frac{d}{\zeta}}+P\left(|T-T_{0}|>\zeta\right)+u^{1/3}\left(1\vee\log\frac{d}{u}\right)^{2/3}+P\left(\left|\\!\left|\\!\left|{\widetilde{\Omega}-\widehat{\Omega}}\right|\\!\right|\\!\right|_{\max}>u\right),$
(35)
where
$\displaystyle\begin{split}\widetilde{\Omega}&:\,=\operatorname{cov}_{\epsilon}\left(-\frac{1}{\sqrt{n+k-1}}\left(\sum_{i=1}^{n}\widetilde{\Theta}\left(\nabla\mathcal{L}(\bar{\theta};Z_{i1})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\epsilon_{i1}+\sum_{j=2}^{k}\widetilde{\Theta}\sqrt{n}\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\epsilon_{j}\right)\right)\\\
&=\widetilde{\Theta}\frac{1}{n+k-1}\Bigg{(}\sum_{i=1}^{n}\left(\nabla\mathcal{L}(\bar{\theta};Z_{i1})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\left(\nabla\mathcal{L}(\bar{\theta};Z_{i1})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)^{\top}\\\
&\quad+\sum_{j=2}^{k}n\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)^{\top}\Bigg{)}\widetilde{\Theta}^{\top},\end{split}$
(36)
if $N\gtrsim\log^{7+\kappa}d$ for some $\kappa>0$. Applying Lemmas 19, 24, and
25, we have that there exist some $\zeta,u,v>0$ such that (29),
$\displaystyle
u^{1/3}\left(1\vee\log\frac{d}{u}\right)^{2/3}+P\left(\left|\\!\left|\\!\left|{\widetilde{\Omega}-\widehat{\Omega}}\right|\\!\right|\\!\right|_{\max}>u\right)=o(1),$
(37)
and (31) hold, and hence, after simplifying the conditions, we obtain the first
result in the lemma. To obtain the second result, we use Lemma 20, which
yields (32). $\blacksquare$
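For comparison with the k-grad sketch given after (27), the (n+k-1)-grad covariance $\widetilde{\Omega}$ in (36) additionally weights the $n$ individual gradients on the first machine. The following is a hypothetical sketch (array names and shapes assumed, not the paper's code):

```python
import numpy as np

def nk_grad_draws(grads_first, grads_local, grad_global, Theta_tilde, n,
                  n_boot=500, seed=1):
    """grads_first: (n, d) per-observation gradients on machine 1;
    grads_local: (k-1, d) gradients of machines 2..k; grad_global: (d,)."""
    rng = np.random.default_rng(seed)
    rows = np.vstack([grads_first - grad_global,
                      np.sqrt(n) * (grads_local - grad_global)])  # (n+k-1, d)
    proj = rows @ Theta_tilde.T
    eps = rng.normal(size=(n_boot, rows.shape[0]))  # one weight per row
    return np.max(-(eps @ proj) / np.sqrt(rows.shape[0]), axis=1)
```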
###### Lemma 17 (k-grad)
In the sparse GLM, under the assumptions of Section 3.3, if
$n\gg(s_{0}^{2}+{s^{*}}^{2})\log^{3+\kappa}d+(s_{0}+{s^{*}})\log^{5+\kappa}d$,
$k\gg{s^{*}}^{2}\log^{5+\kappa}d$, and
$\displaystyle\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}\ll\min\left\\{\frac{1}{\sqrt{k{s^{*}}}s_{0}\log^{1+\kappa}d},\frac{1}{\sqrt{n{s^{*}}}\log^{1+\kappa}d}\right\\},$
for some $\kappa>0$, then we have that (17) and (18) hold.
Proof of Lemma 17. We redefine $T$ and $\widehat{T}$ as (19) and (20). We
define an oracle multiplier bootstrap statistic as in (21). Under the assumptions of Section 3.3,
$\displaystyle\min_{l}\mathbb{E}\left[\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)\right)_{l}^{2}\right]$
$\displaystyle=\min_{l}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\mathbb{E}\left[\nabla\mathcal{L}(\theta^{\ast};Z)\nabla\mathcal{L}(\theta^{\ast};Z)^{\top}\right]\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\right)_{l,l}$
$\displaystyle\geq\lambda_{\tiny{\min}}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\mathbb{E}\left[\nabla\mathcal{L}(\theta^{\ast};Z)\nabla\mathcal{L}(\theta^{\ast};Z)^{\top}\right]\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\right)$
$\displaystyle\geq\lambda_{\tiny{\min}}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\right)^{2}\lambda_{\tiny{\min}}\left(\mathbb{E}\left[\nabla\mathcal{L}(\theta^{\ast};Z)\nabla\mathcal{L}(\theta^{\ast};Z)^{\top}\right]\right)$
$\displaystyle=\frac{\lambda_{\tiny{\min}}\left(\mathbb{E}\left[\nabla\mathcal{L}(\theta^{\ast};Z)\nabla\mathcal{L}(\theta^{\ast};Z)^{\top}\right]\right)}{\lambda_{\tiny{\max}}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})\right)^{2}}$
is bounded away from zero. Combining this with the assumptions of Section 3.3, we have
verified Assumption (E.1) of Chernozhukov et al. (2013) for
$\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}(\theta^{\ast};Z)$.
Then, we use the same argument as in the proof of Lemma 15, and obtain (28)
with
$\displaystyle\begin{split}\overline{\Omega}&:\,=\widetilde{\Theta}(\widetilde{\theta}^{(0)})\left(\frac{1}{k}\sum_{j=1}^{k}n\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)^{\top}\right)\widetilde{\Theta}(\widetilde{\theta}^{(0)})^{\top},\end{split}$
(38)
under the condition $\log^{7}(dN)/N\lesssim N^{-c}$ for some constant $c>0$,
which holds if $N\gtrsim\log^{7+\kappa}d$ for some $\kappa>0$. Applying Lemmas
21, 27, and 26, we have that there exist some $\zeta,u,v>0$ such that (29),
(30), and (31) hold, and hence, after simplifying the conditions, we obtain the
first result in the lemma. To obtain the second result, we use Lemma 22, which
yields (32). $\blacksquare$
###### Lemma 18 (n+k-1-grad)
In the sparse GLM, under the assumptions of Section 3.3, if
$n\gg(s_{0}+{s^{*}})\log^{5+\kappa}d+(s_{0}^{2}+{s^{*}}^{2})\log^{3+\kappa}d$,
$n+k\gg{s^{*}}^{2}\log^{5+\kappa}d$, $nk\gtrsim\log^{7+\kappa}d$, and
$\displaystyle\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}$
$\displaystyle\ll\min\Bigg{\\{}\frac{n+k}{{s^{*}}\left(n+k\sqrt{\log
d}+k^{3/4}\log^{3/4}d\right)\log^{2+\kappa}d},\frac{1}{\sqrt{k{s^{*}}}s_{0}\log^{1+\kappa}d},\frac{1}{\left(nk{s^{*}}\log^{1+\kappa}d\right)^{1/4}}\Bigg{\\}},$
for some $\kappa>0$, then we have that (33) and (34) hold.
Proof of Lemma 18. By the argument in the proof of Lemma 17, we obtain (35)
with
$\displaystyle\begin{split}\widetilde{\Omega}&:\,=\widetilde{\Theta}(\widetilde{\theta}^{(0)})\frac{1}{n+k-1}\Bigg{(}\sum_{i=1}^{n}\left(\nabla\mathcal{L}(\bar{\theta};Z_{i1})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\left(\nabla\mathcal{L}(\bar{\theta};Z_{i1})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)^{\top}\\\
&\quad+\sum_{j=2}^{k}n\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)\left(\nabla\mathcal{L}_{j}(\bar{\theta})-\nabla\mathcal{L}_{N}(\bar{\theta})\right)^{\top}\Bigg{)}\widetilde{\Theta}(\widetilde{\theta}^{(0)})^{\top},\end{split}$
(39)
if $N\gtrsim\log^{7+\kappa}d$ for some $\kappa>0$. Applying Lemmas 21, 27, and
28, we have that there exist some $\zeta,u,v>0$ such that (29), (37), and (31)
hold, and hence, after simplifying the conditions, we obtain the first result in
the lemma. To obtain the second result, we use Lemma 22, which yields (32).
$\blacksquare$
###### Lemma 19
$T$ and $T_{0}$ are defined as in (7) and (23), respectively. In the sparse linear
model, under the assumptions of Section 3.2, provided that
$\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}=O_{P}(r_{\bar{\theta}})$ and
$n\gg{s^{*}}\log d$, we have that
$|T-T_{0}|=O_{P}\left(r_{\bar{\theta}}\sqrt{{s^{*}}k\log d}+\frac{{s^{*}}\log
d}{\sqrt{n}}\right).$
Moreover, if $n\gg{s^{*}}^{2}\log^{3+\kappa}d$ and
$\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}\ll\frac{1}{\sqrt{k{s^{*}}}\log^{1+\kappa}d},$
for some $\kappa>0$, then there exists some $\zeta>0$ such that (29) holds.
Proof of Lemma 19. First, we note that
$\displaystyle|T-T_{0}|$ $\displaystyle\leq\max_{1\leq l\leq
d}\left|\sqrt{N}(\widetilde{\theta}-\theta^{\ast})_{l}+\sqrt{N}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right)_{l}\right|$
$\displaystyle=\sqrt{N}\left\|\widetilde{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty},$
where we use the fact that
$|\max_{l}a_{l}-\max_{l}b_{l}|\leq\max_{l}|a_{l}-b_{l}|$ for any two vectors
$a$ and $b$ of the same dimension. Next, we bound
$\left\|\widetilde{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$.
In linear model, we have that
$\widetilde{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})=\bar{\theta}+\widetilde{\Theta}\frac{X_{N}^{\top}(y_{N}-X_{N}\bar{\theta})}{N}-\theta^{\ast}-\Theta\frac{X_{N}^{\top}(y_{N}-X_{N}\theta^{\ast})}{N},$
and then,
$\displaystyle\left\|\widetilde{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle=\left\|\bar{\theta}+\widetilde{\Theta}\frac{X_{N}^{\top}(y_{N}-X_{N}\bar{\theta})}{N}-\theta^{\ast}-\Theta\frac{X_{N}^{\top}(y_{N}-X_{N}\theta^{\ast})}{N}\right\|_{\infty}$
$\displaystyle=\left\|\bar{\theta}+\widetilde{\Theta}\frac{X_{N}^{\top}(y_{N}-X_{N}\bar{\theta})}{N}-\theta^{\ast}-\widetilde{\Theta}\frac{X_{N}^{\top}(y_{N}-X_{N}\theta^{\ast})}{N}+\widetilde{\Theta}\frac{X_{N}^{\top}(y_{N}-X_{N}\theta^{\ast})}{N}-\Theta\frac{X_{N}^{\top}(y_{N}-X_{N}\theta^{\ast})}{N}\right\|_{\infty}$
$\displaystyle\leq\left\|\left(\widetilde{\Theta}\frac{X_{N}^{\top}X_{N}}{N}-I_{d}\right)(\bar{\theta}-\theta^{\ast})\right\|_{\infty}+\left\|\left(\widetilde{\Theta}-\Theta\right)\frac{X_{N}^{\top}e_{N}}{N}\right\|_{\infty}$
$\displaystyle\leq\left|\\!\left|\\!\left|{\widetilde{\Theta}\frac{X_{N}^{\top}X_{N}}{N}-I_{d}}\right|\\!\right|\\!\right|_{\max}\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}+\left|\\!\left|\\!\left|{\widetilde{\Theta}-\Theta}\right|\\!\right|\\!\right|_{\infty}\left\|\frac{X_{N}^{\top}e_{N}}{N}\right\|_{\infty},$
where we use the triangle inequality in the second to last inequality and the
fact that for any matrix $A$ and vector $a$ with compatible dimensions,
$\|Aa\|_{\infty}\leq\left|\\!\left|\\!\left|{A}\right|\\!\right|\\!\right|_{\max}\|a\|_{1}$
and
$\|Aa\|_{\infty}\leq\left|\\!\left|\\!\left|{A}\right|\\!\right|\\!\right|_{\infty}\|a\|_{\infty}$,
in the last inequality. Further applying the triangle inequality and the fact
that for any two matrices $A$ and $B$ with compatible dimensions,
$\left|\\!\left|\\!\left|{AB}\right|\\!\right|\\!\right|_{\max}\leq\left|\\!\left|\\!\left|{A}\right|\\!\right|\\!\right|_{\infty}\left|\\!\left|\\!\left|{B}\right|\\!\right|\\!\right|_{\max}$,
we have that
$\displaystyle\left|\\!\left|\\!\left|{\widetilde{\Theta}\frac{X_{N}^{\top}X_{N}}{N}-I_{d}}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle=\left|\\!\left|\\!\left|{\widetilde{\Theta}\frac{X_{N}^{\top}X_{N}}{N}-\widetilde{\Theta}\frac{X_{1}^{\top}X_{1}}{n}+\widetilde{\Theta}\frac{X_{1}^{\top}X_{1}}{n}-I_{d}}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle\leq\left|\\!\left|\\!\left|{\widetilde{\Theta}\left(\frac{X_{N}^{\top}X_{N}}{N}-\frac{X_{1}^{\top}X_{1}}{n}\right)}\right|\\!\right|\\!\right|_{\max}+\left|\\!\left|\\!\left|{\widetilde{\Theta}\frac{X_{1}^{\top}X_{1}}{n}-I_{d}}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle\leq\left|\\!\left|\\!\left|{\widetilde{\Theta}}\right|\\!\right|\\!\right|_{\infty}\left|\\!\left|\\!\left|{\frac{X_{N}^{\top}X_{N}}{N}-\frac{X_{1}^{\top}X_{1}}{n}}\right|\\!\right|\\!\right|_{\max}+\left|\\!\left|\\!\left|{\widetilde{\Theta}\frac{X_{1}^{\top}X_{1}}{n}-I_{d}}\right|\\!\right|\\!\right|_{\max}.$
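The elementary norm inequalities used above can be checked numerically; the following sanity-check sketch is an addition for illustration, not part of the original:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, a = rng.normal(size=(6, 6)), rng.normal(size=(6, 6)), rng.normal(size=6)

max_norm = lambda M: np.abs(M).max()              # |||M|||_max, entrywise max
inf_norm = lambda M: np.abs(M).sum(axis=1).max()  # |||M|||_inf, max row l1-norm

assert np.abs(A @ a).max() <= max_norm(A) * np.abs(a).sum()   # ||Aa||_inf <= |||A|||_max ||a||_1
assert np.abs(A @ a).max() <= inf_norm(A) * np.abs(a).max()   # ||Aa||_inf <= |||A|||_inf ||a||_inf
assert max_norm(A @ B) <= inf_norm(A) * max_norm(B) + 1e-12   # |||AB|||_max <= |||A|||_inf |||B|||_max
print("all three inequalities hold")
```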
Under the assumptions of Section 3.2, $X_{N}$ has sub-Gaussian rows. Then, by Lemma 35, if
$n\gg{s^{*}}\log d$, we have that
$\left|\\!\left|\\!\left|{\widetilde{\Theta}}\right|\\!\right|\\!\right|_{\infty}=\max_{l}\left\|\widetilde{\Theta}_{l}\right\|_{1}=O_{P}\left(\sqrt{{s^{*}}}\right),$
$\left|\\!\left|\\!\left|{\widetilde{\Theta}\frac{X_{1}^{\top}X_{1}}{n}-I_{d}}\right|\\!\right|\\!\right|_{\max}=O_{P}\left(\sqrt{\frac{\log
d}{n}}\right),$
and
$\left|\\!\left|\\!\left|{\widetilde{\Theta}-\Theta}\right|\\!\right|\\!\right|_{\infty}=\max_{l}\left\|\widetilde{\Theta}_{l}-\Theta_{l}\right\|_{1}=O_{P}\left({s^{*}}\sqrt{\frac{\log
d}{n}}\right).$
It remains to bound
$\left|\\!\left|\\!\left|{\frac{X_{N}^{\top}X_{N}}{N}-\frac{X_{1}^{\top}X_{1}}{n}}\right|\\!\right|\\!\right|_{\max}$
and $\left\|\frac{X_{N}^{\top}e_{N}}{N}\right\|_{\infty}$.
Under the assumptions of Section 3.2, each $x_{ij,l}$ is sub-Gaussian, and therefore the
product $x_{ij,l}x_{ij,l^{\prime}}$ of any two is sub-exponential. By
Bernstein’s inequality, we have that for any $t>0$,
$P\left(\left|\frac{(X_{N}^{\top}X_{N})_{l,l^{\prime}}}{N}-\Sigma_{l,l^{\prime}}\right|>t\right)\leq
2\exp\left(-cN\left(\frac{t^{2}}{\Sigma_{l,l^{\prime}}^{2}}\wedge\frac{t}{|\Sigma_{l,l^{\prime}}|}\right)\right),$
or for any $\delta\in(0,1)$,
$P\left(\left|\frac{(X_{N}^{\top}X_{N})_{l,l^{\prime}}}{N}-\Sigma_{l,l^{\prime}}\right|>|\Sigma_{l,l^{\prime}}|\left(\frac{\log\frac{2d^{2}}{\delta}}{cN}\vee\sqrt{\frac{\log\frac{2d^{2}}{\delta}}{cN}}\right)\right)\leq\frac{\delta}{d^{2}},$
for some constant $c>0$. Then, by the union bound, we have that
$\displaystyle
P\left(\left|\\!\left|\\!\left|{\frac{X_{N}^{\top}X_{N}}{N}-\Sigma}\right|\\!\right|\\!\right|_{\max}>\left|\\!\left|\\!\left|{\Sigma}\right|\\!\right|\\!\right|_{\max}\left(\frac{\log\frac{2d^{2}}{\delta}}{cN}\vee\sqrt{\frac{\log\frac{2d^{2}}{\delta}}{cN}}\right)\right)\leq\delta.$
(40)
Similarly, we have that
$\displaystyle
P\left(\left|\\!\left|\\!\left|{\frac{X_{1}^{\top}X_{1}}{n}-\Sigma}\right|\\!\right|\\!\right|_{\max}>\left|\\!\left|\\!\left|{\Sigma}\right|\\!\right|\\!\right|_{\max}\left(\frac{\log\frac{2d^{2}}{\delta}}{cn}\vee\sqrt{\frac{\log\frac{2d^{2}}{\delta}}{cn}}\right)\right)\leq\delta.$
(41)
Then, by the triangle inequality, we have that
$\displaystyle\left|\\!\left|\\!\left|{\frac{X_{N}^{\top}X_{N}}{N}-\frac{X_{1}^{\top}X_{1}}{n}}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle\leq\left|\\!\left|\\!\left|{\frac{X_{1}^{\top}X_{1}}{n}-\Sigma}\right|\\!\right|\\!\right|_{\max}+\left|\\!\left|\\!\left|{\frac{X_{N}^{\top}X_{N}}{N}-\Sigma}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle\lesssim\left|\\!\left|\\!\left|{\Sigma}\right|\\!\right|\\!\right|_{\max}\left(\frac{\log\frac{2d^{2}}{\delta}}{n}\vee\sqrt{\frac{\log\frac{2d^{2}}{\delta}}{n}}\right)$
$\displaystyle\lesssim\left(\frac{\log\frac{2d^{2}}{\delta}}{n}\vee\sqrt{\frac{\log\frac{2d^{2}}{\delta}}{n}}\right),$
with probability at least $1-\delta$, where we use
$\left|\\!\left|\\!\left|{\Sigma}\right|\\!\right|\\!\right|_{\max}\leq\left|\\!\left|\\!\left|{\Sigma}\right|\\!\right|\\!\right|_{2}=\lambda_{\tiny{\max}}(\Sigma)=O(1)$
under the assumptions of Section 3.2. This implies that
$\left|\\!\left|\\!\left|{\frac{X_{N}^{\top}X_{N}}{N}-\frac{X_{1}^{\top}X_{1}}{n}}\right|\\!\right|\\!\right|_{\max}=O_{P}\left(\sqrt{\frac{\log
d}{n}}\right).$
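The $\sqrt{\log d/n}$ rate in the last display can be seen in a quick simulation (an added illustration, assuming a standard Gaussian design with $\Sigma=I_{d}$):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200
for n in (500, 2000, 8000):
    X = rng.normal(size=(n, d))                  # Sigma = I_d
    dev = np.abs(X.T @ X / n - np.eye(d)).max()  # |||X'X/n - Sigma|||_max
    print(n, round(dev, 3), round(dev / np.sqrt(np.log(d) / n), 2))
```

The rightmost column stays roughly constant as $n$ grows, consistent with the $O_{P}(\sqrt{\log d/n})$ bound.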
Under the assumptions of Section 3.2, each $x_{ij,l}$ and $e_{ij}$ is sub-Gaussian, and therefore their product $x_{ij,l}e_{ij}$ is sub-exponential.
Applying Bernstein’s inequality, we have that for any $\delta\in(0,1)$,
$P\left(\left|\frac{(X_{N}^{\top}e_{N})_{l}}{N}\right|>\sqrt{\Sigma_{l,l}}\sigma\left(\frac{\log\frac{2d}{\delta}}{cN}\vee\sqrt{\frac{\log\frac{2d}{\delta}}{cN}}\right)\right)\leq\frac{\delta}{d},$
for some constant $c>0$. Then, by the union bound, we have that
$\displaystyle
P\left(\left\|\frac{X_{N}^{\top}e_{N}}{N}\right\|_{\infty}>\max_{l}\sqrt{\Sigma_{l,l}}\sigma\left(\frac{\log\frac{2d}{\delta}}{cN}\vee\sqrt{\frac{\log\frac{2d}{\delta}}{cN}}\right)\right)\leq\delta,$
(42)
and then,
$\left\|\frac{X_{N}^{\top}e_{N}}{N}\right\|_{\infty}=O_{P}\left(\sqrt{\frac{\log
d}{N}}\right).$
Putting all the preceding bounds together, we obtain that
$\displaystyle\left\|\widetilde{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle\leq\left(\left|\\!\left|\\!\left|{\widetilde{\Theta}}\right|\\!\right|\\!\right|_{\infty}\left|\\!\left|\\!\left|{\frac{X_{N}^{\top}X_{N}}{N}-\frac{X_{1}^{\top}X_{1}}{n}}\right|\\!\right|\\!\right|_{\max}+\left|\\!\left|\\!\left|{\widetilde{\Theta}\frac{X_{1}^{\top}X_{1}}{n}-I_{d}}\right|\\!\right|\\!\right|_{\max}\right)\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}+\left|\\!\left|\\!\left|{\widetilde{\Theta}-\Theta}\right|\\!\right|\\!\right|_{\infty}\left\|\frac{X_{N}^{\top}e_{N}}{N}\right\|_{\infty}$
$\displaystyle=\left(O_{P}\left(\sqrt{{s^{*}}}\right)O_{P}\left(\sqrt{\frac{\log
d}{n}}\right)+O_{P}\left(\sqrt{\frac{\log
d}{n}}\right)\right)O_{P}(r_{\bar{\theta}})+O_{P}\left({s^{*}}\sqrt{\frac{\log
d}{n}}\right)O_{P}\left(\sqrt{\frac{\log d}{N}}\right)$
$\displaystyle=O_{P}\left(\sqrt{\frac{{s^{*}}\log
d}{n}}r_{\bar{\theta}}+\frac{{s^{*}}\log d}{n\sqrt{k}}\right),$
where we assume that
$\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}=O_{P}(r_{\bar{\theta}})$, and
hence,
$|T-T_{0}|=O_{P}\left(r_{\bar{\theta}}\sqrt{{s^{*}}k\log d}+\frac{{s^{*}}\log
d}{\sqrt{n}}\right).$
Choosing
$\zeta=\left(r_{\bar{\theta}}\sqrt{{s^{*}}k\log d}+\frac{{s^{*}}\log
d}{\sqrt{n}}\right)^{1-\kappa},$
with any $\kappa>0$, we deduce that
$P\left(|T-T_{0}|>\zeta\right)=o(1).$
We also have that
$\zeta\sqrt{1\vee\log\frac{d}{\zeta}}=o(1),$
provided that
$\left(r_{\bar{\theta}}\sqrt{{s^{*}}k\log d}+\frac{{s^{*}}\log
d}{\sqrt{n}}\right)\log^{1/2+\kappa}d=o(1),$
which holds if
$n\gg{s^{*}}^{2}\log^{3+\kappa}d,$
and
$r_{\bar{\theta}}\ll\frac{1}{\sqrt{k{s^{*}}}\log^{1+\kappa}d}.$
$\blacksquare$
###### Lemma 20
$\widehat{T}$ and $T_{0}$ are defined as in (20) and (23) respectively. In
the sparse linear model, under the assumptions of Section 3.2, provided that
$n\gg{s^{*}}\log d$, we have that
$|\widehat{T}-T_{0}|=O_{P}\left(\frac{\left(s_{0}\sqrt{s^{*}}+{s^{*}}\right)\log
d}{\sqrt{n}}\right).$
Moreover, if $n\gg\left(s_{0}^{2}{s^{*}}+{s^{*}}^{2}\right)\log^{3+\kappa}d$
for some $\kappa>0$, then there exists some $\xi>0$ such that (32) holds.
Proof of Lemma 20. By the proof of Lemma 19, we obtain that
$\displaystyle|\widehat{T}-T_{0}|$ $\displaystyle\leq\max_{1\leq l\leq
d}\left|\sqrt{N}(\widehat{\theta}-\theta^{\ast})_{l}+\sqrt{N}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right)_{l}\right|$
$\displaystyle=\sqrt{N}\left\|\widehat{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle=\sqrt{N}\left\|\widehat{\theta}_{L}+\widetilde{\Theta}\frac{X_{N}^{\top}(y_{N}-X_{N}\widehat{\theta}_{L})}{N}-\theta^{\ast}-\Theta\frac{X_{N}^{\top}(y_{N}-X_{N}\theta^{\ast})}{N}\right\|_{\infty}$
$\displaystyle\leq\left|\\!\left|\\!\left|{\widetilde{\Theta}\frac{X_{N}^{\top}X_{N}}{N}-I_{d}}\right|\\!\right|\\!\right|_{\max}\left\|\widehat{\theta}_{L}-\theta^{\ast}\right\|_{1}+\left|\\!\left|\\!\left|{\widetilde{\Theta}-\Theta}\right|\\!\right|\\!\right|_{\infty}\left\|\frac{X_{N}^{\top}e_{N}}{N}\right\|_{\infty}$
$\displaystyle=O_{P}\left(\sqrt{{s^{*}}k\log
d}\right)\left\|\widehat{\theta}_{L}-\theta^{\ast}\right\|_{1}+O_{P}\left(\frac{{s^{*}}\log
d}{\sqrt{n}}\right).$
Since
$\left\|\widehat{\theta}_{L}-\theta^{\ast}\right\|_{1}=O_{P}\left(s_{0}\sqrt{\frac{\log
d}{N}}\right),$
we have that
$\displaystyle|\widehat{T}-T_{0}|=O_{P}\left(\frac{\left(s_{0}\sqrt{s^{*}}+{s^{*}}\right)\log
d}{\sqrt{n}}\right).$
Choosing
$\xi=\left(\frac{\left(s_{0}\sqrt{s^{*}}+{s^{*}}\right)\log
d}{\sqrt{n}}\right)^{1-\kappa},$
with any $\kappa>0$, we deduce that
$P\left(|\widehat{T}-T_{0}|>\xi\right)=o(1).$
We also have that
$\xi\sqrt{1\vee\log\frac{d}{\xi}}=o(1),$
provided that
$\left(\frac{\left(s_{0}\sqrt{s^{*}}+{s^{*}}\right)\log
d}{\sqrt{n}}\right)\log^{1/2+\kappa}d=o(1),$
which holds if
$n\gg\left(s_{0}^{2}{s^{*}}+{s^{*}}^{2}\right)\log^{3+\kappa}d.$
$\blacksquare$
###### Lemma 21
$T$ and $T_{0}$ are defined as in (7) and (23), respectively. In the sparse GLM,
under the assumptions of Section 3.3, provided that
$\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}=O_{P}(r_{\bar{\theta}})$ and
$n\gg s_{0}^{2}\log^{2}d+{s^{*}}^{2}\log d$, we have that
$|T-T_{0}|=O_{P}\left(r_{\bar{\theta}}\sqrt{{s^{*}}k\log d}+\frac{{s^{*}}\log
d}{\sqrt{n}}\right).$
Moreover, if $n\gg({s^{*}}^{2}+s_{0}^{2})\log^{3+\kappa}d$ and
$\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}\ll\min\left\\{\frac{1}{\sqrt{k{s^{*}}}s_{0}\log^{1+\kappa}d},\frac{1}{\left(nk{s^{*}}\log^{1+\kappa}d\right)^{1/4}}\right\\},$
for some $\kappa>0$, then there exists some $\zeta>0$ such that (29) holds.
Proof of Lemma 21. Following the argument in the proof of Lemma 19, we have
that
$\displaystyle|T-T_{0}|$ $\displaystyle\leq\max_{1\leq l\leq
d}\left|\sqrt{N}(\widetilde{\theta}_{l}-\theta^{\ast}_{l})+\sqrt{N}\left(\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right)_{l}\right|$
$\displaystyle=\sqrt{N}\left\|\widetilde{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty},$
and
$\displaystyle\left\|\widetilde{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle=\left\|\bar{\theta}-\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla\mathcal{L}_{N}(\bar{\theta})-\theta^{\ast}+\Theta\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle=\left\|\bar{\theta}-\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla\mathcal{L}_{N}(\bar{\theta})-\theta^{\ast}+\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla\mathcal{L}_{N}(\theta^{\ast})-\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla\mathcal{L}_{N}(\theta^{\ast})+\Theta\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle\leq\left\|\bar{\theta}-\theta^{\ast}-\widetilde{\Theta}(\widetilde{\theta}^{(0)})\left(\nabla\mathcal{L}_{N}(\bar{\theta})-\nabla\mathcal{L}_{N}(\theta^{\ast})\right)\right\|_{\infty}+\left\|\left(\widetilde{\Theta}(\widetilde{\theta}^{(0)})-\Theta\right)\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}.$
By Taylor’s theorem, we have that
$\displaystyle\nabla\mathcal{L}_{N}(\bar{\theta})-\nabla\mathcal{L}_{N}(\theta^{\ast})=\int_{0}^{1}\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))dt(\bar{\theta}-\theta^{\ast}),$
(43)
and then,
$\displaystyle\left\|\widetilde{\theta}-\theta^{\ast}+\nabla^{2}\mathcal{L}^{\ast}(\theta^{\ast})^{-1}\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle\leq\left\|\bar{\theta}-\theta^{\ast}-\widetilde{\Theta}(\widetilde{\theta}^{(0)})\int_{0}^{1}\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))dt(\bar{\theta}-\theta^{\ast})\right\|_{\infty}+\left\|\left(\widetilde{\Theta}(\widetilde{\theta}^{(0)})-\Theta\right)\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle=\left\|\int_{0}^{1}\left(\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))-I_{d}\right)dt(\bar{\theta}-\theta^{\ast})\right\|_{\infty}+\left\|\left(\widetilde{\Theta}(\widetilde{\theta}^{(0)})-\Theta\right)\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}$
$\displaystyle\leq\int_{0}^{1}\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))-I_{d}}\right|\\!\right|\\!\right|_{\max}dt\left\|\bar{\theta}-\theta^{\ast}\right\|_{1}+\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})-\Theta}\right|\\!\right|\\!\right|_{\infty}\left\|\nabla\mathcal{L}_{N}(\theta^{\ast})\right\|_{\infty}.$
By the triangle inequality, we have that
$\displaystyle\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))-I_{d}}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle=\bigg{|}\\!\bigg{|}\\!\bigg{|}\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))-\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{N}(\theta^{\ast})+\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{N}(\theta^{\ast})-\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{1}(\theta^{\ast})$
$\displaystyle\quad+\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{1}(\theta^{\ast})-\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)})+\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)})-I_{d}\bigg{|}\\!\bigg{|}\\!\bigg{|}_{\max}$
$\displaystyle\leq\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})\left(\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))-\nabla^{2}\mathcal{L}_{N}(\theta^{\ast})\right)}\right|\\!\right|\\!\right|_{\max}+\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})\left(\nabla^{2}\mathcal{L}_{N}(\theta^{\ast})-\nabla^{2}\mathcal{L}_{1}(\theta^{\ast})\right)}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle\quad+\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})\left(\nabla^{2}\mathcal{L}_{1}(\theta^{\ast})-\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)})\right)}\right|\\!\right|\\!\right|_{\max}+\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)})-I_{d}}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle\leq\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})}\right|\\!\right|\\!\right|_{\infty}\bigg{(}\left|\\!\left|\\!\left|{\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))-\nabla^{2}\mathcal{L}_{N}(\theta^{\ast})}\right|\\!\right|\\!\right|_{\max}+\left|\\!\left|\\!\left|{\nabla^{2}\mathcal{L}_{N}(\theta^{\ast})-\nabla^{2}\mathcal{L}_{1}(\theta^{\ast})}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle\quad+\left|\\!\left|\\!\left|{\nabla^{2}\mathcal{L}_{1}(\theta^{\ast})-\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)})}\right|\\!\right|\\!\right|_{\max}\bigg{)}+\left|\\!\left|\\!\left|{\widetilde{\Theta}(\widetilde{\theta}^{(0)})\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)})-I_{d}}\right|\\!\right|\\!\right|_{\max}.$
Under the assumptions of Section 3.3, we have by Taylor’s theorem that
$\displaystyle\left|g^{\prime\prime}(y_{ij},x_{ij}^{\top}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast})))-g^{\prime\prime}(y_{ij},x_{ij}^{\top}\theta^{\ast})\right|$
$\displaystyle=\left|\int_{0}^{1}g^{\prime\prime\prime}(y_{ij},x_{ij}^{\top}(\theta^{\ast}+st(\bar{\theta}-\theta^{\ast})))ds\cdot
tx_{ij}^{\top}(\bar{\theta}-\theta^{\ast})\right|$
$\displaystyle\lesssim\left|x_{ij}^{\top}(\bar{\theta}-\theta^{\ast})\right|,$
and then by the triangle inequality,
$\displaystyle\left|\\!\left|\\!\left|{\nabla^{2}\mathcal{L}_{N}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast}))-\nabla^{2}\mathcal{L}_{N}(\theta^{\ast})}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle=\left|\\!\left|\\!\left|{\frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{k}x_{ij}x_{ij}^{\top}\left(g^{\prime\prime}(y_{ij},x_{ij}^{\top}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast})))-g^{\prime\prime}(y_{ij},x_{ij}^{\top}\theta^{\ast})\right)}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle\leq\frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{k}\left|\\!\left|\\!\left|{x_{ij}x_{ij}^{\top}\left(g^{\prime\prime}(y_{ij},x_{ij}^{\top}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast})))-g^{\prime\prime}(y_{ij},x_{ij}^{\top}\theta^{\ast})\right)}\right|\\!\right|\\!\right|_{\max}$
$\displaystyle=\frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{k}\left|\\!\left|\\!\left|{x_{ij}x_{ij}^{\top}}\right|\\!\right|\\!\right|_{\max}\left|g^{\prime\prime}(y_{ij},x_{ij}^{\top}(\theta^{\ast}+t(\bar{\theta}-\theta^{\ast})))-g^{\prime\prime}(y_{ij},x_{ij}^{\top}\theta^{\ast})\right|$
$\displaystyle\lesssim\frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{k}\|x_{ij}\|_{\infty}^{2}\left|x_{ij}^{\top}(\bar{\theta}-\theta^{\ast})\right|$
$\displaystyle\leq\frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{k}\|x_{ij}\|_{\infty}^{3}\|\bar{\theta}-\theta^{\ast}\|_{1}$
$\displaystyle\lesssim\left\|\bar{\theta}-\theta^{\ast}\right\|_{1},$ (44)
where we use that $\|x_{ij}\|_{\infty}=O(1)$ under the assumptions of Section 3.3 in the
last inequality. Similarly, we have that
$\left|\\!\left|\\!\left|{\nabla^{2}\mathcal{L}_{1}(\theta^{\ast})-\nabla^{2}\mathcal{L}_{1}(\widetilde{\theta}^{(0)})}\right|\\!\right|\\!\right|_{\max}\lesssim\|\widetilde{\theta}^{(0)}-\theta^{\ast}\|_{1}=O_{P}\left(s_{0}\sqrt{\frac{\log
d}{n}}\right),$
Department of Physics and Astronomy, Rutherford Building,
University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand
# CP violating Tri-bimaximal-Cabibbo mixing
D. V. Ahluwalia
###### Abstract
In view of the new data from the Daya Bay and RENO collaborations, King has
presented a very natural deformation of tri-bimaximal mixing. Here we show
that $L/E$ flatness of the $e$-like event ratio in the atmospheric neutrino
data, when coupled with King’s observation that the smallest neutrino mixing
angle, $\theta_{13}$, seems to be related to the largest quark mixing angle
(the Cabibbo angle $\theta_{C}$), leads to a CP violating tri-bimaximal-
Cabibbo mixing. King’s tri-bimaximal-Cabibbo mixing follows as a leading order
approximation from our result.
###### Keywords:
Neutrino physics, CP violation
The precise form of the neutrino mixing matrix, $U$, that defines the
relationship between the flavour and mass eigenstates, $|\nu_{\ell}\rangle$
and $|\nu_{j}\rangle$ respectively Chau:1984fp ; Beringer:2012bj , reads
$|\nu_{\ell}\rangle=\sum_{j}U^{\ast}_{\ell
j}|\nu_{j}\rangle,\quad\ell=e,\mu,\tau,\quad j=1,2,3,$ (1)
and the knowledge of the masses of the underlying mass eigenstates arise
from yet-unknown physics.
mixing matrix and the mass-squared differences are deciphered from the data
one can derive their phenomenological consequences on supernova explosions
Ahluwalia:2004dv ; Lunardini:2007vn ; Duan:2006an ; Duan:2007sh , on the
synthesis of elements Yoshida:2006sk , on the cosmic microwave background and
the distribution of large-scale structure Lesgourgues:2006nd . In particular,
if the neutrino mixing angle $\theta_{13}\neq 0$ then one can obtain CP
violation in the neutrino sector with many interesting physical consequences
Khlopov:1981nq ; Frampton:2002qc ; Balantekin:2007es .
The T2K, MINOS, and Double CHOOZ indications that the smallest neutrino mixing
angle $\theta_{13}$ may be non-zero Abe:2011ph ; Adamson:2011qu ; Abe:2011fz
have now been confirmed by the results of the Daya Bay and RENO collaborations
An:2012eh ; Ahn:2012nd . King has made the observation King:2012vj that the
smallest neutrino mixing angle $\theta_{13}$ seems to be related to the
largest quark mixing angle, the Cabibbo angle $\theta_{C}$ Cabibbo:1963yz , or
equivalently to the Wolfenstein parameter, $\lambda=0.2253\pm 0.0007$
Wolfenstein:1983yz ; Beringer:2012bj :111It is worth noting that Mohapatra and
Smirnov had earlier conjectured King’s observation (Mohapatra:2006gs, Sec. 3.1).
$\theta_{13}~{}\mbox{(or,
}\theta_{reac}\mbox{)}=\arcsin\left(\frac{\sin\theta_{C}}{\sqrt{2}}\right)=\arcsin\left(\frac{\lambda}{\sqrt{2}}\right).$
(2)
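Numerically, equation (2) with the quoted Wolfenstein parameter gives $\theta_{13}\approx 9.2^{\circ}$; a one-line check (an added illustration, not from the original):

```python
import math

lam = 0.2253                             # Wolfenstein parameter
theta13 = math.asin(lam / math.sqrt(2))  # equation (2)
print(round(math.degrees(theta13), 2))   # -> 9.17 degrees
```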
To this observation we now add that the $L/E$ — where $L$ is the neutrino
source-detector distance and $E$ is the neutrino energy — flatness of the
$e$-like event ratio observed for atmospheric neutrinos Fukuda:1998mi
requires that
$\theta_{23}~{}\mbox{(or,
}\theta_{atm}\mbox{)}=\frac{\pi}{4},\quad\delta=\pm\frac{\pi}{2}.$ (3)
This observation was first made in reference Ahluwalia:2002tr . The $\delta$
obtained in Ahluwalia:2002tr was also introduced recently as an Ansatz in
Ref. Zhang:2012ys .
Global analysis of neutrino oscillation data by two independent groups shows:
(a) $\delta$ to be $\left(0.83^{+0.54}_{-0.64}\right)\pi$ for the normal mass
hierarchy while allowing for the full $[0,2\pi]$ range for the inverted mass
hierarchy Tortola:2012te , (b) $\delta\approx\pi$ with no significant
difference between the normal and inverted mass hierarchies Fogli:2012ua . A
detailed study of these two papers reveals that there is no statistically
significant indication which disfavours $\delta=\pm\pi/2$. Regarding
$\theta_{23}$: (a) the first of the mentioned groups obtains
$\sin^{2}\theta_{23}=0.49^{+0.08}_{-0.05}$ for the normal mass hierarchy, and
$\sin^{2}\theta_{23}=0.53^{+0.05}_{-0.07}$ for the inverted mass hierarchy
(these values are consistent with $\theta_{23}=\pi/4$), while (b) the second
group finds a slight preference for $\theta_{23}<\pi/4$.
Both groups agree with the tri-bimaximal mixing value for the remaining angle
Tortola:2012te ; Fogli:2012ua
$\theta_{12}~{}\mbox{(or,
}\theta_{\odot}\mbox{)}=\arcsin\left(\frac{1}{\sqrt{3}}\right).$ (4)
With all the angles and phases thus fixed, the neutrino mixing matrix for the
choice $\delta=\pi/2$ in equation (3) takes the form
$U^{+}=\begin{pmatrix}\sqrt{\frac{2}{3}}\left(1-\frac{\lambda^{2}}{2}\right)^{1/2}&\sqrt{\frac{1}{3}}\left(1-\frac{\lambda^{2}}{2}\right)^{1/2}&i\frac{1}{\sqrt{2}}\lambda\\\
-\frac{1}{\sqrt{6}}\left(1-i\lambda\right)&\frac{1}{\sqrt{3}}\left(1+i\frac{1}{2}\lambda\right)&\frac{1}{\sqrt{2}}\left(1-\frac{\lambda^{2}}{2}\right)^{1/2}\\\
\frac{1}{\sqrt{6}}\left(1+i\lambda\right)&-\frac{1}{\sqrt{3}}\left(1-i\frac{1}{2}\lambda\right)&\frac{1}{\sqrt{2}}\left(1-\frac{\lambda^{2}}{2}\right)^{1/2}\end{pmatrix}.$
(5)
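One can verify directly that $U^{+}$ is exactly unitary for any $\lambda$, not merely to leading order. The following numerical sketch (an added check, not part of the original) builds the matrix from (5):

```python
import numpy as np

lam = 0.2253
a = np.sqrt(1 - lam**2 / 2)
U_plus = np.array([
    [np.sqrt(2/3) * a,            np.sqrt(1/3) * a,             1j * lam / np.sqrt(2)],
    [-(1 - 1j*lam) / np.sqrt(6),  (1 + 1j*lam/2) / np.sqrt(3),  a / np.sqrt(2)],
    [ (1 + 1j*lam) / np.sqrt(6), -(1 - 1j*lam/2) / np.sqrt(3),  a / np.sqrt(2)],
])

print(np.allclose(U_plus.conj().T @ U_plus, np.eye(3)))  # True: exact unitarity
print(round(float(np.degrees(np.arcsin(abs(U_plus[0, 2])))), 2))  # theta_13 ~ 9.17
```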
Its counterpart, $U^{-}$, for $\delta=-\pi/2$ is obtained by letting $i\to-i$
in $U^{+}$. As a measure of CP violation, following Beringer:2012bj , we
define the asymmetries
$A_{CP}^{(\ell^{\prime}\ell)}:=P(\nu_{\ell}\to\nu_{\ell^{\prime}})-P(\bar{\nu}_{\ell}\to\bar{\nu}_{\ell^{\prime}}),$ (6)
and find
$\displaystyle A_{CP}^{(\mu e)}=-A^{(\tau e)}_{CP}=A_{CP}^{(\tau\mu)}$
$\displaystyle=\mp\frac{1}{3}\lambda\left(2-\lambda^{2}\right)\left(\sin\frac{\Delta
m^{2}_{32}}{2p}L+\sin\frac{\Delta m^{2}_{21}}{2p}L+\sin\frac{\Delta
m^{2}_{13}}{2p}L\right)$ $\displaystyle\approx\mp 0.146\left(\sin\frac{\Delta
m^{2}_{32}}{2p}L+\sin\frac{\Delta m^{2}_{21}}{2p}L+\sin\frac{\Delta
m^{2}_{13}}{2p}L\right),$ (7)
where all symbols have their usual meaning. The $\mp$ sign holds for
$\delta=\pm\frac{\pi}{2}$. For $\lambda=0$, or equivalently $\theta_{13}=0$,
the $U^{\pm}$ reduce to the standard tri-bimaximal mixing matrix
Harrison:2002er .222This may be compared with (Stancu:1999ct, Eq. (26)), which
gives an interpolating matrix with $\theta_{\odot}$ as a variable. In one
limit the interpolating matrix gives the bimaximal mixing Vissani:1997pa ;
Ahluwalia:1998xb ; Barger:1998ta and in another it yields tri-bimaximal
mixing Harrison:2002er .
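The numerical prefactor in (7) follows directly from the quoted value of $\lambda$: $\frac{1}{3}\lambda(2-\lambda^{2})\approx 0.146$. A one-line check (an added illustration):

```python
lam = 0.2253
print(round(lam * (2 - lam**2) / 3, 3))  # -> 0.146, the prefactor in (7)
```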
The result (7) is modified by matter effects Wolfenstein:1977ue ;
Mikheev:1986gs . Its general features are studied in detail by various authors
Gava:2008rp ; Balantekin:2007es ; Kneller:2009vd ; Kisslinger:2012se . In
gravitational environments the following argument suggests that one must
expect a significant modification to the result (7). Neutrino oscillations
provide us with a set of flavour oscillation clocks. These clocks must
redshift according to the general expectations of the theory of general
relativity. In gravitational environments of neutron stars the dimensionless
gravitational potential is $\Phi^{NS}_{grav}\approx 0.2$ (cf. for Earth,
$\Phi^{\oplus}_{grav}\approx 6.95\times 10^{-10}$). For a given source-detector distance and a given energy, the asymmetries $A_{CP}$ for supernovae
modeling must be accordingly modified Ahluwalia:1996ev ; Ahluwalia:1998jx ;
Konno:1998kq ; Wudka:2000rf ; Mukhopadhyay:2005gb ; Singh:2003sp at the
$20\%$ level, or thereabouts.
An examination of the $U^{\pm}$ immediately shows that the expectation values
of the $\nu_{\mu}$ and $\nu_{\tau}$ masses are identical. To
$\mathcal{O}(\lambda^{2})$, the $U^{-}$ obtained above reproduces King’s
result (King:2012vj, Eq. (8)) for $\delta=\pi/2$. The presented $U^{\pm}$
not only accommodate the implications of the Daya Bay and RENO collaborations,
but also the L/E flatness of the $e$-like event ratio seen in the atmospheric
neutrino data while respecting all other known data on neutrino oscillations.
###### Acknowledgements.
The result presented here was obtained on 10 May 2012, and was presented the
next day at a MatScience seminar. The author thanks the Institute of Mathematical
Sciences (“MatScience”, Chennai, India) for its hospitality and for its
vibrant scholarly environment.
## References
* (1) L.-L. Chau and W.-Y. Keung, Comments on the Parametrization of the Kobayashi-Maskawa Matrix, Phys.Rev.Lett. 53 (1984) 1802.
* (2) Particle Data Group Collaboration, J. Beringer et al., The review of particle physics, Phys. Rev. D86 (2012) 010001.
* (3) D. V. Ahluwalia-Khalilova, Addendum to: Gen. Rel. Grav. 28 (1996) 1161, First Prize Essay for 1996: Neutrino Oscillations and Supernovae, Gen. Rel. Grav. 36 (2004) 2183–2187.
* (4) C. Lunardini, B. Muller, and H.-T. Janka, Neutrino oscillation signatures of oxygen-neon-magnesium supernovae, Phys. Rev. D78 (2008) 023016, [arXiv:0712.3000].
* (5) H. Duan, G. M. Fuller, J. Carlson, and Y.-Z. Qian, Simulation of Coherent Non-Linear Neutrino Flavor Transformation in the Supernova Environment. 1. Correlated Neutrino Trajectories, Phys. Rev. D74 (2006) 105014, [astro-ph/0606616].
* (6) H. Duan, G. M. Fuller, J. Carlson, and Y.-Z. Qian, Flavor Evolution of the Neutronization Neutrino Burst from an O-Ne-Mg Core-Collapse Supernova, Phys. Rev. Lett. 100 (2008) 021101, [arXiv:0710.1271].
* (7) T. Yoshida, T. Kajino, H. Yokomakura, K. Kimura, A. Takamura, et al., Neutrino Oscillation Effects on Supernova Light Element Synthesis, Astrophys. J. 649 (2006) 319–331, [astro-ph/0606042].
* (8) J. Lesgourgues and S. Pastor, Massive neutrinos and cosmology, Phys. Rept. 429 (2006) 307–379, [astro-ph/0603494].
* (9) M. Y. Khlopov and S. Petcov, Possible cosmological effect of CP violation in neutrino oscillations, Phys. Lett. B99 (1981) 117.
* (10) P. Frampton, S. Glashow, and T. Yanagida, Cosmological sign of neutrino CP violation, Phys. Lett. B548 (2002) 119–121, [hep-ph/0208157].
* (11) A. B. Balantekin, J. Gava, and C. Volpe, Possible CP-Violation effects in core-collapse Supernovae, Phys. Lett. B662 (2008) 396–404, [arXiv:0710.3112].
* (12) Super-Kamiokande Collaboration Collaboration, K. Abe et al., Search for Differences in Oscillation Parameters for Atmospheric Neutrinos and Antineutrinos at Super-Kamiokande, Phys. Rev. Lett. 107 (2011) 241801, [arXiv:1109.1621].
* (13) MINOS Collaboration Collaboration, P. Adamson et al., Improved search for muon-neutrino to electron-neutrino oscillations in MINOS, Phys. Rev. Lett. 107 (2011) 181802, [arXiv:1108.0015].
* (14) DOUBLE-CHOOZ Collaboration Collaboration, Y. Abe et al., Indication for the disappearance of reactor electron antineutrinos in the Double Chooz experiment, Phys. Rev. Lett. 108 (2012) 131801, [arXiv:1112.6353].
* (15) DAYA-BAY Collaboration Collaboration, F. An et al., Observation of electron-antineutrino disappearance at Daya Bay, Phys. Rev. Lett. 108 (2012) 171803, [arXiv:1203.1669].
* (16) RENO collaboration Collaboration, J. Ahn et al., Observation of Reactor Electron Antineutrino Disappearance in the RENO Experiment, Phys. Rev. Lett. 108 (2012) 191802, [arXiv:1204.0626].
* (17) S. King, Tri-bimaximal-Cabibbo Mixing, arXiv:1205.0506.
* (18) N. Cabibbo, Unitary Symmetry and Leptonic Decays, Phys. Rev. Lett. 10 (1963) 531–533.
* (19) L. Wolfenstein, Parametrization of the Kobayashi-Maskawa Matrix, Phys. Rev. Lett. 51 (1983) 1945.
* (20) R. Mohapatra and A. Smirnov, Neutrino Mass and New Physics, Ann. Rev. Nucl. Part. Sci. 56 (2006) 569–628, [hep-ph/0603118].
* (21) Super-Kamiokande Collaboration Collaboration, Y. Fukuda et al., Evidence for oscillation of atmospheric neutrinos, Phys. Rev. Lett. 81 (1998) 1562–1567, [hep-ex/9807003].
* (22) D. V. Ahluwalia, Y. Liu, and I. Stancu, CP-violation in neutrino oscillations and $L/E$ flatness of the E-like event ratio at Super-Kamiokande, Mod. Phys. Lett. A17 (2002) 13–21.
* (23) X. Zhang and B.-Q. Ma, A Prediction of neutrino mixing matrix with CP violating phase, arXiv:1203.2906.
* (24) M. Tortola, J. Valle, and D. Vanegas, Global status of neutrino oscillation parameters after recent reactor measurements, arXiv:1205.4018.
* (25) G. Fogli, E. Lisi, A. Marrone, D. Montanino, A. Palazzo, et al., Global analysis of neutrino masses, mixings and phases: entering the era of leptonic CP violation searches, arXiv:1205.5254.
* (26) P. Harrison, D. Perkins, and W. Scott, Tri-bimaximal mixing and the neutrino oscillation data, Phys. Lett. B530 (2002) 167, [hep-ph/0202074].
* (27) I. Stancu and D. V. Ahluwalia, L / E flatness of the electron - like event ratio in Super-Kamiokande and a degeneracy in neutrino masses, Phys. Lett. B460 (1999) 431–436, [hep-ph/9903408].
* (28) F. Vissani, A Study of the scenario with nearly degenerate Majorana neutrinos, hep-ph/9708483.
* (29) D. V. Ahluwalia, Reconciling Super-Kamiokande, LSND, and home-stake neutrino oscillation data, Mod. Phys. Lett. A13 (1998) 2249–2264, [hep-ph/9807267].
* (30) V. D. Barger, S. Pakvasa, T. J. Weiler, and K. Whisnant, Bimaximal mixing of three neutrinos, Phys. Lett. B437 (1998) 107–116, [hep-ph/9806387].
* (31) L. Wolfenstein, Neutrino Oscillations in Matter, Phys. Rev. D17 (1978) 2369–2374.
* (32) S. Mikheev and A. Y. Smirnov, Resonance Amplification of Oscillations in Matter and Spectroscopy of Solar Neutrinos, Sov. J. Nucl. Phys. 42 (1985) 913–917.
* (33) J. Gava and C. Volpe, Collective neutrinos oscillation in matter and CP-violation, Phys. Rev. D78 (2008) 083007, [arXiv:0807.3418].
* (34) J. P. Kneller and G. C. McLaughlin, Three Flavor Neutrino Oscillations in Matter: Flavor Diagonal Potentials, the Adiabatic Basis and the CP phase, Phys.Rev. D80 (2009) 053002, [arXiv:0904.3823].
* (35) L. S. Kisslinger, E. M. Henley, and M. B. Johnson, Neutrino Oscillation in Matter and Parameters $s_{13},\delta_{CP}$, arXiv:1203.6613.
* (36) D. V. Ahluwalia and C. Burgard, Gravitationally induced quantum mechanical phases and neutrino oscillations in astrophysical environments, Gen. Rel. Grav. 28 (1996) 1161–1170, [gr-qc/9603008].
* (37) D. V. Ahluwalia and C. Burgard, Interplay of gravitation and linear superposition of different mass eigenstates, Phys. Rev. D57 (1998) 4724–4727, [gr-qc/9803013].
* (38) K. Konno and M. Kasai, General relativistic effects of gravity in quantum mechanics: A Case of ultrarelativistic, spin 1/2 particles, Prog. Theor. Phys. 100 (1998) 1145–1157, [gr-qc/0603035].
* (39) J. Wudka, Mass dependence of the gravitationally induced wave function phase, Phys. Rev. D64 (2001) 065009, [gr-qc/0010077].
* (40) B. Mukhopadhyay, Neutrino asymmetry around black holes: Neutrinos interact with gravity, Mod. Phys. Lett. A20 (2005) 2145–2156, [astro-ph/0505460].
* (41) P. Singh and B. Mukhopadhyay, Gravitationally induced neutrino asymmetry, Mod. Phys. Lett. A18 (2003) 779–785.
# Acceleration as a circular motion along an imaginary circle: Kubo-Martin-
Schwinger condition for accelerating field theories in imaginary-time
formalism
Victor E. Ambrus <EMAIL_ADDRESS> Maxim N. Chernodub <EMAIL_ADDRESS>
Department of Physics, West University of Timisoara, Bd. Vasile Pârvan 4,
Timisoara 300223, Romania
Institut Denis Poisson, Université de Tours, Tours 37200, France
###### Abstract
We discuss the imaginary-time formalism for field theories in thermal
equilibrium in uniformly accelerating frames. We show that under a Wick
rotation of Minkowski spacetime, the Rindler event horizon shrinks to a point
in a two-dimensional subspace tangential to the acceleration direction and the
imaginary time. We demonstrate that the accelerated version of the Kubo-
Martin-Schwinger (KMS) condition implies an identification of all spacetime
points related by integer-multiple rotations in the tangential subspace about
this Euclidean Rindler event-horizon point, with the rotational quanta defined
by the thermal acceleration, $\alpha=a/T$. In the Wick-rotated Rindler
hyperbolic coordinates, the KMS relations reduce to standard (anti-)periodic
boundary conditions in terms of the imaginary proper time (rapidity)
coordinate. Our findings pave the way to study, using first-principle lattice
simulations, the Hawking-Unruh radiation in geometries with event horizons,
phase transitions in the accelerating Early Universe, and the early stages of
the quark-gluon plasma created in relativistic heavy-ion collisions.
###### keywords:
Acceleration , Unruh effect , KMS relation , Finite temperature field theory
††journal: Physics Letters B
## 1 Introduction
In the past decades, there has been a renewed interest in studying systems
with acceleration as toy models for understanding the dynamics of the quark-
gluon plasma fireball created in ultrarelativistic (non-central) heavy-ion
collisions [1]. Such systems exhibit large acceleration immediately after the
collision [2], until the central rapidity plateau develops as in the Bjorken
boost-invariant flow model [3], where the acceleration vanishes. A natural
question that arises for such a system is to what extent these extreme
kinematic regimes affect the thermodynamics of the plasma fireball, which sets
the stage for further evolution of the quark-gluon plasma. The environment of
the “Little Bangs” of high-energy heavy-ion collisions [4] offers insights
into the properties of the primordial quark-gluon matter that once emerged at
the time of the Big Bang in the Early Universe [5].
Our knowledge of the non-perturbative properties of the quark-gluon plasma
originates from first-principle numerical simulations of lattice QCD, which is
formulated in Euclidean spacetime, by means of the imaginary-time formalism
[6]. Acceleration is closely related to rotation due to the resemblance of the
corresponding generators of Lorentz transformations of Minkowski spacetime. In
the case of non-central collisions, the angular velocity of the quark-gluon
fluid can reach values of the order of $\Omega\sim 10^{22}\,{\rm Hz}$ [7],
which translates to $\hbar\Omega\simeq 6\ {\rm MeV}\ll T_{c}$, where $T_{c}$
is the transition temperature to the quark-gluon plasma phase. The lattice
studies have so far been limited to the case of uniformly rotating systems in
Euclidean space-time, where the rotation parameter has to be analytically
continued to imaginary values [8] in order to avoid the sign problem that also
plagues lattice calculations at finite chemical potential [9]. Analytical
analyses of the effects of rotation on the phase diagram, performed in various
effective infrared models of QCD [10, 11, 12, 13, 14, 15, 16, 17], stay in
persistent contradiction with the first-principle numerical results [18, 19,
20, 21, 22, 23], presumably due to numerically-observed rotational instability
of quark-gluon plasma [21, 22, 23] (related to the thermal melting of the non-
perturbative gluon condensate [21]), splitting of chiral and deconfining
transitions [23, 24], or formation of a strongly inhomogeneous mixed
hadronic–quark-gluon-plasma phase induced by rotation [17, 25].
An earlier study of a Euclidean quantum field theory in an accelerating
spacetime with the Friedmann-Lemaître-Robertson-Walker metric has also
encountered the sign problem, which was avoided by considering a purely
imaginary Hubble constant [26]. By contrast, our formulation of acceleration
in the imaginary-time formalism is free from the sign problem and can thus be
applied directly to physical, real-valued acceleration.
Throughout the paper, we use $\hbar=c=k_{B}=1$ units.
## 2 Global equilibrium in uniform acceleration
From a classical point of view, global equilibrium states in generic particle
systems are characterized by the inverse temperature four-vector
$\beta^{\mu}\equiv u^{\mu}(x)/T(x)$, associated with the local fluid velocity
$u^{\mu}$, with $\beta^{\mu}$ satisfying the Killing equation,
$\partial_{\mu}\beta_{\nu}+\partial_{\nu}\beta_{\mu}=0$ [27, 28]. For an
accelerated system at equilibrium, one gets
$\beta^{\mu}\partial_{\mu}=\beta_{T}[(1+az)\partial_{t}+at\partial_{z}]$, with
$\beta_{T}=1/T$, where $T\equiv T({\boldsymbol{0}})$ represents the
temperature at the coordinate origin
${\boldsymbol{x}}_{\|}\equiv(t,z)={\boldsymbol{0}}$ in the longitudinal plane
spanned by the time coordinate $t$ and the acceleration direction $z$.
(Throughout our article, $T(x)$ denotes the local temperature (1), while $T$
stands for the value of $T(x)$ at the origin $t=z=0$. Also, for reasons that
will become clear shortly, we use the notation $\beta_{T}$ instead of the
conventional $\beta$ for the inverse temperature at the coordinate origin.)
The local temperature $T(x)$, the local fluid velocity $u^{\mu}(x)$, and the
local proper acceleration $a^{\mu}(x)\equiv u^{\nu}\partial_{\nu}u^{\mu}$,
given respectively by
$\displaystyle T(x)$
$\displaystyle\equiv(u_{\mu}\beta^{\mu})^{-1}=\frac{1}{\beta_{T}\sqrt{(1+az)^{2}-(at)^{2}}},$
(1) $\displaystyle u^{\mu}(x)\partial_{\mu}$
$\displaystyle=T(x)\beta_{T}\bigl{[}(1+az)\partial_{t}+at\partial_{z}\bigr{]}\,,$
(2) $\displaystyle a^{\mu}(x)\partial_{\mu}$
$\displaystyle=aT^{2}(x)\beta_{T}^{2}[at\partial_{t}+(1+az)\partial_{z}]\,,$
(3)
diverge at the Rindler horizon:
$\displaystyle(1+az)^{2}-(at)^{2}=0,\qquad\ z\geqslant-\frac{1}{a}\,.$ (4)
It is convenient to define the proper thermal acceleration
$\alpha=\sqrt{-\alpha^{\mu}\alpha_{\mu}}$, a dimensionless quantity, and the
corresponding four-vector
$\alpha^{\mu}=u^{\nu}\partial_{\nu}\beta^{\mu}=a^{\mu}/T(x)$, given
respectively by:
$\displaystyle\alpha$ $\displaystyle=a\beta_{T}\,,$
$\displaystyle\alpha^{\mu}(x)\partial_{\mu}$
$\displaystyle=a\beta_{T}^{2}T(x)[at\partial_{t}+(1+az)\partial_{z}]\,.$ (5)
Note that, while the magnitude $\alpha$ of the thermal acceleration is a
space-time constant, the local acceleration
$a(x)=\sqrt{-a_{\mu}a^{\mu}}=\alpha T(x)$ depends on space and time
coordinates.
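As a consistency check of Eqs. (1)-(3) and (5) (our own verification, not part
of the original derivation), the following Python sketch uses sympy to confirm
the normalization of $u^{\mu}$, the form of $a^{\mu}$, and the relation
$a(x)=\alpha T(x)$ in the $(t,z)$ plane:

```python
import sympy as sp

t, z, a, bT = sp.symbols('t z a beta_T', positive=True)
D = (1 + a*z)**2 - (a*t)**2                  # Rindler-wedge factor
T = 1 / (bT * sp.sqrt(D))                    # local temperature, Eq. (1)
ut, uz = T*bT*(1 + a*z), T*bT*a*t            # (u^t, u^z), Eq. (2)

# normalization u_mu u^mu = (u^t)^2 - (u^z)^2 = 1 in the (+,-) signature
assert sp.simplify(ut**2 - uz**2) == 1

# proper acceleration a^mu = u^nu d_nu u^mu, compared against Eq. (3)
at_ = sp.simplify(ut*sp.diff(ut, t) + uz*sp.diff(ut, z))
az_ = sp.simplify(ut*sp.diff(uz, t) + uz*sp.diff(uz, z))
assert sp.simplify(at_ - a*T**2*bT**2*a*t) == 0
assert sp.simplify(az_ - a*T**2*bT**2*(1 + a*z)) == 0

# local acceleration magnitude: -a_mu a^mu = (a^z)^2 - (a^t)^2 = [alpha T(x)]^2
assert sp.simplify(az_**2 - at_**2 - (a*bT*T)**2) == 0
```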
In classical theory, the energy-momentum tensor for an accelerating fluid in
thermal equilibrium reads
$T^{\mu\nu}=\mathcal{E}u^{\mu}u^{\nu}-\mathcal{P}\Delta^{\mu\nu},$ (6)
where $\Delta^{\mu\nu}=g^{\mu\nu}-u^{\mu}u^{\nu}$. The local energy density
$\mathcal{E}$ and pressure $\mathcal{P}$ are characterized by the local
temperature (1). For a conformal system,
$\mathcal{E}=3\mathcal{P}=\frac{\nu_{\rm eff}\pi^{2}}{30}T^{4}(x),$ (7)
where $\nu_{\rm eff}$ is the effective number of bosonic degrees of freedom.
In the case of a massless, neutral scalar field, $\nu_{\rm eff}=1$, while for
Dirac fermions, $\nu_{\rm eff}=\frac{7}{8}\times 2\times 2=7/2$, which takes
into account the difference between Bose-Einstein and Fermi-Dirac statistics
(the factor $7/8$), the spin degeneracy, as well as the particle and
antiparticle contributions.
## 3 Unruh and Hawking effects
Unruh has found that in a frame subjected to a uniform acceleration $a$, an
observer detects a thermal radiation with the temperature [29]:
$\displaystyle T_{U}\equiv\frac{1}{\beta_{U}}=\frac{a}{2\pi}\,,$ (8)
where we also defined the Unruh length $\beta_{U}$, which will be useful in
our discussions below.
The Unruh effect is closely related to the Hawking evaporation of black holes
[30, 31], which proceeds via the quantum production of particle pairs near the
event horizon of the black hole. The Hawking radiation has a thermal spectrum
with an effective temperature
$\displaystyle T_{H}=\frac{\kappa}{2\pi}\,,$ (9)
where $\kappa=1/(4M)$ is the acceleration due to gravity at the horizon of a
black hole of mass $M$. The similarity of both effects, suggested by the
equivalence of formulas for the Unruh temperature (8) and the Hawking
temperature (9), goes deeper, as the thermal character of both phenomena
apparently originates from the presence of appropriate event horizons [32,
33]. In an accelerating frame, the event horizon separates causally
disconnected regions of spacetime, evident in the Rindler coordinates in which
the metric of the accelerating frame is conformally flat [34].
Quantum effects lead to acceleration-dependent corrections to Eq. (7) and may
also produce extra (anisotropic) contributions to the energy-momentum tensor
$T^{\mu\nu}$ of the system. Such corrections were already established using
the Zubarev approach [35, 36] or Wigner function formalism [37, 38], and one
remarkable conclusion is that the energy-momentum tensor $\Theta^{\mu\nu}$ in
an accelerating system exactly vanishes at the Unruh temperature (8), or,
equivalently, when the thermal acceleration (5) reaches the critical value
$\alpha=\alpha_{c}=2\pi$: $\Theta^{\mu\nu}(T=T_{U})=0$. A somewhat related
property is satisfied by thermal correlation functions in the background of a
Schwarzschild black hole, establishing the equivalence between Feynman and
thermal Green’s functions, with the latter one taken at the Hawking
temperature (9), cf. Refs. [32, 33].
As noted earlier, the energy density receives quantum corrections. For the
conformally-coupled massless real-valued Klein-Gordon scalar field and the
Dirac field, we have, respectively [36, 37, 38, 39, 40]:
$\displaystyle\mathcal{E}_{\rm scalar}$
$\displaystyle=\frac{\pi^{2}T^{4}(x)}{30}\biggl[1-\Bigl(\frac{\alpha}{2\pi}\Bigr)^{4}\biggr]\,,$
(10a) $\displaystyle\mathcal{E}_{\rm Dirac}$
$\displaystyle=\frac{7\pi^{2}T^{4}(x)}{60}\biggl[1-\Bigl(\frac{\alpha}{2\pi}\Bigr)^{2}\biggr]\biggl[1+\frac{17}{7}\Bigl(\frac{\alpha}{2\pi}\Bigr)^{2}\biggr]\,,$
(10b)
where we specially rearranged terms to make it evident that at the Unruh
temperature $T=T_{U}$ (or, equivalently, at $\alpha=2\pi$), the energy density
vanishes.
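As a quick numerical sanity check (ours), the rearranged expressions (10)
indeed vanish at $\alpha=2\pi$ and reduce to the classical values of Eq. (7)
with $\nu_{\rm eff}=1$ and $\nu_{\rm eff}=7/2$ as $\alpha\to 0$:

```python
import math

def E_scalar(T, alpha):                       # Eq. (10a)
    return math.pi**2 * T**4 / 30 * (1 - (alpha / (2 * math.pi))**4)

def E_dirac(T, alpha):                        # Eq. (10b)
    r = (alpha / (2 * math.pi))**2
    return 7 * math.pi**2 * T**4 / 60 * (1 - r) * (1 + 17 * r / 7)

T = 0.15
assert E_scalar(T, 2 * math.pi) == 0 and E_dirac(T, 2 * math.pi) == 0
assert math.isclose(E_scalar(T, 0), 1.0 * math.pi**2 * T**4 / 30)   # nu_eff = 1
assert math.isclose(E_dirac(T, 0), 3.5 * math.pi**2 * T**4 / 30)    # nu_eff = 7/2
```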
The above discussion focused on the free-field theory. In the interacting
case, a legitimate question is to what extent the local kinematics
influence the phase structure of phenomenologically relevant field theories,
for example, the deconfinement and chiral thermal transitions of QCD. Central
to finite-temperature lattice studies is how to set the Euclidean-space
boundary conditions in the imaginary-time formalism. A static bosonic
(fermionic) system at finite temperature can be implemented by imposing
(anti-)periodicity in imaginary time $\tau=it$ with period given by the
inverse temperature, $\tau\rightarrow\tau+\beta_{T}$. These boundary
conditions are closely related to, and in fact, derived from the usual Kubo-
Martin-Schwinger (KMS) relation formulated for a finite-temperature state (at
vanishing acceleration), which translates into a condition written for the
scalar and fermionic thermal two-point functions [6, 41]:
$G_{F}(t)=G_{F}(t+i\beta_{T}),\quad S_{F}(t)=-S_{F}(t+i\beta_{T}),$ (11)
where we suppressed the dependence on the spatial coordinate $\boldsymbol{x}$
and on the second spacetime point $x^{\prime}$. In the case of rotating states, the
KMS relation (11) gets modified to [17, 40, 42]
$\displaystyle G_{F}(t,\varphi)$
$\displaystyle=G_{F}(t+i\beta_{T},\varphi+i\beta_{T}\Omega),$ $\displaystyle
S_{F}(t,\varphi)$ $\displaystyle=-e^{-\beta_{T}\Omega
S^{z}}S_{F}(t+i\beta_{T},\varphi+i\beta_{T}\Omega),$ (12)
where $e^{-\beta_{T}\Omega S^{z}}$ is the spin part of the rotation with
imaginary angle $i\beta_{T}\Omega$ along the rotation ($z$) axis and
$S^{z}=\frac{i}{2}\gamma^{x}\gamma^{y}$ is the spin matrix. The purpose of the
present paper is to uncover the KMS relation and subsequent conditions for
fields and, consequently, for correlation functions in a uniformly accelerated
state.
## 4 Quantum field theory at constant acceleration
In Minkowski space, the most general solution of the Killing equation reads
$\beta^{\mu}=b^{\mu}+\varpi^{\mu}{}_{\nu}x^{\nu},$ (13)
where $b^{\mu}$ is a constant four-vector and $\varpi^{\mu\nu}$ is a constant,
anti-symmetric tensor. A quantum system in thermal equilibrium is
characterized by the density operator
$\hat{\rho}=e^{-b\cdot\hat{P}+\varpi:\hat{J}/2},$ (14)
where $\hat{P}^{\mu}$ and $\hat{J}^{\mu\nu}$ are the conserved four-momentum
and total angular momentum operator, representing the generators of
translations and of Lorentz transformations. In order to derive the KMS
relation, it is convenient to factorize $\hat{\rho}$ into a translation part
and a Lorentz transformation part, as pointed out in Ref. [37]:
$e^{-b\cdot\hat{P}+\varpi:\hat{J}/2}=e^{-\tilde{b}(\varpi)\cdot\hat{P}}e^{\varpi:\hat{J}/2},$
(15)
where $\tilde{b}$ is given by
$\tilde{b}(\varpi)^{\mu}=\sum_{k=0}^{\infty}\frac{i^{k}}{(k+1)!}(\varpi^{\mu}{}_{\nu_{1}}\varpi^{\nu_{1}}{}_{\nu_{2}}\cdots\varpi^{\nu_{k-1}}{}_{\nu_{k}})b^{\nu_{k}}.$
(16)
Focusing now on the accelerated system with reference inverse temperature
$\beta_{T}=1/T$, we have $b^{\mu}=\beta_{T}\delta^{\mu}_{0}$ and
$\varpi^{\mu}{}_{\nu}=\alpha(\delta^{\mu}_{3}g_{0\nu}-\delta^{\mu}_{0}g_{3\nu})$,
such that $\tilde{b}$ becomes
$\tilde{b}^{\mu}=B\delta^{\mu}_{0}+A\delta^{\mu}_{3},\quad
B=\frac{\sin\alpha}{a},\quad A=\frac{i}{a}(1-\cos\alpha),$ (17)
where $\alpha=a/T$ is the thermal acceleration (5). This observation allows
$\hat{\rho}=e^{-\beta_{T}\hat{H}+\alpha\hat{K}^{z}}$ to be factorized as
$\hat{\rho}=e^{-B\hat{H}+A\hat{P}^{z}}e^{\alpha\hat{K}^{z}}.$ (18)
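The factorization can be verified numerically; the sketch below (our own
check) sums the series (16) for the accelerated choice of $b^{\mu}$ and
$\varpi^{\mu}{}_{\nu}$ and recovers the closed-form coefficients $B$ and $A$
of Eq. (17):

```python
import math
import numpy as np

beta_T, a = 1.0, 0.9
alpha = a * beta_T                         # thermal acceleration, Eq. (5)
g = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric, signature (+,-,-,-)

# varpi^mu_nu = alpha (delta^mu_3 g_{0 nu} - delta^mu_0 g_{3 nu})
varpi = np.zeros((4, 4))
varpi[3, :] = alpha * g[0, :]
varpi[0, :] = -alpha * g[3, :]

b = np.array([beta_T, 0, 0, 0], dtype=complex)

btilde = np.zeros(4, dtype=complex)        # truncated series of Eq. (16)
term = b.copy()
for k in range(40):
    btilde += 1j**k / math.factorial(k + 1) * term
    term = varpi @ term

B = math.sin(alpha) / a                    # closed form, Eq. (17)
A = 1j * (1 - math.cos(alpha)) / a
assert np.allclose(btilde, [B, 0, 0, A])
```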
A relativistic quantum field described by the field operator $\hat{\Phi}$
transforms under Poincaré transformations as
$\displaystyle
e^{i\tilde{b}\cdot\hat{P}}\hat{\Phi}(x)e^{-i\tilde{b}\cdot\hat{P}}$
$\displaystyle=\hat{\Phi}(x+\tilde{b}),$
$\displaystyle\hat{\Lambda}\hat{\Phi}(x)\hat{\Lambda}^{-1}$
$\displaystyle=D[\Lambda^{-1}]\hat{\Phi}(\Lambda x),$ (19)
where $\Lambda=e^{-\frac{i}{2}\varpi:\mathcal{J}}$ is written in terms of the
matrix generators
$(\mathcal{J}^{\mu\nu})_{\alpha\beta}=i(\delta^{\mu}_{\alpha}\delta^{\nu}_{\beta}-\delta^{\mu}_{\beta}\delta^{\nu}_{\alpha})$,
while $D[\Lambda]^{-1}=e^{\frac{i}{2}\varpi:S}$ is the spin part of the
inverse Lorentz transformation. Comparing Eqs. (19) and (14), it can be seen
that the density operator $\hat{\rho}$ acts like a Poincaré transformation
with imaginary parameters [37]. Using now the factorization (18), one finds
that $\hat{\rho}$ acts on the field operator $\hat{\Phi}$ as follows:
$\hat{\rho}\hat{\Phi}(t,z)\hat{\rho}^{-1}=e^{-\alpha
S^{0z}}\hat{\Phi}({\tilde{t}},{\tilde{z}}),$ (20)
where
$\displaystyle{\tilde{t}}$
$\displaystyle=\cos(\alpha)t+i\sin(\alpha)z+\frac{i}{a}\sin(\alpha),$
$\displaystyle{\tilde{z}}$
$\displaystyle=i\sin(\alpha)t+\cos(\alpha)z-\frac{1}{a}[1-\cos(\alpha)].$ (21)
The spin term evaluates to $e^{-\alpha S^{0z}}=1$ in the scalar case (since
$S^{0z}=0$), while for the Dirac field,
$S^{0z}=\frac{i}{2}\gamma^{0}\gamma^{3}$ and
$e^{-\alpha
S^{0z}}=\cos\frac{\alpha}{2}-i\gamma^{0}\gamma^{3}\sin\frac{\alpha}{2}.$ (22)
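The spin factor (22) follows from $(S^{0z})^{2}=-1/4$; a short numerical check
(ours, using the Dirac representation of the gamma matrices) against the
matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

s3 = np.array([[1, 0], [0, -1]], dtype=complex)            # Pauli sigma_z
g0 = np.diag([1, 1, -1, -1]).astype(complex)               # gamma^0, Dirac rep
g3 = np.block([[np.zeros((2, 2)), s3],
               [-s3, np.zeros((2, 2))]])                   # gamma^3, Dirac rep

S0z = 0.5j * g0 @ g3                                       # S^{0z} = (i/2) g0 g3
alpha = 1.3
lhs = expm(-alpha * S0z)
rhs = np.cos(alpha / 2) * np.eye(4) \
    - 1j * (g0 @ g3) * np.sin(alpha / 2)                   # Eq. (22)
assert np.allclose(lhs, rhs)
```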
## 5 KMS relation at constant uniform acceleration
Consider now the Wightman functions $G^{\pm}(x,x^{\prime})$ and
$S^{\pm}(x,x^{\prime})$ of the Klein-Gordon and Dirac theories, defined
respectively as
$\displaystyle G^{+}(x,x^{\prime})$
$\displaystyle=\langle\hat{\Phi}(x)\hat{\Phi}(x^{\prime})\rangle,$
$\displaystyle S^{+}(x,x^{\prime})$
$\displaystyle=\langle\hat{\Psi}(x)\hat{\overline{\Psi}}(x^{\prime})\rangle,$
$\displaystyle G^{-}(x,x^{\prime})$
$\displaystyle=\langle\hat{\Phi}(x^{\prime})\hat{\Phi}(x)\rangle,$
$\displaystyle S^{-}(x,x^{\prime})$
$\displaystyle=-\langle\hat{\overline{\Psi}}(x^{\prime})\hat{\Psi}(x)\rangle.$
(23)
When the expectation value $\langle\cdot\rangle$ is taken at finite
temperature and under acceleration, we derive the KMS relations:
$\displaystyle G^{+}(x,x^{\prime})$
$\displaystyle=G^{-}({\tilde{t}},{\tilde{z}};x^{\prime}),$ $\displaystyle
S^{+}(x,x^{\prime})$ $\displaystyle=-e^{-\alpha
S^{0z}}S^{-}({\tilde{t}},{\tilde{z}};x^{\prime}).$ (24)
The KMS relations also imply natural boundary conditions for the thermal
propagators:
$\displaystyle G_{F}({\tilde{t}},{\tilde{z}};x^{\prime})$
$\displaystyle=G_{F}(t,z;x^{\prime})\,,$ $\displaystyle
S_{F}({\tilde{t}},{\tilde{z}};x^{\prime})$ $\displaystyle=-e^{\alpha
S^{0z}}S_{F}(t,z;x^{\prime})\,,$ (25)
which are solved formally by [34, 40]
$\displaystyle G_{F}^{(\alpha)}(t,z;x^{\prime})$
$\displaystyle=\sum_{j=-\infty}^{\infty}G_{F}^{\rm
vac}(t_{(j)},z_{(j)};x^{\prime})\,,$ (26a) $\displaystyle
S_{F}^{(\alpha)}(t,z;x^{\prime})$
$\displaystyle=\sum_{j=-\infty}^{\infty}(-1)^{j}e^{-j\alpha S^{0z}}S_{F}^{\rm
vac}(t_{(j)},z_{(j)};x^{\prime})\,,$ (26b)
where $G^{\rm vac}_{F}(x,x^{\prime})$ and $S^{\rm vac}_{F}(x,x^{\prime})$ are
the vacuum propagators, while $t_{(j)}$ and $z_{(j)}$ are obtained by applying
the transformation in Eq. (21) $j\in{\mathbb{Z}}$ times:
$\displaystyle t_{(j)}$
$\displaystyle=t\cos(j\alpha)+\frac{i}{a}(1+az)\sin(j\alpha),$ $\displaystyle
z_{(j)}$
$\displaystyle=it\sin(j\alpha)+\frac{1}{a}(1+az)\cos(j\alpha)-\frac{1}{a}.$
(27)
In particular, $\tilde{t}=t_{(1)}$ and $\tilde{z}=z_{(1)}$. Due to the
periodicity of the trigonometric functions appearing above, in the case when
$\alpha/2\pi=p/q$ is a rational number represented as an irreducible fraction,
the sum over $j$ in Eqs. (26) contains only $q$ terms:
$\displaystyle G_{F}^{(p,q)}(t,z;x^{\prime})$
$\displaystyle=\sum_{j=0}^{q-1}G_{F}^{\rm vac}(t_{(j)},z_{(j)};x^{\prime}),$
(28a) $\displaystyle S_{F}^{(p,q)}(t,z;x^{\prime})$
$\displaystyle=\sum_{j=0}^{q-1}(-1)^{j}e^{-j\alpha S^{0z}}S_{F}^{\rm
vac}(t_{(j)},z_{(j)};x^{\prime}).$ (28b)
In particular, the case $\alpha=2\pi$ corresponds to $p=q=1$, while the
thermal propagators reduce trivially to the vacuum ones:
$G_{F}^{(1,1)}=G_{F}^{\rm vac}$ and $S_{F}^{(1,1)}=S_{F}^{\rm vac}$. Since
$e^{-q\alpha S^{0z}}=(-1)^{p}$ by virtue of Eq. (22), applying Eq. (25) $q$
times shows that
$S_{F}^{(p,q)}(t_{(q)},z_{(q)};x^{\prime})=(-1)^{p+q}S^{(p,q)}_{F}(t,z;x^{\prime})$
and thus $S_{F}^{(p,q)}$ vanishes identically when $p+q$ is an odd integer.
## 6 Imaginary-time formulation for acceleration
We now move to the Euclidean manifold by performing the Wick rotation to
imaginary time, $t\rightarrow\tau=it$. Then, Eq. (25) becomes
$\displaystyle G_{E}(\tau_{(1)},z_{(1)};x^{\prime})$
$\displaystyle=G_{E}(\tau,z;x^{\prime}),$ $\displaystyle
S_{E}(\tau_{(1)},z_{(1)};x^{\prime})$ $\displaystyle=-e^{\alpha
S^{0z}}S_{E}(\tau,z;x^{\prime}),$ (29)
and Eq. (26) reads, for the case when $\alpha/2\pi$ is an irrational number,
$\displaystyle G_{E}^{(\alpha)}(\tau,z;x^{\prime})$
$\displaystyle=\sum_{j=-\infty}^{\infty}G_{E}^{\rm
vac}(\tau_{(j)},z_{(j)};x^{\prime}),$ (30a) $\displaystyle
S_{E}^{(\alpha)}(\tau,z;x^{\prime})$
$\displaystyle=\sum_{j=-\infty}^{\infty}(-1)^{j}e^{-j\alpha S^{0z}}S_{E}^{\rm
vac}(\tau_{(j)},z_{(j)};x^{\prime}).$ (30b)
The case when $\alpha/2\pi=p/q$ must be treated along the lines summarized in
Eqs. (28) (see also discussion in Sec. 10). In the above, we considered
$j\in{\mathbb{Z}}$ and
$\displaystyle\tau_{(j)}$
$\displaystyle=\tau\cos(j\alpha)-\frac{1}{a}(1+az)\sin(j\alpha),$ (31a)
$\displaystyle z_{(j)}$
$\displaystyle=\tau\sin(j\alpha)+\frac{1}{a}(1+az)\cos(j\alpha)-\frac{1}{a}.$
(31b)
For the fields, the accelerated KMS conditions suggest the identification of
the fields at the points:
$\displaystyle\phi(\tau_{(j)},{\boldsymbol{x}}_{\perp},z_{(j)})$
$\displaystyle=\phi(\tau,{\boldsymbol{x}}_{\perp},z)\,,$ (32a)
$\displaystyle\psi(\tau_{(j)},{\boldsymbol{x}}_{\perp},z_{(j)})$
$\displaystyle=(-1)^{j}e^{j\alpha
S^{0z}}\psi(\tau,{\boldsymbol{x}}_{\perp},z)\,,$ (32b)
where the identified coordinates $(\tau_{(j)},z_{(j)})$ in the longitudinal
plane are given by Eq. (31) and ${\boldsymbol{x}}_{\perp}=(x,y)$ are the
transverse coordinates which are unconstrained by acceleration. While the sums
of the form (26) may formally be divergent, the modified conditions (31) and
(32) give a finite solution to the accelerated KMS relations. The points
identified with the accelerated KMS condition (31) are illustrated in Fig. 1.
Figure 1: The cyclic paths determined by the accelerating KMS boundary
condition (31) in the longitudinal plane spanned by the imaginary time $\tau$
and the acceleration direction $z$ of Wick-rotated Minkowski spacetime. Each
plot illustrates different accelerations $a$ encoded in the ratio
$\beta_{U}/\beta_{T}\equiv 2\pi T/a=3,4,5,10$ of the Unruh length $\beta_{U}$,
Eq. (8), to the thermal length $\beta_{T}=1/T$. The starting point of each
cyclic path, $(z,\tau)_{i}=(z_{i},0)$, with $z_{i}/\beta_{U}=-1,-1/2,\dots,1$,
is denoted by a hollow circle. The position of the Rindler horizon, collapsed
under the Wick rotation to a point (34), is denoted by the green star in each
plot.
## 7 Geometrical meaning of the accelerated KMS relation in imaginary-time
formalism
It is convenient, for a moment, to define a translationally shifted spatial
coordinate, ${\mathsf{z}}=z+1/a$, and rewrite Eq. (31) in the very simple and
suggestive form:
$\displaystyle\tau_{(j)}$
$\displaystyle=\tau\cos(j\alpha)-{\mathsf{z}}\sin(j\alpha),$
$\displaystyle{\mathsf{z}}_{(j)}$
$\displaystyle=\tau\sin(j\alpha)+{\mathsf{z}}\cos(j\alpha).$ (33)
In the shifted coordinates, the condition (4) for the Rindler horizon becomes
$a^{2}(\mathsf{z}^{2}+\tau^{2})=0$, which is solved by
$\displaystyle\tau={\mathsf{z}}=0\qquad\Leftrightarrow\qquad\tau=0,\quad
z=-\frac{1}{a}\,.$ (34)
Thus, we arrive at the following beautiful conclusion: in the Euclidean
spacetime of the imaginary-time formalism, the Rindler horizon (4) shrinks to
a single point (34). Hence, the accelerated KMS condition corresponds to the
identification of all points obtained by the discrete rotation of the space
around the Euclidean Rindler horizon point $(\tau,z)=(0,-1/a)$, with the
elementary rotation angle given by the reference thermal acceleration $\alpha=a/T$.
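This geometric statement is straightforward to verify numerically. The sketch
below (ours) checks that the map (31) coincides with the rotation (33) about
the Euclidean horizon point (34), and that the orbit radius is preserved:

```python
import math

def kms_image(tau, z, j, a, alpha):
    """The j-th identified point (tau_(j), z_(j)) of Eq. (31)."""
    c, s = math.cos(j * alpha), math.sin(j * alpha)
    return (tau * c - (1 + a * z) * s / a,
            tau * s + (1 + a * z) * c / a - 1 / a)

a, alpha = 0.7, 2 * math.pi / 5
tau0, z0 = 0.3, -0.9
r0 = math.hypot(tau0, z0 + 1 / a)        # distance to the point (34)
for j in range(-4, 5):
    tj, zj = kms_image(tau0, z0, j, a, alpha)
    # Eq. (33): a pure rotation by j*alpha in the shifted coordinates
    zs, c, s = z0 + 1 / a, math.cos(j * alpha), math.sin(j * alpha)
    assert math.isclose(tj, tau0 * c - zs * s, abs_tol=1e-12)
    assert math.isclose(zj + 1 / a, tau0 * s + zs * c, abs_tol=1e-12)
    # the orbit stays on a circle around the Euclidean Rindler horizon
    assert math.isclose(math.hypot(tj, zj + 1 / a), r0, abs_tol=1e-12)
```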
Our accelerated KMS condition, given in Eqs. (31) and (32), recovers the usual
finite-temperature KMS condition in the limit of vanishing acceleration.
Figure 2 demonstrates that in this limit, with $\alpha=a/T\to 0$, the proposed
KMS-type condition (31) for the acceleration reduces to the standard
finite-temperature KMS boundary condition [6], for which the imaginary time $\tau$
is compactified to a circle of the length $\beta_{T}\equiv 1/T$ with the
points $(\tau,{\boldsymbol{x}})$ and $(\tau+\beta_{T}n,{\boldsymbol{x}})$,
$n\in{\mathbb{Z}}$, identified.
Figure 2: The sets of points in the ($\tau,z$) plane which are identified by
our circular KMS condition (33) with the origin $(\tau,z)=(0,0)$ in a
thermally equilibrated system which experiences a uniform acceleration $a$
along the $z$ axis. The color distinguishes different acceleration strengths,
marked by different Unruh lengths $\beta_{U}=2\pi/|a|$. At vanishing
acceleration ($\beta_{U}/\beta_{T}\to\pm\infty$), condition (33) reduces to
the standard thermodynamic requirement of compactification of imaginary time
$\tau$ to a circle with the length $\beta_{T}=1/T$, while the Euclidean
Rindler horizon moves to (minus) spatial infinity. In the figure, each set of
points, corresponding to various ratios $\beta_{U}/\beta_{T}$, is connected by
a smooth line to guide the eye.
At the critical accelerations $\alpha=2\pi n$ (with $n\in{\mathbb{Z}}$), when
the Unruh temperature (8) equals an integer multiple ($T_{U}=nT$) of the
background temperature, the accelerated KMS conditions (31) do not constrain the
system anymore, $\tau_{(j)}=\tau$ and $z_{(j)}=z$, so that the system becomes
equivalent to a zero-temperature system in non-accelerated flat Minkowski
spacetime. This property, for $\alpha=2\pi$, has been observed in Refs. [35,
36, 37, 38].
In the situation where $2\pi/\alpha=\beta_{U}/\beta_{T}=n$ is an integer, the
accelerated state at finite temperature can be implemented in Euclidean space
by imposing periodicity with respect to a specific set of points that form a
regular polygon with $n$ vertices located on the circle of radius
$\sqrt{\tau^{2}+{\mathsf{z}}^{2}}$ centered at the Euclidean Rindler horizon
point (34). This is particularly convenient for lattice
simulations since the Euclidean action remains the standard one, allowing
accelerated systems to be modeled in the imaginary-time path integral
formalism without encountering the infamous sign problem.
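As an illustration of this construction (a minimal sketch of our own, not a
full lattice implementation), the $n$ identified images of a site form a
closed regular polygon:

```python
import math

def identified_sites(tau, z, a, n):
    """The images of (tau, z) under Eq. (31) with alpha = 2*pi/n."""
    alpha = 2 * math.pi / n
    return [((tau * math.cos(j * alpha) - (1 + a * z) * math.sin(j * alpha) / a),
             (tau * math.sin(j * alpha) + (1 + a * z) * math.cos(j * alpha) / a
              - 1 / a)) for j in range(n + 1)]             # j = n closes the orbit

pts = identified_sites(0.2, 0.5, a=1.0, n=6)
assert math.isclose(pts[-1][0], pts[0][0], abs_tol=1e-12)  # orbit closure
assert math.isclose(pts[-1][1], pts[0][1], abs_tol=1e-12)
chords = [math.dist(pts[j], pts[j + 1]) for j in range(6)] # regular hexagon
assert all(math.isclose(c, chords[0]) for c in chords)
```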
## 8 KMS relations in Rindler coordinates
In the Minkowski Lorentz frame that we considered so far, the accelerating KMS
conditions (31) and (32) do not correspond to a boundary condition (as one
would naively expect from the KMS condition in thermal field theory) but
rather to a bulk condition: instead of relating the points at the boundary of
the imaginary-time Euclidean system, the accelerated KMS relations give us the
identification of the spacetime points in its interior.
While seemingly non-trivial in the form written in Eq. (27), the displacements
implied by the KMS relation correspond to the usual translation of the proper
time (rapidity) coordinate $\eta$ when employing the Rindler coordinates,
$at=e^{\zeta}\sinh(a\eta),\quad 1+az=e^{\zeta}\cosh(a\eta).$ (35)
It is easy to see that
$\displaystyle at_{(j)}$ $\displaystyle=e^{\zeta}\sinh(a\eta+ij\alpha),$ (36a)
$\displaystyle 1+az_{(j)}$ $\displaystyle=e^{\zeta}\cosh(a\eta+ij\alpha),$
(36b)
which implies that
$\eta_{(j)}=\eta+ij\beta_{T},\qquad\zeta_{(j)}=\zeta,$ (37)
in a seemingly perfect agreement with the usual KMS relation (11) for static
systems in Minkowski. However, there is also an unusual particularity of the
KMS conditions (37) in the Rindler coordinates (35).
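The translation property (36)-(37) can be checked directly with complex
arithmetic (our own sketch):

```python
import cmath, math

a, alpha, zeta, eta = 0.8, 1.1, 0.3, -0.4
t = math.exp(zeta) * math.sinh(a * eta) / a            # Eq. (35)
z = (math.exp(zeta) * math.cosh(a * eta) - 1) / a

for j in range(-3, 4):
    # the j-th KMS image in the Minkowski frame, Eq. (27) (complex-valued)
    c, s = math.cos(j * alpha), math.sin(j * alpha)
    t_j = t * c + 1j * (1 + a * z) * s / a
    z_j = 1j * t * s + (1 + a * z) * c / a - 1 / a
    # Eq. (36): the same image is a rapidity shift, eta -> eta + i j beta_T
    assert cmath.isclose(a * t_j,
                         math.exp(zeta) * cmath.sinh(a * eta + 1j * j * alpha))
    assert cmath.isclose(1 + a * z_j,
                         math.exp(zeta) * cmath.cosh(a * eta + 1j * j * alpha))
```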
The first relation in Eq. (37) suggests that the Wick rotation of the
Minkowski time $t=-i\tau$ should be supplemented with the Wick rotation of the
proper time in the accelerated frame, $\eta=-i\theta/a$, where $\theta$ is the
imaginary rapidity (named in analogy with the rapidity coordinate
$\psi\equiv a\eta$). Then, the relation (35) in the imaginary (both Minkowski
and Rindler) time becomes as follows:
$a\tau=e^{\zeta}\sin\theta,\quad 1+az=e^{\zeta}\cos\theta,$ (38)
which shows that the imaginary rapidity becomes an angular coordinate, with
the Euclidean Rindler KMS condition (37) taking the form:
$\theta_{(j)}=\theta+j\alpha,\qquad\zeta_{(j)}=\zeta,\qquad
j\in{\mathbb{Z}}\,.$ (39)
Curiously, under the Wick transform, the rapidity becomes a cyclic compact
variable, $0\leqslant\theta<2\pi$, on which the imaginary-time condition (39)
imposes an additional periodicity with period equal to the thermal
acceleration $\alpha$. Expectedly, at $\alpha=2\pi$ (or, equivalently, at
$T=T_{U}$), the boundary condition (39) becomes trivial.
The boundary conditions (39), characterized by the doubly-periodic imaginary
rapidity coordinate $\theta$, with periodicities $\theta\to\theta+2\pi$ and
$\theta\to\theta+\alpha$ (for $0\leqslant\alpha<2\pi$), can be easily
implemented in lattice simulations. Notice that this double periodicity has a
strong resemblance to the observation of Refs. [43, 44, 45] that the Euclidean
Rindler space can be identified with the space of the cosmic string which
possesses a conical singularity with the angular deficit
$\Delta\varphi=2\pi-\alpha$ [46, 47].
The KMS periodicity (39) of the compact imaginary rapidity $\theta$ is
formally sensitive to the rationality of the normalized thermal acceleration
$\alpha/(2\pi)$. Obviously, for $\alpha=2\pi p/q$, where $p<q$ are coprime
positive integers, the interplay of the two periodicities corresponds to the
single period $\theta\to\theta+2\pi/q$.
Interestingly, the sensitivity of an effect to the denominator $q$ (and not to
the numerator $p$) of a relevant parameter is a signature of the fractal
nature of the effect. Such fractality is noted, for example, in particle
systems subjected to imaginary rotation implemented via rotwisted boundary
conditions [17, 48, 49], which leads, in turn, to the appearance of “ninionic”
deformation of particle statistics [50]. The suggested fractality of
acceleration in the imaginary-time formalism is not surprising given the conceptual
similarity of acceleration and rotation with imaginary angular frequency [37,
38]. Below, we will show that, despite the fractal property of the system, the
KMS boundary condition (39) in Euclidean Rindler space correctly reproduces
results for accelerated particle systems.
## 9 Energy-momentum tensor with the accelerated KMS conditions
Now let us come back to the Wick-rotated Minkowski spacetime and verify how
the modified KMS conditions for the fields, Eqs. (31) and (32), and related
solutions for their two-point functions (30), can recover the known results in
field theories under acceleration. To this end, we start from a non-minimally
coupled scalar field theory with the Lagrangian [51, 52, 53]
$\displaystyle{\mathcal{L}}_{\xi}=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-2\xi\partial_{\mu}\left(\phi\partial^{\mu}\phi\right),$
(40)
possessing the following energy-momentum tensor:
$\displaystyle\Theta^{\xi}_{\mu\nu}=(1-2\xi)\partial_{\mu}\phi\partial_{\nu}\phi$
$\displaystyle-2\xi\phi\partial_{\mu}\partial_{\nu}\phi$
$\displaystyle-\frac{1}{2}(1-4\xi)\delta_{\mu\nu}\partial_{\lambda}\phi\partial_{\lambda}\phi,$
(41)
where the values $\xi=0$ and $\xi=1/6$ of the coupling parameter correspond to
the canonical and conformal energy-momentum tensors, respectively. In terms of
the Euclidean Green’s function, $\Theta^{\xi}_{\mu\nu}$ can be written as
$\Theta^{\xi}_{\mu\nu}=\lim_{x^{\prime}\rightarrow
x}\left[(1-2\xi)\partial_{(\mu}\partial_{\nu^{\prime})}-\tfrac{1}{2}(1-4\xi)\delta_{\mu\nu}\partial_{\lambda}\partial_{\lambda^{\prime}}\right.\\\
\left.-\xi(\partial_{\mu}\partial_{\nu}+\partial_{\mu^{\prime}}\partial_{\nu^{\prime}})\right]\Delta
G^{(\alpha)}_{E}(x,x^{\prime}),$ (42)
where $\Delta
G^{(\alpha)}_{E}(x,x^{\prime})=G^{(\alpha)}_{E}(x,x^{\prime})-G_{E}^{\rm
vac}(x,x^{\prime})$ represents the thermal part of the Green’s function. For
the Dirac field,
$\Theta_{\mu\nu}=\frac{1}{2}\bar{\psi}\gamma^{E}_{\mu}\overleftrightarrow{\partial_{\nu}}\psi$
can be computed from the Euclidean two-point function
$S^{(\alpha)}_{E}(x,x^{\prime})$ via
$\Theta_{\mu\nu}=-\frac{1}{2}\lim_{x^{\prime}\rightarrow x}{\rm
tr}[\gamma^{E}_{\mu}(\partial_{\nu}-\partial_{\nu^{\prime}})\Delta
S^{(\alpha)}_{E}].$ (43)
The vacuum propagators satisfying $\Box G^{\rm
vac}_{E}(x,x^{\prime})=\gamma^{E}_{\mu}\partial_{\mu}S^{\rm
vac}_{E}(x,x^{\prime})=\delta^{4}(x-x^{\prime})$ are given by
$\displaystyle G^{\rm vac}_{E}(\Delta x)$
$\displaystyle=\frac{1}{4\pi^{2}\Delta X^{2}},$ (44) $\displaystyle S_{E}^{\rm
vac}(\Delta x)$ $\displaystyle=\gamma^{E}_{\mu}\partial_{\mu}G^{\rm
vac}_{E}(\Delta x)=-\frac{\gamma^{E}_{\mu}\Delta x_{\mu}}{2\pi^{2}\Delta
X^{4}},$ (45)
with $\Delta X^{2}=(\Delta\tau)^{2}+(\Delta{\boldsymbol{x}})^{2}$. Using Eq.
(30), the thermal expectation values of the normal-ordered energy-momentum
operator can be obtained in the case of the Klein-Gordon field as:
$\Theta^{\mu\nu}_{\xi}(x)=\sum_{j\neq 0}^{\infty}\frac{1}{4\pi^{2}\Delta
X_{(j)}^{4}}\left[(1-2\xi)(R^{(j)}_{\mu\nu}+R^{(j)}_{\nu\mu})\right.\\\
\left.-\delta_{\mu\nu}(1-4\xi)R^{(j)}_{\lambda\lambda}+2\xi(R^{(j)}_{\nu\lambda}R^{(j)}_{\mu\lambda}+\delta_{\mu\nu})\right]\\\
-\sum_{j\neq 0}\frac{\Delta x^{(j)}_{\lambda}\Delta
x^{(j)}_{\kappa}}{\pi^{2}\Delta
X^{6}_{(j)}}\left[(1-2\xi)(\delta_{\mu\lambda}R^{(j)}_{\nu\kappa}+\delta_{\nu\lambda}R^{(j)}_{\mu\kappa})\right.\\\
\left.-\delta_{\mu\nu}(1-4\xi)R^{(j)}_{\lambda\kappa}+2\xi(R^{(j)}_{\mu\lambda}R^{(j)}_{\nu\kappa}+\delta_{\mu\lambda}\delta_{\nu\kappa})\right],$
(46)
where $\Delta
X^{2}_{(j)}=\frac{4}{a^{2}}\sin^{2}\frac{j\alpha}{2}[(a\tau)^{2}+(1+az)^{2}]$
and $R^{(j)}_{\mu\nu}\equiv\partial_{\mu}\Delta x^{(j)}_{\nu}$ is given by
$R^{(j)}_{\mu\nu}=\begin{pmatrix}\cos(j\alpha)&0&0&\sin(j\alpha)\\\ 0&1&0&0\\\
0&0&1&0\\\ -\sin(j\alpha)&0&0&\cos(j\alpha)\end{pmatrix},$ (47)
such that $R^{(j)}_{\mu\lambda}R^{(j)}_{\nu\lambda}=\delta_{\mu\nu}$. For the
Dirac field, we find
$\Theta_{\mu\nu}=\sum_{j\neq
0}\frac{(-1)^{j}}{\pi^{2}}\left[\delta_{\mu\lambda}\cos\tfrac{j\alpha}{2}+\left(\delta_{\mu
0}\delta_{\lambda 3}-\delta_{\mu 3}\delta_{\lambda
0}\right)\sin\tfrac{j\alpha}{2}\right]\\\
\times\left[\frac{R_{\nu\lambda}^{(j)}+\delta_{\nu\lambda}}{\Delta
X^{4}_{(j)}}-\frac{4\Delta X^{(j)}_{\lambda}}{\Delta
X_{(j)}^{6}}(R^{(j)}_{\nu\kappa}+\delta_{\nu\kappa})\Delta
X^{(j)}_{\kappa}\right].$ (48)
Taking advantage of the relation
$(R^{(j)}_{\nu\kappa}+\delta_{\nu\kappa})\Delta
x^{(j)}_{\kappa}=-\frac{2}{a}\sin(j\alpha)[(1+az)\delta_{\nu
0}-a\tau\delta_{\nu 3}]$ and after switching back to the real time $t$, we
find
$\Theta^{\mu\nu}=\mathcal{E}u^{\mu}u^{\nu}-\mathcal{P}\Delta^{\mu\nu}+\pi^{\mu\nu},$
(49)
with $\mathcal{E}$, $\mathcal{P}$, and $u^{\mu}$ being the energy density,
isotropic pressure, and the fluid four-velocity (2), respectively. The shear-
stress tensor $\pi^{\mu\nu}$ is by construction traceless, symmetric and
orthogonal to $u^{\mu}$, distinguishing between the energy-momentum tensors of
classical (6) and quantum (49) fluids. Due to the symmetries of the problem,
its tensor structure is fixed as
$\displaystyle\pi^{\mu\nu}=\frac{\pi_{s}}{2}\left(\Delta^{\mu\nu}-\frac{3\alpha^{\mu}\alpha^{\nu}}{\alpha^{\lambda}\alpha_{\lambda}}\right)\,,$
(50)
with $\alpha^{\mu}(x)$ being the local thermal acceleration (5), such that the
shear coefficient $\pi_{s}$ is the only degree of freedom of $\pi^{\mu\nu}$ in
Eq. (50). In the scalar case, we find for the components of (49):
$\displaystyle\mathcal{E}_{\xi}$ $\displaystyle=\frac{3[\alpha
T(x)]^{4}}{16\pi^{2}}\left[G_{4}(\alpha)+4\xi G_{2}(\alpha)\right],$
$\displaystyle\mathcal{P}_{\xi}$ $\displaystyle=\frac{[\alpha
T(x)]^{4}}{16\pi^{2}}\left[G_{4}(\alpha)+\frac{4}{3}\left(1-3\xi\right)G_{2}(\alpha)\right],$
$\displaystyle\pi_{s}^{\xi}$ $\displaystyle=-\frac{[\alpha
T(x)]^{4}}{12\pi^{2}}(1-6\xi)G_{2}(\alpha),$ (51)
with $G_{n}(\alpha)=\sum_{j=1}^{\infty}[\sin(j\alpha/2)]^{-n}$, in complete
agreement with the results in Ref. [37]. Formally, $G_{n}$ diverges; however,
its value can be obtained from its analytic continuation to imaginary
acceleration $a=i\phi$,
$\widetilde{G}_{n}(\beta_{T}\phi)=i^{n}G_{n}(i\beta_{T}\phi)$. The sum can be
evaluated for $\beta_{T}\phi>0$ [37], yielding:
$\displaystyle\widetilde{G}_{2}(\beta_{T}\phi)$
$\displaystyle=\frac{2\pi^{2}}{3\beta_{T}^{2}\phi^{2}}-\frac{2}{\beta_{T}\phi}+\frac{1}{6},$
$\displaystyle\widetilde{G}_{4}(\beta_{T}\phi)$
$\displaystyle=\frac{8\pi^{4}}{45\beta_{T}^{4}\phi^{4}}-\frac{4\pi^{2}}{9\beta_{T}^{2}\phi^{2}}+\frac{4}{3\beta_{T}\phi}-\frac{11}{90}.$
(52)
Substituting now $G_{n}(\alpha)={\rm
Re}\bigl[i^{-n}\widetilde{G}_{n}(\beta_{T}\phi)\big|_{\phi\rightarrow-ia}\bigr]$ into
Eq. (51) gives Eq. (10) for the conformal coupling $\xi=1/6$. For minimal
coupling $\xi=0$ or a generic non-conformal coupling $\xi\neq 1/6$, we recover
the results of Refs. [37, 54].
In the case of the Dirac field, one can easily check that
$\mathcal{E}_{D}=3\mathcal{P}_{D}$ and $\pi_{D}^{s}=0$, while
$\mathcal{P}_{D}=\frac{[\alpha T(x)]^{4}}{4\pi^{2}}S_{4}(\alpha),$ (53)
with
$S_{n}(\alpha)=-\sum_{j=1}^{\infty}(-1)^{j}\cos(j\alpha/2)/[\sin(j\alpha/2)]^{n}$,
whose analytic continuation reads $\widetilde{S}_{n}(\beta_{T}\phi)\equiv
i^{n}S_{n}(i\beta_{T}\phi)=-\sum_{j=1}^{\infty}(-1)^{j}\cosh(j\beta_{T}\phi/2)/[\sinh(j\beta_{T}\phi/2)]^{n}$,
in agreement with the results obtained in Ref. [38].
Finally, let us also illustrate the practical functionality of the
accelerating KMS boundary conditions (39) formulated in the imaginary-rapidity
Rindler space (38). For simplicity, we calculate the fluctuations of the
scalar field $\langle\phi^{2}\rangle$ using point splitting, noting that the
same method can also be used to calculate other quantities.
When expressed with respect to Rindler coordinates
$X=(\theta/a,\mathbf{x}_{\perp},\zeta)$, the Euclidean vacuum two-point
function $G_{E,R}^{\rm vac}(X,X^{\prime})$ given in Eq. (44) reads as follows:
$G_{\rm E,R}^{\rm
vac}=\frac{1}{4\pi^{2}}\left[\frac{2}{a^{2}}e^{\zeta+\zeta^{\prime}}(\cosh\Delta\zeta-\cos\Delta\theta)+\Delta{\boldsymbol{x}}_{\perp}^{2}\right]^{-1}.$
(54)
The KMS condition (39) implies that the Euclidean two-point function under
acceleration satisfies $G^{(\alpha)}_{\rm E,R}=\sum_{j\in{\mathbb{Z}}}G^{\rm
vac}_{\rm E,R}(\Delta\theta+j\alpha)$, where we consider vanishing spatial
distance between the points: $\zeta^{\prime}\to\zeta$ and
$\mathbf{x}_{\perp}^{\prime}\to\mathbf{x}_{\perp}$. Subtracting the vacuum
($j=0$) term that diverges in the $\Delta X\to 0$ limit, we get for the scalar
fluctuations:
$\displaystyle\langle\phi^{2}\rangle$ $\displaystyle=\lim_{\Delta\theta\to
0}\bigl{[}G^{(\alpha)}_{\rm E,R}(\Delta\theta)-G^{\rm vac}_{\rm
E,R}(\Delta\theta)\bigr{]}$ (55)
$\displaystyle=\frac{a^{2}e^{-2\zeta}}{8\pi^{2}}G_{2}(\alpha)=\frac{T^{2}(x)}{12}-\frac{a^{2}(x)}{48\pi^{2}}\,,\quad
0\leqslant a\leqslant 2\pi T\,,$
which agrees with the known result [37, 55].
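A one-line symbolic check (ours) of the last equality in Eq. (55), using the
continued value $G_{2}(\alpha)=2\pi^{2}/(3\alpha^{2})-1/6$ that follows from
Eq. (52), together with $a^{2}e^{-2\zeta}=[\alpha T(x)]^{2}$ and
$a(x)=\alpha T(x)$:

```python
import sympy as sp

alpha, T = sp.symbols('alpha T', positive=True)
G2 = 2 * sp.pi**2 / (3 * alpha**2) - sp.Rational(1, 6)  # continuation of G_2
phi2 = alpha**2 * T**2 / (8 * sp.pi**2) * G2            # middle form of Eq. (55)
target = T**2 / 12 - (alpha * T)**2 / (48 * sp.pi**2)   # T(x)^2/12 - a(x)^2/(48 pi^2)
assert sp.simplify(phi2 - target) == 0
```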
## 10 Fractalization of thermodynamics
Let us consider the case when $\alpha/2\pi$ is a rational number, represented
as the irreducible fraction $p/q$. Then, the functions
$G_{n}(\alpha)\rightarrow
G_{n}^{(p,q)}(\alpha)=\frac{1}{2}\sum_{j=1}^{q-1}[\sin(\pi jp/q)]^{-n}$ are
regular and evaluate in the relevant $n=2$ and $n=4$ cases to
$G_{2}^{(p,q)}=\frac{q^{2}-1}{6},\quad
G_{4}^{(p,q)}=\frac{q^{4}+10q^{2}-11}{90}.$ (56)
The above results are independent of the numerator $p$ of the irreducible
fraction. The quadratic field fluctuations, shear stress coefficient
$\pi_{s}$, energy density, and pressure reduce to
$\displaystyle\langle\phi^{2}\rangle^{(p,q)}$ $\displaystyle=\frac{[\alpha
T(x)]^{2}}{96\pi^{2}}(q^{2}-1),$ (57a)
$\displaystyle\mathcal{E}_{\xi}^{(p,q)}$ $\displaystyle=\frac{[\alpha
T(x)]^{4}}{480\pi^{2}}(q^{2}-1)(q^{2}+11+60\xi),$ (57b)
$\displaystyle\mathcal{P}_{\xi}^{(p,q)}$ $\displaystyle=\frac{[\alpha
T(x)]^{4}}{1440\pi^{2}}(q^{2}-1)(q^{2}+31-60\xi),$ (57c)
$\displaystyle\pi_{s;\xi}^{(p,q)}$ $\displaystyle=-\frac{[\alpha
T(x)]^{4}}{72\pi^{2}}(1-6\xi)(q^{2}-1),$ (57d)
manifestly vanishing when $q^{2}=1$, i.e. for $\alpha=2\pi$.
In the case of the Dirac field, we have $S_{n}(\alpha)\rightarrow
S_{n}^{(p,q)}=-\frac{1}{2}\sum_{j=1}^{q-1}(-1)^{j}\cos(\pi jp/q)/[\sin(\pi
jp/q)]^{n}$. For the case $n=4$, the relation
$(-1)^{q-j}\cos[\pi(q-j)p/q]=(-1)^{j+p+q}\cos(\pi jp/q)$ implies that
$S_{4}^{(p,q)}$ vanishes when $p+q$ is an odd number. This happens, in
particular, whenever $q$ is even, since $p$ must then be odd to keep the
fraction $p/q$ irreducible. When $q$ is odd, $S_{4}^{(p,q)}$ vanishes for all
even values of $p$. When both $p$ and $q$ are odd, $S_{4}^{(p,q)}$ can be
computed analytically, and the final result can be summarized as
$S_{4}^{(p,q)}=\frac{7q^{2}+17}{720}(q^{2}-1)\times\frac{1+(-1)^{p+q}}{2}.$
(58)
The fermion pressure becomes
$\mathcal{P}^{(p,q)}_{D}=\frac{[\alpha
T(x)]^{4}}{2880\pi^{2}}(q^{2}-1)(7q^{2}+17)\frac{1+(-1)^{p+q}}{2}.$ (59)
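The closed forms (56) and (58) are easy to confirm numerically; the sketch
below (ours) checks them, together with the vanishing of $S_{4}^{(p,q)}$ for
odd $p+q$, for a few irreducible fractions:

```python
import math

def G(n, p, q):                 # regularized sum of Eq. (56)
    return 0.5 * sum(math.sin(math.pi * j * p / q) ** (-n) for j in range(1, q))

def S4(p, q):                   # fermionic sum entering Eq. (58)
    return -0.5 * sum((-1) ** j * math.cos(math.pi * j * p / q)
                      / math.sin(math.pi * j * p / q) ** 4 for j in range(1, q))

for p, q in [(1, 1), (1, 2), (1, 3), (2, 3), (3, 4), (2, 5), (5, 7)]:
    assert math.gcd(p, q) == 1                           # irreducible fraction
    assert math.isclose(G(2, p, q), (q**2 - 1) / 6)
    assert math.isclose(G(4, p, q), (q**4 + 10 * q**2 - 11) / 90)
    closed = ((7 * q**2 + 17) * (q**2 - 1) / 720) * (1 + (-1) ** (p + q)) / 2
    assert math.isclose(S4(p, q), closed, abs_tol=1e-12)
```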
## 11 Conclusions
In this paper, we derived the KMS relation for bosonic and fermionic quantum
systems at finite temperature under uniform acceleration. In Wick-rotated
Minkowski spacetime, uniform acceleration requires the identification (31),
(32) of points in the bulk of the system, lying along discrete circular orbits
(33) about the Rindler horizon, which shrinks to a point (34)
under the Wick rotation. In the Wick-rotated Rindler coordinates, the KMS
relations reduce to standard (anti-)periodic boundary conditions in terms of
the imaginary rapidity coordinate. To illustrate the effectiveness of the
method, we considered the quantum thermal distributions of massless scalar and
Dirac particles under acceleration and found perfect agreement with results
previously derived in the literature.
Our work paves the way to systematic explorations of the influence of the
kinematic state of a system on its global equilibrium thermodynamic
properties. Our paper equips us with a rigorously formulated method in the
imaginary-time formalism that allows one to construct the ground state of a
field theory in thermal equilibrium in a uniformly accelerating frame,
opening, in particular, a way for first-principle lattice simulations of
accelerated systems.
## Acknowledgements
This work is supported by the European Union - NextGenerationEU through grant
No. 760079/23.05.2023, funded by the Romanian Ministry of Research, Innovation
and Digitalization through Romania’s National Recovery and Resilience Plan,
call no. PNRR-III-C9-2022-I8.
## References
* [1] P. Castorina, D. Kharzeev, H. Satz, Thermal Hadronization and Hawking-Unruh Radiation in QCD, Eur. Phys. J. C 52 (2007) 187–201. arXiv:0704.1426, doi:10.1140/epjc/s10052-007-0368-6.
* [2] D. Kharzeev, K. Tuchin, From color glass condensate to quark gluon plasma through the event horizon, Nucl. Phys. A 753 (2005) 316–334. arXiv:hep-ph/0501234, doi:10.1016/j.nuclphysa.2005.03.001.
* [3] J. D. Bjorken, Highly Relativistic Nucleus-Nucleus Collisions: The Central Rapidity Region, Phys. Rev. D 27 (1983) 140–151. doi:10.1103/PhysRevD.27.140.
* [4] F. Gelis, B. Schenke, Initial-state quantum fluctuations in the Little Bang, Annual Review of Nuclear and Particle Science 66 (1) (2016) 73–94. doi:10.1146/annurev-nucl-102115-044651.
* [5] K. Yagi, T. Hatsuda, Y. Miake, Quark-Gluon Plasma: From Big Bang to Little Bang, Cambridge Monographs on Particle Physics, Nuclear Physics and Cosmology, Cambridge University Press, 2008.
* [6] J. I. Kapusta, C. Gale, Finite-temperature field theory: Principles and applications, Cambridge Monographs on Mathematical Physics, Cambridge University Press, 2011. doi:10.1017/CBO9780511535130.
* [7] L. Adamczyk, et al., Global $\Lambda$ hyperon polarization in nuclear collisions: evidence for the most vortical fluid, Nature 548 (2017) 62–65. arXiv:1701.06657, doi:10.1038/nature23004.
* [8] A. Yamamoto, Y. Hirono, Lattice QCD in rotating frames, Phys. Rev. Lett. 111 (2013) 081601. arXiv:1303.6292, doi:10.1103/PhysRevLett.111.081601.
* [9] P. de Forcrand, O. Philipsen, The QCD phase diagram for small densities from imaginary chemical potential, Nucl. Phys. B 642 (2002) 290–306. arXiv:hep-lat/0205016, doi:10.1016/S0550-3213(02)00626-0.
* [10] H.-L. Chen, K. Fukushima, X.-G. Huang, K. Mameda, Analogy between rotation and density for Dirac fermions in a magnetic field, Phys. Rev. D 93 (10) (2016) 104052\. arXiv:1512.08974, doi:10.1103/PhysRevD.93.104052.
* [11] Y. Jiang, J. Liao, Pairing Phase Transitions of Matter under Rotation, Phys. Rev. Lett. 117 (19) (2016) 192302. arXiv:1606.03808, doi:10.1103/PhysRevLett.117.192302.
* [12] M. N. Chernodub, S. Gongyo, Interacting fermions in rotation: chiral symmetry restoration, moment of inertia and thermodynamics, JHEP 01 (2017) 136. arXiv:1611.02598, doi:10.1007/JHEP01(2017)136.
* [13] X. Wang, M. Wei, Z. Li, M. Huang, Quark matter under rotation in the NJL model with vector interaction, Phys. Rev. D 99 (1) (2019) 016018. arXiv:1808.01931, doi:10.1103/PhysRevD.99.016018.
* [14] N. Sadooghi, S. M. A. Tabatabaee Mehr, F. Taghinavaz, Inverse magnetorotational catalysis and the phase diagram of a rotating hot and magnetized quark matter, Phys. Rev. D 104 (11) (2021) 116022. arXiv:2108.12760, doi:10.1103/PhysRevD.104.116022.
* [15] X. Chen, L. Zhang, D. Li, D. Hou, M. Huang, Gluodynamics and deconfinement phase transition under rotation from holography, JHEP 07 (2021) 132. arXiv:2010.14478, doi:10.1007/JHEP07(2021)132.
* [16] Y. Fujimoto, K. Fukushima, Y. Hidaka, Deconfining Phase Boundary of Rapidly Rotating Hot and Dense Matter and Analysis of Moment of Inertia, Phys. Lett. B 816 (2021) 136184. arXiv:2101.09173, doi:10.1016/j.physletb.2021.136184.
* [17] M. N. Chernodub, Inhomogeneous confining-deconfining phases in rotating plasmas, Phys. Rev. D 103 (5) (2021) 054027. arXiv:2012.04924, doi:10.1103/PhysRevD.103.054027.
* [18] V. V. Braguta, A. Y. Kotov, D. D. Kuznedelev, A. A. Roenko, Study of the Confinement/Deconfinement Phase Transition in Rotating Lattice SU(3) Gluodynamics, Pisma Zh. Eksp. Teor. Fiz. 112 (1) (2020) 9–16. doi:10.31857/S1234567820130029.
* [19] V. V. Braguta, A. Y. Kotov, D. D. Kuznedelev, A. A. Roenko, Influence of relativistic rotation on the confinement-deconfinement transition in gluodynamics, Phys. Rev. D 103 (9) (2021) 094515. arXiv:2102.05084, doi:10.1103/PhysRevD.103.094515.
* [20] V. V. Braguta, A. Kotov, A. Roenko, D. Sychev, Thermal phase transitions in rotating QCD with dynamical quarks, PoS LATTICE2022 (2023) 190. arXiv:2212.03224, doi:10.22323/1.430.0190.
* [21] V. V. Braguta, M. N. Chernodub, A. A. Roenko, D. A. Sychev, Negative moment of inertia and rotational instability of gluon plasma (3 2023). arXiv:2303.03147.
* [22] V. V. Braguta, I. E. Kudrov, A. A. Roenko, D. A. Sychev, M. N. Chernodub, Lattice Study of the Equation of State of a Rotating Gluon Plasma, JETP Lett. 117 (9) (2023) 639–644. doi:10.1134/S0021364023600830.
* [23] J.-C. Yang, X.-G. Huang, QCD on Rotating Lattice with Staggered Fermions (7 2023). arXiv:2307.05755.
* [24] F. Sun, K. Xu, M. Huang, Quarkyonic phase induced by Rotation (7 2023). arXiv:2307.14402.
* [25] M. N. Chernodub, V. A. Goy, A. V. Molochkov, Inhomogeneity of a rotating gluon plasma and the Tolman-Ehrenfest law in imaginary time: Lattice results for fast imaginary rotation, Phys. Rev. D 107 (11) (2023) 114502. arXiv:2209.15534, doi:10.1103/PhysRevD.107.114502.
* [26] A. Yamamoto, Lattice QCD in curved spacetimes, Phys. Rev. D 90 (5) (2014) 054510\. arXiv:1405.6665, doi:10.1103/PhysRevD.90.054510.
* [27] C. Cercignani, G. M. Kremer, The Relativistic Boltzmann Equation: Theory and Applications, Springer, 2002.
* [28] F. Becattini, Covariant statistical mechanics and the stress-energy tensor, Phys. Rev. Lett. 108 (2012) 244502. arXiv:1201.5278, doi:10.1103/PhysRevLett.108.244502.
* [29] W. G. Unruh, Notes on black hole evaporation, Phys. Rev. D 14 (1976) 870. doi:10.1103/PhysRevD.14.870.
* [30] S. W. Hawking, Black hole explosions?, Nature 248 (5443) (1974) 30–31. doi:10.1038/248030a0.
* [31] S. W. Hawking, Particle creation by black holes, Communications In Mathematical Physics 43 (3) (1975) 199–220. doi:10.1007/bf02345020.
* [32] G. W. Gibbons, M. J. Perry, Black Holes and Thermal Green’s Functions, Proc. Roy. Soc. Lond. A 358 (1978) 467–494. doi:10.1098/rspa.1978.0022.
* [33] G. W. Gibbons, M. J. Perry, Black Holes in Thermal Equilibrium, Phys. Rev. Lett. 36 (1976) 985. doi:10.1103/PhysRevLett.36.985.
* [34] N. D. Birrell, P. C. W. Davies, Quantum Fields in Curved Space, Cambridge Monographs on Mathematical Physics, Cambridge Univ. Press, Cambridge, UK, 1984\. doi:10.1017/CBO9780511622632.
* [35] G. Y. Prokhorov, O. V. Teryaev, V. I. Zakharov, Unruh effect for fermions from the Zubarev density operator, Phys. Rev. D 99 (7) (2019) 071901. arXiv:1903.09697, doi:10.1103/PhysRevD.99.071901.
* [36] G. Y. Prokhorov, O. V. Teryaev, V. I. Zakharov, Calculation of acceleration effects using the Zubarev density operator, Particles 3 (1) (2020) 1–14. arXiv:1911.04563, doi:10.3390/particles3010001.
* [37] F. Becattini, M. Buzzegoli, A. Palermo, Exact equilibrium distributions in statistical quantum field theory with rotation and acceleration: scalar field, JHEP 02 (2021) 101. arXiv:2007.08249, doi:10.1007/JHEP02(2021)101.
* [38] A. Palermo, M. Buzzegoli, F. Becattini, Exact equilibrium distributions in statistical quantum field theory with rotation and acceleration: Dirac field, JHEP 10 (2021) 077. arXiv:2106.08340, doi:10.1007/JHEP10(2021)077.
* [39] V. E. Ambrus, Dirac fermions on rotating space-times, Ph.D. thesis, Sheffield U. (2014).
* [40] V. E. Ambrus, E. Winstanley, Vortical Effects for Free Fermions on Anti-De Sitter Space-Time, Symmetry 13 (2021) 2019. arXiv:2107.06928, doi:10.3390/sym13112019.
* [41] S. Mallik, S. Sarkar, Hadrons at Finite Temperature, Cambridge University Press, Cambridge, 2016. doi:10.1017/9781316535585.
* [42] V. E. Ambrus, Fermion condensation under rotation on anti-de Sitter space, Acta Phys. Polon. Supp. 13 (2020) 199. arXiv:1912.02014, doi:10.5506/APhysPolBSupp.13.199.
* [43] G. Y. Prokhorov, O. V. Teryaev, V. I. Zakharov, Thermodynamics of accelerated fermion gases and their instability at the Unruh temperature, Phys. Rev. D 100 (12) (2019) 125009. arXiv:1906.03529, doi:10.1103/PhysRevD.100.125009.
* [44] G. Y. Prokhorov, O. V. Teryaev, V. I. Zakharov, Unruh effect universality: emergent conical geometry from density operator, JHEP 03 (2020) 137. arXiv:1911.04545, doi:10.1007/JHEP03(2020)137.
* [45] V. I. Zakharov, G. Y. Prokhorov, O. V. Teryaev, Acceleration and rotation in quantum statistical theory, Phys. Scripta 95 (8) (2020) 084001. doi:10.1088/1402-4896/ab996b.
* [46] J. S. Dowker, Vacuum Averages for Arbitrary Spin Around a Cosmic String, Phys. Rev. D 36 (1987) 3742. doi:10.1103/PhysRevD.36.3742.
* [47] B. Linet, Euclidean spinor Green’s functions in the space-time of a straight cosmic string, J. Math. Phys. 36 (1995) 3694–3703. arXiv:gr-qc/9412050, doi:10.1063/1.530991.
* [48] S. Chen, K. Fukushima, Y. Shimada, Confinement in hot gluonic matter with imaginary and real rotation (7 2022). arXiv:2207.12665.
* [49] V. E. Ambruş, M. N. Chernodub, Rigidly rotating scalar fields: Between real divergence and imaginary fractalization, Phys. Rev. D 108 (8) (2023) 085016\. arXiv:2304.05998, doi:10.1103/PhysRevD.108.085016.
* [50] M. N. Chernodub, Fractal thermodynamics and ninionic statistics of coherent rotational states: realization via imaginary angular rotation in imaginary time formalism (10 2022). arXiv:2210.05651.
* [51] C. G. Callan, S. Coleman, R. Jackiw, A new improved energy-momentum tensor, Annals of Physics 59 (1) (1970) 42–73. doi:10.1016/0003-4916(70)90394-5.
* [52] V. P. Frolov, E. M. Serebriany, Vacuum polarization in the gravitational field of a cosmic string, Physical Review D 35 (12) (1987) 3779–3782. doi:10.1103/physrevd.35.3779.
* [53] F. Becattini, E. Grossi, Quantum corrections to the stress-energy tensor in thermodynamic equilibrium with acceleration, Phys. Rev. D 92 (2015) 045037. arXiv:1505.07760, doi:10.1103/PhysRevD.92.045037.
* [54] V. I. Zakharov, G. Y. Prokhorov, O. V. Teryaev, Acceleration and rotation in quantum statistical theory, Physica Scripta 95 (8) (2020) 084001. doi:10.1088/1402-4896/ab996b.
* [55] D. V. Diakonov, K. V. Bazarov, Thermal loops in the accelerating frame (1 2023). arXiv:2301.07478.
# Modified particle lifetimes as a signature of deformed relativity
Pedro H. Morais <EMAIL_ADDRESS> Physics Department, Federal University of
Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
Iarley P. Lobo <EMAIL_ADDRESS> Department of Chemistry and Physics, Federal
University of Paraíba, Rodovia BR 079 - Km 12, 58397-000 Areia-PB, Brazil.
Physics Department, Federal University of Lavras, Caixa Postal 3037,
37200-000 Lavras-MG, Brazil.
Christian Pfeifer <EMAIL_ADDRESS> ZARM, University of Bremen, 28359 Bremen,
Germany.
Rafael Alves Batista <EMAIL_ADDRESS> Instituto de Física Teórica UAM-CSIC,
C/ Nicolás Cabrera 13-15, 28049 Madrid, Spain.
Valdir B. Bezerra <EMAIL_ADDRESS> Physics Department, Federal University of
Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
###### Abstract
We demonstrate a compatibility between the relativity principle and the clock
postulate in deformed special relativity, by identifying the relevant deformed
Lorentz transformations in position space between arbitrary frames. This
result leads to a first-principles correction to the dilated lifetime of
fundamental particles. It turns out that these modified time dilations offer a
way to scrutinize Lorentz invariance (or deviations thereof) to high
precision.
Introduction. The characterization of what should be a quantum spacetime is
one of the routes that may lead us to an appropriate description of quantum
gravity. We expect that such a challenging task should pass through
intermediary steps before its full realization from the theoretical and
experimental points of view. For this reason, it is plausible to expect that
corrections to the Riemannian and general relativistic descriptions of gravity
should become manifest once we advance towards the Quantum Gravity (QG) scale. In this sense, quantum gravity phenomenology plays a fundamental role by
translating the intuition of this area to observables, recognizable in our
current treatment of spacetime physics (for reviews on the subject we refer
the reader to Amelino-Camelia (2013); Addazi _et al._ (2022)).
It is known that there exist formulations of such effective quantum gravity
spacetime geometries, in which some of the cornerstones of modern physics,
like the physical equivalence between local inertial observers, are preserved
Amelino-Camelia (2002). Those formulations that incorporate, in a relativistic way, an invariant length/energy scale, which is expected to arise in various fundamental quantum gravity models, are known as Deformed Special
Relativity (DSR) models Magueijo and Smolin (2003); Majid and Ruegg (1994);
Bruno _et al._ (2001); Barcaroli _et al._ (2015); Girelli _et al._ (2007).
They extend Einstein’s first postulate and their impact on observables is
intensively studied. Among the possibilities for the realization of this idea,
Finsler geometry stands out when one searches for a continuous description of
the kinematics of a particle in a curved spacetime with a fundamental length
Girelli _et al._ (2007); Lobo and Pfeifer (2021); Lobo _et al._ (2017); Zhu
and Ma (2023a). It describes the spacetime geometry fully in terms of an arc-
length functional (for the usefulness of different commutative geometries for
the description of phase and configuration spaces, please refer to the reviews
Pfeifer (2019); Albuquerque _et al._ (2023); Zhu and Ma (2023b)). Finslerian
geodesics turn out to be the deformed trajectories of massless particles that
are analyzed in the rich phenomenology of time delays from gamma-ray bursts
Zhu and Ma (2022) and isometries of the Finsler metric are connected to local
deformations of the Lorentz symmetry Amelino-Camelia _et al._ (2014). This
means, for example, that if we call $E_{\text{Pl}}$ the Planck energy, then the kinematics of a particle of mass $m$, energy $E$ and momentum $|\vec{p}|=|p|$, subject to a modified dispersion relation (MDR) of the form ($\eta^{(n)}$ is a dimensionless parameter that controls the perturbative approach that we are going to follow)111We are using natural units such that $c=\hbar=1$.
$E^{2}-p^{2}=m^{2}+\eta^{(n)}\frac{|p|^{n+2}}{E_{\text{Pl}}^{n}}\,$ (1)
are determined by the arc-length functional,
$s=\int F(\dot{t},\dot{x})d\lambda\,,$ (2)
where
$F(\dot{t},\dot{x})=\sqrt{\dot{t}^{2}-\dot{x}^{2}}+\frac{\eta^{(n)}}{2}\left(\frac{m}{E_{\text{Pl}}}\right)^{n}\frac{|\dot{x}|^{n+2}}{(\dot{t}^{2}-\dot{x}^{2})^{\frac{n+1}{2}}}\,.$
(3)
Here, “dot” means derivative with respect to the parameter $\lambda$ and
$|\dot{x}|=|\dot{\vec{x}}|$. For simplicity we shall assume a treatment in
$1+1$ dimensions in the following. The function $F$ is called the Finsler function, and this result has been derived in more detail and generality in Lobo and Pfeifer (2021). This equivalence of treatments is due to the fact that the Finsler function above is the Lagrangian obtained from a Legendre transformation Rodrigues and Lobo (2022) of the nonquadratic Hamiltonian defined by the MDR (1), as was originally discussed in Girelli _et al._ (2007).
Time dilation from the Clock Postulate. Recently, a novel aspect of the Finslerian description of quantum spacetime was found in Lobo and Pfeifer (2021). By applying the Clock Postulate (CP) (which states that the proper time an observer measures between two events is given by the arc-length of its worldline in spacetime between the two events), we found that the dilated laboratory (lab) lifetime $t$ of a particle moving with velocity $v=dx/dt$ relative to the lab, expressed in terms of its proper rest-frame lifetime $\tau$, is
$t=\gamma\tau\left[1-\frac{\eta^{(n)}}{2}\left(\frac{m}{E_{\text{Pl}}}\right)^{n}(\gamma^{2}-1)^{\frac{n+2}{2}}\right]\,,$
(4)
where $\gamma^{-1}=\sqrt{1-v^{2}}$. We clearly see the Planck-scale
corrections beyond Special Relativity (SR) induced by the MDR (1). In order to
connect this expression with observations, it is necessary to express the
velocity $v$ in terms of the particle’s energy in Finsler geometry, which
simply reads
$\displaystyle\begin{split}E&=m\frac{\partial
F}{\partial\dot{t}}\Bigg{|}_{\lambda=t}\\\
&=m\gamma\left[1-\frac{\eta^{(n)}}{2}(n+1)\left(\frac{m}{E_{\text{Pl}}}\right)^{n}(\gamma^{2}-1)^{\frac{n+2}{2}}\right].\end{split}$
(5)
Curiously, as we discussed in Lobo _et al._ (2022), this expression is
actually a deformed Lorentz transformation from the rest frame to the
laboratory. Expression (5) can be inverted, giving
$\gamma=\frac{E}{m}\left[1+\frac{\eta^{(n)}}{2}(n+1)\left(\frac{m}{E_{\text{Pl}}}\right)^{n}\left[\left(\frac{E}{m}\right)^{2}-1\right]^{\frac{n+2}{2}}\right]\,.$
(6)
From this expression, the CP easily yields the Finslerian description of the dilated lifetime of a particle. Let $\tau$ be the particle’s lifetime at rest, and let $m$ and $E$ be its mass and energy, which obey an MDR of the form (1). Then, the particle lifetime $t$ in the laboratory frame is
$\displaystyle
t=\gamma_{\text{CP}}\tau=\frac{E}{m}\left[1+\frac{n\eta^{(n)}}{2}\left(\frac{|p|}{m}\right)^{2}\left(\frac{|p|}{E_{\text{Pl}}}\right)^{n}\right]\tau\,.$
(7)
We call the modified Lorentz factor that dilates the lifetime $\gamma=\gamma_{\text{CP}}$, since it was calculated from an extension of the clock postulate to an effective quantum spacetime, described in terms of the Finsler geometry used for the kinematics of a particle subject to an MDR.
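As a cross-check, the chain from the Finsler function (3) to the energy (5) can be verified symbolically. The sketch below is ours (it is not part of the derivation above) and assumes the representative value $n=2$, a treatment in $1+1$ dimensions, and $v>0$:

```python
# Minimal sympy sketch (assuming n = 2 and 1+1 dimensions with v > 0):
# check that E = m*dF/d(tdot) along (tdot, xdot) = (1, v) reproduces Eq. (5).
import sympy as sp

m, EPl, eta, v, td, xd = sp.symbols('m E_Pl eta v tdot xdot', positive=True)
n = 2  # representative value of the MDR power

# Finsler function of Eq. (3), with |xdot| written as xdot (xdot > 0)
F = sp.sqrt(td**2 - xd**2) \
    + (eta/2)*(m/EPl)**n * xd**(n + 2)/(td**2 - xd**2)**sp.Rational(n + 1, 2)

E = (m*sp.diff(F, td)).subs({td: 1, xd: v})

gamma = 1/sp.sqrt(1 - v**2)
E_eq5 = m*gamma*(1 - (eta/2)*(n + 1)*(m/EPl)**n*(gamma**2 - 1)**sp.Rational(n + 2, 2))

print(sp.simplify(E - E_eq5))  # expected output: 0
```

The check is exact rather than perturbative here, since $F$ is linear in $\eta^{(n)}$ at this order.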
A similar effect was described preliminarily in Trimarelli (2022), in which a
corrected Lorentz factor is suggested to be
$\gamma_{\text{LIV}}=E/m_{\text{LIV}}$, where
$m_{\text{LIV}}^{2}=m^{2}+\eta^{(n)}|p|^{n+2}/E_{\text{Pl}}^{n}$ is the right-
hand side of Eq.(1) (LIV stands for Lorentz Invariance Violation). The first
order (and dominant) correction of this expression gives
$\gamma_{\text{LIV}}=E/m_{\text{LIV}}\approx\frac{E}{m}\left[1-\frac{\eta^{(n)}}{2}\left(\frac{|p|}{m}\right)^{2}\left(\frac{|p|}{E_{\text{Pl}}}\right)^{n}\right]\,.$
(8)
The expressions for $\gamma_{\text{LIV}}$ and $\gamma_{\text{CP}}$ look similar at first order (one simply translates superluminal effects from $\gamma_{\text{LIV}}$ into subluminal ones in $\gamma_{\text{CP}}$). However, only in the CP case does the concept of time emerge in a natural way, thanks to the Finslerian approach employed. In the LIV case, there seems to be no deeper reason to suppose that the Lorentz factor that dilates lifetimes should be modified in the way proposed.
Interestingly, in both cases, this kind of correction presents an amplifying factor given by $(|p|/m)^{2}$, which can become very large for ultra-high-energy cosmic rays (UHECRs), as illustrated below. This is the reason why dilated lifetimes have been considered as potential observables in the search for quantum gravity and deviations from Lorentz invariance Trimarelli (2022), in addition to other effects such as modified interaction thresholds Abreu _et al._ (2022).
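To get a feeling for the numbers, here is a back-of-the-envelope estimate (ours; the momentum $p=10^{19}$ eV is an assumed benchmark, with the proton mass and the Planck energy as inputs):

```python
# Back-of-the-envelope size of the first-order factor (|p|/m)^2 (|p|/E_Pl)^n
# multiplying eta^(n) in gamma_CP, for a UHECR proton.
# Assumed numbers: p = 1e19 eV, m = 0.938e9 eV (proton), E_Pl = 1.22e28 eV.
p, m, E_Pl = 1e19, 0.938e9, 1.22e28  # all in eV

for n in (1, 2):
    amplification = (p/m)**2 * (p/E_Pl)**n
    print(f"n = {n}: (p/m)^2 (p/E_Pl)^n ~ {amplification:.1e}")
# n = 1 gives ~ 9e10, n = 2 gives ~ 8e1: even tiny eta^(1) is within reach.
```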
Despite the clear physical interpretation and mathematical formulation of the
CP approach, one could be tempted to state that an actual comparison between
times in different frames must be derived from an actual map between
observers. In order to be coherent with the DSR roots of the Finsler relation
with quantum gravity phenomenology, such an effect must come from a Deformed
Lorentz Transformation (DLT) involving spacetime coordinates – a step that is
missing so far in this approach.
This is precisely the goal of this letter. We therefore seek to show that in this DSR scenario, just like in SR, the result concerning the CP is actually an isometry of the Finsler measure, that is, a DLT between frames that move relative to each other with velocity $v$.
Compatibility between the Clock Postulate and a Deformed Lorentz Transformation. To prove that Eq.(7) is indeed a DLT, we use the geodesic equation that determines the relation $x(t)$. To find it, we use the conserved quantity given by the spatial momentum $p=m\,\partial F/\partial\dot{x}$. Since this expression is parametrization-invariant, we use the laboratory time as the parameter and solve this equation for $dx/dt$. Finally, we use Eq.(7) to express this solution as a function of $\tau$, $E$, $p$ and $m$ as222In fact,
the relation between a propagation distance $L_{xy}=|x|$, the transverse
momentum $|p|=p_{\text{T}}$, the mass of a particle from the PDG
$m=M_{\text{PDG}}$ and the proper time $\tau$ is the basis for the measurement
of particles’ proper lifetimes in accelerators ALICE collaboration (2023).
Therefore, our result describes discrepancies that could emerge for
measurements done with future experiments (with higher energies than those
attainable today) and with better precision Lobo and Pfeifer (2023).
$\displaystyle
x=-\frac{p\,\tau}{m}\left[1+\frac{\eta^{(n)}}{2}\frac{(2m^{2}+nE^{2})}{m^{2}}\left(\frac{|p|}{E_{\text{Pl}}}\right)^{n}\right]\,.$
(9)
This is basically the geodesic solution in the proper time parametrization. If
we use the above expression along with (7) to calculate the Finsler function
(3), a direct calculation shows that for on-shell particles
$\displaystyle F(\dot{t},\dot{x})$
$\displaystyle=\sqrt{\dot{t}^{2}-\dot{x}^{2}}+\frac{\eta^{(n)}}{2}\left(\frac{m}{E_{\text{Pl}}}\right)^{n}\frac{|\dot{x}|^{n+2}}{(\dot{t}^{2}-\dot{x}^{2})^{\frac{n+1}{2}}}$
$\displaystyle=\dot{\tau}=F(\dot{\tau},0)\,.$ (10)
This proves that the set of transformations given by Eqs.(7) and (9)
corresponds to an isometry in Finsler geometry; therefore, they constitute a
Deformed Lorentz Transformation.
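The isometry property (10) can be checked at first order with a few lines of computer algebra. The sketch below is ours and assumes $n=1$, with $\dot{t}$ taken from Eq.(7) and $|\dot{x}|$ from Eq.(9) in the proper-time parametrization:

```python
# Sketch (assuming n = 1): with dt/dtau from Eq. (7) and |dx/dtau| from Eq. (9),
# the Finsler function evaluates to F = dtau/dtau = 1 up to O(eta^2) terms,
# i.e. the transformation is an isometry, Eq. (10).
import sympy as sp

m, EPl, eta, p = sp.symbols('m E_Pl eta p', positive=True)
n = 1  # representative value of the MDR power

E = sp.sqrt(m**2 + p**2 + eta*p**(n + 2)/EPl**n)               # exact on-shell energy
td = (E/m)*(1 + (n*eta/2)*(p/m)**2*(p/EPl)**n)                 # dt/dtau, Eq. (7)
xd = (p/m)*(1 + (eta/2)*(2*m**2 + n*E**2)/m**2*(p/EPl)**n)     # |dx/dtau|, Eq. (9)

F = sp.sqrt(td**2 - xd**2) \
    + (eta/2)*(m/EPl)**n * xd**(n + 2)/(td**2 - xd**2)**sp.Rational(n + 1, 2)

print(sp.simplify(sp.series(F, eta, 0, 2).removeO()))  # expected output: 1
```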
An alternative expression for this transformation can be found by expressing
$E$ and $p$ as a function of the velocity $v$ from $p_{\mu}=m\partial
F/\partial\dot{x}^{\mu}$ in the lab time parametrization. In this case, the
transformations for $t$ and $x$ are simply
$\displaystyle t$
$\displaystyle=\gamma\tau\left[1-\frac{\eta^{(n)}}{2}\left(\frac{m}{E_{\text{Pl}}}\right)^{n}(\gamma^{2}-1)^{\frac{n+2}{2}}\right]=\gamma_{\text{CP}}\tau\,,$
(11) $\displaystyle x$
$\displaystyle=v\gamma\tau\left[1-\frac{\eta^{(n)}}{2}\left(\frac{m}{E_{\text{Pl}}}\right)^{n}(\gamma^{2}-1)^{\frac{n+2}{2}}\right]=v\gamma_{\text{CP}}\tau\,.$
(12)
It is straightforward to verify that $v$ is indeed the velocity of the particle in the lab frame, since a direct computation gives $dx/dt=v$.
This is a remarkable result. For the first time, we simultaneously have a DLT involving the space and time coordinates in which the boost parameter is unambiguously identified as the particle velocity $dx/dt$. Besides that, this result is compatible with the CP (just like in SR), which allows us to describe what would be a Planck-scale correction to the twin paradox. For this reason, we are confident in stating that Eq.(7) actually defines a DSR Lorentz factor
$\displaystyle\gamma_{\text{CP}}=\gamma_{\text{DSR}}\,.$ (13)
Let us compare our findings with those of Trimarelli (2022) for the LIV case, which are currently being analyzed using UHECRs. One may be tempted to translate the results found in that paper using the map $\eta^{(n)}\mapsto-n\eta^{(n)}$ (since only very-high-energy relativistic particles would effectively contribute to the effect). However, we should notice that, besides the modification of the particle’s lifetime, a modified velocity is also used as input
$v_{\text{LIV}}=\beta_{\text{LIV}}=\frac{|p|}{m\gamma_{\text{LIV}}}\approx\frac{|p|}{E}\left[1+\frac{\eta^{(n)}}{2}\left(\frac{|p|}{m}\right)^{2}\left(\frac{|p|}{E_{\text{Pl}}}\right)^{n}\right]\,.$
(14)
In our case, the relation between the velocity $v$ and the momenta is
naturally given by the definition of the spatial physical momentum
$p=m\frac{\partial
F}{\partial\dot{x}}\Bigg{|}_{\lambda=t}=-mv\gamma\left[1+\frac{\eta^{(n)}}{2}\frac{(mv)^{n}(v^{2}-2-n)}{E_{\text{Pl}}^{n}(1-v^{2})^{\frac{n+2}{2}}}\right]\,,$
(15)
from which we can calculate its absolute value $|p|$ and, using the relation
between the Lorentz factor and the energy (5), we derive the following
$\displaystyle
v_{\text{DSR}}=\beta_{\text{DSR}}=\frac{|p|}{E}\left[1+\frac{(2+n)\eta^{(n)}}{2}\left(\frac{|p|}{E_{\text{Pl}}}\right)^{n}\right]\,.$
(16)
As expected, this result is actually the velocity of the particle defined from
the MDR (1), since
$v=\Bigg{|}\frac{\partial E}{\partial
p}\Bigg{|}\stackrel{{\scriptstyle\text{MDR}}}{{=\mathrel{\mkern-3.0mu}=\mathrel{\mkern-3.0mu}=}}v_{\text{DSR}}\,.$
(17)
For this reason, even in a LIV scenario, one should use Eq.(16) instead of (14). We can thus drop the label DSR from the velocity in (16), since it is simply the velocity of the particle read off from the MDR. With this observation, we complete the analysis connecting the rest and the lab frames.
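The identity (17) behind this observation is easy to verify symbolically; the following sketch is ours and assumes $n=1$, with $E$ denoting the exact on-shell energy:

```python
# Sketch (assuming n = 1; E is the exact on-shell energy): the group velocity
# dE/dp computed from the MDR (1) coincides with v_DSR of Eq. (16).
import sympy as sp

m, EPl, eta, p = sp.symbols('m E_Pl eta p', positive=True)
n = 1  # representative value of the MDR power

E = sp.sqrt(m**2 + p**2 + eta*p**(n + 2)/EPl**n)   # from E^2 - p^2 = m^2 + eta p^{n+2}/E_Pl^n
v = sp.diff(E, p)                                  # group velocity |dE/dp|

v_eq16 = (p/E)*(1 + (2 + n)*eta/2*(p/EPl)**n)
print(sp.simplify(v - v_eq16))  # expected output: 0
```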
The next natural step consists in connecting general spacetime rectangular
coordinates of different frames that move relative to each other with velocity
$v$, i.e., $(t,x)\mapsto(t^{\prime},x^{\prime})$, which reduces to the
previous case when the target frame is $(\tau,0)\mapsto(t,x)$.
General Deformed Lorentz Transformation. In order to generalize the previous
isometry such that we connect two arbitrary rectangular coordinates, it is
sufficient to propose a transformation involving the boost parameter $v$, the
spacetime coordinates $(t,x)$ and the velocities $(\dot{t},\dot{x})$ that
reduces to the previous case when the target frame obeys $x=0=\dot{x}$. Since
this is a transformation that shall leave the Finsler function invariant (and
consequently the metric), it should not depend on the parametrization
$\lambda$ of the velocities $(\dot{t},\dot{x})$.
The functions that naturally satisfy this requirement are the momentum
components
$\displaystyle E(\dot{t},\dot{x})=m\frac{\partial
F(\dot{t},\dot{x})}{\partial\dot{t}},\quad p(\dot{t},\dot{x})=m\frac{\partial
F(\dot{t},\dot{x})}{\partial\dot{x}}\,,$ (18)
which satisfy $E=m$ in the particle rest frame, i.e. for $x=0=v$, see (5) and
(15).
A generalisation of the transformations (11) and (12) for transformations
between arbitrary frames $(t,x)\mapsto(t^{\prime},x^{\prime})$ is constructed
from combinations of factors of the type $E^{n-r}|p|^{r}/E_{\text{Pl}}^{n}$,
where $E$ and $p$ are treated as functions of $(\dot{t},\dot{x})$. A general
ansatz for such transformations to first order in $\eta^{(n)}$ is of the
following form:
$\displaystyle t^{\prime}=$
$\displaystyle\,\,\,(t+xv)\gamma+\frac{\eta^{(n)}}{2}t\gamma\left(\frac{E}{E_{\text{Pl}}}\right)^{n}\left(\gamma^{2}-1\right)^{\frac{n+2}{2}}$
(19)
$\displaystyle+\frac{\eta^{(n)}}{2E_{\text{Pl}}}\left(t\sum_{r=0}^{n}\alpha_{r}E^{n-r}|p|^{r}+x\sum_{r=0}^{n}\beta_{r}E^{n-r}|p|^{r}\right)\,,$
$\displaystyle x^{\prime}=$
$\displaystyle\,\,\,(tv+x)\gamma+\frac{\eta^{(n)}}{2}tv\gamma\left(\frac{E}{E_{\text{Pl}}}\right)^{n}\left(\gamma^{2}-1\right)^{\frac{n+2}{2}}$
(20)
$\displaystyle+\frac{\eta^{(n)}}{2E_{\text{Pl}}}\left(x\sum_{r=0}^{n}\delta_{r}E^{n-r}|p|^{r}+t\sum_{r=0}^{n}\lambda_{r}E^{n-r}|p|^{r}\right)\,,$
where, for clarity, we isolated the terms of the sums which are non-vanishing in the rest frame, i.e., $p=0=x$, $E=m$.
Imposing the isometry condition
$F^{2}(\dot{t}^{\prime},\dot{x}^{\prime})=F^{2}(\dot{t},\dot{x})$ on this
transformation and noticing that
$E=m\dot{t}/\sqrt{\dot{t}^{2}-\dot{x}^{2}}+{\cal O}(m/E_{\text{Pl}})$ and
$p=-m\dot{x}/\sqrt{\dot{t}^{2}-\dot{x}^{2}}+{\cal O}(m/E_{\text{Pl}})$ are
conserved functions of velocities, we derive an expression involving powers
and factors of $\dot{t}$ and $|\dot{x}|$, besides terms like
$(|\dot{x}|+v\dot{t})^{n}$, for which we can use the binomial theorem to
express it in terms of combinations of $\dot{t}^{n-r}|\dot{x}|^{r}$.
Furthermore, imposing that this transformation should reduce to that of
Eqs.(11), (12) when $p=0=x$ and $E=m$, we find the following conditions
$\displaystyle\alpha_{0}$ $\displaystyle=0=\lambda_{0},\,$
$\displaystyle\beta_{0}$ $\displaystyle=-v^{1+n}\gamma^{3+n},\,$ (21)
$\displaystyle\delta_{0}$ $\displaystyle=-v^{n}\gamma^{1+n}(\gamma^{2}-1),$
(22) $\displaystyle\alpha_{n}$
$\displaystyle=v\beta_{n}+\gamma-\gamma^{-1},\,$ $\displaystyle\lambda_{n}$
$\displaystyle=\beta_{n}+v\gamma(1+\gamma^{n})\,,$ (23)
$\displaystyle\delta_{n}$
$\displaystyle=v\beta_{n}+\gamma^{-1}+\gamma^{1+n}\,,$ (24)
and for $1\leq r\leq n-1$,
$\displaystyle\alpha_{r}$
$\displaystyle=v\beta_{r},\,\qquad\lambda_{r}=\beta_{r}+v^{1+n-r}\gamma^{1+n}\binom{n}{r}\,,$
(25) $\displaystyle\delta_{r}$
$\displaystyle=v\beta_{r}+v^{n-r}\gamma^{1+n}\binom{n}{r}\,,$ (26)
where $\binom{n}{r}$ is the binomial coefficient. We see that we have the freedom to choose $n$ arbitrary functions of $v$, namely $\beta_{r}\,(1\leq r\leq n)$, which should vanish as $v\rightarrow 0$ in order to guarantee that we recover the identity transformation when the velocity is zero. As this transformation preserves the Finsler function, it also preserves the metric and consequently the MDR (1) that is calculated from the norm of the momenta Girelli _et al._ (2007); Barcaroli _et al._ (2015); Lobo _et al._ (2017).
A simple choice for the $\beta_{r}$ is setting all $\beta_{r}=0$ for $1\leq r\leq n$, as illustrated in the sketch below. Another possibility is to fix the $\beta_{r}$ by comparing this transformation with one arising from the action of boosts in a quantum algebraic approach. For example, similar ambiguities are found in the bicrossproduct basis of $\kappa$-Poincaré-inspired Finsler isometries, which could be fixed by comparing the generators of the transformations found from the geometric and algebraic approaches Amelino-Camelia _et al._ (2014).
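The sketch announced above is the following hypothetical helper (the function name and interface are ours); it assembles the coefficient lists of the ansatz (19)-(20) from the constraints (21)-(26) with the choice $\beta_{r}=0$:

```python
# Hypothetical helper (names and structure are ours): assemble the coefficients
# alpha_r, beta_r, delta_r, lambda_r of the ansatz (19)-(20) from the
# constraints (21)-(26), for the simple choice beta_r = 0 (1 <= r <= n).
from math import comb

def dlt_coefficients(n: int, v: float, gamma: float):
    beta  = [-(v**(1 + n))*gamma**(3 + n)] + [0.0]*n            # beta_0 from (21)
    alpha = [0.0]*(n + 1)                                       # alpha_0 = 0 from (21)
    lam   = [0.0]*(n + 1)                                       # lambda_0 = 0 from (21)
    delta = [-(v**n)*gamma**(1 + n)*(gamma**2 - 1)] + [0.0]*n   # delta_0 from (22)
    for r in range(1, n):                                       # 1 <= r <= n-1: Eqs. (25)-(26)
        alpha[r] = v*beta[r]
        lam[r]   = beta[r] + v**(1 + n - r)*gamma**(1 + n)*comb(n, r)
        delta[r] = v*beta[r] + v**(n - r)*gamma**(1 + n)*comb(n, r)
    alpha[n] = v*beta[n] + gamma - 1/gamma                      # Eq. (23)
    lam[n]   = beta[n] + v*gamma*(1 + gamma**n)                 # Eq. (23)
    delta[n] = v*beta[n] + 1/gamma + gamma**(1 + n)             # Eq. (24)
    return alpha, beta, delta, lam

# Example: n = 1, v = 0.5, gamma = (1 - v^2)^(-1/2)
print(dlt_coefficients(1, 0.5, (1 - 0.25)**-0.5))
```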
Conclusion. Phenomenological models which break Lorentz invariance lead to a
modified Lorentz factor which encodes the time dilation between different
observer frames. This prediction triggered the search for such phenomenology
using the mass content of EAS from UHECR data Trimarelli (2022), thus using
deformations of particle lifetimes as a window to Planck scale physics.
In this letter, we have used Finsler geometry to show that a similar correction (with the opposite sign) actually emerges naturally from an approach that deforms, rather than breaks, Lorentz symmetry: the form of the modified dispersion relation is kept invariant when transforming between frames, in a way that carries over not only the relativity principle but also the so-called clock postulate (the observer’s proper time is the line element of its trajectory) from special relativity to deformed special relativity. This can be seen in the discussion that leads to Eqs.(7) and (13). Therefore, unintentionally, what has been considered in previous analyses using UHECR
data would be a deformation instead of a violation of Lorentz symmetry with
basically opposite signs on the correction. The phenomenological consequences
of these two scenarios are manifestly different. For instance, some processes
that are forbidden in the Lorentz-invariant case would be allowed in the LIV
scenario but not in DSR. These additional observables would ultimately allow
us to distinguish between these scenarios.
Besides that, we have shown that one must not use a velocity given by
$\beta_{\text{LIV}}=|p|/m\,\gamma_{\text{LIV}}$ (Eq.(14)), as done in
Trimarelli (2022), since it is incompatible with the actual velocity of a
particle in the lab frame, which must be read from the dispersion relation,
whose expression can also be naturally derived from our analysis, as can be
seen in the discussion that surrounds Eqs.(16) and (17).
We also generalized this result to a transformation between two lab frames
that move relative to each other with velocity $v$, given by Eqs.(19)-(26),
and that reduces to the previous case in the comoving limit $x=0=p$ and $E=m$.
We believe that the search for quantum gravity effects from cosmic-ray data
could benefit from the findings of this letter within the scope of the
deformation of Lorentz symmetry and a next natural step would be to consider
these findings in future analyses. As a final remark, as UHECR observatories
improve their detection techniques and capabilities of reconstruction of air
showers, the prospects for detecting the effects discussed here will become
even better, especially with future facilities Coleman _et al._ (2023).
Acknowledgments. P. H. M. thanks Coordenação de Aperfeiçoamento de Pessoal de
Nível Superior - Brazil (CAPES) - Finance Code 001 for financial support. I.
P. L. was supported by the National Council for Scientific and Technological
Development - CNPq grant 306414/2020-1 and by the grant 3197/2021, Paraíba
State Research Foundation (FAPESQ). C. P. is funded by the excellence cluster
QuantumFrontiers funded by the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) under Germany’s Excellence Strategy – EXC-2123
QuantumFrontiers – 390837967. R. A. B. is funded by the “la Caixa” Foundation
(ID 100010434) and the European Union’s Horizon 2020 research and innovation
program under the Marie Skłodowska-Curie grant agreement No
847648, fellowship code LCF/BQ/PI21/11830030. V. B. B. was supported by the
National Council for Scientific and Technological Development - CNPq grant
307211/2020-7. The authors would like to acknowledge networking support by the COST
Action CA18108.
## References
* Amelino-Camelia (2013) G. Amelino-Camelia, Living Rev. Rel. 16, 5 (2013), arXiv:0806.0339 [gr-qc] .
* Addazi _et al._ (2022) A. Addazi _et al._ , Prog. Part. Nucl. Phys. 125, 103948 (2022), arXiv:2111.05659 [hep-ph] .
* Amelino-Camelia (2002) G. Amelino-Camelia, Int. J. Mod. Phys. D 11, 35 (2002), arXiv:gr-qc/0012051 .
* Magueijo and Smolin (2003) J. Magueijo and L. Smolin, Phys. Rev. D 67, 044017 (2003), arXiv:gr-qc/0207085 .
* Majid and Ruegg (1994) S. Majid and H. Ruegg, Phys. Lett. B 334, 348 (1994), arXiv:hep-th/9405107 .
* Bruno _et al._ (2001) N. R. Bruno, G. Amelino-Camelia, and J. Kowalski-Glikman, Phys. Lett. B 522, 133 (2001), arXiv:hep-th/0107039 .
* Barcaroli _et al._ (2015) L. Barcaroli, L. K. Brunkhorst, G. Gubitosi, N. Loret, and C. Pfeifer, Phys. Rev. D 92, 084053 (2015), arXiv:1507.00922 [gr-qc] .
* Girelli _et al._ (2007) F. Girelli, S. Liberati, and L. Sindoni, Phys. Rev. D 75, 064015 (2007), arXiv:gr-qc/0611024 .
* Lobo and Pfeifer (2021) I. P. Lobo and C. Pfeifer, Phys. Rev. D 103, 106025 (2021), arXiv:2011.10069 [hep-ph] .
* Lobo _et al._ (2017) I. P. Lobo, N. Loret, and F. Nettel, Phys. Rev. D 95, 046015 (2017), arXiv:1611.04995 [gr-qc] .
* Zhu and Ma (2023a) J. Zhu and B.-Q. Ma, Eur. Phys. J. C 83, 349 (2023a), arXiv:2304.08676 [gr-qc] .
* Pfeifer (2019) C. Pfeifer, Int. J. Geom. Meth. Mod. Phys. 16, 1941004 (2019), arXiv:1903.10185 [gr-qc] .
* Albuquerque _et al._ (2023) S. Albuquerque, V. B. Bezerra, I. P. Lobo, G. Macedo, P. H. Morais, E. Rodrigues, L. C. N. Santos, and G. Varão, Physics 5, 90 (2023), arXiv:2301.09448 [gr-qc] .
* Zhu and Ma (2023b) J. Zhu and B.-Q. Ma, Symmetry 15, 978 (2023b), arXiv:2304.12767 [gr-qc] .
* Zhu and Ma (2022) J. Zhu and B.-Q. Ma, Phys. Rev. D 105, 124069 (2022), arXiv:2206.07616 [gr-qc] .
* Amelino-Camelia _et al._ (2014) G. Amelino-Camelia, L. Barcaroli, G. Gubitosi, S. Liberati, and N. Loret, Phys. Rev. D 90, 125030 (2014), arXiv:1407.8143 [gr-qc] .
* Rodrigues and Lobo (2022) E. Rodrigues and I. P. Lobo, (2022), arXiv:2208.11406 [gr-qc] .
* Lobo _et al._ (2022) I. P. Lobo, C. Pfeifer, P. H. Morais, R. A. Batista, and V. B. Bezerra, JHEP 09, 003, arXiv:2112.12172 [hep-ph] .
* Trimarelli (2022) C. Trimarelli (Pierre Auger), PoS CORFU2021, 343 (2022).
* Abreu _et al._ (2022) P. Abreu _et al._ (Pierre Auger), JCAP 01 (01), 023, arXiv:2112.06773 [astro-ph.HE] .
* ALICE collaboration (2023) ALICE collaboration, (2023), arXiv:2303.00606 [nucl-ex] .
* Lobo and Pfeifer (2023) I. P. Lobo and C. Pfeifer, (2023), arXiv:2306.07210 [hep-ph] .
* Coleman _et al._ (2023) A. Coleman _et al._ , Astropart. Phys. 149, 102819 (2023), arXiv:2205.05845 [astro-ph.HE] .
|
# Exact meromorphic solutions of Schwarzian differential equations
Liangwen Liao and Chengfa Wu
Mathematics Subject Classification (2020): Primary 34M05; Secondary 30D35.
Key words and phrases. differential equation, exact solution, Schwarzian
differential equation, elliptic function.
> Abstract: This paper studies exact meromorphic solutions of the autonomous
> Schwarzian differential equations. All transcendental meromorphic solutions
> of five canonical types (among six) of the autonomous Schwarzian
> differential equations are constructed explicitly. In particular, the
> solutions of four types are shown to be elliptic functions. Also, all
> transcendental meromorphic solutions that are locally injective or possess a
> Picard exceptional value are characterized for the remaining canonical type.
## 1 Introduction and Lemmas
The Schwarzian derivative of a meromorphic function $f$ is defined as
$S(f,z)=\left({{f^{\prime\prime}}\over{f^{\prime}}}\right)^{\prime}-{1\over
2}\left({{f^{\prime\prime}}\over{f^{\prime}}}\right)^{2}={{f^{\prime\prime\prime}}\over{f^{\prime}}}-{3\over
2}\left({{f^{\prime\prime}}\over{f^{\prime}}}\right)^{2}.$
It is well-known that $S(f,z)\equiv 0$ if and only if $f$ is a Möbius
transformation. This property reveals that the Schwarzian derivative $S(f,z)$
measures how much $f$ differs from being a Möbius transformation. Another
basic property of the Schwarzian derivative is that it is invariant under the
Möbius group in the sense that $S(f,z)=S(\gamma\circ f,z)$, where $\gamma$ can
be any Möbius transformation. The converse is also true, namely, if
$S(g,z)=S(f,z)$, where $f,g$ are meromorphic functions, then there exists a
Möbius transformation $\gamma$ such that $g=\gamma\circ f$.
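Both properties can be confirmed with a short computer-algebra check; the sketch below is ours and purely illustrative:

```python
# Illustrative sympy check (ours): S(f,z) = 0 for a Moebius map, and the
# invariance S(gamma o f, z) = S(f, z) for a sample function f.
import sympy as sp

z, a, b, c, d = sp.symbols('z a b c d')

def schwarzian(f):
    return sp.diff(f, z, 3)/sp.diff(f, z) \
        - sp.Rational(3, 2)*(sp.diff(f, z, 2)/sp.diff(f, z))**2

print(sp.simplify(schwarzian((a*z + b)/(c*z + d))))      # 0

f = z**3 + z                                             # sample function
g = (a*f + b)/(c*f + d)                                  # gamma o f
print(sp.simplify(schwarzian(g) - schwarzian(f)))        # 0
```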
The Schwarzian derivative plays an essential role in various branches of
complex analysis [5, 9, 12] including univalent functions and conformal
mappings. It has also been shown that the Schwarzian derivative has close
connections with second-order linear differential equations [8] and Lax pairs
of certain integrable partial differential equations [13]. In particular, it
appears in the differential equation
$S(f,z)^{p}=R(z,f)={{P(z,f)}\over{Q(z,f)}},$ (1)
where $p$ is a positive integer, and $R(z,f)$ is an irreducible rational
function in $f$ with meromorphic coefficients. The equation (1) is known as
the Schwarzian differential equation. Ishizaki [7] obtained some Malmquist-
type theorems of this equation and results concerning the deficiencies of its
meromorphic solutions. The growth of meromorphic solutions of the equation (1)
with polynomial coefficients has been studied by Liao and Ye [10]. A more
complicated Schwarzian type differential equation was considered by Hotzel and
Jank [6]. If we restrict ourselves to the autonomous Schwarzian differential
equation
$S(f,z)^{p}=R(f)={{P(f)}\over{Q(f)}},$ (2)
where $P,Q$ are co-prime polynomials with constant coefficients, Ishizaki [7]
obtained a Malmquist-Yosida-type result in which he gave a complete
classification of the equation (2) possessing transcendental meromorphic
solutions.
###### Theorem A.
Suppose that the autonomous Schwarzian differential equation (2) admits a
transcendental meromorphic solution. Then for some Möbius transformation
$u=(af+b)/(cf+d),ad-bc\not=0,$ (2) reduces into one of the following types
$\displaystyle S(u,z)$ $\displaystyle=$ $\displaystyle
c\frac{(u-\sigma_{1})(u-\sigma_{2})(u-\sigma_{3})(u-\sigma_{4})}{(u-\tau_{1})(u-\tau_{2})(u-\tau_{3})(u-\tau_{4})}$
(3) $\displaystyle S(u,z)^{3}$ $\displaystyle=$ $\displaystyle
c\frac{(u-\sigma_{1})^{3}(u-\sigma_{2})^{3}}{(u-\tau_{1})^{3}(u-\tau_{2})^{2}(u-\tau_{3})}$
(4) $\displaystyle S(u,z)^{3}$ $\displaystyle=$ $\displaystyle
c\frac{(u-\sigma_{1})^{3}(u-\sigma_{2})^{3}}{(u-\tau_{1})^{2}(u-\tau_{2})^{2}(u-\tau_{3})^{2}}$
(5) $\displaystyle S(u,z)^{2}$ $\displaystyle=$ $\displaystyle
c\frac{(u-\sigma_{1})^{2}(u-\sigma_{2})^{2}}{(u-\tau_{1})^{2}(u-\tau_{2})(u-\tau_{3})}$
(6) $\displaystyle S(u,z)$ $\displaystyle=$ $\displaystyle
c\frac{(u-\sigma_{1})(u-\sigma_{2})}{(u-\tau_{1})(u-\tau_{2})}$ (7)
$\displaystyle S(u,z)$ $\displaystyle=$ $\displaystyle c$ (8)
where $c\in\mathbb{C},\tau_{j}$ are distinct constants, and $\sigma_{j}$ are
constants, not necessarily distinct, $j=1,\dots,4$.
###### Remark 1.
We remark that the conclusion of Theorem A does not hold for rational
solutions of equation (2). For instance, the function
$f(z)=-\frac{3}{2(z+a)^{2}},$
where $a$ is an arbitrary constant, satisfies the equation $S(u,z)=u$, but it cannot be transformed into any of the types (3)-(8) via Möbius transformations. It
is also noted that $f$ can be viewed as a fixed point of the Schwarzian
operator and we refer the readers to the reference [15] for the details on
fixed points and $N$-cycles of the Schwarzian operator.
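The fixed-point example above can be confirmed directly (this check is ours, for illustration):

```python
# Direct check (ours) of the Remark 1 example: S(f,z) = f.
import sympy as sp

z, a = sp.symbols('z a')
f = -sp.Rational(3, 2)/(z + a)**2
S = sp.diff(f, z, 3)/sp.diff(f, z) \
    - sp.Rational(3, 2)*(sp.diff(f, z, 2)/sp.diff(f, z))**2
print(sp.simplify(S - f))  # expected output: 0
```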
The above theorem intimates that to study the autonomous Schwarzian
differential equation (2), it suffices to consider the equations (3)–(8). We
will show that all transcendental meromorphic solutions of the equations
(3)-(6) are elliptic functions and can be explicitly constructed. It is also
shown that all transcendental meromorphic solutions of the equation (2) can be
characterized by imposing some conditions on them. The precise statements of
these results are as follows.
###### Theorem 1.
If the Schwarzian differential equation (2) admits a transcendental
meromorphic solution $f$ with a Picard exceptional value
$\xi\in\hat{\mathbb{C}}$, then by some Möbius transformation
$f=\gamma_{1}(u)$, (2) reduces into either
$S(u,z)=c\frac{(u-\sqrt{2}i)(u+\sqrt{2}i)}{(u-1)(u+1)},$
and the transcendental meromorphic solutions of (2) are
$f(z)=\gamma_{1}(\sin(\alpha z+\beta))$, where $\alpha=\sqrt{2c}$ and $\beta$
is a constant; or
$S(u,z)=c,$
and all solutions of (2) are $f(z)=\gamma_{2}(e^{\alpha z})$, where
$\alpha=\sqrt{-2c}$ and $\gamma_{2}$ is any Möbius transformation.
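Both cases of Theorem 1 can be confirmed by computing the Schwarzian derivative directly; the sketch below is ours, for illustration:

```python
# Check (ours) of both cases of Theorem 1: u = sin(alpha z + beta) with
# alpha^2 = 2c solves S(u,z) = c(u^2+2)/(u^2-1), and u = exp(alpha z) with
# alpha^2 = -2c solves S(u,z) = c.
import sympy as sp

z, beta, c = sp.symbols('z beta c')

def S(f):
    return sp.diff(f, z, 3)/sp.diff(f, z) \
        - sp.Rational(3, 2)*(sp.diff(f, z, 2)/sp.diff(f, z))**2

u = sp.sin(sp.sqrt(2*c)*z + beta)
rhs = c*(u**2 + 2)/(u**2 - 1)     # = c(u - sqrt(2)i)(u + sqrt(2)i)/((u-1)(u+1))
print(sp.simplify(S(u) - rhs))    # 0

u = sp.exp(sp.sqrt(-2*c)*z)
print(sp.simplify(S(u) - c))      # 0
```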
###### Remark 2.
Theorem 1 shows that any transcendental meromorphic solution of equations
(3)-(6) must have infinitely many poles.
The result below follows immediately from Theorem 1.
###### Corollary 1.
If the Schwarzian differential equation (2) admits a transcendental entire
solution $f$, then by some linear transformation $f=L_{1}(u)$, (2) reduces
into either
$S(u,z)=c\frac{(u-\sqrt{2}i)(u+\sqrt{2}i)}{(u-1)(u+1)},$
and the entire solutions of (2) are $f(z)=L_{1}(\sin(\alpha z+\beta))$, where
$\alpha=\sqrt{2c}$ and $\beta$ is a constant; or
$S(u,z)=c,$
and all entire solutions of (2) are $f(z)=L_{2}(e^{\pm\alpha z})$, where
$\alpha=\sqrt{-2c}$ and $L_{2}$ is any linear transformation.
###### Theorem 2.
If the Schwarzian differential equation (2) admits a locally injective
transcendental meromorphic solution, then by some Möbius transformation
$f=\gamma(u)$, (2) reduces into
$S(u,z)=c,$
and all solutions of (2) are $f(z)=\gamma(e^{\alpha z})$, where
$\alpha=\sqrt{-2c}$ and $\gamma$ is any Möbius transformation.
Rewrite the equation (3) as
$\displaystyle S(u,z)$ $\displaystyle=$ $\displaystyle
c\frac{(u-\sigma_{1})(u-\sigma_{2})(u-\sigma_{3})(u-\sigma_{4})}{(u-\tau_{1})(u-\tau_{2})(u-\tau_{3})(u-\tau_{4})}$
(9) $\displaystyle=$
$\displaystyle\frac{r_{4}u^{4}+r_{3}u^{3}+r_{2}u^{2}+r_{1}u+r_{0}}{(u-\tau_{1})(u-\tau_{2})(u-\tau_{3})(u-\tau_{4})},$
and denote by
$\displaystyle e_{1}=\sum_{j=1}^{4}\tau_{j},\quad e_{2}=\sum_{1\leq j<k\leq
4}\tau_{j}\tau_{k},\quad e_{3}=\sum_{1\leq j<k<l\leq
4}\tau_{j}\tau_{k}\tau_{l},\quad e_{4}=\prod_{j=1}^{4}\tau_{j}.$ (10)
Then we can construct all transcendental meromorphic solutions to the equation
(3).
###### Theorem 3.
All transcendental meromorphic solutions of the equation (9) are elliptic
functions of the form
$\displaystyle u(z)$ $\displaystyle=$ $\displaystyle
a-\frac{b}{\wp(z-z_{0};g_{2},g_{3})-d},$ (11)
where $\wp(z;g_{2},g_{3})$ is the Weierstrass elliptic function,
$z_{0}\in\mathbb{C}$ is arbitrary, $a=\tau_{i}$ and $b,d,g_{2},g_{3}$ are
constants that depend on $c$, $\sigma_{i}$ and $\tau_{i},i=1,2,3,4$. Further,
with $e_{i}\,(i=1,2,3,4)$ defined in (10) and
$q_{i}=\prod_{\begin{subarray}{c}1\leq j\leq 4\\\
j\not=i\end{subarray}}(\tau_{i}-\tau_{j}),\quad i=1,2,3,4,$
the equation (9) admits solutions of the form (11) if and only if the
following parameter relations hold
$\displaystyle r_{0}$ $\displaystyle=$
$\displaystyle\frac{b}{2q_{i}}\left(3e_{3}^{2}-8e_{2}e_{4}\right),\quad
r_{1}=\frac{2b}{q_{i}}\left(6e_{1}e_{4}-e_{2}e_{3}\right),\quad
r_{2}=\frac{b}{q_{i}}\left(2e_{2}^{2}-3e_{1}e_{3}-24e_{4}\right),$ (12)
$\displaystyle r_{3}$ $\displaystyle=$
$\displaystyle\frac{2b}{q_{i}}\left(6e_{3}-e_{1}e_{2}\right),\quad
r_{4}=\frac{b}{2q_{i}}\left(3e_{1}^{2}-8e_{2}\right),$ (13) $\displaystyle d$
$\displaystyle=$
$\displaystyle\frac{b}{6q_{i}}\left[\sum_{\begin{subarray}{c}1\leq j<k\leq
4\\\
j,k\not=i\end{subarray}}(\tau_{j}-\tau_{k})^{2}-\sum_{\begin{subarray}{c}1\leq
j\leq 4\\\ j\not=i\end{subarray}}2(\tau_{i}-\tau_{j})^{2}\right],$ (14)
$\displaystyle g_{2}$ $\displaystyle=$
$\displaystyle\frac{4b^{2}}{3q_{i}^{2}}\left(e_{2}^{2}-3e_{1}e_{3}+12e_{4}\right),\quad
g_{3}=\frac{4b^{3}}{27q_{i}^{3}}\left(2e_{2}^{3}-9e_{1}e_{2}e_{3}-72e_{2}e_{4}+27e_{3}^{2}+27e_{1}^{2}e_{4}\right),$
(15) $\displaystyle\Delta$ $\displaystyle=$ $\displaystyle
g_{2}^{3}-27g_{3}^{2}=\frac{16b^{6}}{q_{i}^{6}}\prod_{1\leq j<k\leq
4}(\tau_{j}-\tau_{k})^{2}\not=0.$ (16)
###### Remark 3.
Theorem 3 indicates that the equation (9) has transcendental meromorphic
solutions only if the parameters $c,\sigma_{i},\tau_{i},i=1,2,3,4$ satisfy the
conditions (12) and (13). In addition, the solution (11) has just one free
parameter, which implies that the general solution of equation (9) should have
more complicated singularities other than poles.
In view of the invariance of Schwarzian derivatives under the Möbius group, we
may compose the solution $u$ of equations (4)-(6) with a Möbius transformation
such that $\tau_{1},\tau_{2}$ and $\tau_{3}$ can be any distinct desired
numbers, and this allows us to derive all transcendental meromorphic solutions
to the equations (4)-(6) explicitly.
###### Theorem 4.
Let $\tau_{1}=4,\tau_{2}=-3,\tau_{3}=0$. Then all transcendental meromorphic
solutions to the equation (4) are elliptic functions. Moreover, these
solutions exist if and only if
$\displaystyle\\{\sigma_{1},\sigma_{2}\\}=\left\\{\sqrt{5}i,-\sqrt{5}i\right\\},$
and in this case, all the transcendental meromorphic solutions to the equation
(4) are given by
$u(z)=-\frac{3c}{c-74088\wp\left(z-z_{0};g_{2},g_{3}\right)^{3}},$
where $\wp(z;g_{2},g_{3})$ is the Weierstrass elliptic function with
$g_{2}=0,g_{3}=c/10584$, and $z_{0}\in\mathbb{C}$ is arbitrary.
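Since the solution depends on $z$ only through $\wp(z-z_{0};0,g_{3})$, Theorem 4 can be verified by a purely rational computation, reducing derivatives via $\wp^{\prime 2}=4\wp^{3}-g_{3}$ and $\wp^{\prime\prime}=6\wp^{2}$. The following sketch is ours and is not part of the proof below:

```python
# Rational verification (ours) of the Theorem 4 solution. The solution depends
# on z only through P = wp(z - z0; 0, g3), so derivatives reduce via
# (wp')^2 = 4P^3 - g3 and wp'' = 6P^2, leaving rational functions of P.
import sympy as sp

P, c = sp.symbols('P c')
g3 = c/10584
Q2 = 4*P**3 - g3                                 # (wp')^2 with g2 = 0

u = -3*c/(c - 74088*P**3)
uP, uPP, uPPP = (sp.diff(u, P, k) for k in (1, 2, 3))

# chain rule: u'   = uP * wp'
#             u''  = uPP*Q2 + 6P^2*uP
#             u''' = wp' * (uPPP*Q2 + 18P^2*uPP + 12P*uP)
S = (uPPP*Q2 + 18*P**2*uPP + 12*P*uP)/uP \
    - sp.Rational(3, 2)*(uPP*Q2 + 6*P**2*uP)**2/(uP**2*Q2)

rhs = c*(u**2 + 5)**3/((u - 4)**3*(u + 3)**2*u)  # Eq. (4) with the stated data
print(sp.simplify(S**3 - rhs))                   # expected output: 0
```

The analogous reduction, allowing rational dependence on both $\wp$ and $\wp^{\prime}$, applies to the solutions of Theorems 5 and 6.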
###### Theorem 5.
Let $\\{\tau_{1},\tau_{2},\tau_{3}\\}=\\{0,1,-1\\}$. Then all transcendental
meromorphic solutions to the equation (5) are elliptic functions. Moreover,
these solutions exist if and only if
$\displaystyle\\{\sigma_{1},\sigma_{2}\\}=\left\\{\frac{i}{\sqrt{3}},-\frac{i}{\sqrt{3}}\right\\},$
and in this case, all the transcendental meromorphic solutions to the equation
(5) are given by
$u(z)=\frac{9\left[9\wp\left(z-z_{0};g_{2},g_{3}\right)+L^{2}\right]\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)}{2L\left[81\wp\left(z-z_{0};g_{2},g_{3}\right)^{2}-9L^{2}\wp\left(z-z_{0};g_{2},g_{3}\right)+L^{4}\right]}$
where $L^{6}=-27c/64$, $\wp(z;g_{2},g_{3})$ is the Weierstrass elliptic
function with $g_{2}=0,g_{3}=c/432$, and $z_{0}\in\mathbb{C}$ is arbitrary.
###### Theorem 6.
Let $\tau_{1}=0,\tau_{2}=1,\tau_{3}=-1$. Then all transcendental meromorphic
solutions to the equation (6) are elliptic functions. Moreover, these
solutions exist if and only if
$\displaystyle\\{\sigma_{1},\sigma_{2}\\}=\left\\{\frac{i}{2},-\frac{i}{2}\right\\},$
and in this case, all the transcendental meromorphic solutions to the equation
(6) are given by
$u(z)=-\frac{1}{2L}\frac{\left(8\wp\left(z-z_{0};g_{2},g_{3}\right)+L^{2}\right)^{2}\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)}{\wp\left(z-z_{0};g_{2},g_{3}\right)\left(64\wp\left(z-z_{0};g_{2},g_{3}\right)^{2}+L^{4}\right)},$
where $c=9L^{4}/4$, $\wp(z;g_{2},g_{3})$ is the Weierstrass elliptic function
with $g_{2}=-c/36,g_{3}=0$, and $z_{0}\in\mathbb{C}$ is arbitrary.
###### Remark 4.
It follows from Theorems 1-6 that all transcendental meromorphic solutions of
the canonical Schwarzian differential equations (3)-(8) have been derived,
except the solutions of equation (7) that have no Picard exceptional values.
Although we are not able to prove that any transcendental meromorphic solution
of equation (7) must have Picard exceptional value(s), we conjecture this is
true.
## 2 Preliminaries
The important tools in our proofs include the Wiman-Valiron theory and Nevanlinna theory. Let $f$ be a transcendental entire function, and write
$f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}.$
As usual, for $r>0$, we denote the maximum term by $\mu(r,f)$, the central
index by $\nu(r,f)$, and the maximum modulus by $M(r,f)$, i.e.,
$\mu(r,f)=\max_{n\geq 0}|a_{n}|r^{n},\ \ \nu(r,f)=\sup\\{n:|a_{n}|r^{n}=\mu(r,f)\\},\ \ M(r,f)=\max_{|z|=r}|f(z)|.$
###### Lemma 1 (Wiman-Valiron Theorem [1]).
There exists a set $F\subset[1,+\infty)$ satisfying
$\int_{F}\frac{dt}{t}<\infty$
with the following property: if $(z_{k})$ is a sequence in $\mathbb{C}$ with
$|f(z_{k})|=M(|z_{k}|,f),|z_{k}|\not\in F$ and $z_{k}\to\infty$, and if
$\nu_{k}=\nu(|z_{k}|,f),$ then
$\frac{f\left(z_{k}+\dfrac{z_{k}}{\nu_{k}}z\right)}{f(z_{k})}\to e^{z}$
as $k\to\infty.$
###### Lemma 2 ([8]).
Let $f$ be a transcendental entire function, $0<\delta<{1\over 4}$ and $|z|=r$
such that
$|f(z)|>M(r,f){\nu(r,f)}^{-{1\over 4}+\delta}$
holds. Then there exists a set $F\subset(0,+\infty)$ of finite logarithmic
measure, i.e., $\int_{F}dt/t<+\infty$ such that
$f^{(m)}(z)={\left({{\nu(r,f)}\over z}\right)}^{m}(1+o(1))f(z)$
holds for all $m\geq 0$ and all $r\not\in F$.
The Schwarzian derivative has a fundamental relation with second-order linear
ordinary differential equations.
###### Lemma 3.
[8, p. 110] Let $A(z)$ be analytic in a simply connected domain $\Omega$.
Then, for any two linearly independent solutions $f_{1},f_{2}$ of
$f^{\prime\prime}(z)+A(z)f(z)=0,$ (17)
their quotient $g=f_{1}/f_{2}$ is locally injective and satisfies the
differential equation
$S(g,z)=2A(z).$ (18)
Conversely, let $g$ be a locally injective meromorphic function in $\Omega$
and define $A(z)$ by (18). Then $A(z)$ is analytic in $\Omega$ and the
differential equation (17) admits two linearly independent solutions
$f_{1},f_{2}$ such that $g=f_{1}/f_{2}$.
###### Remark 5.
The lemma above has crucial applications in differential equations. In
particular, it has been used by Bergweiler and Eremenko [2] to solve the Bank-
Laine conjecture, which concerns the zero distribution of solutions of
equation (17) where $A$ is an entire function of finite order.
Now we introduce some terminologies in Nevanlinna theory [8]. Let $f$ be a
meromorphic function on $\mathbb{C}$ and $n(r,f)$ denote the number of poles
of $f$ in the disk $\mathbb{D}(r)=\\{z\in\mathbb{C}||z|<r\\}$, counting
multiplicity. The Nevanlinna characteristic function of $f$ is defined as
$T(r,f)=m(r,f)+N(r,f),$
where
$\displaystyle m(r,f)$ $\displaystyle=$
$\displaystyle\int_{0}^{2\pi}\log^{+}|f\left(re^{i\theta}\right)|\frac{d\theta}{2\pi},$
$\displaystyle N(r,f)$ $\displaystyle=$ $\displaystyle n(0,f)\log
r+\int_{0}^{r}\left[n(t,f)-n(0,f)\right]\frac{dt}{t},$
with $\log^{+}x=\max\\{0,\log x\\}$. We note that $m(r,f)$ and $N(r,f)$ are
called the proximity function and integrated counting function, respectively.
Next, we define the order of $f$ by
$\rho(f)=\mathop{\overline{\rm lim}}_{r\rightarrow\infty}\frac{\log
T(r,f)}{\log r}.$
The following result of Liao and Ye [10] says that the order of meromorphic
solutions of equation (2) is bounded from above by $2$.
###### Lemma 4.
Let $f$ be a meromorphic solution of the autonomous Schwarzian differential
equation (2), then $\rho(f)\leq 2$.
## 3 Proof of main results
We first recall the definition of totally ramified values: we call a point
$a\in\overline{\mathbb{C}}$ a totally ramified value of a meromorphic function
$f$ if all $a$-points of $f$ are multiple. According to a classical result of
Nevanlinna, a non-constant function meromorphic in the plane can have at most
four totally ramified values while a non-constant entire function can have at
most two finite totally ramified values. We also need the following results.
###### Lemma 5 ([8]).
Let $f(z)$ be a nonconstant meromorphic function. Then
$m\left(r,\frac{f^{\prime}}{f}\right)=O(\log r),$
if $f$ is of finite order, and
$m\left(r,\frac{f^{\prime}}{f}\right)=O(\log(rT(r,f))),$
possibly outside a set $E$ of $r$ with finite linear measure, if $f(z)$ is of
infinite order.
###### Lemma 6 ([14]).
If the differential equation
$w^{2}+R(z)(w^{\prime})^{2}=Q(z),$ (19)
where $R,Q$ are nonzero rational functions, admits a transcendental
meromorphic solution $f$, then $Q\equiv A$ is a constant, the multiplicity of
zeros of $R(z)$ is no greater than 2 and $f(z)=\sqrt{A}\cos\alpha(z)$, where
$\alpha(z)$ is a primitive of $1/\sqrt{R(z)}$ such that
$\sqrt{A}\cos\alpha(z)$ is a transcendental meromorphic function.
### 3.1 Proof of Theorem 1
Let $f$ be a transcendental meromorphic solution with a Picard exceptional
value of the equation (2). It follows from Theorem A that by some Möbius
transformation
$u=\dfrac{af+b}{cf+d},\quad ad-bc\not=0,$
$u$ satisfies one of the equations (3)-(8).
If $u$ satisfies the equation (3), then $u$ has four totally ramified values
$\tau_{1},\tau_{2},\tau_{3},\tau_{4}$. This is impossible since $u$ has a
Picard exceptional value. If $u$ satisfies the equation (4), then $u$ has
three totally ramified values $\tau_{1},\tau_{2},\tau_{3}$. Thus, the Picard
exceptional value of $u$ must be one of them. Without loss of generality, we
may assume $\tau_{3}$ is a Picard exceptional value of $u$. Let
$v=\dfrac{1}{u-\tau_{3}},$
then $v$ has at most finitely many poles and satisfies the following
differential equation
$S(v,z)^{3}=c^{\prime}\frac{(v-\sigma^{\prime}_{1})^{3}(v-\sigma^{\prime}_{2})^{3}}{(v-\tau^{\prime}_{1})^{3}(v-\tau^{\prime}_{2})^{2}}$
(20)
Assume $\zeta_{1},\cdots,\zeta_{n}$ are the poles (counting multiplicities) of
$v$, then $v(z)=g(z)/P(z)$, where $g(z)$ is a transcendental entire function
and $P(z)=(z-\zeta_{1})\cdots(z-\zeta_{n}).$ We choose $z_{k}\to\infty$ such
that $|z_{k}|\not\in F$ and $|g(z_{k})|=M(|z_{k}|,g).$ Let
$h_{k}(z)=\frac{v(z_{k}+\rho_{k}z)}{v(z_{k})},$
where $\displaystyle\rho_{k}=\frac{z_{k}}{\nu_{k}},\nu_{k}=\nu(|z_{k}|,g)$,
then by Lemma 1, we have
$\lim_{k\to\infty}h_{k}(z)=\lim_{k\to\infty}\frac{v(z_{k}+\rho_{k}z)}{v(z_{k})}=\lim_{k\to\infty}\frac{g(z_{k}+\rho_{k}z)}{g(z_{k})}\frac{P(z_{k})}{P(z_{k}+\rho_{k}z)}=e^{z}.$
Thus
$\lim_{k\to\infty}\frac{\rho_{k}v^{\prime}(z_{k}+\rho_{k}z)}{v(z_{k})}=\lim_{k\to\infty}h_{k}^{\prime}(z)=e^{z},$
and
$\lim_{k\to\infty}\frac{\rho_{k}^{2}v^{\prime\prime}(z_{k}+\rho_{k}z)}{v(z_{k})}=\lim_{k\to\infty}h_{k}^{\prime\prime}(z)=e^{z}.$
It follows from (20) that
$\frac{1}{v(z_{k})}\left(\frac{1}{\rho_{k}}\right)^{6}\left(\frac{h_{k}^{\prime\prime\prime}(z)}{h_{k}^{\prime}(z)}-\frac{3}{2}\left(\frac{h_{k}^{\prime\prime}(z)}{h_{k}^{\prime}(z)}\right)^{2}\right)^{3}=c^{\prime}\frac{(h_{k}(z)-\sigma_{1}^{\prime}/v(z_{k}))^{3}(h_{k}(z)-\sigma_{2}^{\prime}/v(z_{k}))^{3}}{(h_{k}(z)-\tau_{1}^{\prime}/v(z_{k}))^{3}(h_{k}(z)-\tau_{2}^{\prime}/v(z_{k}))^{2}}.$
(21)
Noting the selection of $z_{k}$, we have
$\displaystyle\lim_{k\to\infty}\frac{\nu_{k}^{M}}{v(z_{k})}=0$
for any positive number $M$. Thus, the left side of the equation (21) tends to
zero while the right side of equation (21) tends to $c^{\prime}e^{z}$ as
$k\to\infty$, which is a contradiction. Thus $u$ cannot satisfy (4). With
similar arguments, we can prove that $u$ satisfies neither (5) nor (6).
If $u$ satisfies the equation (7), then $u$ has two totally ramified values
$\tau_{1},\tau_{2}$. Then we distinguish two cases.
Case 1: one of $\tau_{1}$ and $\tau_{2}$ is the Picard exceptional value of
$u$, by the same arguments as above, we get a contradiction.
Case 2: both of $\tau_{1}$ and $\tau_{2}$ are not the Picard exceptional value
of $u$. Without loss of generality, we may assume the Picard exceptional value
of $u$ is infinity. Otherwise, we may consider a composition of a Möbius
transformation and the function $u$. Thus we can express $u$ as
$u(z)=\frac{g(z)}{P(z)},$
where $g(z)$ is a transcendental entire function and $P(z)$ is a polynomial.
For any $r>0,$ let
$|g(z_{0})|=M(g,r),\quad|z_{0}|=r.$
Then, by Lemma 2, there exists a set $F\subseteq(0,+\infty)$ with a finite
logarithmic measure such that
$\frac{u^{\prime}(z_{0})}{u(z_{0})}=\frac{g^{\prime}(z_{0})}{g(z_{0})}-\frac{P^{\prime}(z_{0})}{P(z_{0})}=\frac{\nu(g,r)}{z_{0}}(1+o(1)),$
$\frac{u^{\prime\prime}(z_{0})}{u(z_{0})}=\frac{g^{\prime\prime}(z_{0})}{g(z_{0})}-\frac{P^{\prime\prime}(z_{0})}{P(z_{0})}-2\frac{u^{\prime}(z_{0})}{u(z_{0})}\frac{P^{\prime}(z_{0})}{P(z_{0})}=\left(\frac{\nu(g,r)}{z_{0}}\right)^{2}(1+o(1)),$
and
$\frac{u^{\prime\prime\prime}(z_{0})}{u(z_{0})}=\frac{g^{\prime\prime\prime}(z_{0})}{g(z_{0})}-\frac{P^{\prime\prime\prime}(z_{0})}{P(z_{0})}-3\frac{u^{\prime\prime}(z_{0})}{u(z_{0})}\frac{P^{\prime}(z_{0})}{P(z_{0})}-3\frac{u^{\prime}(z_{0})}{u(z_{0})}\frac{P^{\prime\prime}(z_{0})}{P(z_{0})}=\left(\frac{\nu(g,r)}{z_{0}}\right)^{3}(1+o(1)),$
for all sufficiently large $r\not\in F.$ Thus the equation (7) becomes
$\left(\frac{\nu(g,r)}{z_{0}}\right)^{2}(1+o(1))-\frac{3}{2}\left(\frac{\nu(g,r)}{z_{0}}(1+o(1))\right)^{2}=c^{\prime}(1+o(1)).$
This leads to
$\nu(r,g)\sim Ar\text{ and }\rho(g)=1.$
Hence $\rho(u)=1.$
By computing the Laurent expansions on both sides of (7), we may obtain
* •
$u^{\prime}(z)=0$ if and only if $u(z)=\tau_{1}$ or $u(z)=\tau_{2}$.
* •
all the zeros of $u^{\prime}$ are simple.
Without loss of generality, we may assume $\tau_{1}=1,\tau_{2}=-1.$ Thus,
$\displaystyle\frac{(u^{\prime})^{2}}{u^{2}-1}$
is a meromorphic function having only finitely many poles and no zeros. Noting
$\rho(u)=1$, we have
$\displaystyle Q^{2}(z)\frac{(u^{\prime})^{2}}{u^{2}-1}=e^{h(z)},$
where $Q(z)$ is a nonzero polynomial with simple zeros and $h(z)$ is an entire
function. Then, by Lemma 5, we have
$\displaystyle T(r,e^{h})$ $\displaystyle=$ $\displaystyle m(r,e^{h})$
$\displaystyle\leq$ $\displaystyle
2m\left(r,Q\right)+m\left(r,\frac{u^{\prime}}{u-1}\right)+m\left(r,\frac{u^{\prime}}{u+1}\right)$
$\displaystyle=$ $\displaystyle O(\log r).$
This implies $e^{h}$ is a polynomial and hence $h$ is a constant. Without loss of generality, we may assume $e^{h}=1$; then $u$ satisfies the differential equation
$u^{2}-Q(z)^{2}(u^{\prime})^{2}=1.$ (22)
If $\deg Q\geq 1$, then by the equation (22) and Lemma 2, we have
$\nu(r,u)\sim Ar^{1-\frac{\deg Q}{2}},$
where $A$ is a positive number, but this contradicts $\rho(u)=1.$ Hence
$Q(z)$ is a constant. It is easy to see that the solutions of (22) are of the
form
$u=\sin(\alpha z+\beta),$
where $\alpha,\beta$ are constants with $Q^{2}\alpha^{2}=-1.$ Substituting
$u=\sin(\alpha z+\beta)$ into (7) and noting $\tau_{1}=1,\tau_{2}=-1$, we
obtain that
$\alpha=\sqrt{2c},\quad\sigma_{1}=\sqrt{2}i,\quad\sigma_{2}=-\sqrt{2}i.$
Thus we get the conclusion.
Finally, if $u$ satisfies equation (8), then $R(f)$ must be a constant, say $A$,
and hence $c^{p}=A$. It is easy to check that $u(z)=e^{\alpha z}$ is a
solution of the equation (8), where $\alpha=\sqrt{-2c}$. Then it follows from
the invariance property of the Schwarzian derivative under Möbius
transformations that all the solutions of (8) are given by $u=\gamma(e^{\alpha
z})$, where $\gamma$ is a Möbius transformation and $\alpha=\sqrt{-2c}.$
Hence, in this case, all the solutions of the equation (2) are
$f(z)=\gamma(e^{\alpha z})$, where $\gamma$ is a Möbius transformation and
$\alpha=\sqrt{-2}A^{\frac{1}{2p}}.$
### 3.2 Proof of Theorem 2
Suppose $f$ is a locally injective transcendental meromorphic solution of the
equation (2). According to Theorem A, there exists a Möbius transformation
$\gamma_{1}$ such that $u=\gamma_{1}(f)$ is also a locally injective
transcendental meromorphic function and satisfies one of the equations
(3)-(8). Then it follows from Lemma 3 that $S(u,z)$ is entire. This implies
$u$ cannot satisfy any of the equations (3)-(7); otherwise, $u$ would have at least one Picard exceptional value, and Theorem 1 would indicate that $u=\gamma_{2}(\sin(\alpha z+\beta))$, where $\alpha,\beta$ are constants and $\gamma_{2}$ is a Möbius transformation. Nevertheless, this contradicts the fact that $u$ is locally injective. As a consequence, $u$ can only satisfy
equation (8) and then the conclusion follows immediately from Theorem 1.
### 3.3 Proof of Theorem 3
Suppose $u$ is a transcendental meromorphic solution to the equation (9), then
Theorem 1 shows that $u$ must have infinitely many poles. By comparing the
Laurent expansions on both sides of the equation (9), we deduce that all the
poles of $u$ are simple and all the poles (if they exist) of $S(u,z)$ come
from the zeros of $u^{\prime}$. Since all the poles of $S(u,z)$ are double, it
follows that all zeros of $u^{\prime}$ should be simple, and at any zero of
$u^{\prime}$, $u(z)$ assumes one of the $\tau_{i},i=1,2,3,4$. This means any
zero of $u-\tau_{i}$ must be double. Therefore,
$G(z)=\frac{u^{\prime 2}}{(u-\tau_{1})(u-\tau_{2})(u-\tau_{3})(u-\tau_{4})}$
(23)
is a nonvanishing entire function, and there exists an entire function $g(z)$
such that $G=e^{g}$. According to Lemma 4, $u$ has finite order of growth.
Then we have
$\displaystyle T(r,e^{g})$ $\displaystyle=$ $\displaystyle m(r,e^{g})$
$\displaystyle\leq$ $\displaystyle
m\left(r,\frac{u^{\prime}}{(u-\tau_{1})(u-\tau_{2})}\right)+m\left(r,\frac{u^{\prime}}{(u-\tau_{3})(u-\tau_{4})}\right)$
$\displaystyle\leq$
$\displaystyle\sum_{i=1}^{4}m\left(r,\frac{u^{\prime}}{u-\tau_{i}}\right)+O(1)$
$\displaystyle=$ $\displaystyle O(\log r),$
where the last equality follows from Lemma 5. This implies $e^{g}$ is a
polynomial and hence $g=C$ is a constant. As a consequence, $u$ satisfies the
differential equation
$u^{\prime 2}=K(u-\tau_{1})(u-\tau_{2})(u-\tau_{3})(u-\tau_{4}),\quad K=e^{C}$
whose general solution is given by [3]
$\displaystyle u(z)$ $\displaystyle=$ $\displaystyle
K^{-1/2}\left(A-\frac{\wp^{\prime}(w;g_{2},g_{3})}{\wp(z-z_{0};g_{2},g_{3})-\wp(w;g_{2},g_{3})}\right)$
(24) $\displaystyle=$ $\displaystyle a-\frac{b}{\wp(z-z_{0};g_{2},g_{3})-d}$
where $\wp(z;g_{2},g_{3})$ is the Weierstrass elliptic function,
$z_{0}\in\mathbb{C}$ is arbitrary and $a,b,d,g_{2},g_{3}$ are constants that
depend on $K$ and $\tau_{i},i=1,2,3,4$. Finally, by substituting (24) into (9)
and applying the differential equation satisfied by $\wp(z;g_{2},g_{3})$
$\wp^{\prime 2}=4\wp^{3}-g_{2}\wp-g_{3},$
where $\Delta=g_{2}^{3}-27g_{3}^{2}\not=0$, it can be computed that $a$ should
be equal to one of the $\tau_{i},i=1,2,3,4$, and other parameters should
satisfy the relations (12)-(15).
### 3.4 Proof of Theorem 4
Let $u$ be a transcendental meromorphic solution to the equation (4), then
Theorem 1 implies that $u$ must have infinitely many poles. With similar
arguments as in Theorem 3, we find that
* •
all the poles of $u$ are simple;
* •
$u^{\prime}(z)=0$ if and only if $u(z)=\tau_{i}$ for some $i\in\\{1,2,3\\}$;
* •
if $u(z)=\tau_{1}$, then $z$ is a simple zero of $u^{\prime}$;
* •
if $u(z)=\tau_{2}$, then $z$ is a double zero of $u^{\prime}$;
* •
if $u(z)=\tau_{3}$, then $z$ is a zero of $u^{\prime}$ of order $5$.
It follows that
$G(z)=\frac{u^{\prime 6}}{(u-\tau_{1})^{3}(u-\tau_{2})^{4}(u-\tau_{3})^{5}}$
(25)
is a nonvanishing entire function, and hence, there exists an entire function
$g(z)$ such that $G=e^{g}$. According to Lemma 4, $u$ has finite order of
growth. Then we have
$\displaystyle T(r,e^{g})$ $\displaystyle=$ $\displaystyle m(r,e^{g})$
$\displaystyle\leq$ $\displaystyle m\left(r,\frac{u^{\prime
3}}{(u-\tau_{1})^{3}(u-\tau_{2})^{3}(u-\tau_{3})^{3}}\right)+m\left(r,\frac{u^{\prime}}{u-\tau_{2}}\right)+m\left(r,\frac{u^{\prime
2}}{(u-\tau_{3})^{2}}\right)$ $\displaystyle\leq$ $\displaystyle
3m\left(r,\frac{u^{\prime}}{u-\tau_{1}}\right)+4m\left(r,\frac{u^{\prime}}{u-\tau_{2}}\right)+5m\left(r,\frac{u^{\prime}}{u-\tau_{3}}\right)+O(1)$
$\displaystyle=$ $\displaystyle O(\log r).$
This indicates that $g=C$ is a constant and hence $u$ satisfies the
differential equation
$u^{\prime 6}=K(u-\tau_{1})^{3}(u-\tau_{2})^{4}(u-\tau_{3})^{5},\quad
K=e^{C}.$ (26)
Since the elliptic curve parametrized by $u$ and $u^{\prime}$ has genus one,
the general solution of the above equation should be elliptic functions. Let
$u(z)=\frac{1}{v(z)}+\tau_{3},$ (27)
then the equation (26) reduces to
$v^{\prime 6}=K[(\tau_{1}-\tau_{3})v-1]^{3}[(\tau_{2}-\tau_{3})v-1]^{4}.$ (28)
By using the singularity methods (see [4, 11] and the references therein), we
find that the general solution to (28) reads
$v(z)=h-\frac{23328\left[6\wp(z-z_{0};g_{2},g_{3})^{3}+\wp^{\prime}(z-z_{0};g_{2},g_{3})^{2}\right]}{5K(\tau_{1}-\tau_{3})^{3}(\tau_{2}-\tau_{3})^{4}},$
(29)
where $\wp(z;g_{2},g_{3})$ is the Weierstrass elliptic function with
$g_{2}=0$, $z_{0}\in\mathbb{C}$ is arbitrary and $h,g_{3}$ are constants
depending on $K$ and $\tau_{i},i=1,2,3$. Finally, with
$\tau_{1}=4,\tau_{2}=-3,\tau_{3}=0$, substituting (27) and (29) into (4)
yields the solution of equation (4)
$u(z)=-\frac{3c}{c-74088\wp\left(z-z_{0};0,g_{3}\right)^{3}},$
where $g_{3}=c/10584$ and $z_{0}\in\mathbb{C}$ is arbitrary, provided that
$\displaystyle\\{\sigma_{1},\sigma_{2}\\}=\left\\{\sqrt{5}i,-\sqrt{5}i\right\\}.$
This completes the proof.
### 3.5 Proof of Theorem 5
Suppose $u$ is a transcendental meromorphic solution to the equation (5), then
Theorem 1 implies that $u$ must have infinitely many poles. Using similar
arguments as in Theorem 3, we can show that
* •
all the poles of $u$ are simple;
* •
$u^{\prime}(z)=0$ if and only if $u(z)=\tau_{i}$ for some $i\in\\{1,2,3\\}$;
* •
all the zeros of $u^{\prime}$ are double.
It follows that
$G(z)=\frac{u^{\prime 3}}{(u-\tau_{1})^{2}(u-\tau_{2})^{2}(u-\tau_{3})^{2}}$
(30)
is a nonvanishing entire function, and hence, there exists an entire function
$g(z)$ such that $G=e^{g}$. Since the order of $u$ is finite, we have
$\displaystyle T(r,e^{g})$ $\displaystyle=$ $\displaystyle m(r,e^{g})$
$\displaystyle\leq$ $\displaystyle
m\left(r,\frac{u^{\prime}}{u-\tau_{1}}\right)+m\left(r,\frac{u^{\prime}}{(u-\tau_{2})(u-\tau_{3})}\right)+m\left(r,\frac{u^{\prime}}{(u-\tau_{1})(u-\tau_{2})(u-\tau_{3})}\right)$
$\displaystyle\leq$ $\displaystyle
2\left[\sum_{i=1}^{3}m\left(r,\frac{u^{\prime}}{u-\tau_{i}}\right)\right]+O(1)$
$\displaystyle=$ $\displaystyle O(\log r).$
This implies that $g=C$ is a constant and hence $u$ satisfies the differential
equation
$u^{\prime 3}=K(u-\tau_{1})^{2}(u-\tau_{2})^{2}(u-\tau_{3})^{2},\quad
K=e^{C}.$ (31)
Since the elliptic curve parametrized by $u$ and $u^{\prime}$ has genus one,
the general solution of the above equation should be elliptic functions. By
using the singularity methods, we find that the general solution to (31) can
be expressed as
$\displaystyle u(z)$ $\displaystyle=$
$\displaystyle\frac{1}{L}\left(\frac{\left(1+i\sqrt{3}\right)\left(\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)-A_{1}\right)}{4\left(\wp\left(z-z_{0};g_{2},g_{3}\right)-B_{1}\right)}+\frac{\left(1-i\sqrt{3}\right)\left(\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)-A_{2}\right)}{4\left(\wp\left(z-z_{0};g_{2},g_{3}\right)-B_{2}\right)}\right)$
(32) $\displaystyle+\frac{1}{3}(\tau_{1}+\tau_{2}+\tau_{3})$
where $L^{3}=K$, $\wp(z;g_{2},g_{3})$ is the Weierstrass elliptic function,
$z_{0}\in\mathbb{C}$ is arbitrary and $A_{1},A_{2},B_{1},B_{2},g_{2},g_{3}$
are constants depending on $K$ and $\tau_{i},i=1,2,3$. Finally, with
$\\{\tau_{1},\tau_{2},\tau_{3}\\}=\\{0,1,-1\\}$, substituting (32) into (5)
yields that
$\displaystyle\\{\sigma_{1},\sigma_{2}\\}=\left\\{\frac{i}{\sqrt{3}},-\frac{i}{\sqrt{3}}\right\\},\quad
A_{1}=A_{2}=g_{2}=0,\quad g_{3}=\frac{c}{432},$ $\displaystyle
B_{1}=\frac{1}{18}\left(1-i\sqrt{3}\right)L^{2},\quad
B_{2}=\frac{1}{18}\left(1+i\sqrt{3}\right)L^{2},\quad L^{6}=-\frac{27}{64}c.$
In this case, the equation (5) reduces to
$\displaystyle S(u,z)^{3}=c\frac{(u^{2}+1/3)^{3}}{u^{2}(u^{2}-1)^{2}},$
and the solution (32) becomes
$u(z)=\frac{9\left[9\wp\left(z-z_{0};g_{2},g_{3}\right)+L^{2}\right]\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)}{2L\left[81\wp\left(z-z_{0};g_{2},g_{3}\right)^{2}-9L^{2}\wp\left(z-z_{0};g_{2},g_{3}\right)+L^{4}\right]}$
where $L^{6}=-27c/64,g_{2}=0,g_{3}=c/432$.
### 3.6 Proof of Theorem 6
Let $u$ be a transcendental meromorphic solution to equation (6). Then Theorem 1 implies that $u$ must have infinitely many poles. With arguments similar to those in the proof of Theorem 3, we find that
* •
all the poles of $u$ are simple;
* •
$u^{\prime}(z)=0$ if and only if $u(z)=\tau_{i}$ for some $i\in\\{1,2,3\\}$;
* •
if $u(z)=\tau_{1}$, then $z$ is a simple zero of $u^{\prime}$;
* •
if $u(z)=\tau_{j}$, $j=2,3$, then $z$ is a triple zero of $u^{\prime}$.
It follows that
$G(z)=\frac{u^{\prime 4}}{(u-\tau_{1})^{2}(u-\tau_{2})^{3}(u-\tau_{3})^{3}}$
(33)
is a nonvanishing entire function, and hence, there exists an entire function
$g(z)$ such that $G=e^{g}$. As the order of $u$ is finite, we have
$\displaystyle T(r,e^{g})$ $\displaystyle=$ $\displaystyle m(r,e^{g})$
$\displaystyle\leq$ $\displaystyle m\left(r,\frac{u^{\prime
2}}{(u-\tau_{1})^{2}(u-\tau_{2})^{2}(u-\tau_{3})^{2}}\right)+m\left(r,\frac{u^{\prime}}{u-\tau_{2}}\right)+m\left(r,\frac{u^{\prime}}{u-\tau_{3}}\right)$
$\displaystyle\leq$ $\displaystyle
2m\left(r,\frac{u^{\prime}}{u-\tau_{1}}\right)+3m\left(r,\frac{u^{\prime}}{u-\tau_{2}}\right)+3m\left(r,\frac{u^{\prime}}{u-\tau_{3}}\right)+O(1)$
$\displaystyle=$ $\displaystyle O(\log r).$
This indicates that $g=C$ is a constant and hence $u$ satisfies the
differential equation
$u^{\prime 4}=K(u-\tau_{1})^{2}(u-\tau_{2})^{3}(u-\tau_{3})^{3},\quad
K=e^{C}.$ (34)
Since the elliptic curve parametrized by $u$ and $u^{\prime}$ has genus one, the general solution of the above equation must consist of elliptic functions. Singularity methods then indicate that the general solution to (34) can be expressed as
$u(z)=h+\frac{1}{2L}\,\frac{\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)-A_{1}}{\wp\left(z-z_{0};g_{2},g_{3}\right)-B_{1}}+\frac{i}{2L}\left(\frac{\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)-A_{2}}{\wp\left(z-z_{0};g_{2},g_{3}\right)-B_{2}}-\frac{\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)-A_{3}}{\wp\left(z-z_{0};g_{2},g_{3}\right)-B_{3}}\right),$ (35)
where $L^{4}=K$, $\wp(z;g_{2},g_{3})$ is the Weierstrass elliptic function,
$z_{0}\in\mathbb{C}$ is arbitrary and $A_{j},B_{j},g_{2},g_{3}$ are constants
depending on $K$ and $\tau_{j},j=1,2,3$. Finally, with
$\tau_{1}=0,\tau_{2}=1,\tau_{3}=-1$, substituting (35) into (6) yields that
$\displaystyle\\{\sigma_{1},\sigma_{2}\\}=\left\\{\frac{i}{2},-\frac{i}{2}\right\\},\quad
g_{2}=-\frac{c}{36},\quad B_{2}=-B_{3}=\frac{L^{2}}{8}i,$ $\displaystyle
c=\frac{9}{4}L^{4},\quad A_{1}=A_{2}=A_{3}=B_{1}=g_{3}=h=0.$
In this case, equation (6) reduces to
$\displaystyle S(u,z)^{2}=c\frac{(u^{2}+1/4)^{2}}{u^{2}(u^{2}-1)},$
and the solution (35) becomes
$u(z)=-\frac{1}{2L}\frac{\left(8\wp\left(z-z_{0};g_{2},g_{3}\right)+L^{2}\right)^{2}\wp^{\prime}\left(z-z_{0};g_{2},g_{3}\right)}{\wp\left(z-z_{0};g_{2},g_{3}\right)\left(64\wp\left(z-z_{0};g_{2},g_{3}\right)^{2}+L^{4}\right)},$
where $g_{2}=-c/36,g_{3}=0$ and $c=9L^{4}/4$. This completes the proof.
###### Remark 6.
Since elliptic functions are of order $2$, Theorems 3-6 indicate that the
estimate on the growth of meromorphic solutions of the equation (2) given in
Lemma 4 is sharp.
### 3.7 Examples
We present some examples to illustrate all the possible configurations of the
transcendental meromorphic solutions given in Theorem 3.
###### Example 1.
The Schwarzian differential equation
$\displaystyle
S(u,z)=\frac{3\left(25u^{4}+20u^{3}+14u^{2}+4u+1\right)}{2u(u-1)(u+1)\left(3u+1\right)}$
has the solution
$u(z)=\frac{1}{\wp(z-z_{0};g_{2},g_{3})-1},$ (36)
where $z_{0}\in\mathbb{C}$ is arbitrary, $g_{2}=16$ and $g_{3}=0.$
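Example 1 lends itself to a quick numerical sanity check. The sketch below is our own illustration (not code from the paper): it realizes $\wp(z;16,0)$ by integrating the Weierstrass equation $\wp^{\prime\prime}=6\wp^{2}-g_{2}/2$, seeded with the Laurent expansion near the pole at $z=0$, and exploits the Möbius invariance of the Schwarzian derivative, which gives $S(u,z)=S(\wp,z)$ for $u=1/(\wp-1)$.

```python
# Numerical sanity check of Example 1 (illustrative sketch, not from the paper).
import numpy as np
from scipy.integrate import solve_ivp

g2, g3 = 16.0, 0.0
z1, z = 0.05, 0.3                            # seed near the pole; test point
wp0 = 1/z1**2 + g2*z1**2/20 + g3*z1**4/28    # Laurent series of wp at z1
dwp0 = -2/z1**3 + g2*z1/10 + g3*z1**3/7      # ... and of wp'
sol = solve_ivp(lambda t, y: [y[1], 6*y[0]**2 - g2/2],
                (z1, z), [wp0, dwp0], rtol=1e-12, atol=1e-12)
wp, dwp = sol.y[0, -1], sol.y[1, -1]
d2, d3 = 6*wp**2 - g2/2, 12*wp*dwp           # wp'' and wp''' from the ODE

# u = 1/(wp - 1) is a Moebius transform of wp and the Schwarzian derivative is
# Moebius-invariant, so S(u, z) = S(wp, z) = wp'''/wp' - (3/2)(wp''/wp')^2.
S = d3/dwp - 1.5*(d2/dwp)**2
u = 1/(wp - 1)
rhs = 3*(25*u**4 + 20*u**3 + 14*u**2 + 4*u + 1) / (2*u*(u-1)*(u+1)*(3*u+1))
print(S, rhs)                                # the two values should agree
```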
###### Example 2.
The Schwarzian differential equation
$\displaystyle
S(u,z)=\frac{3\left(25u^{4}+20u^{3}+14u^{2}+4u+1\right)}{u(u-1)(u+1)\left(3u+1\right)}$
admits the solution
$u(z)=1-\frac{16}{\wp(z-z_{0};g_{2},g_{3})+12},$ (37)
where $z_{0}\in\mathbb{C}$ is arbitrary, $g_{2}=64$ and $g_{3}=0.$
###### Example 3.
The Schwarzian differential equation
$\displaystyle
S(u,z)=-\frac{3\left(25u^{4}+20u^{3}+14u^{2}+4u+1\right)}{u(u-1)(u+1)(3u+1)}$
has the solution
$u(z)=-1-\frac{8}{\wp(z-z_{0};g_{2},g_{3})-8},$ (38)
where $z_{0}\in\mathbb{C}$ is arbitrary, $g_{2}=64$ and $g_{3}=0.$
###### Example 4.
The Schwarzian differential equation
$\displaystyle
S(u,z)=\frac{3\left(225u^{4}+180u^{3}+126u^{2}+36u+9\right)}{u(u-1)(u+1)(3u+1)},$
admits the solution
$u(z)=-\frac{1}{3}-\frac{16}{\wp(z-z_{0};g_{2},g_{3})-12},$ (39)
where $z_{0}\in\mathbb{C}$ is arbitrary, $g_{2}=5184$ and $g_{3}=0.$
## Acknowledgement
We would like to thank Robert Conte for helpful discussions.
## Funding
The first author was supported by the National Natural Science Foundation of
China (Grant No. 11671191). The second author was supported by the National
Natural Science Foundation of China (Grant Nos. 11701382 and 11971288).
Department of Mathematics
Nanjing University
Nanjing, China
Institute for Advanced Study
Shenzhen University
Shenzhen, China
# Fault Detection and Diagnosis with Imbalanced and Noisy Data: A Hybrid
Framework for Rotating Machinery
Masoud Jalayer, Amin Kaboli, Carlotta Orsenigo, Carlo Vercellis
Department of Management, Economics and Industrial Engineering, Politecnico di Milano, Via Lambruschini 24/b, 20156, Milan, Italy
Department of Mechanical Engineering, University of Victoria, Victoria BC, V8P 5C2, Canada
Institute of Mechanical Engineering, School of Engineering, Swiss Federal Institute of Technology at Lausanne (EPFL), Lausanne, Switzerland
###### Abstract
Fault diagnosis plays an essential role in reducing the maintenance costs of
rotating machinery manufacturing systems. In many real applications of fault
detection and diagnosis, data tend to be imbalanced, meaning that the number
of samples for some fault classes is much smaller than that of the normal data samples. At the same time, in industrial conditions, accelerometers encounter high levels of disruptive signals and the collected samples turn out to be heavily noisy. As a consequence, many traditional Fault Detection and Diagnosis (FDD) frameworks achieve poor classification performance when dealing with real-world circumstances. Three main solutions have been proposed in the literature to cope with this problem: (1) the implementation of generative algorithms to increase the number of under-represented input samples, (2) the employment of a classifier powerful enough to learn from imbalanced and noisy data, and (3) the development of efficient data pre-processing, including feature extraction and data augmentation. This paper proposes a hybrid framework which uses the three aforementioned components to achieve an effective signal-based FDD
three aforementioned components to achieve an effective signal based FDD
system for imbalanced conditions. Specifically, it first extracts the fault
features, using Fourier and wavelet transforms to make full use of the
signals. Then, it employs Wasserstein Generative Adversarial Networks (WGAN)
to generate synthetic samples to populate the rare fault class and enhance the
training set. Moreover, to achieve higher performance, a novel combination of Convolutional Long Short-term Memory (CLSTM) and Weighted Extreme Learning Machine (WELM) is also proposed. To verify the effectiveness of the developed framework, several bearing dataset settings with different imbalance severities and noise degrees were used. The comparative results demonstrate
that in different scenarios GAN-CLSTM-ELM significantly outperforms the other
state-of-the-art FDD frameworks.
###### keywords:
Surface Inspection, Optical Quality Control, Computer Vision, Image Augmentation, Image Object Detection, Fault Diagnosis
## 1 Introduction
Rotating machinery is among the most essential equipment in today’s industrial environments. From petroleum, automobile, chemical, pharmaceutical, mining and power generation plants to consumer goods, virtually every facility operates at least one machine with a rotating component. Rotating components include gearboxes, axles, wind, steam and gas turbines, centrifugal and oil-free screw compressors, and pumps. 30% of rotating machinery breakdowns are caused mainly by loose, partially rubbed, misaligned, cracked, and unbalanced rotating parts [1]. Machine
breakdowns can present complex challenges during day-to-day operations and
significantly impact business profitability and operations productivity.
Monitoring machine health conditions can prevent machine breakdowns and reduce
the maintenance costs of manufacturing systems [2]. It is, hence, crucial to
develop efficient diagnosis systems to analyze different health conditions of
the rotating components.
Figure 1: Main steps of an automated Fault Detection and Diagnosis system
There are two main approaches for coping with fault detection and diagnosis in rotating machinery: (1) physics-based control systems and (2) data-driven models. Recent advancements in computer processing and digital technologies have provided the robustness and computational capability needed to deploy data-driven fault detection and diagnosis models. Implementing these models enables monitoring and controlling machine parameters remotely and deriving insights from the data. That is the main reason why data-driven fault detection and diagnosis models are used in smart manufacturing systems [2].
The main contributions of this paper are as follows: (1) In order to achieve higher classification performance in different environments, a hybrid deep learning architecture is designed which takes the Fourier and wavelet spectra of the vibration signals as input. This architecture uses CNN blocks to find shift-agnostic characteristics of the fault types, an LSTM block which captures their spatiotemporal and sequential features and, finally, a Weighted ELM classifier which is effective in learning from scarce patterns; the necessity of each component is examined through experimental comparisons. The
proposed classifier is named CLSTM-ELM. (2) A Wasserstein-GAN model with a
gradient penalty is developed and employed in the hybrid framework to
reproduce rare patterns and enhance the training set. The effectiveness of
this proposition is investigated in Section 5. (3) A comprehensive set of
scenarios is designed to study the effect of different imbalance severities
and noise degrees on the performance of the framework. A sensitivity analysis
is conducted on these scenarios, revealing further insights into the characteristics of the model. (4) Seven state-of-the-art FDD models are chosen
to compete with the proposed framework on four different dataset settings. The
experimental comparison illustrates how implementing WGAN-GP and W-ELM can
improve the classifier performance and shows the superiority of GAN-CLSTM-ELM
over other algorithms.
The rest of the paper is organized as follows. Section 2 provides an overview
of the principal AI-based approaches proposed for FDD problems. In Section 3,
the theory behind WGAN-GP, LSTM, Convolutional layers and W-ELM is briefly
reviewed. Then, the proposed hybrid framework, GAN-CLSTM-ELM, is presented in
Section 4. Section 5 compares the performance of different FDD algorithms on
different imbalance ratios and noise severities. Finally, some research
conclusions and future extensions are provided in Section 6.
## 2 Review of Current Models
Early data-driven fault detection and diagnosis (hereafter FDD) models have
benefited from traditional Artificial Intelligence (AI) models, or “shallow
learning” models, such as Support Vector Machines (SVM), Decision Trees (DT),
and Multi-layer Perceptron (MLP) [3]. Despite the applicability of traditional
AI models to FDD problems, these models show poor performances and limitations
when dealing with complicated fault patterns such as the above-mentioned
rotating machinery faults [4]. One of the first applications of rotating
machinery FDD dates back to 1969 in Boeing Co., when Balderston [5]
illustrated some characteristics of the fault signs on the signals measured by
an accelerometer in natural and high frequencies. [6] employed the rectified
envelope signals with a synchronous averaging, which was later called
“envelope analysis”, to identify bearing local faults. The peak localization
in the vibration signal spectrum is another classical example of the fault
detection methods for the ball bearing faults [7].
Recently, with the emergence of novel deep learning (DL) architectures and
their promising pattern recognition capabilities, many researchers proposed
deep learning solutions for data-driven-based FDD systems [8]. These FDD
approaches rely on the common assumption that the distribution of classes for
different machine health conditions is approximately balanced. In practice,
however, the number of instances may significantly differ from a fault class
to another. This causes a crucial issue since a classifier which has been
trained on such a data distribution primarily exhibits a skewed accuracy
towards the majority class, or fails to learn the rare patterns. Most of the
proposed FDD approaches, thus, suffer from higher misclassification ratios
when dealing with scarce conditions such as in high-precision industries where
the number of faults are limited [9].
Through their deep architectures, DL-based methods are capable of adaptively
capturing the information from sensory signals through non-linear transformations and of approximating complex non-linear functions with small errors [3]. Auto-encoders (AE) are among the most promising DL techniques for
automatic feature extraction of mechanical signals. They have been adopted in
a variety of FDD problems in the semiconductor industry [10], foundry
processes [11], gearboxes [12] and rotating machinery [13], [14]. [15]
employed the “stacked” variation of AE to initialize the weights and offsets
of a multi-layer neural network and to provide an expert knowledge for
spacecraft conditions. However, to cope with mechanical signals, using a
single AE architecture has shown some drawbacks: it may only learn similar
features in feature extraction and the learned features may have shift variant
properties which potentially lead to misclassification. Some approaches were
proposed to make this architecture appropriate for signal-based fault
diagnosis tasks. [16] used a local connection network on a normalized sparse
AE, called NSAE-LCN, to overcome these shortcomings. [17] developed a stacked AE to directly learn features of mechanical vibration signals on a motor
bearing dataset and a locomotive bearing dataset; specifically, they first
used a two-layer AE for sparse filtering and then applied a softmax regression
to classify the motor condition. The combination of these two techniques let the method achieve high accuracy in bearing fault diagnosis.
Extreme learning machine (ELM) is a competitive machine learning technique,
which is simple in theory and fast in implementation. As an effective and
efficient machine learning technique, ELM has attracted tremendous attention
from various fields in recent years. Some researchers suggest ELM and Online Sequential ELM (OS-ELM) for learning from imbalanced data [18, 19, 20]. ELM and OS-ELM can learn extremely fast due to their ability to learn data one-by-one or chunk-by-chunk [21]. Despite their effective performance on online sequential data, the performance associated with their classical implementation on highly imbalanced data is controversial; according to [22], for example, OS-ELM tends to have poor accuracy on such data. Therefore, they proposed a voting-based weighted version of it, called VWOS-ELM, to cope with severely rare patterns, whereas [9] developed a two-stage hybrid strategy using a modified version of OS-ELM, named PL-OSELM. In the offline stage, the principal curve is employed to explore the data distribution and build an initial model on it. In the online stage, virtual samples are generated according to the principal curve, and the algorithm selects virtual minority-class samples to supply more valuable training data.
Considering the promising results obtained by ELM-based classifiers coping with imbalanced data, they have accordingly become one of the mainstream approaches in the FDD research area. In [23], the authors developed an evolutionary OS-ELM for FDD
for bearing elements of high-speed electric multiple units. They employed a
K-means synthetic minority oversampling technique (SMOTE) for oversampling the
minority class samples. They also used an artificial bee colony (ABC)
algorithm to find a near-optimum combination of input weights, hidden layer
bias, and number of hidden layer nodes of the OS-ELM. In another paper, [24]
used density-weighted one-class ELM for fault diagnosis in high-voltage
circuit breakers (HVCBs), using vibration signals. [25] applied an adaptive
class-specific cost regulation ELM (ACCR-ELM) with variable-length brainstorm
algorithm for its parameter optimization to conveyor belt FDD. The proposed
algorithm exhibits a stable performance under different imbalance ratios. [26]
presented a feature extraction scheme on time-domain, frequency-domain, and
time-frequency-domain, to feed a full spectrum of information gained from the
vibration signals to the classifier. They also demonstrated that the cost-
sensitive gradient boosting decision tree (CS-GBDT) shows a satisfactory
performance for imbalanced fault diagnosis. In another FDD framework for
rolling bearings [27], the authors coupled an Optimized Unsupervised Extreme
Learning Machine (OUSELM) with an Adaptive Sparse Contractive Auto-encoder
(ASCAE). The ASCAE can gain an effectual sparse and more sensitive feature
extraction from the bearing vibration signals. A Cuckoo search algorithm was
also proposed to optimize the ELM hyper-parameters. Another variation of ELM
was developed by [28] to deal with imbalanced aircraft engines fault data
which is derived from the engine’s thermodynamic maps. This ELM variation
flexibly sets a soft target margin for each training sample; hence, it does
not need to force the margins of all the training samples exactly equaling one
from the perspective of margin learning theory. After experiments on different datasets, including the aircraft engine data, it is concluded that this soft-margin variant, SELM, outperforms standard ELM.
On the other hand, there are frameworks for imbalanced and noisy FDD without
the employment of any ELM variation. [16] proposed a Deep Normalized
Convolutional Neural Network (DNCNN) for FDD under imbalanced conditions. The
DNCNN employs a weighted softmax loss which assumes that the misclassification
errors of different health conditions share an equivalent importance.
Subsequently, it minimizes the overall classification errors during the
training processes and achieves a better performance when dealing with
imbalanced fault classification of machinery adaptively. [29] used WGAN-GP to
interpolate stochastically between the actual and virtual instances so that it
ensures that the transition region between them is stable. They also utilized
a Stacked-AE to classify the enhanced dataset and determined the availability
of the virtual instances. Since a single GAN model encounters hardship and
poor performance when dealing with FDD datasets, [30] proposed a framework based on GANs for small-sample-size conditions which boosts the adaptability of feature extraction and, consequently, the diagnosis accuracy. The effectiveness
and satisfactory performance of the proposed method were demonstrated using
CWRU bearing and gearbox datasets. Another novel GANs-based framework, named
dual discriminator conditional GANs (D2CGANs), has been recently proposed to
learn from the signals on multi-modal fault samples [31]. This framework
automatically synthesizes realistic high-quality fake signals for each fault
class and is used for data augmentation such that it solves the imbalanced
dataset problem. After some experiments on the CWRU bearing dataset, the
authors showed that Conditional-GANs, Auxiliary Classifier-GANs and D2CGANs
significantly outperform GANs and Dual-GANs. [32] proposed a framework which
adopts CNN-based GANs with the coordinated usage of two auxiliary classifiers. The experimental results on analog-circuit fault diagnosis data
suggested that the proposed framework achieves a better classification
performance than that of DBN, SVM and artificial neural networks (ANN). [33]
presented a CNN-based GANs for rotating machinery FDD which uses a Wavelet
Transform (WT) technique. The proposed so-called WT-GAN-CNN approach extracts
time-frequency image features from one-dimension raw signals using WT.
Secondly, GANs are used to generate more training image samples while the
built CNN model is used to accomplish the FDD on the augmented dataset. The
experimental results demonstrated high testing accuracy under the interference of severe environmental noise or when working conditions were changed.
## 3 Background Theory
### 3.1 WGAN-GP
In the classical definition, GANs consist of two adversarial networks trained
in opposition to one another: (1) a generative model, $G$, which learns the
data distribution to generate a fake sample $\tilde{x}^{(i)}=G(z)$ from a
random vector $z$, where $z\sim\mathscr{P}_{z}$ and $\mathscr{P}_{z}$ is the
noise distribution; (2) a discriminator model, $D$, which determines if the
sample is generated by $G$ or is a real sample. $G$ strives to deceive $D$ by
making realistic random samples, while $D$ receives both real and fake
samples. On the contrary, $D$ tries to find out the source of each sample by
calculating its corresponding probability, $p(S|x)=D(x)$, and is trained to
maximize the log-likelihood it assigns to the correct source [34]
$\begin{split}\min_{G}\max_{D}V(D,G)=\mathbb{E}_{x\sim\mathscr{P}_{r}}[\log(D(x))]+\mathbb{E}_{\tilde{x}\sim\mathscr{P}_{f}}[\log(1-D(\tilde{x}))],\end{split}$
(1)
where $\mathscr{P}_{r}$ and $\mathscr{P}_{f}$ denote the distribution of the
raw data and of the fake samples, respectively. Then, the model reaches a
dynamic equilibrium if $\mathscr{P}_{f}=\mathscr{P}_{r}$ [34].
While GAN is a powerful generative model, it suffers from training
instability. Several solutions have been proposed to address this problem. Wasserstein GAN (WGAN) is one of the recently proposed techniques, offering a new loss function which has demonstrated better performance and better model stability. The present paper uses a variation of GAN, Entropy-
based WGAN-GP proposed by [35], generating an entropy-weighted label vector
for each class with respect to its frequency.
When the discriminator $D$ is sufficiently trained, the gradient of the
generator $G$ is relatively small; when the effect of $D$ is lower, it gets
larger. WGAN employs a distance called Wasserstein to calculate the difference
between the distributions of the real and fake samples [36], which can be
mathematically written as follows:
$\mathcal{W}(\mathscr{P}_{r},\mathscr{P}_{f})=\inf_{\lambda\in\Pi(\mathscr{P}_{r},\mathscr{P}_{f})}\mathbb{E}_{(a,b)\sim\lambda}[\|a-b\|],$
(2)
where $\Pi(\mathscr{P}_{r},\mathscr{P}_{f})$ denotes the set of all joint distributions $\lambda(a,b)$ whose marginals are $\mathscr{P}_{r}$ and $\mathscr{P}_{f}$, respectively. The Wasserstein distance can therefore be interpreted as the minimum transportation cost between the distributions of the real and fake datasets. To
avoid the gradient uninformativeness issue and to guarantee the existence and
uniqueness of the optimal discriminative function and the respective Nash
equilibrium, Lipschitz condition is applied [37]. In the proposed WGAN, the
discriminative function is restricted to 1-Lipschitz. The WGAN, hence,
proposes a revised objective function based on Eq. 1, using Kantorovich-Rubinstein
duality, and is formulated as:
$\min_{G}\max_{D\in\mathcal{L}_{1}}\mathbb{E}_{x\sim\mathscr{P}_{r}}[D(x)]-\mathbb{E}_{\tilde{x}\sim\mathscr{P}_{f}}[D(\tilde{x})],$
(3)
where $\mathcal{L}_{1}$ is the collection of 1-Lipschitz functions. In this case, under an optimal discriminator, minimizing the objective function with respect to the parameters of $G$ amounts to minimizing $\mathcal{W}(\mathscr{P}_{r},\mathscr{P}_{f})$.
Let $\delta\sim U[0,1]$ be a random number used to form a linear interpolation between $x$ and $\tilde{x}$:
$\hat{x}=\delta x+(1-\delta)\tilde{x},$ (4)
Therefore, the loss function of the discriminator, $\mathscr{L}_{D}$, can be
determined as follows
$\mathscr{L}_{D}=\mathbb{E}_{\tilde{x}\sim\mathscr{P}_{f}}[D(\tilde{x})]-\mathbb{E}_{x\sim\mathscr{P}_{r}}[D(x)]+\gamma\mathbb{E}_{\hat{x}\sim\mathscr{P}_{\hat{x}}}[(\|\nabla_{\hat{x}}D(\hat{x})\|_{2}-1)^{2}],$
(5)
where $\gamma$ stands for the gradient penalty coefficient and $\mathscr{P}_{\hat{x}}$ denotes the distribution of the interpolates $\hat{x}$. The last term of Eq. 5 is the gradient penalty $\gamma\mathbb{E}_{\hat{x}\sim\mathscr{P}_{\hat{x}}}[(\|\nabla_{\hat{x}}D(\hat{x})\|_{2}-1)^{2}]$ [38].
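For illustration, a minimal PyTorch sketch of Eqs. (4)-(5) is given below; the discriminator, the batch tensors and the penalty coefficient are placeholders of our own, not the architecture used later in the paper.

```python
# Illustrative sketch of the interpolation (Eq. 4) and the discriminator loss
# with gradient penalty (Eq. 5); D, x, x_fake and gamma are placeholders.
import torch

def d_loss_wgan_gp(D, x, x_fake, gamma=10.0):
    delta = torch.rand(x.size(0), 1)                       # delta ~ U[0, 1]
    x_hat = (delta * x + (1 - delta) * x_fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    penalty = ((grad.norm(2, dim=1) - 1) ** 2).mean()      # (||grad|| - 1)^2
    return D(x_fake).mean() - D(x).mean() + gamma * penalty

# Toy usage on flattened 800-sample signal bursts:
D = torch.nn.Sequential(torch.nn.Linear(800, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 1))
x, x_fake = torch.randn(8, 800), torch.randn(8, 800)
print(d_loss_wgan_gp(D, x, x_fake).item())
```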
Figure 2 exhibits the simplified process of synthesizing rotating machinery
signal samples out of some random noises in a conventional GAN model.
Figure 2: The schematic process in GANs
### 3.2 CLSTM
RNN (Recurrent Neural Network) is a powerful class of artificial deep learning
architectures proposed to identify patterns in sequential data. It can
consider time and sequence by taking the present data and the recent past data
as inputs. RNN is trained across the time steps using backpropagation. However, due to the repeated multiplication of gradients across time steps, the gradient may vanish or explode rapidly. This issue limits its usage when the time window is greater than 10 discrete time steps [39]. By adding constant error
carousels and introducing forget gates to RNN, a new form of RNN architecture
is proposed, named LSTM. These adopted forget gates are able to control the
utilization of information in the cell states and impede the vanishing or
exploding gradient issues [40]. Compared to RNN, LSTM is more powerful in
capturing long-term dependencies of the data features where it can handle
time-windows exceeding 1000 discrete time stamps [39].
On the other hand, convolutional neural networks are mainly composed of
convolutional, pooling and normalization layers, making them capable of capturing shift-invariance in the data and of sharing weights through
convolutional connections. This weight sharing makes CNN lighter for
computation since it reduces the number of parameters in the network.
Let $x_{i}=[\kappa_{1},\ldots,\kappa_{L}]$ be the sequential data, where $L$
denotes the length of the signal sample, and $\kappa_{i}\in\mathbb{R}^{d}$ the vector of values at each timestamp, where $d$ is the number of channels. The
convolution operation is defined as the following dot product:
$c_{i}=\varphi(u\cdot\kappa_{i:i+m-1}+b),$ (6)
where $u\in\mathbb{R}^{md}$ is a filter vector, $b$ is the bias, $\varphi$ is
an activation function, and $\kappa_{i:i+m-1}$ represents an $m$-length window
starting from the $i_{th}$ time stamp of the sample. Sliding the filter window
from the first timestamp of the sample to its last possible one, a feature map
is given as follows:
$\mathscr{C}_{\xi}=[c_{1},c_{2},\dotsc,c_{L-m+1}],$ (7)
where $\mathscr{C}_{\xi}$ corresponds to the $\xi_{th}$ filter.
To reduce the length of these feature maps and the number of model parameters, pooling layers are used. Max pooling is one of the most common pooling techniques. The compressed feature vector, $\mathscr{H}$, is defined as follows:
$\mathscr{H}=[h_{1},h_{2},\dotsc,h_{(\frac{L-m}{s})+1}],$ (8)
where $h_{\xi}=\max(c_{(\xi-1)s+1},c_{(\xi-1)s+2},\dotsc,c_{\xi s})$ is the maximum over $s$ consecutive values of the feature map and $s$ denotes the pooling length. Batch normalization is widely used in CNN blocks to reduce internal covariate shift and to make the learning quicker by alleviating the computational load. The normalization standardizes each individual scalar feature to zero mean and unit variance.
This process can be mathematically described as:
$\hat{\kappa}_{i}=\frac{\kappa_{i}-\mathbb{E}[x_{i}]}{\sqrt{\mathrm{Var}(x_{i})+\epsilon}},$
(9)
where $\epsilon$ is a small constant added for numerical stability. However, the extracted features can be distorted when the features of a certain layer are normalized directly by Eq. 9, leading to poor network expression abilities. To resolve this issue, each normalized value $\hat{\kappa}_{i}$ is modified based on the scale parameter $\varrho_{i}$ and the shift parameter $\varpi_{i}$. These two learnable reconstruction parameters can recover the feature distribution of the original network. The output of the neuron response is then determined as:
$\hat{\nu}_{i}=\varrho_{i}\hat{\kappa}_{i}+\varpi_{i}.$ (10)
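For concreteness, the toy numpy sketch below (our own; shapes and values are arbitrary) traces Eqs. (6)-(10) on a single-channel sequence: a one-dimensional convolution with a ReLU activation, max pooling of length $s$, and batch normalization with learnable scale and shift.

```python
# Toy numpy walk-through of Eqs. (6)-(10); shapes and values are illustrative.
import numpy as np

L, m, s = 12, 3, 2
x = np.random.randn(L)                  # single-channel sequence (d = 1)
u, b = np.random.randn(m), 0.1          # filter vector and bias
relu = lambda t: np.maximum(t, 0.0)     # activation function

# Eqs. (6)-(7): slide the m-length window to build the feature map C
C = np.array([relu(u @ x[i:i+m] + b) for i in range(L - m + 1)])

# Eq. (8): max pooling over s consecutive values of the feature map
H = np.array([C[i:i+s].max() for i in range(0, len(C) - s + 1, s)])

# Eqs. (9)-(10): batch normalization with learnable scale and shift
eps, scale, shift = 1e-5, 1.0, 0.0
H_hat = (H - H.mean()) / np.sqrt(H.var() + eps)
nu = scale * H_hat + shift
print(C.shape, H.shape, nu)
```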
### 3.3 W-ELM
ELM is in general a single-hidden-layer feed-forward neural network (SLFN).
The difference between SLFN and ELM lies in how the weights of hidden layer
and output layer neurons are updated. In SLFN, the weights of both the input and output layers are initialized randomly, and the weights of both layers are updated by the backpropagation algorithm. In ELM, the weights of the hidden layer are assigned randomly but never updated, and only the weights of the output layer are updated during the training process. Since in ELM the weights of only one layer are updated, as opposed to both layers in SLFN, ELM is faster than SLFN.
Let the training dataset be $\\{(x_{i},y_{i})\\}_{(i=1)}^{N}$, where $x_{i}$
is the input vector and $y_{i}$ is the output vector. The output of the
$j^{th}$ hidden layer neuron is given by $\varphi_{a}(a_{j},b_{j},x_{i})$,
where $a_{j}$ is the weight vector connecting the input neurons to the
$j^{th}$ hidden layer neuron, $b_{j}$ is the bias of the $j^{th}$ hidden
neuron, and $\varphi_{a}$ is the activation function. Each hidden layer neuron
of ELM is also connected to each output layer neuron with some associated
weights. Let $\beta=[\beta_{1},\beta_{2},\dotsc,\beta_{K}]^{T}$ denote the
output weights connecting the hidden layer (composed of $K$ hidden nodes) with
output neurons. Thus, the $i^{th}$ output is determined as:
$o_{i}=\sum^{K}_{j=1}\beta_{j}\varphi_{a}(a_{j},b_{j},x_{i}),\quad i=1,\dots,N.$
(11)
Let $H=(H_{ij})=(\varphi_{a}(a_{j},b_{j},x_{i}))$ be the hidden layer matrix. The
$N$ equations of the output layer (Eq. 11) can be shortly written as follows:
$O=H\beta.$ (12)
Using Moore-Penrose generalized inverse [41], $H^{\dagger}$, a least square
solution, referred to as the extreme learning machine, can be determined
mathematically as follows:
$\beta=H^{\dagger}Y=\begin{cases}H^{T}(\frac{I}{C}+HH^{T})^{-1}Y&N<K\\ (\frac{I}{C}+H^{T}H)^{-1}H^{T}Y&N\geq K,\end{cases}$ (13)
where $C$ is a positive parameter to achieve a better generalization
performance [42]. Weighted ELM [20] considers an $N\times N$ diagonal matrix $W$ whose diagonal entry $w_{ii}$ is associated with training sample $x_{i}$. If $x_{i}$ belongs to the minority class, the algorithm allocates a relatively larger weight to $w_{ii}$ than to those of the majority classes, which intensifies the impact of minority classes in the training phase. Therefore, the solution of Eq. 12 will
be obtained by using the optimization formula of ELM:
$\begin{split}\emph{Minimize}:L_{P^{ELM}}=\frac{1}{2}\|\beta\|^{2}+CW\frac{1}{2}\sum^{N}_{i=1}\|\eta_{i}\|^{2}\\\
\emph{subject
to}:h(x_{i})\beta=y^{T}_{i}-\eta^{T}_{i},i=1,\dotsc,N.\end{split}$ (14)
According to the KKT theorem [43], the solution to Eq. 14 is obtained as
follows:
$\beta=H^{\dagger}Y=\begin{cases}H^{T}(\frac{I}{C}+WHH^{T})^{-1}WY&N<K\\ (\frac{I}{C}+H^{T}WH)^{-1}H^{T}WY&N\geq K.\end{cases}$ (15)
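As an illustration, the numpy sketch below (ours; the toy data, weighting rule and sigmoid activation are assumptions) trains a weighted ELM via the $N\geq K$ branch of Eq. (15): random hidden weights stay fixed, each sample receives a class-frequency-based weight, and the output weights $\beta$ are obtained in closed form.

```python
# Illustrative weighted-ELM training via Eq. (15); toy data, our own sketch.
import numpy as np

rng = np.random.default_rng(0)
N, d, K, C = 200, 10, 50, 1.0                # samples, features, hidden nodes
X = rng.standard_normal((N, d))
labels = (rng.random(N) < 0.1).astype(int)   # imbalanced binary labels
Y = np.eye(2)[labels]                        # one-hot targets

A = rng.standard_normal((d, K))              # random input weights (fixed)
b = rng.standard_normal(K)                   # random hidden biases (fixed)
H = 1.0 / (1.0 + np.exp(-(X @ A + b)))       # hidden layer matrix (sigmoid)

# W gives minority-class samples larger weights (here: 1 / class frequency)
W = np.diag(1.0 / np.bincount(labels)[labels])

# Eq. (15), N >= K branch: beta = (I/C + H^T W H)^{-1} H^T W Y
beta = np.linalg.solve(np.eye(K) / C + H.T @ W @ H, H.T @ W @ Y)
pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```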
## 4 The Proposed FDD Model
As mentioned in Section 1, there is a need to improve the performance of
imbalanced and noisy FDD systems. Therefore, this paper presents a hybrid
framework which embeds three steps: (1) the employment of a generative
algorithm to improve the training set, (2) the signal processing with fast Fourier transform (FFT) and continuous wavelet transform (CWT) techniques, providing the DL classifier with a deeper understanding of the
fault’s identity, (3) the development of a hybrid classifier based on CNN,
LSTM and weighted ELM, as illustrated in Figure 3.
### 4.1 Sample generation model design
The structure of the WGAN-GP generator $G$ comprises a five-layered
autoencoder of $l,\frac{l}{2},\frac{l}{4},\frac{l}{2}$ and $l$ neurons, while
the discriminator $D$ is composed of three convolutional-LSTM blocks. The
input variable $z$ has a dimension of $l\times 1$. Due to the poor performance of weight clipping in the original WGAN, the paper uses an alternative in the form of a gradient penalty in the discriminator loss function, which has been introduced by [44] and which achieves high performance compared to other GAN models.
Algorithm 1 shows how WGAN-GP works, where $\gamma$ is the gradient penalty coefficient, $n_{critic}$ is the number of discriminator iterations per generator iteration, $m$ denotes the batch size, $\alpha$, $\beta_{1}$ and $\beta_{2}$ are the Adam hyperparameters, and $\omega_{0}$ and $\theta_{0}$ represent the initial discriminator and generator parameters, respectively.
Input: $\gamma,n_{critic},m,\alpha,\beta_{1},\beta_{2},\omega_{0},\theta_{0}$
while _$\theta$ has not converged_ do
  for _$t\leftarrow 1,\dots,n_{critic}$_ do
    for _$i\leftarrow 1,\dots,m$_ do
      Sample real data $x\sim\mathscr{P}_{r}$, noise $z\sim\mathscr{P}_{z}$, and a random number $\delta\sim U[0,1]$
      $\tilde{x}\leftarrow G_{\theta}(z)$
      $\hat{x}\leftarrow\delta x+(1-\delta)\tilde{x}$
      $\mathscr{L}^{(i)}\leftarrow D_{\omega}(\tilde{x})-D_{\omega}(x)+\gamma(\|\nabla_{\hat{x}}D_{\omega}(\hat{x})\|_{2}-1)^{2}$
    $\omega\leftarrow\mathrm{Adam}(\nabla_{\omega}\frac{1}{m}\sum_{i=1}^{m}\mathscr{L}^{(i)},\omega,\alpha,\beta_{1},\beta_{2})$
  Sample a batch of $m$ noise samples $\\{z^{(i)}\\}^{m}_{i=1}\sim\mathscr{P}_{z}$
  $\theta\leftarrow\mathrm{Adam}(\nabla_{\theta}\frac{1}{m}\sum_{i=1}^{m}-D_{\omega}(G_{\theta}(z^{(i)})),\theta,\alpha,\beta_{1},\beta_{2})$
Algorithm 1 WGAN-GP
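The toy PyTorch rendition below mirrors the structure of Algorithm 1; the two-layer networks, the learning rate and all other hyperparameters are placeholders of our own, not those of the paper.

```python
# Toy PyTorch rendition of Algorithm 1 (all settings are placeholders).
import torch

l, m, n_critic, gamma = 64, 32, 5, 10.0
G = torch.nn.Sequential(torch.nn.Linear(l, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, l))
D = torch.nn.Sequential(torch.nn.Linear(l, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, 1))
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.9))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.9))
real_data = torch.randn(1024, l)              # stand-in for the training set

for step in range(100):                       # until theta converges
    for t in range(n_critic):                 # discriminator iterations
        x = real_data[torch.randint(0, 1024, (m,))]        # x ~ P_r
        x_fake = G(torch.randn(m, l)).detach()             # x_tilde = G(z)
        delta = torch.rand(m, 1)
        x_hat = (delta * x + (1 - delta) * x_fake).requires_grad_(True)
        grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
        loss_D = (D(x_fake).mean() - D(x).mean()
                  + gamma * ((grad.norm(2, dim=1) - 1) ** 2).mean())
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    loss_G = -D(G(torch.randn(m, l))).mean()  # generator update
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```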
### 4.2 Fault diagnosis model design
In order to reveal more information about the fault characteristics, the paper
separately employs two signal processing feature extraction techniques on the
input samples: FFT and CWT [45], whose results are merged to be passed to the
deep learning architectures. As demonstrated in [46], employing these two feature extraction techniques significantly improves the diagnosis performance and increases the accuracy.
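A minimal sketch of this pre-processing step is shown below (our own illustration; the Ricker mother wavelet and the scale grid are assumptions made for brevity, not necessarily the choices of the paper): the FFT magnitude spectrum and a small wavelet scalogram of a burst are computed and returned side by side.

```python
# Illustrative FFT + CWT pre-processing of one burst (our own sketch; the
# Ricker mother wavelet and the scale grid are assumptions for illustration).
import numpy as np

def ricker(points, a):
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def extract_features(burst, scales=(1, 2, 4, 8, 16)):
    spectrum = np.abs(np.fft.rfft(burst))                # FFT magnitudes
    scalogram = np.array([np.convolve(burst, ricker(10 * a, a), mode="same")
                          for a in scales])              # simple CWT rows
    return spectrum, scalogram

burst = np.random.randn(800)                 # one 800-timestamp signal burst
spec, scal = extract_features(burst)
print(spec.shape, scal.shape)                # (401,), (5, 800)
```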
As shown in Figure 3, the paper proposes a dual-path deep learning architecture which combines LSTM and CNN. The reason behind this duality lies in the nature of these two architectures and the fact that each of them captures a different feature type. More specifically, [47] illustrated that
the concatenation of CNN and LSTM features meaningfully enhances the
classification accuracy.
In the first pathway, after applying a one-dimensional convolutional layer on
the pre-processed input tensors which extracts the local and discriminative
features, an LSTM is added to encode the long-term temporal patterns. The
importance of adding a convolutional layer prior to the LSTM layer is not only
that it reduces the high-frequency noise impact, but it also helps the LSTM to
learn more effectively. The convolutional layer processes the sequence of the
features extracted after FFT and CWT. The model slides the kernel filters on
the sequence and generates feature maps. These feature maps are subsequently
processed by an LSTM which captures the temporal dependencies of these sequenced features.
On the other hand, in the second pathway three one-dimensional CNN blocks are
adjoined to better extract the local features of the Fourier and Wavelet
transform-based diagrams. Each CNN block contains a convolutional layer which
convolves the local regions of the diagrams, and a Rectified Linear Unit
(ReLU) activation function, which helps the network achieve a non-linear
expression and, consequently, make the learned features more distinguishable.
To reduce the computational complexity and the internal covariate shift, a batch normalization layer is added to each CNN block, followed by a max pooling layer which compresses the learnt features, improves their local translation invariance and also alleviates the computational expense of learning. These CNN blocks are followed by a flatten layer that reshapes the
tensor size to a vector to become compatible to join the output of the first
pathway and to be fed to the classifier.
For the classification architecture, the paper employs a weighted ELM which is
introduced in Section 3.3. ELM classifiers achieve fast training speeds by means of their non-tuned training strategy. They also tend to have high generalization performance in multi-class problems and have shown excellent classification performance in different studies [19]. Compared to unweighted ELM, weighted ELM puts an additional accent on samples from the minority classes of an imbalanced class distribution, so that their features are also well perceived by the ELM [20]. Therefore, after
concatenating the outputs of both pathways, their combined learnt features are
passed to the W-ELM to diagnose the fault types.
### 4.3 General procedure of the proposed model
The schematic flowchart of the proposed intelligent FDD model is illustrated
in Figure 3. The general procedure of this model is summarized as follows:
* •
Step 1: The sensory signals are collected from the accelerometers mounted on
the rotating machinery.
* •
Step 2: The training, test, and validation datasets are constructed by resampling the raw signals into separate bursts.
* •
Step 3: The training dataset is augmented using WGAN-GP introduced in Section
3.1 on the minority classes. The fake samples are added to the real samples to
make the training dataset balanced.
* •
Step 4: By employing FFT and CWT techniques the model can extract fault
signatures which were hidden in the raw signals. The extracted Fourier and
Wavelet transform-based diagrams are concatenated to form three-dimensional
matrices which can be given in input to the deep learning blocks.
* •
Step 5: These pre-processed samples go through two different paths of deep
learning blocks: (1) a one-dimensional convolutional layer followed by an
LSTM block, and (2) three blocks of CNN architectures followed by flatten and
dense layers.
* •
Step 6: After concatenating the outputs of the two deep learning paths, a
W-ELM technique is used to classify the extracted deep features and diagnose
the fault type.
Figure 3: Schematic illustration of the proposed model for the training set
## 5 Results
To evaluate the effectiveness of the proposed method, some experiments were
run on one of the most widely used bearing fault datasets, known as Case
Western Reserve University (CWRU) bearing dataset (https://csegroups.case.edu/bearingdatacenter/home). To conduct a
comprehensive comparison, we defined different noise and imbalance conditions
on which eight different DL-based FDD methods were tested. All experiments
were performed using Python 3.9 on a computer with an NVIDIA GeForce GTX 1070 GPU, CUDA version 10.1, and 16 GB of memory.
### 5.1 Dataset description
The paper employs the CWRU bearing dataset collected using the test stand shown in Figure 4, which consists of a motor, a torque transducer/encoder, a dynamometer, and control electronics. The dataset covers five different types of faults corresponding to the inner race, the balls, and the outer race at three different orientations: 3 o’clock (directly in the load zone), 6 o’clock (orthogonal to the load zone) and 12 o’clock (opposite to the load zone). Moreover, the faults are collected at severities ranging from 0.007 inches to 0.040 inches in diameter. The dataset is also recorded for motor loads from 0 to 3 horsepower. However, for the sake of simplicity this paper uses only one motor speed of 1797 RPM. The samples are collected at a frequency of 12,000 samples/second from two accelerometers mounted on the fan end and drive end of the machine. In the experiments we took signal bursts of 800 timestamps, equal to 66.6 milliseconds, to generate different datasets of approximately 25,500 signal bursts.
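For illustration, segmenting one raw accelerometer channel into such bursts can be sketched as follows (our own minimal example; the array `x` stands in for a recorded channel).

```python
# Segmenting one raw channel into 800-timestamp bursts (illustrative sketch).
import numpy as np

fs, burst_len = 12_000, 800            # 12 kHz sampling; ~66.6 ms per burst
x = np.random.randn(10 * fs)           # stand-in for a recorded channel (10 s)
n = len(x) // burst_len
bursts = x[:n * burst_len].reshape(n, burst_len)
print(bursts.shape)                    # (150, 800) for this 10 s example
```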
To explore the diagnostic capabilities of the proposed framework in imbalanced
conditions, unequal sets of samples were selected such that one fault class becomes rare. Table 1 shows the distribution of samples for each machine condition in the selected sets, where the value of $\alpha$ denotes the percentage of the minority class within the whole dataset. Accordingly, as $\alpha$ decreases the imbalance degree increases. In this paper we chose the “out3” class to represent the minority class, whose samples correspond to the outer race faults at the position opposite to the load zone. In these scenarios, the
“health” class, corresponding to the healthy condition, represents
($80-\alpha$) percent of the whole dataset, while the other fault classes
account for 5% each. The generative algorithm, subsequently, strives to
equalize the sample size of the fault classes in the training set by
augmenting the minority class.
By adding additive white Gaussian noise (AWGN) with different signal-to-noise ratios (SNRs) to the original samples, the paper is able to examine the performance of the GAN-CLSTM-ELM framework under different natural noise severities. These noisy samples better portray real-world industrial production settings, where noise levels vary widely. The original drive-end and fan-end signals together with their derived noisy samples are exhibited in Figure 5.
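The noise injection can be sketched as follows (our own minimal example; whether the paper’s SNR values are linear power ratios or decibels is an assumption here, and both conventions are noted in the code).

```python
# Adding white Gaussian noise to a burst at a target SNR (illustrative sketch).
import numpy as np

def add_awgn(burst, snr, rng=np.random.default_rng()):
    # snr as a linear power ratio; use 10 ** (snr_db / 10) for a dB spec
    p_noise = np.mean(burst ** 2) / snr
    return burst + rng.normal(0.0, np.sqrt(p_noise), size=burst.shape)

burst = np.random.randn(800)
noisy = add_awgn(burst, snr=10)        # the heaviest noise setting considered
```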
Figure 4: Two-horsepower motor (left), a torque transducer and encoder (center) and a dynamometer (right) used to collect the dataset

Figure 5: Some noisy signal samples generated from raw sensory data with different SNRs

Table 1: The distribution of condition-type samples in the cases (percentage of training samples in each condition)

_minority share (%)_ | _health_ | _inner_ | _ball_ | _out1_ | _out2_ | _out3_
---|---|---|---|---|---|---
$\alpha=4$ | 76% | 5% | 5% | 5% | 5% | 4%
$\alpha=2$ | 78% | 5% | 5% | 5% | 5% | 2%
$\alpha=1$ | 79% | 5% | 5% | 5% | 5% | 1%
$\alpha=0.5$ | 79.5% | 5% | 5% | 5% | 5% | 0.5%
$\alpha=0.25$ | 79.75% | 5% | 5% | 5% | 5% | 0.25%
### 5.2 GAN model selection
Figure 6: The generator and discriminator loss values for different GAN
architectures
As mentioned in the previous section, the Wasserstein loss function and the addition of the gradient penalty help stabilize the generative algorithm. Figure 6 depicts how the proposed WGAN-GP reaches an equilibrium after 9000 epochs, where it can generate realistic samples, whereas the other GAN generators make samples which cannot deceive their discriminators. As can be clearly seen in Figure 6, their generator loss values go significantly higher than those of the discriminators. This comparison demonstrates why the implementation of WGAN-GP is preferred. Figure
7 shows some real samples of normal baseline, and fault conditions associated
with the bearing ball, inner race and outer race with fault diameters of 7
mils and 21 mils. Figure 8, similarly, visualizes the synthetic samples
generated by WGAN-GP after 10000 epochs.
Figure 7: Some real samples associated with different running conditions
Figure 8: Some random synthesized samples associated with different running
conditions made by WGAN-GP
### 5.3 The sensitivity analysis
In this section the paper illustrates a sensitivity analysis on the
performance of the proposed model by changing the $\alpha$ values and the
SNRs. Specifically, we considered 25 points with respect to $\alpha\in\{2^{k}:k=-2,-1,0,1,2\}$ and SNR $\in\{10,20,50,75,100\}$, and ran the model 10 times at each point to achieve a robust analysis. Figures 9 and 10 demonstrate the performance of the GAN-CLSTM-ELM model with different metrics at these points.
Figure 9: Accuracy and Recall performances of GAN-CLSTM-ELM in different SNR
and $\alpha$ levels Figure 10: AUC and $f_{1}$ score performances of GAN-
CLSTM-ELM in different SNR and $\alpha$ levels
As can be seen in the figures, high levels of noise impact the performance of the model, changing the $f_{1}$ score from 100% to 95.91%, and from 99.7% to 81.45%, when $\alpha=4$ and $\alpha=0.25$, respectively. In this defined space the accuracy, AUC and recall values stay above 96.7%, 92.6% and 81.16%, respectively. The model shows relatively high robustness to both noise and imbalance severities for SNRs greater than 20. In its best-case scenario, where $\alpha=4$ and SNR=100, it attains an $f_{1}$ score of 100%; in its worst-case scenario, where $\alpha=0.25$ and SNR=50, it respectively achieves 98.02% and 99.77% of $f_{1}$ score and accuracy. In the following, the paper conducts a comparison to assess how meaningful these numbers are and whether the proposed model can better mitigate the adverse impacts of imbalanced and noisy conditions.
### 5.4 Model performance evaluation
In order to achieve meaningful comparisons, several novel FDD frameworks were employed to perform the diagnosis in different scenarios. CLSTM, df-CNN, sdAE,
GAN-CNN, WELM, CNN-STFT, and CNN-FFT have shown promising performances in the
literature, hence, they were selected for this purpose. Three traditional
machine learning classifiers, SVM, ANN and Random Forest (RF), are also
considered in this experimental comparison. A grid-search was designed on the
hyper-parameters of these models to achieve higher performances. Specifically,
the learning rate, batch size and the architecture of fully connected layers
were optimized for each algorithm, while the number of epochs was set to 50
for all architectures. CLSTM-ELM was also added to the comparison panel to
examine the necessity of Weighted ELM in the architecture of the proposed
framework. This paper used two augmentation techniques: (i) "GAN", with WGAN-GP, as discussed earlier; (ii) "classic", where the samples are flipped and mirrored, and different white noises are added to the samples. A brief
description of the selected frameworks is provided in Table 2.
Table 2: The comparison panel
Framework | Augmentation | Preprocessing | Description | Reference
---|---|---|---|---
CLSTM | classic | FFT+CWT+Statistical features | Its architecture comprises two CNN blocks (containing 1D-Convolutional layers, Batch Normalization, ReLU and Max Pooling), an LSTM block, a Logarithmic SoftMax, a concatenation which adds statistical features and three fully connected neural networks for the classification. | [46]
CLSTM-ELM | classic | FFT+CWT+Statistical features | Its CNN and LSTM architectures are the same as in CLSTM; yet, the fully connected layers are substituted with W-ELM with 150 nodes. | N/A
df-CNN | classic | raw signals | It is proposed to make an abstract 2-dimensional image out of raw signals. Its architecture comprises two CNN blocks (containing 2D-Convolutional layers, Batch Normalization, ReLU and Max Pooling), and three fully connected neural networks for the classification. df-CNN works directly on the raw vibration signals. | [48]
sdAE | classic | raw signals | It is a multilayered architecture composed of four auto-associative neural network layers, which contain one input layer and three AEs. The inputs of this framework are raw signals. | [14]
CNN-STFT | classic | STFT | The architecture consists of three CNN blocks (containing one 1D-Convolutional layer and a Pooling layer), two fully connected layers, and a SoftMax classification layer. It takes the short-term Fourier transform (STFT) of the signals as its input. | [31]
GAN-CNN | WGAN-GP | STFT | The paper has slightly modified the architecture proposed by its authors to examine how the same WGAN-GP architecture improves the CNN-STFT model. Therefore, GAN-CNN is a combination of the WGAN-GP and CNN-STFT models. | [33]
W-ELM | | FFT+VMD+Statistical features | It takes a combination of FFT, VMD [49] and some statistical features. | [20]
CNN-FFT | classic | FFT | It takes the FFT as its input while its architecture comprises two CNN blocks (containing 1D-Convolutional and Pooling layers) followed by three fully connected layers. | [16]
GAN-SVM | WGAN-GP | Statistical features | SVM with a polynomial kernel of degree 2 is selected. | N/A
GAN-ANN | WGAN-GP | Statistical features | 3 fully connected layers with a grid search to find the optimal number of neurons per layer and the activation functions. | N/A
GAN-RF | WGAN-GP | Statistical features | A grid search is designed to find the optimal number of estimators and the splitting criterion (between ‘gini’ and ‘entropy’). | N/A
To avoid the effect of weight initialization and randomness on the results, we ran each framework ten independent times, using a k-fold cross-validation technique for each imbalance and noise degree condition. Figure 11 illustrates the corresponding normalized confusion matrices and the model performances. Comparing the different scenarios, it can plainly be concluded that GAN-CLSTM-ELM has a better ability to attenuate the negative effects of imbalance and noise conditions compared to the other frameworks. Regarding the highly
imbalanced situation, its $f_{1}$ score has gently dropped by 0.32% in the
first two scenarios (SNR:100, $\alpha:2^{-2}$ and SNR:100, $\alpha:2^{2}$)
while the other frameworks have shown relatively substantial declines in their
$f_{1}$ scores, ranging from 1.14% (GAN-CNN) to roughly 48% (df-CNN). In the
second scenario, while the proposed model correctly identifies all the
minority class samples, CLSTM-ELM and GAN-CNN were able to classify roughly
92% of them. This percentage for CLSTM, sdAE, WELM and CNN-STFT was between 80
and 85. CNN-FFT and df-CNN were among the poorest models to identify the
minority class as df-CNN could not correctly diagnose any of the corresponding
samples. The figure also shows that replacing the fully connected layers with
W-ELM in the CLSTM-ELM model has slightly increased its robustness when
$\alpha$ plummets from 4 to 0.25.
Figure 11: Confusion matrices and $f_{1}$-scores of the comparison panel in
different scenarios(t represents true labeled samples)
In the presence of heavy noises, there are sudden falls in the performances of
all the algorithms. Comparing the first and the third scenarios (SNR=100,
$\alpha=2^{2}$ and SNR=10, $\alpha=2^{2}$), GAN-CLSTM-ELM, CLSTM-ELM and GAN-
CNN had the least decrease in the $f_{1}$ score (roughly 5%); thus, they were
the most robust algorithms in noisy conditions. However, CNN-STFT achieved
comparatively poorer results when $\alpha$ dips below 1. Its combination with
a WGAN-GP mitigated this loss and GAN-CNN achieved a satisfactory result. With
the presence of heavy noises, WELM classification quality drastically plunged
and, despite its comparatively satisfactory performance in the first two
scenarios, the noise made it unable to diagnose the minority class in highly
imbalanced situations. By comparing the confusion matrices of WELM and CLSTM-ELM, it was demonstrated that the CLSTM architecture alongside the WELM model improves its robustness against noise. It is worth noting that both models that comprise WGAN-GP showed the highest scores in the first, second and last scenarios and exhibited superiority over their root algorithms, CLSTM-ELM, CLSTM, WELM and CNN-STFT. This proves that WGAN-GP can effectively enhance the
quality of the classifier not only in imbalanced situations but also in noisy
environments.
## 6 Discussion and Conclusions
In many real applications of fault detection and diagnosis, data tend to be imbalanced and noisy, meaning that the number of samples for some fault classes is much smaller than that of the normal data samples and there are errors in the measurements recorded by the sensors. These two conditions make
many traditional FDD frameworks perform poorly in real-world industrial
environments.
In this paper a novel framework called GAN-CLSTM-ELM is proposed, which
enhances the performance of rotating machinery FDD systems coping with highly-
imbalanced and noisy datasets. In this framework, WGAN-GP is first applied to
augment the minority class and enhance the training set. A hybrid classifier
is then developed, containing Convolutional LSTM and Weighted ELM, which
learns more efficiently from vibration signals. The framework also benefits
from both wavelet and Fourier transform techniques in its feature engineering
step, revealing more hidden information of the fault signatures to make the
classifier perform more accurately. The effectiveness of the proposed
framework is verified by using four dataset settings with different imbalance
severities and SNRs. After conducting the comparisons with state-of-the-art
FDD algorithms, it is demonstrated that the GAN-CLSTM-ELM framework can reduce
the misclassification rate and outperform the other methods, more
significantly when the imbalance degree is higher. The efficiency of the WGAN-
GP is also proved by comparing the results of the proposed model and CLSTM-ELM
as well as those of GAN-CNN and CNN-SFTF models. The experimental results make
it discernible that using a generative algorithm helps to alleviate the
adverse impacts of low SNRs. Therefore, it stresses the necessity of employing
such hybrid frameworks for practitioners working on noisy and industrial
applications. The paper also justifies the implementation of W-ELM in the
architecture of CLSTM, since the adjusted model shows sturdy classification
when $\alpha$ decreases either in noisy or noiseless scenarios. A sensitivity
analysis is designed with 25 dataset settings built on a range of $\alpha$ and
SNR values, to obtain insights of how these two factors impact on the model’s
classification ability.
Extracting the FFT and CWT spectra needs some knowledge of signal processing
and is still more convenient than extracting other hand-crafted features
proposed in the literature. Another advantage of the proposed framework is
that it gains comparatively high performances under noisy conditions while it
requires no complex denoising pre-processing being handled by employees with
expert knowledge of signal processing. These characteristics make GAN-CLSTM-
ELM an attractive option for industrial practitioners who are in need of a
relatively easy-to-use software without undergoing any complicated pre-
processing task.
Future work will include more experiments on the behavior of different
generative algorithms and the development of a more powerful architecture to
create high-quality signals with fewer samples. We will also attempt to
explore the feasibility of implementing and testing the proposed framework on
other applications.
DIFFERENTIAL EQUATIONS AND CONTROL PROCESSES
N. 4, 2020
Electronic Journal, ISSN 1817-2172
http://diffjournal.spbu.ru/
Numerical methods
# A Numerical-Analytical Method for Constructing Periodic Solutions of the
Lorenz System
Alexander N. Pchelintsev
Tambov State Technical University,
ul. Sovetskaya 106, Tambov, 392000, Russia
e-mail: <EMAIL_ADDRESS>
Abstract. This article describes a method for constructing approximations to
periodic solutions of the dynamical Lorenz system with the classical values of
the system parameters. The author obtains, in general form, a system of
nonlinear algebraic equations for the cyclic frequency, the constant terms,
and the amplitudes of the harmonics that make up the harmonic approximations
to the desired solutions. An initial approximation for the Newton method is
selected that converges to a solution describing a periodic solution different
from the equilibrium positions. The results of a computational experiment are
presented and verified using high-precision calculations.
Keywords: Attractor, Lorenz Attractor, Trigonometric Polynomial, Newton’s
Method.
## 1 Introduction
Let us consider the nonlinear system of differential equations introduced by
E. Lorenz in [1]
$\left\\{\begin{array}[]{l}\dot{x}_{1}=\sigma(x_{2}-x_{1}),\vskip 6.0pt plus
2.0pt minus 2.0pt\\\ \dot{x}_{2}=rx_{1}-x_{2}-x_{1}x_{3},\vskip 6.0pt plus
2.0pt minus 2.0pt\\\ \dot{x}_{3}=x_{1}x_{2}-bx_{3},\end{array}\right.$ (1)
where $\sigma=10$, $r=28$ and $b=8/3$ are the classical values of the system
parameters.
Let us denote
$X(t)=\left[x_{1}(t)\>\>x_{2}(t)\>\>x_{3}(t)\right]^{\scriptsize\mbox{T}}$. It
is proved in the article [1] that there exists a number $C>0$ such that every
solution $X(t)$ of the system (1) satisfies $|X(t)|<C$ starting from some time
moment, and that the divergence of the velocity vector field of the system (1)
is negative everywhere in $\mathbb{R}^{3}$ for the classical values of the
system parameters. Hence [1] there exists a limit set, called the Lorenz
attractor, to which all trajectories of the dynamical system are attracted as
time tends to infinity. Thus the attractor determines the behavior of the
solutions of the dynamical system over large time intervals.
W. Tucker proved in [2] that the attractor of the system (1) is hyperbolic,
that is, it consists of cycles that are everywhere dense on it and along which
nearby trajectories diverge exponentially. This produces their chaotic
behavior.
As is known [3, 4], symbolic dynamics is used to track cycles in the Lorenz
system. The region of phase space containing the attractor is divided into a
finite number of subdomains. Denoting each partition element by a symbol, the
trajectories on the attractor passing through the corresponding regions are
encoded by sequences of such symbols. If a sequence is regular (groups of
symbols repeat), then we have a candidate cycle. However, the return of a
trajectory to a neighborhood of its earlier part does not imply its closure. A
critique of the results of such computational experiments can be found, for
example, in [5].
In 2004, D. Viswanath published the paper [6], in which he presented the
initial conditions and periods for three cycles in the Lorenz attractor with
high accuracy. His calculation algorithm is based on the Lindstedt-Poincaré
(LP) method, which (unlike numerical integration methods) is not affected by
the stability of the cycle to which approximations are constructed.
An analysis of Viswanath's articles [6, 7] shows that the author gives a
general description of the algorithm without reference to its computer
implementation (in MATLAB, as indicated in his works). Moreover, it is not
clear how the resulting inhomogeneous linear system of differential equations
with periodic coefficients is solved symbolically within the LP method. For
example, this can be done for the Van der Pol equation without any special
problems.
In the article [6] Viswanath presented data that can be verified by solving
the Cauchy problem with high-precision numerical methods (for example, [8]),
but the details of the algorithm are not disclosed.
Therefore, it is important to obtain the values of the initial conditions and
the period with a given accuracy, describing in detail the implementation of
the cycle-search algorithm for the system (1).
The goal of this article is to develop a numerical-analytical method for
constructing approximations to periodic solutions of the Lorenz system that is
simpler to implement than the LP method. Along the way, a system of nonlinear
algebraic equations for the cyclic frequency, the constant terms, and the
amplitudes of the harmonics making up the desired solution will be obtained in
general form.
## 2 A Numerical-Analytical Method
Attempts to construct approximate periodic solutions of the system (1) were
made before Viswanath (for example, [9]) by the method of harmonic balance,
but with low accuracy in the representation of real numbers; moreover, the
article [9] does not give the initial conditions and periods of the cycles
found (only drawings of the cycles are shown). This method is now being
actively developed in the works of A. Luo [10, 11, 12] for finding periodic
solutions of nonlinear systems of differential equations.
Next, we describe a numerical-analytical method for constructing
approximations to periodic solutions of the system (1). To this end, we
approximate the phase coordinates on the period $T$ by trigonometric
polynomials in general form with an unknown cyclic frequency $\omega$ (since
we do not know the value of $T$; in the general case, it can be an irrational
number):
$\begin{array}[]{l}\displaystyle
x_{1}(t)\approx\tilde{x}_{1}(t)=x_{1,0}+\sum_{i=1}^{h}\left(c_{1,i}\cos(i\omega
t)+s_{1,i}\sin(i\omega t)\right),\vskip 6.0pt plus 2.0pt minus 2.0pt\\\
\displaystyle
x_{2}(t)\approx\tilde{x}_{2}(t)=x_{2,0}+\sum_{i=1}^{h}\left(c_{2,i}\cos(i\omega
t)+s_{2,i}\sin(i\omega t)\right),\vskip 6.0pt plus 2.0pt minus 2.0pt\\\
\displaystyle
x_{3}(t)\approx\tilde{x}_{3}(t)=x_{3,0}+\sum_{i=1}^{h}\left(c_{3,i}\cos(i\omega
t)+s_{3,i}\sin(i\omega t)\right),\end{array}$
where $h$ is the given number of harmonics. If $i>h$, then we assume
$c_{1,i}=s_{1,i}=c_{2,i}=s_{2,i}=c_{3,i}=s_{3,i}=0.$ (2)
By the right-hand side of the system (1), we compose the residuals
$\begin{array}[]{l}\delta_{1}(t)=\tilde{x}^{\prime}_{1}(t)-\sigma[\tilde{x}_{2}(t)-\tilde{x}_{1}(t)],\\\
\delta_{2}(t)=\tilde{x}^{\prime}_{2}(t)-[r\tilde{x}_{1}(t)-\tilde{x}_{2}(t)-\tilde{x}_{1}(t)\tilde{x}_{3}(t)],\\\
\delta_{3}(t)=\tilde{x}^{\prime}_{3}(t)-[\tilde{x}_{1}(t)\tilde{x}_{2}(t)-b\tilde{x}_{3}(t)],\end{array}$
where the prime denotes the time derivative of the function. If the
calculations are carried out in analytical form, then for each residual one
needs to do the following (a numerical cross-check is sketched after the
list):
1. Differentiate the corresponding trigonometric polynomial with respect to time;
2. Where products of phase coordinates appear, multiply the corresponding trigonometric polynomials, converting the products of trigonometric functions into sums;
3. Collect like terms for each function $\cos()$ and $\sin()$ with the corresponding argument;
4. By virtue of the equalities (2), cut off the higher-order harmonics from the resulting residual;
5. Set the resulting residual to zero, i.e., equate the coefficients of its harmonics to zero.
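The five steps above are carried out symbolically; they can also be
cross-checked numerically, since projecting the full residual onto $1$,
$\cos(i\omega t)$ and $\sin(i\omega t)$ for $i\leq h$ is unaffected by cutting
off the harmonics above $h$ (they are orthogonal to the retained ones). A
minimal Python sketch of such a check (an illustration added here, not part of
the author's software [13]):

import numpy as np

def trig_poly(x0, c, s, w):
    # x~(t) = x0 + sum_i (c_i cos(i w t) + s_i sin(i w t)), as a callable
    i = np.arange(1, len(c) + 1)
    return lambda t: x0 + np.cos(np.outer(t, i)) @ c + np.sin(np.outer(t, i)) @ s

def residual_coeffs(delta, w, h, M=4096):
    # Constant term and harmonic amplitudes of a residual delta(t),
    # obtained by projection over one period 2*pi/w
    t = np.linspace(0.0, 2 * np.pi / w, M, endpoint=False)
    d = delta(t)
    a0 = d.mean()
    a = np.array([2 * (d * np.cos(i * w * t)).mean() for i in range(1, h + 1)])
    b = np.array([2 * (d * np.sin(i * w * t)).mean() for i in range(1, h + 1)])
    return a0, a, b

For a solution of the algebraic system, all returned coefficients should
vanish to within the accuracy of the Newton method.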
Putting together the algebraic equations found for each residual, we obtain a
still unclosed system of nonlinear equations for the unknown amplitudes
$c_{1,i}$, $s_{1,i}$, $c_{2,i}$, $s_{2,i}$, $c_{3,i}$ and $s_{3,i}$
($i=\overline{1,h}$), the constant terms $x_{1,0}$, $x_{2,0}$ and $x_{3,0}$,
and the cyclic frequency $\omega$. The number of unknowns in the system is
$3(1+2h)+1=6h+4$, but there is one equation fewer.
An additional equation can be obtained from the following considerations. It
is known (see [4, 6]) that the desired cycles intersect the plane passing
through the equilibrium positions of the system (1)
$O_{1}\left(-\sqrt{b(r-1)},\,-\sqrt{b(r-1)},\,r-1\right),\>\>O_{2}\left(\sqrt{b(r-1)},\,\sqrt{b(r-1)},\,r-1\right)$
(3)
and parallel to the plane $x_{1}Ox_{2}$ (a Poincaré section). Then the third
coordinate of the initial condition for the desired cycles equals $r-1$,
whence $\tilde{x}_{3}(0)=r-1$.
Therefore the additional equation of the system has the form:
$x_{3,0}+\sum_{i=1}^{h}c_{3,i}-27=0.$
The author has not found in the literature any other additional information on
periodic solutions of the Lorenz system. Note that for the three cycles found
by Viswanath, the number 27 was taken as the third coordinate of the initial
condition.
Next, we give an example of a system of equations for $h=2$:
$\left\\{\begin{aligned} \omega s_{1,1}-10c_{2,1}+10c_{1,1}&=0,\\\
-10s_{2,1}+10s_{1,1}-c_{1,1}\omega&=0,\\\ 2\omega
s_{1,2}-10c_{2,2}+10c_{1,2}&=0,\\\ -10s_{2,2}+10s_{1,2}-2c_{1,2}\omega&=0,\\\
10x_{1,0}-10x_{2,0}&=0,\\\
c_{1,1}x_{3,0}+c_{3,1}x_{1,0}+\dfrac{s_{1,1}s_{3,2}}{2}+\dfrac{s_{1,2}s_{3,1}}{2}+\omega
s_{2,1}+\dfrac{c_{1,1}c_{3,2}}{2}+\dfrac{c_{1,2}c_{3,1}}{2}+c_{2,1}-28c_{1,1}&=0,\\\
s_{1,1}x_{3,0}+s_{3,1}x_{1,0}+\dfrac{c_{1,1}s_{3,2}}{2}-\dfrac{c_{1,2}s_{3,1}}{2}+s_{2,1}+\dfrac{c_{3,1}s_{1,2}}{2}-\dfrac{c_{3,2}s_{1,1}}{2}-28s_{1,1}-c_{2,1}\omega&=0,\\\
c_{1,2}x_{3,0}+c_{3,2}x_{1,0}-\dfrac{s_{1,1}s_{3,1}}{2}+2\omega
s_{2,2}+\dfrac{c_{1,1}c_{3,1}}{2}+c_{2,2}-28c_{1,2}&=0,\\\
s_{1,2}x_{3,0}+s_{3,2}x_{1,0}+\dfrac{c_{1,1}s_{3,1}}{2}+s_{2,2}-28s_{1,2}+\dfrac{c_{3,1}s_{1,1}}{2}-2c_{2,2}\omega&=0,\\\
x_{1,0}x_{3,0}+x_{2,0}-28x_{1,0}+\dfrac{s_{1,2}s_{3,2}}{2}+\dfrac{s_{1,1}s_{3,1}}{2}+\dfrac{c_{1,2}c_{3,2}}{2}+\dfrac{c_{1,1}c_{3,1}}{2}&=0,\\\
-c_{1,1}x_{2,0}-c_{2,1}x_{1,0}+\omega
s_{3,1}-\dfrac{s_{1,1}s_{2,2}}{2}-\dfrac{s_{1,2}s_{2,1}}{2}+\dfrac{8c_{3,1}}{3}-\dfrac{c_{1,1}c_{2,2}}{2}-\dfrac{c_{1,2}c_{2,1}}{2}&=0,\\\
-s_{1,1}x_{2,0}-s_{2,1}x_{1,0}+\dfrac{8s_{3,1}}{3}-\dfrac{c_{1,1}s_{2,2}}{2}+\dfrac{c_{1,2}s_{2,1}}{2}-\dfrac{c_{2,1}s_{1,2}}{2}+\dfrac{c_{2,2}s_{1,1}}{2}-c_{3,1}\omega&=0,\\\
-c_{1,2}x_{2,0}-c_{2,2}x_{1,0}+2\omega
s_{3,2}+\dfrac{s_{1,1}s_{2,1}}{2}+\dfrac{8c_{3,2}}{3}-\dfrac{c_{1,1}c_{2,1}}{2}&=0,\\\
-s_{1,2}x_{2,0}-s_{2,2}x_{1,0}+\dfrac{8s_{3,2}}{3}-\dfrac{c_{1,1}s_{2,1}}{2}-\dfrac{c_{2,1}s_{1,1}}{2}-2c_{3,2}\omega&=0,\\\
\dfrac{8x_{3,0}}{3}-x_{1,0}x_{2,0}-\dfrac{s_{1,2}s_{2,2}}{2}-\dfrac{s_{1,1}s_{2,1}}{2}-\dfrac{c_{1,2}c_{2,2}}{2}-\dfrac{c_{1,1}c_{2,1}}{2}&=0,\\\
x_{3,0}+c_{3,1}+c_{3,2}-27&=0.\end{aligned}\right.$
Note that for any $h$ a similar system has solutions
$\begin{array}[]{c}\displaystyle
x_{1,0}=x_{2,0}=\pm\sqrt{b(r-1)},\>x_{3,0}=r-1,\>c_{k,i}=0,\>s_{k,i}=0,\\\
\omega\>\mbox{is any
number},\>\,k=\overline{1,3},\>i=\overline{1,h},\end{array}$
corresponding to the equilibrium positions (3).
Therefore the resulting nonlinear system of algebraic equations has a
non-unique solution. To find its approximate solutions, we will use Newton's
method, whose convergence to the desired solution (i.e., one describing a
periodic solution of the system (1) different from its equilibrium positions)
depends on the choice of the initial approximation.
## 3 The Symbolic Computations to Obtain the System of Algebraic Equations
Thus, to obtain an approximation to a periodic solution, we must derive a
nonlinear system for the unknown expansion coefficients and the frequency. As
shown in the previous section, even for two harmonics the system is bulky. We
therefore consider an algorithm that performs the symbolic calculations needed
to obtain it.
When developing the software [13], the Maxima package (a computer algebra
system) was chosen. The program that obtains the amplitudes and constant terms
of the residuals for $h=2$ is presented below.
/* [wxMaxima batch file version 1] [ DO NOT EDIT BY HAND! ]*/
/* [wxMaxima: input start ] */
display2d:false$
x1:x10+c1c1*cos(1*omega*t)+s1c1*sin(1*omega*t)+
c1c2*cos(2*omega*t)+s1c2*sin(2*omega*t)$
x2:x20+c2c1*cos(1*omega*t)+s2c1*sin(1*omega*t)+
c2c2*cos(2*omega*t)+s2c2*sin(2*omega*t)$
x3:x30+c3c1*cos(1*omega*t)+s3c1*sin(1*omega*t)+
c3c2*cos(2*omega*t)+s3c2*sin(2*omega*t)$
assume(omega > 0)$
delta1:trigreduce(diff(x1,t)-(10*(x2-x1)),t)$
delta2:trigreduce(diff(x2,t)-(28*x1-x2-x1*x3),t)$
delta3:trigreduce(diff(x3,t)-(x1*x2-8/3*x3),t)$
expand(diff(delta1,cos(1*omega*t)));
expand(diff(delta1,sin(1*omega*t)));
expand(diff(delta1,cos(2*omega*t)));
expand(diff(delta1,sin(2*omega*t)));
expand(integrate(delta1,t,0,2*%pi/omega)*omega/(2*%pi));
expand(diff(delta2,cos(1*omega*t)));
expand(diff(delta2,sin(1*omega*t)));
expand(diff(delta2,cos(2*omega*t)));
expand(diff(delta2,sin(2*omega*t)));
expand(integrate(delta2,t,0,2*%pi/omega)*omega/(2*%pi));
expand(diff(delta3,cos(1*omega*t)));
expand(diff(delta3,sin(1*omega*t)));
expand(diff(delta3,cos(2*omega*t)));
expand(diff(delta3,sin(2*omega*t)));
expand(integrate(delta3,t,0,2*%pi/omega)*omega/(2*%pi));
/* [wxMaxima: input end ] */
The expression `display2d:false$` turns off the multi-line rendering of
fractions, powers, etc. The terminator `$` evaluates an expression without
displaying the result (unlike `;`). The function `trigreduce(expression,t)`
converts all products of trigonometric functions of the variable $t$ into
sums. Differentiating the residuals with respect to the harmonic functions
yields the corresponding amplitudes. The function `expand(expression)` expands
brackets (performs multiplication and exponentiation, and collects like
terms).
To find the constant terms of the residuals, they are integrated over the
period, i.e. the constant term of the $k$-th residual is
$\dfrac{\displaystyle\omega\int_{0}^{\frac{2\pi}{\omega}}\delta_{k}(t)dt}{2\pi}.$
The command `assume(omega > 0)$` is given so that the package does not ask
about the sign of the frequency during the symbolic integration.
A file with package commands for any number $h$ of harmonics is generated in a
similar way by a computer program written in C++ [13]. After this program is
executed, the package outputs to the console the symbolic expressions for the
left-hand side of the system of algebraic equations, which it then solves by
the Newton method.
Note that the most time-consuming operation here is the symbolic integration.
For example, for 120 harmonics the formation of the system takes more than 2
days. The computational process could be parallelized over three computers,
but this would not have a significant effect. It is therefore preferable to
form the system of algebraic equations directly; its general form is derived
in the next section. Note that when the system of nonlinear equations is
solved by the Newton method, the Jacobi matrix of the left-hand side is not
inverted: the Maxima package uses LU decomposition to solve a system of linear
equations at each iteration of the method.
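To make the last remark concrete, here is a minimal Newton iteration with a
finite-difference Jacobi matrix, in which the linear step is solved by LU
factorization rather than by inverting the matrix (a Python sketch added for
illustration, not the Maxima implementation; the tolerances are placeholder
values):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton(F, u0, tol=1e-8, maxit=50, fd_eps=1e-7):
    # Newton's method for F(u) = 0; J du = F(u) is solved via LU factorization
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(maxit):
        f = F(u)
        if np.linalg.norm(f, np.inf) < tol:
            break
        J = np.empty((f.size, u.size))
        for j in range(u.size):          # finite-difference Jacobi matrix
            h = fd_eps * max(1.0, abs(u[j]))
            up = u.copy(); up[j] += h
            J[:, j] = (F(up) - f) / h
        u -= lu_solve(lu_factor(J), f)
    return u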
## 4 General Form of the System of Algebraic Equations
Since the right-hand side of the system (1) contains nonlinearities in the
form of products of phase coordinates, let us obtain relations expressing the
coefficients of trigonometric polynomials obtained by multiplying the
approximations $\tilde{x}_{1}(t)\tilde{x}_{3}(t)$ and
$\tilde{x}_{1}(t)\tilde{x}_{2}(t)$.
We consider two functions $f(t)$ and $F(t)$ represented by Fourier series
$\begin{array}[]{c}\displaystyle
f(t)=a_{0}+\sum_{i=1}^{\infty}\left(a_{i}\cos(i\omega t)+b_{i}\sin(i\omega
t)\right),\vskip 6.0pt plus 2.0pt minus 2.0pt\\\ \displaystyle
F(t)=A_{0}+\sum_{i=1}^{\infty}\left(A_{i}\cos(i\omega t)+B_{i}\sin(i\omega
t)\right).\end{array}$
Let
$f(t)F(t)=\alpha_{0}+\sum_{i=1}^{\infty}\left(\alpha_{i}\cos(i\omega
t)+\beta_{i}\sin(i\omega t)\right).$
Following the book [14, pp. 123-125], we have the relations:
$\alpha_{0}=a_{0}A_{0}+\dfrac{1}{2}\sum_{m=1}^{\infty}\left(a_{m}A_{m}+b_{m}B_{m}\right),$
$\alpha_{i}=a_{0}A_{i}+\dfrac{1}{2}\sum_{m=1}^{\infty}\left(a_{m}(A_{m+i}+A_{m-i})+b_{m}(B_{m+i}+B_{m-i})\right),$
(4)
$\beta_{i}=a_{0}B_{i}+\dfrac{1}{2}\sum_{m=1}^{\infty}\left(a_{m}(B_{m+i}-B_{m-i})-b_{m}(A_{m+i}-A_{m-i})\right).$
(5)
We assume that for $i>h$
$a_{i}=b_{i}=A_{i}=B_{i}=0.$
Since for our problem we seek an approximation up to and including the $h$-th
harmonic, we zero all the amplitudes of the product for $i>h$, i.e.
$\alpha_{i}=\beta_{i}=0.$
Thus, we pass from the product of series to the product of trigonometric
polynomials. Also, in the relations (4) and (5) we will assume [14, p. 124]
that
$A_{m-i}=A_{i-m},\>\>B_{m-i}=-B_{i-m},\>\>B_{0}=0.$
Then we get
$\alpha_{0}=a_{0}A_{0}+\dfrac{1}{2}\sum_{m=1}^{h}\left(a_{m}A_{m}+b_{m}B_{m}\right),$
$\displaystyle\alpha_{i}$
$\displaystyle=a_{0}A_{i}+\dfrac{1}{2}\sum_{m=1}^{\infty}a_{m}A_{m+i}+\dfrac{1}{2}\sum_{m=1}^{\infty}a_{m}A_{m-i}+\dfrac{1}{2}\sum_{m=1}^{\infty}b_{m}B_{m+i}+\dfrac{1}{2}\sum_{m=1}^{\infty}b_{m}B_{m-i}=$
$\displaystyle=a_{0}A_{i}+\dfrac{1}{2}\sum_{m=1}^{h-i}a_{m}A_{m+i}+\dfrac{1}{2}a_{i}A_{0}+\dfrac{1}{2}\sum_{m=1}^{i-1}a_{m}A_{i-m}+\dfrac{1}{2}\sum_{m=i+1}^{h}a_{m}A_{m-i}+$
$\displaystyle+\dfrac{1}{2}\sum_{m=1}^{h}b_{m}B_{m+i}+\dfrac{1}{2}b_{i}B_{0}-\dfrac{1}{2}\sum_{m=1}^{i-1}b_{m}B_{i-m}+\dfrac{1}{2}\sum_{m=i+1}^{h}b_{m}B_{m-i}=$
$\displaystyle=a_{0}A_{i}+a_{i}A_{0}+\dfrac{1}{2}\sum_{m=1}^{h-i}\left(a_{m}A_{m+i}+b_{m}B_{m+i}\right)+\dfrac{1}{2}\sum_{m=1}^{i-1}\left(a_{m}A_{i-m}-b_{m}B_{i-m}\right)+$
$\displaystyle+\dfrac{1}{2}\sum_{m=i+1}^{h}\left(a_{m}A_{m-i}+b_{m}B_{m-i}\right),$
$\displaystyle\beta_{i}$
$\displaystyle=a_{0}B_{i}+\dfrac{1}{2}\sum_{m=1}^{\infty}a_{m}B_{m+i}-\dfrac{1}{2}\sum_{m=1}^{\infty}a_{m}B_{m-i}-\dfrac{1}{2}\sum_{m=1}^{\infty}b_{m}A_{m+i}+\dfrac{1}{2}\sum_{m=1}^{\infty}b_{m}A_{m-i}=$
$\displaystyle=a_{0}B_{i}+\dfrac{1}{2}\sum_{m=1}^{h-i}a_{m}B_{m+i}+\dfrac{1}{2}\sum_{m=1}^{i-1}a_{m}B_{i-m}-\dfrac{1}{2}\sum_{m=i+1}^{h}a_{m}B_{m-i}-$
$\displaystyle-\dfrac{1}{2}\sum_{m=1}^{h-i}b_{m}A_{m+i}+b_{i}A_{0}+\dfrac{1}{2}\sum_{m=1}^{i-1}b_{m}A_{i-m}+\dfrac{1}{2}\sum_{m=i+1}^{h}b_{m}A_{m-i}=$
$\displaystyle=a_{0}B_{i}+b_{i}A_{0}+\dfrac{1}{2}\sum_{m=1}^{h-i}\left(a_{m}B_{m+i}-b_{m}A_{m+i}\right)+\dfrac{1}{2}\sum_{m=1}^{i-1}\left(a_{m}B_{i-m}+b_{m}A_{i-m}\right)+$
$\displaystyle+\dfrac{1}{2}\sum_{m=i+1}^{h}\left(-a_{m}B_{m-i}+b_{m}A_{m-i}\right).$
Applying the obtained formulas for the products of trigonometric polynomials
to the residuals, we can write the equations for the $i$-th harmonics
($i=\overline{1,h}$ is the harmonic number, $k=\overline{1,3}$ is the residual
number):
$k=1$:
$\begin{array}[]{r}i\omega s_{1,i}-10c_{2,i}+10c_{1,i}=0,\vskip 6.0pt plus
2.0pt minus 2.0pt\\\ -i\omega c_{1,i}-10s_{2,i}+10s_{1,i}=0,\end{array}$
the equation corresponding to the constant term for the first residual is
$x_{1,0}-x_{2,0}=0,$
$k=2$:
$\displaystyle i\omega
s_{2,i}-28c_{1,i}+c_{2,i}+x_{1,0}c_{3,i}+c_{1,i}x_{3,0}$
$\displaystyle+\dfrac{1}{2}\sum_{m=1}^{h-i}\left(c_{1,m}c_{3,m+i}+s_{1,m}s_{3,m+i}\right)+$
$\displaystyle+\dfrac{1}{2}\sum_{m=1}^{i-1}\left(c_{1,m}c_{3,i-m}-s_{1,m}s_{3,i-m}\right)+$
$\displaystyle+\dfrac{1}{2}\sum_{m=i+1}^{h}\left(c_{1,m}c_{3,m-i}+s_{1,m}s_{3,m-i}\right)=0,$
$\displaystyle-i\omega
c_{2,i}-28s_{1,i}+s_{2,i}+x_{1,0}s_{3,i}+s_{1,i}x_{3,0}$
$\displaystyle+\dfrac{1}{2}\sum_{m=1}^{h-i}\left(c_{1,m}s_{3,m+i}-s_{1,m}c_{3,m+i}\right)+$
$\displaystyle+\dfrac{1}{2}\sum_{m=1}^{i-1}\left(c_{1,m}s_{3,i-m}+s_{1,m}c_{3,i-m}\right)+$
$\displaystyle+\dfrac{1}{2}\sum_{m=i+1}^{h}\left(-c_{1,m}s_{3,m-i}+s_{1,m}c_{3,m-i}\right)=0,$
the equation corresponding to the constant term for the second residual is
$-28x_{1,0}+x_{2,0}+x_{1,0}x_{3,0}+\dfrac{1}{2}\sum_{m=1}^{h}\left(c_{1,m}c_{3,m}+s_{1,m}s_{3,m}\right)=0,$
$k=3$:
$\displaystyle i\omega s_{3,i}-x_{1,0}c_{2,i}-c_{1,i}x_{2,0}$
$\displaystyle-\dfrac{1}{2}\sum_{m=1}^{h-i}\left(c_{1,m}c_{2,m+i}+s_{1,m}s_{2,m+i}\right)-$
$\displaystyle-\dfrac{1}{2}\sum_{m=1}^{i-1}\left(c_{1,m}c_{2,i-m}-s_{1,m}s_{2,i-m}\right)-$
$\displaystyle-\dfrac{1}{2}\sum_{m=i+1}^{h}\left(c_{1,m}c_{2,m-i}+s_{1,m}s_{2,m-i}\right)+\dfrac{8}{3}c_{3,i}=0,$
$\displaystyle-i\omega c_{3,i}-x_{1,0}s_{2,i}-s_{1,i}x_{2,0}$
$\displaystyle-\dfrac{1}{2}\sum_{m=1}^{h-i}\left(c_{1,m}s_{2,m+i}-s_{1,m}c_{2,m+i}\right)-$
$\displaystyle-\dfrac{1}{2}\sum_{m=1}^{i-1}\left(c_{1,m}s_{2,i-m}+s_{1,m}c_{2,i-m}\right)-$
$\displaystyle-\dfrac{1}{2}\sum_{m=i+1}^{h}\left(-c_{1,m}s_{2,m-i}+s_{1,m}c_{2,m-i}\right)+\dfrac{8}{3}s_{3,i}=0,$
the equation corresponding to the constant term for the third residual is
$-x_{1,0}x_{2,0}-\dfrac{1}{2}\sum_{m=1}^{h}\left(c_{1,m}c_{2,m}+s_{1,m}s_{2,m}\right)+\dfrac{8}{3}x_{3,0}=0,$
the additional system equation is
$x_{3,0}+\sum_{i=1}^{h}c_{3,i}-27=0.$
## 5 The Results of the Computational Experiment
As a result of numerous computational experiments, the following initial
approximation for the cyclic frequency, constant terms, and amplitudes was
chosen at $h=h_{1}=5$:
$\begin{array}[]{c}\omega=4,\>\>x_{1,0}=x_{2,0}=x_{3,0}=0,\>\>c_{1,i}=-1,\>i=\overline{1,5},\\\
s_{1,j}=0,\>j=1,3,4,5,\>\>s_{1,2}=1.\end{array}$
This result is remarkable in that the Newton method converges from it to a
solution different from the equilibrium positions. To improve the accuracy of
the approximate periodic solution, we then consider the system of algebraic
equations for a value of $h$ equal to some $h_{2}>h_{1}$.
The numerical solution of the system obtained for $h=h_{1}$ is taken as the
initial approximation for the amplitudes with indices $i\leq h_{1}$ in the
system with $h=h_{2}$, while the initial values of the amplitudes with indices
$i>h_{1}$ are set to zero; a Python sketch of the first stage ($h=h_{1}=5$) is
given below.
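For illustration, the general form of the system from Section 4 can be
assembled and solved directly; the sketch below is an independent Python
re-implementation (not the author's Maxima/C++ code [13]) that uses SciPy's
hybrid Newton-type solver and the initial approximation above:

import numpy as np
from scipy.optimize import fsolve

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0

def trig_prod(a0, a, b, A0, A, B_):
    # Coefficients alpha_0, alpha_i, beta_i of the truncated product of two
    # trigonometric polynomials (the formulas of Section 4)
    h = len(a)
    al0 = a0 * A0 + 0.5 * np.sum(a * A + b * B_)
    al, be = np.zeros(h), np.zeros(h)
    for i in range(1, h + 1):
        al[i-1] = a0*A[i-1] + a[i-1]*A0 + 0.5*(
            sum(a[m-1]*A[m+i-1] + b[m-1]*B_[m+i-1] for m in range(1, h-i+1))
          + sum(a[m-1]*A[i-m-1] - b[m-1]*B_[i-m-1] for m in range(1, i))
          + sum(a[m-1]*A[m-i-1] + b[m-1]*B_[m-i-1] for m in range(i+1, h+1)))
        be[i-1] = a0*B_[i-1] + b[i-1]*A0 + 0.5*(
            sum(a[m-1]*B_[m+i-1] - b[m-1]*A[m+i-1] for m in range(1, h-i+1))
          + sum(a[m-1]*B_[i-m-1] + b[m-1]*A[i-m-1] for m in range(1, i))
          + sum(b[m-1]*A[m-i-1] - a[m-1]*B_[m-i-1] for m in range(i+1, h+1)))
    return al0, al, be

def residuals(u, h):
    # The 6h+4 equations: harmonic and constant-term equations of the three
    # residuals plus the additional (Poincare-section) equation
    w, x10, x20, x30 = u[0], u[1], u[2], u[3]
    c1, s1, c2, s2, c3, s3 = u[4:].reshape(6, h)
    i = np.arange(1, h + 1)
    p0, pa, pb = trig_prod(x10, c1, s1, x30, c3, s3)   # x1*x3
    q0, qa, qb = trig_prod(x10, c1, s1, x20, c2, s2)   # x1*x2
    return np.concatenate([
        i*w*s1 - SIGMA*c2 + SIGMA*c1,      # k = 1, cos coefficients
        -i*w*c1 - SIGMA*s2 + SIGMA*s1,     # k = 1, sin coefficients
        [SIGMA*(x10 - x20)],               # k = 1, constant term
        i*w*s2 - R*c1 + c2 + pa,           # k = 2, cos
        -i*w*c2 - R*s1 + s2 + pb,          # k = 2, sin
        [-R*x10 + x20 + p0],               # k = 2, constant term
        i*w*s3 + B*c3 - qa,                # k = 3, cos
        -i*w*c3 + B*s3 - qb,               # k = 3, sin
        [B*x30 - q0],                      # k = 3, constant term
        [x30 + c3.sum() - (R - 1.0)],      # additional equation, x3(0) = r - 1
    ])

h = 5
u0 = np.zeros(6*h + 4)
u0[0] = 4.0              # omega
u0[4:4+h] = -1.0         # c_{1,i} = -1
u0[4+h+1] = 1.0          # s_{1,2} = 1
sol = fsolve(lambda u: residuals(u, h), u0, xtol=1e-12)
print("omega =", sol[0], "  T =", 2*np.pi/sol[0])

Increasing $h$ and re-solving with the padded previous solution as the
starting point implements the refinement described above.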
Table 1: The amplitudes of harmonics for $\tilde{x}_{1}(t)$, $x_{1,0}=0$.
$i$ | $c_{1,i}$ | $s_{1,i}$
---|---|---
1 | $-5.780478259196228$ | $8.56017654325353$
2 | 0 | 0
3 | $3.160762628380509$ | $2.239212141102876$
4 | 0 | 0
5 | $0.6958870387616096$ | $-0.7979388979225431$
6 | 0 | 0
7 | $-0.1891992374027477$ | $-0.1864921358925765$
8 | 0 | 0
9 | $-0.04770429623010056$ | $0.04554044367245914$
10 | 0 | 0
11 | $0.01112322884679491$ | $0.01209138588669679$
12 | 0 | 0
13 | $0.003061207095371694$ | $-0.002735092350544739$
14 | 0 | 0
15 | $-6.744578887916229\cdot 10^{-4}$ | $-7.748319471034087\cdot 10^{-4}$
16 | 0 | 0
17 | $-1.960718247379475\cdot 10^{-4}$ | $1.665584161919807\cdot 10^{-4}$
18 | 0 | 0
19 | $4.116738805347028\cdot 10^{-5}$ | $4.960493476144467\cdot 10^{-5}$
20 | 0 | 0
21 | $1.254757391175977\cdot 10^{-5}$ | $-1.018054283421179\cdot 10^{-5}$
22 | 0 | 0
23 | $-2.518375902000733\cdot 10^{-6}$ | $-3.173486439630506\cdot 10^{-6}$
24 | 0 | 0
25 | $-8.025338211960923\cdot 10^{-7}$ | $6.230623750431923\cdot 10^{-7}$
26 | 0 | 0
27 | $1.541534734542893\cdot 10^{-7}$ | $2.0292802821633\cdot 10^{-7}$
28 | 0 | 0
29 | $5.130649139299358\cdot 10^{-8}$ | $-3.813725452268523\cdot 10^{-8}$
30 | 0 | 0
31 | $-9.43393531993558\cdot 10^{-9}$ | $-1.297038481588497\cdot 10^{-8}$
32 | 0 | 0
33 | $-3.278552746800046\cdot 10^{-9}$ | $2.333260259021725\cdot 10^{-9}$
34 | 0 | 0
35 | $5.76957885768651\cdot 10^{-10}$ | $8.28626640138045\cdot 10^{-10}$
Table 2: The amplitudes of harmonics for $\tilde{x}_{2}(t)$, $x_{2,0}=0$.
$i$ | $c_{2,i}$ | $s_{2,i}$
---|---|---
1 | $-2.32972926505593$ | $10.89038310357172$
2 | 0 | 0
3 | $5.86875317198698$ | $-1.5832552129833$
4 | 0 | 0
5 | $-0.9124249133801483$ | $-2.200556873678218$
6 | 0 | 0
7 | $-0.7154457265566421$ | $0.3473932955614448$
8 | 0 | 0
9 | $0.1175186702136983$ | $0.2186139734768588$
10 | 0 | 0
11 | $0.06473984670858603$ | $-0.03723215039412078$
12 | 0 | 0
13 | $-0.01127208646321726$ | $-0.01877739524860192$
14 | 0 | 0
15 | $-0.005359671824365359$ | $0.003303445299126894$
16 | 0 | 0
17 | $9.453499475830811\cdot 10^{-4}$ | $0.001510235036151227$
18 | 0 | 0
19 | $4.211022386354685\cdot 10^{-4}$ | $-2.657049331814368\cdot 10^{-4}$
20 | 0 | 0
21 | $-7.363528144366622\cdot 10^{-5}$ | $-1.164013765469982\cdot 10^{-4}$
22 | 0 | 0
23 | $-3.19419300699788\cdot 10^{-5}$ | $2.017609175377016\cdot 10^{-5}$
24 | 0 | 0
25 | $5.47663534401654\cdot 10^{-6}$ | $8.710929378319451\cdot 10^{-6}$
26 | 0 | 0
27 | $2.362852034076972\cdot 10^{-6}$ | $-1.474901091428546\cdot 10^{-6}$
28 | 0 | 0
29 | $-3.94532524722541\cdot 10^{-7}$ | $-6.379296603810031\cdot 10^{-7}$
30 | 0 | 0
31 | $-1.715198229248314\cdot 10^{-7}$ | $1.049218598356554\cdot 10^{-7}$
32 | 0 | 0
33 | $2.776045093375681\cdot 10^{-8}$ | $4.59473450493284\cdot 10^{-8}$
34 | 0 | 0
35 | $1.22681173575872\cdot 10^{-8}$ | $-7.31171826830086\cdot 10^{-9}$
Table 3: The amplitudes of harmonics for $\tilde{x}_{3}(t)$, $x_{3,0}=23.04210397942006$.
$i$ | $c_{3,i}$ | $s_{3,i}$
---|---|---
1 | 0 | 0
2 | $7.568410271550653$ | $-9.50386584559212$
3 | 0 | 0
4 | $-3.555327211552558$ | $-1.844710563805469$
5 | 0 | 0
6 | $-0.4741220131932616$ | $1.279043179069961$
7 | 0 | 0
8 | $0.4227292179138024$ | $0.1274574086305204$
9 | 0 | 0
10 | $0.03498415351761577$ | $-0.1315337800809524$
11 | 0 | 0
12 | $-0.03934013541135439$ | $-0.009645786231708874$
13 | 0 | 0
14 | $-0.002660052258813564$ | $0.01145537653603837$
15 | 0 | 0
16 | $0.003271688724557337$ | $7.33752523103949\cdot 10^{-4}$
17 | 0 | 0
18 | $2.024982256871223\cdot 10^{-4}$ | $-9.206266886554897\cdot 10^{-4}$
19 | 0 | 0
20 | $-2.560063570343799\cdot 10^{-4}$ | $-5.58964460662525\cdot 10^{-5}$
21 | 0 | 0
22 | $-1.542436654918173\cdot 10^{-5}$ | $7.050327849098175\cdot 10^{-5}$
23 | 0 | 0
24 | $1.926014222030195\cdot 10^{-5}$ | $4.25261452471065\cdot 10^{-6}$
25 | 0 | 0
26 | $1.170939944189529\cdot 10^{-6}$ | $-5.225643926851625\cdot 10^{-6}$
27 | 0 | 0
28 | $-1.409525591131397\cdot 10^{-6}$ | $-3.21879984959824\cdot 10^{-7}$
29 | 0 | 0
30 | $-8.83134288999026\cdot 10^{-8}$ | $3.782652721710986\cdot 10^{-7}$
31 | 0 | 0
32 | $1.010610960272394\cdot 10^{-7}$ | $2.418021923473667\cdot 10^{-8}$
33 | 0 | 0
34 | $6.606163280924149\cdot 10^{-9}$ | $-2.689431432873997\cdot 10^{-8}$
35 | 0 | 0
Figure 1: The cycle obtained by the described method.
Tables 1–3 show the result of solving the system for $h=35$; the accuracy of
the Newton method is $10^{-8}$. The period is found to be $T=1.558652210$, and
the initial condition of the obtained approximate periodic solution is
$\begin{array}[]{c}\tilde{x}_{1}(0)=-2.147367631,\>\>\tilde{x}_{2}(0)=2.078048211,\>\>\tilde{x}_{3}(0)=27.\end{array}$
(6)
The initial values (6) were checked over the period in a computer program that
implements the numerical integration of the system (1) by the modified power
series method [8], with a tolerance of $10^{-25}$ for estimating the common
term of the series, a 100-bit mantissa for real numbers, and machine epsilon
$1.57772\cdot 10^{-30}$. With these parameters of the method, the approximate
values of the phase coordinates obtained by numerical integration were also
verified by the same numerical method run in reverse time. The values obtained
in reverse time coincide with (6) up to and including the 9th digit after the
decimal point. The resulting values of $x_{1}(T)$, $x_{2}(T)$ and $x_{3}(T)$
coincide with (6) up to and including the 8th digit.
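This check can be reproduced with standard multiple-precision tools. For
example, a sketch using the Taylor-series integrator of the Python mpmath
library (not the author's program [8]; the precision settings are chosen to
roughly match those stated above):

from mpmath import mp, mpf, odefun

mp.dps = 30   # about 100 bits of working precision

sigma, r, b = mpf(10), mpf(28), mpf(8) / 3
lorenz = lambda t, x: [sigma * (x[1] - x[0]),
                       r * x[0] - x[1] - x[0] * x[2],
                       x[0] * x[1] - b * x[2]]

x0 = [mpf("-2.147367631"), mpf("2.078048211"), mpf(27)]
T = mpf("1.558652210")
x = odefun(lorenz, 0, x0, tol=mpf(10) ** -25)   # Taylor-method integrator
print([x(T)[k] - x0[k] for k in range(3)])      # small if (6) lies on a cycle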
The cycle corresponding to (6) is shown in Fig. 1. Note that the found cycle
coincides with the first Viswanath cycle in [6]; all digits of $T$ after the
decimal point also coincide with the data from [6].
## 6 Acknowledgements
The reported study was funded by RFBR according to the research project
20-01-00347.
## References
* [1] Lorenz, E. N. Deterministic Nonperiodic Flow, Journal of the Atmospheric Sciences, vol. 20, no. 2 (1963), pp. 130-141.
* [2] Tucker, W. A Rigorous ODE Solver and Smale’s 14th Problem, Foundations of Computational Mathematics, vol. 2, no. 1 (2002), pp. 53-117.
* [3] Rabinovich, M. I. Stochastic Self-Oscillations and Turbulence, Soviet Physics Uspekhi, vol. 21, no. 5 (1978), pp. 443-469.
* [4] Galias, Z., Tucker, W. Validated Study of the Existence of Short Cycles for Chaotic Systems Using Symbolic Dynamics and Interval Tools, International Journal of Bifurcation and Chaos, vol. 21, no. 2 (2011), pp. 551-563.
* [5] Lozi, R. Can We Trust in Numerical Computations of Chaotic Solutions of Dynamical Systems?, Topology and Dynamics of Chaos. In Celebration of Robert Gilmore’s 70th Birthday. - World Scientific Series in Nonlinear Science Series A, vol. 84 (2013), pp. 63-98.
* [6] Viswanath, D. The Fractal Property of the Lorenz Attractor, Physica D: Nonlinear Phenomena, vol. 190, no. 1-2 (2004), pp. 115-128.
* [7] Viswanath, D. The Lindstedt-Poincare Technique as an Algorithm for Computing Periodic Orbits, SIAM Review, vol. 43, no. 3 (2001), pp. 478-495.
* [8] Pchelintsev, A. N. Numerical and Physical Modeling of the Dynamics of the Lorenz System, Numerical Analysis and Applications, vol. 7, no. 2 (2014), pp. 159-167.
* [9] Neymeyr, K., Seelig, F. Determination of Unstable Limit Cycles in Chaotic Systems by Method of Unrestricted Harmonic Balance, Zeitschrift für Naturforschung A, vol. 46, no. 6 (1991), pp. 499-502.
* [10] Luo, A. C. J., Huang, J. Approximate Solutions of Periodic Motions in Nonlinear Systems via a Generalized Harmonic Balance, Journal of Vibration and Control, vol. 18, no. 11 (2011), pp. 1661-1674.
* [11] Luo, A. C. J. Toward Analytical Chaos in Nonlinear Systems, John Wiley & Sons, Chichester, ISBN: 978-1-118-65861-1, 2014, 258 pp.
* [12] Luo, A. C. J., Guo, S. Analytical Solutions of Period-1 to Period-2 Motions in a Periodically Diffused Brusselator, Journal of Computational and Nonlinear Dynamics, vol. 13, no. 9, 090912 (2018), 8 pp.
* [13] Pchelintsev, A. N. The Programs for Finding of Periodic Solutions in the Lorenz Attractor, GitHub, https://github.com/alpchelintsev/periodic_sols
* [14] Tolstov, G. P. Fourier Series, Dover Publications, New York (1962), 336 pp.
# Linear and non-linear infrared response of one-dimensional vibrational
Holstein polarons in the anti-adiabatic limit: optical and acoustical phonon
models
Cyril Falvo <EMAIL_ADDRESS>
Institut des Sciences Moléculaires d’Orsay (ISMO), CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay, France
Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
###### Abstract
The theory of linear and non-linear infrared response of vibrational Holstein
polarons in one-dimensional lattices is presented in order to identify the
spectral signatures of self-trapping phenomena. Using a canonical
transformation the optical response is computed from the small polaron point
of view which is valid in the anti-adiabatic limit. Two types of phonon baths
are considered: optical phonons and acoustical phonons, and simple expressions
are derived for the infrared response. It is shown that for the case of
optical phonons, the linear response can directly probe the polaron density of
states. The model is used to interpret the experimental spectrum of crystaline
actetanilide in the C$=$O range. For the case of acoustical phonons, it is
shown that two bound states can be observed in the two-dimensional infrared
spectrum at low temperature. At high temperature, analysis of the time-
dependence of the two-dimensional infrared spectrum indicates that bath
mediated correlations slow down spectral diffusion. The model is used to
interpret the experimental linear-spectroscopy of model $\alpha$-helix and
$\beta$-sheet polypeptides. This work shows that the Davydov Hamiltonian
cannot explain the observations in the NH stretching range.
## I Introduction
The dynamics of electronic or vibrational excitons in quasi one-dimensional
lattices has been an open topic of research for the past 60 years.Mahan
(1981); Holstein (1959a, b) The interplay between exciton delocalization and
coupling with the lattice vibrations results in the self-trapping phenomena,
i.e. the formation of a polaron. A polaron usually refers to a quasi-particle
that comprises the exciton and the lattice deformation created by the exciton
which modifies its dynamics.Mahan (1981); Holstein (1959a, b) The concept of
self-trapping in one-dimensional lattices has a large number of applications
for example in molecular aggregates,Spano (2010); Huynh _et al._ (2013); Lu
and Mukamel (1991); Sun _et al._ (2015); Chorošajev _et al._ (2014);
Chorosajev, Rancova, and Abramavicius (2016) conjugated polymers,Yamagata and
Spano (2014); Barford and Marcus (2014) halogen-bridged metal
complexes,Okamoto _et al._ (1992) molecular crystalsFillaux (1981); Barthes
_et al._ (1998); Herrebout, Clou, and Desseyn (2001); Careri _et al._ (1983,
1984); Eilbeck, Lomdahl, and Scott (1984); Alexander and Krumhansl (1986);
Edler, Hamm, and Scott (2002); Edler and Hamm (2002, 2003); Hamm and Edler
(2006) and biological macromolecules.Davydov (1973, 1985); Scott (1982, 1992);
Edler _et al._ (2004, 2005); Hamm (2009); Brown and Ivić (1989); Ivić _et
al._ (1997); Pouthier (2003); Pouthier and Falvo (2004); Falvo and Pouthier
(2005a, b, c); Tsivlin, Meyer, and May (2006); Tsivlin and May (2006, 2007);
Bodis _et al._ (2009); Cruzeiro (2009); Goj and Bittner (2011) Vibrational
excitons emerge in molecular crystals or within biological macromolecules by
the delocalization of high-frequency vibrations through dipole-dipole
interactions. This is the case, for example, in $\alpha$-helix polypeptides
where the amide-I band corresponds to the delocalization of the C$=$O
vibrations of each peptide group along the backbone of the peptide, which
forms a quasi one-dimensional lattice.Miyazawa (1960) These excitons are
strongly coupled to the CO$\cdots$NH hydrogen-bonds that stabilize the helix.
This coupling was first introduced by A. S. Davydov, who conjectured the
formation of a soliton able to transfer energy from one end of the
$\alpha$-helix to the other.Davydov (1973, 1985); Scott (1982, 1992) It
appeared later that this coupling results in the formation of a vibrational
polaron rather than a
soliton.Brown and Ivić (1989); Ivić _et al._ (1997); Pouthier (2003);
Pouthier and Falvo (2004); Falvo and Pouthier (2005a, b, c)
Just a few years after the work of Davydov, the infrared (IR) spectroscopy of
the molecular crystals acetanilide (ACN)Careri _et al._ (1983, 1984) and
N-methylacetamide (NMA)Fillaux (1981); Barthes _et al._ (1998); Herrebout,
Clou, and Desseyn (2001) showed an anomalous temperature dependence that was
interpreted as a signature of self-trapping. These molecular crystals, which
consist of quasi-one-dimensional chains of hydrogen-bonded peptide groups
resembling the hydrogen-bond network of $\alpha$-helices, were then considered
as model systems for polypeptides. In ACN, at ambient temperature, the amide-I
band is characterized by a single band located at 1666 $\textrm{cm}^{-1}$,
while at lower temperature a second band appears at 1650 $\textrm{cm}^{-1}$.
The amide-A band corresponding to the N$-$H stretching vibration is
characterized by a main band located at 3295 $\textrm{cm}^{-1}$ with a series
of 9 satellite peaks towards low frequency. These observations show that a
strong coupling occurs between the C$=$O and N$-$H vibrations with some low
frequency optical phonons. From a theoretical point of view the dynamics of
vibrational excitons in molecular crystals and in $\alpha$-helices can be
described by the same Holstein Hamiltonian, which was first introduced to
describe the dynamics of electrons in molecular crystals. The main difference
is that in molecular crystals the vibrational excitons are coupled to optical
phonons while in the Davydov Hamiltonian the vibrational excitons are coupled
to acoustical phonons.Eilbeck, Lomdahl, and Scott (1984); Alexander and
Krumhansl (1986); Scott (1982, 1992)
Two decades after these observations, Hamm and Edler shed new light on the
dynamics of vibrational polarons by performing non-linear IR spectroscopy of
ACN and NMA crystals as well as a model $\alpha$-helix.Edler, Hamm, and Scott
(2002); Edler and Hamm (2002, 2003); Hamm and Edler (2006); Edler _et al._
(2004, 2005) This work was reviewed in Ref. 30. Developed over the past two
decades, time-resolved nonlinear IR spectroscopy, in particular
two-dimensional (2D) IR spectroscopy, has allowed researchers to study the
vibrational dynamics of condensed phase systems including peptides, proteins,
and water.Fayer (2013); Hamm and Zanni (2011); Khalil, Demirdöven, and
Tokmakoff (2003); Loparo, Roberts, and Tokmakoff (2006); Wong _et al._
(2013); Bloem _et al._ (2012); Middleton _et al._ (2012); Bandaria _et al._
(2010); Ghosh _et al._ (2014); Kim and Hochstrasser (2009) 2D-IR spectroscopy
can probe vibrational anharmonic couplings, vibrational relaxation, population
transport, chemical-exchange dynamics and spectral diffusion therefore
providing much more information than absorption spectroscopy.Jansen and
Knoester (2009); Zheng _et al._ (2005); Falvo _et al._ (2008); Cho (2008);
Kim and Hochstrasser (2009) In ACN and NMA, Edler and Hamm used 2D-IR
spectroscopy to show that vibrational self-trapping and the formation of
vibrational polarons occurred in these molecular crystals.Edler, Hamm, and
Scott (2002); Edler and Hamm (2002, 2003); Hamm and Edler (2006); Hamm (2009)
They also performed pump-probe spectroscopy on a model $\alpha$-helix in the
N$-$H spectral range.Edler _et al._ (2004, 2005) They show the appearance in
the two-exciton spectrum of two bound states. These two bound states were
interpreted as the signature of a strong coupling between the vibrational
exciton and a set of accoustical phonons in accordance with the original model
of Davydov. A similar observation was made a few years later in the spectrum
of a model $\beta$-sheet peptide.Bodis _et al._ (2009)
A large number of theoretical studies have been devoted to the Holstein
Hamiltonian (electronic or vibrational). Holstein polarons are usually
described within two limiting cases that depend on the size of the polaron
wavefunction: small and large polarons.Holstein (1959a, b) For the former the
discreteness of the lattice plays a key role while for the later a continuum
approximation can be used. Large polarons are often described within the
adiabatic limit, i.e. the case when the lattice deformation remains static
when the exciton moves along the lattice. In this case, variational
approachesLu and Mukamel (1991); Huynh _et al._ (2013); Sun _et al._ (2015)
or mixed quantum-classical simulationsScott (1992); Cruzeiro (2009) give in
general good results.Brown and Ivić (1989); Ivić _et al._ (1997) In contrast,
small polarons are usually described within the anti-adiabatic limit which
corresponds to a weak exciton coupling.Tempelaar _et al._ (2013) In this
limit the lattice deformation follows the exciton modifying its effective
mass. In this case, a canonical transformation makes it possible to switch to
the small
polaron point of view.Lang and Firsov (1963); Brown and Ivić (1989); Ivić _et
al._ (1997); Pouthier (2003); Pouthier and Falvo (2004); Falvo and Pouthier
(2005a, b, c); Yalouz, Falvo, and Pouthier (2017) It has been shown that for
the case of vibrational excitons, because the hopping constant between
nearest-neighbor lattice sites is in general small compared to the phonon
frequency, the anti-adiabatic limit gives good results.Brown and Ivić (1989);
Ivić _et al._ (1997); Pouthier (2003) The Holstein Hamiltonian has been also
solved by a variety of numerical methods that include the Density Matrix
Renormalization GroupJeckelmann and White (1998), the Multi-Configuration
Time-Dependent Hartree method,Tsivlin and May (2007) the Hierarchy Equation Of
Motion (HEOM)Chen, Zhao, and Tanimura (2015), using the two-particle
approximationPhilpott (1971); Spano (2002) or using a direct exact
diagonalization.Hamm and Edler (2006); Yalouz, Falvo, and Pouthier (2017);
Yalouz, Pouthier, and Falvo (2017) Most theoretical studies focused on the
energy transport properties of polarons and on linear spectroscopy, very few
are dedicated to predicting non-linear spectroscopy. This is particularly true
for the case of vibrational polarons where to my knowledge the few studies
were conducted on pump-probe spectroscopy,Edler _et al._ (2004); Tsivlin and
May (2006); Tsivlin, Meyer, and May (2006); Woutersen (2007) and none were
conducted on 2D-IR spectroscopy. Note that two recent studies focused on the
2D spectroscopy of electronic excitons in molecular aggregates using a
variational approach.Huynh _et al._ (2013); Sun _et al._ (2015) However, as
mentioned earlier, this approach is mostly suited to the case of large
polarons and not to vibrational excitons. Therefore, for vibrational polarons
there is a clear lack of theoretical work predicting the non-linear IR
response.
In this article, the theory of linear and non-linear spectroscopy of
vibrational polarons in one-dimensional lattices is presented in order to
establish a physical framework to identify the spectral signatures of self-
trapping phenomena. The cases of both optical and acoustical phonons are
considered, covering both the Davydov and molecular-crystal models. This
theoretical work relies on the anti-adiabatic approximation which assumes that
the vibrational excitons are slow compared to the phonon bath. Note that in
this article, the simple case of a one-dimensional (1D) lattice is
investigated keeping the Holstein Hamiltonian as simple as possible in order
to set up the framework for the nonlinear response of vibrational polarons and
present analytical results. In section II, simple expressions are derived for
the linear and non-linear optical response of vibrational polarons. These
expressions are used in the section III for a variety of parameters values in
the Holstein model. In section IV, where further theoretical derivations are
performed, the model results are discussed within the context of experimental
observations. Finally, in section V, future experiments to probe self-trapping
phenomena are suggested, the theoretical developments needed in the future are
outlined, and conclusions are presented.
## II Theoretical model
In this section, the theoretical framework describing vibrational excitons
coupled to optical and acoustical phonons is presented within the context of
the anti-adiabatic limit. Using this approximation the linear and third-order
response is given.
### II.1 Vibrational Holstein Hamiltonian
A one-dimensional chain of $N$ identical high frequency vibrations coupled to
a phonon bath is considered. The system Hamiltonian $\hat{H}$ is written as
$\hat{H}=\hat{H}_{v}+\hat{H}_{b}+\hat{H}_{vb},$ (1)
where $\hat{H}_{v}$ is the vibrational Hamiltonian, $\hat{H}_{b}$ the bath
Hamiltonian and $\hat{H}_{vb}$ the coupling between the vibrations and the
bath. The vibrational Hamiltonian $\hat{H}_{v}$ is described by an excitonic
Hamiltonian written as
$\hat{H}_{v}=\sum_{n}\omega_{0}b_{n}^{\dagger}b_{n}-Ab_{n}^{\dagger
2}b_{n}^{2}+J\left(b_{n+1}^{\dagger}+b_{n-1}^{\dagger}\right)b_{n},$ (2)
where $\omega_{0}$ is the fundamental frequency, $A$ is the anharmonicity, $J$
is the hopping constant and where $b_{n}^{\dagger}$ and $b_{n}$ are the vibron
creation and annihilation operators. In Eq. (2) and in the remaining of this
paper, the convention $\hbar=1$ is used. The bath is described by a set of $N$
phonons of frequencies $\Omega_{q}$ and wavevector $q=2\pi p/N$ with
$p=-(N-1)/2,\dots,(N-1)/2$. Using the phonon creation and annihilation
operators $a^{\dagger}_{q}$ and $a_{q}$ the bath Hamiltonian is written
$\hat{H}_{b}=\sum_{q}\Omega_{q}(a_{q}^{\dagger}a_{q}+1/2).$ (3)
To describe the coupling between the high frequency vibrations and the bath
modes it is assumed that each bath mode induces fluctuations of the
fundamental frequencies. The coupling Hamiltonian is then written as
$\hat{H}_{vb}=\frac{1}{\sqrt{N}}\sum_{n}\sum_{q}\left(\Delta_{q}e^{-\textrm{i}qn}a_{q}^{\dagger}+\Delta_{q}^{*}e^{\textrm{i}qn}a_{q}\right)b_{n}^{\dagger}b_{n}.$
(4)
Note that each bath mode is coupled to all the vibrations, therefore
introducing strong bath-mediated correlations between different vibrations. In
this article, two types of phonon models are considered: a model of optical
phonons and a model of acoustical phonons. The derivations of the optical and
acoustical models are detailed in appendix A. For the optical phonon model the
phonon frequency and coupling are written as
$\displaystyle\Omega^{\text{opt}}_{q}=\Omega_{\text{opt}},$ (5)
$\displaystyle\Delta^{\text{opt}}_{q}=\Delta_{\text{opt}},$ (6)
where $\Omega_{\text{opt}}$ is the frequency of the phonon and
$\Delta_{\text{opt}}$ is the coupling strength. The acoustical phonon model is
derived from the Davydov Hamiltonian and is given by the following parameters
$\displaystyle\Omega_{q}^{\text{ac}}=\Omega_{\text{ac}}\left|\sin q/2\right|,$
(7)
$\displaystyle\Delta^{\text{ac}}_{q}=-2\textrm{i}\Delta_{{\text{ac}}}\sqrt{|\sin
q/2|}\cos q/2,$ (8)
where $\Omega_{\text{ac}}$ is the cutoff frequency and where
$\Delta_{\text{ac}}$ is the coupling strength.
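For concreteness, the two phonon models of Eqs. (5)-(8) can be tabulated on the discrete wavevector grid. The following minimal Python sketch (NumPy only; the function name and default values are ours, chosen to match the parameters used later in section III) assumes an odd number of sites $N$, as in the rest of this article:

```python
import numpy as np

def phonon_modes(N, model="optical", Omega_opt=50.0, Delta_opt=25.0,
                 Omega_ac=100.0, Delta_ac=25.0):
    """Phonon frequencies Omega_q and couplings Delta_q (in cm^-1) on
    the grid q = 2*pi*p/N with p = -(N-1)/2, ..., (N-1)/2, Eqs. (5)-(8).
    N is assumed odd, as in the article."""
    p = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
    q = 2.0 * np.pi * p / N
    if model == "optical":
        Omega_q = np.full(N, Omega_opt)
        Delta_q = np.full(N, Delta_opt, dtype=complex)
    else:  # acoustical (Davydov) model
        Omega_q = Omega_ac * np.abs(np.sin(q / 2.0))
        Delta_q = -2j * Delta_ac * np.sqrt(np.abs(np.sin(q / 2.0))) * np.cos(q / 2.0)
    return q, Omega_q, Delta_q
```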
### II.2 Effective Hamiltonian in the anti-adiabatic limit
To partially remove the vibron-bath coupling Hamiltonian, a Lang-Firsov
transformation is applied.Lang and Firsov (1963) A “full dressing” is
considered and the following unitary transformation is introduced
$\hat{U}=\exp\left(\sum_{n}\hat{X}_{n}b_{n}^{\dagger}b_{n}\right),$ (9)
where the operator $\hat{X}_{n}$ is defined by
$\hat{X}_{n}=\frac{1}{\sqrt{N}}\sum_{q}\left(\frac{\Delta_{q}e^{-\textrm{i}qn}}{\Omega_{q}}a_{q}^{\dagger}-\frac{\Delta_{q}^{*}e^{\textrm{i}qn}}{\Omega_{q}}a_{q}\right).$
(10)
By using Eq. (9), the transformed Hamiltonian $\tilde{H}=\hat{U}\hat{H}\hat{U}^{\dagger}$
is written as
$\tilde{H}=\sum_{n}\left(\omega_{0}-\epsilon_{0}\right)b_{n}^{\dagger}b_{n}-\left(A+\epsilon_{0}\right)b_{n}^{\dagger
2}b_{n}^{2}-2\sum_{n<m}\epsilon_{|n-m|}b_{n}^{\dagger}b_{n}b_{m}^{\dagger}b_{m}\\\
+\sum_{n}J\left(\Theta^{\dagger}_{n+1}b_{n+1}^{\dagger}+\Theta^{\dagger}_{n-1}b_{n-1}^{\dagger}\right)\Theta_{n}b_{n}+\hat{H}_{b},$
(11)
where the dressing operators $\Theta^{\dagger}_{n}$ are defined by the
transformation of the vibron creation operators $b^{\dagger}_{n}$
$\hat{U}b^{\dagger}_{n}\hat{U}^{\dagger}=b^{\dagger}_{n}\Theta^{\dagger}_{n},$
(12)
and are written as
$\Theta^{\dagger}_{n}=\exp\left(\hat{X}_{n}\right).$ (13)
The parameters $\epsilon_{n}$ characterize the reorganizational energies of
the bath; they are written as
$\epsilon_{n}=\frac{1}{N}\sum_{q}\frac{\left|\Delta_{q}\right|^{2}}{\Omega_{q}}\cos(nq).$
(14)
From the small-polaron point of view, the vibrational excitons are dressed by
the bath. The remaining coupling between the polaron and the bath now operates
through the hopping term, which is modulated by bath coherent states. The main
advantage of this procedure is that the exciton-phonon coupling has been
strongly reduced, so that a mean-field approach can be used.Ivić _et al._
(1997) The final Hamiltonian is written as a sum of three contributions
$\tilde{H}=\hat{H}_{0}+\hat{H}_{b}+\Delta\hat{H},$ (15)
where $\hat{H}_{0}=\langle\tilde{H}-\hat{H}_{b}\rangle_{b}$ is the effective
Hamiltonian of the dressed excitons and
$\Delta\hat{H}=\tilde{H}-\hat{H}_{b}-\hat{H}_{0}$ is the remaining part of the
exciton-bath interaction. The symbol $\langle\dots\rangle_{b}$ stands for the
thermal average over the bath degrees of freedom which are assumed to be in
equilibrium at temperature $T$. After a straightforward calculation, the
effective polaron Hamiltonian is finally written as
$\hat{H}_{0}=\sum_{n}\left(\omega_{0}-\epsilon_{0}\right)b_{n}^{\dagger}b_{n}-\left(A+\epsilon_{0}\right)b_{n}^{\dagger
2}b_{n}^{2}\\\
-2\sum_{n<m}\epsilon_{|n-m|}b_{n}^{\dagger}b_{n}b_{m}^{\dagger}b_{m}+\sum_{n}Je^{-S(\beta)}\left(b_{n+1}^{\dagger}+b_{n-1}^{\dagger}\right)b_{n},$
(16)
where the temperature dependent coupling constant $S(\beta)$ is the nearest-
neighbor dressing factor given by
$S(\beta)=\frac{1}{N}\sum_{q}\left|\frac{\Delta_{q}}{\Omega_{q}}\right|^{2}\coth\left(\frac{\beta\Omega_{q}}{2}\right)\left(1-\cos(q)\right),$
(17)
where $\beta=1/k_{\text{B}}T$. In the following, the effect of the remaining
coupling $\Delta\hat{H}$ is disregarded and the linear and nonlinear optical
responses of polarons will be computed under the effective Hamiltonian given
by Eq. (16). This approximation is relevant in the anti-adiabatic limit where
the hopping constant $J$ is small. This approach can be improved by treating
the remaining coupling using perturbation theoryPouthier and Falvo (2004);
Pouthier (2013); Yalouz and Pouthier (2016) which can give very reliable
results on a large range of parameters provided that no accidental resonances
occur.Pouthier (2013) However, as a first step this work will only consider
the effective Hamiltonian and the effect of the remaining coupling will be the
subject of a separate study.
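As a numerical illustration, the dressing factor of Eq. (17) is straightforward to evaluate on the discrete phonon grid. The sketch below is ours (with $k_{\text{B}}\approx 0.695~\textrm{cm}^{-1}/$K and the uncoupled $q=0$ acoustic mode excluded); for the optical model at $T=0$ it gives $S=(\Delta_{\text{opt}}/\Omega_{\text{opt}})^{2}=0.25$, hence the value $4\tilde{J}=31.2~\textrm{cm}^{-1}$ quoted in section III for $J=-10~\textrm{cm}^{-1}$:

```python
import numpy as np

def dressing_factor(q, Omega_q, Delta_q, T, N):
    """Nearest-neighbor dressing factor S(beta), Eq. (17);
    T in K, frequencies and couplings in cm^-1."""
    kB = 0.695                        # cm^-1 per Kelvin
    keep = Omega_q > 1e-12            # drop the uncoupled q = 0 acoustic mode
    ratio2 = np.abs(Delta_q[keep] / Omega_q[keep]) ** 2
    coth = 1.0 if T == 0 else 1.0 / np.tanh(Omega_q[keep] / (2.0 * kB * T))
    return float(np.sum(ratio2 * coth * (1.0 - np.cos(q[keep]))) / N)

# The effective hopping constant is then J_eff = J * np.exp(-S).
```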
Since $\hat{H}_{0}$ commutes with the number operator
$\hat{N}=\sum_{n}b_{n}^{\dagger}b_{n}$, $\hat{H}_{0}$ is block diagonal in the
eigenvalues $v=0,1,2,\dots$ of the operator $\hat{N}$. This article focuses on
the third-order nonlinear optical response, and therefore only the blocks
$v=0$, $v=1$ and $v=2$ need to be considered. The one-exciton block $v=1$ is
trivially diagonalized and the eigenstates are plane waves
$\left|k\right\rangle=\frac{1}{\sqrt{N}}\sum_{n}e^{\textrm{i}kn}b^{\dagger}_{n}\left|\varnothing\right\rangle,$
(18)
and the eigenvalues are given by
$\omega_{k}=\tilde{\omega}_{0}+2\tilde{J}(\beta)\cos k,$ (19)
where $\tilde{\omega}_{0}=\omega_{0}-\epsilon_{0}$ is the shifted frequency
and $\tilde{J}(\beta)=Je^{-S(\beta)}$ is the effective hopping constant. The
two-exciton block $v=2$ can be simplified by using the periodicity of
the lattice. Introducing the following center-of-mass plane-wave basisPouthier
(2003)
$\left|k\
m\right\rangle=\frac{1}{\sqrt{N}}\sum_{n}e^{\textrm{i}k\left(n+m/2\right)}\xi_{m}b_{n}^{\dagger}b_{n+m}^{\dagger}\left|\varnothing\right\rangle,$
(20)
where $m=0,\dots,(N-1)/2$ is the distance between the two excitons and where
$\xi_{m}$ is defined by
$\xi_{m}=\begin{cases}0&\mbox{if }m<0,\\\ 1/\sqrt{2}&\mbox{if }m=0,\\\
1&\mbox{if }m>0.\end{cases}$ (21)
Using this basis set, one can show that the Hamiltonian is block diagonal in
the wave vector $k$. The $k$-th block can be deduced from the
equationsPouthier (2003)
$\displaystyle\hat{H}_{0}\left|k\ 0\right\rangle$
$\displaystyle=\left(2\tilde{\omega}_{0}-2A-2\epsilon_{0}\right)\left|k\
0\right\rangle+\sqrt{2}\tilde{J}_{k}\left|k\ 1\right\rangle,$ (22)
$\displaystyle\hat{H}_{0}\left|k\ 1\right\rangle$
$\displaystyle=\left(2\tilde{\omega}_{0}-2\epsilon_{1}\right)\left|k\
1\right\rangle+\sqrt{2}\tilde{J}_{k}\left|k\
0\right\rangle+\tilde{J}_{k}\left|k\ 2\right\rangle,$ (23)
$\displaystyle\hat{H}_{0}\left|k\ m\right\rangle$
$\displaystyle=\left(2\tilde{\omega}_{0}-2\epsilon_{m}\right)\left|k\
m\right\rangle+\tilde{J}_{k}\left|k\ m-1\right\rangle+\tilde{J}_{k}\left|k\
m+1\right\rangle,$ $\displaystyle\mbox{if }m>1,$ (24)
with $\tilde{J}_{k}=2\tilde{J}\cos(k/2)$. Each block $k$ of $\hat{H}_{0}$ can
then be easily diagonalized numerically giving a set of eigenvalues
$\omega_{k\sigma}$ and eigenvectors $\psi_{k\sigma}(m)$ where
$\sigma=0,\dots,(N-1)/2$ labels the different eigenvalues.
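Numerically, each $k$-block of Eqs. (22)-(24) is a small symmetric tridiagonal matrix. A minimal sketch of its construction and diagonalization (the function and argument names are ours; `eps` is the array of reorganizational energies $\epsilon_{m}$ of Eq. (14), `omega0_t` and `J_t` the shifted frequency and effective hopping) reads:

```python
import numpy as np

def two_exciton_block(k, N, omega0_t, A, eps, J_t):
    """Build and diagonalize the k-block of H_0 in the |k m> basis,
    Eqs. (22)-(24); returns omega_{k sigma} and psi_{k sigma}(m)."""
    M = (N - 1) // 2 + 1                      # m = 0, ..., (N-1)/2
    Jk = 2.0 * J_t * np.cos(k / 2.0)
    H = np.zeros((M, M))
    H[0, 0] = 2.0 * omega0_t - 2.0 * A - 2.0 * eps[0]
    for m in range(1, M):
        H[m, m] = 2.0 * omega0_t - 2.0 * eps[m]
    H[0, 1] = H[1, 0] = np.sqrt(2.0) * Jk     # sqrt(2) J_k couples m = 0 and 1
    for m in range(1, M - 1):
        H[m, m + 1] = H[m + 1, m] = Jk        # J_k couples m and m + 1
    w, psi = np.linalg.eigh(H)
    return w, psi
```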
### II.3 Linear optical response
The coupling of the vibrations to the optical field $E(\textbf{r},t)$ is given
by
$\hat{H}_{\text{int}}=-E(\textbf{r},t)\hat{V},$ (25)
where $\hat{V}$ is the dipole operator expressed for a set of identical
molecules as a function of the projection of the transition dipole moments
$\mu$ on the electric field as
$\hat{V}=\sum_{n}\mu\left(b_{n}+b_{n}^{\dagger}\right).$ (26)
The linear optical response is given by the response function written as
$R^{(1)}(t)=\textrm{i}\Theta(t)\left\langle\left[\hat{V}(t),\hat{V}\right]\right\rangle=\textrm{i}\Theta(t)\left(J(t)-J^{*}(t)\right),$
(27)
where $\Theta(t)$ is the Heaviside function,
$\hat{V}(t)=e^{\textrm{i}\hat{H}t}\hat{V}e^{-\textrm{i}\hat{H}t}$ is the time
evolution of the dipole operator in the Heisenberg picture and where
$\langle\dots\rangle$ is the thermal average over all degrees of freedom. The
function $J(t)$ can be expressed as a function of the total density matrix at
equilibrium $\hat{\rho}=\exp(-\beta\hat{H})/Z(\beta)$ as
$J(t)=\left\langle\hat{V}(t)\hat{V}\right\rangle=\textrm{Tr}\left[\hat{\rho}e^{\textrm{i}\hat{H}t}\hat{V}e^{-\textrm{i}\hat{H}t}\hat{V}\right].$
(28)
By introducing the Lang-Firsov unitary transformation $\hat{U}$ in the
correlation function, neglecting the remaining coupling $\Delta\hat{H}$,
assuming that the harmonic frequencies of the vibrations are much higher than
the temperature and using the rotating wave approximation, the function $J(t)$
is now written
$J(t)=\sum_{n,m}\mu^{2}\left\langle\varnothing\right|b_{n}e^{-\textrm{i}\hat{H}_{0}t}b^{\dagger}_{m}\left|\varnothing\right\rangle
C_{|n-m|}(t)$ (29)
where $C_{|n-m|}(t)$ is the bath correlation function given by
$C_{|n-m|}(t)=\left\langle\Theta_{n}(t)\Theta_{m}^{\dagger}\right\rangle_{b}=\exp\left(-g_{|n-m|}(t)\right),$
(30)
where the linebroadening function $g_{n}(t)$ is given by
$g_{n}(t)=\frac{1}{N}\sum_{q}\left|\frac{\Delta_{q}}{\Omega_{q}}\right|^{2}\left\\{\coth\left(\frac{\beta\Omega_{q}}{2}\right)\left(1-\cos\left(\Omega_{q}t-qn\right)\right)+\textrm{i}\sin\left(\Omega_{q}t-qn\right)\right\\}.$
(31)
Note that the linebroadening function for $n=1$ at $t=0$ is simply the
nearest-neighbor dressing factor $g_{1}(0)=S(\beta)$. After straightforward
calculation, the optical response is written
$J(t)=\mu^{2}\sum_{k}e^{-\textrm{i}\omega_{k}t}C_{k}(t),$ (32)
where $C_{k}(t)$ is the spatial Fourier transform of the bath correlation
function
$C_{k}(t)=\sum_{n}e^{\textrm{i}kn}\exp\left(-g_{n}(t)\right).$ (33)
Finally, the absorption spectrum $\alpha(\omega)$ is then directly
proportional to the Fourier transform of the response function $R^{(1)}(t)$
given by Eq. (27). For a periodic and isolated system, assuming that the laser
wavelength is larger than the system’s typical size, only the excitation
energy corresponding to a vanishing wavevector $k\rightarrow 0$ should
contribute to the linear optical response. Here, because of the coupling to
the phonon bath, all modes $k$ contribute to the optical response with
different weights corresponding to the spatial Fourier transform of the bath
correlation function. Note that the expression for the linebroadening function
$g_{n}(t)$ is very close to the expression for the linebroadening function of
a single isolated transition coupled to a harmonic bath.Duke and Mahan (1965)
In Eq. (31), the dephasing of the vibrations takes into account the
delocalized nature of the phonons and the correlation induced by the bath. In
addition, the Stokes shift which is usually included in the definition of the
linebroadening functionMukamel (1995) is not present in Eq. (31). This Stokes
shift is in fact included directly in the definition of the polaron
Hamiltonian via the reorganizational energies $\epsilon_{n}$ defined in Eq.
(14).
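As an illustration of Eqs. (31)-(33), the sketch below evaluates the linebroadening function mode by mode and obtains $C_{k}(t)$ with a spatial FFT; the function names are ours, `N * np.fft.ifft` realizes the $e^{+\textrm{i}kn}$ sign convention of Eq. (33), and time is expressed in units such that $\Omega_{q}t$ is dimensionless:

```python
import numpy as np

def g_n(t, n, q, Omega_q, Delta_q, T, N):
    """Linebroadening function g_n(t), Eq. (31)."""
    kB = 0.695
    keep = Omega_q > 1e-12
    r2 = np.abs(Delta_q[keep] / Omega_q[keep]) ** 2
    coth = 1.0 if T == 0 else 1.0 / np.tanh(Omega_q[keep] / (2.0 * kB * T))
    phase = Omega_q[keep] * t - q[keep] * n
    return np.sum(r2 * (coth * (1.0 - np.cos(phase)) + 1j * np.sin(phase))) / N

def linear_J(tgrid, q, Omega_q, Delta_q, T, N, omega0_t, J_t, mu=1.0):
    """J(t), Eqs. (32)-(33), on the FFT wavevector grid k = 2*pi*m/N."""
    kgrid = 2.0 * np.pi * np.arange(N) / N
    omega_k = omega0_t + 2.0 * J_t * np.cos(kgrid)    # Eq. (19)
    out = np.empty(len(tgrid), dtype=complex)
    for i, t in enumerate(tgrid):
        g = np.array([g_n(t, n, q, Omega_q, Delta_q, T, N) for n in range(N)])
        Ck = N * np.fft.ifft(np.exp(-g))              # C_k(t), Eq. (33)
        out[i] = mu**2 * np.sum(np.exp(-1j * omega_k * t) * Ck)
    return out
```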
### II.4 Third-order nonlinear optical response
The third-order response function is given by
$R^{(3)}(t_{1},t_{2},t_{3})=\textrm{i}^{3}\Theta(t_{1})\Theta(t_{2})\Theta(t_{3})\left\langle\left[\left[\left[\hat{V}(t_{1}+t_{2}+t_{3}),\hat{V}(t_{1}+t_{2})\right],\hat{V}(t_{1})\right],\hat{V}(0)\right]\right\rangle.$
(34)
The three nested commutators yield eight Liouville space pathways.Mukamel
(1995); Abramavicius _et al._ (2009) Each nonlinear technique is based on a
specific phase matching condition which selects a subgroup of pathways. For
simplification, only the expressions for the signal corresponding to the
direction
$\textbf{k}_{\text{I}}=-\textbf{k}_{1}+\textbf{k}_{2}+\textbf{k}_{3}$ are
presented. The expression for the direction
$\textbf{k}_{\text{II}}=\textbf{k}_{1}-\textbf{k}_{2}+\textbf{k}_{3}$ is given
in appendix B. The total response function for $\textbf{k}_{\text{I}}$ is
given as a sum of three contributions
$R_{\textbf{k}_{\text{I}}}(t_{1},t_{2},t_{3})=R_{1}+R_{2}-R_{3},$ (35)
where $R_{1}$ is the ground state bleaching (GSB), $R_{2}$ is the excited
state emission (ESE) and $R_{3}$ is the excited state absorption (ESA). Using
the same approach as for the linear response function, each contribution to
the $\textbf{k}_{\text{I}}$ signal can be written as
$\displaystyle R_{1}(t_{1},t_{2},t_{3})$
$\displaystyle=\frac{\mu^{4}}{N}\sum_{k_{1}k_{2}}e^{\textrm{i}\omega_{k_{1}}t_{1}-\textrm{i}\omega_{k_{2}}t_{3}}C^{(1)}_{k_{1}k_{2}}(t_{1},t_{2},t_{3}),$
(36) $\displaystyle R_{2}(t_{1},t_{2},t_{3})$
$\displaystyle=\frac{\mu^{4}}{N}\sum_{k_{1}k_{2}}e^{\textrm{i}\omega_{k_{1}}(t_{1}+t_{2})-\textrm{i}\omega_{k_{2}}(t_{3}+t_{2})}C^{(2)}_{k_{1}k_{2}}(t_{1},t_{2},t_{3}),$
(37) $\displaystyle R_{3}(t_{1},t_{2},t_{3})$
$\displaystyle=\frac{\mu^{4}}{N^{2}}\sum_{k_{1}k_{2}k_{3}\sigma}e^{\textrm{i}\omega_{k_{1}}(t_{1}+t_{2}+t_{3})-\textrm{i}\omega_{k_{2}}t_{2}-\textrm{i}\omega_{k_{3}\sigma}t_{3}}C^{(3)}_{k_{1}k_{2}k_{3}}(t_{1},t_{2},t_{3})A_{k_{1}k_{3}\sigma}A_{k_{2}k_{3}\sigma},$
(38)
with the functions $C^{(i)}(t_{1},t_{2},t_{3})$ defined by
$\displaystyle C^{(1)}_{k_{1}k_{2}}(t_{1},t_{2},t_{3})$
$\displaystyle=\sum_{m_{1}m_{2}m_{3}}e^{-\textrm{i}k_{1}m_{1}+\textrm{i}k_{2}m_{3}}e^{-g^{(1)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})},$
(39) $\displaystyle C^{(2)}_{k_{1}k_{2}}(t_{1},t_{2},t_{3})$
$\displaystyle=\sum_{m_{1}m_{2}m_{3}}e^{-\textrm{i}k_{1}(m_{1}+m_{2})+\textrm{i}k_{2}(m_{2}+m_{3})}e^{-g^{(2)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})},$
(40) $\displaystyle C^{(3)}_{k_{1}k_{2}k_{3}}(t_{1},t_{2},t_{3})$
$\displaystyle=\sum_{m_{1}m_{2}m_{3}}e^{-\textrm{i}k_{1}(m_{1}+m_{2}+m_{3})+\textrm{i}k_{2}m_{2}+\textrm{i}k_{3}m_{3}}e^{-g^{(3)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})},$
(41)
and where the linebroadening functions are given by
$\displaystyle
g^{(1)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})=g^{*}_{m_{1}}(t_{1})-g^{*}_{m_{2}}(t_{2})+g_{m_{3}}(t_{3})$
$\displaystyle+g^{*}_{m_{1}+m_{2}}(t_{1}+t_{2})+g^{*}_{m_{2}+m_{3}}(t_{2}+t_{3})-g^{*}_{m_{1}+m_{2}+m_{3}}(t_{1}+t_{2}+t_{3}),$
(42) $\displaystyle
g^{(2)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})=g^{*}_{m_{1}}(t_{1})-g_{m_{2}}(t_{2})+g^{*}_{m_{3}}(t_{3})$
$\displaystyle+g^{*}_{m_{1}+m_{2}}(t_{1}+t_{2})+g_{m_{2}+m_{3}}(t_{2}+t_{3})-g^{*}_{m_{1}+m_{2}+m_{3}}(t_{1}+t_{2}+t_{3}),$
(43) $\displaystyle
g^{(3)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})=g^{*}_{m_{1}}(t_{1})-g_{m_{2}}(t_{2})+g_{m_{3}}(t_{3})$
$\displaystyle+g^{*}_{m_{1}+m_{2}}(t_{1}+t_{2})+g_{m_{2}+m_{3}}(t_{2}+t_{3})-g^{*}_{m_{1}+m_{2}+m_{3}}(t_{1}+t_{2}+t_{3}).$
(44)
The tensor $A_{kk^{\prime}\sigma}$ is expressed as a function of the two-
exciton wave function
$A_{kk^{\prime}\sigma}=2\sum_{m}\psi_{k^{\prime}\sigma}(m)\xi_{m}\cos\left(\left(k^{\prime}/2-k\right)m\right).$
(45)
Note that the discrete spatial Fourier transform in Eqs. (33), (39), (40) and
(41) can be easily computed numerically using the 1D, 2D and 3D Fast Fourier
Transform (FFT) algorithm.Frigo and Johnson (2005)
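As an example of this FFT evaluation, the correlation function of Eq. (39) carries no phase on $m_{2}$, so after summing over $m_{2}$ it reduces to a 2D transform over $(m_{1},m_{3})$. A sketch (the array layout and function name are ours) reads:

```python
import numpy as np

def C1_from_g(exp_g1):
    """C^{(1)}_{k1 k2} of Eq. (39), given the precomputed array
    exp_g1[m1, m2, m3] = exp(-g^{(1)}_{m1 m2 m3}(t1, t2, t3))."""
    x = exp_g1.sum(axis=1)            # the m2 sum carries no phase
    N = x.shape[0]
    # e^{-i k1 m1}: forward FFT along m1; e^{+i k2 m3}: inverse FFT
    # (times N) along m3, both on the grid k = 2*pi*m/N.
    return np.fft.ifft(np.fft.fft(x, axis=0), axis=1) * N
```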
## III Results
In this section, the previous formalism is applied to compute the linear and
nonlinear spectroscopy of vibrational polarons in a 1D lattice. The parameter
range used here corresponds to the amide-I vibration in $\alpha$-helix
polypeptides and in molecular crystals such as ACN or NMA, which are often
modeled as quasi-one-dimensional chains. The intramolecular anharmonicity is
fixed to $A=8~{}\textrm{cm}^{-1}$.Hamm, Lim, and Hochstrasser (1998) The
hopping constant ranges from $-10~{}\textrm{cm}^{-1}$ to
$10~{}\textrm{cm}^{-1}$. Hamm and Edler (2006) For the optical phonon model,
the optical phonon frequency is
fixed to $\Omega_{\text{opt}}=~{}50~{}\textrm{cm}^{-1}$ corresponding to the
crystalline acetanilide (ACN) optical frequency. Hamm and Edler (2006) For the
acoustical phonon model, the cutoff frequency is fixed to
$\Omega_{\text{ac}}=100~{}\textrm{cm}^{-1}$ corresponding to the
$\alpha$-helices cutoff frequency.Scott (1982) The coupling between the
vibration and the phonons will take a typical value of
$\Delta_{\text{opt}}=\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$. Hamm and Edler
(2006) A phenomenological life-time of $T_{1}=1.5$ ps was added to the
calculation of the linear and non-linear spectra. This value is chosen close
to the relaxation time of 1.2 ps measured for amide-I vibration in
peptides.Hamm, Lim, and Hochstrasser (1998) All numerical calculations were
performed using a number of sites of $N=51$. This number was found to be large
enough to obtain results close to an infinite system. For the case of the
acoustical phonon model, to avoid spurious effects due to the finite size used
in the numerical calculations, the sum over the phonon wavevector $q$ in Eq.
(31) is performed using a larger number of phonon modes $N_{\text{ph}}=5001$.
This number is chosen so that no recurrence is observed in the behavior of the
linebroadening function $g_{n}(t)$ given by Eq. (31). Finally, the fundamental
frequency is set to the value $\omega_{0}=0$ without any loss of generality.
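For reference, this parameter set can be collected in a single place; the following plain Python dictionary is ours, but the values are those quoted above:

```python
# All frequencies and couplings in cm^-1, lifetime in ps.
params = {
    "A": 8.0,                  # intramolecular anharmonicity
    "J_range": (-10.0, 10.0),  # range of hopping constants explored
    "Omega_opt": 50.0,         # ACN optical phonon frequency
    "Omega_ac": 100.0,         # alpha-helix acoustic cutoff frequency
    "Delta_opt": 25.0,         # vibration-phonon coupling (optical model)
    "Delta_ac": 25.0,          # vibration-phonon coupling (acoustic model)
    "T1": 1.5,                 # phenomenological lifetime (ps)
    "N": 51,                   # number of sites
    "N_ph": 5001,              # phonon modes for the acoustic g_n(t) sums
    "omega0": 0.0,             # fundamental frequency (set to 0)
}
```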
In the following subsections, the influence of the structure of the bath on
the linear and non-linear vibrational responses is investigated by using
optical and acoustical phonon models.
### III.1 Optical phonon model
Figure 1: One-polaron (left panel) and two-polarons (right panel) energy
spectra for the optical phonon model as a function of the wave vector $k$ for
$\Omega_{\text{opt}}=~{}50~{}\textrm{cm}^{-1}$,
$\Delta_{\text{opt}}=25~{}\textrm{cm}^{-1}$, $T=0$ K and for
$J=-10~{}\textrm{cm}^{-1}$ (black solid lines) and $J=0~{}\textrm{cm}^{-1}$
(blue dashed lines).
First, the case of the optical phonon model is considered. The one-polaron
energy spectrum which controls the behavior of the linear absorption spectrum
is depicted on the left panel of Fig. 1. As seen in Eq. (19), the one-polaron
eigenfrequencies are centered around the shifted frequency
$\tilde{\omega}_{0}$ with a width of $4\tilde{J}(\beta)$. For
$\Omega_{\text{opt}}=~{}50~{}\textrm{cm}^{-1}$,
$\Delta_{\text{opt}}=25~{}\textrm{cm}^{-1}$, $T=0$ K and
$J=-10~{}\textrm{cm}^{-1}$, the shifted frequency takes the value
$\tilde{\omega}_{0}=-12.5~{}\textrm{cm}^{-1}$ and the total dispersion is
$4\tilde{J}=31.2~{}\textrm{cm}^{-1}$. The two-polarons energy spectrum which
controls the behavior of the non-linear optical response is reported on the
right panel of Fig. 1 for the same set of parameters. The energy spectrum
shows one continuum band characterizing the two-polaron free states with a
total bandwidth of $8\tilde{J}=62.4~{}\textrm{cm}^{-1}$ and an isolated band
corresponding to the two-polaron bound states which is determined by the
anharmonicity.Kimball, Fong, and Shen (1981)
Figure 2: Upper panel: Linear absorption spectrum of the optical phonon model
for $\Omega_{\text{opt}}=~{}50~{}\textrm{cm}^{-1}$,
$\Delta_{\text{opt}}=25~{}\textrm{cm}^{-1}$ and $T=0$ K as a function of the
value of the hopping constant $J$. Lower panel: Linear absorption spectrum for
$\Omega_{\text{opt}}=~{}50~{}\textrm{cm}^{-1}$,
$\Delta_{\text{opt}}=25~{}\textrm{cm}^{-1}$ and $J=-10~{}\textrm{cm}^{-1}$ as
a function of the temperature $T$.
The upper panel of Fig. 2 shows the dependence of the linear absorption
spectrum for the optical phonon model as a function of the hopping constant
$J$. For $J=-10$ $\textrm{cm}^{-1}$, the spectrum exhibits a sharp and strong
zero-phonon line (ZPL) located at $\omega=-28.1~{}\textrm{cm}^{-1}$
corresponding to the position of the lowest polaron energy
$\omega_{k=0}=\tilde{\omega}_{0}-2\tilde{J}$. A broad second band is also
present in the absorption spectrum. It corresponds to the one-phonon
excitation band (0-1 transition in the Franck-Condon (FC) picture) and is
shifted by $\Omega_{\text{opt}}$ with respect to the ZPL. This broad band
exhibits a double peak shape and has a bandwidth of 31 $\textrm{cm}^{-1}$
corresponding to the total polaron dispersion $4\tilde{J}$. As the value of
$|J|$ is decreased to 0, the ZPL shifts towards $\omega=-12.5~{}\textrm{cm}^{-1}$,
while the second band does not shift but its bandwidth is significantly reduced.
Upon increasing the value of $J$ to 10 $\textrm{cm}^{-1}$, the ZPL continues
to shift to $\omega=0~{}\textrm{cm}^{-1}$ and the one-phonon band bandwidth
increases again to recover the bandwidth for $J=-10$ $\textrm{cm}^{-1}$. This
behavior of the linear absorption spectrum is almost identical to the
numerical calculation performed by Hamm and Edler based on a direct
diagonalization of the Holstein Hamiltonian. Hamm and Edler (2006) The main
difference between this result and the result of Ref. 23 is seen in the one-
phonon band which is symmetric in this calculation but is stronger on the
lower energy side of the band in Ref. 23. This difference can be explained by
the remaining coupling term $\Delta\hat{H}$, which was neglected here and which
introduces a residual coupling between the ZPL and the one-phonon band.
However, this result captures the main features observed in Ref. 23. The lower
panel of Fig. 2 shows the temperature dependence of the linear absorption
spectrum for the optical phonon model. Upon increasing the temperature, the
linear absorption spectrum exhibits new bands on the blue side of the spectrum
corresponding to $n$th phonon bands as well as hot bands on the red side of
the spectrum. Also, upon increasing the temperature, the ZPL decreases sharply.
All $n$th phonon bands and hot-bands exhibit the same broad double peak shape
with a bandwidth which corresponds to the polaron dispersion $4\tilde{J}$ and
decreases with temperature. The ZPL shape is also modified as the temperature
is increased.
Figure 3: 2D spectrum for the optical phonon model for
$\Omega_{\text{opt}}=~{}50~{}\textrm{cm}^{-1}$,
$\Delta_{\text{opt}}=25~{}\textrm{cm}^{-1}$ and $T=0$ K and for $J=0$ and
$J=-10$ $\textrm{cm}^{-1}$. Vertical red lines and horizontal red lines mark
the position of the ZPL and the $n$th-phonon bands. The blue vertical lines
correspond to the ZPL and $n$th-phonon lines shifted by the anharmonicity.
Fig. 3 shows the absorptive part of the 2D spectrum (sum of the rephasing
$\textbf{k}_{\text{I}}$ and non-rephasing $\textbf{k}_{\text{II}}$ signalsHamm
and Zanni (2011)) for $J=0~{}\textrm{cm}^{-1}$ and $J=-10~{}\textrm{cm}^{-1}$.
For $J=0~{}\textrm{cm}^{-1}$ the 2D spectrum shows multiple negative and
positive peaks. Horizontal red lines have been added to mark the position of
the ZPL and the one-phonon band in the absorption spectrum. The vertical red
lines mark the position of the ZPL, the one-phonon band as well as the one-
phonon hot band (1-0 transition in the FC picture). The one-phonon hot band
originates from the excited state emission. The strongest negative peak
corresponds to the anharmonically shifted ZPL originating from the excited
state absorption. The position of this transition as well as the shifted one-
phonon and two-phonons bands and the shifted hot-band are marked by vertical
blue lines. Note that the shifted hot-bands appear as positive peaks and not
negative peaks even though they originate from the ESA contribution. Analysis
of the response functions shows that the vibrational overlaps corresponding to
these peaks are negative, which changes the sign of the peaks.
For $J=-10~{}\textrm{cm}^{-1}$ the 2D spectrum is strongly modified as most
bands disappear. Only the ZPL and the one-phonon band remain. As in the linear
absorption spectrum, the one-phonon band is broad with a bandwidth
corresponding to the polaron dispersion. However, this can only be seen
along the $\omega_{1}$ axis of the 2D spectrum. An additional peak appears on
the blue side of the ZPL along the $\omega_{3}$ axis, this peak originates
from the exciton-exciton scattering.Abramavicius (2013) A similar peak is also
visible on the side of the one-phonon band.
### III.2 Acoustical phonons
Figure 4: One-polaron (left panel) and two-polarons (right panel) energy
spectra for the acoustical phonon model as a function of the wavevector $k$
for $\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$ , $T=0$ K and for
$J=-10~{}\textrm{cm}^{-1}$ (black solid lines) and $J=0~{}\textrm{cm}^{-1}$
(blue dashed lines). Figure 5: Upper panel: Linear absorption spectrum of the
acoustical phonon model for $\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$ and $T=0$ K as a function of the
value of the phonon coupling $\Delta_{\text{ac}}$ and for $J=0$ (dashed line)
and $J=-10~{}\textrm{cm}^{-1}$ (full line). Lower panel: Linear absorption
spectrum for $\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$ and $J=-10~{}\textrm{cm}^{-1}$ as a
function of the temperature $T$. The spectra are normalized to the maximum of
the peak.
Next, the case of the acoustical phonon model is considered. The one-polaron
and two-polarons energy spectra are depicted in Fig. 4 for the parameters
$\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$ , $T=0$ K and for
$J=-10~{}\textrm{cm}^{-1}$ and $J=0~{}\textrm{cm}^{-1}$. The main difference
in the two-polarons energy spectrum with respect to the optical phonon model
can be seen by the appearance of a second bound state. This bound state is
present for all wavevectors $k$ for $J=0$ but is only present near $k=\pi/2$
for $J=-10~{}\textrm{cm}^{-1}$. The presence of these two bound states is
related to the occurrence of two types of anharmonicities, of local and
non-local nature, due to the coupling with the phonon bath. Detailed studies on the
nature of these bound states have been performed.Pouthier (2003); Falvo,
Pouthier, and Eilbeck (2006) For example, a phase diagram for the appearance
of the bound states as a function of the anharmonicity, coupling to the bath
and hopping constant has been drawn using decimation methods.Pouthier (2003)
The presence of two bound states has been suggested to occur for the N$-$H
vibrations in $\alpha$-helix and $\beta$-sheet peptides through the appearance
of two excited-state absorption peaks in their pump-probe spectra.Edler _et
al._ (2004, 2005); Bodis _et al._ (2009)
The upper panel of Fig. 5 shows the dependence of the linear absorption
spectrum for the acoustical phonon model as a function of the coupling
constant $\Delta_{\text{ac}}$ at $T=0$ K and for a hopping constant
$J=-10~{}\textrm{cm}^{-1}$ and $J=0~{}\textrm{cm}^{-1}$. In Fig. 5, the
spectra were normalized with respect to the maximum of the band to highlight
the increase in bandwidth as a function of the coupling. For small values of
the coupling, the linear absorption is characterized by a single asymmetric
band. Upon increasing the coupling constant, the absorption bandwidth
increases while keeping an asymmetric shape. Only at very large values of the
coupling does the shape of the band tend to be more symmetric. The full width at
half maximum (FWHM) of the spectrum increases from $4~{}\textrm{cm}^{-1}$ for
$\Delta_{\text{ac}}=20~{}\textrm{cm}^{-1}$ to $215~{}\textrm{cm}^{-1}$ for
$\Delta_{\text{ac}}=100~{}\textrm{cm}^{-1}$. For small values of
$\Delta_{\text{ac}}$, the effect of the hopping constant $J$ is a simple shift
of the band by $2\tilde{J}$. Upon increasing the value of the coupling this
shift decreases. The lower panel of Fig. 5 shows the temperature dependence of
the linear absorption spectrum for the acoustical phonon model. Upon
increasing the temperature, the bandwidth of the absorption band increases
rapidly and its shape becomes more symmetric, approaching a typical Lorentzian shape.
The bandwidth (full width at half maximum) increases from
$5~{}\textrm{cm}^{-1}$ for $T=0$ K to $120~{}\textrm{cm}^{-1}$ for $T=200$ K.
Figure 6: 2D spectrum for the acoustical phonon model for
$\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$ and $T=0$ K and for $J=0$ and
$J=-10$ $\textrm{cm}^{-1}$.
Next, the nonlinear optical response in the low temperature regime is
discussed. Fig. 6 shows the absorptive part of the 2D spectrum for
$\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$, $T=0$ K and for $J=0$ and $J=-10$
$\textrm{cm}^{-1}$. For $J=0$ the 2D spectrum is characterized by a pair of
negative-positive peaks located on the diagonal of the spectrum originating
from the interference of the ESA and the GSB and ESE pathways and two negative
peaks red shifted along the $\omega_{3}$ axis. These two peaks are the
signature of the two bound states visible in the two-polarons spectrum. For
$J=-10~{}\textrm{cm}^{-1}$, the pair of negative-positive peaks is red-shifted
by $2\tilde{J}$ and only one negative peak is present. This results from the
fact that only one bound state is present for the wavevector $k=0$.
Figure 7: 2D spectra for the acoustical phonon model for
$\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$ and $T=300$ K, $J=-10$
$\textrm{cm}^{-1}$ as a function of the delay time $t_{2}$, for a 1D chain
(left panel) and for a single isolated site $N=1$.
At high temperature, the 2D spectrum is strongly modified. The left panel of
Fig. 7 shows the 2D spectrum for
$\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$, $T=300$ K and
$J=-10$ $\textrm{cm}^{-1}$ for different time delays $t_{2}$. For $t_{2}=0$
the 2D spectrum is characterized by a pair of negative-positive peaks
elongated along the diagonal. The width along the diagonal is much larger than
the width of the low temperature 2D spectrum and corresponds essentially to
the linear absorption bandwidth. The shape of the 2D spectrum is tightly
connected to the separation of homogeneous and inhomogeneous broadening. As
the waiting time $t_{2}$ increases, the shape of the two peaks is strongly
modified toward a more circular shape. This is a well known phenomenon which
has been observed in many molecular systems and is a signature of the spectral
diffusion due to fluctuations of the frequencies.Ishikawa _et al._ (2007);
Fecko _et al._ (2005); Falvo _et al._ (2015) In particular the shape of the
peaks can be directly related to the frequency-frequency correlation function
(FFCF) through metrics computed from 2D lineshape such as the center line
slope (CLS).Kwak _et al._ (2007); Kwak, Rosenfeld, and Fayer (2008); Falvo
(2016) To understand the effect of the bath-induced correlations, the 2D
spectrum for the same time delays $t_{2}$ and for a single site $N=1$, but
still assuming an infinite phonon bath, is computed. This would correspond,
for example, to the spectroscopy of an impurity.Duke and Mahan (1965) For a
single site, coupling to the bath induces a similar elongated shape, but as
$t_{2}$ increases the shape of the peaks becomes circular more rapidly. This is
the signature of a faster decay of the FFCF.
## IV Interpretation and discussion
In the previous section, the numerical results have shown that the nature of
the bath can strongly modify the linear and non-linear optical lineshape.
Specific approximations and analytical expressions can be developed to fully
understand the relation between the spectroscopic signature and the processes
involved. In this section I will also discuss the physical meaning and the
experimental implications of these results.
### IV.1 Optical phonons
For the optical phonon model, the bath coupling and optical frequency are
independent of the wavevector $q$. In this case, very simple expressions can
be derived for the linear absorption. Introducing the Huang-Rhys coupling
constant $S_{\text{opt}}=\Delta_{\text{opt}}^{2}/\Omega_{\text{opt}}^{2}$, the
Stokes shift and the dressing factor are then expressed as
$\displaystyle\epsilon_{n}=\frac{\Delta_{\text{opt}}^{2}}{\Omega_{\text{opt}}}\delta_{n,0}=\Omega_{\text{opt}}S_{\text{opt}}\delta_{n,0},$
(46) $\displaystyle
S(\beta)=S_{\text{opt}}\coth\left(\frac{\beta\Omega_{\text{opt}}}{2}\right).$
(47)
The linebroadening functions are written as
$g_{n}(t)=S_{\text{opt}}\left\\{\coth\left(\frac{\beta\Omega_{\text{opt}}}{2}\right)\left(1-\delta_{n,0}\cos\Omega_{\text{opt}}t\right)+\textrm{i}\delta_{n,0}\sin\Omega_{\text{opt}}t\right\\}.$
(48)
After straightforward calculations, the linear response function can be
written
$J(t)=N\mu^{2}e^{-S(\beta)}\left[e^{-\textrm{i}\omega_{k=0}t}+\tilde{\rho}(t)\sum_{n=-\infty}^{\infty}M_{n}(\beta)e^{-\textrm{i}n\Omega_{\text{opt}}t}\right],$
(49)
where $\tilde{\rho}(t)=\frac{1}{N}\sum_{k}e^{-\textrm{i}\omega_{k}t}$ is the
Fourier transform of the polaron density of states and where the constants
$M_{n}(\beta)$ are defined by
$M_{n}(\beta)=I_{n}\left(S_{\text{opt}}/\sinh(\beta\Omega_{\text{opt}}/2)\right)e^{n\Omega_{\text{opt}}\beta/2}-\delta_{n,0},$
(50)
where $I_{n}(x)$ are the modified Bessel functions of the first
kind.Abramowitz and Stegun (1972) The linear absorption $\alpha(\omega)$ is
directly proportional to the Fourier transform of the function $J(t)$ and is
written
$\alpha(\omega)=N\mu^{2}e^{-S(\beta)}\left(\delta\left(\omega-\omega_{k=0}\right)+\sum_{n=-\infty}^{\infty}M_{n}(\beta)\rho\left(\omega-n\Omega_{\text{opt}}\right)\right).$
(51)
Therefore the absorption spectrum is given by the sum of a delta-like ZPL and
a series of peaks corresponding to the Franck-Condon vibrational progression.
For $T=0$ K, the ZPL is not broadened by the bath while the shape of the other
bands is given by the polaron density of states
$\rho(\omega)=\frac{1}{N}\sum_{k}\delta(\omega-\omega_{k})$. From Eq. (19), it
is easy to deduce the polaron density of states in the limit
$N\rightarrow\infty$; it is written
$\rho(\omega)=\frac{1}{\pi\sqrt{4\tilde{J}^{2}-\left(\omega-\tilde{\omega}_{0}\right)^{2}}}\quad\text{if}\quad|\omega-\tilde{\omega}_{0}|<2\tilde{J},$
(52)
which diverges when $\omega=\tilde{\omega}_{0}\pm 2\tilde{J}$, resulting in the
double-peak shape observed in Fig. 2. The expressions for the linear
response obtained here are very similar to the expressions given in Ref. 85
which uses a similar derivation. The main difference appears for the ZPL which
in Ref. 85 is broadened by the phonons even at $T=0$ K while in our case it is
not since $M_{0}(\infty)=0$. Note that in Ref. 85, the regime explored
corresponds to a strong Huang-Rhys coupling constant for which the polaron
bandwidth vanishes. Eqs. (51) and (52) fully capture the behavior of the
linear absorption in Fig. 2 and in Ref. 23. Therefore, this work gives a
theoretical basis to understand the nature of the peaks observed in the linear
absorption.
Next, the previous results are used to understand the physical meaning of the
experimental measurements of ACN. In ACN, the experimental linear absorption
spectrum shows at low temperature one peak located at
$1666~{}\textrm{cm}^{-1}$ and a second band at $1650~{}\textrm{cm}^{-1}$. The
band at $1666~{}\textrm{cm}^{-1}$ shows some marked features with three
subbands which disappear at high temperature and the band at
$1650~{}\textrm{cm}^{-1}$ decreases rapidly as a function of temperature (see
for example Refs.17, 18, and 30). This behavior is very similar to the result
presented in the present study using the vibrational Holstein model. As one
can expect from the Franck-Condon picture, the intensity of the low-frequency
band, which corresponds to the ZPL, decreases sharply as a function of
temperature while the intensity of the one-phonon band increases. In addition,
the width of the one-phonon band, which is given by the polaron density of
states (Eq. (52)), decreases as a function of temperature due to an increase of
the dressing factor. This is consistent with the disappearance of the
substructure observed in the $1666~{}\textrm{cm}^{-1}$ band of ACN. Note that
in the experiment, the frequency difference between the two peaks is much
smaller than the frequency difference in the 1D Holstein model. This difference
could originate from the fact that the 3D structure of the crystal was not
included here, and additional dipole-dipole couplings need to be taken into
account.Hamm and Edler (2006); Hamm (2009) Similarly, the experimental 2D-IR
spectrum of ACN has some marked similarities with the 2D-IR spectrum predicted
by the vibrational Holstein model (see for example Refs. 22 and 30). It clearly
shows a pair of negative/positive peaks located at $1650~{}\textrm{cm}^{-1}$,
which would correspond to the ZPL, with an additional side peak which we have
interpreted as a result of exciton-exciton scattering. In addition, there are
no peaks on the diagonal corresponding to the $1666~{}\textrm{cm}^{-1}$ band
(one-phonon band in the Holstein model).
### IV.2 Acoustical phonons
For the acoustical phonon model, introducing the coupling constant
$S_{\text{ac}}=2\Delta_{\text{ac}}^{2}/\Omega_{c}^{2}$, the dressing factor
and the Stokes shifts are given by
$\displaystyle S(\beta)=\frac{8S_{\text{ac}}}{\pi}\int_{0}^{1}\textrm{d}x\
x\sqrt{1-x^{2}}\coth\left(\beta\Omega_{c}x/2\right),$ (53)
$\displaystyle\epsilon_{n}=\Omega_{c}S_{\text{ac}}\left(\delta_{n,0}+\frac{1}{2}\delta_{n,1}+\frac{1}{2}\delta_{n,-1}\right),$
(54)
giving rise to two types of couplings: a local anharmonic coupling and a
nearest-neighbor anharmonic coupling. The effect of these two types of
anharmonic couplings on the two-exciton states and the transport properties
has been studied in detail.Pouthier (2003); Pouthier and Falvo (2004); Falvo,
Pouthier, and Eilbeck (2006) It is these couplings that are responsible for
the appearance of two bound states in the polaron energy spectrum as seen in
Fig. 4.
#### IV.2.1 Linear absorption
Using Eqs. (32) and (33) one can easily show that in the limit
$N\rightarrow\infty$, the linear optical response is written
$J(t)=N\mu^{2}\sum_{n}e^{-\textrm{i}\tilde{\omega}_{0}t}e^{-g_{n}(t)}(-\textrm{i})^{n}J_{n}\left(2\tilde{J}(\beta)t\right),$
(55)
where $J_{n}(x)$ are the Bessel functions of the first kind. For the case of a
vanishing hopping constant $J=0$, i.e. the anti-adiabatic limit, the optical
response is written
$J(t)=N\mu^{2}\exp\left(-\textrm{i}\tilde{\omega}_{0}t-g_{0}(t)\right),$ (56)
where the linebroadening function $g_{0}(t)$ is written
$g_{0}(t)=\frac{4S_{\text{ac}}}{\pi}\int_{0}^{1}\frac{\textrm{d}x}{x}\sqrt{1-x^{2}}\left\\{\coth\left(\beta\Omega_{c}x/2\right)(1-\cos\Omega_{c}tx)+\textrm{i}\sin\Omega_{c}tx\right\\}.$
(57)
Next, two situations are considered: the case of the strong coupling limit
where $S_{\text{ac}}\gg 1$ and the case of weak coupling limit
$S_{\text{ac}}\ll 1$.
In the strong coupling limit $S_{\text{ac}}\gg 1$, the dynamics is controlled
by the behavior of the correlation function at short times. Therefore, a
second-order Taylor expansion of the linebroadening function can be performed
in the time variable $\tau=\Omega_{c}t$, giving
$g_{0}(\tau)=\textrm{i}S_{\text{ac}}\tau+S(\beta)\tau^{2}/4+\mathcal{O}(\tau^{3}).$
(58)
The absorption spectrum is then Gaussian
$\alpha(\omega)=N\mu^{2}\sqrt{\frac{4\pi}{S(\beta)\Omega_{c}^{2}}}\exp\left(-\frac{(\omega-\omega_{0})^{2}}{S(\beta)\Omega_{c}^{2}}\right).$
(59)
This expression is valid over the whole temperature range.
In the weak coupling limit $S_{\text{ac}}\ll 1$, the dynamics is controlled by
the behavior of the correlation function at long times, which strongly depends
on the temperature range.Duke and Mahan (1965) For the high-temperature case
$\beta\Omega_{c}\ll 1$ the linebroadening function is written
$g_{0}(\tau)=\frac{8S_{\text{ac}}}{\pi\beta\Omega_{c}}\int_{0}^{1}\frac{\textrm{d}x}{x^{2}}\sqrt{1-x^{2}}(1-\cos\tau
x),$ (60)
which gives an analytical but cumbersome expression as a function of the
Bessel functions and the Struve functions.Abramowitz and Stegun (1972) At long
timescales one can use the continuum approximation, which only considers the
effect of the low-frequency phonons $x=\Omega_{q}/\Omega_{c}\sim
q/2\rightarrow 0$. With this approximation the linebroadening function is
written
$g_{0}(\tau)=\frac{4S_{\text{ac}}}{\beta\Omega_{c}}|\tau|.$ (61)
The linebroadening function therefore increases linearly with time when
$\tau\rightarrow\infty$. The absorption spectrum has a Lorentzian shape with a
width proportional to the temperature and shifted by the bath reorganizational
energy
$\alpha(\omega)=N\mu^{2}\frac{8S_{\text{ac}}/\beta}{\left(\omega-\omega_{0}-\epsilon_{0}\right)^{2}+\left(4S_{\text{ac}}/\beta\right)^{2}}.$
(62)
In the low temperature regime $\beta\Omega_{c}\gg 1$, the linebroadening
function is written as
$g_{0}(\tau)=\frac{4S_{\text{ac}}}{\pi}\int_{0}^{1}\frac{\textrm{d}x}{x}\sqrt{1-x^{2}}\left(1-e^{-\textrm{i}\tau
x}\right).$ (63)
An asymptotic expansion of this expression gives
$g_{0}(\tau)=\frac{4S_{\text{ac}}}{\pi}\left(\gamma-1+\log
2+\log\tau\right)+2\textrm{i}S_{\text{ac}}\operatorname{sgn}\tau,$ (64)
where $\operatorname{sgn}\tau$ is the sign function and where $\gamma\approx
0.577216$ is Euler’s constant. The linebroadening function increases
logarithmically with time. Introducing the coupling constant
$\delta=4S_{\text{ac}}/\pi$, for a weak coupling $\delta<1$ the Fourier
transform of the function $J(t)$ can be calculated and the absorption spectrum
is then given by a power law for
$\omega>\tilde{\omega}_{0}=\omega_{0}-\epsilon_{0}$
$\alpha(\omega)=\frac{2\sin(\delta\pi)\Gamma(1-\delta)e^{-\delta(\gamma-1)}}{(2\Omega_{c})^{\delta}(\omega-\tilde{\omega}_{0})^{1-\delta}}\Theta(\omega-\tilde{\omega}_{0})$
(65)
where $\Gamma(x)$ is the gamma function. Fig. 8 shows a direct comparison of
the absorption spectrum at $T=0$ K computed directly from the linear response
function with the approximation of Eq. (65). The spectrum was centered around
its maximum located at $\tilde{\omega}_{0}=\omega_{0}-\epsilon_{0}$. To
highlight the effect of the phonon broadening, the lifetime $T_{1}$ was
increased to 50 ps. For a coupling constant
$\Delta_{\text{ac}}=50~{}\textrm{cm}^{-1}$, the long-time approximation
perfectly matches the full calculation for frequencies lower than 50
$\textrm{cm}^{-1}$. As the coupling constant $\Delta_{\text{ac}}$ decreases,
discrepancies occur around the maximum of the absorption band, corresponding to
the long-time behavior of the response function. This originates from the ad-hoc
relaxation $T_{1}$ introduced in the numerical calculation, which is not
included in Eq. (65). Note that in the low temperature regime, the effect of
the remaining coupling neglected in this work might control the spectral
bandwidth.
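For a numerical check of this power-law regime, Eq. (65) can be coded directly; the sketch below is ours (`scipy.special.gamma` provides $\Gamma$), and for the parameters of Fig. 8 the weak-coupling condition $\delta=4S_{\text{ac}}/\pi<1$ is satisfied:

```python
import numpy as np
from scipy.special import gamma as Gamma

EULER = 0.5772156649015329                # Euler's constant

def alpha_powerlaw(omega, omega0_t, S_ac, Omega_c):
    """Zero-temperature power-law absorption, Eq. (65), valid for
    delta = 4*S_ac/pi < 1 and omega > omega0_t."""
    delta = 4.0 * S_ac / np.pi
    dw = omega - omega0_t
    out = np.zeros_like(omega, dtype=float)
    pos = dw > 0
    out[pos] = (2.0 * np.sin(delta * np.pi) * Gamma(1.0 - delta)
                * np.exp(-delta * (EULER - 1.0))
                / ((2.0 * Omega_c) ** delta * dw[pos] ** (1.0 - delta)))
    return out
```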
For $J\neq 0$, effects from the polaron dispersion should arise through the
sum over the index $n$ in Eq. (55). However, as $n$ increases, the
linebroadening function grows quickly and its contribution to the response
function $J(t)$ decreases. For example, in the weak-
coupling limit, at high-temperature and using the continuum approximation the
linebroadening function is written
$g_{n}(\tau)=\frac{2S_{\text{ac}}}{\beta\Omega_{c}}\left(|\tau-2n|+|\tau+2n|\right),$
(66)
which increases linearly with $n$. In this limit, the contribution of the
polaron bandwidth is then negligible. In the low-temperature limit, $g_{n}(0)$
increases with $\log n$ and only small effects of $J$ on the absorption
spectrum should be expected. Fig. 9 shows the linebroadening function
$g_{n}(t)$ as a function of time $t$ and distance $n$ in the low temperature
regime ($T=0$ K) and in the high temperature regime ($T=300$ K). In the low
temperature regime, the real part of the linebroadening function shows clearly
a logarithmic behavior for all $n$ at long timescales. The imaginary part has
a step-function behavior. In the high-temperature limit, the real part of the
linebroadening function is constant for $\Omega_{c}t<2n$ and then increases
linearly with time.
Figure 8: Linear absorption spectrum as a function of the phonon coupling
$\Delta_{\text{ac}}$ constant for
$\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$, $J=0~{}\textrm{cm}^{-1}$,
$T=0$ K. To show the specific effect of the phonon broadening the relaxation
time was increased to $T_{1}=50$ ps. The dashed line corresponds to the
absorption spectrum computed from Eq. (65). Figure 9: Linebroadening function
$g_{n}(t)$ computed from the acoustical phonon model for
$\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$ and
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$ as a function of the time $t$ and
distance $n$ for $T=0$ K and $T=300$ K.
#### IV.2.2 Nonlinear spectra
To simplify, the case $J=0$ is considered. Since acoustical phonons are
considered, even if the sites are not directly coupled in this limit through
the hopping constant, coupling with the phonon bath can introduce long-range
phonon-mediated correlations. For $J=0$, the sum of GSB and ESE response
functions is given by
$R_{1}+R_{2}=N\mu^{4}e^{\textrm{i}\tilde{\omega}_{0}(t_{1}-t_{3})}\sum_{m}\left(e^{-g^{(1)}_{0,m,0}}+e^{-g^{(2)}_{m,-m,m}}\right).$
(67)
If there is no coupling to the bath, this contribution to the nonlinear signal
scales as $N^{2}$. For a weak coupling with the bath, the scaling of this
contribution will strongly depend on the temperature. In the low temperature
regime, the linebroadening function scales as $g_{m}\approx\delta\log m$. In
this case, the sum of the ESE and GSB contributions scales as $N^{2-\delta}$.
At high-temperature, the linebroadening function scales as $g_{m}\sim m$ and
the sum of the ESE and GSB contributions scales as $N$. The ESA response
function is written as
$\displaystyle R_{3}$
$\displaystyle=2N\mu^{4}e^{\textrm{i}\tilde{\omega}_{0}(t_{1}-t_{3})}\left(e^{-2\textrm{i}(\epsilon_{0}+A)t_{3}}-1\right)e^{-g^{(3)}_{0,0,0}}$
$\displaystyle+2N\mu^{4}e^{\textrm{i}\tilde{\omega}_{0}(t_{1}-t_{3})}\left(e^{-2\textrm{i}\epsilon_{1}t_{3}}-1\right)\left(e^{-g^{(3)}_{1,-1,1}}+e^{-g^{(3)}_{0,1,0}}\right)$
$\displaystyle+N\mu^{4}e^{\textrm{i}\tilde{\omega}_{0}(t_{1}-t_{3})}\sum_{m}\left(e^{-g^{(3)}_{0,m,0}}+e^{-g^{(3)}_{m,-m,m}}\right).$
(68)
The first two terms scale as $N$ while the last term is almost identical to
the sum of the ESE and GSB contributions. In fact this term completely
compensates Eq. (67) if $g^{(1)}_{0,m,0}=g^{(3)}_{0,m,0}$ and
$g^{(2)}_{m,-m,m}=g^{(3)}_{m,-m,m}$. In this case, the spectrum is given by one
positive peak located on the diagonal and two negative peaks shifted due to
the two types of anharmonicity. However if $g^{(1)}_{0,m,0}\neq
g^{(3)}_{0,m,0}$ or $g^{(2)}_{m,-m,m}\neq g^{(3)}_{m,-m,m}$ this is not true
anymore. In fact for the ESE contribution the difference can be expressed as
$\displaystyle e^{-g^{(2)}_{m,-m,m}}-e^{-g^{(3)}_{m,-m,m}}$
$\displaystyle\propto e^{-g_{m}^{*}(t_{3})}-e^{-g_{m}(t_{3})}$
$\displaystyle\propto\textrm{i}\sin(g^{\prime\prime}_{m}(t_{3})),$ (69)
where $g^{\prime\prime}_{m}(t)$ is the imaginary part of the linebroadening
function. The imaginary part of the linebroadening function in the long time
approximation is given by
$g^{\prime\prime}_{m}(t_{3})\approx
2S_{\text{ac}}\operatorname{sgn}(\Omega_{c}t_{3}-2m),$ (70)
and does not vanish if $\Omega_{c}t_{3}$ is larger than $2m$. Note that the
difference appears with a $\pi/2$ dephasing which originates from the complex
factor $\textrm{i}=\sqrt{-1}$ in Eq. (69). Consequently, this contribution to
the spectrum is dispersive and results in a pair of positive and negative
peaks located on the diagonal and along the $\omega_{3}$ axis, as noticed in
Fig. 6. This effect, however, decreases quickly as the temperature increases
and the lineshape function decays faster; it can only be seen at very low
temperature, as observed in Fig. 6.
In Fig. 7, strong differences are observed between the case of an
isolated site $N=1$ and the case of a 1D chain coupled to an acoustical phonon
bath. These differences originate mostly from the bath mediated correlations.
To quantify the time evolution of the 2D spectrum, the CLS has been computed.
The CLS is commonly used as a metric to quantify the fluctuation timescales
and extract the FFCF from 2D spectra.Kwak _et al._ (2007); Kwak, Rosenfeld,
and Fayer (2008); Roy, Pshenichnikov, and Jansen (2011); Falvo (2016) The CLS
for the isolated site and the 1D chain are represented on the left panel of
Fig. 10. For $N=1$ the CLS decays quickly over the first 200 fs with an
oscillation corresponding to the frequency $\Omega_{c}$. This behavior is
characteristic of an underdamped Brownian oscillator. For the 1D chain, the
CLS still exhibits a fast decay over the first 200 fs but then it decays on a
much slower timescale. A convenient way to interpret these results is to
introduce the FFCF for a delocalized system
$D_{n}(t)=\langle\hat{v}_{n}(t)\hat{v}_{0}(0)\rangle,$ (71)
where
$\hat{v}_{n}=\frac{1}{\sqrt{N}}\sum_{q}(\Delta_{q}e^{-\textrm{i}qn}a_{q}^{\dagger}+\Delta_{q}^{*}e^{\textrm{i}qn}a_{q})$
is the frequency fluctuation operator of site $n$. This function measures the
correlation of the fluctuations between two sites separated by the distance
$n$ and after the time $t$. It can be written as
$D_{n}(t)=\frac{1}{N}\sum_{q}|\Delta_{q}|^{2}\left(\coth\left(\frac{\beta\Omega_{q}}{2}\right)\cos(\Omega_{q}t-qn)-\textrm{i}\sin(\Omega_{q}t-qn)\right).$
(72)
For the acoustical phonon model, in the high temperature limit and for
$N\rightarrow\infty$, the bath correlation function is written as a function
of the variable $\tau=\Omega_{c}t$ as
$D_{n}(\tau)=\frac{16\Delta_{\text{ac}}^{2}}{\pi\beta\Omega_{c}}\int_{0}^{1}\textrm{d}x\sqrt{1-x^{2}}\cos(\tau
x)T_{2n}\left(\sqrt{1-x^{2}}\right),$ (73)
where $T_{m}(x)$ are the Chebyshev polynomials of the first kind.Abramowitz
and Stegun (1972) For example for $n=0,1,2$, the first three functions
$D_{n}(\tau)$ are written
$\displaystyle
D_{0}(\tau)=\frac{8\Delta_{\text{ac}}^{2}}{\beta\Omega_{c}}\frac{J_{1}(\tau)}{\tau},$
(74) $\displaystyle
D_{1}(\tau)=\frac{8\Delta_{\text{ac}}^{2}}{\beta\Omega_{c}}\left(\frac{6J_{2}(\tau)}{\tau^{2}}-\frac{J_{1}(\tau)}{\tau}\right),$
(75) $\displaystyle
D_{2}(\tau)=\frac{8\Delta_{\text{ac}}^{2}}{\beta\Omega_{c}}\left(\frac{J_{1}(\tau)(\tau^{2}-120)}{\tau^{3}}-\frac{24J_{2}(\tau)(\tau^{2}-20)}{\tau^{4}}\right).$
(76)
For larger distances $n$ the correlation function can be in principle
calculated analytically but the corresponding expressions are cumbersome and
involve polynomials in $\tau$ of order $n$. The FFCFs are represented on the
right panel of Fig. 10. For $n=0$, the correlation function decays in a
similar manner as the CLS for $N=1$ with the same underdamped oscillation. The
CLS appears just shifted compared to the FFCF. This is not a surprise because
it has been shown that by including fast fluctuation processes the CLS
measures directly the scaled and shifted FFCF.Falvo (2016) As $n$ increases,
the maximum of the FFCF is located at $\Omega_{c}t=n$ and then decreases
quickly with underdamped oscillations. Looking at the maximum value of the
FFCF as a function of $n$, one can see that this maximum decreases slowly, in a
similar fashion as the CLS for the 1D chain. This shows that the bath-mediated
long-range correlations measured by the FFCFs are directly responsible for the
slow decay of the CLS in the 2D-IR spectrum.
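For completeness, Eq. (73) can be integrated numerically using $T_{2n}(y)=\cos(2n\arccos y)$; the sketch below is ours (with $1/\beta=k_{\text{B}}T$) and reproduces Eq. (74) for $n=0$, since $\int_{0}^{1}\sqrt{1-x^{2}}\cos(\tau x)\,\textrm{d}x=(\pi/2)J_{1}(\tau)/\tau$:

```python
import numpy as np
from scipy.integrate import quad

def D_n(tau, n, Delta_ac, Omega_c, T):
    """Site-resolved FFCF D_n(tau), Eq. (73) (high-temperature limit),
    with tau = Omega_c * t and T_{2n}(y) = cos(2n * arccos(y))."""
    kB = 0.695
    pref = 16.0 * Delta_ac**2 * kB * T / (np.pi * Omega_c)
    def integrand(x):
        y = np.sqrt(1.0 - x * x)
        return y * np.cos(tau * x) * np.cos(2.0 * n * np.arccos(y))
    val, _ = quad(integrand, 0.0, 1.0, limit=200)
    return pref * val
```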
Figure 10: Left panel: Center line slope of the 2D-IR spectra for the
acoustical model for $\Omega_{\text{ac}}=~{}100~{}\textrm{cm}^{-1}$,
$\Delta_{\text{ac}}=25~{}\textrm{cm}^{-1}$ and $T=300$ K, $J=-10$
$\textrm{cm}^{-1}$ as a function of the delay time $t_{2}$, for a 1D chain
(dashed line) and for a single isolated site $N=1$ (full line). Right panel:
bath correlation functions $D_{n}(t)$ as a function of time.
#### IV.2.3 Experimental implications
In Ref. 28, Edler and coworkers measured the pump-probe spectrum of a model
$\alpha$-helix in the NH spectral range. They observed two bound states that
were interpreted as the signature of the two anharmonic couplings from Eq.
(54): one anharmonic coupling that creates a pair of excitons located on the
same site and one anharmonic coupling that creates a pair of excitons located
on two nearest-neighbor sites. A similar observation was then made by Bodis
and coworkers on a model $\beta$-sheet peptide.Bodis _et al._ (2009) The
model used to interpret the experiment of Ref. 28 relied on a slightly
different Hamiltonian than the one presented in this paper, as it also included
the fluctuations of the anharmonicity due to the phonons. To explain the
experimental observations, a very strong coupling between the NH vibrations and
the acoustical phonon bath was assumed, which within the present model would
correspond to a coupling constant of $S_{\text{ac}}=1.2$. However, the
interpretation used in Ref. 28 did not take into account the spectral
broadening induced by the bath on the linear and the non-linear spectra. In
the high temperature limit, the dressing factor is given by
$S(\beta)=4S_{\text{ac}}k_{\text{B}}T/\Omega_{\text{ac}}$. Using Eq. (59) we
can compute the FWHM of the linear absorption spectrum in the strong coupling
limit. It is given by
$\Delta\omega=4\sqrt{S_{\text{ac}}\Omega_{\text{ac}}k_{\text{B}}T\ln 2}.$ (77)
Therefore the FWHM is proportional to the square root of the temperature. For
a temperature of 300 K and a coupling constant $S_{\text{ac}}=1.2$ the FWHM is
$525~{}\textrm{cm}^{-1}$. This value, in complete disagreement with the
measurements, demonstrates that the model is unable to explain all the
experimental observations. In addition, as was observed in Ref. 28, the
spectral bandwidth in the NH range does not change significantly with
temperature. This shows that the Davydov Hamiltonian cannot explain the
emergence of two bound states in the NH pump-probe spectra of $\alpha$-helices
and $\beta$-sheets. Therefore additional theoretical work is needed to explain
these bound states.
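As a quick numerical check of Eq. (77), using $k_{\text{B}}\approx 0.695~\textrm{cm}^{-1}/$K (small rounding differences with the value quoted above are expected):

```python
import numpy as np

kB = 0.695                           # cm^-1 per Kelvin
S_ac, Omega_ac, T = 1.2, 100.0, 300.0
fwhm = 4.0 * np.sqrt(S_ac * Omega_ac * kB * T * np.log(2.0))
print(f"FWHM = {fwhm:.0f} cm^-1")    # ~527 cm^-1, consistent with the
                                     # ~525 cm^-1 quoted in the text
```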
## V Conclusions
In this article, a new methodology to calculate the non-linear response of
vibrational systems based on the small polaron approach is presented. This
approach relies on a unitary transformation which dresses the vibrational
excitations by a phonon cloud and is valid in the anti-adiabatic limit where
the hopping constant is small with respect to the phonon frequency. This
method makes it possible to calculate the optical response of large systems and
can explicitly describe bath-mediated correlations. It was used to calculate
the linear and non-linear spectroscopy of 1D model chains considering both
optical and acoustical phonon baths.
For the case of an optical phonon bath, a simple expression was derived for
the linear absorption. It shows that the absorption spectrum is characterized
by a main zero-phonon line and a series of $n$th-phonon lines. The lineshape
of the $n$th-phonon lines is given by the polaron density of states. Here, the
case of a 1D lattice has been considered and the density of states is
characterized by a double peak. This result can be transposed to the more
complicated case of an $n$-dimensional (nD) lattice. The density of states will
strongly depend on the dimensionality of the problem and therefore impact the
shape of the absorption spectrum. This confirms the result of Hamm and
EdlerHamm and Edler (2006) which states that 3D effects might play an
important role in explaining the linear absorption spectrum of ACN. The
approach presented in this article represents a first step towards a full
understanding of the linear and non-linear spectra of crystalline acetanilide.
It will be extended to explicitly include the 3D structure of ACN, thereby
allowing a direct comparison with experiment.
For the case of the acoustical phonon bath, this article shows that in the
C$=$O spectral range, two bound states can be clearly visible in the 2D-IR
spectrum at low temperature, while only one is visible at ambient
temperature. Up to now, no experimental measurements of 2D-IR spectroscopy of
$\alpha$-helix polypeptides have been made at very low temperature in the
C$=$O spectral range. New measurements in this temperature range could
therefore bring new information on the vibrational dynamics of this system.
This article also shows that, for the case of acoustical phonons at ambient
temperature, the spectral diffusion measured from the 2D-IR spectrum appears
much slower for a lattice than for a single site. This can be explained by
bath-mediated correlations between distant sites. Experimental observation of
such a process could be performed on a model $\alpha$-helix, for example by
measuring the 2D-IR spectrum of the $\alpha$-helix and comparing it to that of
an isotope-labeled $\alpha$-helix, for which the spectrum of a single isolated
vibration coupled to the full phonon bath could be obtained. Finally, the
present model calls into question the validity of the model used to explain
the pump-probe spectra measured in the N$-$H spectral range for a model
$\alpha$-helix and a model $\beta$-sheet peptide. The model predicts a
spectral bandwidth that increases with the square root of the temperature, a
behavior in disagreement with the experimental measurements.
The main limitation of this work resides in the fact that the residual
coupling between the polaron and the bath has been neglected. This coupling
can modify the absorption spectrum and the 2D-IR spectrum. Further theoretical
developments are necessary to fully account for this coupling in the linear
and non-linear response. In particular, inclusion of the remaining coupling by
perturbation theory seems a very promising approach.Pouthier and Falvo (2004);
Pouthier (2013); Yalouz and Pouthier (2016)
###### Acknowledgements.
The author gratefully acknowledges financial support by the Agence Nationale
de la Recherche (ANR) grants ANR-11-BS04-0027 and ANR-16-CE29-0025 as well as
the use of the computing center MésoLUM of the LUMAT research federation (FR
LUMAT 2764).
## Appendix A Optical and Acoustical phonon models
In the optical phonon model, the bath is characterized by a set of vibrational
coordinates $u_{n}$, corresponding momenta $p_{n}$, and a harmonic frequency
$\Omega_{\text{opt}}$. The bath Hamiltonian is then written as
$\hat{H}_{b}=\sum_{n}\left(\frac{p_{n}^{2}}{2}+\frac{\Omega_{\text{opt}}^{2}}{2}u_{n}^{2}\right).$
(78)
I will consider the exciton frequency of site $n$ to be linearly coupled to
the $n$th bath mode. Therefore, the system-bath coupling Hamiltonian is written as
$\hat{H}_{vb}=\sum_{n}\chi u_{n}b_{n}^{\dagger}b_{n},$ (79)
where $\chi$ is a coupling constant. Using a plane wave basis and the phonon
creation and annihilation operators, the expression for the bath coordinates
$u_{n}$ is given by
$\displaystyle
u_{n}=\frac{1}{\sqrt{2N\Omega_{\text{opt}}}}\sum_{q}\left(e^{\textrm{i}qn}a_{q}+e^{-\textrm{i}qn}a_{q}^{\dagger}\right).$
(80)
Similarly, the corresponding momentum $p_{n}$ is written as
$p_{n}=\textrm{i}\sqrt{\frac{\Omega_{\text{opt}}}{2N}}\sum_{q}\left(e^{-\textrm{i}qn}a^{\dagger}_{q}-e^{\textrm{i}qn}a_{q}\right).$
(81)
Using these expressions, one can immediately obtain the expression for the
bath and coupling Hamiltonians Eqs. (3) and (4) with
$\Omega_{q}=\Omega_{\text{opt}}$ and
$\Delta_{q}=\chi/\sqrt{2\Omega_{\text{opt}}}$.
For the model of acoustical phonons, the bath Hamiltonian is written as a
function of the bath modes $u_{n}$ as
$\hat{H}_{b}=\sum_{n}\left(\frac{p_{n}^{2}}{2}+\frac{W}{2}\left(u_{n+1}-u_{n}\right)^{2}\right),$
(82)
where $W$ is a coupling constant. Following Davydov (1985), the system-bath
coupling Hamiltonian is written as
$\hat{H}_{vb}=\sum_{n}\chi\left(u_{n+1}-u_{n-1}\right)b_{n}^{\dagger}b_{n}.$
(83)
Using the phonon creation and annihilation operators, the expressions for the
bath coordinates and momenta are given by
$\displaystyle
u_{n}=\sum_{q}\frac{1}{\sqrt{2N\Omega_{q}}}\left(e^{\textrm{i}qn}a_{q}+e^{-\textrm{i}qn}a_{q}^{\dagger}\right),$
(84) $\displaystyle
p_{n}=\sum_{q}\textrm{i}\sqrt{\frac{\Omega_{q}}{2N}}\left(e^{-\textrm{i}qn}a^{\dagger}_{q}-e^{\textrm{i}qn}a_{q}\right),$
(85)
where $\Omega_{q}=\Omega_{\text{ac}}|\sin q/2|$ and the cutoff frequency
is given by $\Omega_{\text{ac}}=\sqrt{4W}$. One can then immediately obtain
the expression for the coupling Hamiltonian Eq. (4) with
$\Delta_{q}=-2\textrm{i}\chi W^{-1/4}\sqrt{|\sin q/2|}\cos q/2.$ (86)
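As a numerical illustration, the dispersions and couplings of both phonon models can be tabulated on the discrete Brillouin zone of an $N$-site chain. The following Python sketch implements Eqs. (80)–(86); all parameter values ($N$, $\chi$, $W$, $\Omega_{\text{opt}}$) are placeholders rather than the values used in this article.

```python
import numpy as np

# Illustrative sketch: evaluate the phonon dispersions Omega_q and the
# couplings Delta_q of Eqs. (80)-(86) on the discrete Brillouin zone of an
# N-site chain. All parameter values below are placeholders.
N = 100          # number of lattice sites
chi = 1.0        # exciton-phonon coupling constant
W = 1.0          # acoustic spring constant
Omega_opt = 1.0  # optical phonon frequency

# Allowed wave vectors for periodic boundary conditions
q = 2.0 * np.pi * np.arange(N) / N - np.pi

# Optical branch: flat dispersion, q-independent coupling
Omega_q_opt = np.full(N, Omega_opt)
Delta_q_opt = np.full(N, chi / np.sqrt(2.0 * Omega_opt))

# Acoustic branch: Omega_q = Omega_ac |sin(q/2)| with Omega_ac = sqrt(4W),
# and the coupling of Eq. (86)
Omega_ac = np.sqrt(4.0 * W)
Omega_q_ac = Omega_ac * np.abs(np.sin(q / 2.0))
Delta_q_ac = (-2.0j * chi * W**-0.25
              * np.sqrt(np.abs(np.sin(q / 2.0))) * np.cos(q / 2.0))
```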
## Appendix B $\textbf{k}_{\text{II}}$ response functions
The signal in the direction $\textbf{k}_{\text{II}}$ can be written as the sum
of three contributions
$R_{\textbf{k}_{\text{II}}}(t_{1},t_{2},t_{3})=R_{4}+R_{5}-R_{6},$ (87)
where $R_{4}$ and $R_{5}$ are respectively the ground-state bleaching (GSB)
and excited-state emission (ESE) contributions, while $R_{6}$ is the
excited-state absorption (ESA). These response functions are written as
$\displaystyle R_{4}(t_{1},t_{2},t_{3})$
$\displaystyle=\frac{\mu^{4}}{N}\sum_{k_{1}k_{2}}e^{-\textrm{i}\omega_{k_{1}}t_{1}-\textrm{i}\omega_{k_{2}}t_{3}}C^{(4)}_{k_{1}k_{2}}(t_{1},t_{2},t_{3})$
(88) $\displaystyle R_{5}(t_{1},t_{2},t_{3})$
$\displaystyle=\frac{\mu^{4}}{N}\sum_{k_{1}k_{2}}e^{-\textrm{i}\omega_{k_{1}}(t_{1}+t_{2}+t_{3})+\textrm{i}\omega_{k_{2}}t_{2}}C^{(5)}_{k_{1}k_{2}}(t_{1},t_{2},t_{3})$
(89) $\displaystyle R_{6}(t_{1},t_{2},t_{3})$
$\displaystyle=\frac{\mu^{4}}{N^{2}}\sum_{k_{1}k_{2}k_{3}\sigma}e^{-\textrm{i}\omega_{k_{1}}(t_{1}+t_{2})+\textrm{i}\omega_{k_{2}}(t_{2}+t_{3})-\textrm{i}\omega_{k_{3}\sigma}t_{3}}C^{(6)}_{k_{1}k_{2}k_{3}}(t_{1},t_{2},t_{3})A_{k_{1}k_{3}\sigma}A_{k_{2}k_{3}\sigma}$
(90)
with the functions $C^{(i)}(t_{1},t_{2},t_{3})$ defined by
$\displaystyle C^{(4)}_{k_{1}k_{2}}(t_{1},t_{2},t_{3})$
$\displaystyle=\sum_{m_{1}m_{2}m_{3}}e^{\textrm{i}k_{1}m_{1}+\textrm{i}k_{2}m_{3}}e^{-g^{(4)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})}$
(91) $\displaystyle C^{(5)}_{k_{1}k_{2}}(t_{1},t_{2},t_{3})$
$\displaystyle=\sum_{m_{1}m_{2}m_{3}}e^{\textrm{i}k_{1}(m_{1}+m_{2}+m_{3})-\textrm{i}k_{2}m_{2}}e^{-g^{(5)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})}$
(92) $\displaystyle C^{(6)}_{k_{1}k_{2}k_{3}}(t_{1},t_{2},t_{3})$
$\displaystyle=\sum_{m_{1}m_{2}m_{3}}e^{\textrm{i}k_{1}(m_{1}+m_{2})-\textrm{i}k_{2}(m_{2}+m_{3})+\textrm{i}k_{3}m_{3}}e^{-g^{(6)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})}$
(93)
and where the line-broadening functions are given by
$\displaystyle
g^{(4)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})=g_{m_{1}}(t_{1})+g_{m_{2}}(t_{2})+g_{m_{3}}(t_{3})$
$\displaystyle-
g_{m_{1}+m_{2}}(t_{1}+t_{2})-g_{m_{2}+m_{3}}(t_{2}+t_{3})+g_{m_{1}+m_{2}+m_{3}}(t_{1}+t_{2}+t_{3})$
(94) $\displaystyle
g^{(5)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})=g_{m_{1}}(t_{1})+g^{*}_{m_{2}}(t_{2})+g^{*}_{m_{3}}(t_{3})$
$\displaystyle-
g_{m_{1}+m_{2}}(t_{1}+t_{2})-g^{*}_{m_{2}+m_{3}}(t_{2}+t_{3})+g_{m_{1}+m_{2}+m_{3}}(t_{1}+t_{2}+t_{3})$
(95) $\displaystyle
g^{(6)}_{m_{1}m_{2}m_{3}}(t_{1},t_{2},t_{3})=g_{m_{1}}(t_{1})+g^{*}_{m_{2}}(t_{2})+g_{m_{3}}(t_{3})$
$\displaystyle-
g_{m_{1}+m_{2}}(t_{1}+t_{2})-g^{*}_{m_{2}+m_{3}}(t_{2}+t_{3})+g_{m_{1}+m_{2}+m_{3}}(t_{1}+t_{2}+t_{3})$
(96)
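For reference, once the elementary line-broadening function $g_{m}(t)$ is available, the three-point combinations of Eqs. (94)–(96) are direct to implement. The sketch below assumes $g_{m}(t)$ is supplied as a callable `g(m, t)` returning complex values; its explicit form is defined in the main text.

```python
import numpy as np

def g4(g, m1, m2, m3, t1, t2, t3):
    """Three-point combination of Eq. (94); `g(m, t)` is assumed to return
    the complex line-broadening function g_m(t) defined in the main text."""
    return (g(m1, t1) + g(m2, t2) + g(m3, t3)
            - g(m1 + m2, t1 + t2) - g(m2 + m3, t2 + t3)
            + g(m1 + m2 + m3, t1 + t2 + t3))

def g5(g, m1, m2, m3, t1, t2, t3):
    """Eq. (95): same structure, with complex conjugation on the terms
    carrying a star in the text."""
    return (g(m1, t1) + np.conj(g(m2, t2)) + np.conj(g(m3, t3))
            - g(m1 + m2, t1 + t2) - np.conj(g(m2 + m3, t2 + t3))
            + g(m1 + m2 + m3, t1 + t2 + t3))

def g6(g, m1, m2, m3, t1, t2, t3):
    """Eq. (96)."""
    return (g(m1, t1) + np.conj(g(m2, t2)) + g(m3, t3)
            - g(m1 + m2, t1 + t2) - np.conj(g(m2 + m3, t2 + t3))
            + g(m1 + m2 + m3, t1 + t2 + t3))
```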
## References
* Mahan (1981) G. D. Mahan, _Many-particle physics_ (Kluwer Academic / Plenum Publishers, New York, 1981).
* Holstein (1959a) T. Holstein, Ann. Phys. 8, 325 (1959a).
* Holstein (1959b) T. Holstein, Ann. Phys. 8, 343 (1959b).
* Spano (2010) F. C. Spano, Acc. Chem. Res. 43, 429 (2010).
* Huynh _et al._ (2013) T. D. Huynh, K.-W. Sun, M. Gelin, and Y. Zhao, J. Chem. Phys. 139, 104103 (2013).
* Lu and Mukamel (1991) N. Lu and S. Mukamel, J. Chem. Phys. 95, 1588 (1991).
* Sun _et al._ (2015) K.-W. Sun, M. F. Gelin, V. Y. Chernyak, and Y. Zhao, J. Chem. Phys. 142, 212448 (2015).
* Chorošajev _et al._ (2014) V. Chorošajev, A. Gelzinis, L. Valkunas, and D. Abramavicius, J. Chem. Phys. 140, 244108 (2014).
* Chorosajev, Rancova, and Abramavicius (2016) V. Chorosajev, O. Rancova, and D. Abramavicius, Phys. Chem. Chem. Phys. 18, 7966 (2016).
* Yamagata and Spano (2014) H. Yamagata and F. C. Spano, J. Phys. Chem. Lett. 5, 622 (2014).
* Barford and Marcus (2014) W. Barford and M. Marcus, J. Chem. Phys. 141, 164101 (2014).
* Okamoto _et al._ (1992) H. Okamoto, T. Mitani, K. Toriumi, and M. Yamashita, Phys. Rev. Lett. 69, 2248 (1992).
* Fillaux (1981) F. Fillaux, Chem. Phys. 62, 287 (1981).
* Barthes _et al._ (1998) M. Barthes, H. N. Bordallo, J. Eckert, O. Maurus, G. de Nunzio, and J. Léon, J. Phys. Chem. B 102, 6177 (1998).
* Herrebout, Clou, and Desseyn (2001) W. A. Herrebout, K. Clou, and H. O. Desseyn, J. Phys. Chem. A 105, 4865 (2001).
* Careri _et al._ (1983) G. Careri, U. Buontempo, F. Carta, E. Gratton, and A. C. Scott, Phys. Rev. Lett. 51, 304 (1983).
* Careri _et al._ (1984) G. Careri, U. Buontempo, F. Galluzzi, A. C. Scott, E. Gratton, and E. Shyamsunder, Phys. Rev. B 30, 4689 (1984).
* Eilbeck, Lomdahl, and Scott (1984) J. C. Eilbeck, P. S. Lomdahl, and A. C. Scott, Phys. Rev. B 30, 4703 (1984).
* Alexander and Krumhansl (1986) D. M. Alexander and J. A. Krumhansl, Phys. Rev. B 33, 7172 (1986).
* Edler, Hamm, and Scott (2002) J. Edler, P. Hamm, and A. C. Scott, Phys. Rev. Lett. 88, 067403 (2002).
* Edler and Hamm (2002) J. Edler and P. Hamm, J. Chem. Phys. 117, 2415 (2002).
* Edler and Hamm (2003) J. Edler and P. Hamm, J. Chem. Phys. 119, 2709 (2003).
* Hamm and Edler (2006) P. Hamm and J. Edler, Phys. Rev. B 73, 094302 (2006).
* Davydov (1973) A. S. Davydov, J. Theor. Biol. 38, 559 (1973).
* Davydov (1985) A. S. Davydov, _Solitons in Molecular Systems_ (Springer Science, Dordrecht, 1985).
* Scott (1982) A. C. Scott, Phys. Rev. A 26, 578 (1982).
* Scott (1992) A. C. Scott, Phys. Rep. 217, 1 (1992).
* Edler _et al._ (2004) J. Edler, R. Pfister, V. Pouthier, C. Falvo, and P. Hamm, Phys. Rev. Lett. 93, 106405 (2004).
* Edler _et al._ (2005) J. Edler, V. Pouthier, C. Falvo, R. Pfister, and P. Hamm, in _Ultrafast Phenomena XIV_ , Springer Series in Chemical Physics, Vol. 79, edited by T. Kobayashi, T. Okada, T. Kobayashi, K. A. Nelson, and S. Silvestri (Springer, 2005) pp. 401–403.
* Hamm (2009) P. Hamm, J. Biol. Phys. 35, 17 (2009).
* Brown and Ivić (1989) D. W. Brown and Z. Ivić, Phys. Rev. B 40, 9876 (1989).
* Ivić _et al._ (1997) Z. Ivić, D. Kostić, Ž. Pržulj, and D. Kapor, J. Phys. Cond. Mat. 9, 413 (1997).
* Pouthier (2003) V. Pouthier, Phys. Rev. E 68, 021909 (2003).
* Pouthier and Falvo (2004) V. Pouthier and C. Falvo, Phys. Rev. E 69, 041906 (2004).
* Falvo and Pouthier (2005a) C. Falvo and V. Pouthier, J. Chem. Phys. 123, 184709 (2005a).
* Falvo and Pouthier (2005b) C. Falvo and V. Pouthier, J. Chem. Phys. 123, 184710 (2005b).
* Falvo and Pouthier (2005c) C. Falvo and V. Pouthier, J. Chem. Phys. 122, 014701 (2005c).
* Tsivlin, Meyer, and May (2006) D. V. Tsivlin, H.-D. Meyer, and V. May, J. Chem. Phys. 124, 134907 (2006).
* Tsivlin and May (2006) D. V. Tsivlin and V. May, J. Chem. Phys. 125, 224902 (2006).
* Tsivlin and May (2007) D. V. Tsivlin and V. May, Chem. Phys. 338, 150 (2007).
* Bodis _et al._ (2009) P. Bodis, E. Schwartz, M. Koepf, J. J. L. M. Cornelissen, A. E. Rowan, R. J. M. Nolte, and S. Woutersen, J. Chem. Phys. 131, 124503 (2009).
* Cruzeiro (2009) L. Cruzeiro, J. Biol. Phys. 35, 43 (2009).
* Goj and Bittner (2011) A. Goj and E. R. Bittner, J. Chem. Phys. 134, 205103 (2011).
* Miyazawa (1960) T. Miyazawa, J. Chem. Phys. 32, 1647 (1960).
* Fayer (2013) M. D. Fayer, ed., _Ultrafast Infrared Vibrational Spectroscopy_ (CRC Press, 2013).
* Hamm and Zanni (2011) P. Hamm and M. Zanni, _Concepts and Methods of 2D Infrared Spectroscopy_ (Cambridge University Press, Cambridge, 2011).
* Khalil, Demirdöven, and Tokmakoff (2003) M. Khalil, N. Demirdöven, and A. Tokmakoff, J. Phys. Chem. A 107, 5258 (2003).
* Loparo, Roberts, and Tokmakoff (2006) J. J. Loparo, S. T. Roberts, and A. Tokmakoff, J. Chem. Phys. 125, 194522 (2006).
* Wong _et al._ (2013) D. B. Wong, C. H. Giammanco, E. E. Fenn, and M. D. Fayer, J. Phys. Chem. B 117, 623 (2013).
* Bloem _et al._ (2012) R. Bloem, K. Koziol, S. A. Waldauer, B. Buchli, R. Walser, B. Samatanga, I. Jelesarov, and P. Hamm, J. Phys. Chem. B 116, 13705 (2012).
* Middleton _et al._ (2012) C. T. Middleton, P. Marek, P. Cao, C.-C. Chiu, S. Singh, A. M. Woys, J. J. de Pablo, D. P. Raleigh, and M. T. Zanni, Nat. Chem. 4, 355 (2012).
* Bandaria _et al._ (2010) J. N. Bandaria, S. Dutta, M. W. Nydegger, W. Rock, A. Kohen, and C. M. Cheatum, Proc. Natl. Acad. Sci. USA 107, 17974 (2010).
* Ghosh _et al._ (2014) A. Ghosh, J. Wang, Y. S. Moroz, I. V. Korendovych, M. Zanni, W. F. DeGrado, F. Gai, and R. M. Hochstrasser, J. Chem. Phys. 140, 235105 (2014).
* Kim and Hochstrasser (2009) Y. S. Kim and R. M. Hochstrasser, J. Phys. Chem. B 113, 8231 (2009).
* Jansen and Knoester (2009) T. L. C. Jansen and J. Knoester, Acc. Chem. Res. 42, 1405 (2009).
* Zheng _et al._ (2005) J. Zheng, K. Kwak, J. Asbury, X. Chen, I. R. Piletic, and M. D. Fayer, Science 309, 1338 (2005).
* Falvo _et al._ (2008) C. Falvo, T. Hayashi, W. Zhuang, and S. Mukamel, J. Phys. Chem. B 112, 12479 (2008).
* Cho (2008) M. Cho, Chem. Rev. 108, 1331 (2008).
* Tempelaar _et al._ (2013) R. Tempelaar, A. Stradomska, J. Knoester, and F. C. Spano, J. Phys. Chem. B 117, 457 (2013).
* Lang and Firsov (1963) I. G. Lang and Y. A. Firsov, Sov. Phys. JETP 16, 1301 (1963).
* Yalouz, Falvo, and Pouthier (2017) S. Yalouz, C. Falvo, and V. Pouthier, Quantum Inf. Process. 16, 143 (2017).
* Jeckelmann and White (1998) E. Jeckelmann and S. R. White, Phys. Rev. B 57, 6376 (1998).
* Chen, Zhao, and Tanimura (2015) L. Chen, Y. Zhao, and Y. Tanimura, J. Phys. Chem. Lett. 6, 3110 (2015).
* Philpott (1971) M. R. Philpott, J. Chem. Phys. 55, 2039 (1971).
* Spano (2002) F. C. Spano, J. Chem. Phys. 116, 5877 (2002).
* Yalouz, Pouthier, and Falvo (2017) S. Yalouz, V. Pouthier, and C. Falvo, Phys. Rev. E 96, 022304 (2017).
* Woutersen (2007) S. Woutersen, J. Chem. Phys. 126, 226101 (2007).
* Pouthier (2013) V. Pouthier, J. Chem. Phys. 138, 044108 (2013).
* Yalouz and Pouthier (2016) S. Yalouz and V. Pouthier, Phys. Rev. E 93, 052306 (2016).
* Duke and Mahan (1965) C. Duke and G. Mahan, Phys. Rev. 139, A1965 (1965).
* Mukamel (1995) S. Mukamel, _Principles of Nonlinear Optical Spectroscopy_ (Oxford University Press, 1995).
* Abramavicius _et al._ (2009) D. Abramavicius, B. Palmieri, D. V. Voronine, F. Šanda, and S. Mukamel, Chem. Rev. 109, 2350 (2009).
* Frigo and Johnson (2005) M. Frigo and S. G. Johnson, Proc. IEEE 93, 216 (2005).
* Hamm, Lim, and Hochstrasser (1998) P. Hamm, M. Lim, and R. M. Hochstrasser, J. Phys. Chem. B 102, 6123 (1998).
* Kimball, Fong, and Shen (1981) J. C. Kimball, C. Y. Fong, and Y. R. Shen, Phys. Rev. B 23 (1981).
* Abramavicius (2013) D. Abramavicius, Europhys. Lett. 101, 57007 (2013).
* Falvo, Pouthier, and Eilbeck (2006) C. Falvo, V. Pouthier, and J. C. Eilbeck, Physica D 221, 58 (2006).
* Ishikawa _et al._ (2007) H. Ishikawa, I. J. Finkelstein, S. Kim, K. Kwak, J. K. Chung, K. Wakasugi, A. M. Massari, and M. D. Fayer, Proc. Natl. Acad. Sci. USA 104, 16116 (2007).
* Fecko _et al._ (2005) C. J. Fecko, J. J. Loparo, S. T. Roberts, and A. Tokmakoff, J. Chem. Phys. 122, 054506 (2005).
* Falvo _et al._ (2015) C. Falvo, L. Daniault, T. Vieille, V. Kemlin, J.-C. Lambry, C. Meier, M. H. Vos, A. Bonvalet, and M. Joffre, J. Phys. Chem. Lett. 6, 2216 (2015).
* Kwak _et al._ (2007) K. Kwak, S. Park, I. J. Finkelstein, and M. D. Fayer, J. Chem. Phys. 127, 124503 (2007).
* Kwak, Rosenfeld, and Fayer (2008) K. Kwak, D. E. Rosenfeld, and M. D. Fayer, J. Chem. Phys. 128, 204505 (2008).
* Falvo (2016) C. Falvo, J. Chem. Phys. 144, 234103 (2016).
* Abramowitz and Stegun (1972) M. Abramowitz and I. A. Stegun, eds., _Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables_ (Dover, New York, 1972).
* Čevizović, Zeković, and Ivić (2009) D. Čevizović, S. Zeković, and Z. Ivić, Chem. Phys. Lett. 480, 75 (2009).
* Roy, Pshenichnikov, and Jansen (2011) S. Roy, M. S. Pshenichnikov, and T. L. C. Jansen, J. Phys. Chem. B 115, 5431 (2011).
# EHRNoteQA: A Patient-Specific Question Answering Benchmark
for Evaluating Large Language Models in Clinical Settings
Sunjun Kweon1∗, Jiyoun Kim1, Heeyoung Kwak2,3, Dongchul Cha2,4
Hangyul Yoon1, Kwanghyun Kim5, Seunghyun Won6, Edward Choi1
KAIST1 NAVER Digital Healthcare LAB2 Naver Cloud3
NAVER Healthcare LAB4, Ewha Womans University College of Medicine5
Seoul National University Bundang Hospital6
{sean0042, jiyoun.kim, hangyulmd<EMAIL_ADDRESS>
{heeyoung.kwak<EMAIL_ADDRESS>{khkim.uro<EMAIL_ADDRESS>
∗Equal contribution
###### Abstract
This study introduces EHRNoteQA, a novel patient-specific question answering
benchmark tailored for evaluating Large Language Models (LLMs) in clinical
environments. Based on the MIMIC-IV Electronic Health Record (EHR) database
(Johnson et al., 2023), a team of three medical professionals has curated a
dataset comprising 962 unique questions, each linked to a specific patient’s
EHR clinical notes. What makes EHRNoteQA distinct from existing EHR-based
benchmarks is as follows: Firstly, it is the first dataset to adopt a multi-
choice question answering format, a design choice that effectively evaluates
LLMs with reliable scores in the context of automatic evaluation, compared to
other formats. Secondly, it requires an analysis of multiple clinical notes to
answer a single question, reflecting the complex nature of real-world clinical
decision-making where clinicians review extensive records of patient
histories. Our comprehensive evaluation on various large language models
showed that their scores on EHRNoteQA correlate more closely with their
performance in addressing real-world medical questions evaluated by clinicians
than their scores from other LLM benchmarks. This underscores the significance
of EHRNoteQA in evaluating LLMs for medical applications and highlights its
crucial role in facilitating the integration of LLMs into healthcare systems.
The dataset will be made available to the public under PhysioNet credentialed
access, and the code will be accessible via a GitHub repository
(https://github.com/ji-youn-kim/EHRNoteQA), promoting further research in
this vital field.
## 1 Introduction
The advance of generative Large Language Models (LLMs), exemplified by the GPT
series (Brown et al., 2020; Ouyang et al., 2022; OpenAI, 2023) and open-source
models such as LLaMA (Touvron et al., 2023a, b), has significantly progressed
the field of LLM research. These models exhibit comprehensive world knowledge,
reasoning capabilities, and fluent language generation, paving the way for
their potential integration into healthcare (Cascella et al., 2023; Eysenbach
et al., 2023; Patel and Lam, 2023; Javaid et al., 2023). Still, there remains
a significant challenge in their application due to the lack of specialized
benchmarks for evaluating LLMs in clinical environments.
Figure 1: For each patient record, EHRNoteQA consists of all discharge
summaries across multiple admissions, a clinically relevant question
formulated to reflect clinician inquiries, multi-choice answer options, and
the correct answer.
Dataset | Questions | Documents | Patients | Source | Answer Type | Single/Multiple Documents
---|---|---|---|---|---|---
Raghavan et al. (2018) | 5,696 | 71 | 71 | Clinical Notes (Cleveland Clinic) | Text Span | Single
Pampari et al. (2018) | 73,111 | 303 | 303 | Discharge Summaries (n2c2) | Text Span | Single
Fan (2019) | 245 | 138 | 138 | Discharge Summaries (n2c2) | Text Span | Single
Yue et al. (2020) | 50 | - | - | Clinical Notes (MIMIC-III) | Text Span | Single
Yue et al. (2021) | 1,287 | 36 | 36 | Clinical Notes (MIMIC-III) | Text Span | Single
Soni et al. (2022) | 3,074 | 1,009 | 100 | Radiology Notes (MIMIC-III) | Text Span | Single
Lehman et al. (2022) | 2,029 | 114 | 114 | Discharge Summaries (MIMIC-III) | No Answer | Single
Moon et al. (2023) | 96,939 | 505 | 505 | Discharge Summaries (n2c2) | Text Span | Single
Fleming et al. (2023) | 983 | 37,264 | 276 | EHRs (Stanford University) | Free Text | Single
EHRNoteQA (ours) | 962 | 1,659 | 962 | Discharge Summaries (MIMIC-IV) | Multi Choice | Multiple
Table 1: Comparison of EHRNoteQA with existing patient-specific benchmarks on
EHRs. EHRNoteQA stands out as the first dataset to implement a multi-choice
question answering format, enabling reliable and accurate automatic
evaluation. It is also unique in explicitly demanding the use of two or more
clinical notes for answering a question.
Existing clinical benchmarks for LLMs (Jin et al., 2019; Hendrycks et al.,
2020; Jin et al., 2021; Pal et al., 2022) primarily focus on general medical
questions derived from either medical exams or biomedical articles. These
benchmarks, while effective in assessing LLMs’ clinical reasoning
capabilities, often fall short in capturing the complexities inherent in
individual patient cases. Our study aims to fill this gap by building a
patient-specific LLM benchmark, to address queries based on patients’
Electronic Health Records (EHRs).
In this study, we present EHRNoteQA, the first patient-specific, multi-choice
question answering dataset derived from EHR clinical notes. This
dataset, leveraging the publicly available MIMIC-IV EHR database (Johnson et
al., 2023), was initially created using GPT-4 and thoroughly reviewed and
refined by a team of three clinicians to ensure both accuracy and clinical
relevance. EHRNoteQA includes 962 unique questions, each linked to patient-
specific clinical notes, specifically discharge summaries, that occurred
during multiple admissions. The dataset demands the examination of multiple
notes to formulate accurate answers (refer to Figure 1), mirroring the complex
nature of real-world medical decision-making where clinicians gather
information across a patient’s accumulated hospitalization record.
Using EHRNoteQA, we conduct experiments with diverse large language models to
show its effectiveness as an EHR-based benchmark for LLMs. We start by
examining the rationale behind adopting a multiple-choice format for question
answering, as opposed to the free-text format which is more common in clinical
environments. In practice, when clinicians use LLMs in medical settings, they
are unlikely to present the model with predefined answer options as done in a
multiple-choice format. Despite this deviation from practical usage, our
experiment results indicated that the multi-choice format yields a more
consistent and reliable outcome for automatic evaluation when compared to the
free-text approach. Furthermore, we conduct additional tests to see if our
dataset can reflect how clinicians evaluate LLMs in actual clinical scenarios.
To do this, three clinicians evaluated and ranked the models using a
dataset222Note that we utilized DiSCQ (Lehman et al., 2022), which only
contains questions from actual doctors, but not the answers. Therefore it
cannot be used as a proper LLM benchmark unless evaluated directly by the
clinicians. that is comprised of open-ended clinical questions sourced from
real clinical environments. We then compared these rankings with those derived
from existing benchmarks and EHRNoteQA. Our results revealed a notable
alignment between the model rankings from EHRNoteQA, and those from the
clinician-led evaluation using real-world clinical questions. In conclusion,
among benchmarks currently available for evaluating LLMs, our dataset stands
out with its ability to provide a reliable evaluation that mirrors closely the
actual assessments of clinicians.
## 2 Related Works
Figure 2: An overview of the construction process for the EHRNoteQA dataset,
which involves three key stages: 1) Sampling Clinical Notes from MIMIC-IV
database, 2) Data Generation using GPT-4, and 3) Modifications by Clinicians.
While there have been several initiatives towards creating patient-specific
question answering datasets using Electronic Health Records (EHRs), each of
these works comes with its own set of limitations. For instance, Raghavan et
al. (2018) first built a question answering dataset annotated by medical
students based on EMRs, but it is not open to the public. Pampari et al. (2018)
generated a clinical note reading comprehension dataset using the i2b2 dataset
(Uzuner et al., 2008; Uzuner, 2009; Uzuner et al., 2010, 2011), with questions
based on pre-defined templates related to medical entities. However, this
approach limits the diversity of questions. Fan (2019) generated why-QAs on
clinical notes by identifying sentences with ‘because’ or ‘due to’, then
splitting these sentences into questions and answers, but this method’s
limitation is that both question and answer are derived from the same
sentence, failing to mirror the complexity of real-world clinical inquiries.
Subsequently, Yue et al. (2020) introduced a QA dataset using MIMIC-III
(Johnson et al., 2016) clinical notes, but it is small and not publicly
available. Yue et al. (2021) constructed multiple QA pairs built upon
MIMIC-III clinical notes, but around 75% were generated by selecting a
pre-defined text span as the answer and creating a question based on it,
leading to contextually limited questions. Soni et al. (2022) proposed a physician-annotated QA
dataset on radiology reports with the answer existing as a text span. Lehman
et al. (2022) created a trigger-based question dataset from MIMIC-III
discharge summaries, but without its annotated answers. Moon et al. (2023)
developed a dataset focusing on drug-reasoning questions with answers
comprising multiple clinical entities, but was constrained by template-based
generation. Lastly, Fleming et al. (2023) proposed a patient-specific EHR QA
dataset using instructions from clinicians, but the answers were in free-text
format, lacking an accurate and reliable evaluation method without clinician
evaluation. Moreover, 80% of their data consisted of context lengths longer
than 32k tokens, exceeding the capacity of current LLMs.
As specified in Table 1, these studies predominantly use textual spans from
clinical notes as answers and evaluated models through F1 and Exact Match
scores. While this method is suitable for extractive models such as BERT
(Devlin et al., 2019), it is less effective for generative large language
models that provide more detailed and complex responses (Kamalloo et al.,
2023). Furthermore, confining answers to text span restricts the ability to
craft in-depth questions necessary in real medical settings. Such settings
often require collecting information from numerous clinical segments within
and across documents. Even in cases where the answer exists as multiple text
spans, this does not fulfill the need in real-world applications for a
complete, single response. Other datasets face their own challenges: either
lacking annotated answers (Lehman et al., 2022) or providing answers in free
text (Fleming et al., 2023), which complicates evaluation due to subjectivity
and the need for human review, often involving clinicians. This poses a
significant challenge, as consistent clinician-led evaluation is both costly
and time-consuming.
In our work, we propose a patient-specific EHR QA dataset on MIMIC-IV
discharge summaries (Johnson et al., 2023), inspected by clinicians and
reflecting real-world medical scenarios. Our dataset is unique in requiring
references to two or more clinical notes to answer a single question.
Moreover, by employing a multi-choice format, our dataset serves as a clinical benchmark
that enables accurate and consistent automatic evaluation of LLMs.
## 3 Data Construction
In this section, we describe the construction of the EHRNoteQA dataset, which
consists of three main phases: Document Sampling (Section 3.1), Question-
Answer Generation (Section 3.2), and Clinician Modification (Section 3.3).
Figure 2 shows an overview of our EHRNoteQA construction process.
### 3.1 Document Sampling
For the construction of EHRNoteQA, we utilized clinical notes sourced from the
MIMIC-IV (Medical Information Mart for Intensive Care IV) EHR database
(Johnson et al., 2023), a rich source of real patient records from Beth Israel
Deaconess Medical Center spanning from 2008 to 2019. Specifically, we formulated our data
using discharge summaries, which are detailed records prepared when a patient
is discharged from the hospital. These summaries are crucial for clinical note
question answering, as they encapsulate the extensive information generated
from a patient’s admission to discharge.
Level | # D.S. | # Patients (MIMIC-IV) | # Tokens (MIMIC-IV) | # Patients (Final) | # Tokens (Final)
---|---|---|---|---|---
1 | 1 | 38,926 | 1,819 | 264 (275) | 1,787
1 | 2 | 437 | 2,147 | 265 (275) | 2,146
2 | 1 | 44,645 | 3,514 | 145 (150) | 3,501
2 | 2 | 14,176 | 4,470 | 144 (150) | 4,581
2 | 3 | 1,161 | 4,956 | 144 (150) | 5,030
Total | - | 99,345 | - | 962 (1000) | -
Table 2: Quantitative analysis of patient counts and average token length per
admission in Level 1 and Level 2 of the MIMIC-IV dataset and EHRNoteQA. Values
in parentheses denote the initially sampled patient counts. D.S. indicates
Discharge Summaries.
The MIMIC-IV database encompasses 331,794 discharge summaries for 145,915
unique patients, with an average of 2.3 notes per patient. However, these
summaries are typically lengthy, with the average length of all discharge
summaries for a patient being around 8k tokens. This presents a challenge for
current LLMs, as only a limited number of them can process contexts that
exceed 8,000 tokens, making it difficult to handle such extensive clinical
notes.
To address this challenge while incorporating lengthy notes into our data, we
initially reduced the overall length of the notes without altering their
content or structure. By minimizing excessive white spaces, such as removing
spaces or tabs around newlines, we achieved an average reduction of 10% in
note length. Subsequently, we categorized patients into two levels based on
the length of their clinical notes, ensuring compatibility with the processing
capabilities of existing LLMs. The first level (Level 1) consists of patients
whose cumulative note length in the database does not exceed 3,500 tokens,
aligning with models designed to process up to 4k tokens. The second level
(Level 2) is for patients whose notes total between 3,500 and 7,500 tokens,
suitable for models that can handle up to 8k tokens.
As shown in Table 2, the first category encompasses instances involving one to
two hospital admissions, as indicated by the number of discharge summaries,
whereas the second category can accommodate cases of one to three admissions.
The remaining cases, which require models capable of handling extremely long
context lengths, are not covered in this study. For the construction of the
EHRNoteQA dataset, we randomly selected 1,000 patients—550 from Level 1 and
450 from Level 2—and prepared their discharge summaries for the next step.
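A minimal sketch of this sampling pipeline is given below. The whitespace rule, the `count_tokens` length function, and all names are illustrative assumptions; the exact preprocessing used for EHRNoteQA may differ in detail.

```python
import re
import random

MAX_L1, MAX_L2 = 3500, 7500  # token budgets for Level 1 / Level 2

def compress(note: str) -> str:
    """Trim spaces/tabs around newlines without altering content or
    structure (roughly a 10% reduction in note length)."""
    return re.sub(r"[ \t]*\n[ \t]*", "\n", note).strip()

def assign_level(patient_notes, count_tokens):
    """Assign a patient to Level 1 or 2 by the cumulative token length of
    all of their discharge summaries; `count_tokens` is a tokenizer-
    dependent length function (an assumption here). Returns None for
    patients whose notes exceed 7,500 tokens (excluded from EHRNoteQA)."""
    total = sum(count_tokens(compress(n)) for n in patient_notes)
    if total <= MAX_L1:
        return 1
    if total <= MAX_L2:
        return 2
    return None

def sample_patients(levels, n1=550, n2=450, seed=0):
    """Randomly draw 550 Level-1 and 450 Level-2 patients."""
    rng = random.Random(seed)
    l1 = [p for p, lv in levels.items() if lv == 1]
    l2 = [p for p, lv in levels.items() if lv == 2]
    return rng.sample(l1, n1) + rng.sample(l2, n2)
```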
### 3.2 Question-Answer Generation
Based on the documents selected as described in Section 3.1, we aimed to
construct a question-answering dataset that closely emulates the types of
questions a clinician would ask in a real-world clinical context, while also
operating as a reliable performance indicator. To achieve this, we employed
GPT-4 (OpenAI, 2023) to generate a dataset for multi-choice question
answering, utilizing the clinical notes from each patient. (To conduct
experiments with MIMIC-IV data alongside online API-based language models
such as GPT, we carried out our work on Azure’s HIPAA-compliant platform, in
accordance with the regulations set by PhysioNet.) This involved designing five
answer choices for each question: one correct answer and four distractors. To
ensure the generated data met our objectives, we closely collaborated with
clinicians during the prompt tuning phase.
Category | Example | Percentage
---|---|---
Treatment | What was the treatment provided for the patient’s left breast cellulitis during her second admission? | 19%
Assessment | What was the change in the patient’s left knee condition from the first hospital admission to the second one? | 20%
Problem | What was the patient’s chief complaints and major surgical procedures carried out during his admissions in 2167 and 2176? | 22%
Etiology | What is the most probable cause of the patient’s diarrhea during her admission? | 10%
Sign/Symptom | What were the main presenting symptoms during the patient’s first recorded hospital visit? | 5%
Vitals | What was the range of the patient’s blood pressure during her second hospital stay? | 9%
Others | Was the patient’s pregnancy full-term during her second recorded admission, and why was a cesarean section required? | 5%
Table 3: Distribution of question categories across 100 sampled questions from
EHRNoteQA, including examples.
The question-answer generation process unfolded in two primary steps.
Initially, we provided GPT-4 with the patients’ discharge summaries to
generate clinically meaningful questions that can be answered within the given
text. This step was crucial not only for generating clinically relevant
questions but also for creating questions with clear answers. In the
subsequent step, we re-inputted the discharge summaries along with the
formulated questions into GPT-4, in order to extract a set of answer choices
for each question: one correct and four incorrect choices, along with the
correct answer index.
The decision to employ a two-step approach, instead of a more straightforward
single-step process, was a deliberate decision made through iterative
refinement of question and answer generation prompts by incorporating
extensive feedback from medical professionals. Collaborative insights from
these clinicians underscored that this two-step methodology not only yielded
questions and answer choices that were more realistic and accurate but also
ensured that the incorrect options were sophisticated enough to avoid being
trivially dismissible. The sample data generated using this approach with
GPT-4 is illustrated in Figure 2. To support reproducibility, we have
disclosed the specific model used, its associated costs, and the exact prompts
in the Appendix A.
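A minimal sketch of the two-step generation loop is shown below, written against the `openai` Python client for illustration (the actual work ran on Azure’s HIPAA-compliant platform). The prompts are illustrative stand-ins; the exact prompts are given in Appendix A.

```python
from openai import OpenAI  # generic client shown for illustration only;
                           # the actual work used Azure's HIPAA-compliant platform

client = OpenAI()

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def generate_qa(discharge_summaries: str):
    # Step 1: generate one clinically meaningful question answerable
    # from the given notes (prompt is an illustrative stand-in).
    question = chat(
        "Read the following discharge summaries and write one clinically "
        "meaningful question that can be answered from them.\n\n"
        + discharge_summaries
    )
    # Step 2: re-input the notes together with the question to obtain
    # five answer choices (one correct, four distractors) and the index
    # of the correct answer.
    choices = chat(
        "Given the notes and question below, produce five answer choices "
        "(A-E), exactly one of which is correct, and state the correct "
        "letter.\n\n" + discharge_summaries + "\n\nQuestion: " + question
    )
    return question, choices
```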
### 3.3 Clinician Modification
Despite incorporating feedback from clinicians during the prompt tuning phase
of GPT-4’s question-answer generation, the machine-generated data still
exhibited certain imperfections. These included questions that were trivial or
uncharacteristic of those typically asked by real clinicians, as well as
incorrect or irrelevant answers, and overly simplistic incorrect answer
choices. To address these issues, a refinement phase was conducted involving
three doctors over a period of two months, reviewing all 1,000 QA data points
generated in Section 3.2. The modifications consist of:
Data Removal: Instances where GPT-4 produced questions that are too trivial or
unrepresentative of typical clinical inquiries were identified and removed by
the doctors. A total of 38 data points were removed from the dataset of 1,000
entries.
Question Revision: Questions that were clinically meaningful but unclear,
overly detailed, or not in the typical format of clinicians’ inquiries were
directly revised. In total, 206 out of the 1,000 questions were modified by
the doctors.
Correct Answer Choice Revision: In cases where the generated correct answers
were found to be incorrect or unclear, revisions were made to ensure the
answers were completely correct, leaving no aspect of the question unanswered.
A total of 338 out of 1,000 correct answers were modified.
Increasing the Difficulty of Incorrect Answer Choices: If the incorrect answer
choices included content not present in the clinical notes, they were revised
to be plausible distractors, instead of being straightforwardly incorrect
options. Out of the 4,000 wrong answers (1,000 questions with 4 wrong answers
each), 962 were revised.
Our final dataset, refined by clinicians, now consists of 962 unique
questions, each linked to a specific patient. The final patient counts
corresponding to each level are indicated in Table 2.
We conduct further analysis of our dataset by categorizing the question
content types according to the question classification scheme proposed by
Lehman et al. (2022). This categorization process was carried out on a subset
of 100 questions and conducted by the authors. The distribution of question
content types, as presented in Table 3, shows that our dataset encompasses a
comprehensive representation of data within each category.
## 4 Experiments
Size | Model | Level 1 (multi-choice) | Level 2 (multi-choice) | Level 1 (free-text) | Foundation | Reference
---|---|---|---|---|---|---
- | GPT4-turbo-preview (1106) | 95.384 $\pm$ 0.104 | 94.368 $\pm$ 0.126 | 86.990 $\pm$ 0.740 | - | OpenAI (2023)
- | GPT4 (0613) | 97.124 $\pm$ 0.080 | 95.104 $\pm$ 0.192 | 91.060 $\pm$ 0.800 | - | OpenAI (2023)
- | GPT3.5-turbo-16k (0613) | 88.280 $\pm$ 0.233 | 84.990 $\pm$ 0.000 | 80.980 $\pm$ 1.110 | - | Brown et al. (2020)
70B | Llama-2-70b-chat-hf | 84.652 $\pm$ 0.282 | - | 71.080 $\pm$ 0.670 | LLama-2-70b | Touvron et al. (2023b)
70B | qCammel-70-x | 85.630 $\pm$ 0.134 | - | 72.820 $\pm$ 1.270 | LLama-2-70b | Toma et al. (2023)
70B | Camel-Platypus2-70B | 89.790 $\pm$ 0.233 | - | 76.030 $\pm$ 1.330 | LLama-2-70b | Lee et al. (2023a)
70B | Platypus2-70B-instruct | 90.322 $\pm$ 0.159 | - | 78.940 $\pm$ 1.820 | LLama-2-70b | Lee et al. (2023a)
30B | MPT-30b-instruct | 79.660 $\pm$ 0.250 | 75.382 $\pm$ 0.206 | 58.100 $\pm$ 1.160 | MPT-30b-8k | MosaicML (2023)
13B | Llama-2-13b-chat-hf | 73.196 $\pm$ 0.309 | - | 62.270 $\pm$ 1.460 | LLama-2-13b | Touvron et al. (2023b)
13B | vicuna-13b-v1.5 | 82.116 $\pm$ 0.217 | - | 64.840 $\pm$ 1.050 | LLama-2-13b | Chiang et al. (2023)
13B | WizardLM-13B-V1.2 | 80.758 $\pm$ 0.159 | - | 64.800 $\pm$ 1.570 | LLama-2-13b | Xu et al. (2023)
13B | qCammel-13 | 71.106 $\pm$ 0.596 | - | 54.330 $\pm$ 0.910 | LLama-2-13b | Toma et al. (2023)
13B | OpenOrca-Platypus2-13B | 85.896 $\pm$ 0.288 | - | 72.020 $\pm$ 1.420 | LLama-2-13b | Lee et al. (2023b)
13B | Camel-Platypus2-13B | 77.958 $\pm$ 0.510 | - | 58.830 $\pm$ 1.060 | LLama-2-13b | Lee et al. (2023a)
13B | Synthia-13B-v1.2 | 79.284 $\pm$ 0.213 | - | 71.270 $\pm$ 1.240 | LLama-2-13b | Tissera (2023a)
7B | Llama-2-7b-chat-hf | 65.672 $\pm$ 0.365 | - | 50.700 $\pm$ 1.230 | LLama-2-7b | Touvron et al. (2023b)
7B | vicuna-7b-v1.5 | 78.222 $\pm$ 0.510 | - | 50.440 $\pm$ 1.580 | LLama-2-7b | Chiang et al. (2023)
7B | Mistral-7B-Instruct-v0.1 | 81.926 $\pm$ 0.170 | 65.038 $\pm$ 0.126 | 66.120 $\pm$ 0.820 | Mistral-7B-v0.1 | Jiang et al. (2023)
7B | MPT-7b-8k-instruct | 59.512 $\pm$ 0.159 | 51.362 $\pm$ 0.385 | 43.670 $\pm$ 1.380 | MPT-7B-8k | MosaicML (2023)
7B | dolphin-2.0-mistral-7b | 76.218 $\pm$ 0.159 | - | 66.840 $\pm$ 0.900 | Mistral-7B-v0.1 | Cognitive (2023)
7B | Mistral-7B-OpenOrca | 87.074 $\pm$ 0.104 | - | 78.940 $\pm$ 1.720 | Mistral-7B-v0.1 | Lian et al. (2023)
7B | SynthIA-7B-v1.3 | 78.412 $\pm$ 0.085 | - | 73.530 $\pm$ 1.300 | Mistral-7B-v0.1 | Tissera (2023b)
Table 4: Average scores and standard deviations for Level 1 multi-choice,
Level 2 multi-choice, and Level 1 free-text evaluation. Each model output is
evaluated five times on the same prompt.
In our experiments using EHRNoteQA, we conducted a comprehensive evaluation
across a wide array of large language models, spanning from GPT series to
various instruction-tuned open-source models. We selected 22 models for this
purpose, all of which are equipped to manage a context length of 4,000 tokens,
allowing us to evaluate Level 1 data. Furthermore, within this group, 6 models
possess the capability to handle contexts up to 8,000 tokens, which enabled
our analysis of Level 2 data.
To evaluate these models on our dataset, we executed model generation followed
by subsequent assessment of the generated outputs. Traditionally, a common
approach for evaluating multi-choice questions on large language models
involves using probability-based scoring, where each model output is evaluated
based on the probability of the answer choice letter or the full answer
sequence (Brown et al., 2020; Hendrycks et al., 2020; Liang et al., 2022; Gao
et al., 2023). This approach requires providing few-shot examples to guide the
model toward generating outputs in the desired format, in order to ensure
accurate evaluation. However, the challenge arises from the limited context
length capacities of existing LLMs, which cannot accommodate multiple
patients’ discharge summaries. As a result, using a probability-based scoring
system for our dataset with a few-shot approach is infeasible.
Therefore, instead of relying on probability-based metrics, we generated each
model output by inputting discharge summaries, questions, and answer choices.
We then assessed the correctness of the model responses by providing these
outputs along with the gold answer to GPT-4-turbo, instructing it to evaluate
them. For each model, correct outputs were assigned of 1 point, while
incorrect responses were given 0 points. Across 962 questions, the scores of
each model were normalized to a 100-point scale for comparison. Despite our
efforts to eliminate stochastic factors that might affect GPT-4-turbo’s
evaluation (e.g., setting the temperature to 0), there were cases where the
same model output was inconsistently evaluated as correct or incorrect,
particularly in cases of ambiguous responses. Given the impracticality of
manually examining each instance, we had GPT-4-turbo evaluate the same output
five times, and averaged the results.
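A minimal sketch of this scoring procedure follows; the judge prompt is an illustrative stand-in for the one in Appendix A.3, and `chat` denotes a temperature-0 GPT-4-turbo call as in the earlier sketch.

```python
import statistics

def judge(model_output: str, gold_answer: str) -> int:
    """Single GPT-4-turbo judgment: 1 if the response is deemed correct,
    0 otherwise. The prompt is an illustrative stand-in for the one in
    Appendix A.3; `chat` is a temperature-0 GPT-4-turbo call."""
    verdict = chat(
        "Decide whether the model response matches the gold answer. "
        f"Reply 'correct' or 'incorrect'.\n\nGold answer: {gold_answer}\n"
        f"Model response: {model_output}"
    )
    return int("incorrect" not in verdict.lower())

def score_model(outputs, gold_answers, n_repeats=5) -> float:
    """Average of n_repeats judgments per question, normalized to 100."""
    per_question = [
        statistics.mean(judge(o, a) for _ in range(n_repeats))
        for o, a in zip(outputs, gold_answers)
    ]
    return 100.0 * statistics.mean(per_question)
```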
The results obtained from the evaluation of 22 distinct models utilizing this
approach are presented in Table 4, which reports the average scores from five
repeated evaluations conducted with GPT-4-turbo and includes the standard
deviation for each score. It is important to note that for Level 2 data, the
evaluation focused on a subset of 6 models specifically designed to support
context lengths of 8,000 tokens.
One of the key observations from the table is the performance variation within
models of the same size (e.g., within the 7B size category, scores range from
59 to 87). These disparities can be attributed to the underlying foundation
model (such as LLaMA-2 (Touvron et al., 2023b), Mistral (Jiang et al., 2023),
or MPT (MosaicML, 2023)) or the instruction-tuning dataset that was employed
to enhance the foundation model. This highlights the significant role of the
choice of foundation models and their fine-tuned instructions in determining
performance outcomes on our dataset.
In order to facilitate future studies using our dataset for evaluating LLMs
for clinical applications, we provide the GPT-4-turbo prompt used for
evaluation in Appendix A.3. In the following sections, we further conduct more detailed
analyses using our dataset. In Section 4.1, we evaluate our dataset in both
multi-choice and free-text formats to compare which format yields more
consistent and reliable outcomes in automated assessments. Section 4.2
examines the influence of the length and quantity of clinical notes on model
performance. Section 4.3 evaluates the correlation between model performances
on our dataset and physician evaluations of model responses to real-world
clinical questions, assessing our dataset’s fidelity to real-world clinical
evaluations.
### 4.1 Multi-Choice vs. Free-Text
When engaging with LLMs in practice, clinicians typically would not present
predefined answer options to LLMs, as in multi-choice question answering.
However, we experimentally demonstrate the effectiveness of the multi-choice
format compared to the free-text format when using EHRNoteQA for automatic LLM
evaluation.
In addition to solving EHRNoteQA in a multiple-choice manner, we further
assessed our dataset in a free-text format. Given the absence of a reliable
automatic metric for free-text response assessment, many studies delegate this
evaluation to GPT (Zhou et al., 2023; Sottana et al., 2023). Consequently, we
also employed GPT for the scoring of free-text responses in our dataset.
Similar to scoring multi-choice, we provided GPT-4-turbo with both the model’s
output and the correct answer for evaluating the free-text format.
The evaluation was conducted on our Level 1 data, and the results are
summarized in Table 4’s Level 1 (free-text) column. An important observation
comes from the standard deviations: while the scores’ average standard
deviation across 22 models was 0.24 when scoring with multiple-choice format,
the average standard deviation increased to 1.21 when assessing free-text
responses. This suggests that when automatically evaluating the same model,
the scores can vary significantly in free-text format. Additionally, we
computed the models’ rankings over five iterations, represented as $r_{j}^{i}$
(where $j$ denotes each model and $i$ each iteration), and identified the most
frequently occurring rankings (mode) as the gold rankings, $\hat{r_{j}}$, for
each model. A comparison of these gold rankings against those from each of the
five iterations, calculated using the formula
$\sum_{j=1}^{22}\sum_{i=1}^{5}|r_{j}^{i}-\hat{r}_{j}|$, revealed 12 instances of
ranking differences in multi-choice evaluations versus 29 in free-text
evaluations. This indicates that rankings for models can vary significantly
when scoring free-text responses, suggesting that this method might not serve
as a reliable measure for evaluating models. The specific results of each of
the five scores and rankings can be found in the Appendix B.
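For concreteness, this rank-stability measure can be computed as in the sketch below, assuming a `ranks` array of shape (5 iterations × 22 models); the implementation details are illustrative.

```python
import numpy as np

def rank_instability(ranks: np.ndarray) -> int:
    """ranks[i, j] is the rank of model j in iteration i (here 5 x 22).
    The gold rank of each model is the mode of its five ranks (ties
    broken toward the smaller rank); the returned value is
    sum over j and i of |r_j^i - r_hat_j|."""
    n_models = ranks.shape[1]
    gold = np.array([np.bincount(ranks[:, j].astype(int)).argmax()
                     for j in range(n_models)])
    return int(np.abs(ranks - gold).sum())
```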
This experiment leads us to conclude that while free-text-based evaluation may
more closely mimic real clinical environments, using a multiple-choice format
in assessing our dataset offers a more consistent and reliable scoring system,
making it an essential choice for the automatic evaluation of LLMs.
Figure 3: Model scores on EHRNoteQA with differing numbers of clinical notes
for each level. Scores for 1 to 2 notes correspond to performance on Level 1
data; scores for 1 to 3 notes correspond to performance on Level 2 data.
Clinician | Measure | EHRNoteQA | MedQA | PubMedQA | MMLU* | MedMCQA | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | AVG
---|---|---|---|---|---|---|---|---|---|---|---|---|---
A | Spearman | 0.74 | 0.50 | 0.07 | 0.65 | 0.51 | 0.52 | 0.25 | 0.57 | 0.65 | 0.38 | 0.26 | 0.60
A | Kendall | 0.58 | 0.35 | 0.06 | 0.50 | 0.38 | 0.37 | 0.18 | 0.41 | 0.54 | 0.28 | 0.17 | 0.42
B | Spearman | 0.81 | 0.68 | 0.16 | 0.80 | 0.73 | 0.58 | 0.36 | 0.65 | 0.75 | 0.47 | 0.24 | 0.62
B | Kendall | 0.65 | 0.52 | 0.09 | 0.62 | 0.58 | 0.45 | 0.25 | 0.50 | 0.59 | 0.33 | 0.16 | 0.47
C | Spearman | 0.77 | 0.59 | 0.12 | 0.68 | 0.67 | 0.53 | 0.28 | 0.58 | 0.65 | 0.44 | 0.20 | 0.58
C | Kendall | 0.66 | 0.45 | 0.10 | 0.54 | 0.51 | 0.42 | 0.21 | 0.44 | 0.48 | 0.31 | 0.16 | 0.43
Table 5: Correlations between model scores from various benchmarks and
physician-evaluated model performance on real-world clinical questions, with
bold indicating the highest correlation and underlined the second highest.
### 4.2 Impact of Note Length and the Number of Notes on Model Performance
In this section, we conduct a comparative analysis of model performance based
on the length and the number of clinical notes in the EHRNoteQA dataset. As
outlined in Section 3.1, Level 1 data includes patient notes up to 3.5k
tokens, while Level 2 data comprises notes ranging from 3.5k to 7.5k tokens,
indicating that Level 2 involves longer note lengths. The results depicted in
Table 4 show that models consistently achieve lower scores on Level 2 data
compared to Level 1 data. This discrepancy highlights the increased complexity
associated with Level 2, due to the requirement to comprehend and interpret
more extensive clinical contexts presented in longer patient notes. Moreover,
we examine the impact of the number of notes on model performance. Level 1
data consists of clinical notes from 1 to 2 admissions, and Level 2 data
encompasses notes from 1 to 3 admissions. Figure 3 illustrates how model
scores change as the number of notes increases for both Level 1 and Level 2.
The results consistently indicate a diminishing trend in model performance as
the number of notes per question increases. This trend underscores the
challenge posed by our multi-note dataset in assessing a model’s ability to
collect and understand the cumulative content of patient discharge summaries
across multiple hospital admissions.
### 4.3 Representativeness of EHRNoteQA for Actual Clinical Assessments
In our final analysis, our aim is to assess how accurately our dataset
reflects the assessments made by physicians in real clinical settings. To
achieve this, we conducted an analysis measuring the correlation between 19
models’ scores 444Among the 22 models we tested, the 3 models from the GPT
series were overwhelmingly superior in performance, making their ranking
obvious. Therefore, we excluded them from the experiment measuring
correlation. obtained from our dataset, and the physician-assessed scores on
model responses to real-world medical questions. For a comprehensive
evaluation, we expanded our comparison to include four other widely utilized
clinical benchmarks and six general domain benchmarks, and measured the
correlation between these benchmark scores and the assessments made by
physicians as well.
Recently, Lehman et al. (2022) introduced a dataset called DiSCQ, which is
composed of questions written by medical experts under a patient handoff
scenario, utilizing MIMIC-III discharge summaries (Johnson et al., 2016). This
dataset contains 1,089 questions that stem from 114 discharge summaries. We
leveraged DiSCQ as a source of real-world questions for measuring the
correlation, because it represents patient-specific medical inquiries, without
any overlap with our dataset or other benchmark datasets. However, DiSCQ
dataset lacks predetermined correct answers, and thus it is not possible to
conduct automated evaluation. To overcome this limitation, the three
clinicians evaluated the free-form responses of the models to these questions.
From the total 1,089 DiSCQ questions, we randomly selected a sample of 300
questions, assigning 100 questions to each of the three clinicians. Each
clinician was asked to assess a total of 1,900 responses from 19 models for
his/her set of 100 questions, where the order of the model responses for each
question was shuffled to prevent any bias. With these results, we calculated
the correlation between clinician-evaluated DiSCQ scores and scores obtained
from our dataset and various benchmarks.
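The rank correlations reported below can be computed with standard tools; the following sketch uses SciPy’s `spearmanr` and `kendalltau` on per-model score lists aligned over the same 19 models (names and data handling are illustrative).

```python
from scipy.stats import spearmanr, kendalltau

def benchmark_correlations(clinician_scores, benchmark_scores):
    """Spearman and Kendall rank correlations between the clinician-
    evaluated DiSCQ scores and a benchmark's scores, both given as
    per-model lists aligned over the same 19 models."""
    rho, _ = spearmanr(clinician_scores, benchmark_scores)
    tau, _ = kendalltau(clinician_scores, benchmark_scores)
    return rho, tau
```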
The correlations are provided in Table 5. Individual scores given by each
clinician and scores of the evaluated benchmarks, along with their references,
are listed in the Appendix C. Notably, the correlation with our dataset
consistently surpassed that of other benchmarks. This observation indicates
that the model scores evaluated using our dataset are more likely to reflect
the model performance on real-world medical questions compared to when using
other benchmark datasets for LLM evaluation.
## 5 Conclusion
In this paper, we introduce EHRNoteQA, a novel benchmark curated by clinicians
for assessing LLMs within the clinical domain. Unlike other patient-specific
datasets, EHRNoteQA uniquely employs a multi-choice question format that
facilitates more reliable automated evaluations of LLMs. Our dataset also
encompasses clinical notes from multiple admissions for each patient, thereby
capturing the complex nature of real-world clinical settings. Through our
research, we found that, although the multiple-choice format deviates from
practical medical situations, adopting this format is a crucial choice for
ensuring reliability. Furthermore, we address the potential limitations of the
format by demonstrating that the correlation between model scores assessed by
physicians in real clinical settings and those obtained from our dataset is
higher than with other datasets. By making EHRNoteQA available to the wider
research community, we aim to facilitate LLM integration into healthcare
services, thereby enhancing clinicians’ decision making and patient care.
## Limitations
Our study primarily focuses on the analysis of discharge summaries, excluding
other clinical documentations or data formats. Hospital EHR databases contain
not only discharge summaries but also other types of unstructured clinical
notes, structured data, and non-textual elements such as images and signals.
Although our research is the first in creating a dataset of discharge
summaries across multiple admissions, there is a need for benchmarks that
encompass a broader spectrum of clinical notes and data types.
Another constraint of our work pertains to the composition of our dataset,
which only includes questions that are answerable. In real-world clinical
settings, questions posed by healthcare professionals to an EHR system can
include both answerable and unanswerable questions. Assessing a model’s
capability to determine if a question is answerable or not is crucial for its
application in real-world scenarios. However, exploring this aspect of
question-answering falls into a distinct research topic, extending beyond the
scope of our current research.
A final limitation arises in measuring the correlation between model evaluations
using EHRNoteQA and the scores from DiSCQ dataset assessed by clinicians.
Firstly, the limited number of available models poses challenges for
statistically robust correlation measurement. Nonetheless, given the scarcity
of models supporting a context window of over 4k, this was the most feasible
option. Secondly, each clinician only evaluated 100 questions when assessing
DiSCQ questions. Due to the necessity for clinicians to review responses from
all models, employing a larger question set was not practical. Consequently,
we made efforts to ensure diverse evaluations by assigning each clinician a
unique set of 100 non-overlapping questions. While expanding the scale of our
analysis to include a wider array of models and a larger question set per
clinician would be desirable, such an expansion presents challenges requiring
a substantial increase in the number of clinicians involved. Thus, addressing
these challenges comprehensively remains a topic for future work.
## References
* Beeching et al. (2023) Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar Sanseviero, Lewis Tunstall, and Thomas Wolf. 2023. Open llm leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901.
* Cascella et al. (2023) Marco Cascella, Jonathan Montomoli, Valentina Bellini, and Elena Bignami. 2023. Evaluating the feasibility of chatgpt in healthcare: an analysis of multiple clinical and research scenarios. _Journal of Medical Systems_ , 47(1):33.
* Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.
* Clark et al. (2018) Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. _arXiv preprint arXiv:1803.05457_.
* Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. _arXiv preprint arXiv:2110.14168_.
* Cognitive (2023) Cognitive. 2023. Dolphin-2.0-mistral-7b. https://huggingface.co/cognitivecomputations/dolphin-2.0-mistral-7b,.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Eysenbach et al. (2023) Gunther Eysenbach et al. 2023. The role of chatgpt, generative language models, and artificial intelligence in medical education: a conversation with chatgpt and a call for papers. _JMIR Medical Education_ , 9(1):e46885.
* Fan (2019) Jungwei Fan. 2019. Annotating and characterizing clinical sentences with explicit why-qa cues. In _Proceedings of the 2nd Clinical Natural Language Processing Workshop_ , pages 101–106.
* Fleming et al. (2023) Scott L Fleming, Alejandro Lozano, William J Haberkorn, Jenelle A Jindal, Eduardo P Reis, Rahul Thapa, Louis Blankemeier, Julian Z Genkins, Ethan Steinberg, Ashwin Nayak, et al. 2023. Medalign: A clinician-generated dataset for instruction following with electronic medical records. _arXiv preprint arXiv:2308.14089_.
* Gao et al. (2023) Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2023. A framework for few-shot language model evaluation.
* Hendrycks et al. (2020) Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. In _International Conference on Learning Representations_.
* Javaid et al. (2023) Mohd Javaid, Abid Haleem, and Ravi Pratap Singh. 2023. Chatgpt for healthcare services: An emerging stage for an innovative perspective. _BenchCouncil Transactions on Benchmarks, Standards and Evaluations_ , 3(1):100105.
* Jiang et al. (2023) Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. _arXiv preprint arXiv:2310.06825_.
* Jin et al. (2021) Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2021. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. _Applied Sciences_ , 11(14):6421.
* Jin et al. (2019) Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2567–2577.
* Johnson et al. (2023) Alistair EW Johnson, Lucas Bulgarelli, Lu Shen, Alvin Gayles, Ayad Shammout, Steven Horng, Tom J Pollard, Sicheng Hao, Benjamin Moody, Brian Gow, et al. 2023. Mimic-iv, a freely accessible electronic health record dataset. _Scientific data_ , 10(1):1.
* Johnson et al. (2016) Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. _Scientific data_ , 3(1):1–9.
* Kamalloo et al. (2023) Ehsan Kamalloo, Nouha Dziri, Charles Clarke, and Davood Rafiei. 2023. Evaluating open-domain question answering in the era of large language models. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 5591–5606, Toronto, Canada. Association for Computational Linguistics.
* Lee et al. (2023a) Ariel N Lee, Cole J Hunter, and Nataniel Ruiz. 2023a. Platypus: Quick, cheap, and powerful refinement of llms. _arXiv preprint arXiv:2308.07317_.
* Lee et al. (2023b) Ariel N. Lee, Cole J. Hunter, Nataniel Ruiz, Bleys Goodson, Wing Lian, Guan Wang, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". 2023b. Openorcaplatypus: Llama2-13b model instruct-tuned on filtered openorcav1 gpt-4 dataset and merged with divergent stem and logic dataset model. https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B.
* Lehman et al. (2022) Eric Lehman, Vladislav Lialin, Katelyn Edelwina Legaspi, Anne Janelle Sy, Patricia Therese Pile, Nicole Rose Alberto, Richard Raymund Ragasa, Corinna Victoria Puyat, Marianne Katharina Taliño, Isabelle Rose Alberto, et al. 2022. Learning to ask like a physician. In _Proceedings of the 4th Clinical Natural Language Processing Workshop_ , pages 74–86.
* Lian et al. (2023) Wing Lian, Bleys Goodson, Guan Wang, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". 2023. Mistralorca: Mistral-7b model instruct-tuned on filtered openorcav1 gpt-4 dataset. https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca.
* Liang et al. (2022) Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. _arXiv preprint arXiv:2211.09110_.
* Lin et al. (2022) Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Truthfulqa: Measuring how models mimic human falsehoods. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 3214–3252.
* Moon et al. (2023) Sungrim Moon, Huan He, Heling Jia, Hongfang Liu, Jungwei Wilfred Fan, et al. 2023. Extractive clinical question-answering with multianswer and multifocus questions: Data set development and evaluation study. _JMIR AI_ , 2(1):e41818.
* MosaicML (2023) MosaicML. 2023. Introducing mpt-7b: A new standard for open-source, commercially usable llms. Accessed: 2023-05-05.
* OpenAI (2023) OpenAI. 2023. Gpt-4 technical report.
* Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_ , 35:27730–27744.
* Pal et al. (2022) Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In _Conference on Health, Inference, and Learning_ , pages 248–260. PMLR.
* Pampari et al. (2018) Anusri Pampari, Preethi Raghavan, Jennifer Liang, and Jian Peng. 2018. emrqa: A large corpus for question answering on electronic medical records. _arXiv preprint arXiv:1809.00732_.
* Patel and Lam (2023) Sajan B Patel and Kyle Lam. 2023. Chatgpt: the future of discharge summaries? _The Lancet Digital Health_ , 5(3):e107–e108.
* Raghavan et al. (2018) Preethi Raghavan, Siddharth Patwardhan, Jennifer J Liang, and Murthy V Devarakonda. 2018. Annotating electronic medical records for question answering. _arXiv preprint arXiv:1805.06816_.
* Sakaguchi et al. (2021) Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. _Communications of the ACM_ , 64(9):99–106.
* Soni et al. (2022) Sarvesh Soni, Meghana Gudala, Atieh Pajouhi, and Kirk Roberts. 2022. Radqa: A question answering dataset to improve comprehension of radiology reports. In _Proceedings of the Thirteenth Language Resources and Evaluation Conference_ , pages 6250–6259.
* Sottana et al. (2023) Andrea Sottana, Bin Liang, Kai Zou, and Zheng Yuan. 2023. Evaluation metrics in the era of gpt-4: reliably evaluating large language models on sequence to sequence tasks. _arXiv preprint arXiv:2310.13800_.
* Tissera (2023a) Migel Tissera. 2023a. Synthia-13b-v1.2b: Synthetic intelligent agent. https://huggingface.co/migtissera/Synthia-13B.
* Tissera (2023b) Migel Tissera. 2023b. Synthia-7b-v1.3: Synthetic intelligent agent. https://huggingface.co/migtissera/Synthia-13B.
* Toma et al. (2023) Augustin Toma, Patrick R Lawler, Jimmy Ba, Rahul G Krishnan, Barry B Rubin, and Bo Wang. 2023. Clinical camel: An open-source expert-level medical language model with dialogue-based knowledge encoding. _arXiv preprint arXiv:2305.12031_.
* Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_.
* Touvron et al. (2023b) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_.
* Uzuner (2009) Özlem Uzuner. 2009. Recognizing obesity and comorbidities in sparse data. _Journal of the American Medical Informatics Association_ , 16(4):561–570.
* Uzuner et al. (2008) Özlem Uzuner, Ira Goldstein, Yuan Luo, and Isaac Kohane. 2008. Identifying patient smoking status from medical discharge records. _Journal of the American Medical Informatics Association_ , 15(1):14–24.
* Uzuner et al. (2010) Özlem Uzuner, Imre Solti, and Eithon Cadag. 2010. Extracting medication information from clinical text. _Journal of the American Medical Informatics Association_ , 17(5):514–518.
* Uzuner et al. (2011) Özlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. _Journal of the American Medical Informatics Association_ , 18(5):552–556.
* Xu et al. (2023) Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. _arXiv preprint arXiv:2304.12244_.
* Yue et al. (2020) Xiang Yue, Bernal Jimenez Gutierrez, and Huan Sun. 2020. Clinical reading comprehension: A thorough analysis of the emrQA dataset. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4474–4486, Online. Association for Computational Linguistics.
* Yue et al. (2021) Xiang Yue, Xinliang Frederick Zhang, Ziyu Yao, Simon Lin, and Huan Sun. 2021. Cliniqg4qa: Generating diverse questions for domain adaptation of clinical question answering. In _2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)_ , pages 580–587. IEEE.
* Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4791–4800.
* Zhou et al. (2023) Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. Lima: Less is more for alignment. _arXiv preprint arXiv:2305.11206_.
## Appendix A Prompts
When generating the EHRNoteQA data, we used the gpt-4 (0613) model version, run on Azure's HIPAA-compliant platform (https://learn.microsoft.com/en-us/azure/compliance/offerings/offering-hipaa-us). The total cost of creating the data was $3,000. For the data-generation hyperparameters, we set the temperature to 1 and left the rest at their default settings. When evaluating models on our dataset, we used GPT-4 from the OpenAI API, model version gpt-4 (1106-preview). Scoring the entire EHRNoteQA set of 962 items once costs $4.50 per model. During model scoring, we set the temperature to 0 and kept the other settings at their defaults. Appendices A.1 and A.2 give the prompts used for generating the data in Step 1 and Step 2, respectively, and Appendix A.3 gives the prompt used for evaluating a model's output on our dataset.
### A.1 Question Generation (Step 1)
Situation: When a patient is admitted to the hospital, important clinical records are summarized in a 'discharge summary'. On the patient's subsequent visit, the previous 'discharge summaries' serve as an essential reference for the doctor's clinical decision making.

Objective: Please formulate one question that a doctor might actually ask based on the provided 'discharge summaries'. The question should have a clear answer, and this answer should be found within the provided 'discharge summaries'.

Note:
1. The 'discharge summary' is provided between [note 1 start] and [note 1 end]. If there are multiple notes, they are labeled as [note 1 start], [note 2 start], etc.
2. The 'discharge summaries' are provided in chronological order. This means note 1 is a record from before note 2. At the beginning of each note, there is an admission ID and the date it was written, so please refer to that.
3. Please refrain from formulating questions that can be answered without referring to a note.
4. Do not create a question that is too easy to answer. To answer your question, someone should have the clinical expertise equivalent to a doctor and must fully understand all provided discharge summaries.
5. Your answer should also contain a short rationale behind the answer.
6. When explaining the answer and its rationale, utilize the chart date of the note. In other words, instead of saying the first note or the second note, phrase it as 'as per the note charted on [specific date], ...'.
7. Arrange your output in the following format:
- Question: [Your Question]
- Answer: [Your Answer]
- Reason: [Explanation for your answer]
### A.2 Answer Generation (Step 2)
Objective: Please generate a multiple-choice question-answering item with five possible answers (A-E) based on the doctor's question derived from the provided 'discharge summary'. Ensure that one answer is correct, and the remaining four are incorrect answers.

Note:
1. Use the provided doctor's question as the basis for the multiple-choice question without modification.
2. The 'discharge summary' is enclosed within [note 1 start] and [note 1 end], with additional notes being similarly labeled.
3. All distractors (incorrect answer choices) should contain contents that appear in the provided discharge summary but should be a wrong answer to the question.
4. After choosing all five choices, paraphrase them so that all choices are consistent in format and length. Ensure that the longest answer choice is not the correct answer.
5. The correct answer should be clearly indicated, and the rationale should explain why this is the answer and why the other options are not correct but are good distractors.
6. Arrange your output in the following format:
- Question: [The doctor's question]
- Answer Choices: A: [First option] B: [Second option] C: [Third option] D: [Fourth option] E: [Fifth option]
- Correct Answer: [The letter of the correct choice]
- Reason: [Explanation behind your answer and why each of the other options is incorrect but can be a good distractor]
### A.3 Evaluation
Your task is to evaluate the provided model output by determining whether it matches the correct answer from the multiple-choice options provided. The model output is correct and should be met with a "yes" if it accurately reflects the content of the correct answer choice, not necessarily its exact wording. If the content of the model output aligns with the correct answer choice, despite any additional details or varied phrasing, you are to respond "yes". Should the model output diverge in meaning or substance from the correct answer, whether by selecting an alternative choice or providing a response not aligning with any provided options, a response of "no" is necessary.
Model Output: {output}
Answer Choices: {choices}
Correct Answer: {answer}
With the given information, do you conclude that the model output substantively matches the correct answer provided? Respond solely with "yes" or "no".
## Appendix B Model Evaluation on EHRNoteQA
| Model | MC Take1 | MC Take2 | MC Take3 | MC Take4 | MC Take5 | MC Mode | FT Take1 | FT Take2 | FT Take3 | FT Take4 | FT Take5 | FT Mode |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GPT4 turbo preview (1106-preview) | 95.27 (2) | 95.46 (2) | 95.27 (2) | 95.46 (2) | 95.46 (2) | 2 | 87.71 (2) | 87.52 (2) | 86.20 (2) | 86.20 (2) | 87.33 (2) | 2 |
| GPT4 (0613) | 97.16 (1) | 97.16 (1) | 97.16 (1) | 96.98 (1) | 97.16 (1) | 1 | 91.02 (1) | 92.44 (1) | 90.74 (1) | 90.55 (1) | 90.55 (1) | 1 |
| GPT35-turbo-16k (0613) | 87.90 (5) | 88.28 (5) | 88.28 (5) | 88.47 (5) | 88.47 (5) | 5 | 80.91 (3) | 82.42 (3) | 80.72 (3) | 79.40 (3) | 81.47 (3) | 3 |
| Llama-2-70b-chat-hf | 84.50 (9) | 84.69 (9) | 85.07 (9) | 84.69 (9) | 84.31 (9) | 9 | 71.46 (9) | 72.02 (11) | 70.70 (11) | 70.89 (11) | 70.32 (10) | 11 |
| qCammel-70-x | 85.63 (8) | 85.82 (8) | 85.63 (8) | 85.63 (8) | 85.44 (8) | 8 | 74.67 (7) | 73.35 (9) | 71.64 (8) | 72.78 (8) | 71.64 (8) | 8 |
| Camel-Platypus2-70B | 89.41 (4) | 89.79 (4) | 89.98 (4) | 89.79 (4) | 89.98 (4) | 4 | 76.37 (6) | 78.07 (6) | 75.61 (6) | 75.61 (6) | 74.48 (6) | 6 |
| Platypus2-70B-instruct | 90.36 (3) | 90.17 (3) | 90.36 (3) | 90.17 (3) | 90.55 (3) | 3 | 78.26 (5) | 82.04 (4) | 77.88 (4) | 79.02 (4) | 77.50 (4) | 4 |
| mpt-30b-instruct | 79.96 (13) | 79.77 (13) | 79.40 (14) | 79.40 (13) | 79.77 (13) | 13 | 56.33 (18) | 56.90 (18) | 56.71 (18) | 54.06 (19) | 56.52 (18) | 18 |
| Llama-2-13b-chat-hf | 73.35 (19) | 72.78 (19) | 72.97 (19) | 73.35 (19) | 73.53 (19) | 19 | 62.38 (16) | 64.46 (16) | 60.68 (16) | 61.25 (16) | 62.57 (16) | 16 |
| vicuna-13b-v1.5 | 82.42 (10) | 82.04 (10) | 82.04 (10) | 81.85 (11) | 82.23 (10) | 10 | 64.27 (15) | 65.78 (14) | 63.33 (15) | 65.78 (14) | 65.03 (14) | 14 |
| WizardLM-13B-V1.2 | 80.72 (12) | 80.72 (12) | 80.53 (12) | 80.91 (12) | 80.91 (12) | 12 | 67.11 (13) | 64.84 (15) | 64.27 (14) | 62.76 (15) | 65.03 (14) | 15 |
| qCammel-13 | 71.27 (20) | 71.27 (20) | 71.64 (20) | 70.08 (20) | 71.27 (20) | 20 | 54.82 (19) | 55.01 (19) | 53.88 (19) | 55.01 (18) | 52.93 (19) | 19 |
| OpenOrca-Platypus2-13B | 85.82 (7) | 86.39 (7) | 85.82 (7) | 85.82 (7) | 85.63 (7) | 7 | 71.27 (10) | 74.10 (8) | 71.46 (9) | 72.78 (8) | 70.51 (9) | 8 |
| Camel-Platypus2-13B | 78.26 (16) | 77.32 (17) | 77.88 (17) | 77.69 (17) | 78.64 (15) | 17 | 58.41 (17) | 60.68 (17) | 58.60 (17) | 58.03 (17) | 58.41 (17) | 17 |
| Synthia-13B-v1.2 | 79.40 (14) | 79.21 (14) | 79.58 (13) | 79.02 (14) | 79.21 (14) | 14 | 70.32 (11) | 73.35 (9) | 71.27 (10) | 71.08 (10) | 70.32 (10) | 10 |
| Llama-2-7b-chat-hf | 65.41 (21) | 65.97 (21) | 66.16 (21) | 65.41 (21) | 65.41 (21) | 21 | 52.36 (20) | 49.53 (21) | 50.28 (21) | 51.61 (20) | 49.72 (20) | 20 |
| vicuna-7b-v1.5 | 78.26 (16) | 77.69 (16) | 79.02 (15) | 78.26 (16) | 77.88 (17) | 16 | 51.98 (21) | 51.61 (20) | 50.66 (20) | 49.91 (21) | 48.02 (21) | 21 |
| Mistral-7B-Instruct-v0.1 | 81.66 (11) | 82.04 (10) | 82.04 (10) | 82.04 (10) | 81.85 (11) | 10 | 66.73 (14) | 66.73 (13) | 64.84 (13) | 66.54 (12) | 65.78 (13) | 13 |
| mpt-7b-8k-instruct | 59.36 (22) | 59.55 (22) | 59.74 (22) | 59.55 (22) | 59.36 (22) | 22 | 42.91 (22) | 44.99 (22) | 43.29 (22) | 45.18 (22) | 41.97 (22) | 22 |
| dolphin-2.0-mistral-7b | 76.18 (18) | 76.37 (18) | 76.18 (18) | 76.37 (18) | 75.99 (18) | 18 | 67.30 (12) | 67.67 (12) | 65.41 (12) | 66.54 (12) | 67.30 (12) | 12 |
| Mistral-7B-OpenOrca | 87.15 (6) | 87.15 (6) | 86.96 (6) | 87.15 (6) | 86.96 (6) | 6 | 80.72 (4) | 80.53 (5) | 77.50 (5) | 79.02 (4) | 76.94 (5) | 5 |
| SynthIA-7B-v1.3 | 78.45 (15) | 78.26 (15) | 78.45 (16) | 78.45 (15) | 78.45 (16) | 15 | 73.53 (8) | 75.61 (7) | 72.02 (7) | 73.35 (7) | 73.16 (7) | 7 |
Table 6: Evaluation results of EHRNoteQA in Multi-Choice (MC) and Free-Text (FT) format on 22 Large Language Models. Each model is scored five times; the number in parentheses next to each score is the model's rank on that take, and the most frequently occurring rank across takes is reported as the final rank (Mode columns).
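As a small illustration of this rank-aggregation rule (hypothetical helper code, not the authors' evaluation script), the final rank is simply the mode of the five per-take ranks:

```python
# Final-rank rule from Table 6: the most frequent rank across the five takes.
from statistics import mode

def final_rank(ranks: list[int]) -> int:
    """Return the most frequently occurring rank over the five takes."""
    return mode(ranks)

# Free-Text ranks of Llama-2-70b-chat-hf across the five takes in Table 6:
print(final_rank([9, 11, 11, 11, 10]))  # -> 11, matching the Mode column
```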
## Appendix C Model Evaluation on Benchmarks
As described in Section 4.3, our experiment involved three clinicians assessing 19 models using the DiSCQ questionnaire (Lehman et al., 2022). The outcomes of their evaluations are presented in the Clinician A, B, and C columns of Table 7. To establish a point of reference for these results, we also evaluated the same models using 11 benchmarks, including EHRNoteQA. The medical benchmarks, namely MedQA (Jin et al., 2021), PubMedQA (Jin et al., 2019), MMLU* (Hendrycks et al., 2020) (the subset of MMLU whose topics are related to medicine: Anatomy, Clinical Knowledge, College Biology, College Medicine, and Medical Genetics), and MedMCQA (Pal et al., 2022), were assessed using the same method applied to EHRNoteQA. For the other benchmarks, namely ARC (Clark et al., 2018), HellaSwag (Zellers et al., 2019), MMLU, TruthfulQA (Lin et al., 2022), Winogrande (Sakaguchi et al., 2021), and GSM8K (Cobbe et al., 2021), we sourced the scores from the Hugging Face Open Source Large Language Model leaderboard (Beeching et al., 2023). The 'AVG' column reports the average score across the benchmarks listed on the leaderboard.
| Model | Clinician A | Clinician B | Clinician C | EHRNoteQA | MedQA | PubMedQA | MMLU* | MedMCQA | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama-2-70b-chat-hf | 53 | 48 | 37 | 84.65 | 44.38 | 67.8 | 58.7 | 37.84 | 64.59 | 85.88 | 63.91 | 52.8 | 80.51 | 26.69 | 62.40 |
| qCammel-70-x | 45 | 60 | 52 | 85.63 | 55.77 | 62.6 | 66.77 | 43.61 | 68.34 | 87.87 | 70.18 | 57.47 | 84.29 | 29.72 | 66.31 |
| Camel-Platypus2-70B | 62 | 69 | 68 | 89.79 | 60.02 | 64 | 71.7 | 49.8 | 71.08 | 87.6 | 70.04 | 58.09 | 83.82 | 22.9 | 65.59 |
| Platypus2-70B-instruct | 75 | 88 | 77 | 90.32 | 56.17 | 64.2 | 73.79 | 50.59 | 71.84 | 87.94 | 70.48 | 62.26 | 82.72 | 40.56 | 69.30 |
| mpt-30b-instruct | 28 | 41 | 22 | 79.66 | 41.01 | 68 | 50.31 | 39.06 | 58.45 | 84.31 | 49.15 | 38.05 | 75.14 | 15.31 | 53.40 |
| Llama-2-13b-chat-hf | 57 | 52 | 45 | 73.20 | 36.61 | 58.8 | 49.16 | 32.92 | 59.04 | 81.94 | 54.64 | 44.12 | 74.51 | 15.24 | 54.92 |
| vicuna-13b-v1.5 | 60 | 61 | 56 | 82.12 | 43.21 | 66.8 | 57.86 | 40.21 | 57.08 | 81.24 | 56.67 | 51.51 | 74.66 | 11.3 | 55.41 |
| WizardLM-13B-V1.2 | 58 | 65 | 57 | 80.76 | 39.75 | 57.2 | 51.99 | 33.99 | 59.04 | 82.21 | 54.64 | 47.27 | 71.9 | 13.5 | 54.76 |
| qCammel-13 | 31 | 45 | 40 | 71.11 | 41.48 | 55.8 | 51.68 | 33.66 | 60.84 | 83.66 | 56.73 | 47.54 | 76.16 | 11.37 | 56.05 |
| OpenOrca-Platypus2-13B | 69 | 76 | 65 | 85.90 | 44.3 | 60.6 | 59.12 | 43.1 | 62.8 | 83.15 | 59.39 | 53.08 | 76.24 | 9.02 | 57.28 |
| Camel-Platypus2-13B | 40 | 57 | 49 | 77.96 | 46.66 | 61 | 59.12 | 39.3 | 60.75 | 83.61 | 56.51 | 49.6 | 75.37 | 0.08 | 54.32 |
| Synthia-13B-v1.2 | 51 | 52 | 53 | 79.28 | 40.61 | 54.8 | 51.47 | 38.32 | 61.26 | 82.93 | 56.47 | 47.27 | 76.48 | 10.99 | 55.90 |
| Llama-2-7b-chat-hf | 38 | 33 | 17 | 65.67 | 35.43 | 58.6 | 44.23 | 31.8 | 52.9 | 78.55 | 48.32 | 45.57 | 71.74 | 7.35 | 50.74 |
| vicuna-7b-v1.5 | 42 | 54 | 52 | 78.22 | 38.65 | 63.4 | 49.37 | 35.29 | 53.24 | 77.39 | 51.04 | 50.34 | 72.14 | 8.19 | 52.06 |
| Mistral-7B-Instruct-v0.1 | 55 | 59 | 55 | 81.93 | 41.16 | 51 | 55.45 | 38.99 | 54.52 | 75.63 | 55.38 | 56.28 | 73.72 | 14.25 | 54.96 |
| mpt-7b-8k-instruct | 28 | 20 | 23 | 59.51 | 33.62 | 59.1 | 38.99 | 35.67 | 45.9 | 74.47 | 41.97 | 35.21 | 65.98 | 20.7 | 47.37 |
| dolphin-2.0-mistral-7b | 55 | 57 | 51 | 76.22 | 43.99 | 52.4 | 56.5 | 38.58 | 59.22 | 80.26 | 56.9 | 61.09 | 75.37 | 18.65 | 58.58 |
| Mistral-7B-OpenOrca | 61 | 66 | 67 | 87.07 | 47.29 | 62 | 60.06 | 40.23 | 64.08 | 83.99 | 62.24 | 53.05 | 77.74 | 19.94 | 60.17 |
| SynthIA-7B-v1.3 | 51 | 58 | 55 | 78.41 | 47.13 | 69 | 53.77 | 42.67 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 59.34 |
Table 7: Clinician and benchmark evaluation results for the 19 Large Language Models.
Masahito Hayashi
Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; International Quantum Academy (SIQA), Futian District, Shenzhen 518048, China; Graduate School of Mathematics, Nagoya University, Nagoya 464-8602, Japan. Email: <EMAIL_ADDRESS>
# Special functions in quantum phase estimation
Masahito Hayashi
###### Abstract
This paper explains existing results on the application of special functions to phase estimation, a fundamental topic in quantum information. We focus on two special functions. One is the prolate spheroidal wave function, which approximately gives the maximum probability that the difference between the true parameter and the estimate is smaller than a certain threshold. The other is the Mathieu function, which exactly gives the optimal estimation under an energy constraint. It also characterizes the uncertainty relation between the position and the momentum for periodic functions.
## 1 Introduction
It is well known that quantum systems have group symmetry. Therefore, various quantum information processing tasks can utilize group symmetry to enhance or optimize their operations. One typical example is the estimation of an unknown unitary operation. In this problem setting, the set of possible unitary operations often forms a group representation. When the input state is fixed to a certain state, this problem can be considered a special case of the estimation of an unknown state under a group symmetric model. For this type of state estimation, Holevo formulated a systematic group symmetric approach (Holevo; Holevo2). Holevo's approach is known as a powerful tool for state estimation (H98; Group2). Using Holevo's approach, the above estimation problem of an unknown unitary operation was formulated in a general form in CDS; CMP.
The simplest case of the estimation of an unknown unitary operation is phase estimation, which is formulated as the optimization of methods for estimating an unknown element of $\mathop{\rm U}(1)$. A group symmetric approach works well for this problem. Interestingly, although this problem can be formulated with various choices of error function and of available input systems or input states, the optimal solution in several special cases can be characterized by special functions. This paper surveys existing results on these relations between special functions and the optimal solutions in several examples of phase estimation. In particular, this paper focuses on two special functions, the prolate spheroidal wave function and the Mathieu function. The prolate spheroidal wave function approximately gives an optimal input state that maximizes the probability that the difference between the true parameter and the estimate is smaller than a certain threshold. The Mathieu function gives an optimal input state under a certain energy constraint. This characterization can be used for the uncertainty relation between the position and the momentum on the space of periodic functions. In this way, these two special functions play a central role in phase estimation.
The remainder of this paper is organized as follows. Section 2 gives the formulation of phase estimation. Section 3 discusses phase estimation in more specific examples and presents the relation between the prolate spheroidal wave function and phase estimation. Section 4 addresses phase estimation under an energy constraint and presents the relation between the Mathieu function and phase estimation. Section 5 applies the result of Section 4 to the uncertainty relation between the position and the momentum on the space of periodic functions.
Figure 1: The process of estimating the action of an unknown element of $\mathop{\rm U}(1)$.
## 2 Formulation
We estimate the unknown application of an element of $\mathop{\rm U}(1)$ in various settings, which we formulate as follows. First, we consider a fixed unitary representation $\mathsf{f}$ of the group $\mathop{\rm U}(1)$ on a Hilbert space $\mathcal{H}$, which represents our physical system. We are allowed to choose the input state $\rho$ and the quantum measurement on the system $\mathcal{H}$ to obtain our estimate in $\mathop{\rm U}(1)$. The quantum measurement on the system $\mathcal{H}$ is described by a positive operator-valued measure (POVM) on $\mathcal{H}$, which is given as ${\cal M}:=(M_{\theta})_{\theta\in[0,2\pi)}$ with the condition
$\displaystyle\int_{0}^{2\pi}M_{\theta}d\theta=I$ (1)
by identifying $\mathop{\rm U}(1)$ with $[0,2\pi)$. Our estimation scheme for the unknown application $\mathsf{f}(e^{i\theta})$ with $e^{i\theta}\in\mathop{\rm U}(1)$ is depicted in Fig. 1 (CMP; LP; BDM; PLA; IH09).
When the true unitary action is $\mathsf{f}(e^{i\theta})$, the output
$\hat{\theta}\in[0,2\pi)$ is generated by the distribution $\mathop{\rm
Tr}\mathsf{f}(e^{i\theta})\rho\mathsf{f}(e^{i\theta})^{\dagger}M_{\hat{\theta}}d\hat{\theta}$.
To evaluate the precision of our estimate, we consider the error function
$R(\theta,\hat{\theta})$. For the symmetry of our problem setting, we impose
the symmetric condition
$\displaystyle
R(\theta,\hat{\theta})=R(0,\hat{\theta}-\theta)=R(0,\hat{\theta}-\theta+2n\pi)$
(2)
with any integer $n$. Then, the average error is calculated as the following function of $\theta,\rho,{\cal M}$ (CMP):
$\displaystyle{\cal R}[\mathsf{f},R,\theta,\rho,{\cal
M}]:=\int_{0}^{2\pi}R(\theta,\hat{\theta})\mathop{\rm
Tr}\mathsf{f}(e^{i\theta})\rho\mathsf{f}(e^{i\theta})^{\dagger}M_{\hat{\theta}}d\hat{\theta}.$
(3)
It is natural to focus on the worst value ${\cal R}_{\max}[\mathsf{f},R,\rho,{\cal M}]:=\max_{\theta}{\cal R}[\mathsf{f},R,\theta,\rho,{\cal M}]$ or the average value ${\cal R}_{av}[\mathsf{f},R,\rho,{\cal M}]:=\frac{1}{2\pi}\int_{0}^{2\pi}{\cal R}[\mathsf{f},R,\theta,\rho,{\cal M}]d\theta$ with respect to the unknown parameter $\theta$ (CMP). We consider the following minimizations (CMP):
$\displaystyle{\cal R}_{\max}[\mathsf{f},R]:=\min_{\rho,{\cal M}}{\cal
R}_{\max}[\mathsf{f},R,\rho,{\cal M}],\quad{\cal
R}_{av}[\mathsf{f},R]:=\min_{\rho,{\cal M}}{\cal
R}_{av}[\mathsf{f},R,\rho,{\cal M}].$ (4)
Figure 2: The process of estimating the action of an unknown element of $\mathop{\rm U}(1)$ with a reference system. The input may be an entangled state between the system $\mathcal{H}$ and the reference system $\mathcal{H}_{R}$.
To discuss the above problems, we consider a detailed structure. An
irreducible representation of $\mathop{\rm U}(1)$ is characterized by an
integer $n\in\mathbb{Z}$ and has a one-dimensional representation space
$\mathcal{H}_{n}$. This representation is denoted as $\mathsf{f}_{n}$ and is
defined as $\mathsf{f}_{n}(e^{i\theta})=e^{in\theta}$.
Now, we consider a general representation $\mathsf{f}$ of $\mathop{\rm U}(1)$
and its representation space $\mathcal{H}$. Let $S$ be the set of indexes $n$
whose corresponding irreducible representation $\mathsf{f}_{n}$ is contained
in $\mathsf{f}$. We denote the multiplicity of $\mathsf{f}_{n}$ in
$\mathsf{f}$ by $m_{n}$, and define an $m_{n}$-dimensional space by
$\mathcal{V}_{n}$. Then, the representation space $\mathcal{H}$ is written as
$\oplus_{n\in S}\mathcal{H}_{n}\otimes\mathcal{V}_{n}$, where the group
$\mathop{\rm U}(1)$ acts only on $\mathcal{H}_{n}$. That is, for
$x=\oplus_{n\in S}x_{n}\otimes v_{n}\in\oplus_{n\in
S}\mathcal{H}_{n}\otimes\mathcal{V}_{n}$, we have
$\displaystyle\mathsf{f}(g)x=\bigoplus_{n\in S}(\mathsf{f}_{n}(g)x_{n})\otimes
v_{n}$ (5)
for $g\in\mathop{\rm U}(1)$.
This formulation covers the case when the input state is an entangled state between the system $\mathcal{H}$ and a reference system $\mathcal{H}_{R}$, as in Fig. 2, because the joint system $\mathcal{H}\otimes\mathcal{H}_{R}$ also has the form $\oplus_{n\in S}\mathcal{H}_{n}\otimes\mathcal{V}_{n}$.
When the multiplicity $m_{n}$ is one for any $n\in S$, the representation
$\mathsf{f}$ is called multiplicity-free with $S$ and is denoted by
$\mathsf{f}_{S}$. Under the representation $\mathsf{f}_{S}$, we denote a
normalized vector in $\mathcal{H}_{n}$ by $e_{n}$. The representation space of
the representation $\mathsf{f}_{S}$ is the space $\mathcal{H}_{S}$ spanned by
the orthogonal vectors $\\{e_{n}\\}_{n\in S}$. Under the representation
$\mathsf{f}_{S}$, we consider the following types of positive operator-valued
measure. Consider a vector $|w\rangle:=\sum_{n\in S}|e_{n}\rangle$. We choose
$M_{\hat{\theta}}:=\frac{1}{2\pi}\mathsf{f}_{S}(e^{i\hat{\theta}})^{\dagger}|w\rangle\langle
w|\mathsf{f}_{S}(e^{i\hat{\theta}})$, which satisfies the condition (1) for
POVM. This POVM is written as ${\cal M}_{w}$. Also, an element $|\phi\rangle$ of the vector space $\mathcal{H}_{S}$ can be identified with $(\phi_{n})_{n\in S}$ through the relation $|\phi\rangle=\sum_{n\in S}\phi_{n}|e_{n}\rangle$. We define the Fourier transform ${\cal F}[\phi](\hat{\theta})$ as $\sum_{n\in S}\phi_{n}e^{in\hat{\theta}}=\langle w|\mathsf{f}_{S}(e^{i\hat{\theta}})|\phi\rangle$. Then, as shown in (CMP, Lemma 1 and Theorem 1) and in CDS, we have
$\displaystyle{\cal R}_{\max}[\mathsf{f},R]=$ $\displaystyle{\cal
R}_{av}[\mathsf{f},R]=\min_{|\phi\rangle\in\mathcal{H}_{S}}{\cal
R}[\mathsf{f}_{S},R,0,|\phi\rangle\langle\phi|,{\cal M}_{w}]$ $\displaystyle=$
$\displaystyle\min_{|\phi\rangle\in\mathcal{H}_{S}}\frac{1}{2\pi}\int_{0}^{2\pi}R(0,\hat{\theta})\langle
w|\mathsf{f}_{S}(e^{i\hat{\theta}})|\phi\rangle\langle\phi|\mathsf{f}_{S}(e^{i\hat{\theta}})^{\dagger}|w\rangle
d\hat{\theta}$ $\displaystyle=$
$\displaystyle\min_{|\phi\rangle\in\mathcal{H}_{S}}\frac{1}{2\pi}\int_{0}^{2\pi}R(0,\hat{\theta})|{\cal
F}[\phi](\hat{\theta})|^{2}d\hat{\theta}.$ (6)
## 3 Constraint for available irreducible representation
In this section, we consider several examples where the set of available irreducible representations is restricted. We assume that $R$ is given as
$R_{\sin}(\theta,\hat{\theta}):=2\sin^{2}\frac{\theta-\hat{\theta}}{2}=1-\cos(\theta-\hat{\theta})$.
We consider a typical representation $\mathsf{f}_{\\{0,1\\}}$. We often
consider its $n$-fold tensor product representation
$\mathsf{f}_{\\{0,1\\}}^{\otimes n}$. In this representation, the set of
indexes $S$ is $\\{0,1,\ldots,n\\}$. Hence, it is sufficient to address
$\mathsf{f}_{\\{0,1,\ldots,n\\}}$. Then, the minimization (6) is calculated as
$\displaystyle{\cal R}_{\max}[\mathsf{f}_{\\{0,1\\}}^{\otimes n},R_{\sin}]=$
$\displaystyle{\cal R}_{av}[\mathsf{f}_{\\{0,1\\}}^{\otimes n},R_{\sin}]={\cal
R}_{av}[\mathsf{f}_{\\{0,1,\ldots,n\\}},R_{\sin}]$ $\displaystyle=$
$\displaystyle\min_{|\phi\rangle\in\mathcal{H}_{\\{0,1,\ldots,n\\}}}\frac{1}{2\pi}\int_{0}^{2\pi}2\sin^{2}\frac{\hat{\theta}}{2}|{\cal
F}[\phi](\hat{\theta})|^{2}d\hat{\theta}$ $\displaystyle=$
$\displaystyle\min_{|\phi\rangle\in\mathcal{H}_{\\{0,1,\ldots,n\\}}}1-\frac{1}{2}\sum_{j=0}^{n-1}(\overline{\phi}_{j}\phi_{j+1}+\phi_{j}\overline{\phi}_{j+1}).$
(7)
For the derivation of the final step, see Holevo; Holevo2; BDM; (PLA, Section 2); and (CMP, Theorem 7).
In fact, the maximum eigenvalue of the operator $\frac{1}{2}\sum_{j=0}^{n-1}(|e_{j}\rangle\langle e_{j+1}|+|e_{j+1}\rangle\langle e_{j}|)$ is $\cos\frac{\pi}{n+2}$, and its corresponding eigenvector is $C\sum_{j=0}^{n}\sin\frac{(j+1)\pi}{n+2}|e_{j}\rangle$ with a normalizing constant $C$ (CMP, Theorem 7). Hence, the above minimum is
$\displaystyle 1-\cos\frac{\pi}{n+2}=2\sin^{2}\frac{\pi}{2(n+2)},$ (8)
which asymptotically behaves as $\frac{\pi^{2}}{2n^{2}}$. This type of analysis was extended to the case of the group SU(2) (BBM; CDPS2; PLA). In this case, the error is inversely proportional to $n^{2}$. This scaling is called Heisenberg scaling.
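As a quick numerical sanity check of (8), the following sketch (plain NumPy, an illustrative assumption of this write-up rather than code from the cited papers) builds the $(n+1)\times(n+1)$ tridiagonal matrix above and compares its largest eigenvalue with $\cos\frac{\pi}{n+2}$:

```python
# Numerical check of (8): the largest eigenvalue of the tridiagonal matrix
# with 1/2 on the off-diagonals (acting on span{e_0, ..., e_n}) is cos(pi/(n+2)).
import numpy as np

n = 10
A = 0.5 * (np.diag(np.ones(n), 1) + np.diag(np.ones(n), -1))  # (n+1) x (n+1)
lam = np.linalg.eigvalsh(A)[-1]            # largest eigenvalue
print(lam, np.cos(np.pi / (n + 2)))        # agree to machine precision
print(1 - lam, np.pi**2 / (2 * n**2))      # minimum error vs. its asymptotic form
```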
###### Remark 1
Here, it is worth remarking that many papers discussed Heisenberg scaling in a misleading way (GLM; GLM2; NOOST; OHNOST; JKFABBM). The above discussion calculated the minimum error directly. Instead of such a calculation, these papers study the asymptotic behavior of the minimum error via the relation between the estimation error and Fisher information: the estimation error is lower bounded by the inverse of the Fisher information. The attainability of this lower bound is not trivial in general. For example, in the case of state estimation, this lower bound can be attained by a two-step method under a natural regularity condition HM. However, in the case of unitary estimation, this lower bound cannot be attained. In particular, the lower bound given by the maximum Fisher information is strictly smaller than the optimal estimation error even at the level of the first-order coefficient CMP2. These papers assumed that the maximum Fisher information characterizes the estimation error even in this case, whereas the Fisher-information approach does not capture the Heisenberg scaling of the estimation error in phase estimation.
Next, we discuss the asymptotic behavior in another way (IH09, Section 4). For simple analysis, we focus on the representation $\mathsf{f}_{\\{-N,\ldots,N\\}}$ instead of $\mathsf{f}_{\\{0,1,\ldots,n\\}}$.
We consider the function space $L^{2}([-1,1])$ and its dense subset
$L_{c}^{2}([-1,1]):=L^{2}([-1,1])\cap C([-1,1])$, where $L^{2}([-1,1])$ is the
set of square integrable functions on $[-1,1]$ and $C([-1,1])$ is the set of
continuous functions on $[-1,1]$. Given a normalized continuous function
$\psi\in L_{c}^{2}([-1,1])$, we choose
$\phi^{(n)}\in\mathcal{H}_{\\{-N,\ldots,N\\}}$ as the normalized vector of
$(\psi(\frac{j}{N}))_{j=-N}^{N}$. We define the Fourier transform ${\cal F}$
on $L^{2}(\mathbb{R})$ as
$\displaystyle{\cal
F}[\psi](t):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{itx}\psi(x)dx.$
(9)
Then, using $t=N\hat{\theta}$, we have
$\displaystyle\frac{N^{2}}{2\pi}\int_{0}^{2\pi}2\sin^{2}\frac{\hat{\theta}}{2}|{\cal F}[\phi](\hat{\theta})|^{2}d\hat{\theta}$ $\displaystyle\cong$ $\displaystyle\frac{1}{2}\int_{-\infty}^{\infty}t^{2}|{\cal F}[\psi](t)|^{2}dt=\frac{1}{2}\langle{\cal F}[\psi]|Q^{2}|{\cal F}[\psi]\rangle=\frac{1}{2}\langle\psi|P^{2}|\psi\rangle.$ (10)
Here $Q$ is the multiplication (position) operator and $P$ is the momentum operator defined as $P\psi(x):=i\frac{d}{dx}\psi(x)$. In fact, the minimum eigenvalue of $P^{2}$ on the function space $L^{2}([-1,1])$ is $\frac{\pi^{2}}{4}$. Hence, the minimum of (10) is $\frac{\pi^{2}}{8}$, which coincides with the asymptotic behavior of (8) with $n=2N$.
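The following minimal sketch (an assumption of this write-up, using a finite-difference discretization rather than anything from the cited papers) verifies that the smallest eigenvalue of $P^{2}=-d^{2}/dx^{2}$ on $[-1,1]$ with Dirichlet boundary conditions is indeed $\pi^{2}/4$:

```python
# Finite-difference check: the minimum eigenvalue of -d^2/dx^2 on (-1, 1) with
# Dirichlet boundaries converges to pi^2/4, so the minimum of (10) is pi^2/8.
import numpy as np

N = 500                                   # number of interior grid points
h = 2.0 / (N + 1)
P2 = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
      - np.diag(np.ones(N - 1), -1)) / h**2
print(np.linalg.eigvalsh(P2)[0], np.pi**2 / 4)   # ~2.467 for both
```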
Next, given a real number $T>0$, we maximize the probability that the estimate satisfies $-\frac{T}{N}<\hat{\theta}-\theta<\frac{T}{N}$ (IH09, Section 5). For this aim, we choose the error function $R(\theta,\hat{\theta})$ to be the probability of the complementary event $|\hat{\theta}-\theta|\geq\frac{T}{N}$, which is denoted by $R[T](\theta,\hat{\theta})$. Then, we have
$\displaystyle{\cal R}_{\max}[\mathsf{f}_{\\{-N,\ldots,N\\}},R[T]]=$ $\displaystyle{\cal R}_{av}[\mathsf{f}_{\\{-N,\ldots,N\\}},R[T]]$ $\displaystyle=$ $\displaystyle\min_{|\phi\rangle\in\mathcal{H}_{\\{-N,\ldots,N\\}}}1-\frac{1}{2\pi}\int_{-\frac{T}{N}}^{\frac{T}{N}}|{\cal F}[\phi](\hat{\theta})|^{2}d\hat{\theta}.$ (11)
For simple analysis, we focus on the case when the vector $|\phi\rangle\in\mathcal{H}_{\\{-N,\ldots,N\\}}$ is given in the above way. As shown in (IH09, Section 5), we have
$\displaystyle\frac{1}{2\pi}\int_{-\frac{T}{N}}^{\frac{T}{N}}|{\cal F}[\phi](\hat{\theta})|^{2}d\hat{\theta}\cong\int_{-T}^{T}|{\cal F}[\psi](t)|^{2}dt.$ (12)
We define the projection $\Pi_{T}$ corresponding to the event that the spectrum of $Q$ belongs to $[-T,T]$. Since $\psi\in L_{c}^{2}([-1,1])$ belongs to the range of the projection $\Pi_{1}$, we have
$\displaystyle\int_{-T}^{T}|{\cal F}[\psi](t)|^{2}dt=\langle{\cal F}[\psi]|\Pi_{T}|{\cal F}[\psi]\rangle=\langle\psi|{\cal F}^{\dagger}\Pi_{T}{\cal F}|\psi\rangle=\langle\psi|\Pi_{1}{\cal F}^{\dagger}\Pi_{T}{\cal F}\Pi_{1}|\psi\rangle.$ (13)
The problem (11) is thus converted to the maximization of $\langle\psi|\Pi_{1}{\cal F}^{\dagger}\Pi_{T}{\cal F}\Pi_{1}|\psi\rangle$. To discuss the maximum eigenvalue of the operator $\Pi_{1}{\cal F}^{\dagger}\Pi_{T}{\cal F}\Pi_{1}$, we consider the prolate spheroidal wave function $\psi_{T}$, which is the solution of the differential equation
$\displaystyle\frac{d}{dx}(1-x^{2})\frac{d\psi}{dx}+(\xi(T)-T^{2}x^{2})\psi(x)=0,$ (14)
where $\xi(T)$ is a real number depending on $T$ (for the relation between $\xi(T)$ and $T$, see Slepian and Pollak SP). Slepian and Pollak SP showed that the function $\psi_{T}$ is the eigenfunction of the operator $\Pi_{1}{\cal F}^{\dagger}\Pi_{T}{\cal F}\Pi_{1}$ with the maximum eigenvalue $\lambda(T)$, which behaves as (Slepian)
$\displaystyle 1-\lambda(T)\cong 4\sqrt{\pi T}e^{-2T}\Big{(}1-\frac{3}{32T}+O(T^{-2})\Big{)}.$ (15)
In this way, the asymptotic behavior of the problem (11) is closely linked to a special function, the prolate spheroidal wave function.
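The top eigenpair can also be found numerically. On $[-1,1]$, the operator $\Pi_{1}{\cal F}^{\dagger}\Pi_{T}{\cal F}\Pi_{1}$ acts as the integral operator with kernel $\frac{\sin(T(x-y))}{\pi(x-y)}$, so a simple discretization (a sketch of this write-up, not code from SP or Slepian) recovers $\lambda(T)$, with the corresponding eigenvector approximating $\psi_{T}$:

```python
# Discretize the sinc-kernel operator K(x, y) = sin(T(x - y)) / (pi (x - y))
# on [-1, 1]; its largest eigenvalue approximates lambda(T) in (15).
import numpy as np

def slepian_top_eigenvalue(T: float, N: int = 400) -> float:
    x, h = np.linspace(-1.0, 1.0, N, retstep=True)
    d = x[:, None] - x[None, :]
    with np.errstate(invalid="ignore", divide="ignore"):
        K = np.sin(T * d) / (np.pi * d)
    np.fill_diagonal(K, T / np.pi)        # sinc limit on the diagonal
    return np.linalg.eigvalsh(K * h)[-1]  # rectangle-rule quadrature weight h

for T in (2.0, 4.0, 6.0):
    lam = slepian_top_eigenvalue(T)
    # compare with the leading term of (15), accurate for large T
    print(T, 1 - lam, 4 * np.sqrt(np.pi * T) * np.exp(-2 * T))
```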
## 4 Energy constraint
Now, we impose an energy constraint on the input state on $\mathcal{H}$ for a representation $\mathsf{f}$ (CMP, Section 11). We define the Hamiltonian $H$ on $\mathcal{H}$ as
$\displaystyle H:=\sum_{j\in S}j^{2}I_{j},$ (16)
where $I_{j}$ is the projection to the subspace $\mathcal{H}_{j}\otimes\mathcal{V}_{j}$. Then, we impose the following energy constraint on the input state $\rho$:
$\displaystyle\mathop{\rm Tr}\rho H\leq E.$ (17)
In the following, we consider the case with $S=\mathbb{Z}$, and denote the set
of states with the condition (17) by ${\cal S}_{E}$. We consider the following
minimizations
$\displaystyle{\cal R}_{\max}[\mathsf{f},R,E]:=$
$\displaystyle\min_{\rho\in{\cal S}_{E},{\cal M}}{\cal
R}_{\max}[\mathsf{f},R,\rho,{\cal M}],$ (18) $\displaystyle{\cal
R}_{av}[\mathsf{f},R,E]:=$ $\displaystyle\min_{\rho\in{\cal S}_{E},{\cal
M}}{\cal R}_{av}[\mathsf{f},R,\rho,{\cal M}].$ (19)
Let $\mathcal{H}_{\mathbb{Z},E}$ be the set of normalized vectors $\phi\in\mathcal{H}_{\mathbb{Z}}$ that satisfy the condition $\langle\phi|H|\phi\rangle\leq E$. When the error function $R$ satisfies the symmetric condition (2), as shown in (CMP, Theorem 2) as a variant of (6), we have
$\displaystyle{\cal R}_{\max}[\mathsf{f},R,E]=$ $\displaystyle{\cal R}_{av}[\mathsf{f},R,E]=\min_{|\phi\rangle\in\mathcal{H}_{\mathbb{Z},E}}{\cal R}[\mathsf{f}_{S},R,0,|\phi\rangle\langle\phi|,{\cal M}_{w}]$ $\displaystyle=$
$\displaystyle\min_{|\phi\rangle\in\mathcal{H}_{\mathbb{Z},E}}\frac{1}{2\pi}\int_{0}^{2\pi}R(0,\hat{\theta})\langle
w|\mathsf{f}_{S}(e^{i\hat{\theta}})|\phi\rangle\langle\phi|\mathsf{f}_{S}(e^{i\hat{\theta}})^{\dagger}|w\rangle
d\hat{\theta}$ $\displaystyle=$
$\displaystyle\min_{|\phi\rangle\in\mathcal{H}_{\mathbb{Z},E}}\frac{1}{2\pi}\int_{0}^{2\pi}R(0,\hat{\theta})|{\cal
F}[\phi](\hat{\theta})|^{2}d\hat{\theta}.$ (20)
To consider this problem, we define the function space $L^{2}_{p}((-\pi,\pi])$
as the space of the periodic square integrable functions with the period
$2\pi$. Then, we define the function space $L^{2}_{p,even}((-\pi,\pi])$ as the
space of even functions in $L^{2}_{p}((-\pi,\pi])$. Now, we choose
$R_{\sin}(\theta,\hat{\theta})=2\sin^{2}\frac{\theta-\hat{\theta}}{2}=1-\cos(\theta-\hat{\theta})$.
Then, as shown in (CMP, Theorem 6 and Eq. (97)), we have
$\displaystyle\min_{|\phi\rangle\in\mathcal{H}_{\mathbb{Z},E}}\frac{1}{2\pi}\int_{0}^{2\pi}R_{\sin}(0,\hat{\theta})|{\cal
F}[\phi](\hat{\theta})|^{2}d\hat{\theta}$ $\displaystyle=$
$\displaystyle\kappa(E):=\min_{\psi\in L^{2}_{p,even}((-\pi,\pi])}\\{\langle\psi|I-\cos Q|\psi\rangle|\langle\psi|P^{2}|\psi\rangle\leq E,\|\psi\|=1\\}.$ (21)
To calculate the function $\kappa$, we define the function
$\displaystyle\gamma(s):=$ $\displaystyle\min_{\psi\in
L^{2}((-\pi,\pi]),\|\psi\|=1}\langle\psi|I-\cos Q+sP^{2}|\psi\rangle$
$\displaystyle=$ $\displaystyle\min_{\psi\in
L^{2}((-\pi/2,\pi/2]),\|\psi\|=1}\langle\psi|I-\cos Q+sP^{2}|\psi\rangle.$
(22)
Then, $\kappa(E)$ is given by the Legendre transform of $\gamma(s)$; i.e., as shown in (CMP, Lemma 6), we have the formula
$\displaystyle\kappa(E)=\max_{s>0}\gamma(s)-sE.$ (23)
The value $\gamma(s)$ can be characterized as the minimum value of $\gamma$ for which the following differential equation has a solution in $L^{2}((-\pi/2,\pi/2])$:
$\displaystyle\frac{s}{4}\frac{d^{2}}{d\theta^{2}}\varphi(\theta)+(\gamma-1+\cos(2\theta))\varphi(\theta)=0,$
(24)
which is equivalent to
$\displaystyle\frac{d^{2}}{d\theta^{2}}\varphi(\theta)+(\frac{4(\gamma-1)}{s}+\frac{4}{s}\cos(2\theta))\varphi(\theta)=0.$
(25)
Now, we consider the Mathieu equation:
$\displaystyle\frac{d^{2}}{d\theta^{2}}\varphi(\theta)+(a-2q\cos(2\theta))\varphi(\theta)=0.$
(26)
A function $\varphi$ satisfies the above equation if and only if $\varphi$ is an eigenfunction of the differential operator $P^{2}+2q\cos(2Q)$. The operator $X(q):=P^{2}+2q\cos(2Q)$ preserves the subspace $L^{2}_{p,even}((-\frac{\pi}{2},\frac{\pi}{2}])$. We denote its minimum eigenvalue in $L^{2}_{p,even}((-\frac{\pi}{2},\frac{\pi}{2}])$ by $a_{0}(q)$, which is also the minimum eigenvalue in $L^{2}_{p}((-\frac{\pi}{2},\frac{\pi}{2}])$ (Wolf, Section 28.2). The Mathieu function $\mathop{\rm ce}_{0}(\theta,q)$ is defined as the solution of (26) with $a=a_{0}(q)$ (Wolf, Section 28.2(vi)).
Then, comparing (25) with the Mathieu equation (26), we see that (25) corresponds to $a=\frac{4(\gamma-1)}{s}$ and $q=-\frac{2}{s}$; since $a_{0}(-q)=a_{0}(q)$, the minimum admissible $\gamma$ satisfies $\frac{4(\gamma-1)}{s}=a_{0}(\frac{2}{s})$, so we have
$\displaystyle\gamma(s)=\frac{sa_{0}(\frac{2}{s})}{4}+1.$ (27)
Hence, using the formula (23), we have
$\displaystyle\kappa(E)=\max_{s>0}\frac{sa_{0}(\frac{2}{s})}{4}+1-sE.$ (28)
The minimum in (21) is attained if and only if ${\cal F}[\psi](\theta)=\mathop{\rm ce}_{0}(\frac{\theta}{2},-\frac{2}{s_{E}})$, where $s_{E}:=\mathop{\rm argmax}_{s>0}\frac{sa_{0}(\frac{2}{s})}{4}+1-sE$.
When $s\to 0$, we have the approximation:
$\displaystyle\gamma(s)\cong\sqrt{\frac{s}{2}}-\frac{s}{16}.$ (29)
Then, $\kappa(E)$ is approximated as
$\displaystyle\kappa(E)\cong\frac{1}{8E}-\frac{1}{128E^{2}}.$ (30)
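Formula (28) is straightforward to evaluate numerically. The following sketch is an illustrative assumption of this write-up (SciPy's mathieu_a for $a_{0}(q)$ and a bounded search over $\log s$), not code from the cited references:

```python
# Evaluate kappa(E) = max_{s>0} s*a0(2/s)/4 + 1 - s*E from (28), using the
# characteristic value a_0(q) of the even Mathieu function ce_0 in SciPy,
# and compare with the large-E expansion (30).
import numpy as np
from scipy.special import mathieu_a
from scipy.optimize import minimize_scalar

def kappa(E: float) -> float:
    def neg_objective(u: float) -> float:   # u = log(s) keeps s > 0
        s = np.exp(u)
        return -(s * mathieu_a(0, 2.0 / s) / 4.0 + 1.0 - s * E)
    res = minimize_scalar(neg_objective, bounds=(-12.0, 4.0), method="bounded")
    return -res.fun

for E in (2.0, 5.0, 10.0):
    print(E, kappa(E), 1 / (8 * E) - 1 / (128 * E**2))
```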
## 5 Application to uncertainty relation
Interestingly, the relation (28) can be used for the uncertainty relation
between the position and the momentum on the periodic function space
$L^{2}_{p}((-\pi,\pi])$. In this function space, the uncertainty of the
position is formulated as the uncertainty for the pair of operators $(\cos
Q,\sin Q)$ as
$\displaystyle\Delta_{\varphi}^{2}(\cos Q,\sin Q):=\Delta_{\varphi}^{2}\cos
Q+\Delta_{\varphi}^{2}\sin Q$ $\displaystyle=$
$\displaystyle\langle\varphi|\cos^{2}Q|\varphi\rangle+\langle\varphi|\sin^{2}Q|\varphi\rangle-\langle\varphi|\cos
Q|\varphi\rangle^{2}-\langle\varphi|\sin Q|\varphi\rangle^{2}$
$\displaystyle=$ $\displaystyle 1-\langle\varphi|\cos
Q|\varphi\rangle^{2}-\langle\varphi|\sin Q|\varphi\rangle^{2}.$ (31)
On the other hand, the uncertainty of the momentum is given as
$\Delta_{\varphi}^{2}P=\langle\varphi|P^{2}|\varphi\rangle-\langle\varphi|P|\varphi\rangle^{2}$.
Thus, the uncertainty relation is formulated as the trade-off between
$\Delta_{\varphi}^{2}(\cos Q,\sin Q)$ and $\Delta_{\varphi}^{2}P$. That is,
this trade-off can be formulated as the following minimization
$\displaystyle\min_{\varphi\in L_{p}^{2}((-\pi,\pi])}\\{\Delta_{\varphi}^{2}(\cos Q,\sin Q)|\Delta_{\varphi}^{2}P\leq E,\|\varphi\|=1\\}.$ (32)
Since this problem has symmetry, we can restrict our function $\varphi$ to
satisfy the conditions $\langle\varphi|\sin Q|\varphi\rangle=0$ and
$\langle\varphi|P|\varphi\rangle=0$. Then, our problem is simplified to
$\displaystyle\min_{\varphi\in L_{p}^{2}((-\pi,\pi])}\\{1-\langle\varphi|\cos Q|\varphi\rangle^{2}|\langle\varphi|P^{2}|\varphi\rangle\leq E,\|\varphi\|=1\\}$ $\displaystyle=$ $\displaystyle 1-\Big{(}\max_{\varphi\in L_{p}^{2}((-\pi,\pi])}\\{\langle\varphi|\cos Q|\varphi\rangle|\langle\varphi|P^{2}|\varphi\rangle\leq E,\|\varphi\|=1\\}\Big{)}^{2}$ $\displaystyle=$ $\displaystyle 1-(1-\kappa(E))^{2}.$ (33)
By using (28), this trade-off is solved as the following relation (CMP, Theorem 10):
$\displaystyle\min_{\varphi\in L_{p}^{2}((-\pi,\pi])}\\{\Delta_{\varphi}^{2}(\cos Q,\sin Q)|\Delta_{\varphi}^{2}P\leq E,\|\varphi\|=1\\}=\max_{s>0}1-\Big{(}sE-\frac{sa_{0}(2/s)}{4}\Big{)}^{2}.$ (34)
In addition, the minimum in (34) is attained if and only if the function $\varphi$ is given as a shift of the Mathieu function $\mathop{\rm ce}_{0}(\frac{\theta}{2},-\frac{2}{s_{E}})$. Moreover, the right-hand side of (34) is asymptotically expanded as $\frac{1}{4E}-\frac{1}{32E^{2}}$ as $E$ goes to infinity.
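As a quick consistency check of these asymptotics, note that by (33) the left-hand side of (34) equals $1-(1-\kappa(E))^{2}=2\kappa(E)-\kappa(E)^{2}$; substituting the expansion (30), $\kappa(E)\cong\frac{1}{8E}-\frac{1}{128E^{2}}$, gives
$2\kappa(E)-\kappa(E)^{2}\cong\frac{1}{4E}-\frac{1}{64E^{2}}-\frac{1}{64E^{2}}=\frac{1}{4E}-\frac{1}{32E^{2}},$
in agreement with the expansion stated above.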
## 6 Conclusion
This paper has explained several applications of special functions to phase estimation. In particular, we have addressed the prolate spheroidal wave function and the Mathieu function. Although the Mathieu function arises in phase estimation under a certain energy constraint, it also works for the estimation of an unknown unitary under a certain energy constraint when the set of unknown unitaries forms a group representation of SU(2) CMP.
Another type of energy constraint for the phase estimation problem was discussed in the reference HVK. That problem setting uses a function related to the Gamma function. In this way, special functions have various applications in quantum information. As another example of special functions in quantum information, the reference HAY studied the relation between the Askey scheme and quantum state distinguishability. It is expected that more special functions will be applied to the analysis of various types of quantum information processing.
All the presented results assume the noiseless case. While Heisenberg scaling in the noisy case was studied in HLY, the relations with special functions have not been studied in that setting. It is therefore an open problem to extend these relations to the noisy case.
###### Acknowledgements.
The author was supported in part by the National Natural Science Foundation of China (Grant No. 62171212) and the Guangdong Provincial Key Laboratory (Grant No. 2019B121203002).
## References
* (1) Holevo, A.S.: Covariant measurements and uncertainty relations. Rep. Math. Phys. 16, 385 – 400 (1979).
* (2) Holevo, A. S.: Probabilistic and Statistical Aspects of Quantum Theory. North-Holland, Amsterdam (1982). Originally published in Russian in 1980.
* (3) Hayashi, M.: Asymptotic estimation theory for a finite dimensional pure state model. Journal of Physics A: Mathematical and General 31, 4633 – 4655 (1998)
* (4) Hayashi, M.: A Group Theoretic Approach to Quantum Information, Springer (2017). (Originally published from Kyoritsu Shuppan in 2014 with Japanese.)
* (5) Chiribella, G., D’Ariano, G.M., Sacchi, M.F.: Optimal estimation of group transformations using entanglement. Phys. Rev. A 72, 042338 (2005)
* (6) Hayashi, M.: Fourier Analytic Approach to Quantum Estimation of Group Action. Communications in Mathematical Physics 347, 3 – 82 (2016).
* (7) Luis, A., Perina, J.: Optimum phase-shift estimation and the quantum description of the phase difference. Phys. Rev. A 54, 4564 (1996)
* (8) Bužek, V., Derka, R., Massar, S.: Optimal quantum clocks. Phys. Rev. Lett. 82, 2207 (1999).
* (9) Hayashi, M.: Parallel treatment of estimation of SU(2) and phase estimation. Phys. Lett. A 354(3), 183–189 (2006).
* (10) Imai, H., Hayashi, M.: Fourier analytic approach to phase estimation in quantum systems. New Journal of Physics 11, 043034 (2009).
* (11) Bagan, E., Baig, M., Munoz-Tapia, R.: Quantum reverse-engineering and reference-frame alignment without nonlocal correlations. Phys. Rev. A 70, 030301(R) (2004).
* (12) Chiribella, G., D’Ariano, G.M., Perinotti, P., Sacchi, M.F.: Efficient use of quantum resources for the transmission of a reference frame. Phys. Rev. Lett. 93, 180503 (2004).
* (13) Giovannetti,V., Lloyd, S., Maccone, L.: Quantum-enhanced measurements: beating the standard quantum limit. Science 306, 1330–1336 (2004).
* (14) Giovannetti, V., Lloyd, S., Maccone, L.: Quantum metrology. Phys. Rev. Lett. 96, 010401 (2006).
* (15) Nagata, T., Okamoto, R., O’Brien, J., Sasaki, K., Takeuchi, S.: Beating the standard quantum limit with four-entangled photons. Science 316(5825), 726 (2007).
* (16) Okamoto, R., Hofmann, H.F., Nagata, T., O’Brien, J.L., Sasaki, K., Takeuchi, S.: Beating the standard quantum limit: phase super-sensitivity of N-photon interferometers. N. J. Phys. 10, 073033 (2008).
* (17) Jones, J.A., Karlen, S.D., Fitzsimons, J., Ardavan, A., Benjamin, S.C., Briggs, G.A.D., Morton, J.J.L.: Magnetic field sensing beyond the standard quantum limit using 10-spin NOON states. Science 324, 1166–1168 (2009).
* (18) Hayashi, M., Matsumoto, K.: Statistical model with measurement degree of freedom and quantum physics. RIMS koukyuroku No 1055 (Kyoto: Kyoto University) p 96 (1998) (In Japanese); Hayashi, M., Matsumoto, K.: Asymptotic Theory of Quantum Statistical Inference. ed M Hayashi, Singapore: World Scientific, 2005, p. 162 (reprinted, English translation).
* (19) Hayashi, M.: Comparison between the Cramer-Rao and the mini-max approaches in quantum channel estimation. Commun. Math. Phys. 304(3), 689–709 (2011).
* (20) Slepian, D., Pollak, H. O.: Prolate spheroidal wave functions, Fourier analysis and uncertainty-I. Bell Syst. Tech. J. 40, 43 – 63 (1961).
* (21) Slepian, D.: Some asymptotic expansions for prolate spheroidal functions. J. Math. Phys. 44, 99–140 (1965)
* (22) Wolf, G.: Mathieu Functions and Hill's Equation (2013). Available from http://dlmf.nist.gov/28.
* (23) Hayashi, M., Vinjanampathy, S., Kwek, L. C.: Resolving unattainable Cramer–Rao bounds for quantum sensors. J. Phys. B: At. Mol. Opt. Phys. 52, 015503 (2019).
* (24) Hayashi, M., Hora, A., Yanagida, S.: Asymmetry of tensor product of asymmetric and invariant vectors arising from Schur-Weyl duality based on hypergeometric orthogonal polynomial. arXiv:2104.12635 (2021).
* (25) Hayashi, M., Liu, Z.-W., Yuan, H.: Global Heisenberg scaling in noisy and practical phase estimation, Quantum Science and Technology, 7, 025030 (2022).
# The Structure of Turbulence in Unsteady Flow over Urban Canopies
Weiyi Li and Marco G. Giometto (<EMAIL_ADDRESS>)
Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, NY 10027
###### Abstract
The topology of turbulent coherent structures is known to regulate the
transport of energy, mass, and momentum in the atmospheric boundary layer
(ABL). While previous research has primarily focused on characterizing the
structure of turbulence in stationary ABL flows, real-world scenarios
frequently deviate from stationarity, giving rise to nuanced and poorly
understood changes in the turbulence geometry and associated transport
mechanisms. This study sheds light on this problem by examining topological
changes in ABL turbulence induced by non-stationarity and their effects on
momentum transport. Results from a large-eddy simulation of pulsatile open
channel flow over an array of surface-mounted cuboids are examined. The
analysis reveals that the flow pulsation triggers a phase-dependent shear
rate, and the ejection-sweep pattern varies with the shear rate during the
pulsatile cycle. From a turbulence structure perspective, it is attributed to
the changes in the geometry of hairpin vortices. An increase (decrease) in the
shear rate intensifies (relaxes) these structures, leading to an increase
(decrease) in the frequency of ejections and an amplification (reduction) of
their percentage contribution to the total momentum flux. Furthermore, the
size of the hairpin packets undergoes variations, which depend on the geometry
of the constituting hairpin vortices, yet the packet inclination preserves its
orientation throughout the pulsatile cycle. These observations reinforce the
important role non-stationarity holds in shaping the structure of ABL
turbulence and the momentum transport mechanisms it governs.
## 1 Introduction
Coherent turbulent structures, also known as organized structures, control the
exchange of energy, mass, and momentum between the earth’s surface and the
atmosphere, as well as within engineering systems. In wall-bounded flows,
these structures have been shown to carry a substantial fraction of the mean
shear stress (Lohou et al., 2000; Katul et al., 2006), kinetic energy (Carper
& Porté-Agel, 2004; Huang et al., 2009; Dong et al., 2020), and scalar fluxes
(Li & Bou-Zeid, 2011; Wang et al., 2014; Li & Bou-Zeid, 2019). It hence comes
as no surprise that substantial efforts have been devoted to their
characterization across many fields. These structures are of practical
relevance in applications relating to biosphere-atmosphere interaction
(Raupach et al., 1986; Pan et al., 2014), air quality control (Michioka et
al., 2014), urban climate (Christen et al., 2007), oceanography (Yang & Shen,
2009), and energy harvesting (Ali et al., 2017), to name but a few.
Previous studies on coherent structures in atmospheric boundary layer (ABL)
flows have mainly focused on the roughness sublayer (RSL) and the inertial
sublayer (ISL)—the lower portions of the ABL. These layers host physical flow
phenomena regulating land-atmosphere exchanges at scales relevant to weather
models and human activities (Stull, 1988; Oke et al., 2017). The RSL, which
extends from the surface up to 2 to 5 times the average height of roughness
elements, is characterized by flow heterogeneity due to the presence of these
elements (Fernando, 2010). In the RSL, the geometry of turbulent structures is
mainly determined by the underlying surface morphology. Through field
measurements and wind tunnel data of ABL flow over vegetation canopies,
Raupach et al. (1996) demonstrated that coherent structures near the top of a
vegetation canopy are connected to inflection-point instabilities, akin to
those found in mixing layers. Expanding on the framework of this mixing-layer
analogy, Finnigan et al. (2009) employed conditional averaging techniques to
show that the prevalent eddy structure in the RSL is a head-down hairpin
vortex followed by a head-up one. This pattern is characterized by a local
pressure peak and a strong scalar front located between the hairpin pair. More
recently, Bailey & Stoll (2016) challenged this observation by proposing an
alternative two-dimensional roller structure with streamwise spacing that
scales with the characteristic length suggested by Raupach et al. (1996).
Extending the mixing-layer analogy to the urban RSL has proven challenging. In
a numerical simulation study, Coceal et al. (2007) discovered the absence of
Kelvin-Helmholtz waves, which are a characteristic of the mixing-layer
analogy, near the top of the urban canopy. This finding, corroborated by
observations from Huq et al. (2007), suggests that the mixing-layer analogy is
not applicable to urban canopy flows. Instead, the RSL of urban canopy flows
is influenced by two length scales; the first is dictated by the size of
individual roughness elements such as buildings and trees, and the second by
the imprint of large-scale motions above the RSL. The coexistence of these two
length scales can be observed through two-point correlation maps (Castro et
al., 2006; Reynolds & Castro, 2008) and velocity spectra (Basley et al.,
2019). However, when the urban canopy has a significant aspect ratio between
the building height $h$ and width $w$, such as $h/w>4$, the momentum transport
in the RSL is dominated by mixing-layer-type eddies, as shown by Zhang et al.
(2022).
The ISL, located above the RSL, is the geophysical equivalent of the
celebrated law-of-the-wall region in high Reynolds number turbulent boundary
layer (TBL) flows. In the absence of thermal stratification effects, mean flow
in the ISL displays a logarithmic profile, and the momentum flux remains
approximately constant with height (Stull, 1988; Marusic et al., 2013;
Klewicki et al., 2014). Surface morphology has been shown to impact ISL
turbulence under certain flow conditions, and this remains a topic of active
research. Volino et al. (2007) highlighted the similarity of coherent
structures in the log region of TBL flow over smooth and three-dimensional
rough surfaces via a comparison of velocity spectra and two-point correlations
of the fluctuating velocity and swirl. Findings therein support Townsend’s
similarity hypothesis (Townsend, 1976), which states that turbulence dynamics
beyond the RSL do not depend on surface morphological features, except via
their role in setting the length and velocity scales for the outer flow
region. The said structural similarity between TBL flows over different
surfaces was later confirmed by Wu & Christensen (2007) and Coceal et al.
(2007), where a highly irregular rough surface and an urban-like roughness
were considered, respectively. However, Volino et al. (2011) later reported
pronounced signatures of surface roughness on flow structures beyond the RSL
in a TBL flow over two-dimensional bars. Similar observations were also made
in a TBL flow over a surface characterized by cross-stream heterogeneity
(Anderson et al., 2015a), thus questioning the validity of Townsend’s
similarity hypothesis. To reconcile these contrasting observations, Squire et
al. (2017) argued that structural similarity in the ISL is contingent on the
surface roughness features not producing flow patterns significantly larger
than their own size. If the surface-induced flow patterns are larger than
their own size, then they may control flow coherence in the ISL. For example,
cross-stream heterogeneous rough surfaces can induce secondary circulations as
large as the boundary-layer thickness, which profoundly modify momentum
transport and flow coherence in the ISL (Barros & Christensen, 2014; Anderson
et al., 2015a).
Although coherent structures in cases with significant surface-induced flow
patterns necessitate case-specific analyses, researchers have extensively
worked towards characterizing the topology of turbulence in cases that exhibit
ISL structural similarity. These analyses have inspired scaling laws (Meneveau
& Marusic, 2013; Yang et al., 2016; Hu et al., 2023) and the construction of
statistical models (Perry & Chong, 1982) for TBL turbulence. In this context,
the hairpin vortex packet paradigm has emerged as the predominant geometrical
model (Christensen & Adrian, 2001; Tomkins & Adrian, 2003; Adrian, 2007). The
origins of this model can be traced back to the pioneering work of Theodorsen
(1952), who hypothesized that inclined hairpin or horseshoe-shaped vortices
were the fundamental elements of TBL turbulence. This idea was later supported
by flow visualizations from laboratory experiments (Bandyopadhyay, 1980; Head
& Bandyopadhyay, 1981; Smith et al., 1991) and high-fidelity numerical
simulations (Moin & Kim, 1982, 1985; Kim & Moin, 1986). In addition to
providing evidence for the existence of hairpin vortices, Head & Bandyopadhyay
(1981) also proposed that these vortices occur in groups, with their heads
describing an envelope inclined at $15^{\circ}–20^{\circ}$ with respect to the
wall. Adrian et al. (2000) expanded on this idea, and introduced the hairpin
vortex packet paradigm, which posits that hairpin vortices are closely aligned
in a quasi-streamwise direction, forming hairpin vortex packets with a
characteristic inclination angle of $15^{\circ}–20^{\circ}$. Nested between
the legs of these hairpins are low-momentum regions, which extend
approximately 2–3 times the boundary layer thickness in the streamwise
direction. These low-momentum regions are typically referred to as large-scale
motions (Smits et al., 2011). Flow visualization studies by Hommema & Adrian
(2003) and Hutchins et al. (2012) further revealed that ABL structures in the
ISL are also organized in a similar manner.
Of relevance for this work is that previous studies on coherent structures
have predominantly focused on (quasi-)stationary flow conditions. However,
stationarity is of rare occurrence in both ABL and engineering flow systems
(Mahrt & Bou-Zeid, 2020; Lozano-durán et al., 2020). As discussed in the
recent review paper by Mahrt & Bou-Zeid (2020), there are two major drivers of
non-stationarity in the ABL. The first involves temporal variations of surface
heat flux, typically associated with evening transitions or the passage of
individual clouds (Grimsdell & Angevine, 2002). The second kind corresponds to
time variations of the horizontal pressure gradient driving the flow, which
can be induced by modes associated with propagating submeso-scale motions,
mesoscale disturbances, and synoptic fronts (Monti et al., 2002; Mahrt, 2014;
Cava et al., 2017). Previous studies have demonstrated that non-stationarity
significantly affects flow statistics in the ABL, and can result in deviations
from equilibrium turbulence. Hicks et al. (2018) reported that during morning
and late afternoon transitions, the rapid change in surface heat flux disrupts
the equilibrium turbulence relations. Additionally, several observational
studies by Mahrt et al. (Mahrt, 2007, 2008; Mahrt et al., 2013) demonstrated
that time variations in the driving pressure gradient can enhance momentum
transport under stable atmospheric stratifications. Non-stationarity is also
expected to impact the geometry of turbulence in the ABL, but this problem has
not received much attention thus far. This study contributes to addressing
this knowledge gap by investigating the impact of non-stationarity of the
second kind on the topology of coherent structures in ABL turbulence and how
it affects the mechanisms controlling momentum transport. The study focuses on
flow over urban-like roughness subjected to a time-varying pressure gradient.
To represent flow unsteadiness, a pulsatile pressure gradient with a constant
average and a sinusoidal oscillating component is selected as a prototype. In
addition to its practical implications in areas such as wave-current boundary
layers, internal-wave-induced flows, and arterial blood flows, this flow
regime facilitates the analysis of coherent structures, owing to the periodic
nature of flow statistics.
Pulsatile flows share some similarities with oscillatory flows, i.e., flow
driven by a time-periodic pressure gradient with zero mean. Interestingly, in
the context of oscillatory flows, several studies have been devoted to the
characterization of coherent structures. For instance, Costamagna et al.
(2003) and Salon et al. (2007) carried out numerical studies on transitional and fully turbulent oscillatory flow over smooth surfaces, and observed that
streaky structures form at the end of the acceleration phases, then distort,
intertwine, and eventually break into small vortices. Carstensen et al. (2010)
performed a series of laboratory experiments on transitional oscillatory flow,
and identified two other major coherent structures, namely, cross-stream
vortex tubes, which are the direct consequences of inflectional-point shear
layer instability, and turbulent spots, which result from the destruction of
near-wall streaky structures akin to those in stationary flows. Carstensen et al.
(2012) observed turbulent spots in oscillatory flows over sand-grain
roughness, suggesting that the presence of such flow structures is independent
of surface types, and it was later highlighted by Mazzuoli & Vittori (2019)
that the mechanism responsible for the turbulent spot generation is similar
over both smooth and rough surfaces. Although the primary modes of variability
in oscillatory flows are relatively well understood, the same cannot be said
for pulsatile flows. A notable study by Zhang & Simons (2019) on wave-current
boundary layers, a form of pulsatile flow, revealed phase variations in the
spacing of streaks during the wave cycle. However, a detailed analysis of this
particular behavior is still lacking.
To investigate the structure of turbulence in current-dominated pulsatile flow
over surfaces in fully-rough aerodynamic flow regimes, we conduct a wall-
modeled large-eddy simulation (LES) of flow over an array of surface-mounted
cuboids. This study builds on the findings of a companion study (Li & Giometto, 2023), which focuses on the time evolution of flow statistics in pulsatile flow over a similar surface. By contrasting findings against a corresponding
stationary flow simulation, this study addresses these specific questions: (i)
Does flow unsteadiness alter the topology of coherent structures in a time-
averaged sense? (ii) How does the geometry of coherent structures evolve
throughout the pulsation cycle? (iii) What is the effect of such modifications
on the mechanisms governing momentum transfer in the ABL? Answering these
questions will achieve a twofold research objective: first, contributing to a
better understanding of coherent patterns in pulsatile flow over complex
geometries, and second, shedding light on how these patterns regulate momentum
transfer.
This paper is organized as follows. Section 2 outlines the numerical procedure
and the simulation setup. First- and second-order statistics are presented and
discussed in §3.1. Section 3.2 focuses on a quadrant analysis, whereas §3.3
and §3.4 interpret the flow field in terms of two-point correlations and
instantaneous flow behavior. Further insight on the time evolution of
turbulence topology is proposed in §3.5 via conditional averaging. Concluding
remarks are given in §4.
## 2 Methodology
### 2.1 Numerical procedure
Simulations are carried out via an in-house LES algorithm (Albertson &
Parlange, 1999a, b; Giometto et al., 2016). The LES algorithm solves the
spatially-filtered momentum and mass conservation equations, namely,
$\frac{\partial u_{i}}{\partial t}+u_{j}\left(\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial u_{j}}{\partial x_{i}}\right)=-\frac{1}{\rho}\frac{\partial P}{\partial x_{i}}-\frac{\partial\tau_{ij}}{\partial x_{j}}-\frac{1}{\rho}\frac{\partial P_{\infty}}{\partial x_{1}}\delta_{i1}+F_{i}$ (1)
$\frac{\partial u_{i}}{\partial x_{i}}=0$ (2)
where $(u_{1},u_{2},u_{3})$ represent the filtered velocities along the
streamwise $x_{1}$, cross-stream $x_{2}$, and wall-normal $x_{3}$ directions,
respectively. The rotational form of the convective term is used to ensure
kinetic energy conservation in the discrete sense in the inviscid limit
(Orszag & Pao, 1975). $\tau_{ij}$ is the deviatoric part of the kinematic
subgrid-scale (SGS) stress tensor, parameterized via the Lagrangian scale-
dependent dynamic (LASD) Smagorinsky model (Bou-Zeid et al., 2005). The flow
is assumed to be in the fully rough aerodynamic regime, and viscous stresses
are not considered. $P=p+\rho\frac{1}{3}\tau_{ii}+\rho\frac{1}{2}u_{i}u_{i}$
is a modified pressure, which accounts for the trace of SGS stress and
resolved turbulent kinetic energy, and $\rho$ is a constant fluid density. The
flow is driven by a spatially uniform, pulsatile pressure gradient in the
$x_{1}$ direction, namely ${\partial P_{\infty}}/{\partial x_{1}}=-\rho
f_{m}\left[1+\alpha_{p}\sin(\omega t)\right]$, where the $f_{m}$ parameter
controls the magnitude of the temporally averaged pressure gradient,
$\alpha_{p}$ controls the forcing amplitude, and $\omega$ the forcing
frequency. $\delta_{ij}$ in (1) denotes the Kronecker delta tensor.
Periodic boundary conditions apply in the wall-parallel directions, and a
free-slip boundary condition is imposed at the top of the computational
domain. The lower surface consists of an array of uniformly distributed
cuboids, which are explicitly resolved via a discrete forcing immersed
boundary method (IBM) (Mittal & Iaccarino, 2005). The IBM approach makes use
of an artificial force $F_{i}$ to impose the no-slip boundary condition at the
solid-fluid interfaces. Additionally, it utilizes an algebraic equilibrium
wall-layer model to evaluate surface stresses (Piomelli, 2008; Bose & Park,
2018). The algorithm has been extensively validated for the simulation of flow
in urban environments (see, e.g., Tseng et al., 2006; Chester et al., 2007;
Giometto et al., 2016).
Spatial derivatives in the wall-parallel directions are computed via a pseudo-
spectral collocation method based on truncated Fourier expansions (Orszag,
1970), whereas a second-order staggered finite-difference scheme is employed in the wall-normal direction. Since aliasing errors are known to be detrimental for pseudo-spectral discretizations (Margairaz et al., 2018), nonlinear convective terms are de-aliased exactly via the $3/2$ rule (Canuto et
al., 2007). The time integration is performed via a second-order Adams-
Bashforth scheme, and the incompressibility condition is enforced via a
fractional step method (Kim & Moin, 1985).
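As an illustration of the $3/2$ rule mentioned above, the following one-dimensional numpy sketch computes a de-aliased pointwise product of two periodic fields. This is the textbook construction (Canuto et al., 2007), not the solver's actual implementation.

```python
import numpy as np

def dealiased_product(u_hat, v_hat):
    """De-aliased Fourier coefficients of the product u*v via the 3/2 rule.

    u_hat, v_hat: spectra in numpy.fft.fft layout, length N (N even).
    """
    N = u_hat.size
    M = 3 * N // 2                                # refined grid size
    def pad(a):
        # keep modes 0..N/2-1, insert zeros, keep the top N/2 (negative) modes
        return np.concatenate([a[: N // 2], np.zeros(M - N, complex), a[N // 2 :]])
    # multiply on the refined grid in physical space
    w_pad = np.fft.fft(np.fft.ifft(pad(u_hat)) * np.fft.ifft(pad(v_hat)))
    # truncate back to N modes; the factor M/N accounts for the grid change
    return np.concatenate([w_pad[: N // 2], w_pad[M - N // 2 :]]) * (M / N)

# Example: de-aliased product of two random periodic fields
N = 64
u, v = np.random.randn(N), np.random.randn(N)
w_hat = dealiased_product(np.fft.fft(u), np.fft.fft(v))
```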
### 2.2 Simulation setup
Figure 1: Side and planar view of the computational domain (a,b respectively).
The red dashed line denotes the repeating unit.
Two LESs of flow over an array of surface-mounted cubes are carried out. The
two simulations differ only in the pressure forcing term: one is characterized
by a pressure gradient that is constant in space and time (CP hereafter), and
the other by a pressure gradient that is constant in space and pulsatile in
time (PP).
The computational domain for both simulations is sketched in figure 1. The
size of the box is $[0,L_{1}]\times[0,L_{2}]\times[0,H]$ with $L_{1}=72h$,
$L_{2}=24h$ and $H=8h$, where $h$ denotes the height of cubes. Cubes are
organized in an in-line arrangement with planar and frontal area fractions set
to $\lambda_{p}=\lambda_{f}=0.\overline{1}$. The relatively high packing
density along with the chosen scale separation $H/h=8$ support the existence
of a well-developed ISL and healthy coherent structures in the considered flow
system (Coceal et al., 2007; Castro, 2007; Zhang et al., 2022). In terms of
horizontal extent, $L_{1}/H$ and $L_{2}/H$ are larger than those from previous
works focusing on coherent structures above aerodynamically rough surfaces
(Coceal et al., 2007; Xie et al., 2008; Leonardi & Castro, 2010; Anderson et
al., 2015b) and are sufficient to accommodate large-scale motions (Balakumar &
Adrian, 2007). An aerodynamic roughness length $z_{0}=10^{-4}h$ is prescribed
at the cube surfaces and the ground via the algebraic wall-layer model,
resulting in negligible SGS drag contributions to the total surface drag (Yang
& Meneveau, 2016). The computational domain is discretized using a uniform
Cartesian grid of $N_{1}\times N_{2}\times N_{3}=576\times 192\times 128$, so
each cube is resolved via $8\times 8\times 16$ grid points. Such a grid
resolution yields flow statistics that are only weakly sensitive to further refinement
in both statistically stationary and pulsatile flows at the considered
oscillation frequency (Tseng et al., 2006; Li & Giometto, 2023).
For the PP case, the forcing frequency is set to $\omega T_{h}=\pi/8$, where
$T_{h}=h/u_{\tau}$ is the averaged turnover time of characteristic eddies of
the urban canopy layer (UCL) and ${u}_{\tau}=\sqrt{f_{m}H}$ the friction
velocity. This frequency selection is based on both practical and theoretical
considerations. Realistic ranges for the friction velocity and UCL height are
$0.1\leq{u}_{\tau}\leq 0.5\ \rm{m/s}$ and $3\leq h\leq 30\ \rm{m}$ (Stull,
1988). Using such values, the chosen frequency corresponds to a time period
$96\leq T\leq 4800\ \rm{s}$, where $T=2\pi/\omega=16T_{h}$. This range of time
scales pertains to sub-mesoscale motions (Mahrt, 2009; Hoover et al., 2015),
which, as outlined in §1, are a major driver of atmospheric pressure gradient
variability. From a theoretical perspective, this frequency is expected to
yield substantial modifications of coherent structures within the ISL. The
chosen frequency results in a Stokes layer thickness $\delta_{s}=5h$,
encompassing both the RSL and the ISL. Within the Stokes layer, turbulence
generation and momentum transport undergo significant modifications during the
pulsation cycle, as demonstrated in Li & Giometto (2023). Moreover, the
oscillation period $T$ is comparable to the average lifespan of eddies in the
ISL of the considered flow system, as elaborated below. Coceal et al. (2007)
showed that, in flow over rough surfaces, the characteristic length scale of
ISL eddies ($\ell$) is bounded below by $h$, thus yielding $\min{(\ell)}\sim
h$. Based on Townsend’s attached-eddy hypothesis, $\ell\sim x_{3}$, which
results in $\max{(\ell)}\sim H$. The time scale associated with ISL eddies is
$T_{\ell}=\ell/u_{\tau}$, so that $\min{(T_{\ell})}\sim h/u_{\tau}=T_{h}$ and
$\max{(T_{\ell})}\sim H/u_{\tau}=T_{H}$. The modest scale separation
characterizing our setup ($H=8h$) yields a modest separation of time scales in
the ISL; noting that $T=16T_{h}=2T_{H}$, i.e., $T\approx T_{H}$ in an order-of-magnitude sense, one can conclude that the time
scale of ISL eddies is comparable to $T$. With $T_{\ell}\approx T$, flow
pulsation will considerably modify the structure of ISL turbulence and drive
the flow out of equilibrium conditions. This is because changes in the imposed
pressure gradient occur at a rate that enables turbulent eddies to respond.
This behavior can be contrasted to two limiting cases: with $T_{\ell}\gg T$,
turbulence is unable to respond to the rapid changes in the environment and is
advected in a “frozen” state, i.e., it does not undergo topological changes.
With $T_{\ell}\ll T$, ISL eddies do not “live” long enough to sense changes in
the environment, and maintain a quasi-steady state throughout the pulsatile
cycle. The forcing amplitude is set to $\alpha_{p}=12$ for the PP case; this amplitude is large enough to induce
significant changes in the coherent structures with the varying pressure
gradient while avoiding mean flow reversals.
Both simulations are initialized with velocity fields from a stationary flow
case and integrated over $400T_{H}$, corresponding to $200$ pulsatile cycles
for the PP case. Here $T_{H}=H/{u}_{\tau}$ refers to the turnover time of the
largest eddies in the domain. The time step ($\delta t$) is set to ensure
$\max{(CFL)}=u_{max}\delta t/\delta\approx 0.05$, where CFL denotes the
Courant-Friedrichs-Lewy number, $u_{max}$ is the maximum velocity
magnitude at any point in the domain during the simulation, and $\delta$ is
the smallest grid stencil in the three coordinate directions. The initial
$20T_{H}$ are discarded for both the CP and PP cases (transient period for the
PP case), which correspond to about 10 oscillation periods, after which
instantaneous snapshots of velocities and pressure are saved to disk every
$0.025T_{H}$ ($1/80$ of the pulsatile cycle).
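A minimal sketch of the CFL-based time-step selection described above (names are illustrative):

```python
import numpy as np

def time_step(u1, u2, u3, d1, d2, d3, cfl_target=0.05):
    """Time step delta_t from max(CFL) = u_max * delta_t / delta ~ cfl_target.

    u_max is the maximum velocity magnitude over the domain and delta the
    smallest grid spacing, as defined in the text.
    """
    u_max = np.sqrt(u1**2 + u2**2 + u3**2).max()
    delta = min(d1, d2, d3)
    return cfl_target * delta / u_max
```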
### 2.3 Notation and terminology
For the PP case, $\overline{(\cdot)}$ denotes an ensemble averaging operation,
performed over the phase dimension and over repeating surface units (see
figure 1), i.e.,
$\overline{\theta}(x_{1},x_{2},x_{3},t)=\frac{1}{N_{p}n_{1}n_{2}}\sum^{N_{p}}_{n=1}\sum^{n_{1}}_{i=1}\sum^{n_{2}}_{j=1}\theta(x_{1}+il_{1},x_{2}+jl_{2},x_{3},t+nT)\ ,\quad 0\leq x_{1}\leq l_{1},\quad 0\leq x_{2}\leq l_{2},\quad 0\leq t\leq T\ ,$ (3)
where $\theta$ is a given scalar field, $n_{1}$ and $n_{2}$ are the numbers of repeating units in the streamwise and cross-stream directions, $l_{1}$ and $l_{2}$ are the corresponding dimensions of a repeating unit, and $N_{p}$ is the number of sampled pulsatile cycles.
Using the usual Reynolds decomposition, one can write
$\theta(x_{1},x_{2},x_{3},t)=\overline{\theta}(x_{1},x_{2},x_{3},t)+\theta^{\prime}(x_{1},x_{2},x_{3},t)\ ,$ (4)
where $(\cdot)^{\prime}$ denotes a fluctuation from the ensemble average. For
the CP case, $\overline{(\cdot)}$ denotes a quantity averaged over time and
repeating units. An ensemble averaged quantity can be further decomposed into
an intrinsic spatial average and a deviation from the intrinsic average
(Schmid et al., 2019), i.e.,
$\overline{\theta}(x_{1},x_{2},x_{3},t)=\langle\overline{\theta}\rangle(x_{3},t)+\overline{\theta}^{\prime\prime}(x_{1},x_{2},x_{3},t)\
.$ (5)
Note that, for each $x_{3}$, the intrinsic averaging operation is taken over a
thin horizontal “slab” $V_{f}$ of fluid, characterized by a thickness
$\delta_{3}$ in the wall-normal ($x_{3}$) direction, namely,
$\langle\overline{\theta}\rangle(x_{3},t)=\frac{1}{V_{f}}\int_{x_{3}-\delta_{3}/2}^{x_{3}+\delta_{3}/2}\int_{0}^{l_{2}}\int_{0}^{l_{1}}\overline{\theta}(x_{1},x_{2},x_{3},t)dx_{1}dx_{2}dx_{3}\
.$ (6)
Further, any phase-averaged quantity from the PP case consists of a longtime-
averaged component and an oscillatory component with a zero mean, which will
be hereafter denoted via the subscripts $l$ and $o$, respectively, i.e.,
$\overline{\theta}(x_{1},x_{2},x_{3},t)=\overline{\theta}_{l}(x_{1},x_{2},x_{3})+\overline{\theta}_{o}(x_{1},x_{2},x_{3},t)$
(7)
and
$\langle\overline{\theta}\rangle(x_{3},t)=\langle\overline{\theta}\rangle_{l}(x_{3})+\langle\overline{\theta}\rangle_{o}(x_{3},t)\
.$ (8)
As for the CP case, the longtime and ensemble averages are used
interchangeably due to the lack of an oscillatory component. In the following,
the longtime-averaged quantities from the PP case are contrasted against their
counterparts from the CP case to highlight the impact of flow unsteadiness on
flow characteristics in a longtime average sense. Oscillatory and phase-
averaged quantities are analyzed to shed light on the phase-dependent features
of the PP case.
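For reference, the averaging operators of eqs. (3)-(8) can be sketched in Python as follows. The array layout (cycles $\times$ phases $\times$ grid) and the boolean fluid mask used for the intrinsic average are assumptions made for illustration.

```python
import numpy as np

def ensemble_average(theta, n1, n2):
    """Phase/repeating-unit ensemble average, eq. (3).

    theta: snapshots of shape (N_p, N_phase, n1*m1, n2*m2, N3), where
    (m1, m2) is the number of grid points per repeating surface unit.
    """
    N_p, N_phase, N1, N2, N3 = theta.shape
    m1, m2 = N1 // n1, N2 // n2
    tiles = theta.reshape(N_p, N_phase, n1, m1, n2, m2, N3)
    return tiles.mean(axis=(0, 2, 4))            # shape (N_phase, m1, m2, N3)

def fluctuation(theta, theta_bar, n1, n2):
    """Deviation from the ensemble average, eq. (4)."""
    tiled = np.tile(theta_bar, (1, n1, n2, 1))   # replicate over units
    return theta - tiled[None, ...]              # broadcast over cycles

def intrinsic_average(theta_bar, fluid_mask):
    """Intrinsic (fluid-only) horizontal average, eqs. (5)-(6).

    fluid_mask: boolean (m1, m2, N3) array, False inside the cuboids.
    """
    w = fluid_mask.astype(float)
    return (theta_bar * w).sum(axis=(1, 2)) / w.sum(axis=(0, 1))

def longtime_oscillatory(theta_bar):
    """Longtime and oscillatory components, eq. (7)."""
    theta_l = theta_bar.mean(axis=0)             # average over phases
    return theta_l, theta_bar - theta_l
```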
## 3 Results
### 3.1 Overview of flow statistics
Li & Giometto (2023) have proposed a detailed analysis of pulsatile flow over
an array of surface-mounted cuboids, discussing the impact of varying forcing
amplitude and frequency on selected flow statistics. Here, we revisit and expand upon those findings for the chosen oscillation frequency and amplitude that are relevant to this work.
Figure 2: (a) Longtime-averaged shear stresses from the PP (black) and CP
(red) cases. Resolved Reynolds shear stress
$\langle\overline{u^{\prime}_{1}u^{\prime}_{3}}\rangle_{l}$, solid lines;
dispersive shear stress
$\langle\overline{u}^{\prime\prime}_{1}\overline{u}^{\prime\prime}_{3}\rangle_{l}$, dashed lines.
(b) Longtime-averaged turbulent and wake kinetic energy from the PP (black)
and CP (red) cases. Resolved turbulent kinetic energy
$k_{l}=\langle\overline{u^{\prime}_{i}u^{\prime}_{i}}\rangle_{l}/2$, solid
lines; wake kinetic energy
$k_{w,l}=\langle\overline{u}^{\prime\prime}_{i}\overline{u}^{\prime\prime}_{i}\rangle_{l}/2$,
dashed lines. Dashed-dotted horizontal lines denote the upper bound of the RSL
$(x_{3}^{R})$.
Figure 2(a) presents the wall-normal distributions of the longtime-averaged
resolved Reynolds shear stress
$\langle\overline{u^{\prime}_{1}u^{\prime}_{3}}\rangle_{l}$ and dispersive
shear stress
$\langle\overline{u}^{\prime\prime}_{1}\overline{u}^{\prime\prime}_{3}\rangle_{l}$.
Note that SGS components contribute less than $1\%$ to the total Reynolds
stresses and are hence not discussed. From the figure, it is apparent that
flow unsteadiness does not noticeably affect the
$\langle\overline{u^{\prime}_{1}u^{\prime}_{3}}\rangle_{l}$ profile, with
local variations from the statistically stationary scenario being within a
$3\%$ margin. On the contrary, flow pulsation within the UCL leads to
pronounced increases in
$\langle\overline{u}^{\prime\prime}_{1}\overline{u}^{\prime\prime}_{3}\rangle_{l}$,
with local surges reaching up to a fivefold increase. However, despite this
increase, the dispersive flux remains a modest contributor to the total
momentum flux in the UCL. Figure 2(b) displays the longtime-averaged resolved
turbulent kinetic energy
$k_{l}=\langle\overline{u^{\prime}_{i}u^{\prime}_{i}}\rangle_{l}/2$ and wake
kinetic energy
$k_{w,l}=\langle\overline{u}^{\prime\prime}_{i}\overline{u}^{\prime\prime}_{i}\rangle_{l}/2$.
Both $k_{l}$ and $k_{w,l}$ from the PP case feature modest ($<5\%$) local
departures from their CP counterparts, highlighting a weak dependence of both
longtime-averaged turbulent and wake kinetic energy on flow unsteadiness.
Also, the RSL thicknesses $(x_{3}^{R})$ for the CP and PP cases are depicted
in figure 2. Following the approach by Pokrajac et al. (2007), $x_{3}^{R}$ is
estimated by thresholding the spatial standard deviation of the longtime-
averaged streamwise velocity normalized by its intrinsic average, namely,
$\sigma=\frac{\sqrt{\langle(\overline{u}_{1,l}-\langle\overline{u}_{1}\rangle_{l})^{2}\rangle}}{\langle\overline{u}_{1}\rangle_{l}}\
,$ (9)
where the threshold is taken as 1%. An alternative method to evaluate
$x_{3}^{R}$ involves using phase-averaged statistics instead of longtime-
averaged ones in (9). Although not shown, such a method yields similar
predictions (with a discrepancy of less than $5\%$). Both
$\langle\overline{u}^{\prime\prime}_{1}\overline{u}^{\prime\prime}_{3}\rangle_{l}$
and $k_{w,l}$ reduce to less than $1\%$ of their peak value above $x_{3}^{R}$.
From figure 2, one can readily observe that flow unsteadiness yields a modest
increase in the extent of the RSL, with an estimated $x_{3}^{R}$ not exceeding
$1.5h$ in both cases. We will hence assume $x_{3}^{R}=1.5h$ hereafter. As
discussed in §1, RSL and ISL feature distinct coherent structures.
Specifically, the structures in the RSL are expected to show strong imprints
of roughness elements, whereas those in the ISL should, in principle, be
independent of surface morphology (Coceal et al., 2007).
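A sketch of this thresholding procedure, assuming a boolean fluid mask to implement the intrinsic average:

```python
import numpy as np

def rsl_height(u1_l, fluid_mask, x3, threshold=0.01):
    """RSL top x3^R: lowest level where sigma of eq. (9) drops below 1%.

    u1_l      : longtime-averaged streamwise velocity, shape (m1, m2, N3)
    fluid_mask: boolean array of the same shape, False inside the cuboids
    x3        : wall-normal coordinates, shape (N3,)
    """
    w = fluid_mask.astype(float)
    n_fluid = w.sum(axis=(0, 1))
    u_mean = (u1_l * w).sum(axis=(0, 1)) / n_fluid           # <u1>_l(x3)
    u_var = (w * (u1_l - u_mean) ** 2).sum(axis=(0, 1)) / n_fluid
    sigma = np.sqrt(u_var) / u_mean                          # eq. (9)
    idx = np.nonzero(sigma < threshold)[0]
    return x3[idx[0]] if idx.size else np.nan
```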
Figure 3: Space-time diagrams of (a) oscillatory shear rate
${\partial\langle\overline{u}_{1}\rangle_{o}}/{\partial x_{3}}$, (b)
oscillatory resolved Reynolds shear stress
$\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}$ and (c)
oscillatory resolved turbulent kinetic energy
$k_{o}=\langle\overline{u^{\prime}_{i}u^{\prime}_{i}}\rangle_{o}/2$ from the
PP case. Results are normalized by $u_{\tau}$ and $h$. Horizontal dashed lines
highlight the top of the UCL.
The response of selected first- and second-order flow statistics to flow
unsteadiness is depicted in figure 3. In figure 3(a), an oscillating wave is
evident in the oscillatory shear rate
$\partial\langle\overline{u}_{1}\rangle_{o}/\partial x_{3}$. This wave,
generated at the canopy top due to flow unsteadiness, exhibits a phase lag of
$\pi/2$ relative to the pulsatile pressure forcing. Such a wave propagates in
the positive vertical direction while being attenuated and diffused by
turbulent mixing. It is noteworthy that the propagation speed of the
oscillating shear rate is to a good degree constant, as suggested by the
constant tilting angle along the $x_{3}$ direction of the
${\partial\langle\overline{u}_{1}\rangle_{o}}/{\partial x_{3}}$ contours. As
apparent from figure 3(b,c), the space-time diagrams of the oscillatory
resolved Reynolds shear stress
$\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}$ and oscillatory
resolved turbulent kinetic energy
$k_{o}=\langle\overline{u^{\prime}_{i}u^{\prime}_{i}}\rangle_{o}/2$ are also
characterized by decaying waves traveling away from the RSL at constant rates.
The speeds of these waves are similar to that of the corresponding oscillating
shear rate, which can be again inferred by the identical tilting angles in the
contours. There is clearly a causal relation for this behavior: Above the UCL,
the major contributors of shear production terms in the budget equations of
$\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}$ and $k_{o}$ are
$\langle\overline{\mathcal{P}}\rangle_{13,o}=-2\langle\overline{u_{3}^{\prime}u_{3}^{\prime}}\rangle_{l}\frac{\partial\langle\overline{u}_{1}\rangle_{o}}{\partial x_{3}}-2\langle\overline{u_{3}^{\prime}u_{3}^{\prime}}\rangle_{o}\frac{\partial\langle\overline{u}_{1}\rangle_{l}}{\partial x_{3}}$ (10)
and
$\langle\overline{\mathcal{P}}\rangle_{k,o}=-\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{l}\frac{\partial\langle\overline{u}_{1}\rangle_{o}}{\partial x_{3}}-\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}\frac{\partial\langle\overline{u}_{1}\rangle_{l}}{\partial x_{3}}\ ,$ (11)
respectively. As the oscillating shear rate travels upwards away from the UCL,
it interacts with the local turbulence by modulating
$\langle\overline{\mathcal{P}}\rangle_{13,o}$ and
$\langle\overline{\mathcal{P}}\rangle_{k,o}$, ultimately yielding the observed
oscillations in resolved Reynolds stresses. On the other hand, no pulsatile-
forcing-related terms appear in the budget equations of resolved Reynolds
stresses. This indicates that it is the oscillating shear rate induced by the pulsatile forcing, rather than the pressure forcing itself, that modifies the turbulence production above the UCL. A similar point about pulsatile flows was
made in Scotti & Piomelli (2001), where it was stated that “[…]in the former
[pulsatile flow] it is the shear generated at the wall that affects the flow.”
It is worth noting that such a study was, however, based on pulsatile flow
over smooth surfaces and at a relatively low Reynolds number.
In addition, a visual comparison of the contours of
${\partial\langle\overline{u}_{1}\rangle_{o}}/{\partial x_{3}}$ and
$-\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}$ highlights the
presence of a phase lag between such quantities throughout the flow field.
Further examination of this phase lag can be found in Li & Giometto (2023).
During the pulsatile cycle, the turbulence is hence not in equilibrium with
the mean flow. This is the case despite the fact that neither the pulsatile
forcing nor the induced oscillating shear wave significantly alters the
longtime averaged turbulence intensity, as evidenced in figure 2. To gain
further insight into this behavior, the next section examines the structure of
turbulence under this non-equilibrium condition.
### 3.2 Quadrant analysis
The discussion will first address the impact of flow pulsation on the $u_{1}^{\prime}u_{3}^{\prime}$ quadrants, with emphasis on the ISL. This
statistical analysis enables the quantification of contributions from
different coherent motions to turbulent momentum transport. The quadrant
analysis technique was first introduced by Wallace et al. (1972), and has
thereafter been routinely employed to characterize the structure of turbulence
across a range of flow systems (Wallace, 2016). The approach maps velocity
fluctuations to one of four types of coherent motions (quadrants) in the
$u_{1}^{\prime}-u_{3}^{\prime}$ phase space, namely,
$\begin{cases}Q1:&u_{1}^{\prime}>0,\ u_{3}^{\prime}>0\ ,\\ Q2:&u_{1}^{\prime}<0,\ u_{3}^{\prime}>0\ ,\\ Q3:&u_{1}^{\prime}<0,\ u_{3}^{\prime}<0\ ,\\ Q4:&u_{1}^{\prime}>0,\ u_{3}^{\prime}<0\ .\end{cases}$ (12)
Q2 and Q4 are typically referred to as ejections and sweeps, respectively.
They are the main contributors to the Reynolds shear stress, and constitute
the majority of the events in boundary layer flows. Ejections are associated
with the lift-up of low-momentum fluid by vortex induction between the legs of
hairpin structures, whereas sweeps correspond to the down-draft of the high-
momentum fluid (Adrian et al., 2000). Q1 and Q3 denote outward and inward
interactions, and play less important roles in transporting momentum when
compared to Q2 and Q4. Coceal et al. (2007) and Finnigan (2000) showed that
the RSL of stationary flows is dominated by ejections in terms of the number
of events, but the overall Reynolds stress contribution from sweep events
exceeds that of ejections. This trend reverses in the ISL. This behavior is
indeed apparent from figure 4, where ejection and sweep profiles are shown for
the CP case (red lines).
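A minimal sketch of the quadrant decomposition of eq. (12); the ejection-to-sweep ratios used in the following ($\gamma_{\\#}$ and $\gamma_{c}$) follow directly from its output.

```python
import numpy as np

def quadrant_analysis(u1p, u3p):
    """Event fractions and u1'u3' contributions per quadrant, eq. (12)."""
    quadrants = {
        "Q1": (u1p > 0) & (u3p > 0),   # outward interactions
        "Q2": (u1p < 0) & (u3p > 0),   # ejections
        "Q3": (u1p < 0) & (u3p < 0),   # inward interactions
        "Q4": (u1p > 0) & (u3p < 0),   # sweeps
    }
    uw = u1p * u3p
    counts = {q: m.mean() for q, m in quadrants.items()}     # event fractions
    contrib = {q: uw[m].sum() / uw.sum() for q, m in quadrants.items()}
    return counts, contrib

# counts, contrib = quadrant_analysis(u1p, u3p)
# gamma_number = counts["Q2"] / counts["Q4"]    # ejections/sweeps by count
# gamma_c = contrib["Q2"] / contrib["Q4"]       # ejections/sweeps by stress
```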
Figure 4: (a) Relative contribution to
$\overline{u_{1}^{\prime}u_{3}^{\prime}}$ by events in each quadrant summed
over the wall-parallel planes and the whole sampling time period and (b)
relative number of events in each quadrant from the PP case (black) and CP
(red) as a function of $x_{3}$. Crosses: outward interactions; triangles: ejections; diamonds: inward interactions; circles: sweeps.
We first examine the overall frequency of events in each quadrant and the
contribution of each quadrant to the resolved Reynolds shear stress. For the
considered cases, the contribution to
$\overline{u_{1}^{\prime}u_{3}^{\prime}}$ and the number of the events of each
quadrant are summed over different wall-parallel planes and over the whole
sampling time period (i.e., these are longtime-averaged quantities). Results
from this operation are also shown in figure 4. What emerges from this
analysis is that flow pulsation does not significantly alter the relative
contribution and frequency of each quadrant. Some discrepancies between CP and
PP profiles can be observed immediately above the UCL, but they amount to less than 4% at any given height.
Figure 5: (a) Ratio of the number of ejections to that of sweeps
($\gamma_{\\#}$) from the PP case on a streamwise/wall-normal plane. (b)
Location of the selected streamwise/wall-normal plane (red dashed line) within
a repeating unit. (c) $\gamma_{\\#}$ from the CP case on the same plane. Black
dashed lines denote $x_{3}/h=1.5$, which is the upper limit of the RSL.
A more interesting picture of the flow field emerges if we consider the phase-
dependent behavior of ejections and sweeps. Hereafter the ratio between the
numbers of ejections and sweeps is denoted by $\gamma_{\\#}$, and the ratio of
their contribution to $\overline{u_{1}^{\prime}u_{3}^{\prime}}$ by
$\gamma_{c}$. As outlined in the previous section, turbulent fluctuations are
defined as deviations from the local ensemble average. Consequently, both the
frequency of occurrences and the contribution to
$\overline{u_{1}^{\prime}u_{3}^{\prime}}$ from each quadrant are influenced by
two main factors: the relative position to the cube within the repeating unit
and the phase in the PP case. This dual dependency extends to $\gamma_{\\#}$
and $\gamma_{c}$ as well. Conversely, in the CP case, $\gamma_{\\#}$ and
$\gamma_{c}$ are only functions of the spatial location relative to the cube.
Figure 5(a,c) present $\gamma_{\\#}$ up to $x_{3}/h=2$ at a selected
streamwise/wall-normal plane for the PP and CP cases, respectively. The chosen
plane cuts through the center of a cube in the repeating unit, as shown in
5(b). In the cavity, the ejection-sweep pattern from the PP case is found to
be qualitatively similar to its CP counterpart throughout the pulsatile cycle
(compare subplots (a,c) in figure 5). Specifically, a preponderance of sweeps
characterizes a narrow region in the leeward side of the cube (the streamwise
extent of this region is $\lessapprox 0.3h$), whereas ejections dominate in
the remainder of the cavity. As also apparent from figure 5(a), the streamwise
extent of the sweep-dominated region features a modest increase (decrease)
during the acceleration (deceleration) time period. During the acceleration
phase, the shown above canopy region $(h<x_{3}<2h)$ transitions from an
ejection-dominated flow regime to a sweep-dominated one, and vice versa as the
flow decelerates. This transition initiates just above the cavity,
characterized by a higher occurrence of sweeps during the acceleration phase
and a predominance of ejections in the deceleration period. This continues
until both phenomena are distributed throughout the RSL. While not discussed
in this work, it is worth noting that the trend observed for $\gamma_{c}$ is
precisely the inverse.
Figure 6: (a) - (c): Intrinsic-averaged ratio of contributions to
$\overline{u_{1}^{\prime}u_{3}^{\prime}}$ from ejections and sweeps
($\langle\gamma_{c}\rangle$); (d) - (f): intrinsic-averaged ratio of ejections
to sweeps ($\langle\gamma_{\\#}\rangle$); (g) - (i): intrinsic and phase-
averaged shear rate ${\partial\langle\overline{u}_{1}\rangle}/{\partial
x_{3}}$ from the PP case at three wall-normal locations within the ISL (a,d,g)
$x_{3}/h=2$, (b,e,h) $x_{3}/h=3$ and (c,f,i) $x_{3}/h=4$ as a function of
phase. Black dashed lines denote longtime-averaged values, whereas solid red
lines represent corresponding quantities from the CP case.
Figure 7: (a) $\langle\gamma_{c}\rangle$ and (b) $\langle\gamma_{\\#}\rangle$
versus ${\partial\langle\overline{u}_{1}\rangle}/{\partial x_{3}}$ at
$x_{3}/h=2$ (blue), $x_{3}/h=3$ (green) and $x_{3}/h=4$ (magenta).
Shifting the attention to the ejection-sweep pattern in the ISL, which is
indeed the main focus of this study, figure 6 shows the intrinsic average of
$\gamma_{c}$ and $\gamma_{\\#}$ in the $x_{3}/h=\\{2,3,4\\}$ planes. These
quantities are hereafter denoted as $\langle\gamma_{c}\rangle$ and
$\langle\gamma_{\\#}\rangle$, respectively. The use of
$\langle\gamma_{c}\rangle$ and $\langle\gamma_{\\#}\rangle$ instead of
$\gamma_{c}$ and $\gamma_{\\#}$ to characterize the ejection-sweep pattern in
the ISL can be justified by the fact that the spatial variations in
$\gamma_{\\#}$ and $\gamma_{c}$ in the wall-parallel directions vanish rapidly
above the RSL, as apparent from figure 5. This is in line with the
observations of Kanda et al. (2004) and Castro et al. (2006) that the spatial
variations in $\gamma_{\\#}$ and $\gamma_{c}$ are concentrated in the RSL for
stationary flow over urban canopy. Further, as shown in figure 6, the
ejection-sweep pattern varies substantially during the pulsatile cycle. For
instance, at a relative height of $x_{3}/h=2$, even though the contribution
from ejections to $\overline{u_{1}^{\prime}u_{3}^{\prime}}$ dominates in a
longtime average sense ($\langle\gamma_{c}\rangle_{l}>1$), sweep contributions prevail for $\omega t\in[0,\pi/2]$. Interestingly, at a given
wall-normal location, this ejection-sweep pattern appears to be directly
controlled by the intrinsic and phase-averaged shear rate
${\partial\langle\overline{u}_{1}\rangle}/{\partial x_{3}}$. This is
particularly evident when $\langle\gamma_{c}\rangle$ and
$\langle\gamma_{\\#}\rangle$ are plotted against
${\partial\langle\overline{u}_{1}\rangle}/{\partial x_{3}}$ (refer to figure
7). As ${\partial\langle\overline{u}_{1}\rangle}/{\partial x_{3}}$ increases
at a given $x_{3}$, the corresponding $\langle\gamma_{c}\rangle$ increases
whereas $\langle\gamma_{\\#}\rangle$ decreases, highlighting the presence of
fewer but stronger ejection events. Maxima and minima of
$\langle\gamma_{c}\rangle$ and $\langle\gamma_{\\#}\rangle$ approximately
coincide with the maxima of
${\partial\langle\overline{u}_{1}\rangle}/{\partial x_{3}}$. This observation
is consistent across the considered planes. As discussed in the next sections,
such behavior can be attributed to time variations in the geometry of ISL
structures.
### 3.3 Spatial and temporal flow coherence
To gain a better understanding of the extent and organization of coherent
structures in the ISL, this section analyzes two-point velocity
autocorrelation maps. These flow statistics provide information on the
correlation of the flow field in space, making them an effective tool for
describing spatial flow coherence (Dennis & Nickels, 2011; Guala et al.,
2012). For the PP case, the phase-dependent two-point correlation coefficient
tensor $\overline{R}_{ij}$ can be defined as
$\overline{R}_{ij}(\Delta_{1},\Delta_{2},x_{3},x_{3}^{*},t)=\frac{\langle\overline{u_{i}^{\prime}(x_{1},x_{2},x_{3}^{*},t)u_{j}^{\prime}(x_{1}+\Delta_{1},x_{2}+\Delta_{2},x_{3},t)}\rangle}{\sqrt{\langle\overline{u_{i}^{\prime}u_{i}^{\prime}}\rangle(x_{3}^{*},t)\langle\overline{u_{j}^{\prime}u_{j}^{\prime}}\rangle(x_{3},t)}}\ ,$ (13)
where $\Delta_{i}$ is the separation in the wall-parallel directions,
$x_{3}^{*}$ represents a reference wall-normal location, and $t$ denotes the
phase. In the CP case, the flow is statistically stationary, and therefore
$\overline{R}_{ij}$ is not a function of $t$, i.e.,
$\overline{R}_{ij}=\overline{R}_{ij,l}$.
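For illustration, the sketch below evaluates $\overline{R}_{11}$ between two wall-parallel planes for a single snapshot and phase, approximating the ensemble average of eq. (13) by an average over the periodic wall-parallel plane (a reasonable shortcut above the RSL, where wall-parallel variations of the statistics are weak).

```python
import numpy as np

def two_point_R11(u1p, k_ref, k):
    """R11(Delta1, Delta2) between planes x3* (index k_ref) and x3 (index k).

    u1p: fluctuating streamwise velocity, shape (N1, N2, N3), periodic in
    the wall-parallel directions.
    """
    a = u1p[:, :, k_ref]
    b = u1p[:, :, k]
    # circular cross-correlation via FFTs: mean over x of a(x) * b(x + Delta)
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real / a.size
    corr /= np.sqrt((a**2).mean() * (b**2).mean())    # normalize by the rms
    return np.fft.fftshift(corr)                      # zero separation at center
```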
Figure 8: Longtime-averaged two-point correlation coefficient tensor
$\overline{R}_{11,l}$ at (a) $x_{3}^{*}/h=1.5$, (b) $x_{3}^{*}/h=2$, (c)
$x_{3}^{*}/h=3$, and (d) $x_{3}^{*}/h=4$. Black lines correspond to the PP
case, and red lines to the CP one. $\overline{R}_{11,l}=0.6$ and
$\overline{R}_{11,l}=0.3$ are denoted by solid lines, and dashed lines
represent $\overline{R}_{11,l}=0$.
Figure 9: Time evolution of (a) the cross-stream streak width normalized by
$h$ and (b) $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$. The
cross-stream width is identified as the first zero crossing of $\overline{R}_{11}$ in the cross-stream direction.
Figure 8 compares $\overline{R}_{11,l}$ for the PP and CP cases over the
$x_{3}^{*}/h=\\{1.5,2,3,4\\}$ planes. In both cases, $\overline{R}_{11,l}$
features an alternating sign in the cross-stream direction, signaling the
presence of low- and high-momentum streaks flanking each other in the cross-
stream direction. The cross-stream extent of longtime-averaged streaks can be
identified as the first zero-crossing of the $\overline{R}_{11,l}$ contour in
the $\Delta_{2}$ direction. Based on this definition, figure 8 shows that flow
unsteadiness has a modest impact on such a quantity. This finding agrees with
observations from Zhang & Simons (2019) for pulsatile flow over smooth
surfaces. Further, although not shown, the streamwise and cross-stream extent
of streaks increase linearly with $x_{3}$, suggesting that Townsend’s attached-
eddy hypothesis is valid in a longtime average sense (Marusic & Monty, 2019).
Turning the attention to the phase-averaged flow field, figure 9 shows the
time variation of the cross-stream streak extent, which is identified as the first zero crossing of $\overline{R}_{11}$ in the cross-stream
direction. The linear $x_{3}$-scaling of the streak width breaks down in a
phase-averaged sense. Such a quantity indeed varies substantially during the
pulsatile cycle, diminishing in magnitude as
${\partial\langle\overline{u}_{1}\rangle}/{\partial x_{3}}$ increases
throughout the boundary layer. Interestingly, when
${\partial\langle\overline{u}_{1}\rangle}/{\partial x_{3}}$ reaches its
maximum at $\omega t\approx\pi$ and $x_{3}/h\approx 1.5$, the cross-stream
extent of streaks approaches zero, suggesting that streaks may not be a
persistent feature of pulsatile boundary layer flows.
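The streak width used in figure 9(a) can be extracted with a simple zero-crossing search; a sketch, assuming $\overline{R}_{11}$ has been sampled along non-negative cross-stream separations:

```python
import numpy as np

def streak_width(R11_line, d2):
    """Cross-stream streak extent: first zero crossing of R11(Delta2).

    R11_line: R11 sampled at separations Delta2 = 0, d2, 2*d2, ...
    Returns the linearly interpolated location of the first sign change.
    """
    crossings = np.nonzero(R11_line[:-1] * R11_line[1:] < 0)[0]
    if crossings.size == 0:
        return np.nan                       # no zero crossing found
    i = crossings[0]
    frac = R11_line[i] / (R11_line[i] - R11_line[i + 1])
    return (i + frac) * d2
```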
Figure 10: $\overline{R}_{11,l}$ in the streamwise/wall-normal plane of the PP
(black) and CP (red) cases. Results correspond to four reference wall-normal
locations: (a) $x_{3}^{*}/h=1.5$, (b) $x_{3}^{*}/h=2$, (c) $x_{3}^{*}/h=3$,
and (d) $x_{3}^{*}/h=4$. Contour levels (solid lines) range from $0.2$ to
$0.5$ with increments of $0.1$. Dashed lines denote the locus of the maximum
correlation at each streamwise location. The slopes of the dashed lines
represent the tilting angles of the structures.
To further quantify topological changes induced by flow pulsation, we
hereafter examine variations in the streamwise and wall-normal extent of
coherent structures. Such quantities will be identified via the
$\overline{R}_{11}=0.3$ contour, in line with the approach used by Krogstad &
Antonia (1994). Note that the choice of the $\overline{R}_{11}$ threshold for
such a task is somewhat subjective, and several different values have been
used in previous studies to achieve this same objective, including
$\overline{R}_{11}=0.4$ (Takimoto et al., 2013) and $\overline{R}_{11}=0.5$
(Volino et al., 2007; Guala et al., 2012). In this study, the exact threshold
is inconsequential as it does not impact the conclusions. Figure 10 presents
$\overline{R}_{11,l}$ contours in the streamwise/wall-normal plane for
$x_{3}^{*}/h=\\{1.5,2,3,4\\}$. The jagged lines at $x_{3}/h\approx 1$ (the top
of the UCL) bear the signature of roughness elements. The dashed lines passing
through $x_{3}^{*}$ identify the locus of the maxima in $\overline{R}_{11,l}$
at each streamwise location. The inclination angle of such lines can be used
as a surrogate for the longtime-averaged tilting angle of the coherent
structure (Chauhan et al., 2013; Salesky & Anderson, 2020). It is clearly
observed that at each reference wall-normal location, the tilting angle of
longtime-averaged structures is similar between the PP and CP cases. The tilting angle in both cases decreases monotonically and slowly from $15^{\circ}$ at $x_{3}^{*}/h=1.5$ to $10^{\circ}$ at $x_{3}^{*}/h=4$, a behavior that is in excellent agreement with results from Coceal et al.
(2007), even though a different urban canopy layout was used therein. Further,
the identified tilting angle is also similar to the one inferred from real-
world ABL observations in Hutchins et al. (2012) and Chauhan et al. (2013). On
the other hand, longtime-averaged coherent structures in the PP case are
smaller than in the CP case in both the streamwise and wall-normal
coordinate directions. Discrepancies become more apparent with increasing
$x_{3}^{*}$. Specifically, the difference in the streamwise extent of the
longtime-averaged structure from the two cases increases from $2\%$ at
$x_{3}^{*}/h=1.5$ to $15\%$ at $x_{3}^{*}/h=4$. Corresponding variations in
the wall-normal extent are $2\%$ and $4\%$.
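A sketch of how the tilting angle can be obtained from a streamwise/wall-normal correlation map, fitting a line through the locus of maximum correlation; the least-squares fit and the retention threshold are assumptions made for illustration.

```python
import numpy as np

def tilting_angle(R11_map, delta1, x3, rmin=0.3):
    """Tilting angle (deg) from the locus of maximum correlation.

    R11_map: correlation in a streamwise/wall-normal plane, shape (N1, N3),
             with streamwise separations delta1 (shape (N1,)) on axis 0.
    """
    k_peak = R11_map.argmax(axis=1)       # wall-normal index of the peak
    keep = R11_map.max(axis=1) > rmin     # retain well-correlated offsets
    slope = np.polyfit(delta1[keep], x3[k_peak[keep]], 1)[0]
    return np.degrees(np.arctan(slope))
```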
Figure 11: Time evolution of $\overline{R}_{11}=0.3$ in the streamwise/wall-
normal plane. Line colors denote the contours corresponding to different
$x_{3}^{*}$ planes: $x_{3}^{*}/h=1.5$ (black), $x_{3}^{*}/h=2$ (blue),
$x_{3}^{*}/h=3$ (green), and $x_{3}^{*}/h=4$ (magenta). Dots highlight the
location of the reference plane.
Figure 12: The locus of the maximum $\overline{R}_{11}$ at four phases:
$\omega t=0$ (solid lines), $\omega t=\pi/2$ (dashed lines), $\omega t=\pi$
(dashed dotted lines), and $\omega t=3\pi/2$ (dotted lines). Line colors
denote different reference elevations: $x_{3}^{*}/h=1.5$ (black),
$x_{3}^{*}/h=2$ (blue), $x_{3}^{*}/h=3$ (green), and $x_{3}^{*}/h=4$
(magenta).
More insight into the mechanisms underpinning the observed behavior can be
gained by examining the time evolution of such structures for the PP case in
figure 11. When taken together with figure 9(b), it becomes clear that both
the streamwise and the wall-normal extents of the coherent structures tend to
reduce with increasing local $\partial\langle\overline{u}_{1}\rangle/\partial
x_{3}$. Compared to the streamwise extent, the wall-normal extent of the
coherent structure is more sensitive to changes in
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$. For example, at
$x_{3}^{*}/h=4$, we observe an overall $15\%$ variation in the wall-normal
extent of the coherent structure during a pulsation cycle, whereas the
corresponding variation in streamwise extent is $8\%$. Further, the flow field
at the considered heights appears to be more correlated with the flow in the
UCL for small $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$, thus
highlighting a stronger coupling between flow regions in the wall-normal
direction. Interestingly, the tilting angle of the coherent structure remains
constant during the pulsatile cycle, as shown in figure 12.
Next, we will show that the hairpin vortex packet paradigm (Adrian, 2007) can
be used to provide an interpretation for these findings. Note that alternative
paradigms, such as that proposed by Del Alamo et al. (2006), may offer
different interpretations of the results, but are not discussed in this work.
The validity of such a paradigm is supported by a vast body of evidence from
laboratory experiments of canonical TBL (Adrian et al., 2000; Christensen &
Adrian, 2001; Dennis & Nickels, 2011) to ABL field measurements (Hommema &
Adrian, 2003; Morris et al., 2007) and numerical simulations (Lee et al.,
2011; Eitel-Amor et al., 2015). This formulation assumes that the dominant ISL
structures are hairpin vortex packets, consisting of a sequence of hairpin
vortices organized in a quasi-streamwise direction with a characteristic
inclination angle relative to the wall. These structures encapsulate the low-
momentum regions, also known as “streaks.” The structural information obtained
from the two-point correlation has been considered to reflect the averaged
morphology of the hairpin vortex packets (Zhou et al., 1999;
Ganapathisubramani et al., 2005; Volino et al., 2007; Hutchins et al., 2012;
Guala et al., 2012). Specifically, in this study, the observed changes in
$\overline{R}_{11,l}$ between the CP and PP cases and of $\overline{R}_{11}$
contours during the pulsatile cycle reflect corresponding changes in the
geometry of vortex packets in a longtime- and phase-averaged sense. That is,
as $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ increases, the
phase-averaged size of vortex packets is expected to shrink, and, in the
longtime-averaged sense, the vortex packets are smaller than their
counterparts in the CP case. However, upon inspection of $\overline{R}_{11}$
in figure 11, it is unclear whether the observed change in packet size is
attributable to variations in the constituent hairpin vortices or the tendency
for packets to break into smaller ones under high
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ and merge into larger
ones under low $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$. To
answer this question, we will next examine the instantaneous turbulence
structures and extract characteristic hairpin vortices through conditional
averaging. Also, the constant tilting angle of the structure evidenced in
figure 12 during the pulsatile cycle indicates that, regardless of how vortex packets break up and reorganize and how individual hairpin vortices deform in response to the time-varying shear rate, the hairpins within a packet remain aligned at the same angle throughout the cycle.
### 3.4 Instantaneous flow structure
Figure 13: (a,b): Instantaneous fluctuating streamwise velocity
$u_{1}^{\prime}$ normalized by ${u}_{\tau}$ at $x_{3}=2h$; (c,d): wall-normal
swirl strength $\lambda_{s,3}$ of the PP case at $x_{3}=2h$. (a,c): $\omega
t=\pi/2$; (b,d): $\omega t=\pi$. Shaded regions in (c,d) highlight the low-
momentum ($u_{1}^{\prime}<0$) regions. The instantaneous flow fields
correspond to the same pulsatile cycle. Green solid lines highlight the
background location of the cuboids.
Figure 14: Instantaneous fluctuating streamwise velocity $u_{1}^{\prime}$ in a
streamwise/wall-normal plane during a pulsatile cycle. Black dashed lines
denote the $12^{\circ}$ structural tilting angle of the coherent structure.
Green solid lines represent the canopy layer top.
Figure 13(a,b) show the instantaneous fluctuating streamwise velocity
$u_{1}^{\prime}$ at $x_{3}/h=1.5$ from the PP case. The chosen phases, $\omega
t=\pi/2$ and $\omega t=\pi$, correspond to the local minimum and maximum of
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$, respectively (see
figure 6g). Streak patterns can be observed during both phases. As shown in
figure 13(a), at low $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$
values, instantaneous $u_{1}^{\prime}$ structures intertwine with neighboring
ones, and form large streaks with a cross-stream extent of about $5h$.
Conversely, when $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ is
large, the streaks shrink into smaller structures, which have a cross-
stream extent of about $h$. This behavior is consistent with the observations
we made based on figure 9.
Further insight into the instantaneous flow field can be gained by considering
the low-pass filtered wall-normal swirl strength $\lambda_{s,3}$, shown in
figures 13(c,d). The definition of the signed planar swirl strength
$\lambda_{s,i}$ is based on the studies of Stanislas et al. (2008) and Elsinga
et al. (2012). The magnitude of $\lambda_{s,i}$ is the absolute value of the
imaginary part of the eigenvalue of the reduced velocity gradient tensor
$J_{jk}$, which is
$J_{jk}=\begin{bmatrix}{\partial u_{j}}/{\partial x_{j}}&{\partial u_{j}}/{\partial x_{k}}\\ {\partial u_{k}}/{\partial x_{j}}&{\partial u_{k}}/{\partial x_{k}}\end{bmatrix},\quad i\neq j\neq k\ ,$ (14)
with no summation over repeated indices. The sign of $\lambda_{s,i}$ is
determined by the vorticity component $\omega_{i}$. Positive and negative
$\lambda_{s,i}$ highlight regions with counterclockwise and clockwise swirling
motions, respectively. To eliminate the noise from the small-scale vortices,
we have followed Tomkins & Adrian (2003) and low-pass filtered the $\lambda_{s,i}$ field with a compact top-hat filter of support $h$ to better
identify instantaneous hairpin features. As apparent from this figure, low-
momentum regions are bordered by pairs of oppositely signed $\lambda_{s,3}$
regions at both the considered phases; these counter-rotating rolls are a
signature of hairpin legs. Based on these signatures, it is also apparent that
hairpin vortices tend to align in the streamwise direction. Comparing subplots
(c,d) in figure 13, it is clear that, as
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ increases, the
swirling strength of the hairpin’s legs is intensified, which in turn
increases the momentum deficits in the low-momentum regions between the
hairpin legs. This behavior leads to a narrowing of low-momentum regions to
satisfy continuity constraints. Also, it is apparent that a larger number of
hairpin structures populates the flow field at a higher
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$, which can be
attributed to hairpin vortices spawning offspring in both the upstream and
downstream directions as they intensify (Zhou et al., 1999).
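A sketch of the signed planar swirl-strength computation of eq. (14); the low-pass filtering step is omitted, and the cyclic index convention is an implementation choice.

```python
import numpy as np

def planar_swirl(du, i):
    """Signed planar swirl strength lambda_{s,i}, eq. (14).

    du: velocity gradient field with du[..., j, k] = du_j/dx_k.
    i : index (0, 1, or 2) of the plane-normal direction.
    """
    j, k = (i + 1) % 3, (i + 2) % 3             # cyclic in-plane indices
    a, b = du[..., j, j], du[..., j, k]
    c, d = du[..., k, j], du[..., k, k]
    # eigenvalues of [[a, b], [c, d]] are complex iff (a - d)^2 + 4*b*c < 0,
    # with imaginary part 0.5 * sqrt(-((a - d)^2 + 4*b*c))
    disc = (a - d) ** 2 + 4.0 * b * c
    lam = np.where(disc < 0.0, 0.5 * np.sqrt(np.maximum(-disc, 0.0)), 0.0)
    omega_i = du[..., k, j] - du[..., j, k]     # vorticity component i
    return np.sign(omega_i) * lam
```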
Figure 14 displays a $u_{1}^{\prime}$ contour for the PP case in a
streamwise/wall-normal plane. Black dashed lines feature a tilting angle
$\theta=12^{\circ}$. It is evident that the interfaces of the low- and high-
momentum regions, which are representative instantaneous manifestations of
hairpin packets (Hutchins et al., 2012), feature a constant tilting angle
during the pulsatile cycle. This behavior is in agreement with findings from
the earlier $\overline{R}_{11}$ analysis, which identified the typical tilting
angle of coherent structures as lying between $10^{\circ}$ and $15^{\circ}$,
depending on the reference wall-normal location. We close this section by
noting that while the instantaneous flow field provides solid qualitative
insight into the structure of turbulence for the considered flow field, a more
statistically representative picture can be gained by conditionally averaging
the flow field on selected instantaneous events. This will be the focus of the
next section.
### 3.5 Temporal variability of the composite hairpin vortex
This section aims at providing more quantitative insights into the temporal
variability of the individual hairpin structures, and elucidating how
variations in their geometry influence the ejection-sweep pattern (§3.2) and
the spatio-temporal coherence of the flow field (§3.3). To study the phase-
dependent structural characteristics of the hairpin vortex, we utilize the
conditional averaging technique (Blackwelder, 1977). This technique involves
selecting a flow event at a specific spatial location to condition the
averaging process in time and/or space. The conditionally-averaged flow field
is then analyzed using standard flow visualization techniques to identify the
key features of the eddies involved. By applying this technique to the hairpin
vortex, we can gain valuable insights into its structural attributes and how
they vary over time.
In the past few decades, various events have been employed as triggers for the
conditional averaging operation. For example, in the context of channel flow
over aerodynamically smooth surfaces, Zhou et al. (1999) relied on an ejection
event as the trigger, which generally coincides with the passage of a hairpin
head through that point. More recently, Dennis & Nickels (2011) considered
both positive cross-stream and streamwise swirl as triggers, which are
indicative of hairpin heads and legs, respectively. In flow over homogeneous
vegetation canopies, Watanabe (2004) used a scalar microfront associated with
a sweep event. Shortly after, Finnigan et al. (2009) noted that this choice
might introduce a bias towards sweep events in the resulting structure and
instead used transient peaks in the static pressure, which are associated with
both ejection and sweep events.
Here, we adopt the approach first suggested by Coceal et al. (2007), where the
local minimum streamwise velocity over a given plane was used as the trigger.
It can be shown that this approach yields results similar to those of the approach proposed
in Dennis & Nickels (2011) and that it is suitable for the identification of
hairpin vortices in the ISL. The conditional averaging procedure used in this
study is based on the following operations:
1. Firstly, at a chosen $x_{3}^{e}$, we identify the set of locations $(x_{1}^{e},x_{2}^{e})$ where the instantaneous streamwise velocity is $75\%$ below its phase-averaged value. This is our “triggering event.” Such an operation is repeated for each available velocity snapshot.
2. Next, for each identified event, the fluctuating velocity field at the selected $x_{3}^{e}$ plane is shifted by $(-x_{1}^{e},-x_{2}^{e})$. After this operation, all identified events are located at $(x_{1}^{\prime},x_{2}^{\prime})=(0,0)$, where $(x_{1}^{\prime},x_{2}^{\prime})$ is the new (translated) coordinate system.
3. Lastly, the shifted instantaneous velocity fields are averaged over the identified events and snapshots, for each phase.
The end result is a phase-dependent, conditionally-averaged velocity field
that can be used for further analysis.
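A minimal sketch of steps 1-3 for a single phase bin; the array shapes and the reconstruction of the instantaneous velocity are assumptions, and periodicity in the wall-parallel directions is exploited to recenter the events.

```python
import numpy as np

def conditional_average(u_fluc, u1_phase_mean, k_e, deficit=0.75):
    """Event-centered average of the fluctuating velocity (one phase bin).

    u_fluc       : fluctuations, shape (N_snap, 3, N1, N2, N3)
    u1_phase_mean: phase-averaged u1 on the event plane, shape (N1, N2)
    k_e          : wall-normal index of the event plane x3^e
    An event is a point where the instantaneous u1 (fluctuation plus phase
    average) is 75% below the phase-averaged value, i.e. u1 < 0.25 * <u1>.
    """
    acc = np.zeros_like(u_fluc[0])
    n_events = 0
    for snap in u_fluc:
        u1_inst = snap[0, :, :, k_e] + u1_phase_mean
        for i1, i2 in np.argwhere(u1_inst < (1.0 - deficit) * u1_phase_mean):
            # recenter the event at (x1', x2') = (0, 0) using periodicity
            acc += np.roll(snap, shift=(-i1, -i2), axis=(1, 2))
            n_events += 1
    return acc / max(n_events, 1)
```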
Figure 15: Vector plot of the conditionally averaged fluctuating velocity (PP
case) over the $x_{3}/h=2$ wall-parallel plane. The flow has been conditioned
on a local minimum streamwise velocity event in the same plane. Color contours
represent the wall-normal swirling strength $\lambda_{s,3}$. Green dots
identify the cores of the counter-rotating vortices.
Figure 16: Spacing between the composite vortex pair cores $d_{\omega}$,
corresponding to local minimum streamwise velocity events at $x_{3}^{e}/h=1.5$
(black lines), $x_{3}^{e}/h=2$ (blue lines), $x_{3}^{e}/h=3$ (green lines) and
$x_{3}^{e}/h=4$ (magenta lines).
Figure 15 shows a wall-parallel slice at $x_{3}/h=2$ of the conditionally
averaged fluctuating velocity field in the same plane as the triggering event.
Counter-rotating vortices associated with a low-momentum region in between
appear to be persistent features of the ISL throughout the pulsation cycle.
Vortex cores move downstream and towards each other as
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ increases, and the
vortices intensify. This behavior occurs in the normalized time interval
$\omega t\in[\pi/2,\pi]$. Instead, when
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ decreases, the cores
move upstream and further apart. This provides statistical evidence of the patterns depicted in figure 13(c,d) for the instantaneous flow field.
Note that the composite counter-rotating vortex pair in the conditionally
averaged flow field is, in fact, an ensemble average of vortex pairs in the
instantaneous flow field. Thus, the spacing between the composite vortex pair
cores ($d_{\omega}$) represents a suitable metric to quantify the phase-
averaged widths of vortex packets in the considered flow system. Figure 16
presents $d_{\omega}$ evaluated with the triggering event at
$x_{3}^{e}/h=\\{1.5,2,3,4\\}$. The trend in $d_{\omega}$ is similar to that
observed in figure 9(a) for the first zero crossing of $\overline{R}_{11}$,
which is an indicator of the streak width. The explanation for this behavior
is that low-momentum regions are generated between the legs of the hairpins,
justifying the observed linear scaling of the streak width with the cross-
stream spacing of hairpin legs.
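As a side note, the first-zero-crossing metric referenced above is straightforward to extract from a sampled correlation profile. The following is a generic sketch of our own, under the assumption that $\overline{R}_{11}$ is available on a uniform cross-stream grid; it is not tied to the post-processing used here.

```python
import numpy as np

def first_zero_crossing(r11, dx2):
    """Separation at which a sampled correlation profile r11 first changes sign,
    linearly interpolated between the two bracketing grid points (spacing dx2)."""
    idx = np.where(np.diff(np.sign(r11)) != 0)[0]
    if idx.size == 0:
        return np.nan                          # no zero crossing in the window
    i = idx[0]
    frac = r11[i] / (r11[i] - r11[i + 1])      # linear interpolation weight
    return (i + frac) * dx2
```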
Figure 17: Time evolution of the conditionally averaged fluctuating velocity
field of the PP case in the streamwise/wall-normal plane $x_{2}^{*}/h=0$ given
a local minimum streamwise velocity event at $x_{3}^{e}/h=2$. Color contours
represent the cross-stream swirling strength $\lambda_{s,2}$. Red and blue
lines mark the $\lambda_{s,2}=0.1$ and $\lambda_{s,2}=-0.1$ contours,
respectively.
Figures 17 and 18 depict the conditionally averaged fluctuating velocity field,
which is obtained with a triggering event at $x_{3}^{e}/h=2$, in the
$x^{\prime}_{2}=0$ plane and the $x_{1}^{\prime}=-h$ plane, respectively. Note
that the $x^{\prime}_{2}=0$ plane corresponds to the center plane, and the
$x_{1}^{\prime}=-h$ cross-section is located $h$ upstream of the triggering
event. From figure 17, a region of positive $\lambda_{s,2}$ can be identified
immediately above and downstream of the location of the triggering event, i.e.,
$(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{e})=(0,0,2h)$. This $\lambda_{s,2}>0$
region can be interpreted as the head of the composite hairpin vortex (Adrian
et al., 2000; Ganapathisubramani et al., 2003). As
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ increases, the vortex
structure is deflected downstream and $\lambda_{s,2}$ increases, leading to
enhanced upstream ejection events. This behavior is also apparent from figure
6, where the overall contribution from ejection events to
$\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle$ increases while their
number decreases, indicating stronger individual ejections. The downstream
deflection of the hairpin head results from two competing factors. The first
is the increase in $\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle$,
which deflects the head downstream. The second is the enhancement of sweep
events, which induces an upstream deflection. The first factor outweighs the
second, thus yielding the observed variations in the hairpin topology.
Figure 18: Time evolution of the conditionally averaged fluctuating velocity
field in figure 17 in a cross-stream/wall-normal plane $x^{\prime}_{1}=-h$.
Color contours represent the streamwise swirling strength $\lambda_{s,1}$. Red
and blue lines mark $\lambda_{s,1}=0.1$ and $\lambda_{s,1}=-0.1$,
respectively. Green dots identify the cores of the counter-rotating vortices.
Figure 18 shows the response of hairpin legs to changing
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ in a cross-stream
plane at $x_{1}^{\prime}=-h$. A pair of counter-rotating streamwise rollers
is readily observed, which, as explained before, identifies the legs of the
composite hairpin vortex. The figure further corroborates our analysis,
showing that the spacing between the legs decreases from $\approx 5h$ at
$\omega t=\pi/2$ to $\approx 2h$ at $\omega t=\pi$. This also supports the
findings in §3.3 and §3.4. Further, the swirling of the
hairpin legs, which is quantified with $\lambda_{s,1}$ and $\lambda_{s,3}$ in
the wall-normal/cross-stream and wall-parallel planes, respectively,
intensifies with increasing $\partial\langle\overline{u}_{1}\rangle/\partial
x_{3}$. Interestingly, when $\partial\langle\overline{u}_{1}\rangle/\partial
x_{3}$ approaches its peak value at $\omega t=\pi$, a modest albeit visible
secondary streamwise roller pair is induced by the hairpin legs at
$x_{2}^{\prime}/h=\pm 3$. This suggests that, when it intensifies, the hairpin
vortex generates new offspring not only upstream and downstream, as documented
by Zhou et al. (1999) and Adrian (2007), but also in the cross-stream
direction. The intensification of hairpin legs creates counter-rotating
quasi-streamwise roller pairs between cross-stream-adjacent hairpin vortices.
These roller pairs are lifted up by the velocity that each roller induces on
the other, in accordance with the Biot–Savart law, and the downstream ends of
the rollers then connect, forming new hairpin structures.
Figure 19: Time evolution of the conditionally averaged swirling field
$\lambda_{s}$ of the PP case given a local minimum streamwise velocity event
at $x_{3}^{e}=2h$. The shown iso-surfaces are for $\lambda_{s}=0.1$.
A more comprehensive picture is provided by iso-surfaces of the conditionally
averaged swirling magnitude $\lambda_{s}=0.1$, shown in figure 19. Here,
$\lambda_{s}$ is the imaginary part of the complex eigenvalue of the velocity
gradient tensor (Zhou et al., 1999). In this case, the conditionally averaged
swirling field corresponds to a triggering event at $x_{3}^{e}/h=2$. Zhou et
al. (1999) pointed out that different thresholds of the iso-surface result in
vortex structures of similar shapes but different sizes. $\lambda_{s}=0.1$, in
this case, strikes the best compromise between descriptive capabilities and
surface smoothness. Note that other vortex identification criteria, such as
the Q criterion (Hunt et al., 1988) and the $\lambda_{2}$ criterion (Jeong &
Hussain, 1995), are expected to result in qualitatively similar vortex
structures (Chakraborty et al., 2005).
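As an illustration, the following minimal sketch (an assumption on our part, not the authors' implementation) evaluates $\lambda_{s}$ at a single point from the $3\times 3$ velocity gradient tensor; the directional variants $\lambda_{s,1}$, $\lambda_{s,2}$ and $\lambda_{s,3}$ used in figures 15, 17 and 18 would be obtained analogously from the corresponding $2\times 2$ in-plane gradient tensors.

```python
import numpy as np

def swirling_strength(grad_u):
    """lambda_s at one point: imaginary part of the complex-conjugate eigenvalue
    pair of grad_u, with grad_u[i, j] = du_i/dx_j (Zhou et al., 1999)."""
    eigenvalues = np.linalg.eigvals(grad_u)
    return np.abs(eigenvalues.imag).max()  # zero when all eigenvalues are real (no swirl)
```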
The extents of the conditional eddy in figure 19 vary substantially from
roughly $10h\times 8h\times 5h$ at relatively low
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ ($\omega t=\pi/2$), to
$6h\times 6h\times 3h$ at high
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ ($\omega t=\pi$).
During the period of decreasing
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$, i.e., $0<\omega
t<3\pi/4$ and $\pi<\omega t<2\pi$, the conditional eddy resembles the classic
hairpin structure of the stationary case, where the two hairpin legs and the
hairpin head connecting them can be clearly observed. The sizes of
the hairpin legs increase with decreasing
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$, and so does their
spacing, which is in line with our prior observations based on figure 18. One
possible physical interpretation for the change in the size of hairpin legs is
that the reduction in swirling strength of the hairpin head resulting from a
decrease in $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ weakens
the ejection between the hairpin legs, as shown in figure 17. As a result, the
swirling strength of the legs decreases, causing an increase in their size due
to the conservation of angular momentum. Conversely, during the period of
increasing $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$
($3\pi/4<\omega t<\pi$), the hairpin structure is less pronounced. The
conditional eddy features a strengthened hairpin head, and the intensified
counter-rotating hairpin legs move closer to each other and ultimately merge
into a single region of non-zero swirling strength, as apparent from figure
19. Moreover, downstream of the conditional eddy, a pair of streamwise
protrusions, known as “tongues” (Zhou et al., 1999), persist throughout the
pulsatile cycle. According to Adrian (2007), these protrusions reflect the
early stage of the generation process of the downstream hairpin vortex. These
protrusions would eventually grow into a quasi-streamwise vortex pair and
later develop a child hairpin vortex downstream of the original one.
In summary, the proposed analysis reveals that the time-varying shear rate
resulting from the pulsatile forcing affects the topology and swirling
intensity of hairpin vortices. As the shear rate increases (decreases),
hairpin vortices tend to shrink (grow) with a corresponding enhancement
(relaxation) of the swirling strength. These variations in hairpin geometry
are responsible for the observed time-varying ejection-sweep pattern (figure
6). Ejection events primarily occur between the hairpin legs, which become
more widely spaced as the vortices grow and more closely spaced as they shrink.
Therefore, a decrease in hairpin vortex size due to an increasing shear rate
reduces the number of ejection events, while an increase in vortex size due to
the decreasing shear rate leads to an increased number of ejections. Moreover,
the intensification (relaxation) of hairpin vortices at high (low) shear rates
results in enhanced (attenuated) ejection events between the hairpin legs, as
evidenced by figures 17 and 18. This enhancement and attenuation of ejection
events is also corroborated by results from figure 6, which indicated that
high (low) shear rates decrease (increase) the number of ejection events but
increase (decrease) their contribution to
$\overline{u_{1}^{\prime}u_{3}^{\prime}}$. From a flow coherence perspective,
this physical process also explains the observed time evolution of
$\overline{R}_{11}$ (see figures 9 and 11), which is a statistical signature
of hairpin packets. Changes in the size of individual hairpin vortices in
response to the shear rate directly influence the dimensions of hairpin
packets, as the latter are composed of multiple individual hairpin structures.
## 4 Conclusions
In this study, the structure of turbulence in pulsatile flow over an array of
surface-mounted cuboids was characterized and contrasted with that in
stationary flow regimes. The objective was to elucidate the effects of non-
stationarity on turbulence topology and its implications for momentum
transfer.
Flow unsteadiness was observed to not significantly alter the longtime average
profiles of turbulent kinetic energy and resolved Reynolds shear stress, but
it marginally increased the height of the RSL. In the context of quadrant
analysis, it was found that flow unsteadiness does not noticeably alter the
overall distribution within each quadrant. However, the ejection-sweep pattern
exhibited an apparent variation during the pulsation cycle. Flow acceleration
yielded a large number of ejection events within the RSL, whereas flow
deceleration favored sweeps. In the ISL, it was shown that the ejection-sweep
pattern is mainly controlled by the intrinsic and phase-averaged shear rate
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ rather than by the
driving pressure gradient. Specifically, the relative contribution from
ejections increases, but their frequency of occurrence decreases with
increasing $\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$. The
aforementioned time variation in the ejection-sweep pattern was later found to
stem from topological variations in the structure of ISL turbulence, as
deduced from inspection of the two-point streamwise velocity correlation
function and the conditionally-averaged flow field.
Specifically, the geometry of hairpin vortex packets, which are the dominant
coherent structures in the ISL, was examined through the analysis of two-point
velocity correlation to explore its longtime-averaged and phase-dependent
characteristics. Flow unsteadiness was found to yield relatively shorter
vortex packets in a longtime average sense (up to 15% shorter). From a phase-
averaged perspective, the three-dimensional extent of hairpin packets was
found to vary during the pulsation cycle and to be primarily controlled by
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$, while their tilting
angle remained constant throughout. A visual examination of instantaneous
structures also confirmed such behavior: the size of low-momentum regions and
spacing of the hairpin legs encapsulating them were found to change with
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$, while the hairpin
vortices remained aligned at a constant angle during the pulsation cycle.
Further insight into phase variations of instantaneous hairpin structures was
later gained using conditional averaging operations, which provided compelling
quantitative evidence for the behaviors previously observed. Specifically, the
conditionally averaged flow field revealed that the size and swirling intensity
of the composite hairpin vortex vary considerably with
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$. When
$\partial\langle\overline{u}_{1}\rangle/\partial x_{3}$ increases to its peak
value, the swirling strength of the hairpin head is intensified, yielding
strengthened ejections upstream of the hairpin head and a downstream
deflection of the hairpin head. As the hairpin head intensifies, there is a
corresponding increase in the intensity of the hairpin legs, coupled with a
reduction in the spacing between them. This development accounts for the noted
decrease in the extent of the ejection-dominated region. In other words,
individual ejections become stronger and are generated at a reduced frequency
as the shear rate increases, which provides a kinematic interpretation and
justification for the observed time-variability of the quadrant distribution.
This process is reversed when the shear rate decreases.
Findings from this study emphasize the significant influence that departures
from statistically stationary flow conditions can have on the structure of ABL
turbulence and associated processes. Such departures are typical in realistic
ABL flows and have garnered growing attention in recent times (Mahrt & Bou-
Zeid, 2020). While the study focuses on a particular type of non-stationarity,
its results underscore the importance of accounting for this flow phenomenon
in both geophysical and engineering applications. The modification of
turbulence structures due to flow unsteadiness has a substantial effect on
land–atmosphere and ocean–atmosphere exchanges, as well as on the
aerodynamic drag experienced by vehicles. This underlines the necessity for
concerted efforts to fully characterize these modifications. From a modeling
perspective, empirical insights obtained from this study hold promise for
guiding the evolution of more advanced wall-layer model formulations
(Piomelli, 2008). These models are routinely used in weather and climate
forecasting, as well as in aerospace and mechanical engineering applications,
facilitating the assessment of area-aggregate exchanges between solid surfaces
and the adjacent fluid environment. A recurrent shortcoming of operational
wall-layer models lies in their reliance on assumptions of statistical
stationarity, overlooking flow unsteadiness and state-dependent turbulence
topology information (Monin & Obukhov, 1954; Skamarock et al., 2008; Piomelli,
2008). This represents an important area for improvement. Past investigations
have proposed pathways to integrate turbulence topology information into wall-
layer model predictions, leveraging parameters like the vortex packet
inclination angle and size (Marusic et al., 2001, 2010). These approaches open
a fruitful avenue for assimilating the insights derived from this study into
wall-layer model infrastructures.
Declaration of Interests. The authors report no conflict of interest.
Acknowledgements. This material is based upon work supported by, or in part
by, the Army Research Laboratory and the Army Research Office under grant
number W911NF-22-1-0178. This work used the Anvil supercomputer at Purdue
University through allocation ATM180022 from the Advanced Cyberinfrastructure
Coordination Ecosystem: Services & Support (ACCESS) program, which is
supported by National Science Foundation grants #2138259, #2138286, #2138307,
#2137603, and #2138296. The authors acknowledge the Texas Advanced Computing
Center (TACC) at The University of Texas at Austin for providing resources
that have contributed to the research results reported within this paper.
## References
* Adrian (2007) Adrian, R. J. 2007 Hairpin vortex organization in wall turbulence. Phys. Fluids 19, 041301.
* Adrian et al. (2000) Adrian, R. J., Meinhart, C. D. & Tomkins, C. D. 2000 Vortex organization in the outer region of the turbulent boundary layer. J. Fluid Mech. 422, 1–54.
* Albertson & Parlange (1999a) Albertson, J. D. & Parlange, M. B. 1999a Natural integration of scalar fluxes from complex terrain. Adv. Water Resour. 23, 239–252.
* Albertson & Parlange (1999b) Albertson, J. D. & Parlange, M. B. 1999b Surface length scales and shear stress: Implications for land-atmosphere interaction over complex terrain. Water Resour. Res. 35, 2121–2132.
* Ali et al. (2017) Ali, N., Cortina, G., Hamilton, N., Calaf, M. & Cal, R. B. 2017 Turbulence characteristics of a thermally stratified wind turbine array boundary layer via proper orthogonal decomposition. J. Fluid Mech. 828, 175–195.
* Anderson et al. (2015a) Anderson, W., Barros, J. M., Christensen, K. T. & Awasthi, A. 2015a Numerical and experimental study of mechanisms responsible for turbulent secondary flows in boundary layer flows over spanwise heterogeneous roughness. J. Fluid Mech. 768, 316–347.
* Anderson et al. (2015b) Anderson, W., Li, Q. & Bou-Zeid, E. 2015b Numerical simulation of flow over urban-like topographies and evaluation of turbulence temporal attributes. J. Turbul. 16, 809–831.
* Bailey & Stoll (2016) Bailey, B. N. & Stoll, R. 2016 The creation and evolution of coherent structures in plant canopy flows and their role in turbulent transport. J. Fluid Mech. 789, 425–460.
* Balakumar & Adrian (2007) Balakumar, B. J. & Adrian, R. J. 2007 Large- and very-large-scale motions in channel and boundary-layer flows. Philos. Trans. Royal Soc. 365, 665–681.
* Bandyopadhyay (1980) Bandyopadhyay, P. 1980 Large structure with a characteristic upstream interface in turbulent boundary layers. Phys. Fluids 23, 2326–2327.
* Barros & Christensen (2014) Barros, J. M. & Christensen, K. T. 2014 Observations of turbulent secondary flows in a rough-wall boundary layer. J. Fluid Mech. 748, R1.
* Basley et al. (2019) Basley, J., Perret, L. & Mathis, R. 2019 Structure of high reynolds number boundary layers over cube canopies. J. Fluid Mech. 870, 460–491.
* Blackwelder (1977) Blackwelder, R. 1977 On the role of phase information in conditional sampling. Phys. Fluids 20, 232–242.
* Bose & Park (2018) Bose, S. T. & Park, G. I. 2018 Wall-modeled large-eddy simulation for complex turbulent flows. Annu. Rev. Fluid Mech. 50, 535–561.
* Bou-Zeid et al. (2005) Bou-Zeid, E., Meneveau, C. & Parlange, M. B. 2005 A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows. Phys. Fluids 17, 025105.
* Canuto et al. (2007) Canuto, C., Hussaini, M. Y., Quarteroni, A. & Zang, T. A. 2007 Spectral methods: evolution to complex geometries and applications to fluid dynamics. Springer Science & Business Media.
* Carper & Porté-Agel (2004) Carper, M. A. & Porté-Agel, F. 2004 The role of coherent structures in subfilter-scale dissipation of turbulence measured in the atmospheric surface layer. J. Turbul. 5, 040.
* Carstensen et al. (2010) Carstensen, S., Sumer, B. M. & Fredsøe, J. 2010 Coherent structures in wave boundary layers. part 1. oscillatory motion. J. Fluid Mech. 646, 169–206.
* Carstensen et al. (2012) Carstensen, S., Sumer, B. M. & Fredsøe, J. 2012 A note on turbulent spots over a rough bed in wave boundary layers. Phys. Fluids 24, 115104.
* Castro (2007) Castro, I. P. 2007 Rough-wall boundary layers: mean flow universality. J. Fluid Mech. 585, 469–485.
* Castro et al. (2006) Castro, I. P., Cheng, H. & Reynolds, R. 2006 Turbulence over urban-type roughness: deductions from wind-tunnel measurements. Boundary-Layer Meteorol. 118, 109–131.
* Cava et al. (2017) Cava, D., Mortarini, L., Giostra, U., Richiardone, R. & Anfossi, D. 2017 A wavelet analysis of low-wind-speed submeso motions in a nocturnal boundary layer. Q. J. R. Meteorol. Soc. 143, 661–669.
* Chakraborty et al. (2005) Chakraborty, P., Balachandar, S. & Adrian, R. J. 2005 On the relationships between local vortex identification schemes. J. Fluid Mech. 535, 189–214.
* Chauhan et al. (2013) Chauhan, K., Hutchins, N., Monty, J. & Marusic, I. 2013 Structure inclination angles in the convective atmospheric surface layer. Boundary-Layer Meteorol. 147, 41–50.
* Chester et al. (2007) Chester, S., Meneveau, C. & Parlange, M. B. 2007 Modeling turbulent flow over fractal trees with renormalized numerical simulation. J. Comput. Phys. 225, 427–448.
* Christen et al. (2007) Christen, A., van Gorsel, E. & Vogt, R. 2007 Coherent structures in urban roughness sublayer turbulence. Intl J. Climatol. 27, 1955–1968.
* Christensen & Adrian (2001) Christensen, K. T. & Adrian, R. J. 2001 Statistical evidence of hairpin vortex packets in wall turbulence. J. Fluid Mech. 431, 433–443.
* Coceal et al. (2007) Coceal, O., Dobre, A., Thomas, T. G. & Belcher, S. E. 2007 Structure of turbulent flow over regular arrays of cubical roughness. J. Fluid Mech. 589, 375–409.
* Costamagna et al. (2003) Costamagna, P., Vittori, G. & Blondeaux, P. 2003 Coherent structures in oscillatory boundary layers. J. Fluid Mech. 474, 1–33.
* Del Alamo et al. (2006) Del Alamo, J. C., Jimenez, J., Zandonade, P. & Moser, R. D. 2006 Self-similar vortex clusters in the turbulent logarithmic region. J. Fluid Mech. 561, 329–358.
* Dennis & Nickels (2011) Dennis, D. J. C. & Nickels, T. B. 2011 Experimental measurement of large-scale three-dimensional structures in a turbulent boundary layer. part 1. vortex packets. J. Fluid Mech. 673, 180–217.
* Dong et al. (2020) Dong, S., Huang, Y., Yuan, X. & Lozano-Durán, A. 2020 The coherent structure of the kinetic energy transfer in shear turbulence. J. Fluid Mech. 892, A22.
* Eitel-Amor et al. (2015) Eitel-Amor, G., Örlü, R., Schlatter, P. & Flores, O. 2015 Hairpin vortices in turbulent boundary layers. Phys. Fluids 27, 025108.
* Elsinga et al. (2012) Elsinga, G. E., Poelma, C., Schröder, A., Geisler, R., Scarano, F. & Westerweel, J. 2012 Tracking of vortices in a turbulent boundary layer. J. Fluid Mech. 697, 273–295.
* Fernando (2010) Fernando, H. J. S. 2010 Fluid dynamics of urban atmospheres in complex terrain. Annu. Rev. Fluid Mech. 42, 365–389.
* Finnigan (2000) Finnigan, J. J. 2000 Turbulence in plant canopies. Annu. Rev. Fluid Mech. 32, 519–571.
* Finnigan et al. (2009) Finnigan, J. J., Shaw, R. H. & Patton, E. G. 2009 Turbulence structure above a vegetation canopy. J. Fluid Mech. 637, 387–424.
* Ganapathisubramani et al. (2005) Ganapathisubramani, B., Hutchins, N., Hambleton, W. T., Longmire, E. K. & Marusic, I. 2005 Investigation of large-scale coherence in a turbulent boundary layer using two-point correlations. J. Fluid Mech. 524, 57–80.
* Ganapathisubramani et al. (2003) Ganapathisubramani, B., Longmire, E. K. & Marusic, I. 2003 Characteristics of vortex packets in turbulent boundary layers. J. Fluid Mech. 478, 35–46.
* Giometto et al. (2016) Giometto, M. G., Christen, A., Meneveau, C., Fang, J., Krafczyk, M. & Parlange, M. B. 2016 Spatial characteristics of roughness sublayer mean flow and turbulence over a realistic urban surface. Boundary-Layer Meteorol. 160, 425–452.
* Grimsdell & Angevine (2002) Grimsdell, A. W. & Angevine, W. M. 2002 Observations of the afternoon transition of the convective boundary layer. J. Appl. Meteorol. Climatol. 41, 3–11.
* Guala et al. (2012) Guala, M., Tomkins, C. D., Christensen, K. T. & Adrian, R. J. 2012 Vortex organization in a turbulent boundary layer overlying sparse roughness elements. J. Hydraul. Res. 50, 465–481.
* Head & Bandyopadhyay (1981) Head, M. R. & Bandyopadhyay, P. 1981 New aspects of turbulent boundary-layer structure. J. Fluid Mech. 107, 297–338.
* Hicks et al. (2018) Hicks, B. B., Pendergrass, W. R., Baker, B. D., Saylor, R. D., O’Dell, D. L., Eash, N. S. & McQueen, J. T. 2018 On the relevance of $\ln(z_{0}/z_{0T})=kb^{-1}$. Boundary-Layer Meteorol. 167, 285–301.
* Hommema & Adrian (2003) Hommema, S. E. & Adrian, R. J. 2003 Packet structure of surface eddies in the atmospheric boundary layer. Boundary-Layer Meteorol. 106, 147–170.
* Hoover et al. (2015) Hoover, J. D., Stauffer, D. R., Richardson, S. J., Mahrt, L., Gaudet, B. J. & Suarez, A. 2015 Submeso motions within the stable boundary layer and their relationships to local indicators and synoptic regime in moderately complex terrain. J. Appl. Meteorol. and Climatol. 54, 352–369.
* Hu et al. (2023) Hu, R., Dong, S. & Vinuesa, R. 2023 General attached eddies: Scaling laws and cascade self-similarity. Phys. Rev. Fluids 8, 044603.
* Huang et al. (2009) Huang, J., Cassiani, M. & Albertson, J. D. 2009 Analysis of coherent structures within the atmospheric boundary layer. Boundary-Layer Meteorol. 131, 147–171.
* Hunt et al. (1988) Hunt, J. C. R., Wray, A. A. & Moin, P. 1988 Eddies, streams, and convergence zones in turbulent flows. Studying Turbulence Using Numerical Simulation Databases, 2. Proceedings of the 1988 Summer Program .
* Huq et al. (2007) Huq, P., White, L. A., Carrillo, A., Redondo, J., Dharmavaram, S. & Hanna, S. R. 2007 The shear layer above and in urban canopies. J. Appl. Meteorol. Climatol. 46, 368–376.
* Hutchins et al. (2012) Hutchins, N., Chauhan, K., Marusic, I., Monty, J. & Klewicki, J. 2012 Towards reconciling the large-scale structure of turbulent boundary layers in the atmosphere and laboratory. Boundary-Layer Meteorol. 145, 273–306.
* Jeong & Hussain (1995) Jeong, J. & Hussain, F. 1995 On the identification of a vortex. J. Fluid Mech. 285, 69–94.
* Kanda et al. (2004) Kanda, M., Moriwaki, R. & Kasamatsu, F. 2004 Large-eddy simulation of turbulent organized structures within and above explicitly resolved cube arrays. Boundary-Layer Meteorol. 112, 343–368.
* Katul et al. (2006) Katul, G., Poggi, D., Cava, D. & Finnigan, J. 2006 The relative importance of ejections and sweeps to momentum transfer in the atmospheric boundary layer. Boundary-Layer Meteorol. 120, 367–375.
* Kim & Moin (1985) Kim, J. & Moin, P. 1985 Application of a fractional-step method to incompressible Navier-Stokes equations. J. Comput. Phys. 59, 308–323.
* Kim & Moin (1986) Kim, J. & Moin, P. 1986 The structure of the vorticity field in turbulent channel flow. part 2. study of ensemble-averaged fields. J. Fluid Mech. 162, 339–363.
* Klewicki et al. (2014) Klewicki, J., Philip, J., Marusic, I., Chauhan, K. & Morrill-Winter, C. 2014 Self-similarity in the inertial region of wall turbulence. Phys. Rev. E 90, 063015.
* Krogstad & Antonia (1994) Krogstad, P. Å. & Antonia, R. A. 1994 Structure of turbulent boundary layers on smooth and rough walls. J. Fluid Mech. 277, 1–21.
* Lee et al. (2011) Lee, J. H., Sung, H. J. & Krogstad, P. 2011 Direct numerical simulation of the turbulent boundary layer over a cube-roughened wall. J. Fluid Mech. 669, 397–431.
* Leonardi & Castro (2010) Leonardi, S. & Castro, I. P. 2010 Channel flow over large cube roughness: a direct numerical simulation study. J. Fluid Mech. 651, 519–539.
* Li & Bou-Zeid (2011) Li, D. & Bou-Zeid, E. 2011 Coherent structures and the dissimilarity of turbulent transport of momentum and scalars in the unstable atmospheric surface layer. Boundary-Layer Meteorol. 140, 243–262.
* Li & Bou-Zeid (2019) Li, Q. & Bou-Zeid, E. 2019 Contrasts between momentum and scalar transport over very rough surfaces. J. Fluid Mech. 880, 32–58.
* Li & Giometto (2023) Li, W. & Giometto, M. G. 2023 Mean flow and turbulence in unsteady canopy layers. J. Fluid Mech. 974, A33.
* Lohou et al. (2000) Lohou, F., Druilhet, A., Campistron, B., Redelsperger, J. L. & Saïd, F. 2000 Numerical study of the impact of coherent structures on vertical transfers in the atmospheric boundary layer. Boundary-Layer Meteorol. 97, 361–383.
* Lozano-Durán et al. (2020) Lozano-Durán, A., Giometto, M. G., Park, G. I. & Moin, P. 2020 Non-equilibrium three-dimensional boundary layers at moderate Reynolds numbers. J. Fluid Mech. 883, A20–1.
* Mahrt (2007) Mahrt, L. 2007 The influence of nonstationarity on the turbulent flux–gradient relationship for stable stratification. Boundary-Layer Meteorol. 125, 245–264.
* Mahrt (2008) Mahrt, L. 2008 The influence of transient flow distortion on turbulence in stable weak-wind conditions. Boundary-Layer Meteorol. 127, 1–16.
* Mahrt (2009) Mahrt, L. 2009 Characteristics of submeso winds in the stable boundary layer. Boundary-Layer Meteorol. 130, 1–14.
* Mahrt (2014) Mahrt, L. 2014 Stably stratified atmospheric boundary layers. Annu. Rev. Fluid Mech. 46, 23–45.
* Mahrt & Bou-Zeid (2020) Mahrt, L. & Bou-Zeid, E. 2020 Non-stationary boundary layers. Boundary-Layer Meteorol. 177, 189–204.
* Mahrt et al. (2013) Mahrt, L., Thomas, C., Richardson, S., Seaman, N., Stauffer, D. & Zeeman, M. 2013 Non-stationary generation of weak turbulence for very stable and weak-wind conditions. Boundary-Layer Meteorol. 147, 179–199.
* Margairaz et al. (2018) Margairaz, F., Giometto, M. G., Parlange, M. B. & Calaf, M. 2018 Comparison of dealiasing schemes in large-eddy simulation of neutrally stratified atmospheric flows. Geosci. Model Dev. 11, 4069–4084.
* Marusic et al. (2001) Marusic, I, Kunkel, G. J. & Porté-Agel, F. 2001 Experimental study of wall boundary conditions for large-eddy simulation. J. Fluid Mech. 446, 309–320.
* Marusic et al. (2010) Marusic, I., Mathis, R. & Hutchins, N. 2010 Predictive model for wall-bounded turbulent flow. Science 329, 193–196.
* Marusic & Monty (2019) Marusic, I. & Monty, J. P. 2019 Attached eddy model of wall turbulence. Annu. Rev. Fluid Mech. 51, 49–74.
* Marusic et al. (2013) Marusic, I., Monty, J. P., Hultmark, M. & Smits, A. J. 2013 On the logarithmic region in wall turbulence. J. Fluid Mech. 716, R3.
* Mazzuoli & Vittori (2019) Mazzuoli, M. & Vittori, G. 2019 Turbulent spots in an oscillatory flow over a rough wall. Eur. J. Mech. B/Fluids 78, 161–168.
* Meneveau & Marusic (2013) Meneveau, C. & Marusic, I. 2013 Generalized logarithmic law for high-order moments in turbulent boundary layers. J. Fluid Mech. 719, R1.
* Michioka et al. (2014) Michioka, T., Takimoto, H. & Sato, A. 2014 Large-eddy simulation of pollutant removal from a three-dimensional street canyon. Boundary-Layer Meteorol. 150, 259–275.
* Mittal & Iaccarino (2005) Mittal, R. & Iaccarino, G. 2005 Immersed boundary methods. Annu. Rev. Fluid Mech. 37, 239–261.
* Moin & Kim (1982) Moin, P. & Kim, J. 1982 Numerical investigation of turbulent channel flow. J. Fluid Mech. 118, 341–377.
* Moin & Kim (1985) Moin, P. & Kim, J. 1985 The structure of the vorticity field in turbulent channel flow. part 1. analysis of instantaneous fields and statistical correlations. J. Fluid Mech. 155, 441–464.
* Monin & Obukhov (1954) Monin, A.S. & Obukhov, A.M. 1954 Basic laws of turbulent mixing in the surface layer of the atmosphere. Contrib. Geophys. Inst. Acad. Sci. USSR 24, 163–187.
* Monti et al. (2002) Monti, P., Fernando, H. J. S., Princevac, M., Chan, W. C., Kowalewski, T. A. & Pardyjak, E. R. 2002 Observations of flow and turbulence in the nocturnal boundary layer over a slope. J. Atmos. Sci. 59, 2513–2534.
* Morris et al. (2007) Morris, S. C., Stolpa, S. R., Slaboch, P. E. & Klewicki, J. C. 2007 Near-surface particle image velocimetry measurements in a transitionally rough-wall atmospheric boundary layer. J. Fluid Mech. 580, 319–338.
* Oke et al. (2017) Oke, T. R., Mills, G., Christen, A. & Voogt, J. A. 2017 Urban climates. Cambridge University Press.
* Orszag (1970) Orszag, S. A. 1970 Analytical theories of turbulence. J. Fluid Mech. 41, 363–386.
* Orszag & Pao (1975) Orszag, S. A. & Pao, Y. 1975 Numerical computation of turbulent shear flows. In Adv. Geophys., , vol. 18, pp. 225–236. Elsevier.
* Pan et al. (2014) Pan, Y., Chamecki, M. & Isard, S. A. 2014 Large-eddy simulation of turbulence and particle dispersion inside the canopy roughness sublayer. J. Fluid Mech. 753, 499–534.
* Perry & Chong (1982) Perry, A. E. & Chong, M. S. 1982 On the mechanism of wall turbulence. J. Fluid Mech. 119, 173–217.
* Piomelli (2008) Piomelli, U. 2008 Wall-layer models for large-eddy simulations. Prog. Aerosp. Sci. 44, 437–446.
* Pokrajac et al. (2007) Pokrajac, D., Campbell, L. J., Nikora, V., Manes, C. & McEwan, I. 2007 Quadrant analysis of persistent spatial velocity perturbations over square-bar roughness. Exp. Fluids 42, 413–423.
* Raupach et al. (1986) Raupach, M. R., Coppin, P. A. & Legg, B. J. 1986 Experiments on scalar dispersion within a model plant canopy part i: The turbulence structure. Boundary-Layer Meteorol. 35, 21–52.
* Raupach et al. (1996) Raupach, M. R., Finnigan, J. J. & Brunet, Y. 1996 Coherent eddies and turbulence in vegetation canopies: the mixing-layer analogy. In Boundary-Layer Meteorol., pp. 351–382. Springer.
* Reynolds & Castro (2008) Reynolds, R. T. & Castro, I. P. 2008 Measurements in an urban-type boundary layer. Exp. Fluids 45, 141–156.
* Salesky & Anderson (2020) Salesky, S. T. & Anderson, W. 2020 Revisiting inclination of large-scale motions in unstably stratified channel flow. J. Fluid Mech. 884, R5.
* Salon et al. (2007) Salon, S., Armenio, V. & Crise, A. 2007 A numerical investigation of the Stokes boundary layer in the turbulent regime. J. Fluid Mech. 570, 253–296.
* Schmid et al. (2019) Schmid, M. F., Lawrence, G. A., Parlange, M. B. & Giometto, M. G. 2019 Volume averaging for urban canopies. Boundary-Layer Meteorol. 173, 349–372.
* Scotti & Piomelli (2001) Scotti, A. & Piomelli, U. 2001 Numerical simulation of pulsating turbulent channel flow. Phys. Fluids 13, 1367–1384.
* Skamarock et al. (2008) Skamarock, W. C., Klemp, J. B., Dudhia, J., Gill, D. O., Barker, D. M., Duda, M. G., Huang, X. Y., Wang, W. & Powers, J. G. 2008 A description of the advanced research WRF version. Tech. Rep.. National Center for Atmospheric Research.
* Smith et al. (1991) Smith, C. R. T., Walker, J. D. A., Haidari, A. H. & Sobrun, U. 1991 On the dynamics of near-wall turbulence. Philos. Trans. R. Soc. London, Ser. A 336, 131–175.
* Smits et al. (2011) Smits, A. J., McKeon, B. J. & Marusic, I. 2011 High-Reynolds number wall turbulence. Annu. Rev. Fluid Mech. 43, 353–375.
* Squire et al. (2017) Squire, D. T., Hutchins, N., Morrill-Winter, C., Schultz, M. P., Klewicki, J. C. & Marusic, I. 2017 Applicability of Taylor’s hypothesis in rough- and smooth-wall boundary layers. J. Fluid Mech. 812, 398–417.
* Stanislas et al. (2008) Stanislas, M., Perret, L. & Foucaut, J. 2008 Vortical structures in the turbulent boundary layer: a possible route to a universal representation. J. Fluid Mech. 602, 327–382.
* Stull (1988) Stull, R. B. 1988 An introduction to boundary layer meteorology, , vol. 13. Springer Science & Business Media.
* Takimoto et al. (2013) Takimoto, H., Inagaki, A., Kanda, M., Sato, A. & Michioka, T. 2013 Length-scale similarity of turbulent organized structures over surfaces with different roughness types. Boundary-Layer Meteorol. 147, 217–236.
* Theodorsen (1952) Theodorsen, T. 1952 Mechanisms of turbulence. In Proceedings of the 2nd Midwestern Conference on Fluid Mechanics.
* Tomkins & Adrian (2003) Tomkins, C. D. & Adrian, R. J. 2003 Spanwise structure and scale growth in turbulent boundary layers. J. Fluid Mech. 490, 37–74.
* Townsend (1976) Townsend, A. A. 1976 The structure of turbulent shear flow. Cambridge university press.
* Tseng et al. (2006) Tseng, Y. H., Meneveau, C. & Parlange, M. B. 2006 Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation. Environ. Sci. Technol. 40, 2653–2662.
* Volino et al. (2007) Volino, R. J., Schultz, M. P. & Flack, K. A. 2007 Turbulence structure in rough- and smooth-wall boundary layers. J. Fluid Mech. 592, 263–293.
* Volino et al. (2011) Volino, R. J., Schultz, M. P. & Flack, K. A. 2011 Turbulence structure in boundary layers over periodic two- and three-dimensional roughness. J. Fluid Mech. 676, 172–190.
* Wallace (2016) Wallace, J. M. 2016 Quadrant analysis in turbulence research: history and evolution. Annu. Rev. Fluid Mech. 48, 131–158.
* Wallace et al. (1972) Wallace, J. M., Eckelmann, H. & Brodkey, R. S. 1972 The wall region in turbulent shear flow. J. Fluid Mech. 54, 39–48.
* Wang et al. (2014) Wang, L., Li, D., Gao, Z., Sun, T., Guo, X. & Bou-Zeid, E. 2014 Turbulent transport of momentum and scalars above an urban canopy. Boundary-Layer Meteorol. 150, 485–511.
* Watanabe (2004) Watanabe, T. 2004 Large-eddy simulation of coherent turbulence structures associated with scalar ramps over plant canopies. Boundary-Layer Meteorol. 112, 307–341.
* Wu & Christensen (2007) Wu, Y. & Christensen, K. T. 2007 Outer-layer similarity in the presence of a practical rough-wall topography. Phys. Fluids 19, 085108.
* Xie et al. (2008) Xie, Z., Coceal, O. & Castro, I. P. 2008 Large-eddy simulation of flows over random urban-like obstacles. Boundary-Layer Meteorol. 129, 1–23.
* Yang & Shen (2009) Yang, D. & Shen, L. 2009 Characteristics of coherent vortical structures in turbulent flows over progressive surface waves. Phys. Fluids 21, 125106.
* Yang et al. (2016) Yang, X. I. A., Marusic, I. & Meneveau, C. 2016 Moment generating functions and scaling laws in the inertial layer of turbulent wall-bounded flows. J. Fluid Mech. 791, R2.
* Yang & Meneveau (2016) Yang, X. I. A. & Meneveau, C. 2016 Recycling inflow method for simulations of spatially evolving turbulent boundary layers over rough surfaces. J. Turbul. 17, 75–93.
* Zhang et al. (2022) Zhang, W., Zhu, X., Yang, X. I. A. & Wan, M. 2022 Evidence for Raupach et al.’s mixing-layer analogy in deep homogeneous urban-canopy flows. J. Fluid Mech. 944, A46.
* Zhang & Simons (2019) Zhang, X. & Simons, R. 2019 Experimental investigation on the structure of turbulence in the bottom wave-current boundary layers. Coast. Eng. 152, 103511.
* Zhou et al. (1999) Zhou, J., Adrian, R. J., Balachandar, S. & Kendall, T. 1999 Mechanisms for generating coherent packets of hairpin vortices in channel flow. J. Fluid Mech. 387, 353–396.
|
# The lower bound of weighted representation function
Shi-Qiang Chen111 E-mail<EMAIL_ADDRESS>(S.-Q. Chen).
School of Mathematics and Statistics,
Anhui Normal University, Wuhu 241002, P. R. China
Abstract. For any given set $A$ of nonnegative integers and for any given two
positive integers $k_{1},k_{2}$, $R_{k_{1},k_{2}}(A,n)$ is defined as the
number of solutions of the equation $n=k_{1}a_{1}+k_{2}a_{2}$ with
$a_{1},a_{2}\in A$. In this paper, we prove that if $k\geq 2$ is an integer and
$A\subseteq\mathbb{N}$ is a set such that $R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus
A,n)$ holds for all integers $n\geq n_{0}$, then $R_{1,k}(A,n)\gg\log n$. 2020
Mathematics Subject Classification: 11B13
Keywords: Partition; weighted representation function
## 1 Introduction
Let $\mathbb{N}$ be the set of all nonnegative integers. For a given set
$A\subseteq\mathbb{N}$, $n\in\mathbb{N}$, representation functions
$R_{1}(A,n)$, $R_{2}(A,n)$ and $R_{3}(A,n)$ are defined as
$R_{1}(A,n)=\mid\\{(a,a^{\prime}):n=a+a^{\prime},~{}a,a^{\prime}\in A\\}\mid,$
$R_{2}(A,n)=\mid\\{(a,a^{\prime}):n=a+a^{\prime},~{}a<a^{\prime},~{}a,a^{\prime}\in
A\\}\mid,$ $R_{3}(A,n)=\mid\\{(a,a^{\prime}):n=a+a^{\prime},~{}a\leq
a^{\prime},~{}a,a^{\prime}\in A\\}\mid,$
respectively. Sárközy once asked the following question: for
$i\in\\{1,2,3\\}$, are there two sets of nonnegative integers $A$ and $B$ with
$|(A\cup B)\setminus(A\cap B)|=+\infty$
such that $R_{i}(A,n)=R_{i}(B,n)$ for all sufficiently large integers $n$? This problem
of Sárközy has been solved completely, and many profound related results have
since been obtained; see [1]-[5] and [7]-[10].
For any given two positive integers $k_{1},k_{2}$ and set
$A\subseteq\mathbb{N}$, weighted representation function
$R_{k_{1},k_{2}}(A,n)$ is defined as the number of solutions of the equation
$n=k_{1}a_{1}+k_{2}a_{2}$ with $a_{1},a_{2}\in A$.
In 2012, Yang and Chen [11] studied weighted representation functions. They
proved that if $k_{1}$ and $k_{2}$ are two integers with $k_{2}>k_{1}\geq 2$
and $(k_{1},k_{2})=1$, then there does not exist a set $A\subseteq\mathbb{N}$
such that $R_{k_{1},k_{2}}(A,n)=R_{k_{1},k_{2}}(\mathbb{N}\setminus A,n)$ for
all sufficiently large integers $n$, whereas if $k$ is an integer with $k\geq 2$,
then there exists a set $A\subseteq\mathbb{N}$ such that
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)$ for all integers $n\geq 1$.
They also asked the following question.
###### Problem 1.
Let $k$ be an integer with $k\geq 2$ and $A\subseteq\mathbb{N}$ such that
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)$ for all integers $n\geq
n_{0}$. Is it true that $R_{1,k}(A,n)\geq 1$ for all sufficiently large
integers $n$? Is it true that $R_{1,k}(A,n)\rightarrow\infty$ as
$n\rightarrow\infty$?
In 2016, Qu [6] solved this problem affirmatively and proved the following result.
###### Theorem A.
(See [6, Theorem 1].) Let $k$ be an integer with $k>1$ and
$A\subseteq\mathbb{N}$ such that $R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus
A,n)$ for all integers $n\geq n_{0}$. Then $R_{1,k}(A,n)\rightarrow\infty$ as
$n\rightarrow\infty$.
In this paper, we continue to focus on Problem 1 and give a lower bound for the
weighted representation function.
###### Theorem 1.1.
Let $k$ be an integer with $k\geq 2$ and $A\subseteq\mathbb{N}$ such that
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)$ holds for all integers $n\geq
n_{0}$. Then $R_{1,k}(A,n)\gg\log n$.
Throughout this paper, the characteristic function of the set
$A\subseteq\mathbb{N}$ is denoted by
$\chi(t)=\begin{cases}0,&t\not\in A,\\ 1,&t\in A.\end{cases}$
Let $C(x)$ be the set of nonnegative integers in $C$ which are less than or
equal to $x$. For positive integer $k$ and sets $A,B\subseteq\mathbb{N}$,
define $kA=\\{ka:a\in A\\}$ and $A+B=\\{a+b:a\in A,~{}b\in B\\}$.
## 2 Lemmas
###### Lemma 2.1.
(See [11, Lemma 2].) Let $k\geq 2$ be an integer and $A\subseteq\mathbb{N}$.
Then $R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)$ holds for all integers
$n\geq n_{0}$ if and only if the following two conditions hold:
(a) for all $n_{0}\leq n<k+n_{0}$, we have
$\underset{a_{1}+ka_{2}=n}{\underset{a_{1}\geq 0,a_{2}\geq
0}{\sum}}1=\underset{a_{1}+ka_{2}=n}{\underset{a_{1}\geq 0,a_{2}\geq
0}{\sum}}\chi(a_{1})+\underset{a_{1}+ka_{2}=n}{\underset{a_{1}\geq 0,a_{2}\geq
0}{\sum}}\chi(a_{2});$ (2.1)
(b) for all $n\geq k+n_{0}$, we have
$\chi(n)+\chi\left(\left\lfloor\frac{n}{k}\right\rfloor\right)=1.$ (2.2)
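As a quick numerical illustration (ours, not part of the results), condition (b) can be used to generate a candidate set $A$: fix $\chi$ on $\\{0,1,\ldots,k\\}$ and propagate $\chi(n)=1-\chi(\lfloor n/k\rfloor)$ for $n\geq k+1$. The sketch below, with the initial values $\chi(0)=1$ and $\chi(1)=\cdots=\chi(k)=0$ chosen as an assumption, compares $R_{1,k}(A,n)$ with $R_{1,k}(\mathbb{N}\setminus A,n)$ and displays the logarithmic growth asserted in Theorem 1.1.

```python
# Numerical exploration of Lemma 2.1(b): build chi via chi(n) = 1 - chi(floor(n/k))
# and compare R_{1,k}(A, n) with R_{1,k}(N\A, n).  Initial values are an assumption.

def build_chi(k, N):
    chi = [0] * (N + 1)
    chi[0] = 1                      # chosen initial segment: chi(0)=1, chi(1..k)=0
    for n in range(k + 1, N + 1):
        chi[n] = 1 - chi[n // k]    # condition (b) of Lemma 2.1
    return chi

def R(chi, k, n, member):
    # number of pairs (a1, a2) with n = a1 + k*a2 and chi(a1) = chi(a2) = member
    return sum(1 for a2 in range(n // k + 1)
               if chi[n - k * a2] == member and chi[a2] == member)

k, N = 3, 3000
chi = build_chi(k, N)
bad = [n for n in range(1, N + 1) if R(chi, k, n, 1) != R(chi, k, n, 0)]
print("disagreements:", bad[:10] or "none up to N")
for n in (10, 100, 1000, 3000):    # R_{1,k}(A, n) grows at least like log n (Theorem 1.1)
    print(n, R(chi, k, n, 1))
```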
###### Lemma 2.2.
Let $k\geq 2$ be an integer and $A\subseteq\mathbb{N}$. If
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)$ holds for all integers $n\geq
n_{0}$, then for any $n\geq\lfloor\frac{n_{0}+k}{k}\rfloor+1$, we have
$\displaystyle\chi(n)+\chi(k^{i}n+j)=1,\quad j=0,\ldots,k^{i}-1,\quad\text{if $i$ is odd};$
$\displaystyle\chi(n)=\chi(k^{i}n+j),\quad j=0,\ldots,k^{i}-1,\quad\text{if $i$ is even}.$ (2.3)
###### Proof.
We use induction on $i$ to prove that (2.3) is true. By (2.2), we have
$\chi(n)+\chi(kn+j)=1,~{}~{}~{}j=0,\ldots,k-1.$ (2.4)
Therefore, (2.3) is true for $i=1$.
Next, we assume that (2.3) is true for $i=s$ and prove it for $i=s+1$.
If $s+1$ is even, then by the induction hypothesis on
$i=s$, we have
$\chi(n)+\chi(k^{s}n+j)=1,~{}~{}~{}j=0,\ldots,k^{s}-1.$ (2.5)
By (2.2), we have
$\chi(k^{s}n+j)+\chi(k(k^{s}n+j)+u)=1,~{}~{}~{}j=0,\ldots,k^{s}-1;u=0,\ldots,k-1.$
It follows from (2.5) that
$\chi(n)=\chi(k(k^{s}n+j)+u),~{}~{}~{}j=0,\ldots,k^{s}-1;u=0,\ldots,k-1,$
that is
$\chi(n)=\chi(k^{s+1}n+j),~{}~{}~{}j=0,\ldots,k^{s+1}-1.$ (2.6)
If $s+1$ is odd, then by the induction hypothesis on $i=s$, we have
$\chi(n)=\chi(k^{s}n+j),~{}~{}~{}j=0,\ldots,k^{s}-1.$ (2.7)
By (2.2), we have
$\chi(k^{s}n+j)+\chi(k(k^{s}n+j)+u)=1,~{}~{}~{}j=0,\ldots,k^{s}-1;u=0,\ldots,k-1.$
It follows from (2.7) that
$\chi(n)+\chi(k(k^{s}n+j)+u)=1,~{}~{}~{}j=0,\ldots,k^{s}-1;u=0,\ldots,k-1,$
that is
$\chi(n)+\chi(k^{s+1}n+j)=1,~{}~{}~{}j=0,\ldots,k^{s+1}-1.$ (2.8)
Up to now, (2.3) has been proved.
This completes the proof of Lemma 2.2. ∎
## 3 Proof of Theorem 1.1
Let $T=\lfloor\frac{n_{0}+k}{k}\rfloor+1$. Given an odd
$j\in[0,\lfloor\frac{\lfloor\log_{k}{\frac{n}{T}}\rfloor}{2}\rfloor]$, for any
sufficiently large integer $n$ there exists an integer $i$ such that
$k^{i}(k^{j}+1)T\leq n<k^{i+1}(k^{j}+1)T.$ (3.1)
We now prove that $i+j=\lfloor\log_{k}{\frac{n}{T}}\rfloor$ or
$i+j=\lfloor\log_{k}{\frac{n}{T}}\rfloor-1$. Indeed, if
$i+j\geq\lfloor\log_{k}{\frac{n}{T}}\rfloor+1$, then
$\frac{n}{T}=k^{\log_{k}{\frac{n}{T}}}<k^{\lfloor\log_{k}{\frac{n}{T}}\rfloor+1}\leq
k^{i+j}<k^{i+j}+k^{i}\leq\frac{n}{T},$
a contradiction. If $i+j\leq\lfloor\log_{k}{\frac{n}{T}}\rfloor-2$, then
$\frac{n}{T}<k^{i+j+1}+k^{i+1}\leq 2k^{i+j+1}\leq
2k^{\lfloor\log_{k}{\frac{n}{T}}\rfloor-1}\leq
2k^{\log_{k}{\frac{n}{T}}-1}\leq k^{\log_{k}{\frac{n}{T}}}=\frac{n}{T},$
a contradiction. By (3.1), there exist $T\leq t\leq kT-1$ and $0\leq
r<k^{i}(k^{j}+1)$ such that
$n=k^{i}(k^{j}+1)t+r.$
According to the value of $r$, we divide the argument into the following two cases.
Case 1. $0\leq r\leq k^{i+j}+k^{i}-k-1$. Noting that $j$ is odd,
by (2.3), we have
$[k^{i+j-1}t,k^{i+j-1}t+k^{i+j-1}-1]\cup[k^{i}t,k^{i}t+k^{i}-1]\subseteq
A~{}\text{or}~{}\mathbb{N}\setminus A.$
Then
$[k^{i}(k^{j}+1)t,k^{i}(k^{j}+1)t+(k^{i+j}+k^{i}-k-1)]\subseteq
A+kA~{}\text{or}~{}(\mathbb{N}\setminus A)+k(\mathbb{N}\setminus A),$
it follows that $n\in A+kA~{}\text{or}~{}(\mathbb{N}\setminus
A)+k(\mathbb{N}\setminus A)$, which implies that
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq 1.$
So far we have proved that, for a given odd
$j\in[0,\lfloor\frac{\lfloor\log_{k}{\frac{n}{T}}\rfloor}{2}\rfloor]$, we have
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq 1.$ (3.2)
It is clear that for any two different odd integers $j_{1},j_{2}$ such that
$j_{1},j_{2}\in[0,\lfloor\frac{\lfloor\log_{k}{\frac{n}{T}}\rfloor}{2}\rfloor]$
and integers $i_{1},i_{2}$ such that
$i_{1}+j_{1}=K_{1},~{}~{}~{}~{}i_{2}+j_{2}=K_{2},$
where
$K_{1},K_{2}\in\\{\lfloor\log_{k}{\frac{n}{T}}\rfloor,\lfloor\log_{k}{\frac{n}{T}}\rfloor-1\\},$
we have
$i_{1}\neq i_{2}.$ (3.3)
Indeed, assume that $j_{1}<j_{2}$; since
$1=-1+2\leq K_{1}-K_{2}-j_{1}+j_{2}=i_{1}-i_{2},$
it follows that $i_{1}\neq i_{2}$. By (3.3), we have
$[k^{i_{1}}t,k^{i_{1}}t+k^{i_{1}}-1]\cap[k^{i_{2}}t,k^{i_{2}}t+k^{i_{2}}-1]=\emptyset.$
(3.4)
Therefore, by (3.2) and (3.4), we have
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus
A,n)\geq\lfloor\frac{\lfloor\log_{k}{\frac{n}{T}}\rfloor}{4}\rfloor\gg\log n.$
Case 2. $(k^{i+j}+k^{i}-k-1)+1\leq r\leq k^{i}(k^{j}+1)-1$. Since
$|A\cap\\{T,kT\\}|=1,$
it follows that
$|A(kT)|\geq 1,~{}|(\mathbb{N}\setminus A)(kT)|\geq 1.$ (3.5)
Let $r=k^{i+j}+k^{i}-k-1+s,~{}s\in[1,k]$. Then
$n=k^{i}(k^{j}+1)t+k^{i+j}+k^{i}-k-1+s=k^{i}((k^{j}+1)t+k^{j})+k^{i}-k-1+s.$ (3.6)
By (2.3), we have
$[k^{i}((k^{j}+1)t+k^{j}),k^{i}((k^{j}+1)t+k^{j})+k^{i}-1]\subseteq
A~{}\text{or}~{}\mathbb{N}\setminus A.$
By (3.5), we can choose $a\in[0,kT]$ such that
$\\{a\\}\cup[k^{i}((k^{j}+1)t+k^{j}),k^{i}((k^{j}+1)t+k^{j})+k^{i}-1]\subseteq
A~{}\text{or}~{}\mathbb{N}\setminus A.$ (3.7)
Since $j\in[0,\lfloor\frac{\lfloor\log_{k}{\frac{n}{T}}\rfloor}{2}\rfloor]$,
it follows from
$i+j=\lfloor\log_{k}{\frac{n}{T}}\rfloor~{}~{}\text{or}~{}~{}\lfloor\log_{k}{\frac{n}{T}}\rfloor-1$
that
$k^{i}-k-1\geq
k^{\lfloor\frac{\lfloor\log_{k}\frac{n}{T}\rfloor}{2}\rfloor-1}-k-1\geq
k^{2}T\geq ka$
for any sufficiently large $n$. It follows from (3.6) and (3.7) that
$k^{i}((k^{j}+1)t+k^{j})+s\leq n-ka\leq k^{i}((k^{j}+1)t+k^{j})+k^{i}-k-1+s,$
which implies that $n\in A+kA~{}\text{or}~{}(\mathbb{N}\setminus
A)+k(\mathbb{N}\setminus A)$, and so
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq 1.$
So far we have proved that, for any given odd
$j\in[0,\lfloor\frac{\lfloor\log_{k}{\frac{n}{T}}\rfloor}{2}\rfloor]$, we have
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq 1.$ (3.8)
By (3.3), we have
$[k^{i_{1}}((k^{j_{1}}+1)t+k^{j_{1}}),k^{i_{1}}((k^{j_{1}}+1)t+k^{j_{1}})+k^{i_{1}}-1]\cap[k^{i_{2}}((k^{j_{2}}+1)t+k^{j_{2}}),k^{i_{2}}((k^{j_{2}}+1)t+k^{j_{2}})+k^{i_{2}}-1]=\emptyset.$
(3.9)
Therefore, by (3.8) and (3.9), we have
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus
A,n)\geq\lfloor\frac{\lfloor\log_{k}{\frac{n}{T}}\rfloor}{4}\rfloor\gg\log n.$
This completes the proof of Theorem 1.1.
## References
* [1] Y.G. Chen and V.F. Lev, Integer sets with identical representation functions, Integers 16(2016), A36.
* [2] G. Dombi, Additive properties of certain sets, Acta Arith. 103(2002), 137-146.
* [3] V.F. Lev, Reconstructing integer sets from their representation functions, Electron. J. Combin. 11(2004), R78.
* [4] S.Z. Kiss and C. Sándor, Partitions of the set of nonnegative integers with the same representation functions, Discrete Math. 340(2017), 1154-1161.
* [5] Z.H. Qu, A remark on weighted representation functions, Taiwanese J. Math. 18(2014), 1713-1719.
* [6] Z.H. Qu, A note on representation functions with different weights, Colloq. Math. 143(2016), 105-112.
* [7] E. Rozgonyi and C. Sándor, An extension of Nathanson’s theorem on representation functions, Combinatorica 37(2017), 521-537.
* [8] C. Sándor, Partitions of natural numbers and their representation functions, Integers 4(2004), A18.
* [9] M. Tang, Partitions of the set of natural numbers and their representation functions, Discrete Math. 308(2008), 2614-2616.
* [10] M. Tang, Partitions of natural numbers and their representation functions, Chinese Ann. Math. Ser A 37(2016), 41-46. For English version, see Chinese J. Contemp. Math. 37(2016), 39-44.
* [11] Q.H. Yang and Y.G. Chen, Partitions of natural numbers with the same weighted representation functions, J. Number Theory 132(2012), 3047-3055.
|
# Well-posedness of path-dependent semilinear parabolic master equations
Shanjian Tang S. Tang, School of Mathematical Sciences, Fudan University,
Handan Road 220, 200433, Shanghai, PRC<EMAIL_ADDRESS>and Huilin Zhang
H. Zhang, Research Center for Mathematics and Interdisciplinary Sciences,
Shandong University, Binhai Road 72, 266237, Qingdao, PRC.
<EMAIL_ADDRESS>
###### Abstract.
Master equations are partial differential equations for measure-dependent
unknowns, and are introduced to describe asymptotic equilibrium of large scale
mean-field interacting systems, especially in games and control theory. In
this paper we introduce new semilinear master equations whose unknowns are
functionals of both paths and path measures. They include state-dependent
master equations, path-dependent partial differential equations (PPDEs),
history information dependent master equations and time inconsistent (e.g.
time-delayed) equations, which naturally arise in stochastic control theory
and games. We give a classical solution to the master equation by introducing
a new notion called the strong vertical derivative (SVD) for path-dependent
functionals, inspired by Dupire’s vertical derivative [20], and by applying a
stochastic forward-backward system argument. Moreover, we treat a general
non-smooth case with a functional mollifying method.
###### Key words and phrases:
path-dependent, master equation, mean-field games, Itô-Dupire formula
###### 2010 Mathematics Subject Classification:
60G22, 60H10, 34C29
###### Contents
1. Introduction
2. Basic setup and Itô calculus for functionals of path and path-measure
   2.1. The canonical setup
   2.2. Strong vertical derivatives with respect to path and path-measure
   2.3. Itô-Dupire formula
3. Differentiability of solutions for path-dependent mean-field BSDEs
   3.1. First-order differentiability
   3.2. Second-order differentiability
4. Solutions of semilinear path-dependent master equations
   4.1. The decoupling field and its regularity
   4.2. Classical solutions of path-dependent master equations
   4.3. Some typical cases
   4.4. The general case via functional mollifying
5. Appendix
   5.1. Proof of Lemma 3.4
   5.2. An extension of [41, Theorem 4.5] without assumption of local Lipschitz continuity in time variable
## 1. Introduction
Denote by $\mathbb{C}_{T,d}$ the space of continuous functions on $[0,T]$ with
values in $\mathbb{R}^{d}$ and by $\mathcal{P}^{C}_{2}$ the totality of
probability measures on $\mathbb{C}_{T,d}$ with finite second order moments.
Given deterministic functions $(b_{1},\sigma_{1})$ and $(b_{2},\sigma_{2})$ on
$[0,T]$ with values in $\mathbb{R}^{d}\times\mathbb{R}^{d\times d}$, we study
the following path-dependent parabolic master equation,
(6)
$\displaystyle\left\\{\begin{array}[]{l}\partial_{t}u(t,\omega,\mu)+\frac{1}{2}\text{Tr}\,[\partial_{\omega}^{2}u(t,\omega,\mu)\sigma_{1}(t)\sigma_{1}(t)^{T}]+\partial_{\omega}u(t,\omega,\mu)b_{1}(t)\\\
\quad+\frac{1}{2}\text{Tr}\,[\int_{\mathbb{C}_{T,d}}\partial_{\tilde{\omega}}\partial_{\mu}u(t,\omega,\mu,\tilde{\omega})\mu(d\tilde{\omega})\sigma_{2}(t)\sigma_{2}(t)^{T}]+\int_{\mathbb{C}_{T,d}}\partial_{\mu}u(t,\omega,\mu,\tilde{\omega})\mu(d\tilde{\omega})b_{2}(t)\\\
\quad+f(t,\omega,u(t,\omega,\mu),\sigma_{1}(t)\partial_{\omega}u(t,\omega,\mu),\mu,\mathcal{L}_{u(t,W^{\mu},\mu)})=0,\\\
u(T,\omega,\mu)=\Phi(\omega,\mu),\quad(t,\omega,\mu)\in[0,T]\times\mathbb{C}_{T,d}\times\mathcal{P}^{C}_{2}.\end{array}\right.$
Here, (functional) derivatives $\partial_{\omega}$ and $\partial_{\mu}$ are
taken in the spirit of Dupire and Lions (see the subsequent definitions (17)
and (34)), respectively, and $W^{\mu}$ represents the canonical processes on
$\mathbb{C}_{T,d}$ under $\mu.$ Master equations (also called Hamilton-
Jacobi-Bellman (HJB) equations in a Wasserstein space when concerned with
control problems) naturally arise from mean-field games. The mean-field game
was introduced independently by Lasry and Lions [35] and by Huang, Malhamé and
Caines [31] for the study of Nash equilibria in systems consisting of a large
number of “agents”. The classical mean-field theory, popular in statistical
physics (e.g. McKean-Vlasov and Boltzmann models), quantum mechanics and
quantum chemistry (e.g. Hartree-Fock type model), was developed in the last
century to study systems with a large number of particles (see Kac [33], McKean
[37], Sznitman [45, 46, 47, 48], Bossy [3] and references therein). In the
last decade, the mean-field theory has been widely studied and applied, from
theoretical areas including stochastic differential games, partial
differential equations (PDEs) and stochastic control to practical areas
such as engineering, economics and social science; see, e.g., [36], [15],
[7], [11], [12], [27] and the references therein. The
master equation is a PDE involving measure variables on an infinite-dimensional
Wasserstein space, introduced by Lions in his lectures [36] on
differential games. It is a powerful analytic tool for the study of large
systems in physics and games with a mean-field interaction (see [14], [15]),
and includes interesting particular cases such as HJB and Fokker-Planck (FP)
equations in dynamical systems, stochastic control and mathematical finance of
mean-field type. Results are available in various frameworks: Bensoussan et
al. [5] consider the regular case when measure variables are restricted on
those measures of square integrable density functions, Cardaliaguet [15] gives
a viscosity solution for first-order HJB equations on a Wasserstein space,
Gomes and Saude [30] survey well-posedness of HJB-FP equations for reduced
mean-field games, Buckdahn et al. [9] and Chassagneux et al. [10] study
classical solutions for second order master equations through stochastic
differential equations (SDEs) and forward backward stochastic differential
equations (FBSDEs) respectively, Carmona and Delarue [13] consider the mean-
field games and corresponding master equation with common noise, Cardaliaguet
et al. [14] give an analytic approach for master equations, Pham and Wei [42]
study the dynamic programming principle for Bellman master equation, etc.
However, all these works consider the state-dependent case, where
$(\omega,\mu)$ in Equation (6) take values in
$\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{k})$. Here,
$\mathcal{P}_{2}(\mathbb{R}^{k})$ is the set of probability measures on
$\mathbb{R}^{k}$ with finite second order moments. In practice, many problems
could be non-Markovian or path-dependent: to mention a few, optional pricing
for exotic options (e.g. Asian, chooser, lookback and barrier options [20],
[19], [32], [26]), stochastic game theory and stochastic control with delayed
information ([2], [28], [44], [51], [49]), rough volatility [29], [6], etc.
Dupire [20] introduced a functional Itô formula to develop a calculus for path-dependent functionals, which was subsequently extended by Cont and Fournié [16, 17] and references therein (on the other hand, see the alternative approach of Flandoli and Zanco [25] to path-dependent problems, lifting the primal problem to a functional one in Banach spaces). In contrast to the classical functional approach to path-dependent stochastic analysis (see Ahn [1]), Dupire’s approach is characterized by its finite-dimensional vertical derivative (see the definition (17) below), which makes it possible to solve non-Markovian problems, in particular the problem posed by Peng in his ICM 2010 lecture [40] of whether non-Markovian FBSDEs can be connected with path-dependent PDEs (PPDEs). Concerning the well-posedness of PPDEs, Peng and Wang [41] consider smooth solutions of parabolic PPDEs; Ekren et al. [21, 22, 23] study viscosity solutions of quasilinear and fully nonlinear PPDEs; Cosso et al. [18] treat PPDEs as Hilbert-space-valued equations and construct viscosity solutions; Peng and Song [43] introduce a new path derivative and construct Sobolev solutions of the corresponding parabolic fully nonlinear PPDEs via $G$-BSDEs [39]; Wu and Zhang [50] solve a master equation whose solutions take the form $V(t,\mu)$, $\mu\in\mathcal{P}_{2}^{C}$. However, the path-dependent master equation with solutions in the general form $u(t,\omega,\mu)$, $(\omega,\mu)\in\mathbb{C}_{T,d}\times\mathcal{P}^{C}_{2}$, still remains to be studied.
In this article, we study the classical solution of the path-dependent master equation (6). In contrast to the state-dependent case [10], constructing a smooth solution of equation (6) via FBSDEs raises new issues. The first comes from the very definition of vertical derivatives with respect to paths and measures on $\mathbb{C}_{T,d}$ (see identities (17) and (34) for details). Dupire’s vertical derivative [20] depends on the cut-off time for paths; in the same spirit, the derivative with respect to measures on the path space (see also [50]) depends on the cut-off time as well. Note that in this case two different flows are involved in the probabilistic representation of the path-dependent field $u$. Applying the flow property of solutions of SDEs involves vertical derivatives with respect to the path and the path measure at two different times, while only one cut-off time is available in the field $u$. Indeed, one of these derivatives should describe the smoothness of $u$ with respect to “past” information and is not defined in the sense of Dupire, whose derivative is only available for non-anticipative (or adapted) functionals and describes the smoothness with respect to “present” information. Secondly, the existence of the derivative with respect to the measure in Lions’ sense usually requires the separability of the underlying space, which here is the space of càdlàg functions in view of Dupire’s vertical derivative. However, FBSDEs are more consistent with the uniform norm than with the Skorokhod norm, which leaves us without a general existence result for derivatives. To handle the first issue, we propose a new notion called the “strong vertical derivative” (SVD) (see Definition 2.1), which is derived from the vertical derivative of Dupire and requires functionals to be smooth before the cut-off time. By definition, non-anticipative functionals with SVDs also have vertical derivatives. Moreover, the assumption of SVDs is general enough to include all interesting continuously vertically differentiable functionals (see Example 2.2). On the other hand, the SVD can be viewed as a pathwise definition of the Malliavin derivative (see e.g. [38]) on the canonical space (see Remark 2.3 below for details). Based on this differentiability with respect to paths and path measures, we establish the Itô formula and the partial Itô formula. Then we consider the non-Markovian FBSDE
(7)
$\left\\{\begin{array}[]{l}dX(r)=b_{1}(r)dr+\sigma_{1}(r)dB(r),\\\\[2.84526pt]
dX^{\prime}(r)=b_{2}(r)dr+\sigma_{2}(r)dB(r),\\\\[2.84526pt]
dY(r)=-f(X_{r},Y(r),Z(r),\mathcal{L}_{X^{\prime}_{r}},\mathcal{L}_{Y^{\eta_{t}}(r)})dr+Z(r)dB(r),\
\ \ r\geq t,\\\\[2.84526pt] X_{t}=\gamma_{t},\ \ X^{\prime}_{t}=\eta_{t},\ \
Y(T)=\Phi(X_{T},\mathcal{L}_{X^{\prime}_{T}}),\end{array}\right.$
where $\eta$ is a continuous process with law $\mu,$ and $Y^{\eta_{t}}$ solves
the associated mean-field FBSDE
(8)
$\mathcal{Y}(s)=\Phi(X_{T}^{\eta_{t}},\mathcal{L}_{X^{\prime}_{T}})+\int_{s}^{T}f(X_{r}^{\eta_{t}},\mathcal{Y}(r),\mathcal{Z}(r),\mathcal{L}_{X^{\prime}_{r}},\mathcal{L}_{\mathcal{Y}(r)})dr-\int_{s}^{T}\mathcal{Z}(r)dB(r),\quad
s\in[t,T],$
with $X_{r}^{\eta_{t}}:=X_{r}|_{\gamma=\eta}$ for $r\in[t,T].$ Here we denote by $\omega(t)$ the value of the path $\omega$ at time $t$ and by $\omega_{t}(\cdot):=\omega(t\wedge\cdot)$ the path up to time $t.$ Assuming that the functional generator $f$ and the terminal value $\Phi$ have smooth SVDs, we prove that the solution of the corresponding FBSDE also has smooth SVDs. Moreover, we construct all strong vertical derivatives of $u(t,\omega,\mu)$ via the FBSDE and establish the regularity required to apply our Itô formula (see Theorem 2.15 and Corollary 2.16), which will benefit both numerical and theoretical approximations of equilibria by finite systems (see [24], [34]). Furthermore, we also address certain non-smooth cases and connect them with viscosity solutions via a functional mollifying argument, which is illustrated with typical examples of the time-delayed case.
In summary, our main contribution is threefold. Firstly, we propose the general form of the path-dependent master equation (6) and establish its well-posedness. Secondly, we introduce the notion of strong vertical differentiability and build Itô and partial Itô formulas in this framework, which are fundamental tools in the study of path-dependent mean-field problems. Thirdly, the functional smoothing argument also seems new in the path-dependent framework.
The rest of the article is organized as follows. In Section 2, we introduce the notion of SVDs with respect to paths and measures on the path space, and build in this framework a functional Itô calculus incorporating paths and path measures. In Section 3, we show the differentiability and regularity of the associated FBSDE solutions. In Section 4, we prove the existence and uniqueness of smooth solutions for the path-dependent parabolic master equation. Moreover, we extend our result to more general cases by a functional mollifying argument.
Acknowledgements. This work is supported by NSF of China (Grant Numbers
12031009, 11901104) and Laboratory of Mathematics for Nonlinear Science, Fudan
University, Shanghai, China.
## 2\. Basic setup and Itô calculus for functionals of path and path-measure
### 2.1. The canonical setup
For any fixed $T>0$, we denote by $\mathbb{C}_{T,d}=C([0,T],\mathbb{R}^{d})$
the canonical space and equip it with the supremum norm $\|\cdot\|_{[0,T]}.$
$W$ is the canonical process and $\\{\mathcal{F}_{t}^{W}\\}_{0\leq t\leq T}$
is the natural filtration. For any $(t,\omega)\in[0,T]\times\mathbb{C}_{T,d},$
${\omega}_{t}$ is the cut-off path, meaning that
$\omega_{t}\in\mathbb{C}_{T,d}$ such that
(9) $\omega_{t}(r)=\omega(r)1_{[0,t)}(r)+\omega(t)1_{[t,T]}(r),\ \ r\in[0,T];$
and $\omega(t)$ is the state of $\omega$ at time $t$. Let
$\mathcal{P}^{C}_{2}$ be the set of probability measures on
$(\mathbb{C}_{T,d},\mathcal{F}_{T}^{W})$ with finite second order moments,
i.e. $\mu\in\mathcal{P}_{2}^{C}$ iff
$|||\mu|||^{2}:=\mathbb{E}^{\mu}[\|W\|_{[0,T]}^{2}]<\infty.$ For
$\mu\in\mathcal{P}^{C}_{2},$ $\mu_{t}\in\mathcal{P}^{C}_{2}$ is the
distribution of the stopped process $W_{t}$ under $\mu.$ For any
$\mu,\nu\in\mathcal{P}^{C}_{2},$ we define the following classical
2-Wasserstein distance
(10)
$W_{2}(\mu,\nu)=\inf_{P\in\mathcal{P}(\mu,\nu)}\left(\int_{\mathbb{C}_{T,d}\times\mathbb{C}_{T,d}}\|u-v\|_{[0,T]}^{2}\
dP(u,v)\right)^{\frac{1}{2}},$
where $\mathcal{P}(\mu,\nu)$ is the set of all probability measures on
$(\mathbb{C}_{T,d}\times\mathbb{C}_{T,d},\mathcal{F}^{W}_{T}\times\mathcal{F}^{W}_{T})$
with marginal measures $\mu$ and $\nu.$ To introduce functional derivatives on the space of paths, we consider the space of càdlàg paths $\mathbb{D}_{T,d}:=D([0,T],\mathbb{R}^{d}),$ which can be equipped with the uniform topology $\|\cdot\|_{[0,T]},$ or with the Skorokhod topology
(11)
$d(\omega,\omega^{\prime}):=\inf_{\lambda\in\Lambda_{[0,T]}}\sup_{t\in[0,T]}(|t-\lambda(t)|+|\omega(t)-\omega^{\prime}(\lambda(t))|),$
where $\Lambda_{[0,T]}$ is the set of all strictly increasing continuous mappings on $[0,T]$ with $\lambda(0)=0$ and $\lambda(T)=T.$ In the following, we equip $\mathbb{D}_{T,d}$ with the uniform topology unless stated otherwise. With the space $\mathbb{C}_{T,d}$ replaced by $\mathbb{D}_{T,d}$, notations such as $\mathcal{P}^{D}_{2}$ and $W_{2}(\mu,\nu)$ are self-explanatory.
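As a purely numerical aside (not part of the framework above), the distance (10) between two empirical path measures, each supported on $n$ equally weighted sampled paths, reduces to an optimal assignment problem with costs given by squared sup-norm distances, since for such measures an optimal coupling is a permutation. The following Python sketch, whose names and discretization choices are ours, computes it with the Hungarian algorithm from SciPy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def empirical_w2(paths_mu, paths_nu):
    """W_2 of (10) between two empirical path measures with n atoms each.

    The infimum in (10) over couplings of two n-atom empirical measures is
    attained at a permutation, i.e. an assignment problem with cost
    ||u - v||_{[0,T]}^2 (squared sup-norm), solved by the Hungarian method.
    """
    cost = np.array([[np.max(np.abs(u - v)) ** 2 for v in paths_nu]
                     for u in paths_mu])
    rows, cols = linear_sum_assignment(cost)
    return np.sqrt(cost[rows, cols].mean())

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 300, 200, 1.0 / 200

def brownian(scale):
    steps = scale * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    return np.cumsum(steps, axis=1)

# distance between samples of a BM and of a BM with 1.5 times its volatility
print(empirical_w2(brownian(1.0), brownian(1.5)))
```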
Suppose that $(\Omega,\mathcal{F},P)$ is an atomless probability space
supporting a $d$-dimensional Brownian motion $B,$ and
$\\{\mathcal{F}_{t}\\}_{t\in[0,T]}$ is the natural augmented filtration. For
any $t\in[0,T]$ and $r\in[t,T],$ we define $\mathcal{F}_{r}^{t}$ as the
$\sigma$-algebra generated by $\\{B(s)-B(t);t\leq s\leq r\\}$ and completed
under $P$. For any (stopped up to time $t$) process $X_{t},$ we denote by
$\mathcal{L}_{X_{t}}$ the law of the process $X_{t}$ and $\mathcal{L}_{X(t)}$
the law of the random variable $X(t)$. In the following, we use notation
$\mathbb{M}_{2}^{C}$ ($\mathbb{M}_{2}^{D}$, resp.) as the collection of
measurable continuous processes (càdlàg processes, resp.) with laws in
$\mathcal{P}_{2}^{C}$ ($\mathcal{P}_{2}^{D}$, resp.). Since for any $\mu\in\mathcal{P}_{2}^{D}$ we can always find an atomless probability space $(\Omega,\mathcal{F},P)$ supporting a càdlàg process $\eta$ with law $\mu,$ we will always suppose that, for any $\mu\in\mathcal{P}_{2}^{D},$ $(\Omega,\mathcal{F},P)$ is rich enough to support a càdlàg process $\eta$ such that $\mathcal{L}_{\eta}=\mu.$ Moreover,
for any progressively measurable process $X$ and random variable $\xi$ on
$(\Omega,\mathcal{F},P)$, we define the following norms if they are finite:
for any $t\in[0,T],$ $p\in\mathbb{N^{+}},$
(12) $\|X\|_{\mathbb{S}^{p},[t,T]}^{p}:=\mathbb{E}^{P}[\|X\|^{p}_{[t,T]}],\ \
\|X\|_{\mathbb{H}^{p},[t,T]}^{p}:=\mathbb{E}^{P}[(\int_{t}^{T}|X(r)|^{2}dr)^{\frac{p}{2}}],\
\ \|\xi\|_{L^{p}}^{p}:=\mathbb{E}^{P}[|\xi|^{p}].$
We write $\mathbb{S}^{p}([t,T],\mathbb{R}^{k})$,
$\mathbb{H}^{p}([t,T],\mathbb{R}^{k})$ and
$L^{p}(\mathcal{F}_{T},\mathbb{R}^{k})$ for spaces of progressively measurable
processes on $[t,T]$ and random variables with values in $\mathbb{R}^{k}$ and
finite corresponding norms. Denote by $C^{n}(\mathbb{R}^{m},\mathbb{R}^{k})$
($C^{n}_{b}(\mathbb{R}^{m},\mathbb{R}^{k})$, resp.) the space of (bounded,
resp.) continuous functions from $\mathbb{R}^{m}$ to $\mathbb{R}^{k}$ with
(bounded, resp.) continuous derivatives up to order $n.$ Usually, we omit
$\mathbb{R}^{k}$ in
$\mathbb{S}^{p}([t,T],\mathbb{R}^{k}),\mathbb{H}^{p}([t,T],\mathbb{R}^{k}),L^{p}(\mathcal{F}_{T},\mathbb{R}^{k}),C(\mathbb{R}^{m},\mathbb{R}^{k})$
when $k=1,$ and also omit the time interval $[t,T]$ if no confusion arises.
Moreover, for
$(Y,Z)\in\mathbb{S}^{p}([t,T],\mathbb{R}^{m})\times\mathbb{H}^{p}([t,T],\mathbb{R}^{n}),$
we write
(13)
$\|(Y,Z)\|_{\mathbb{S}^{p}\times\mathbb{H}^{p}}:=\left(\|Y\|_{\mathbb{S}^{p}}^{p}+\|Z\|_{\mathbb{H}^{p}}^{p}\right)^{\frac{1}{p}}.$
### 2.2. Strong vertical derivatives with respect to path and path-measure
Denote by $\hat{\mathbb{D}}_{T,d}$ the product space
$[0,T]\times\mathbb{D}_{T,d}\times\mathcal{P}^{D}_{2}$ and by $\mathscr{D}$
the space of functionals on $\hat{\mathbb{D}}_{T,d}.$ A functional $f\in\mathscr{D}$ is said to be non-anticipative if for any $(t,\omega,\mu),$ $f(t,\omega,\mu)=f(t,\omega_{t},\mu_{t})$. For non-anticipative $f\in\mathscr{D},$ we call $f$ continuous on $\hat{\mathbb{D}}_{T,d}$ and write $f\in\mathscr{C}(\hat{\mathbb{D}}_{T,d})$ if $f$ is continuous on the product space $[0,T]\times\mathbb{D}_{T,d}\times\mathcal{P}^{D}_{2}$ equipped with the premetric
(14)
$d_{p}((t,\omega,\mu),(t^{\prime},\omega^{\prime},\mu^{\prime})):=|t-t^{\prime}|+\|\omega_{t}-\omega^{\prime}_{t^{\prime}}\|+W_{2}(\mu_{t},\mu^{\prime}_{t^{\prime}}).$
For any non-anticipative $f\in\mathscr{D},$ the horizontal derivative is
defined as
(15) $\partial_{t}f(t,\omega,\mu):=\lim_{h\rightarrow
0^{+}}\frac{1}{h}[f(t+h,\omega_{t},\mu_{t})-f(t,\omega_{t},\mu_{t})],\
\forall\ (t,\omega,\mu)\in\hat{\mathbb{D}}_{T,d}.$
For any $(t,x)\in[0,T]\times\mathbb{R}^{d},$ define
$\omega^{t,x}\in\mathbb{D}_{T,d}$ by
(16) $\omega^{t,x}:=\omega+x1_{[t,T]}.$
For any fixed $(t,\mu)\in[0,T]\times\mathcal{P}_{2}^{D},$ $f(t,\cdot,\mu):\mathbb{D}_{T,d}\to\mathbb{R}$ is called vertically differentiable at $(t,\omega)$ (or at $\omega_{t}$ for short) if $f(t,\omega^{t,x},\mu)$ is differentiable at $x=0$, i.e. there exists $\partial_{\omega}f(t,\omega,\mu)\in\mathbb{R}^{d}$ such that
(17)
$f(t,\omega+x1_{[t,T]},\mu)=f(t,\omega,\mu)+\partial_{\omega}f(t,\omega,\mu)x+o(|x|),\ \ \ \forall\ x\in\mathbb{R}^{d},$
and $\partial_{\omega}f(t,\omega,\mu)$ is then called the vertical derivative.
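For intuition, the vertical derivative (17) can be probed numerically by a central finite difference in the bump size $x$. The following Python sketch, our own illustration with hypothetical names, does this for $f(t,\omega)=\omega(t)^{2}+\int_{0}^{t}\omega(r)dr$: the bump $x1_{[t,T]}$ leaves the integral over $[0,t)$ unchanged, so the vertical derivative is $2\omega(t)$.

```python
import numpy as np

# Finite-difference probe of the vertical derivative (17) for the
# non-anticipative functional f(t, omega) = omega(t)^2 + int_0^t omega(r) dr.
# Bumping the path by x 1_{[t,T]} leaves the integral over [0, t) unchanged,
# so the vertical derivative equals 2 omega(t).
n = 500
dt = 1.0 / n
omega = np.sin(np.linspace(0.0, 1.0, n + 1))        # a smooth test path
t_idx = n // 2                                       # cut-off time t = 0.5

def f(path):
    return path[t_idx] ** 2 + dt * path[:t_idx].sum()  # left Riemann sum

h = 1e-6
up, dn = omega.copy(), omega.copy()
up[t_idx:] += h                                      # path + h 1_{[t, T]}
dn[t_idx:] -= h
print((f(up) - f(dn)) / (2 * h))                     # finite difference
print(2 * omega[t_idx])                              # exact value 2 omega(t)
```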
Now we extend the vertical derivative, taken at the cut-off time for non-anticipative functionals, to path functionals at any time before the cut-off time.
###### Definition 2.1.
Suppose that $f:[0,T]\times\mathbb{D}_{T,d}\to\mathbb{R}.$ For any $\tau\leq t,$ we call $f$ strongly vertically differentiable at $(\tau,t,\omega)$ (or at $\omega_{\tau}$ for short) if there exists $\partial_{\omega_{\tau}}f(t,\omega)\in\mathbb{R}^{d}$ such that
(18)
$f(t,\omega+x1_{[\tau,T]})=f(t,\omega)+\partial_{\omega_{\tau}}f(t,\omega)x+o(|x|),\
\ \ \forall\ x\in\mathbb{R}^{d}.$
In this case, $\partial_{\omega_{\tau}}f(t,\omega)$ is called the strong
vertical derivative of $f$ at $(\tau,t,\omega)$. Moreover, if $f$ is strongly
vertically differentiable at $(\tau,t,\omega)$ for any $\tau\leq t,$ we call
$f$ strongly vertically differentiable at $(t,\omega)$ (or $\omega_{t}$ for
short).
Clearly, $f$ is strongly vertically differentiable at $\omega_{t}$ if and only if the mapping $x\to f(t,\omega^{\tau,x})$ is differentiable at $x=0$ for any $\tau\leq t$. In particular, if $f$ is non-anticipative and strongly vertically differentiable, then $f$ is vertically differentiable and its vertical derivative at $(t,\omega)$ agrees with its strong vertical derivative at $(t,t,\omega).$ For the SVD $\partial_{\omega_{\tau}}f(t,\omega),$ we can further define its SVDs in the same spirit: for any $\tau^{\prime}\leq t,$ define $\partial_{\omega_{\tau^{\prime}}}\partial_{\omega_{\tau}}f(t,\omega)$ as the SVD of $\partial_{\omega_{\tau}}f(t,\omega)$ at $(\tau^{\prime},t,\omega)$. In the following, we only need to consider the case $\tau^{\prime}=\tau.$ We say that $f$ has continuous SVDs, or that $\partial_{\omega_{\tau}}f(t,\omega)$ is continuous, if $\partial_{\omega_{\tau}}f$ is continuous with respect to the following metric: for any $(\tau,t,\omega)$ and $(\tau^{\prime},t^{\prime},\omega^{\prime})$ with $\tau\leq t,\ \tau^{\prime}\leq t^{\prime},$
(19)
$d_{sp}((\tau,t,\omega),(\tau^{\prime},t^{\prime},\omega^{\prime})):=|\tau-\tau^{\prime}|+|t-t^{\prime}|+\|\omega_{t}-\omega^{\prime}_{t^{\prime}}\|.$
Here are examples of strongly vertically differentiable functionals.
###### Example 2.2.
Let $f:[0,T]\times\mathbb{D}_{T,d}\longmapsto\mathbb{R}$ and
$(t,\omega)\in[0,T]\times\mathbb{D}_{T,d}.$
* $(i)$
If $f(t,\omega)=F(t,\omega(t))$ for a function $F\in
C^{1,k}([0,T]\times\mathbb{R}^{d})$, then we have that for any
$\tau_{1},\tau_{2},\cdots,\tau_{j}\in[0,t],$ $j\leq k,$
(20)
$\partial_{t}f(t,\omega)=\partial_{t}F(t,\omega(t)),\quad\partial_{\omega_{\tau_{j}}}\cdots\partial_{\omega_{\tau_{1}}}f(t,\omega)=D_{x}^{j}F(t,\omega(t)),$
and thus $f$ has continuous strong vertical derivatives up to order $k$.
* $(ii)$
Suppose that $f(t,\omega)=\int_{0}^{t}F(r,\omega(r))dr$ with $F\in
C^{1,k}([0,T]\times\mathbb{R}^{d})$. Then for any
$\tau_{1},\tau_{2},\cdots,\tau_{j}\in[0,t],$ $j\leq k,$
(21) $\partial_{t}f(t,\omega)=F(t,\omega(t)),\ \ \
\partial_{\omega_{\tau_{j}}}\cdots\partial_{\omega_{\tau_{1}}}f(t,\omega)=\int_{\tau}^{t}D_{x}^{j}F(r,\omega(r))dr,$
with $\tau=\max_{1\leq i\leq j}\\{\tau_{i}\\}.$ Thus $f$ has continuous SVDs up to order $k$ (a numerical check of this formula is sketched after this example).
* $(iii)$
For a partition $0=t_{0}<t_{1}<\cdots<t_{n}=T,$ and a continuously
differentiable function
$F:\underbrace{\mathbb{R}^{d}\times\mathbb{R}^{d}\times\cdots\times\mathbb{R}^{d}}_{n}\to\mathbb{R}$,
let
(22)
$f(T,\omega):=F(\omega(t_{1}),\omega(t_{2})-\omega(t_{1}),\cdots,\omega(T)-\omega(t_{n-1})).$
Then $f$ is strongly vertically differentiable at $(T,\omega)$ with
(23)
$\partial_{\omega_{t}}f(T,\omega)=\sum_{j=1}^{n}\partial_{x_{j}}F(\omega(t_{1}),\omega(t_{2})-\omega(t_{1}),\cdots,\omega(T)-\omega(t_{n-1}))1_{(t_{j-1},t_{j}]}(t),\
\ t>0.$
* $(iv)$
For fixed $t_{0}\in(0,T)$ and $F\in C^{1}(\mathbb{R}^{d})$, define
$f(T,\omega):=F(\omega(t_{0})).$ Thus $f$ has SVDs
(24) $\partial_{\omega_{t}}f(T,\omega)=D_{x}F(\omega(t_{0}))1_{[0,t_{0}]}(t),$
which may not be continuous in $t\in[0,T]$. However, consider
(25)
$f_{\varepsilon}(T,\omega):=\int_{0}^{T}\rho_{\varepsilon}(t_{0}-s)F(\omega(s))ds,\
\ \omega\in\mathbb{D}_{T,d},\ \ \varepsilon>0,$
with $\rho_{\varepsilon}$ a standard mollifier on $\mathbb{R}.$ Then, for any
$\omega\in\mathbb{C}_{T,d},$ $\lim_{\varepsilon\rightarrow
0}f_{\varepsilon}(T,\omega)=f(T,\omega).$ Moreover, according to $(ii)$, $f_{\varepsilon}(T,\omega)$ has continuous SVDs. Therefore, we can approximate path-dependent master equations with non-smooth driver and terminal functionals by equations with smooth ones; see Example 4.11 for further details.
* $(v)$
For a given partition of $[0,T]:$ $0=t_{0}<t_{1}<\cdots<t_{n}=T$ and smooth
functions $\\{f_{i}\\}_{i=0}^{n-1}$ on $\mathbb{R}^{d},$ consider
(26)
$f(t,\omega):=\sum_{i=0}^{n-1}f_{i}(\omega(t_{i}))1_{[t_{i},t_{i+1})}(t).$
Then $f$ is strongly vertically differentiable at $\omega_{t}$ with
(27)
$\partial_{\omega_{\tau}}f(t,\omega)=\sum_{i=0}^{n-1}Df_{i}(\omega(t_{i}))1_{[t_{i},t_{i+1})}(t)1_{[0,t_{i}]}(\tau),\
\ \forall\tau\leq t.$
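As a sanity check of case $(ii)$ referred to above, the following Python sketch (our own illustration on a discretized path) compares a central finite difference in the bump size of (18) with the closed-form SVD $\int_{\tau}^{t}D_{x}F(r,\omega(r))dr$, taking $F(r,x)=\sin(x)$.

```python
import numpy as np

# Check of Example 2.2 (ii) with F(r, x) = sin(x): the SVD of
# f(t, omega) = int_0^t sin(omega(r)) dr at omega_tau should equal
# int_tau^t cos(omega(r)) dr.
n, T = 1000, 1.0
dt = T / n
grid = np.linspace(0.0, T, n + 1)
omega = np.sin(3 * grid)              # a smooth test path
t_idx, tau_idx = 800, 300             # t = 0.8, tau = 0.3

def f(path):
    # left Riemann sum for int_0^t sin(path(r)) dr
    return dt * np.sin(path[:t_idx]).sum()

h = 1e-6
up, dn = omega.copy(), omega.copy()
up[tau_idx:] += h                     # omega + h 1_{[tau, T]}
dn[tau_idx:] -= h
svd_fd = (f(up) - f(dn)) / (2 * h)
svd_exact = dt * np.cos(omega[tau_idx:t_idx]).sum()
print(svd_fd, svd_exact)              # the two values should agree closely
```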
###### Remark 2.3.
The relation between the vertical derivative and the Malliavin derivative is considered in [17], where an equivalence is built through martingale representations in both frameworks (see [17, Theorem 6.1]). However, according to $(iii)$ of Example 2.2, there is a direct equivalence between SVDs and Malliavin derivatives. Recall that $W$ is the canonical process; then $f(W):=F(W(t_{1}),W(t_{2})-W(t_{1}),\cdots,W(T)-W(t_{n-1}))$ defines a cylindrical random variable under the Wiener measure, and its Malliavin derivative $\mathcal{D}_{r}f(W)$ agrees with its SVD at $(r,T,W)$. The SVD can thus be viewed as a pathwise definition of the Malliavin derivative that does not involve any probability measure. Furthermore, we can consider SVDs with respect to the driving signals of integrals and equations by restricting the domain of definition. Denote by $\mathbb{C}_{T,d}^{1}$ the subspace of $\mathbb{C}_{T,d}$ consisting of continuously differentiable paths. Consider the functional $g:[0,T]\times\mathbb{C}_{T,d}^{1}\to\mathbb{R}$ given by
(28)
$g(t,\omega):=\int_{0}^{t}G(\omega(r))d\omega(r),\quad(t,\omega)\in[0,T]\times\mathbb{C}_{T,d}^{1},$
with $G\in C_{b}^{1}(\mathbb{R}^{d},\mathbb{R}^{d}).$ Then, we have
(29)
$\partial_{\omega_{\tau}}g(t,\omega)=G(\omega(\tau))+\int_{\tau}^{t}DG(\omega(r))d\omega(r),\quad\tau\in[0,t).$
Similarly, consider $\phi:[0,T]\times\mathbb{C}_{T,d}^{1}\to\mathbb{R}$ given
by $\phi(t,\omega):=x(t)$ with
(30) $x(t)=\int_{0}^{t}H(x(r))d\omega(r),$
for a given function $H\in C_{b}^{2}(\mathbb{R},\mathbb{R}^{d})$. Then by a nontrivial argument (note that ordinary differential equations (ODEs) are not continuous with respect to the driving signal under the uniform norm), $\phi$ is strongly vertically differentiable at any $(t,\omega)\in[0,T]\times\mathbb{C}_{T,d}^{1}$, and the derivative $\partial_{\omega_{\tau}}\phi(t,\omega)$ at $(\tau,t,\omega)$ solves the following linear ODE:
(31)
$\partial_{\omega_{\tau}}x(t)=H(x(\tau))+\int_{\tau}^{t}\partial_{\omega_{\tau}}x(r)H^{\prime}(x(r))d\omega(r),\quad\
t\geq\tau.$
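The claim about equation (31) can be checked numerically. The following Python sketch, a toy illustration with our own discretization choices, solves (30) with $d=1$ and $H=\cos$ by an Euler scheme, approximates the SVD by a finite difference in the bump direction $1_{[\tau,T]}$ (which creates a jump of the driving signal at $\tau$), and compares the result with the Euler solution of the linear ODE (31).

```python
import numpy as np

# Numerical check of (31) with d = 1 and H(x) = cos(x): x(t) is solved by an
# Euler scheme; the SVD is approximated by a finite difference in the bump
# direction 1_{[tau, T]}; the result is compared with Euler for (31).
n, T = 2000, 1.0
dt = T / n
grid = np.linspace(0.0, T, n + 1)
omega = 0.5 * np.sin(2.0 * np.pi * grid)          # smooth driving signal
H, dH = np.cos, lambda x: -np.sin(x)
tau_idx, t_idx = 500, 1500                        # tau = 0.25, t = 0.75

def solve_x(dw):
    x = np.zeros(n + 1)
    for j in range(n):                            # Euler scheme for (30)
        x[j + 1] = x[j] + H(x[j]) * dw[j]
    return x

dw = np.diff(omega)
x = solve_x(dw)

# Bumping omega by +/- eps on [tau, T] adds a jump of size eps to the
# increment entering time tau.
eps = 1e-6
dw_up, dw_dn = dw.copy(), dw.copy()
dw_up[tau_idx - 1] += eps
dw_dn[tau_idx - 1] -= eps
fd = (solve_x(dw_up)[t_idx] - solve_x(dw_dn)[t_idx]) / (2.0 * eps)

y = H(x[tau_idx])                                 # Euler scheme for (31)
for j in range(tau_idx, t_idx):
    y += y * dH(x[j]) * dw[j]
print(fd, y)                                      # agree up to O(dt)
```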
The following lemma follows directly from Definition 2.1.
###### Lemma 2.4.
Suppose that $f:[0,T]\times\mathbb{D}_{T,d}\to\mathbb{R}$ is strongly
vertically differentiable, and uniformly Lipschitz continuous in $\omega:$
(32) $|f(t,\omega)-f(t,\omega^{\prime})|\leq
C\|\omega_{t}-\omega^{\prime}_{t}\|,\quad\forall(t,\omega,\omega^{\prime})\in[0,T]\times\mathbb{D}_{T,d}\times\mathbb{D}_{T,d}.$
Then we have $|\partial_{\omega_{\tau}}f(t,\omega)|\leq C$ for any
$(t,\omega)\in[0,T]\times\mathbb{D}_{T,d}$ and $\tau\leq t.$
For a non-anticipative functional $f\in\mathscr{D}$, consider its lift
$\mathbf{f}:[0,T]\times\mathbb{D}_{T,d}\times\mathbb{M}^{D}_{2}\to\mathbb{R},$
(33) $\mathbf{f}(t,\omega,\eta):=f(t,\omega,\mathcal{L}_{\eta}).$
In the spirit of Lions [36] (see also [50] for the derivative with respect to a measure on the path space), we call $f$ Fréchet (vertically) differentiable at $(t,\mu)$ (or at $\mu_{t}$ for short) if, for any fixed $\omega,$ $\mathbf{f}$ is Fréchet (vertically) differentiable at $(t,\eta)$ (or at $\eta_{t}$ for short) with $\mathcal{L}_{\eta}=\mu$ in the following sense: there exists $D_{\eta}\mathbf{f}(t,\omega,\eta)\in L^{2}_{P}(\mathcal{F}_{t},\mathbb{R}^{d})$ such that for any $\xi\in L^{2}_{P}(\mathcal{F}_{t},\mathbb{R}^{d}),$
(34) $\mathbf{f}(t,\omega,\eta+\xi 1_{[t,T]})=\mathbf{f}(t,\omega,\eta)+\mathbb{E}^{P}[D_{\eta}\mathbf{f}(t,\omega,\eta)\xi]+o(\|\xi\|_{L^{2}}).$
In particular, this means that the following Gâteaux derivative exists:
(35) $\lim_{h\rightarrow 0}\frac{1}{h}[\mathbf{f}(t,\omega,\eta+h\xi 1_{[t,T]})-\mathbf{f}(t,\omega,\eta)]=\mathbb{E}^{P}[D_{\eta}\mathbf{f}(t,\omega,\eta)\xi].$
Moreover, if there exists a non-anticipative jointly measurable functional $\partial_{\mu}f:\hat{\mathbb{D}}_{T,d}\times\mathbb{D}_{T,d}\to\mathbb{R}^{d},$ such that
(36) $D_{\eta}\mathbf{f}(t,\omega,\eta)=\partial_{\mu}f(t,\omega,\mu,\eta),\ \
\ P\text{-}a.s.,$
we call $f$ vertically differentiable at $(t,\mu)$ and
$\partial_{\mu}f(t,\omega,\mu,\tilde{\omega})$ the vertical derivative of
$f(t,\omega,\cdot)$ at $(t,\mu)$ (or $\mu_{t}$).
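The Gâteaux characterization (35) lends itself to a direct Monte Carlo check. The following Python sketch, a minimal illustration under our own sampling choices, takes $f(t,\mu)=\mathbb{E}^{\mu}[F(W(t))]$ with $F(x)=x^{3}$, for which $\partial_{\mu}f(t,\mu,\tilde{\omega})=F^{\prime}(\tilde{\omega}(t))=3\tilde{\omega}(t)^{2}$ (cf. Example 2.11 below), and compares the difference quotient in (35) with the pairing $\mathbb{E}^{P}[\partial_{\mu}f\,\xi]$.

```python
import numpy as np

# Monte Carlo check of the Gateaux derivative (35) for
# f(t, mu) = E^mu[F(W(t))] with F(x) = x**3, whose derivative functional is
# partial_mu f(t, mu, w) = F'(w(t)) = 3 w(t)**2.
rng = np.random.default_rng(1)
m, t = 200_000, 0.5
eta_t = rng.normal(0.0, np.sqrt(t), m)     # eta(t) when mu is the Wiener law
xi = eta_t ** 2                            # a direction in L^2 correlated
                                           # with eta so the pairing is nonzero
F = lambda x: x ** 3
h = 1e-4
gateaux = (F(eta_t + h * xi) - F(eta_t - h * xi)).mean() / (2 * h)
pairing = (3 * eta_t ** 2 * xi).mean()     # E[ partial_mu f(t, mu, eta) xi ]
print(gateaux, pairing)                    # both approx 9 t^2 = 2.25
```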
###### Remark 2.5.
Here we give a remark that is crucial for the validity of the notions of Fréchet and Gâteaux differentiability. Denote by $\mathbf{f}$ the lift of $f\in\mathscr{D}.$ For any $\xi\in L^{2}_{P}(\mathcal{F}_{t},\mathbb{R}^{d}),$ let $F(t,\omega,\eta,\xi):=\mathbf{f}(t,\omega,\eta+\xi 1_{[t,T]}).$ Then $\mathbf{f}$ being Fréchet differentiable at $(t,\eta)$ in the above sense is equivalent to $F(t,\omega,\eta,\xi)$ being Fréchet differentiable at $\xi=0$ in the classical sense. A similar statement holds for Gâteaux differentiability.
###### Remark 2.6.
Concerning the existence of the derivative functional $\partial_{\mu}f$: if the lift $\mathbf{f}(t,\omega,\eta)$ of $f(t,\omega,\mu)$ is Fréchet differentiable at $\eta_{t},$ and the derivative $D_{\eta}\mathbf{f}(t,\omega,\eta)$ is continuous in the sense that $D_{\eta}\mathbf{f}(t,\omega,\eta^{n})\stackrel{{\scriptstyle L^{2}}}{{\longrightarrow}}D_{\eta}\mathbf{f}(t,\omega,\eta)$ as $\eta^{n}\stackrel{{\scriptstyle L^{2}}}{{\longrightarrow}}\eta$ under the Skorokhod topology (11), then according to [50, Theorem 2.2], $\partial_{\mu}f$ exists in the sense of (36). However, to build smooth solutions of (6), we need our Itô formula (Theorem 2.15 and Corollary 2.16) to be applicable to a larger class of functionals, which only need to be continuous with respect to the uniform topology. Fortunately, we can construct the derivative directly via the corresponding FBSDEs.
For the uniqueness of $\partial_{\mu}f(t,\omega,\mu,\cdot),$ in view of identity (36), we see that it is unique $\mu$-a.s. on $\mathbb{D}_{T,d}$. Hence, for any $\mu\in\mathcal{P}^{D}_{2}$ such that $\text{supp}(\mu)=\mathbb{D}_{T,d},$ if $\partial_{\mu}f(t,\omega,\mu,\tilde{\omega})$ is continuous in $\tilde{\omega}\in\mathbb{D}_{T,d},$ then $\partial_{\mu}f(t,\omega,\mu,\cdot)$ is unique on $\mathbb{D}_{T,d}.$ Moreover, suppose that $\partial_{\mu}f(t,\omega,\cdot,\cdot)$ is jointly continuous on $\mathcal{P}_{2}^{D}\times\mathbb{D}_{T,d}.$ Then for any $\mu_{0}\in\mathcal{P}_{2}^{D},$ $\partial_{\mu}f(t,\omega,\mu_{0},\cdot)$ is unique on $\mathbb{D}_{T,d}$. Indeed, choose any $\eta\in\mathbb{M}^{D}_{2}$ with $\mathcal{L}_{\eta}=\mu_{0}\in\mathcal{P}_{2}^{D},$ and any $\eta^{\prime}\in(\mathbb{M}^{D}_{2})^{\prime}$ independent of $\eta$ such that $\text{supp}(\mathcal{L}_{\eta^{\prime}})=\mathbb{D}_{T,d}.$ Then for any $\varepsilon>0,$ the functional $\partial_{\mu}f(t,\omega,\mathcal{L}_{\eta+\varepsilon\eta^{\prime}},\cdot)$ is unique on $\mathbb{D}_{T,d}.$ It follows by the continuity of $\partial_{\mu}f(t,\omega,\cdot,\cdot)$ that $\partial_{\mu}f(t,\omega,\mu_{0},\tilde{\omega})$ is unique as the limit of $\partial_{\mu}f(t,\omega,\mathcal{L}_{\eta+\varepsilon\eta^{\prime}},\tilde{\omega})$ as $\varepsilon$ goes to zero. In conclusion, we have the following lemma.
###### Lemma 2.7.
Suppose that for any fixed $(t,\omega)\in[0,T]\times\mathbb{D}_{T,d},$ the
functional derivative $\partial_{\mu}f(t,\omega,\cdot,\cdot)$ is jointly
continuous in $\mathcal{P}_{2}^{D}\times\mathbb{D}_{T,d}.$ Then for any
$(t,\omega,\mu)\in\hat{\mathbb{D}}_{T,d},$
$\partial_{\mu}f(t,\omega,\mu,\cdot)$ is unique on $\mathbb{D}_{T,d}$.
###### Remark 2.8.
The definition of the vertical derivative given by (34) and (35) extends naturally to Banach-space-valued functionals. For any $t\in[0,T],$ suppose that $f(t,\omega,\mu)$ takes values in a (stochastic) Banach space $E_{t}$ (e.g. $\mathbb{S}^{2}({[t,T]}),\mathbb{H}^{2}({[t,T]}),L^{2}(\mathcal{F}_{t})$). Indeed, $f(t,\omega,\mu)$ has the natural lift $\mathbf{f}(t,\omega,\eta)\in E_{t}$ with $\mathcal{L}_{\eta}=\mu$. If the mapping from $L^{2}(\mathcal{F}_{t})$ to $E_{t}$
$\begin{array}[]{lccl}\mathbf{f}(t,\omega,\eta+\cdot\,1_{[t,T]}):&L^{2}(\mathcal{F}_{t})&\longrightarrow&E_{t}\\\ &\xi&\longmapsto&\mathbf{f}(t,\omega,\eta+\xi 1_{[t,T]})\end{array}$
is Fréchet (vertically) differentiable at $\xi=0,$ with derivative $D_{\eta}\mathbf{f}(t,\omega,\eta)\in L(L^{2}(\mathcal{F}_{t}),E_{t}),$ we call $f(t,\omega,\cdot)$ Fréchet (vertically) differentiable at $\mu_{t}$. Moreover, if there exists a jointly measurable functional $U:\hat{\mathbb{D}}_{T,d}\times\mathbb{D}_{T,d}\to E_{t}$ such that for any $\xi\in L^{2}(\mathcal{F}_{t})$,
$D_{\eta}\mathbf{f}(t,\omega,\eta)(\xi)=\mathbb{E}^{P}[U(t,\omega,\mu,\eta)\xi],$
we call $\partial_{\mu}f(t,\omega,\mu,\cdot):=U(t,\omega,\mu,\cdot)$ the vertical derivative of $f(t,\omega,\cdot)$ at $\mu_{t}.$
Now we introduce SVDs with respect to path-measure.
###### Definition 2.9.
For any $\tau,t\in[0,T]$ with $\tau\leq t$ and $\mu\in\mathcal{P}_{2}^{D},$ we
call a non-anticipative functional
$f:[0,T]\times\mathcal{P}_{2}^{D}\to\mathbb{R}$ Fréchet (strongly vertically)
differentiable at $(\tau,t,\mu)$ if its lift $\mathbf{f}(t,\eta)$ with
$\mathcal{L}_{\eta}=\mu$ is Fréchet (strongly vertically) differentiable: there exists $D_{\eta_{\tau}}\mathbf{f}(t,\eta)\in L^{2}_{P}(\mathcal{F}_{t},\mathbb{R}^{d})$ such that for any $\xi\in L^{2}_{P}(\mathcal{F}_{\tau},\mathbb{R}^{d}),$
(37) $\mathbf{f}(t,\eta+\xi 1_{[\tau,T]})=\mathbf{f}(t,\eta)+\mathbb{E}^{P}[D_{\eta_{\tau}}\mathbf{f}(t,\eta)\xi]+o(\|\xi\|_{L^{2}}).$
In particular, this means that the following Gâteaux derivative exists:
(38) $\lim_{h\rightarrow 0}\frac{1}{h}[\mathbf{f}(t,\eta+h\xi 1_{[\tau,T]})-\mathbf{f}(t,\eta)]=\mathbb{E}^{P}[D_{\eta_{\tau}}\mathbf{f}(t,\eta)\xi].$
We call $f$ strongly vertically differentiable at $(t,\mu)$ or $\mu_{t},$ if
it is Fréchet differentiable at $(\tau,t,\mu)$ for any $\tau\leq t$, and
moreover, there exists a jointly measurable non-anticipative functional
$\partial_{\mu_{\tau}}f:[0,T]\times\mathcal{P}_{2}^{D}\times\mathbb{D}_{T,d}\to\mathbb{R}^{d}$
such that
(39) $D_{\eta_{\tau}}\mathbf{f}(t,\eta)=\partial_{\mu_{\tau}}f(t,\mu,\eta),\ \
\ P\text{-}a.s..$
$\partial_{\mu_{\tau}}f(t,\mu,\cdot)$ is then called the strong vertical
derivative of $f(t,\cdot)$ at $(\tau,t,\mu).$
###### Remark 2.10.
For the existence and uniqueness of the SVD at $\mu_{\tau},$ we have results similar to Remark 2.6 and Lemma 2.7 for vertical derivatives. In particular, if for any $t\in[0,T],$ $\partial_{\mu_{\tau}}f(t,\cdot,\cdot)$ is jointly continuous on $\mathcal{P}_{2}^{D}\times\mathbb{D}_{T,d}$, then the SVD is unique. Moreover, we can extend SVDs in path-measure to the (stochastic) Banach framework as in Remark 2.8.
Given strongly vertically differentiable
$f:[0,T]\times\mathcal{P}_{2}^{D}\to\mathbb{R}$, for any
$(t,\mu,\tilde{\omega})\in[0,T]\times\mathcal{P}_{2}^{D}\times\mathbb{D}_{T,d}$
and $\tau\leq t,$ we can further consider SVDs of $\partial_{\mu_{\tau}}f$
with respect to $\mu_{t}$ and $\tilde{\omega}_{t}$: for any $\tau^{\prime}\leq
t,$ consider
$\partial_{\tilde{\omega}_{\tau^{\prime}}}\partial_{\mu_{\tau}}f(t,\mu,\tilde{\omega})$
as the SVD of $\partial_{\mu_{\tau}}f(t,\mu,\tilde{\omega})$ at
$(\tau^{\prime},t,\tilde{\omega})$;
$\partial_{\mu_{\tau^{\prime}}}\partial_{\mu_{\tau}}f(t,\mu,\tilde{\omega},\tilde{\omega}^{\prime})$
as the SVD of $\partial_{\mu_{\tau}}f(t,\mu,\tilde{\omega})$ at
$(\tau^{\prime},t,\mu).$ In the subsequent sections, we only need to consider
the case $\tau^{\prime}=\tau$ and the second order derivative
$\partial_{\tilde{\omega}_{\tau^{\prime}}}\partial_{\mu_{\tau}}f(t,\mu,\tilde{\omega})$.
Moreover, we say that $f$ has continuous SVDs, or that $\partial_{\mu_{\tau}}f(t,\mu,\tilde{\omega})$ is continuous, if $\partial_{\mu_{\tau}}f$ is continuous with respect to the following premetric: for any $(\tau,t,\mu,\tilde{\omega})$ and $(\tau^{\prime},t^{\prime},\mu^{\prime},\tilde{\omega}^{\prime})$ with $\tau\leq t,\ \tau^{\prime}\leq t^{\prime},$
(40)
$d_{sp}((\tau,t,\mu,\tilde{\omega}),(\tau^{\prime},t^{\prime},\mu^{\prime},\tilde{\omega}^{\prime})):=|\tau-\tau^{\prime}|+|t-t^{\prime}|+W_{2}(\mu_{t},\mu^{\prime}_{t^{\prime}})+\|\tilde{\omega}_{t}-\tilde{\omega}^{\prime}_{t^{\prime}}\|.$
$f$ is said to have continuous SVDs in path-measure up to order $2$ if both $\partial_{\mu_{\tau}}f$ and $\partial_{\tilde{\omega}_{\tau}}\partial_{\mu_{\tau}}f$ are continuous with respect to the above premetric.
###### Example 2.11.
Here we consider $f:[0,T]\times\mathcal{P}_{2}^{D}\to\mathbb{R}$ and
$(t,\mu)\in[0,T]\times\mathcal{P}_{2}^{D}.$
* $(i)$
Suppose that $F\in C^{1,2}([0,T]\times\mathbb{R}^{d})$ with $|D_{x}^{2}F|$
being uniformly bounded, and $f(t,\mu):=\mathbb{E}^{\mu}[F(t,W(t))].$ Then we
have that
$\displaystyle\partial_{t}f(t,\mu)=\mathbb{E}^{\mu}[\partial_{t}F(t,W(t))],\ \
\ \partial_{{\mu_{\tau}}}f(t,\mu,\tilde{\omega})=D_{x}F(t,\tilde{\omega}(t)),$
$\displaystyle\quad\text{and}\quad\partial_{\tilde{\omega}_{\tau}}\partial_{{\mu_{\tau}}}f(t,\mu,\tilde{\omega})=D_{x}^{2}F(t,\tilde{\omega}(t)),\
\ \ \forall\tau\in[0,t].$
Thus $f$ has continuous SVDs up to order $2$.
* $(ii)$
Let $F$ be as defined in $(i)$ and
$f(t,\mu):=\mathbb{E}^{\mu}[\int_{0}^{t}F(r,W(r))dr].$ Then for any
$\tau\in[0,t],$
$\displaystyle\partial_{t}f(t,\mu)=\mathbb{E}^{\mu}[F(t,W(t))],\ \ \
\partial_{\mu_{\tau}}f(t,\mu,\tilde{\omega})=\int_{\tau}^{t}D_{x}F(r,\tilde{\omega}(r))dr,$
$\displaystyle\quad\text{and}\quad\partial_{\tilde{\omega}_{\tau}}\partial_{\mu_{\tau}}f(t,\mu,\tilde{\omega})=\int_{\tau}^{t}D^{2}_{x}F(r,\tilde{\omega}(r))dr.$
Therefore, the functional $f$ also has continuous SVDs up to order $2$.
* $(iii)$
Let $F\in C^{1}(\mathbb{R}^{d})$ such that $|DF(x)|\leq C(1+|x|)$ for some
$C\geq 0.$ For fixed $t_{0}\in(0,T),$ consider
$\Phi(T,\mu):=\mathbb{E}^{\mu}[F(W(t_{0}))].$ Then the SVD at $\mu_{t}$ is
$\partial_{\mu_{t}}\Phi(T,\mu,\tilde{\omega}):=D_{x}F(\tilde{\omega}(t_{0}))1_{[0,t_{0}]}(t),$
which may not be continuous in $t\in[0,T]$. However, for any
$\mu\in\mathcal{P}_{2}^{D},$ consider
(41)
$\Phi_{\varepsilon}(T,\mu):=\mathbb{E}^{\mu}[\int_{0}^{T}\rho_{\varepsilon}(t_{0}-s)F(W(s))ds],$
with $\rho_{\varepsilon}$ being a standard mollifier on $\mathbb{R}.$ Then by
applying the dominated convergence theorem, we have
(42) $\lim_{\varepsilon\rightarrow 0}\Phi_{\varepsilon}(T,\mu)=\Phi(T,\mu),\ \
\forall\ \mu\in\mathcal{P}_{2}^{C}.$
Moreover, according to $(ii)$, $\Phi_{\varepsilon}(T,\mu)$ has continuous SVDs. Therefore, we may approximate functionals with non-smooth SVDs by smooth ones; a numerical illustration of the convergence (42) is sketched below, and see Example 4.11 for further application to path-dependent master equations.
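The convergence (42) can be illustrated by simulation. In the Python sketch below (our own illustration; a Gaussian kernel stands in for the compactly supported mollifier $\rho_{\varepsilon}$), we take $F(x)=x^{4}$, $t_{0}=0.5$ and $\mu$ the Wiener measure, so that $\Phi(T,\mu)=\mathbb{E}[W(t_{0})^{4}]=3t_{0}^{2}=0.75$.

```python
import numpy as np

# Monte Carlo illustration of (41)-(42) with F(x) = x**4, t_0 = 0.5 and mu
# the Wiener measure, so Phi(T, mu) = E[W(t_0)^4] = 3 t_0^2 = 0.75.
rng = np.random.default_rng(2)
n_paths, n_steps, T, t0 = 20_000, 200, 1.0, 0.5
dt = T / n_steps
grid = np.linspace(0.0, T, n_steps + 1)
increments = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)],
                   axis=1)

def phi_eps(eps):
    rho = np.exp(-0.5 * ((t0 - grid) / eps) ** 2)   # kernel rho_eps(t0 - s)
    rho /= rho.sum() * dt                           # renormalize on the grid
    return np.mean((W ** 4) @ rho) * dt   # E[int rho_eps(t0-s) F(W(s)) ds]

for eps in (0.2, 0.1, 0.05, 0.01):
    print(eps, phi_eps(eps))                        # decreases toward 0.75
```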
###### Example 2.12.
We consider non-anticipative functionals $f\in\mathscr{D}$ obtained by combining Example 2.2 and Example 2.11. For simplicity take $d=1$. Suppose that $F\in C^{1,2}_{b}([0,T]\times\mathbb{R}^{5})$, $f_{1},f_{2},f_{3},f_{5}\in C^{2}_{b}(\mathbb{R})$ and $f_{4}\in C^{2}_{b}(\mathbb{R}^{2})$. Consider the following functional
$\displaystyle
f(t,\omega,\mu):=F\Big{(}t,\omega(t),\int_{0}^{t}f_{1}(\omega(r))dr,\mathbb{E}^{\mu}[f_{2}(W(t))],\mathbb{E}^{\mu}[\int_{0}^{t}f_{3}(W(r))dr],$
$\displaystyle\quad\quad\quad\quad
\mathbb{E}^{\mu}[f_{4}(W(t),\int_{0}^{t}f_{5}(W(r))dr)]\Big{)},\quad\forall\
(t,\omega,\mu)\in\hat{\mathbb{D}}_{T,d}.$
Then one checks that $f$ has a continuous horizontal derivative and continuous SVDs up to order two in $\omega_{t}$ and $\mu_{t}.$ Indeed, for any $\tau\leq t,$
$\begin{split}&\partial_{t}f(t,\omega,\mu)=\partial_{t}F(t,x)+\partial_{x_{2}}F(t,x)f_{1}(\omega(t))+\partial_{x_{4}}F(t,x)\mathbb{E}^{\mu}\left[f_{3}(W(t))\right]\\\
&\ \ \ \ \quad\ \ \ \
\quad\quad+\partial_{x_{5}}F(t,x)\mathbb{E}^{\mu}\left[\partial_{y_{2}}f_{4}(Y)f_{5}(W(t))\right],\\\
&\partial_{\omega_{\tau}}f(t,\omega,\mu)=\partial_{x_{1}}F(t,x)+\partial_{x_{2}}F(t,x)\int_{\tau}^{t}f_{1}^{\prime}(\omega(r))dr,\\\
&\partial^{2}_{\omega_{\tau}}f(t,\omega,\mu)=\partial^{2}_{x_{1}}F(t,x)+\partial_{x_{2}}^{2}F(t,x)\left(\int_{\tau}^{t}f_{1}^{\prime}(\omega(r))dr\right)^{2}+\partial_{x_{2}}F(t,x)\int_{\tau}^{t}f_{1}^{(2)}(\omega(r))dr,\\\
&\partial_{\mu_{\tau}}f(t,\omega,\mu,\tilde{\omega})=\partial_{x_{3}}F(t,x)f^{\prime}_{2}(\tilde{\omega}(t))+\partial_{x_{4}}F(t,x)\int_{\tau}^{t}f_{3}^{\prime}(\tilde{\omega}(r))dr\\\
&\ \ \ \ \quad\ \ \ \quad\ \ \
\quad\quad+\partial_{x_{5}}F(t,x)\Big{[}\partial_{y_{1}}f_{4}(\tilde{y})+\partial_{y_{2}}f_{4}(\tilde{y})\int_{\tau}^{t}f_{5}^{\prime}(\tilde{\omega}(r))dr\Big{]},\quad\text{and}\\\
&\partial_{\tilde{\omega}_{\tau}}\partial_{\mu_{\tau}}f(t,\omega,\mu,\tilde{\omega})=\partial_{x_{3}}F(t,x)f^{(2)}_{2}(\tilde{\omega}(t))+\partial_{x_{4}}F(t,x)\int_{\tau}^{t}f_{3}^{(2)}(\tilde{\omega}(r))dr+\partial_{x_{5}}F(t,x)\Big{[}\partial^{2}_{y_{1}}f_{4}(\tilde{y})\\\
&\quad\ \ \quad\quad\ \quad\quad\ \ \
\quad\quad+2\partial_{y_{2}}\partial_{y_{1}}f_{4}(\tilde{y})\int_{\tau}^{t}f_{5}^{\prime}(\tilde{\omega}(r))dr+\partial_{y_{2}}^{2}f_{4}(\tilde{y})(\int_{\tau}^{t}f_{5}^{\prime}(\tilde{\omega}(r))dr)^{2}\Big{]},\end{split}$
where
$\displaystyle(t,x)$
$\displaystyle=\left(t,\omega(t),\int_{0}^{t}f_{1}(\omega(r))dr,\mathbb{E}^{\mu}[f_{2}(W(t))],\mathbb{E}^{\mu}\Big{[}\int_{0}^{t}f_{3}(W(r))dr\Big{]},\
\mathbb{E}^{\mu}\Big{[}f_{4}(W(t),\int_{0}^{t}f_{5}(W(r))dr)\Big{]}\right),$
$\displaystyle Y$ $\displaystyle=\left(W(t),\int_{0}^{t}f_{5}(W(r))dr\right),\
\
\text{and}\quad\tilde{y}=\left(\tilde{\omega}(t),\int_{0}^{t}f_{5}(\tilde{\omega}(r))dr\right).$
In the following, for any $f\in\mathscr{D},$ we use the generic notations $(\partial_{\omega}f,\partial^{2}_{\omega}f)$ ($(\partial_{\omega_{\tau}}f,\partial_{\omega_{\tau}}^{2}f)$, resp.) to denote the vertical derivatives (SVDs, resp.) in path, and $(\partial_{\mu}f,\partial_{\tilde{\omega}}\partial_{\mu}f)$ ($(\partial_{\mu_{\tau}}f,\partial_{\tilde{\omega}_{\tau}}\partial_{\mu_{\tau}}f)$, resp.) to denote the vertical derivatives (SVDs, resp.) in measure if there is no confusion.
###### Definition 2.13.
Denote by $\mathscr{C}(\hat{\mathbb{D}}_{T,d})$ (or $\mathscr{C}$ when there
is no confusion), the subspace of $\mathscr{D}$ which consists of all non-
anticipative and continuous functionals. Furthermore,
* (i)
$\mathscr{C}^{1,1,1}$ ($\mathscr{C}^{1,1,1}_{s}$, resp.) is the subset of $\mathscr{C}$ whose elements are continuously horizontally differentiable and (strongly, resp.) vertically differentiable w.r.t. both path and measure, with all derivatives being continuous;
* (ii)
$\mathscr{C}^{1,2,1}$ ($\mathscr{C}^{1,2,1}_{s}$, resp.) is the subset of $\mathscr{C}^{1,1,1}$ ($\mathscr{C}^{1,1,1}_{s}$, resp.) whose elements’ derivative $\partial_{\omega}f(t,\cdot,\mu,\tilde{\omega})$ ($\partial_{\omega_{\tau}}f(t,\cdot,\mu,\tilde{\omega})$, $\tau\leq t,$ resp.), $(t,\omega,\mu,\tilde{\omega})\in\hat{\mathbb{D}}_{T,d}\times\mathbb{D}_{T,d}$, is further vertically differentiable (strongly vertically differentiable at $(\tau,t,\omega)$, resp.), with all derivatives being continuous;
* (iii)
$\mathscr{C}^{1,2,1,1}$ ($\mathscr{C}^{1,2,1,1}_{s}$, resp.) is the subset of $\mathscr{C}^{1,2,1}$ ($\mathscr{C}^{1,2,1}_{s}$, resp.) whose elements’ derivative functional $\partial_{\mu}f(t,\omega,\mu,\cdot)$ ($\partial_{\mu_{\tau}}f(t,\omega,\mu,\cdot)$, $\tau\leq t$, resp.), $(t,\omega,\mu,\tilde{\omega})\in\hat{\mathbb{D}}_{T,d}\times\mathbb{D}_{T,d}$, is further vertically differentiable (strongly vertically differentiable at $(\tau,t,\tilde{\omega})$, resp.), with all derivatives being continuous.
Moreover, denote by $\mathscr{C}^{1,1,1}_{p}$ the subset of
$\mathscr{C}^{1,1,1}$ such that the functional and all its first order
derivatives have at most polynomial growth in the path variable: there exists
$k\in\mathbb{Z}^{+}$, such that for $\phi=f,\partial_{t}f,\partial_{\omega}f,$
$\psi=\partial_{\mu}f$ and any $K>0,$
(43) $\begin{split}&|\phi(t,\omega,\mu)|\leq C_{K}(1+\|\omega_{t}\|^{k}),\ \ \
|\psi(t,\omega,\mu,\tilde{\omega})|\leq
C_{K}(1+\|\omega_{t}\|^{k}+\|\tilde{\omega}_{t}\|^{k}),\\\ &\ \ \ \
\forall(t,\omega,\mu,\tilde{\omega})\in\hat{\mathbb{D}}_{T,d}\times\mathbb{D}_{T,d}\
\text{such that }|||\mu_{t}|||\leq K,\end{split}$
for a constant $C_{K}$ depending only on $K.$ Notations such as $\mathscr{C}_{p},$ $\mathscr{C}^{1,1,1}_{s,p},$ $\mathscr{C}^{0,1,1}$ and $\mathscr{C}^{1,2,1,1}_{p}$ are defined similarly.
###### Remark 2.14.
Assume that $f\in\mathscr{D}$ is non-anticipative and has a state-dependent structure: $f(t,\omega,\mu)=\tilde{f}(t,\omega(t),\mu(t))$ for some function $\tilde{f}$ defined on $[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})$. Then the horizontal and strong vertical differentiability of $f$ reduce to the differentiability of $\tilde{f}$ on $[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d}).$ Moreover,
(44)
$\displaystyle\partial_{t}f(t,\omega,\mu)=\partial_{t}\tilde{f}(t,\omega(t),\mu(t)),\ \ \partial_{\omega_{\tau}}f(t,\omega,\mu)=D_{x}\tilde{f}(t,\omega(t),\mu(t)),\quad\text{and}$
(45)
$\displaystyle\partial_{\mu_{\tau}}f(t,\omega,\mu,\tilde{\omega})=\partial_{\nu}\tilde{f}(t,\omega(t),\mu(t),\tilde{\omega}(t)),\ \forall(t,\omega,\mu)\in[0,T]\times\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D},\ \tau\leq t,$
where $\partial_{\nu}\tilde{f}$ is the Lions derivative (see e.g. [36]).
### 2.3. Itô-Dupire formula
Suppose that $(a,b)$ is a bounded progressively measurable process on
$(\Omega,\mathcal{F},P)$ with values in
$\mathbb{R}^{m}\times\mathbb{R}^{m\times d}.$ For any
$(t,\gamma)\in[0,T]\times\mathbb{D}_{T,d},$ let $X$ be the solution of the SDE
(46) $\left\\{\begin{array}[]{l}dX(r)=a(r)dr+b(r)dB(r),\\\ X_{t}=\gamma_{t},\
\ \ r\geq t.\end{array}\right.$
Let $(\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})$ be an atomless probability space supporting a $k$-dimensional Brownian motion $B^{\prime}$, and let $(c,d)$ be a bounded progressively measurable process on $(\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})$ with values in $\mathbb{R}^{n}\times\mathbb{R}^{n\times k}.$ Given $\eta\in(\mathbb{M}_{2}^{D})^{\prime}$, let $X^{\prime}$ be defined by the SDE
(47) $\left\\{\begin{array}[]{l}dX^{\prime}(r)=c(r)dr+d(r)dB^{\prime}(r),\\\ X^{\prime}_{t}=\eta_{t},\ \ \ r\geq t.\end{array}\right.$
Moreover, let $({\tilde{X}}^{\prime},\tilde{c},\tilde{d},\tilde{B}^{\prime},{\tilde{\eta}})$ be an independent copy of $(X^{\prime},c,d,B^{\prime},\eta)$, meaning that $({\tilde{X}}^{\prime},\tilde{c},\tilde{d},\tilde{B}^{\prime},{\tilde{\eta}})$ is defined on a probability space $(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{P})$ independent of $(\Omega,\mathcal{F},P)$ and $(\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})$, and has the same law as $(X^{\prime},c,d,B^{\prime},\eta)$. Then we have the following Itô-Dupire formula.
###### Theorem 2.15.
For any fixed $(t,\gamma,\eta)\in[0,T]\times\mathbb{D}_{T,d}\times(\mathbb{M}^{D}_{2})^{\prime},$ let $X$ and $X^{\prime}$ be the diffusion processes defined by (46) and (47), respectively. Suppose that $f\in\mathscr{C}^{1,2,1,1}_{p}(\hat{\mathbb{D}}_{T,d})$. Then we have
$\displaystyle f(s,X,\mathcal{L}_{X^{\prime}})-f(t,\gamma,\mathcal{L}_{\eta})$
$\displaystyle\ \ \
=\int_{t}^{s}\partial_{r}f(r,X,\mathcal{L}_{X^{\prime}})dr+\int_{t}^{s}\partial_{\omega}f(r,X,\mathcal{L}_{X^{\prime}})dX(r)$
(48) $\displaystyle\ \ \ \ \ \ +\frac{1}{2}\int_{t}^{s}\text{Tr}\
[\partial_{\omega}^{2}f(r,X_{r},\mathcal{L}_{X^{\prime}})d\langle
X\rangle(r)]+\mathbb{E}^{\tilde{P}^{\prime}}[\int_{t}^{s}\partial_{\mu}f(r,X,\mathcal{L}_{X^{\prime}},\tilde{X}^{\prime})d\tilde{X}^{\prime}(r)]$
$\displaystyle\ \ \ \ \ \
+\frac{1}{2}\mathbb{E}^{\tilde{P}^{\prime}}\int_{t}^{s}\text{Tr}\
[\partial_{\tilde{\omega}}\partial_{\mu}f(r,X,\mathcal{L}_{X^{\prime}},\tilde{X}^{\prime})\tilde{d}(r)\tilde{d}(r)^{T}]dr,\quad\forall
s\geq t.$
###### Proof.
Without loss of generality, assume $d=k=m=n=1$ and $s=T.$ Since both sides of identity (48) depend on $(X^{\prime},c,d,\eta)$ only through its law, we assume for notational simplicity that $(\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})$ is independent of $(\Omega,\mathcal{F},P)$. Consider the following discretization of $X$ and $X^{\prime}:$ for any $n\geq 1,$ take $t=t_{0}<t_{1}<\cdots<t_{n}=T$ as any partition of $[t,T]$ with vanishing modulus $\delta_{n}$. Define càdlàg processes $X^{n},{X^{\prime}}^{n}$ with $X^{n}_{t}=\gamma_{t},\ {X^{\prime}}^{n}_{t}=\eta_{t}$ by
$\displaystyle
X^{n}(r):=\sum_{i=0}^{n-1}X(t_{i})1_{[t_{i},t_{i+1})}(r)+X(T)1_{\\{T\\}}(r),$
$\displaystyle{X^{\prime}}^{n}(r):=\sum_{i=0}^{n-1}X^{\prime}(t_{i})1_{[t_{i},t_{i+1})}(r)+X^{\prime}(T)1_{\\{T\\}}(r),\quad
r\geq t.$
Since $(a,b,c,d)$ is bounded, we see that for any $r\in[t,T],$
(49)
$\displaystyle\|X^{n}\|^{p}_{\mathbb{S}^{p}}\leq\|X\|^{p}_{\mathbb{S}^{p}}<\infty,\quad\lim_{n\rightarrow\infty}\|X^{n}_{t_{i}}-X_{r}\|=0,\ P\text{-}a.s.,$ (50)
$\displaystyle|||\mathcal{L}_{{X^{\prime}}^{n}}|||^{2}=\|{X^{\prime}}^{n}\|^{2}_{\mathbb{S}^{2}}\leq\|X^{\prime}\|^{2}_{\mathbb{S}^{2}}<\infty,\quad\lim_{n\rightarrow\infty}\|{X^{\prime}}^{n}_{t_{i}}-X^{\prime}_{r}\|=0,\ P^{\prime}\text{-}a.s.,$
where $i$ above satisfies $r\in[t_{i},t_{i+1}).$ It follows from (50) that
(51)
$\lim_{n\rightarrow\infty}W_{2}(\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}},\mathcal{L}_{X^{\prime}_{r}})=0.$
Then we have
(52)
$\begin{split}&f(T,X^{n}_{T},\mathcal{L}_{X^{{}^{\prime}n}_{T}})-f(t,\gamma_{t},\mathcal{L}_{\eta_{t}})\\\
&\ \ \
=\sum_{i=0}^{n-1}[f(t_{i+1},X_{t_{i+1}}^{n},\mathcal{L}_{(X^{{}^{\prime}n})_{t_{i+1}}})-f(t_{i},X_{t_{i}}^{n},\mathcal{L}_{(X^{{}^{\prime}n})_{t_{i}}})]\\\
&\ \ \
=\sum_{i=0}^{n-1}\Big{[}(f(t_{i+1},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})-f(t_{i},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}}))+(f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})\\\
&\ \ \ \ \ \
-f(t_{i+1},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}}))+(f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i+1}}})-f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}}))\Big{]}.\end{split}$
Since
(53)
$\begin{split}f(t_{i+1},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})-f(t_{i},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})&=\int_{t_{i}}^{t_{i+1}}\partial_{r}f(r,X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})dr\\\
&=\int_{t}^{T}\partial_{r}f(r,X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})1_{[t_{i},t_{i+1})}(r)dr,\end{split}$
in view of (49) and (51), applying the dominated convergence theorem and passing to the limit along a subsequence, we have
(54)
$\lim_{n\rightarrow\infty}\sum_{i=0}^{n-1}\Big{(}f(t_{i+1},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})-f(t_{i},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})\Big{)}=\int_{t}^{T}\partial_{r}f(r,X,\mathcal{L}_{X^{\prime}})dr,\
\ \ P\text{-}a.s..$
For the second term on the right hand side of (52), since
$f\in\mathscr{C}^{1,2,1,1}_{p},$ we have that
$\phi_{i}(\theta):=f(t_{i+1},X^{n}_{t_{i}}+\theta
1_{[t_{i+1},T)},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})$ is twice continuously
differentiable in $\theta$, and moreover,
(55)
$\phi^{\prime}_{i}(\theta)=\partial_{\omega}f(t_{i+1},X^{n}_{t_{i}}+\theta
1_{[t_{i+1},T)},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}}),\ \
\phi^{\prime\prime}_{i}(\theta)=\partial_{\omega}^{2}f(t_{i+1},X^{n}_{t_{i}}+\theta
1_{[t_{i+1},T)},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}}).$
In the following, we write $X_{i}:=X(t_{i})\equiv X^{n}(t_{i})$ and $\delta X_{i}:=X_{i+1}-X_{i}.$ Similar notations such as $X^{{}^{\prime}}_{i}$ are self-explanatory. Note that $X^{n}_{t_{i+1}}=X^{n}_{t_{i}}+(X_{i+1}-X_{i})1_{[t_{i+1},T)}.$ Applying the Itô formula to $\phi_{i}(X(r)-X_{i})$ on $r\in[t_{i},t_{i+1}],$ we have
$\displaystyle
f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})-f(t_{i+1},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})$
(56) $\displaystyle\ \ \
=\int_{t_{i}}^{t_{i+1}}\partial_{\omega}f(t_{i+1},X^{n}_{t_{i}}+(X(r)-X_{i})1_{[t_{i+1},T)},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})dX(r)$
$\displaystyle\ \ \ \ \ \
+\frac{1}{2}\int_{t_{i}}^{t_{i+1}}\partial_{\omega}^{2}f(t_{i+1},X^{n}_{t_{i}}+(X(r)-X_{i})1_{[t_{i+1},T]},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})d\langle
X\rangle(r).$
Since $\|X^{n}_{t_{i}}+(X(r)-X_{i})1_{[t_{i+1},T]}-X_{r}\|\rightarrow 0,\ P$-a.s. for any $r\in[t_{i},t_{i+1})$, according to identity (56), passing to the limit along a subsequence, we have
(57)
$\begin{split}&\lim_{n\rightarrow\infty}\sum_{i=0}^{n-1}\Big{(}f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})-f(t_{i+1},X^{n}_{t_{i}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})\Big{)}\\\
&\ \ \
=\int_{t}^{T}\partial_{\omega}f(r,X,\mathcal{L}_{X^{\prime}})dX(r)+\frac{1}{2}\int_{t}^{T}\partial_{\omega}^{2}f(r,X,\mathcal{L}_{X^{\prime}})d\langle
X\rangle(r),\ \ P\text{-}a.s..\end{split}$
For the last term in the decomposition (52), we have
$\displaystyle f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i+1}}})-f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}})$
$\displaystyle\ \ \ =\int_{0}^{1}\mathbb{E}^{\prime}\Big{[}\partial_{\mu}f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)}},X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)})(\delta X^{\prime}_{i})\Big{]}d\theta$
$\displaystyle\ \ \ =\int_{0}^{1}\mathbb{E}^{\prime}\Big{[}\partial_{\mu}f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)}},X^{{}^{\prime}n}_{t_{i}})(\delta X^{\prime}_{i})\Big{]}d\theta$
$\displaystyle\ \ \ \ \ \ +\int_{0}^{1}\int_{0}^{1}\mathbb{E}^{\prime}\Big{[}\partial_{\tilde{\omega}}\partial_{\mu}f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)}},X^{{}^{\prime}n}_{t_{i}}+\lambda\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)})\theta(\delta X^{\prime}_{i})^{2}\Big{]}d\theta d\lambda.$
Since $\|X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)}-X^{\prime}_{r}\|\rightarrow 0,\ P^{\prime}$-a.s. for any $r\in[t_{i},t_{i+1})$, we have
$\lim_{n\rightarrow\infty}W_{2}(\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)}},\mathcal{L}_{X^{\prime}_{r}})=0.$
In view of (49), (51) and the dominated convergence theorem, we have
$\displaystyle\lim_{n\rightarrow\infty}\sum_{i=0}^{n-1}\int_{0}^{1}\Big{[}\partial_{\mu}f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)}},X^{{}^{\prime}n}_{t_{i}})(\delta X^{\prime}_{i})\Big{]}d\theta$ $\displaystyle\ \ \ =\int_{t}^{T}\partial_{\mu}f(r,X,\mathcal{L}_{X^{{}^{\prime}}},X^{{}^{\prime}})dX^{\prime}(r),\ P\times P^{\prime}\text{-}a.s..$
Then, according to Fubini’s theorem, we have
(58)
$\begin{split}&\lim_{n\rightarrow\infty}\sum_{i=0}^{n-1}\mathbb{E}^{\prime}\int_{0}^{1}\Big{[}\partial_{\mu}f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)}},X^{{}^{\prime}n}_{t_{i}})(\delta X^{\prime}_{i})\Big{]}d\theta\\\ &\ \ \ =\mathbb{E}^{\prime}[\int_{t}^{T}\partial_{\mu}f(r,X,\mathcal{L}_{X^{{}^{\prime}}},X^{{}^{\prime}})dX^{\prime}(r)],\ \ \ \ \ \ P\text{-}a.s..\end{split}$
By a similar argument as above, we have
(59)
$\begin{split}&\lim_{n\rightarrow\infty}\sum_{i=0}^{n-1}\int_{0}^{1}\\!\\!\\!\int_{0}^{1}\mathbb{E}^{\prime}\Big{[}\partial_{\tilde{\omega}}\partial_{\mu}f(t_{i+1},X^{n}_{t_{i+1}},\mathcal{L}_{X^{{}^{\prime}n}_{t_{i}}+\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)}},X^{{}^{\prime}n}_{t_{i}}+\lambda\theta(\delta X^{\prime}_{i})1_{[t_{i+1},T)})\theta(\delta X^{\prime}_{i})^{2}\Big{]}d\theta d\lambda\\\ &\ \ \ =\frac{1}{2}\mathbb{E}^{\prime}[\int_{t}^{T}\partial_{\tilde{\omega}}\partial_{\mu}f(r,X,\mathcal{L}_{X^{\prime}},X^{\prime})d\langle X^{\prime}\rangle(r)],\ P\text{-}a.s..\end{split}$
In view of (54), (57), (58) and (59), taking $n\to\infty$ in (52), we obtain
the desired identity.
∎
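To make the formula concrete, the following Python sketch (our own, with simple state-dependent choices so that every derivative is explicit) checks (48) pathwise by Monte Carlo for $f(t,\omega,\mu)=\omega(t)^{2}+\mathbb{E}^{\mu}[W(t)^{2}]$, with $X=B$ and $X^{\prime}=B^{\prime}$ standard Brownian motions started at $0$; here $\partial_{t}f=0$, $\partial_{\omega}f=2\omega(t)$, $\partial_{\omega}^{2}f=2$, $\partial_{\mu}f=2\tilde{\omega}(t)$ and $\partial_{\tilde{\omega}}\partial_{\mu}f=2$.

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, T, n_copies = 1000, 1.0, 10_000
dt = T / n_steps

# one path of X = B and an ensemble of independent copies of X' = B'
dB = np.sqrt(dt) * rng.standard_normal(n_steps)
X = np.concatenate([[0.0], np.cumsum(dB)])
dBp = np.sqrt(dt) * rng.standard_normal((n_copies, n_steps))
Xp = np.concatenate([np.zeros((n_copies, 1)), np.cumsum(dBp, axis=1)], axis=1)

# left-hand side of (48): f(T, X, L_{X'}) - f(0, 0, delta_0) = f(T, X, L_{X'})
lhs = X[-1] ** 2 + np.mean(Xp[:, -1] ** 2)

# right-hand side: int 2X dX + (1/2) int 2 d<X>  (path terms)
path_terms = 2.0 * np.sum(X[:-1] * dB) + T
# and E'[int 2 X' dX'] + (1/2) E'[int 2 dr]      (measure terms)
measure_terms = 2.0 * np.mean(np.sum(Xp[:, :-1] * dBp, axis=1)) + T

print(lhs, path_terms + measure_terms)  # agree up to discretization/MC error
```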
Note that $(\omega_{s})_{\tau}=\omega_{s}$ and $(\mu_{s})_{\tau}=\mu_{s}$ for
any $\tau\geq s.$ In particular, if the non-anticipative functional $f$ is
strongly vertically differentiable, we have the following partial Itô-Dupire
formula.
###### Corollary 2.16.
Suppose that $(X,X^{\prime})$ is defined as in Theorem 2.15, and
$f\in\mathscr{C}^{0,2,1,1}_{s,p}(\hat{\mathbb{D}}_{T,d})$. Then we have that
for any $t\leq s\leq v\leq T,$
$\begin{split}&f(v,X_{s},\mathcal{L}_{X^{\prime}_{s}})-f(v,\gamma_{t},\mathcal{L}_{\eta_{t}})\\\
&\ \ \
=\int_{t}^{s}\partial_{\omega_{r}}f(v,X_{r},\mathcal{L}_{X^{\prime}_{r}})dX(r)+\frac{1}{2}\int_{t}^{s}\text{Tr}\
[\partial_{\omega_{r}}^{2}f(v,X_{r},\mathcal{L}_{X^{\prime}_{r}})d\langle
X\rangle(r)]\\\ &\ \ \ \ \ \
+\mathbb{E}^{\tilde{P}^{\prime}}[\int_{t}^{s}\partial_{\mu_{r}}f(v,X_{r},\mathcal{L}_{X^{\prime}},\tilde{X}^{\prime})d\tilde{X}^{\prime}(r)]+\frac{1}{2}\mathbb{E}^{\tilde{P}^{\prime}}\int_{t}^{s}\text{Tr}\
[\partial_{\tilde{\omega}_{r}}\partial_{\mu_{r}}f(v,X_{r},\mathcal{L}_{X^{\prime}_{r}},\tilde{X}^{\prime}_{r})\tilde{d}(r)\tilde{d}(r)^{T}]dr.\end{split}$
###### Proof.
Without loss of generality, assume $v=T.$ For any $r\in[t,s],$ let
(60) $\tilde{f}(r,\omega,\mu):=f(T,\omega_{r},\mu_{r}).$
Obviously, $\tilde{f}$ is non-anticipative, and moreover, we have that for any
$h\geq 0,$
$\tilde{f}(r+h,\omega_{r},\mu_{r})=f(T,(\omega_{r})_{r+h},(\mu_{r})_{r+h})=f(T,\omega_{r},\mu_{r})=\tilde{f}(r,\omega_{r},\mu_{r}),$
which implies $\partial_{r}\tilde{f}(r,\omega_{r},\mu_{r})=0.$ Furthermore, it follows from the definitions of vertical derivatives and strong vertical derivatives that
$\displaystyle\partial_{\omega}\tilde{f}(r,\omega,\mu)=\partial_{\omega_{r}}f(T,\omega_{r},\mu_{r}),\
\ \ $
$\displaystyle\partial_{\omega}^{2}\tilde{f}(r,\omega,\mu)=\partial_{\omega_{r}}^{2}f(T,\omega_{r},\mu_{r}),$
$\displaystyle\partial_{\mu}\tilde{f}(r,\omega,\mu,\tilde{\omega})=\partial_{\mu_{r}}f(T,\omega_{r},\mu_{r},\tilde{\omega}),\
\ $
$\displaystyle\text{and}\quad\partial_{\tilde{\omega}}\partial_{\mu}\tilde{f}(r,\omega,\mu,\tilde{\omega})=\partial_{\tilde{\omega}_{r}}\partial_{\mu_{r}}f(T,\omega_{r},\mu_{r},\tilde{\omega}_{r}).$
Applying Theorem 2.15 to $\tilde{f}(r,X,\mathcal{L}_{X^{\prime}})$ on $r\in[t,s]$, we obtain the desired formula.
∎
## 3\. Differentiability of solutions for path-dependent mean-field BSDEs
In the following, for any process $(X,Y,Z)$ on the probability space
$(\Omega,\mathcal{F},P)$, we denote by $(\tilde{X},\tilde{Y},\tilde{Z})$ an
independent copy of $(X,Y,Z)$, which means that
$(\tilde{X},\tilde{Y},\tilde{Z})$ is defined in an independent probability
space $(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{P})$ and has the same law as
$(X,Y,Z).$ Recall that $B$ is a $d$-dimensional Brownian motion on $(\Omega,\mathcal{F},P).$ The following linear mean-field BSDEs and estimates are frequently used in subsequent discussions. Note that for a classical linear BSDE, the generator has linear growth in $Y$, which implies global well-posedness. Here, in the mean-field case, things are different, since the evolution equation satisfied by the expectation is an ODE. For simplicity, we only address the one-dimensional case; similar assertions in this section remain true in the multi-dimensional case.
###### Lemma 3.1.
Let $\xi\in L^{2}(\mathcal{F}_{T})$ and $t\in[0,T)$. Suppose that $(\alpha,\beta)\in\mathbb{H}^{2}([t,T],\mathbb{R}\times\mathbb{R}^{d})$ is bounded, $c\in\mathbb{H}^{2}([t,T],\mathbb{R}^{k})$, and $h$ is a real-valued progressively measurable process such that $\int_{t}^{T}|h(r)|dr\in L^{2}(\mathcal{F}_{T}).$ For any $(r,x)\in[t,T]\times\mathbb{R}^{k},$ $g(\cdot,x)\in\mathbb{H}^{2}([t,T])$ and $g(r,\cdot)$ is uniformly Lipschitz continuous:
$\sup_{r\in[t,T]}|g(r,x)-g(r,y)|\leq L|x-y|,\quad\forall x,y\in\mathbb{R}^{k},\ P\text{-}a.s.$
for a constant $L.$ Then, the following linear mean-field BSDE
(61)
$Y(s)=\xi+\int_{s}^{T}\Big{(}\alpha(r)Y(r)+\beta(r)Z(r)+\tilde{\mathbb{E}}[g(r,\tilde{c}(r))\tilde{Y}(r)]+h(r)\Big{)}dr-\int_{s}^{T}Z(r)dB(r),\quad s\in[t,T],$
with $(\tilde{c},\tilde{Y})$ being an independent copy of $(c,Y),$ has a
unique solution
$(Y,Z)\in\mathbb{S}^{2}([t,T])\times\mathbb{H}^{2}([t,T],\mathbb{R}^{d})$.
Moreover, we have
(62) $\|(Y,Z)\|_{\mathbb{S}^{2}\times\mathbb{H}^{2}}^{2}\leq
C(\|\xi\|_{L^{2}}^{2}+||\int_{t}^{T}|h(r)|dr||^{2}_{L^{2}})e^{C(\|c\|_{\mathbb{H}^{2}}+\|g(\cdot,0)\|_{\mathbb{H}^{2}})}$
for a constant $C$ depending on the bound of $\alpha,\beta$ and $L.$ In
particular, if $g$ is uniformly bounded, we have
(63) $\|(Y,Z)\|_{\mathbb{S}^{2}\times\mathbb{H}^{2}}^{2}\leq
C(\|\xi\|_{L^{2}}^{2}+||\int_{t}^{T}|h(r)|dr||^{2}_{L^{2}}).$
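To illustrate the fixed-point scheme used in the proof below in the simplest setting, consider the deterministic reduction of (61) with constant $\alpha$, $g(r,x)\equiv g_{0}$, and deterministic $\xi$ and $h$: then $Z\equiv 0$ and the fixed point solves the backward linear ODE $Y^{\prime}=-((\alpha+g_{0})Y+h)$, $Y(T)=\xi$. The following Python sketch, a toy illustration with our own parameter choices, runs the Picard iteration $y\mapsto Y$ and compares its limit with the explicit solution.

```python
import numpy as np

# Picard iteration y -> Y for the deterministic reduction of (61):
# Y(s) = xi + int_s^T (alpha Y(r) + g0 y(r) + h) dr, whose fixed point
# solves Y' = -((alpha + g0) Y + h), Y(T) = xi.
alpha, g0, h, xi, T = 0.3, -0.5, 1.0, 2.0, 1.0
n = 2000
s = np.linspace(0.0, T, n + 1)
ds = T / n

def backward_step(y):
    # backward Euler for Y(s) = xi + int_s^T (alpha Y + g0 y + h) dr
    Y = np.empty(n + 1)
    Y[-1] = xi
    for j in range(n - 1, -1, -1):
        Y[j] = Y[j + 1] + (alpha * Y[j + 1] + g0 * y[j + 1] + h) * ds
    return Y

Y = np.zeros(n + 1)
for _ in range(50):                      # Picard iteration from the proof
    Y = backward_step(Y)

k = alpha + g0
exact = xi * np.exp(k * (T - s)) + h * (np.exp(k * (T - s)) - 1.0) / k
print(np.max(np.abs(Y - exact)))         # small: O(ds) discretization error
```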
###### Remark 3.2.
Since neither $g(r,x)$ nor $g(r,c(r))$ need be bounded or uniformly integrable for $c\in\mathbb{H}^{2}([t,T],\mathbb{R}^{k})$, the well-posedness of this mean-field BSDE is not an immediate consequence of existing works such as [8].
###### Proof.
For any $y\in\mathbb{H}^{2},$ consider the following classical linear BSDE
(64)
$Y(s)=\xi+\int_{s}^{T}\Big{(}\alpha(r)Y(r)+\beta(r)Z(r)+\tilde{\mathbb{E}}[g(r,\tilde{c}(r))\tilde{y}(r)]+h(r)\Big{)}dr-\int_{s}^{T}Z(r)dB(r),$
where $(\tilde{c},\tilde{y})$ is an independent copy of $(c,y)$. To prove that
it is well-posed on $[t,T]$, we only need to show
(65)
$\mathbb{E}\left[\int_{t}^{T}\Big{|}\tilde{\mathbb{E}}[g(r,\tilde{c}(r))\tilde{y}(r)]\Big{|}dr\right]^{2}<\infty.$
Indeed, by the uniform Lipschitz continuity of $g$, we have
(66)
$\begin{split}&\mathbb{E}\Big{|}\int_{t}^{T}|\tilde{\mathbb{E}}[g(r,\tilde{c}(r))\tilde{y}(r)]|dr\Big{|}^{2}\leq C\mathbb{E}\Big{[}\int_{t}^{T}(\tilde{\mathbb{E}}|g(r,0)\tilde{y}(r)|+\tilde{\mathbb{E}}|\tilde{c}(r)\tilde{y}(r)|)dr\Big{]}^{2}\\\ &\ \ \leq C\mathbb{E}\Big{[}\int_{t}^{T}|g(r,0)|\tilde{\mathbb{E}}|\tilde{y}(r)|dr\Big{]}^{2}+C\Big{[}\int_{t}^{T}\tilde{\mathbb{E}}[|\tilde{c}(r)\tilde{y}(r)|]dr\Big{]}^{2}\\\ &\ \ \leq C\Big{[}(\mathbb{E}\int_{t}^{T}|g(r,0)|^{2}dr)(\int_{t}^{T}[\tilde{\mathbb{E}}|\tilde{y}(r)|]^{2}dr)+(\int_{t}^{T}\tilde{\mathbb{E}}|\tilde{c}(r)|^{2}dr)(\int_{t}^{T}\tilde{\mathbb{E}}|\tilde{y}(r)|^{2}dr)\Big{]}\\\ &\ \ \leq C\Big{[}\|g(\cdot,0)\|_{\mathbb{H}^{2}}^{2}\|y\|_{\mathbb{H}^{2}}^{2}+\|c\|_{\mathbb{H}^{2}}^{2}\|y\|_{\mathbb{H}^{2}}^{2}\Big{]}\leq C(\|g(\cdot,0)\|_{\mathbb{H}^{2}}^{2}+\|c\|_{\mathbb{H}^{2}}^{2})\|y\|_{\mathbb{H}^{2}}^{2},\end{split}$
where we applied the Hölder inequality to the integrals of the forms $\int_{t}^{T}$ and $\int_{t}^{T}\tilde{\mathbb{E}}$ respectively in the third inequality above. Then for any $y\in\mathbb{H}^{2},$ there exists a unique solution $(Y,Z)\in\mathbb{H}^{2}\times\mathbb{H}^{2}$ of BSDE (64). The mapping $y\to Y$, denoted by $\Phi$, is a transformation on $\mathbb{H}^{2}$, and will be shown to be a contraction under the following equivalent norm on $\mathbb{H}^{2}$:
(67)
$\|Y\|^{2}:=\mathbb{E}\int_{t}^{T}e^{As-\int_{s}^{T}(\|g(r,0)\|^{2}_{L^{2}}+\|c(r)\|^{2}_{L^{2}})dr}|Y(s)|^{2}ds$
with $A$ a constant to be determined later. Take any $y^{(1)},y^{(2)}\in\mathbb{H}^{2}$ and denote the corresponding solutions of the classical BSDE (64) by $(Y^{(1)},Z^{(1)}),(Y^{(2)},Z^{(2)})$. Set $(\Delta Y,\Delta Z):=(Y^{(1)}-Y^{(2)},Z^{(1)}-Z^{(2)})$, $\Delta y:=y^{(1)}-y^{(2)}$, and $f(r):=\|g(r,0)\|^{2}_{L^{2}}+\|c(r)\|^{2}_{L^{2}}$. Applying Itô’s formula to $e^{As-\int_{s}^{T}f(r)dr}|\Delta Y(s)|^{2}$ on $s\in[t,T]$, we have
$\begin{split}-e^{At-\int_{t}^{T}f(r)dr}|\Delta
Y(t)|^{2}&=\int_{t}^{T}(A+f(s))e^{As-\int_{s}^{T}f(r)dr}|\Delta Y(s)|^{2}ds\\\
&\ \ \ \ +2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\Delta Yd(\Delta
Y)+\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta Z|^{2}ds.\end{split}$
Therefore,
$\displaystyle e^{At-\int_{t}^{T}f(r)dr}|\Delta Y(t)|^{2}+\int_{t}^{T}(A+f(s))e^{As-\int_{s}^{T}f(r)dr}|\Delta Y(s)|^{2}ds+\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta Z(s)|^{2}ds$
$\displaystyle\quad=2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\Delta
Y[\alpha\Delta Y+\beta\Delta
Z+\tilde{\mathbb{E}}[g(s,\tilde{c}(s))\Delta\tilde{y}]]ds-2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\Delta
Y\Delta ZdB(s)$ $\displaystyle\quad\leq
C\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta
Y|^{2}ds+C\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta
Y|^{2}ds+\frac{1}{2}\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta Z|^{2}ds$
$\displaystyle\ \ \ \ \ \ +2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta
Y||g(s,0)|\|\Delta\tilde{y}\|_{L^{2}}ds+2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta
Y|\|c\|_{L^{2}}\|\Delta\tilde{y}\|_{L^{2}}ds$ $\displaystyle\ \ \ \ \ \
-2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\Delta Y\Delta ZdB(s)$
$\displaystyle\quad\leq C\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta
Y|^{2}ds+\frac{1}{2}\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}|\Delta
Z|^{2}ds-2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\Delta Y\Delta ZdB(s)$
$\displaystyle\ \ \ \ \ \ +2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}(|\Delta
Y||g(s,0)|\|\Delta\tilde{y}\|_{L^{2}})ds+\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}(|\Delta
Y|^{2}\|c\|^{2}_{L^{2}}+\|\Delta\tilde{y}\|^{2}_{L^{2}})ds.$
Taking expectations on both sides of the above inequality (the stochastic integral has zero expectation) and discarding the nonnegative terms involving $|\Delta Y(t)|^{2}$ and $|\Delta Z|^{2}$, we have
$\displaystyle\int_{t}^{T}(A-C+f(s))e^{As-\int_{s}^{T}f(r)dr}\|\Delta
Y(s)\|_{L^{2}}^{2}ds$ $\displaystyle\quad\leq
2\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\mathbb{E}[|\Delta
Y||g(s,0)|]\|\Delta\tilde{y}\|_{L^{2}}ds+\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}(\|\Delta
Y\|_{L^{2}}^{2}\|c\|^{2}_{L^{2}}+\|\Delta\tilde{y}\|^{2}_{L^{2}})ds$
$\displaystyle\quad\leq\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\Big{[}\Big{(}\mathbb{E}[|\Delta
Y||g(s,0)|]\Big{)}^{2}+\|\Delta\tilde{y}\|_{L^{2}}^{2}\Big{]}ds+\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}(\|\Delta
Y\|_{L^{2}}^{2}\|c\|^{2}_{L^{2}}+\|\Delta\tilde{y}\|^{2}_{L^{2}})ds$
$\displaystyle\quad\leq\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\Big{(}\|g(s,0)\|_{L^{2}}^{2}+\|c\|^{2}_{L^{2}}\Big{)}\|\Delta
Y\|_{L^{2}}^{2}ds+\int_{t}^{T}e^{As-\int_{s}^{T}f(r)dr}\|\Delta\tilde{y}\|_{L^{2}}^{2}ds.$
Therefore, choosing the constant $A$ sufficiently large, the mapping $\Phi$ becomes a contraction, which yields the well-posedness of (61).
Now BSDE (61) can be written as the following classical BSDE
(68)
$Y(s)=\xi+\int_{s}^{T}\Big{(}\alpha(r)Y(r)+\beta(r)Z(r)+h^{\prime}(r)\Big{)}dr-\int_{s}^{T}Z(r)dB(r),$
with $h^{\prime}(r)=\tilde{\mathbb{E}}[g(r,\tilde{c}(r))\tilde{Y}(r)]+h(r).$ It is then standard that
(69) $\begin{split}\|Y\|_{\mathbb{S}^{2}}^{2}+\|Z\|_{\mathbb{H}^{2}}^{2}&\leq
C(\|\xi\|_{L^{2}}^{2}+\|\int_{t}^{T}|h^{\prime}(r)|dr\|^{2}_{L^{2}})\\\ &\leq
C(\|\xi\|_{L^{2}}^{2}+\|\int_{t}^{T}|h(r)|dr\|^{2}_{L^{2}}+\|\int_{t}^{T}|\tilde{\mathbb{E}}[g(r,\tilde{c}(r))\tilde{Y}(r)]|dr\|^{2}_{L^{2}}).\end{split}$
Furthermore, similar to the proof of inequality (66), we have
(70)
$\|\int_{t}^{T}|\tilde{\mathbb{E}}[g(r,\tilde{c}(r))\tilde{Y}(r)]|dr\|^{2}_{L^{2}}\leq
C[\int_{t}^{T}\|g(r,0)\|_{L^{2}}^{2}\|Y\|^{2}_{\mathbb{S}^{2},[t,r]}dr+\int_{t}^{T}\|c(r)\|_{L^{2}}^{2}\|Y\|^{2}_{\mathbb{S}^{2},[t,r]}dr].$
Then, using Gronwall’s inequality, we obtain the desired estimate (62).
∎
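The contraction argument above is constructive and can be mimicked numerically. Below is a minimal Picard-iteration sketch, under strong simplifying assumptions: all data are deterministic (so $Z\equiv 0$ and $Y(s)$ is a deterministic function), and the mean-field term $\tilde{\mathbb{E}}[g(r,\tilde{c}(r))\tilde{Y}(r)]$ collapses to $\bar{g}(r)Y(r)$ for a deterministic $\bar{g}$. The concrete choices of `T`, `N`, `alpha`, `g_bar`, `h`, `xi` are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal Picard-iteration sketch of the fixed-point argument in Lemma 3.1,
# under strong simplifying assumptions: deterministic data, so Z == 0 and
# Y(s) is deterministic, and E~[g(r, c~(r)) Y~(r)] collapses to g_bar(r)*Y(r).
# All concrete choices below are illustrative, not from the paper.

T, N = 1.0, 1000
r = np.linspace(0.0, T, N + 1)
dr = T / N

alpha = np.cos(r)                 # bounded coefficient alpha(r)
g_bar = 0.5 * np.ones_like(r)     # stand-in for E~[g(r, c~(r))]
h = np.sin(r)                     # driver h(r)
xi = 1.0                          # deterministic terminal condition

Y = np.zeros(N + 1)               # initial guess y^(0) == 0
for it in range(50):              # the Picard map y -> Y of the proof
    Y_new = np.empty(N + 1)
    Y_new[-1] = xi
    for i in range(N - 1, -1, -1):       # integrate backward in time
        drift = alpha[i + 1] * Y_new[i + 1] + g_bar[i + 1] * Y[i + 1] + h[i + 1]
        Y_new[i] = Y_new[i + 1] + drift * dr
    gap = float(np.max(np.abs(Y_new - Y)))   # sup-distance between iterates
    Y = Y_new
    if gap < 1e-12:                          # geometric decay: the contraction
        break

print(f"converged after {it + 1} iterations, Y(0) = {Y[0]:.6f}")
```

In this toy setting the fixed-point map reduces to a backward integration, and the geometric decay of `gap` across iterations reflects the contraction property established above under the weighted norm (67).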
We consider the following path-dependent master equation
(76)
$\displaystyle\left\\{\begin{array}[]{l}\partial_{t}u(t,\gamma,\mu)+\frac{1}{2}\text{Tr}\left[\partial_{\omega}^{2}u(t,\gamma,\mu)\sigma_{1}(t)\sigma_{1}(t)^{T}\right]+\partial_{\omega}u(t,\gamma,\mu)b_{1}(t)\\\\[5.69054pt]
\quad+\frac{1}{2}\text{Tr}\left[\mathbb{E}^{P}[\partial_{\tilde{\omega}}\partial_{\mu}u(t,\gamma,\mu,\eta)]\sigma_{2}(t)\sigma_{2}(t)^{T}\right]+\mathbb{E}^{P}[\partial_{\mu}u(t,\gamma,\mu,\eta)]b_{2}(t)\\\\[5.69054pt]
\quad+f(t,\gamma,u(t,\gamma,\mu),\sigma_{1}(t)\partial_{\omega}u(t,\gamma,\mu),\mu,\mathcal{L}_{u(t,\eta,\mu)})=0,\\\
\\\ u(T,\gamma,\mu)=\Phi(\gamma_{T},\mu_{T}),\ \ \
(t,\gamma,\mu)\in[0,T]\times\mathbb{C}_{T,d}\times\mathcal{P}_{2}^{C},\end{array}\right.$
where $(b_{1},\sigma_{1},b_{2},\sigma_{2})$ are continuous functions and
$\eta\in\mathbb{M}^{C}_{2}$ with law $\mu$. For simplicity, we take
$(b_{1},\sigma_{1})=(b_{2},\sigma_{2})=(0,I)$ and refer to Remark 4.9 for the
above form. In the following, we write $f(\omega_{t},\mu_{t}):=f(t,\omega,\mu)$ for simplicity, since $f$ is non-anticipative. Moreover, for any $\gamma,\omega\in\mathbb{D}_{T,d},$ define $\omega^{\gamma_{t}}\in\mathbb{D}_{T,d}$ by
(77)
$\omega^{\gamma_{t}}(\cdot):=\gamma_{t}(\cdot)+(\omega(\cdot)-\omega(t))1_{[t,T]}(\cdot).$
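For concreteness, the concatenation (77) admits a direct discrete-time implementation: freeze $\gamma$ up to time $t$ and graft the increments of $\omega$ afterwards. A minimal sketch on a uniform grid (the names `grid`, `gamma`, `omega` are illustrative assumptions):

```python
import numpy as np

def concat_path(gamma, omega, grid, t):
    """Discrete version of (77): omega^{gamma_t}(s) = gamma(s) for s < t,
    and gamma(t) + (omega(s) - omega(t)) for s >= t."""
    i_t = np.searchsorted(grid, t)          # first grid index with grid[i] >= t
    out = gamma.copy()                      # the stopped path gamma_t(.)
    out[i_t:] = gamma[i_t] + (omega[i_t:] - omega[i_t])
    return out

# usage: graft a Brownian-type path omega onto gamma after t = 0.5
grid = np.linspace(0.0, 1.0, 101)
gamma = np.sin(2 * np.pi * grid)
omega = np.cumsum(np.r_[0.0, np.random.default_rng(0).normal(0, 0.1, 100)])
path = concat_path(gamma, omega, grid, t=0.5)
```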
To give a classical solution through FBSDEs, for any
$(t,\eta)\in[0,T]\times\mathbb{M}^{D}_{2}$, we denote by
$(Y^{\eta_{t}},Z^{\eta_{t}})$ the solution of the following path-dependent
mean-field BSDE
(78)
$Y(s)=\Phi(B_{T}^{\eta_{t}},\mathcal{L}_{B_{T}^{\eta_{t}}})+\int_{s}^{T}f(B_{r}^{\eta_{t}},Y(r),Z(r),\mathcal{L}_{B_{r}^{\eta_{t}}},\mathcal{L}_{Y(r)})dr-\int_{s}^{T}Z(r)dB(r),\quad
s\in[t,T].$
On the other hand, for any $\gamma\in\mathbb{D}_{T,d},$ let
$(Y^{\gamma_{t},\eta_{t}},Z^{\gamma_{t},\eta_{t}})$ solve the associated path-
dependent BSDE
(79)
$\mathcal{Y}{(s)}=\Phi(B_{T}^{\gamma_{t}},\mathcal{L}_{B_{T}^{\eta_{t}}})+\int_{s}^{T}f(B_{r}^{\gamma_{t}},\mathcal{Y}(r),\mathcal{Z}(r),\mathcal{L}_{B_{r}^{\eta_{t}}},\mathcal{L}_{Y^{\eta_{t}}(r)})dr-\int_{s}^{T}\mathcal{Z}(r)dB(r),\quad
s\in[t,T].$
###### Definition 3.3.
We write $\Phi\in\mathscr{C}_{T}(\hat{\mathbb{D}}_{T,d})$ (or $\mathscr{C}_{T}$ if no confusion arises) if
$\Phi:\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D}\to\mathbb{R}$ is continuous on
$\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D}.$ Furthermore, we write
* (i)
$\Phi\in\mathscr{C}_{T,lip}$ if it is uniformly Lipschitz continuous on
$\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D}:$
$|\Phi(\omega_{T},\mu_{T})-\Phi(\omega^{\prime}_{T},\mu^{\prime}_{T})|\leq
C(\|\omega_{T}-\omega^{\prime}_{T}\|+W_{2}(\mu_{T},\mu^{\prime}_{T})),\
\forall(\omega,\mu),(\omega^{\prime},\mu^{\prime})\in\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D},$
for some constant $C$;
* (ii)
$\Phi\in\mathscr{C}^{1,1}_{T,lip}$ if $\Phi\in\mathscr{C}_{T,lip}$ and $\Phi$
is continuously strongly vertically differentiable in path and measure.
Furthermore, for any $\tau\in[0,T),$ SVDs $\partial_{\omega_{\tau}}\Phi$ and
$\partial_{\mu_{\tau}}\Phi$ are uniformly Lipschitz continuous in
$(\omega,\mu)\in\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D}$ and
$(\omega,\mu,\tilde{\omega})\in\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D}\times\mathbb{D}_{T,d}$,
respectively;
* (iii)
$\Phi\in\mathscr{C}^{2,1,1}_{T,lip}$ if $\Phi\in\mathscr{C}^{1,1}_{T,lip}$ and
for any
$(\tau,\omega,\mu,\tilde{\omega})\in\hat{\mathbb{D}}_{T,d}\times\mathbb{D}_{T,d},$
its SVDs $\partial_{\omega_{\tau}}\Phi(\cdot,\mu_{T})$ and
$\partial_{\mu_{\tau}}\Phi(\omega_{T},\mu_{T},\cdot)$ are continuously
strongly vertically differentiable at $(\tau,T,\omega)$ and
$(\tau,T,\tilde{\omega})$ respectively. Moreover, all second-order derivatives
are uniformly Lipschitz continuous.
To obtain the well-posedness and estimates of BSDEs (78) and (79), we assume
that
* (H0)
$(i)$ The functional $\Phi\in\mathscr{C}_{T,lip}(\hat{\mathbb{D}}_{T,d})$;
$(ii)$ $f$ is a non-anticipative continuous function on
$[0,T]\times\mathbb{D}_{T,d}\times\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{P}_{2}^{D}\times\mathcal{P}_{2}(\mathbb{R})$,
and for any
$(t,\omega,\mu)\in[0,T]\times\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D},$
$f(t,\omega,\cdot,\cdot,\mu,\cdot)$ is continuously differentiable on
$\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R})$. Moreover,
for any $t\in[0,T]$, $f(t,\cdot,\cdot,\cdot,\cdot,\cdot)$ and
$\partial_{\nu}f(t,\cdot,\cdot,\cdot,\cdot,\cdot,\cdot)$ are uniformly
Lipschitz continuous.
Note that under Assumption (H0), the functional
(80)
$\hat{f}(r,y,z,\nu):=f(B_{r}^{\eta_{t}},y,z,\mathcal{L}_{B_{r}^{\eta_{t}}},\nu),\quad(r,y,z,\nu)\in[t,T]\times\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}),$
is uniformly Lipschitz continuous in $(y,z)\in\mathbb{R}\times\mathbb{R}^{d}$.
According to [11, Theorem 4.23], BSDE (78) is well posed with
$(Y^{\eta_{t}},Z^{\eta_{t}},\mathcal{L}_{Y^{\eta_{t}}})\in\mathbb{S}^{2}\times\mathbb{H}^{2}\times\mathcal{P}_{2}(\mathbb{R}).$
Then (79) is a well-defined classical BSDE with
$(Y^{\gamma_{t},\eta_{t}},Z^{\gamma_{t},\eta_{t}})\in\mathbb{S}^{p}\times\mathbb{H}^{p}$
for any $p\geq 1.$ In the following BSDEs, we write
$\Theta^{\eta_{t}}_{r}:=(B_{r}^{\eta_{t}},Y^{\eta_{t}}(r),Z^{\eta_{t}}(r)),\Theta^{\gamma_{t},\eta_{t}}_{r}:=(B_{r}^{\gamma_{t}},Y^{\gamma_{t},\eta_{t}}(r),Z^{\gamma_{t},\eta_{t}}(r))$,
$\mathcal{L}_{\Theta^{\eta_{t}}_{r}}:=(\mathcal{L}_{B_{r}^{\eta_{t}}},\mathcal{L}_{Y^{\eta_{t}}(r)})$
and $(Y,Z):=(Y(t),Z(t))$ if no confusion arises. Then we have the following
basic estimates for BSDEs (78) and (79).
###### Lemma 3.4.
Assume that $(\Phi,f)$ satisfies (H0). For any $K>0$ and
$(\gamma,\eta),(\gamma^{\prime},\eta^{\prime})\in\mathbb{D}_{T,d}\times\mathbb{M}_{2}^{D}$
such that
$|||\mathcal{L}_{\eta_{t}}|||,|||\mathcal{L}_{\eta^{\prime}_{t}}|||\leq K,$ we
have
(81)
$\displaystyle\|(Y^{\eta_{t}},Z^{\eta_{t}})\|_{\mathbb{S}^{2}\times\mathbb{H}^{2}}\leq
C(1+\|\eta_{t}\|_{\mathbb{S}^{2}}),$ (82)
$\displaystyle\|(Y^{\gamma_{t},\eta_{t}},Z^{\gamma_{t},\eta_{t}})\|_{\mathbb{S}^{p}\times\mathbb{H}^{p}}\leq
C_{p}(1+\|\gamma_{t}\|+\|\eta_{t}\|_{\mathbb{S}^{2}}),\ \ \forall p\geq 1,$
(83)
$\displaystyle\|(Y^{\eta_{t}}-Y^{\eta^{\prime}_{t}},Z^{\eta_{t}}-Z^{\eta^{\prime}_{t}})\|_{\mathbb{S}^{2}\times\mathbb{H}^{2}}\leq
C_{K}\|\eta_{t}-\eta^{\prime}_{t}\|_{\mathbb{S}^{2}},\quad\quad\text{and }$
(84)
$\displaystyle\|(Y^{\gamma_{t},\eta_{t}}-Y^{\gamma^{\prime}_{t},\eta^{\prime}_{t}},Z^{\gamma_{t},\eta_{t}}-Z^{\gamma^{\prime}_{t},\eta^{\prime}_{t}})\|_{\mathbb{S}^{p}\times\mathbb{H}^{p}}\leq
C_{K,p}(\|\gamma_{t}-\gamma^{\prime}_{t}\|+W_{2}(\mathcal{L}_{\eta_{t}},\mathcal{L}_{\eta^{\prime}_{t}})),\
\ \forall p\geq 1,$
where $(C,C_{p})$ does not depend on $(\gamma,\eta)$, and $(C_{K},C_{K,p})$
does not depend on $(\gamma,\gamma^{\prime})$.
###### Remark 3.5.
According to inequality (84),
$(Y^{\gamma_{t},\eta_{t}},Z^{\gamma_{t},\eta_{t}})$ and
$(Y^{\gamma_{t},\eta^{\prime}_{t}},Z^{\gamma_{t},\eta^{\prime}_{t}})$ are
indistinguishable if $\mathcal{L}_{\eta_{t}}=\mathcal{L}_{\eta^{\prime}_{t}},$
which implies that the following definition is well posed:
(85)
$(Y^{\gamma_{t},\mathcal{L}_{\eta_{t}}},Z^{\gamma_{t},\mathcal{L}_{\eta_{t}}}):=(Y^{\gamma_{t},\eta_{t}},Z^{\gamma_{t},\eta_{t}}).$
The proof of Lemma 3.4 is rather standard, relying on an application of Lemma 3.1, and is deferred to the appendix.
### 3.1. First-order differentiability
For any $(\gamma,\eta)\in\mathbb{D}_{T,d}\times\mathbb{M}^{D}_{2},$ we consider in the following the first-order differentiability of $Y^{{\gamma_{t},\eta_{t}}}=Y^{\gamma_{t},\mathcal{L}_{\eta_{t}}}$ with respect to $\gamma_{t}$ and $\mathcal{L}_{\eta_{t}}.$ For the differentiability in $\gamma_{t},$ let
(86)
$\begin{split}&\hat{f}(\omega_{s},y,z):=f(\omega_{s},y,z,\mathcal{L}_{B_{s}^{\eta_{t}}},\mathcal{L}_{Y^{\eta_{t}}(s)}),\\\
&\hat{\Phi}(\omega_{T}):=\Phi(\omega_{T},\mathcal{L}_{B_{T}^{\eta_{t}}}),\
\forall(s,\omega,y,z)\in[t,T]\times\mathbb{D}_{T,d}\times\mathbb{R}\times\mathbb{R}^{d},\end{split}$
and then the solution $Y^{\gamma_{t},\eta_{t}}{(s)}$ to equation (79) solves
the following path-dependent BSDE
(87)
$\hat{Y}{(s)}=\hat{\Phi}(B_{T}^{\gamma_{t}})+\int_{s}^{T}\hat{f}(B^{\gamma_{t}}_{r},\hat{Y}(r),\hat{Z}(r))dr-\int_{s}^{T}\hat{Z}(r)dB(r).$
Define $\hat{u}_{\eta_{t}}(t,{\gamma}):=Y^{\gamma_{t},\eta_{t}}(t).$ If $f$ and $\Phi$ are regular enough, then, according to [41, Theorem 4.5], $\hat{u}_{\eta_{t}}(t,{\gamma})$ is twice vertically differentiable at $(t,\gamma),$ and moreover, for any $s\geq t,$
(88) $\hat{u}_{\eta_{t}}(s,B^{\gamma_{t}})=Y^{\gamma_{t},\eta_{t}}(s),\ \ \
\partial_{\gamma_{t}}\hat{u}_{\eta_{t}}(s,B^{\gamma_{t}})=Z^{\gamma_{t},\eta_{t}}(s).$
Furthermore, $\hat{u}_{\eta_{t}}(t,\gamma)$ is the unique solution to the
following semilinear PPDE
(89)
$\left\{\begin{array}[]{l}\partial_{t}\hat{u}_{\eta_{t}}(t,\gamma)+\frac{1}{2}\text{Tr}\left[\partial_{\omega}^{2}\hat{u}_{\eta_{t}}(t,\gamma)\right]+\hat{f}(\gamma_{t},\hat{u}_{\eta_{t}}(t,{\gamma}),\partial_{\omega}\hat{u}_{\eta_{t}}(t,{\gamma}))=0,\\\
\hat{u}_{\eta_{t}}(T,\gamma)=\hat{\Phi}(\gamma),\quad(t,\gamma)\in[0,T]\times\mathbb{C}_{T,d}.\end{array}\right.$
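As a sanity check of the representation (88), note that in the driver-free case $f\equiv 0$, BSDE (87) gives $\hat{Y}(s)=\mathbb{E}[\hat{\Phi}(B^{\gamma_{t}})\,|\,\mathcal{F}_{s}]$, so $\hat{u}_{\eta_{t}}(t,\gamma)=\mathbb{E}[\hat{\Phi}(B^{\gamma_{t}})]$ can be estimated by plain Monte Carlo over concatenated paths. A sketch under these assumptions, with an illustrative running-maximum functional standing in for $\hat{\Phi}$ and illustrative grid parameters:

```python
import numpy as np

# Monte Carlo sanity check of (88) in the driver-free case f == 0, where (87)
# reduces to Y(s) = E[Phi_hat(B^{gamma_t}) | F_s], hence
# u_hat(t, gamma) = E[Phi_hat(B^{gamma_t})].  Phi_hat, gamma, and the grid
# parameters are illustrative choices.

rng = np.random.default_rng(0)
T, N, M = 1.0, 200, 20_000
grid = np.linspace(0.0, T, N + 1)
dt = T / N
t = 0.5
gamma = np.sin(2 * np.pi * grid)            # a fixed deterministic path

def Phi_hat(paths):                         # illustrative path functional
    return paths.max(axis=-1)

dB = rng.normal(0.0, np.sqrt(dt), size=(M, N))
B = np.concatenate([np.zeros((M, 1)), np.cumsum(dB, axis=1)], axis=1)

i_t = np.searchsorted(grid, t)              # grid index of time t
paths = np.tile(gamma, (M, 1))              # gamma frozen on [0, t]
paths[:, i_t:] = gamma[i_t] + (B[:, i_t:] - B[:, [i_t]])   # concatenation (77)

u_hat = Phi_hat(paths).mean()               # u_hat(t, gamma) by Monte Carlo
print(f"u_hat({t}, gamma) ~= {u_hat:.4f}")
```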
In the following, we denote by
$\partial_{(t,\omega,y,z,\mu,\nu,{\omega_{\tau}},{\mu_{\tau}})}f$ the
derivative vector
$(\partial_{t}f,\partial_{\omega}f,\partial_{y}f,\partial_{z}f,\partial_{\mu}f,\partial_{\nu}f,\partial_{\omega_{\tau}}f,\partial_{\mu_{\tau}}f).$
In this subsection, we assume that
* (H1)
$(i)$ The functional
$\Phi\in\mathscr{C}^{1,1}_{T,lip}(\hat{\mathbb{D}}_{T,d})$; $(ii)$ $f$ is a
non-anticipative continuous function on
$[0,T]\times\mathbb{D}_{T,d}\times\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{P}_{2}^{D}\times\mathcal{P}_{2}(\mathbb{R})$,
and for any
$(t,\omega,\mu)\in[0,T]\times\mathbb{D}_{T,d}\times\mathcal{P}_{2}^{D},$
$f(t,\omega,\cdot,\cdot,\mu,\cdot)$ is differentiable on
$\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R})$ with bounded
derivatives. For any
$(y,z,\nu)\in\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}),$
$f(t,\omega,y,z,\cdot,\nu)$ is strongly vertically differentiable at $\mu_{t}$
and $f(t,\cdot,y,z,\mu,\nu)$ is strongly vertically differentiable at
$\omega_{t}$. Moreover, $\partial_{(y,z,\nu,{\omega_{\tau}},{\mu_{\tau}})}f$ is continuous, and for any $\tau\leq t,$ both $f(t,\cdot)$ and $\partial_{(y,z,\nu,{\omega_{\tau}},{\mu_{\tau}})}f(t,\cdot)$ are uniformly Lipschitz continuous.
###### Remark 3.6.
Assume that $\Phi:\mathbb{D}_{T,d}\to\mathbb{R}$ is twice continuously strongly vertically differentiable and satisfies the following local Lipschitz condition: for any $t\in[0,T]$ and $\phi=\Phi,\partial_{\omega_{t}}\Phi,\partial_{\omega_{t}}^{2}\Phi$,
(90) $|\phi(\omega_{T})-\phi(\omega^{\prime}_{T})|\leq C(1+\|\omega_{T}\|^{k}+\|\omega^{\prime}_{T}\|^{k})\|\omega_{T}-\omega^{\prime}_{T}\|,\quad\forall\ (\omega,\omega^{\prime})\in\mathbb{D}_{T,d}^{2}.$
Then the main result of [41, Theorem 4.5] remains valid. For the reader's convenience, its proof is sketched in the appendix using our partial Itô-Dupire formula.
###### Lemma 3.7.
Let $(f,\Phi)$ satisfy Assumption (H1). Then
$(Y^{\gamma_{t},\eta_{t}},Z^{\gamma_{t},\eta_{t}})$ is almost surely
vertically differentiable at $(t,\gamma).$ The derivative
$(\partial_{\omega_{t}}Y^{{\gamma_{t},\eta_{t}}},\partial_{\omega_{t}}Z^{{\gamma_{t},\eta_{t}}})\in\mathbb{S}^{p}([t,T],\mathbb{R}^{d})\times\mathbb{H}^{p}([t,T],\mathbb{R}^{d\times
d}),$ for any $p\geq 1$, is the unique solution to BSDE
(91) $\begin{split}\mathcal{Y}(s)=&\
\partial_{\omega_{t}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}})+\int_{s}^{T}\partial_{\omega_{t}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})dr+\int_{s}^{T}\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\mathcal{Y}(r)dr\\\
&+\int_{s}^{T}\mathcal{Z}(r)\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})dr-\int_{s}^{T}\mathcal{Z}(r)dB(r),\
\ s\in[t,T].\end{split}$
Furthermore, since
$(\partial_{\omega_{t}}Y^{{\gamma_{t},\eta_{t}}},\partial_{\omega_{t}}Z^{{\gamma_{t},\eta_{t}}})$
is independent of $\mathcal{F}_{t}$, we have that for any $K>0,$ there are
positive constants $C_{p}$ and $C_{K,p}$ such that
(92)
$\begin{split}&\|(\partial_{\omega_{t}}Y^{{\gamma_{t},\eta_{t}}},\partial_{\omega_{t}}Z^{{\gamma_{t},\eta_{t}}})\|_{\mathbb{S}^{p}\times\mathbb{H}^{p}}<C_{p},\\\
&\|(\partial_{\omega_{t}}Y^{{\gamma_{t},\eta_{t}}}-\partial_{\omega_{t}}Y^{\gamma^{\prime}_{t},\eta^{\prime}_{t}},\partial_{\omega_{t}}Z^{{\gamma_{t},\eta_{t}}}-\partial_{\omega_{t}}Z^{\gamma^{\prime}_{t},\eta^{\prime}_{t}})\|_{\mathbb{S}^{p}\times\mathbb{H}^{p}}<C_{K,p}(\|\gamma_{t}-\gamma^{\prime}_{t}\|+W_{2}(\mathcal{L}_{\eta_{t}},\mathcal{L}_{\eta^{\prime}_{t}})),\\\
&\ \ \ \ \ \ \forall\
(\gamma,\eta),(\gamma^{\prime},\eta^{\prime})\in\mathbb{D}_{T,d}\times\mathbb{M}_{2}^{D}\text{
such that
}|||\mathcal{L}_{\eta_{t}}|||,|||\mathcal{L}_{\eta^{\prime}_{t}}|||\leq
K.\end{split}$
###### Proof.
According to the preceding remark and [41, Lemma 3.8], we have the first two
assertions.
In view of the standard estimate for linear BSDEs, Lipschitz continuity of
$(\Phi,f)$ and Lemma 2.4, we have
$\displaystyle\|\partial_{\omega_{t}}Y^{{\gamma_{t},\eta_{t}}}\|_{\mathbb{S}^{p}}+\|\partial_{\omega_{t}}Z^{{\gamma_{t},\eta_{t}}}\|_{\mathbb{H}^{p}}$
$\displaystyle\ \ \ \leq
C(\|\partial_{\omega_{t}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}})\|_{L^{p}}+\|\int_{t}^{T}|\partial_{\omega_{t}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})|dr\|_{L^{p}}),$
and thus the first inequality of (92) follows. The second inequality is proved in a way similar to Lemma 3.4. ∎
Furthermore, we have
###### Proposition 3.8.
Let $(f,\Phi)$ satisfy Assumption (H1). Then for any $\tau\leq t,$
$(Y^{{\gamma_{t},\eta_{t}}}(s),Z^{{\gamma_{t},\eta_{t}}}(s))$ is strongly
vertically differentiable at $(\tau,t,\gamma)$. Moreover, the derivative
$(\partial_{\omega_{\tau}}Y^{{\gamma_{t},\eta_{t}}},\partial_{\omega_{\tau}}Z^{{\gamma_{t},\eta_{t}}})\in\mathbb{S}^{p}([t,T],\mathbb{R}^{d})\times\mathbb{H}^{p}([t,T],\mathbb{R}^{d\times
d}),\ \forall\ p\geq 1$, is the unique solution to BSDE
(93) $\begin{split}\mathcal{Y}(s)=\
&\partial_{\omega_{\tau}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}})+\int_{s}^{T}\partial_{\omega_{\tau}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})dr+\int_{s}^{T}\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\mathcal{Y}(r)dr\\\
&+\int_{s}^{T}\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\mathcal{Z}(r)dr-\int_{s}^{T}\mathcal{Z}(r)dB(r),\
\ s\in[t,T].\end{split}$
Furthermore, since
$(\partial_{\omega_{\tau}}Y^{{\gamma_{t},\eta_{t}}},\partial_{\omega_{\tau}}Z^{{\gamma_{t},\eta_{t}}})$
is independent of $\mathcal{F}_{t}$, we have that for any $K>0,$
(94)
$\begin{split}&\|(\partial_{\omega_{\tau}}Y^{{\gamma_{t},\eta_{t}}},\partial_{\omega_{\tau}}Z^{{\gamma_{t},\eta_{t}}})\|_{\mathbb{S}^{p}\times\mathbb{H}^{p}}<C_{p},\\\
&\|(\partial_{\omega_{\tau}}Y^{{\gamma_{t},\eta_{t}}}-\partial_{\omega_{\tau}}Y^{\gamma^{\prime}_{t},\eta^{\prime}_{t}},\partial_{\omega_{\tau}}Z^{{\gamma_{t},\eta_{t}}}-\partial_{\omega_{\tau}}Z^{\gamma^{\prime}_{t},\eta^{\prime}_{t}})\|_{\mathbb{S}^{p}\times\mathbb{H}^{p}}<C_{K,p}(\|\gamma_{t}-\gamma^{\prime}_{t}\|+W_{2}(\mathcal{L}_{\eta_{t}},\mathcal{L}_{\eta^{\prime}_{t}})),\\\
&\ \ \ \ \ \ \forall\
(\gamma,\eta),(\gamma^{\prime},\eta^{\prime})\in\mathbb{D}_{T,d}\times\mathbb{M}_{2}^{D}\text{
such that
}|||\mathcal{L}_{\eta_{t}}|||,|||\mathcal{L}_{\eta^{\prime}_{t}}|||\leq
K,\end{split}$
for some positive constants $C_{p}$ and $C_{K,p}$.
###### Proof.
In view of Assumption (H1) and Lemma 3.1, we see that equation (93) has a
unique solution
$(\partial_{\omega_{\tau}}Y,\partial_{\omega_{\tau}}Z)\in\mathbb{S}^{p}\times\mathbb{H}^{p},\
\forall p\geq 1.$ Here, we consider the one-dimensional case for simplicity.
For any $h>0,$ recall that $\gamma^{\tau,h}=\gamma+h1_{[\tau,T]}$. Set
(95)
$\begin{split}&\gamma^{\prime}:=\gamma^{\tau,h},\quad\Delta_{h}Y:=\frac{1}{h}(Y^{\prime}-Y):=\frac{1}{h}(Y^{\gamma^{\tau,h}_{t},\eta_{t}}-Y^{{\gamma_{t},\eta_{t}}}),\quad\quad\text{and}\\\
&\quad\Delta_{h}Z:=\frac{1}{h}(Z^{\prime}-Z):=\frac{1}{h}(Z^{\gamma^{\tau,h}_{t},\eta_{t}}-Z^{{\gamma_{t},\eta_{t}}}).\end{split}$
Then we know that $(\Delta_{h}Y,\Delta_{h}Z)$ solves the following BSDE
$\begin{split}\Delta_{h}Y(s)&=\frac{1}{h}(\Phi^{\prime}-\Phi)+\frac{1}{h}\int_{s}^{T}[f(\Theta^{\gamma^{\prime},\eta}_{r},\mathcal{L}_{\Theta^{\eta}_{r}})-f(\Theta^{\gamma,\eta}_{r},\mathcal{L}_{\Theta^{\eta}_{r}})]dr-\int_{s}^{T}\Delta_{h}Z(r)dB(r)\\\
&=:\Delta_{h}\Phi+\int_{s}^{T}\Big{(}a_{r}\Delta_{h}Y(r)+b_{r}\Delta_{h}Z(r)+\Delta_{h}f\Big{)}dr-\int_{s}^{T}\Delta_{h}Z(r)dB(r),\end{split}$
where
$\displaystyle\Phi^{\prime}:=\Phi(B^{\gamma^{\prime}},\mathcal{L}_{B^{\eta}}),\
\ \Phi:=\Phi(B^{\gamma},\mathcal{L}_{B^{\eta}}),\ \
\Delta_{h}\Phi:=\int_{0}^{1}\partial_{\omega_{\tau}}\Phi(B^{\gamma^{\tau,h\theta}},\mathcal{L}_{B^{\eta}})\
d\theta,$ $\displaystyle
a_{r}:=\int_{0}^{1}\partial_{y}{f}(B^{\gamma^{\prime}}_{r},Y+\theta(Y^{\prime}-Y),Z^{\prime},\mathcal{L}_{\Theta^{\eta}_{r}})\
d\theta,\ \
b_{r}:=\int_{0}^{1}\partial_{z}{f}(B^{\gamma^{\prime}}_{r},Y,Z+\theta(Z^{\prime}-Z),\mathcal{L}_{\Theta^{\eta}_{r}})\
d\theta,$
$\displaystyle\quad\text{and}\quad\Delta_{h}f:=\frac{1}{h}f(B^{\omega}_{r},Y,Z,\mathcal{L}_{B^{\eta}_{r}},\mathcal{L}_{Y})\Big{|}_{\omega=\gamma}^{\omega=\gamma^{\prime}}=\int_{0}^{1}\partial_{\omega_{\tau}}f(B^{\gamma^{\tau,h\theta}},Y,Z,\mathcal{L}_{B^{\eta}_{r}},\mathcal{L}_{Y})\
d\theta.$
Then $(\delta Y,\delta
Z):=(\Delta_{h}Y-\partial_{\omega_{\tau}}Y,\Delta_{h}Z-\partial_{\omega_{\tau}}Z)$
satisfies BSDE
$\begin{split}\delta Y(s)=&\
(\Delta_{h}\Phi-\partial_{\omega_{\tau}}\Phi)+\int_{s}^{T}\left(a_{r}\delta
Y+b_{r}\delta
Z+(\Delta_{h}f-\partial_{\omega_{\tau}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}}))\right)dr\\\
\ \
&+\int_{s}^{T}[(a_{r}-\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}}))\partial_{\omega_{\tau}}Y+(b_{r}-\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}}))\partial_{\omega_{\tau}}Z]dr-\int_{s}^{T}\delta
ZdB(r).\end{split}$
According to the standard estimate for BSDEs (or Lemma 3.1 for $p=2$) and Lemma 3.4, we have
$\begin{split}\|\delta Y\|_{\mathbb{S}^{p}}^{p}+\|\delta Z\|_{\mathbb{H}^{p}}^{p}&\leq C\Big(\|\Delta_{h}\Phi-\partial_{\omega_{\tau}}\Phi\|_{L^{p}}^{p}+\|\int_{t}^{T}|\Delta_{h}f-\partial_{\omega_{\tau}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})|dr\|_{L^{p}}^{p}\Big)+O(|h|)\\\
&\leq O(|h|),\end{split}$
which establishes the strong vertical differentiability.
∎
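In the same toy Monte Carlo setting introduced after (89), the strong vertical derivative of Proposition 3.8 can be sanity-checked by bump-and-revalue, i.e. by the difference quotient of (95) with the bumped path $\gamma^{\tau,h}=\gamma+h1_{[\tau,T]}$. A sketch reusing `grid`, `gamma`, `B`, `M`, `i_t`, and `Phi_hat` from that snippet; the choices of `tau` and `h` are illustrative:

```python
import numpy as np  # reuses grid, gamma, B, M, i_t, Phi_hat from the sketch above

# Bump-and-revalue check of the strong vertical derivative at (tau, t, gamma):
# the difference quotient of (95) with gamma^{tau,h} = gamma + h 1_{[tau,T]}.
# tau and h are illustrative choices; the same driving paths B are reused,
# so the Monte Carlo noise cancels in the difference.

tau, h = 0.25, 1e-3
i_tau = np.searchsorted(grid, tau)

def u_hat_at(g):
    """Monte Carlo estimate of u_hat(t, g), with the same driving paths B."""
    p = np.tile(g, (M, 1))
    p[:, i_t:] = g[i_t] + (B[:, i_t:] - B[:, [i_t]])
    return Phi_hat(p).mean()

gamma_bumped = gamma.copy()
gamma_bumped[i_tau:] += h                   # bump the path on [tau, T]
dY = (u_hat_at(gamma_bumped) - u_hat_at(gamma)) / h
print(f"finite-difference derivative at tau: {dY:.4f}")
```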
To show the differentiability of $Y^{\gamma_{t},\eta_{t}}$ with respect to $\eta_{t},$ we follow an argument similar to the state-dependent case for SDEs in [9]. First, we show that $Y^{\gamma_{t},\eta_{t}}$ is Gâteaux differentiable in $\eta_{t}$ in the sense of (35) and Remark 2.8. To this end, we need to prove that for any $\xi\in L^{2}(\mathcal{F}_{t},\mathbb{R}^{d})$ and $\eta_{t}^{\lambda\xi}:=\eta_{t}+\lambda\xi 1_{[t,T]}$, $\lambda>0,$ the following limit exists in $\mathbb{S}^{2}([t,T],\mathbb{R}^{d}):$
(96) $\partial_{\eta}Y^{\gamma_{t},\eta_{t}}(\xi):=\lim_{\lambda\rightarrow 0}\frac{1}{\lambda}(Y^{\gamma_{t},\eta_{t}^{\lambda\xi}}-Y^{\gamma_{t},\eta_{t}}).$
Then we show that
$\partial_{\eta}Y^{\gamma_{t},\eta_{t}}(\cdot):L^{2}(\mathcal{F}_{t},\mathbb{R}^{d})\to\mathbb{S}^{2}([t,T],\mathbb{R}^{d})$
is a bounded linear operator, and moreover, it is continuous in the following
sense: for any $\zeta\in L^{2}(\mathcal{F}_{t},\mathbb{R}^{d}),$
$\partial_{\eta}Y^{\gamma_{t},\eta_{t}+\zeta 1_{[t,T]}}$ converges to
$\partial_{\eta}Y^{\gamma_{t},\eta_{t}}$ in the sense of operators as $\zeta$
goes to zero. In view of Remark 2.5, we see that $Y^{\gamma_{t},\eta_{t}}$ is
Fréchet (vertically) differentiable in the sense of (34) and Remark 2.8.
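Spelled out, the Fréchet property obtained this way amounts to the following first-order expansion (a standard consequence of Gâteaux differentiability together with continuity of the derivative, recorded here for orientation):

$\Big\|Y^{\gamma_{t},\eta_{t}+\zeta 1_{[t,T]}}-Y^{\gamma_{t},\eta_{t}}-\partial_{\eta}Y^{\gamma_{t},\eta_{t}}(\zeta)\Big\|_{\mathbb{S}^{2}([t,T])}=o\big(\|\zeta\|_{L^{2}}\big)\quad\text{as }\|\zeta\|_{L^{2}(\mathcal{F}_{t},\mathbb{R}^{d})}\rightarrow 0.$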
To this end, consider the following linear BSDE
(97)
$\begin{split}\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}(s)&=\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}(r)dr+\int_{s}^{T}\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\mathcal{Z}^{{\gamma_{t},\eta_{t}},\xi}(r)dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}+\tilde{\mathcal{Y}}^{\tilde{\eta}_{t},\tilde{\xi}})(r)]dr\\\
&\ \ \ -\int_{s}^{T}\mathcal{Z}^{{\gamma_{t},\eta_{t}},\xi}(r)dB(r),\quad
s\in[t,T].\end{split}$
Here,
$(\tilde{B},\tilde{\eta},\tilde{\xi},\tilde{Y}^{\tilde{\eta}},\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}},\tilde{\mathcal{Y}}^{\tilde{\eta}_{t},\tilde{\xi}})$
is an independent copy of
$(B,\eta,\xi,Y^{\eta_{t}},\partial_{\omega_{t}}Y^{\gamma_{t},\mathcal{L}_{\eta_{t}}}|_{\gamma=\eta},\mathcal{Y}^{\eta_{t},\xi})$,
and $\mathcal{Y}^{\eta_{t},\xi}$ satisfies the following linear mean-field
BSDE
(98)
$\begin{split}\mathcal{Y}^{\eta_{t},\xi}(s)&=\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B^{\eta_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}}_{r})\tilde{\xi}]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{y}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\mathcal{Y}^{{\eta_{t}},\xi}(r)dr+\int_{s}^{T}\partial_{z}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\mathcal{Z}^{{\eta_{t}},\xi}(r)dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}+\tilde{\mathcal{Y}}^{\tilde{\eta}_{t},\tilde{\xi}})(r)]dr\\\
&\ \ \ -\int_{s}^{T}\mathcal{Z}^{{\eta_{t}},\xi}(r)dB(r),\quad
s\in[t,T].\end{split}$
###### Lemma 3.9.
For any $\xi\in L^{2}(\mathcal{F}_{t},\mathbb{R}^{d}),$ there exists a unique
solution
$(\mathcal{Y}^{\eta_{t},\xi},\mathcal{Z}^{\eta_{t},\xi})\in\mathbb{S}^{2}([t,T])\times\mathbb{H}^{2}([t,T],\mathbb{R}^{d})$
for BSDE (98). Moreover,
$(\mathcal{Y}^{\eta_{t},\xi},\mathcal{Z}^{\eta_{t},\xi})$ is linear in $\xi$,
and we have
(99)
$\|(\mathcal{Y}^{\eta_{t},\xi},\mathcal{Z}^{\eta_{t},\xi})\|_{\mathbb{S}^{2}\times\mathbb{H}^{2}}\leq
C\|\xi\|_{L^{2}}$
for some constant $C$.
###### Proof.
By Lipschitz continuity of $(\partial_{\mu_{t}}\Phi,\partial_{\mu_{t}}f),$ we
have
$\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B_{T}^{\eta_{t}},\mathcal{L}_{B_{T}^{\eta_{t}}},\tilde{B}_{T}^{\tilde{\eta}_{t}})\tilde{\xi}]\in
L^{2}(\mathcal{F}_{T}),\ \ \
\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}}_{r})\tilde{\xi}]\in
L^{2}(\mathcal{F}_{r}).$
Since $f$ is uniformly Lipschitz continuous in $(y,z),$
$\partial_{(y,z)}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})$
is uniformly bounded. Set
$g(r,x):=\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},x)$.
In view of Lemma 3.4 and Assumption (H1), we see that
$g(\cdot,0)\in\mathbb{H}^{2}.$ Then by Lemma 3.1, to show the well-posedness
of linear mean-field BSDE (98), we only need to check the following
$\int_{t}^{T}\left|\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi})]\right|dr\in
L^{2}(\mathcal{F}_{T}).$
Let
(100)
$F_{2}(t,x,y,z,\mu,\nu):=\tilde{\mathbb{E}}[\partial_{\nu}f(t,x,y,z,\mu,\nu,\tilde{Y}^{\tilde{\eta}_{t}}(r))(\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}(r)\tilde{\xi})].$
Then by Lipschitz continuity of $\partial_{\nu}f$ and Lemma 3.7, we have
$\displaystyle F_{2}(t,x,y,z,\mu,\nu)$ $\displaystyle\ \
=\tilde{\mathbb{E}}\Big{[}\tilde{\mathbb{E}}_{\tilde{\mathcal{F}}_{t}}[\partial_{\nu}f(t,x,y,z,\mu,\nu,\tilde{Y}^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(r))\partial_{\omega_{t}}\tilde{Y}^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(r)]\Big{|}_{\gamma_{t}=\tilde{\eta}_{t}}\tilde{\xi}\Big{]}$
$\displaystyle\ \ \leq C\tilde{\mathbb{E}}\Big{[}\tilde{\mathbb{E}}_{\tilde{\mathcal{F}}_{t}}[|\tilde{Y}^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(r)||\partial_{\omega_{t}}\tilde{Y}^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(r)|]\Big{|}_{\gamma_{t}=\tilde{\eta}_{t}}\tilde{\xi}\Big{]}+|\partial_{\nu}f(t,x,y,z,\mu,\nu,0)|\,\tilde{\mathbb{E}}\Big{[}\tilde{\mathbb{E}}_{\tilde{\mathcal{F}}_{t}}[|\tilde{Y}^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(r)|]\Big{|}_{\gamma_{t}=\tilde{\eta}_{t}}\tilde{\xi}\Big{]}$
$\displaystyle\ \ \leq C\tilde{\mathbb{E}}\Big{[}\big(\tilde{\mathbb{E}}_{\tilde{\mathcal{F}}_{t}}[|\tilde{Y}^{\gamma_{t},\mathcal{L}_{\eta_{t}}}|^{2}]\big)^{\frac{1}{2}}\big(\tilde{\mathbb{E}}_{\tilde{\mathcal{F}}_{t}}[|\partial_{\omega_{t}}\tilde{Y}^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(r)|^{2}]\big)^{\frac{1}{2}}\Big{|}_{\gamma_{t}=\tilde{\eta}_{t}}\tilde{\xi}\Big{]}+C|\partial_{\nu}f(t,x,y,z,\mu,\nu,0)|$
$\displaystyle\ \ \leq C\tilde{\mathbb{E}}\Big{[}\tilde{\mathbb{E}}_{\tilde{\mathcal{F}}_{t}}[(1+\|\gamma_{t}\|)]\Big{|}_{\gamma_{t}=\tilde{\eta}_{t}}\tilde{\xi}\Big{]}+C|\partial_{\nu}f(t,x,y,z,\mu,\nu,0)|$
$\displaystyle\ \ \leq C+C|\partial_{\nu}f(t,x,y,z,\mu,\nu,0)|,$
where we applied Lemmas 3.4 and 3.7 in the second-to-last inequality. Then, according to Lemma 3.4, we have
$\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta^{\eta_{t}}_{r}},0)\in\mathbb{H}^{2},$
and thus the well-posedness of (98) follows. For inequality (99), similarly to the proof of Lemma 3.4, we have
$\displaystyle\|\mathcal{Y}^{\eta_{t},\xi}\|_{\mathbb{S}^{2}}^{2}+\|\mathcal{Z}^{\eta_{t},\xi}\|^{2}_{\mathbb{H}^{2}}$
$\displaystyle\ \ \leq
C\mathbb{E}\Big{[}|\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B^{\eta_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]|^{2}+\int_{s}^{T}|\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]|^{2}dr$
$\displaystyle\ \ \ \ \ \
+\int_{s}^{T}|\tilde{\mathbb{E}}\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi})|^{2}dr\Big{]}$
$\displaystyle\ \ \leq C\Big{(}\big(\tilde{\mathbb{E}}[\|\tilde{B}^{\tilde{\eta}_{t}}\|\,|\tilde{\xi}|]\big)^{2}+\|\xi\|_{L^{2}}^{2}\mathbb{E}|\partial_{\mu_{t}}\Phi(B^{\eta_{t}},\mathcal{L}_{B^{\eta_{t}}},0)|^{2}$
$\displaystyle\ \ \ \ \ \
+\|\xi\|_{L^{2}}^{2}\mathbb{E}\int_{s}^{T}|\partial_{\mu_{t}}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},0)|^{2}dr+\mathbb{E}\int_{s}^{T}|\tilde{\mathbb{E}}[\tilde{Y}^{\tilde{\eta}_{t}}\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}]|^{2}dr$
$\displaystyle\ \ \ \ \ \
+\|\xi\|_{L^{2}}^{2}\mathbb{E}\int_{s}^{T}|\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},0)|^{2}dr\Big{)}\leq
C\|\xi\|_{L^{2}}^{2}.$
∎
Since BSDE (98) is well-posed, we see that BSDE (97) is also well-posed. In
conclusion, we have
###### Corollary 3.10.
There exists a unique solution
$(\mathcal{Y}^{\gamma_{t},\eta_{t},\xi},\mathcal{Z}^{\gamma_{t},\eta_{t},\xi})\in\mathbb{S}^{2}([t,T])\times\mathbb{H}^{2}([t,T],\mathbb{R}^{d})$
to BSDE (97). Moreover,
(101)
$(\mathcal{Y}^{\eta_{t},\xi},\mathcal{Z}^{\eta_{t},\xi})=(\mathcal{Y}^{\gamma_{t},\eta_{t},\xi},\mathcal{Z}^{\gamma_{t},\eta_{t},\xi})|_{\gamma=\eta}.$
###### Lemma 3.11.
The map $\xi\to\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}$ is a bounded linear
operator from $L^{2}({\mathcal{F}_{t}},\mathbb{R}^{d})$ to
$\mathbb{S}^{2}([t,T])$. Moreover, it is the Gâteaux derivative of
$Y^{\gamma_{t},\eta_{t}}$ with respect to $\eta_{t}$ in the following sense
(102) $\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}=\lim_{\lambda\rightarrow
0}\frac{1}{\lambda}(Y^{\gamma_{t},\eta_{t}^{\lambda\xi}}-Y^{\gamma_{t},\eta_{t}})\quad\text{strongly
in $\mathbb{S}^{2}([t,T])$}.$
In particular, $\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}(s)$ is the Gâteaux
derivative of $Y^{\gamma_{t},\eta_{t}}(s)$ in the sense of (35).
###### Proof.
Since $\mathcal{Y}^{\eta_{t},\xi}$ is linear in $\xi,$ we see that
$(\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi},\
\mathcal{Z}^{{\gamma_{t},\eta_{t}},\xi})$ is also linear in $\xi$. Moreover,
we have the following estimate
(103)
$\|(\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi},\mathcal{Z}^{{\gamma_{t},\eta_{t}},\xi})\|_{\mathbb{S}^{2}\times\mathbb{H}^{2}}\leq
C\|\xi\|_{L^{2}}.$
Therefore, we have the first assertion.
In the following, we omit the fixed subscript $t$ and write $(Y,Z):=(Y(r),Z(r))$ if no confusion arises. Moreover, the constant $C$ may change from line to line. Set
(104)
$\begin{split}&\Delta_{\lambda}Y:=\frac{1}{\lambda}(Y^{\gamma,\eta^{\lambda\xi}}-Y^{\gamma,\eta}),\
\
\Delta_{\lambda}Z:=\frac{1}{\lambda}(Z^{\gamma,\eta^{\lambda\xi}}-Z^{\gamma,\eta}),\quad\quad\text{and
}\\\
&\quad\Delta_{\lambda}\Phi:=\frac{1}{\lambda}[\Phi(B_{T}^{\gamma},\mathcal{L}_{B_{T}^{\eta^{\lambda\xi}}})-\Phi(B_{T}^{\gamma},\mathcal{L}_{B_{T}^{\eta}})].\end{split}$
Then according to Lemma 3.4, we have
(105)
$\|\Delta_{\lambda}Y\|_{\mathbb{S}^{2}}+\|\Delta_{\lambda}Z\|_{\mathbb{H}^{2}}\leq
C\frac{1}{\lambda}\|\eta_{t}^{\lambda\xi}-\eta_{t}\|_{\mathbb{S}^{2}}\leq
C\|\xi\|_{L^{2}}.$
In view of BSDE (79), we see that $(\Delta_{\lambda}Y,\Delta_{\lambda}Z)$
satisfies the following linear mean-field BSDE
(106)
$\begin{split}\Delta_{\lambda}Y&=\Delta_{\lambda}\Phi+\int_{s}^{T}\left[\alpha(r)\Delta_{\lambda}Y+\beta(r)\Delta_{\lambda}Z+\tilde{\mathbb{E}}[\tilde{g}(r)\frac{1}{\lambda}(\tilde{Y}^{\tilde{\eta}^{\lambda\tilde{\xi}}}-\tilde{Y}^{\tilde{\eta}})]+\Delta_{\lambda}f\right]dr\\\
&\quad-\int_{s}^{T}\Delta_{\lambda}ZdB(r),\end{split}$
where
$\displaystyle\alpha(r):=\int_{0}^{1}\partial_{y}{f}(B^{\gamma}_{r},Y^{\gamma,\eta}+\theta(Y^{\gamma,\eta^{\lambda\xi}}-Y^{\gamma,\eta}),Z^{\gamma,\eta^{\lambda\xi}},\mathcal{L}_{\Theta^{\eta^{\lambda\xi}}_{r}})d\theta,$
$\displaystyle\beta(r):=\int_{0}^{1}\partial_{z}{f}(B^{\gamma}_{r},Y^{\gamma,\eta},Z^{\gamma,\eta}+\theta(Z^{\gamma,\eta^{\lambda\xi}}-Z^{\gamma,\eta}),\mathcal{L}_{\Theta^{\eta^{\lambda\xi}}_{r}})d\theta,$
$\displaystyle\tilde{g}(r):=\int_{0}^{1}\partial_{\nu}f(\Theta^{\gamma,\eta},\mathcal{L}_{B^{\eta^{\lambda\xi}}},\mathcal{L}_{Y^{\eta}+\theta(Y^{\eta^{\lambda\xi}}-Y^{\eta})},\tilde{Y}^{\tilde{\eta}}+\theta(\tilde{Y}^{\tilde{\eta}^{\lambda\tilde{\xi}}}-\tilde{Y}^{\tilde{\eta}}))d\theta,\quad\quad\text{and}$
$\displaystyle\Delta_{\lambda}f(r):=\frac{1}{\lambda}[f(\Theta^{\gamma,\eta},\mathcal{L}_{B^{\eta^{\lambda\xi}}_{r}},\mathcal{L}_{Y^{\eta}})-f(\Theta^{\gamma,\eta},\mathcal{L}_{B^{\eta}_{r}},\mathcal{L}_{Y^{\eta}})].$
According to estimate (83) in Lemma 3.4, we have
(107)
$\|\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}\|_{\mathbb{S}^{2}}:=\|\frac{1}{\lambda}(\tilde{Y}^{\tilde{\eta}^{\lambda\tilde{\xi}}}-\tilde{Y}^{\tilde{\eta}})\|_{\mathbb{S}^{2}}\leq
C\frac{1}{\lambda}\|\eta_{t}^{\lambda\xi}-\eta_{t}\|_{\mathbb{S}^{2}}\leq
C\|\xi\|_{L^{2}}.$
Then, in view of Assumption (H1), we have
(108)
$\|\int_{t}^{T}\tilde{\mathbb{E}}[\tilde{g}(r)\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}]dr\|_{L^{2}}+\|\int_{t}^{T}\Delta_{\lambda}fdr\|_{L^{2}}\leq
C\|\xi\|_{L^{2}}.$
Thus BSDE (106) has a unique solution $(\Delta_{\lambda}Y,\Delta_{\lambda}Z)$,
and therefore,
$(\Delta_{\lambda}Y-\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi},\Delta_{\lambda}Z-\mathcal{Z}^{{\gamma_{t},\eta_{t}},\xi})$
is the unique solution of the following BSDE
$\displaystyle Y(s)=$ $\displaystyle\
(\Delta_{\lambda}\Phi-\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}])+\int_{s}^{T}\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})Ydr$
$\displaystyle\
+\int_{s}^{T}\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})Z(r)dr+\int_{s}^{T}(\Delta_{\lambda}f-\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}])dr$
$\displaystyle\
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}-\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}-\tilde{\mathcal{Y}}^{\tilde{\eta}_{t},\tilde{\xi}})]dr$
$\displaystyle\ +\int_{t}^{T}R_{1}(r)dr-\int_{s}^{T}ZdB(r)$
with
$\displaystyle R_{1}(r):=$
$\displaystyle\left(\alpha(r)-\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\right)\Delta_{\lambda}Y+\left(\beta(r)-\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\right)\Delta_{\lambda}Z$
$\displaystyle+\tilde{\mathbb{E}}\left[(\tilde{g}(r)-\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}}))\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}\right].$
Since $\partial_{(y,z)}f$ is bounded, from the standard estimate for solutions
of BSDEs, we have
(109)
$\|\Delta_{\lambda}Y-\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}\|^{2}_{\mathbb{S}^{2}}\leq
C(\|A_{1}\|^{2}_{L^{2}}+\|A_{2}\|^{2}_{L^{2}}+\|A_{3}\|^{2}_{L^{2}}+\|A_{4}\|^{2}_{L^{2}})$
with
$\displaystyle A_{1}$
$\displaystyle:=\Delta_{\lambda}\Phi-\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}],\quad
A_{2}:=\int_{t}^{T}|R_{1}(r)|dr,$ $\displaystyle A_{3}$
$\displaystyle:=\int_{t}^{T}\left|(\Delta_{\lambda}f-\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}])\right|dr,\quad\text{and}$
$\displaystyle A_{4}$
$\displaystyle:=\int_{t}^{T}\left|\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}-\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}-\tilde{\mathcal{Y}}^{\tilde{\eta}_{t},\tilde{\xi}})(r)]\right|dr.$
For $A_{1},$ according to the Lipschitz continuity of
$\partial_{\mu_{t}}\Phi,$ we have
(110) $\begin{split}\mathbb{E}|A_{1}|^{2}=&\
\mathbb{E}\Big{|}\int_{0}^{1}\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B^{\gamma},\mathcal{L}_{B^{\eta}+\theta(B^{\eta^{\lambda\xi}}-B^{\eta})},\tilde{B}^{\tilde{\eta}}+\theta(\tilde{B}^{\tilde{\eta}^{\lambda\tilde{\xi}}}-\tilde{B}^{\tilde{\eta}}))\tilde{\xi}\\\
&\
-\partial_{\mu_{t}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]d\theta\Big{|}^{2}\\\
\leq&\
C\left([\bar{E}\|\bar{B}^{\bar{\eta}^{\lambda\bar{\xi}}}-\bar{B}^{\bar{\eta}}\|^{2}]^{\frac{1}{2}}\|\xi\|_{L^{2}}+\tilde{\mathbb{E}}[\|\tilde{B}^{\tilde{\eta}^{\lambda\tilde{\xi}}}-\tilde{B}^{\tilde{\eta}}\||\tilde{\xi}|]\right)^{2}\leq
C\lambda^{2}\|\xi\|_{L^{2}}^{4},\end{split}$
for a constant $C$ independent of $\gamma$ and $\eta.$
Term $A_{2}$ is estimated as follows:
(111) $\begin{split}|A_{2}|^{2}\leq&\
C\Big{|}\int_{t}^{T}(\alpha(r)-\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}}))\Delta_{\lambda}Ydr\Big{|}^{2}\\\
&+C\Big{|}\int_{t}^{T}\left(\beta(r)-\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\right)\Delta_{\lambda}Zdr\Big{|}^{2}\\\
&+C\Big{|}\int_{t}^{T}\tilde{\mathbb{E}}[(\tilde{g}(r)-\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}}))\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}]dr\Big{|}^{2}.\end{split}$
For the first two terms on the right-hand side of the above inequality, by the Lipschitz continuity of $\partial_{(y,z)}f$ and inequality (105), we obtain
$\displaystyle\mathbb{E}\Big{|}\int_{t}^{T}\left(\alpha(r)-\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\right)\Delta_{\lambda}Ydr\Big{|}^{2}+\Big{|}\int_{t}^{T}\left(\beta(r)-\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\right)\Delta_{\lambda}Zdr\Big{|}^{2}\leq
C\lambda^{2}\|\xi\|_{L^{2}}^{4}.$
For the third term, we claim that
(112)
$\mathbb{E}\Big{|}\int_{t}^{T}\tilde{\mathbb{E}}\left[(\tilde{g}(r)-\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}}))\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}\right]dr\Big{|}^{2}\leq
C\lambda^{2}\|\xi\|_{L^{2}}^{4},$
with $C$ depending only on $\|\eta_{t}\|_{\mathbb{S}^{2}},$ and therefore we
have
(113) $\mathbb{E}|A_{2}|^{2}\leq C\lambda^{2}\|\xi\|_{L^{2}}^{4},$
in view of (111) and the above estimates. Indeed, by the Hölder inequality and estimate (107), we have
$\displaystyle\mathbb{E}\Big{|}\int_{t}^{T}\tilde{\mathbb{E}}[(\tilde{g}(r)-\partial_{\nu}f)\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}]dr\Big{|}^{2}\leq\mathbb{E}[\int_{t}^{T}\tilde{\mathbb{E}}|\tilde{g}-\partial_{\nu}f|^{2}dr]\int_{t}^{T}\tilde{\mathbb{E}}|\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}|^{2}dr$
$\displaystyle\quad\leq
C\|\xi\|_{L^{2}}^{2}\mathbb{E}\int_{t}^{T}\tilde{\mathbb{E}}\Big{|}\int_{0}^{1}(\partial_{\nu}f(\Theta^{\gamma,\eta},\mathcal{L}_{B^{\eta^{\lambda\xi}}},\mathcal{L}_{Y^{\eta}+\theta(Y^{\eta^{\lambda\xi}}-Y^{\eta})},\tilde{Y}^{\tilde{\eta}}+\theta(\tilde{Y}^{\tilde{\eta}^{\lambda\tilde{\xi}}}-\tilde{Y}^{\tilde{\eta}}))$
$\displaystyle\quad\quad-\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}}))d\theta\Big{|}^{2}dr$
$\displaystyle\quad\leq
C\|\xi\|_{L^{2}}^{2}(\|\tilde{Y}^{\tilde{\eta}^{\lambda\tilde{\xi}}}-\tilde{Y}^{\tilde{\eta}}\|_{\mathbb{S}^{2}}+\|B^{\eta^{\lambda\xi}}-B^{\eta}\|_{\mathbb{S}^{2}})^{2}\leq
C\lambda^{2}\|\xi\|_{L^{2}}^{4}.$
For $A_{3},$ from Lipschitz continuity of $\partial_{\mu_{t}}f$ in
$(\mu,\nu,\tilde{\omega})$, we have
(114) $\begin{split}\mathbb{E}|A_{3}|^{2}=&\
\mathbb{E}\big{|}\int_{t}^{T}\int_{0}^{1}\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{{\gamma_{t},\eta_{t}}},\mathcal{L}_{B^{\eta+\theta(\eta^{\lambda\xi}-\eta)}},\mathcal{L}_{Y^{\eta}},\tilde{B}^{\tilde{\eta}+\theta(\tilde{\eta}^{\lambda\tilde{\xi}}-\tilde{\eta})})\\\
&\
-\partial_{\mu_{t}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})]\tilde{\xi}d\theta
dr\big{|}^{2}\\\ \leq&\
C\left|\tilde{\mathbb{E}}[(\|\eta^{\lambda\xi}_{t}-\eta_{t}\|_{\mathbb{S}^{2}}+\|\tilde{\eta}^{\lambda\tilde{\xi}}-\tilde{\eta}\|)|\tilde{\xi}|]\right|^{2}\\\
\leq&\
C\left|\tilde{\mathbb{E}}[\lambda\|\xi\|_{L^{2}}\tilde{\xi}+\lambda|\tilde{\xi}|^{2}]\right|^{2}\leq
C\lambda^{2}\|\xi\|_{L^{2}}^{4}.\end{split}$
We now estimate $A_{4}$. Note the decomposition
(115)
$\Delta_{\lambda}\tilde{Y}^{\tilde{\eta}}-\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}-\tilde{\mathcal{Y}}^{\tilde{\eta}_{t},\tilde{\xi}}=\ A_{41}+A_{42}$
with
(116) $\begin{split}A_{41}&:=\
[\frac{1}{\lambda}(\tilde{Y}^{\tilde{\eta}^{\lambda\tilde{\xi}},\mathcal{L}_{\tilde{\eta}^{\lambda\tilde{\xi}}}}-\tilde{Y}^{\tilde{\eta},\mathcal{L}_{\tilde{\eta}^{\lambda\tilde{\xi}}}})-\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}],\quad\quad\text{and}\\\
A_{42}&:=[\frac{1}{\lambda}(\tilde{Y}^{\tilde{\eta},\mathcal{L}_{\tilde{\eta}^{\lambda\tilde{\xi}}}}-\tilde{Y}^{\tilde{\eta},\mathcal{L}_{\tilde{\eta}}})-\tilde{\mathcal{Y}}^{\tilde{\eta}_{t},\tilde{\xi}}],\end{split}$
Then, from the boundedness of $\partial_{\nu}f,$ we have
(117)
$\begin{split}\mathbb{E}|A_{4}|^{2}&=\mathbb{E}\Big{|}\int_{t}^{T}\tilde{\mathbb{E}}\Big{[}\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(A_{41}(r)+A_{42}(r))\Big{]}dr\Big{|}^{2}\\\
&\leq
C\Big{(}\Big{|}\int_{t}^{T}\tilde{\mathbb{E}}[A_{41}]dr\Big{|}^{2}+\Big{|}\int_{t}^{T}\tilde{\mathbb{E}}[A_{42}]dr\Big{|}^{2}\Big{)}.\end{split}$
From Lemma 3.7, we have
(118)
$\begin{split}\left|\int_{t}^{T}\tilde{\mathbb{E}}[A_{41}]dr\right|^{2}&\leq
C\int_{t}^{T}\left[\tilde{\mathbb{E}}\int_{0}^{1}|(\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}^{\lambda\theta\tilde{\xi}},\mathcal{L}_{\eta^{\lambda\xi}}}-\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta},\mathcal{L}_{\eta}})\tilde{\xi}|d\theta\right]^{2}dr\\\
&\leq
C\|\xi\|_{L^{2}}^{2}\int_{t}^{T}\int_{0}^{1}\tilde{\mathbb{E}}|\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta}^{\lambda\theta\tilde{\xi}},\mathcal{L}_{\eta^{\lambda\xi}}}-\partial_{\omega_{t}}\tilde{Y}^{\tilde{\eta},\mathcal{L}_{\eta}}|^{2}d\theta
dr\\\ &\leq
C\|\xi\|_{L^{2}}^{2}\int_{t}^{T}\int_{0}^{1}\tilde{\mathbb{E}}(\|\tilde{\eta}_{t}^{\lambda\theta\tilde{\xi}}-\tilde{\eta}_{t}\|)^{2}d\theta
dr\leq C\lambda^{2}\|\xi\|_{L^{2}}^{4}\end{split}$
for a constant $C$ only depending on $\|\eta_{t}\|_{\mathbb{S}^{2}}.$ Since
(119)
$\displaystyle\left|\int_{t}^{T}\tilde{\mathbb{E}}[A_{42}]dr\right|^{2}\leq\int_{t}^{T}\tilde{\mathbb{E}}|A_{42}|^{2}dr\leq
C\sup_{\gamma_{t}}\int_{t}^{T}\tilde{\mathbb{E}}|\Delta_{\lambda}Y-\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}|^{2}dr,$
for a constant $C$ independent of $(\gamma,\eta),$ we have
(120) $\mathbb{E}|A_{4}|^{2}\leq
C\left(\lambda^{2}\|\xi\|_{L^{2}}^{4}+\sup_{\gamma_{t}}\int_{t}^{T}\tilde{\mathbb{E}}|\Delta_{\lambda}Y-\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}|^{2}dr\right).$
Finally, combining (109) with inequalities (110), (113), (114) and (120), we have
$\displaystyle\|\Delta_{\lambda}Y-\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}\|^{2}_{\mathbb{S}^{2}}\leq
C(\lambda^{2}\|\xi\|_{L^{2}}^{4}+\sup_{\gamma_{t}}\int_{t}^{T}\|\Delta_{\lambda}Y-\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}\|_{\mathbb{S}^{2}}^{2}dr),$
where $C$ only depends on $\|\eta_{t}\|_{\mathbb{S}^{2}}$. Then, using
Gronwall’s inequality, we have
(121)
$\|\Delta_{\lambda}Y-\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}\|^{2}_{\mathbb{S}^{2}}\leq
C\lambda^{2}\|\xi\|_{L^{2}}^{4}\rightarrow 0,\ \ \text{ as }\lambda\rightarrow
0.$
∎
In view of our Assumption (H1), the solution
$(Y^{\gamma_{t},\eta_{t}},Z^{\gamma_{t},\eta_{t}})$ of BSDE (79) is indeed
strongly vertically differentiable in $\eta_{t}.$ According to Definition 2.9,
for any $\tau\leq t$ and $\xi\in L^{2}(\mathcal{F}_{\tau},\mathbb{R}^{d}),$
let $\eta_{t}^{\tau,\lambda\xi}:=\eta_{t}+\lambda\xi 1_{[\tau,T]}$. Similarly to the vertically differentiable case, we first need to show that the following limit exists in $\mathbb{S}^{2}([t,T]):$
(122)
$\partial_{\eta_{\tau}}Y^{\gamma_{t},\eta_{t},\xi}:=\lim_{\lambda\rightarrow
0}\frac{1}{\lambda}(Y^{\gamma_{t},\eta_{t}^{\tau,\lambda\xi}}-Y^{\gamma_{t},\eta_{t}}).$
Indeed, $\partial_{\eta_{\tau}}Y^{\gamma_{t},\eta_{t},\xi}$ is the unique
solution of the following BSDE
(123)
$\begin{split}\partial_{\eta_{\tau}}Y^{{\gamma_{t},\eta_{t}},\xi}(s)&=\tilde{\mathbb{E}}[\partial_{\mu_{\tau}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\mu_{\tau}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\partial_{\eta_{\tau}}Y^{{\gamma_{t},\eta_{t}},\xi}(r)dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(\partial_{\omega_{\tau}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}+\partial_{\eta_{\tau}}\tilde{Y}^{\tilde{\eta}_{t},\tilde{\xi}})(r)]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\partial_{\eta_{\tau}}Z^{{\gamma_{t},\eta_{t}},\xi}(r)dr\\\
&\ \ \
-\int_{s}^{T}\partial_{\eta_{\tau}}Z^{{\gamma_{t},\eta_{t}},\xi}(r)dB(r),\
s\in[t,T],\end{split}$
where $\partial_{\eta_{\tau}}{Y}^{\eta_{t},\xi}$ solves the following mean-
field BSDE
(124)
$\begin{split}\partial_{\eta_{\tau}}Y^{\eta_{t},\xi}(s)&=\tilde{\mathbb{E}}[\partial_{\mu_{\tau}}\Phi(B^{\eta_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\mu_{\tau}}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{\tilde{\eta}_{t}})\tilde{\xi}]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{y}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\partial_{\eta_{\tau}}Y^{{\eta_{t}},\xi}(r)dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})(\partial_{\omega_{\tau}}\tilde{Y}^{\tilde{\eta}_{t},\mathcal{L}_{\eta_{t}}}\tilde{\xi}+\partial_{\eta_{\tau}}\tilde{Y}^{\tilde{\eta}_{t},\tilde{\xi}})(r)]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{z}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\partial_{\eta_{\tau}}Z^{{\eta_{t}},\xi}(r)dr-\int_{s}^{T}\partial_{\eta_{\tau}}Z^{{\eta_{t}},\xi}(r)dB(r),\
s\in[t,T].\end{split}$
According to Assumption (H1), we see that BSDEs (123) and (124) are well-posed. Moreover, following an argument similar to that of Lemma 3.11, we obtain the Gâteaux strong vertical differentiability:
###### Lemma 3.12.
$\partial_{\eta_{\tau}}Y^{{\gamma_{t},\eta_{t}},\cdot}$ is a bounded linear
operator from $L^{2}({\mathcal{F}_{\tau}},\mathbb{R}^{d})$ to
$\mathbb{S}^{2}([t,T])$. Moreover,
$\partial_{\eta_{\tau}}Y^{{\gamma_{t},\eta_{t}},\xi}$ is the Gâteaux strong
vertical derivative of $Y^{\gamma_{t},\eta_{t}}$ at $(\tau,t,\eta)$:
(125)
$\partial_{\eta_{\tau}}Y^{{\gamma_{t},\eta_{t}},\xi}=\lim_{\lambda\rightarrow
0}\frac{1}{\lambda}(Y^{\gamma_{t},\eta_{t}^{\tau,\lambda\xi}}-Y^{\gamma_{t},\eta_{t}}),\quad\text{strongly
in $\mathbb{S}^{2}([t,T])$}.$
In particular, $\partial_{\eta_{\tau}}Y^{{\gamma_{t},\eta_{t}},\cdot}(s)$ is
the Gâteaux derivative of $Y^{\gamma_{t},\eta_{t}}(s)$ at $(\tau,t,\eta)$ in
the sense of (38).
To give an explicit representation of the vertical derivative of $Y^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(\cdot)$ with respect to $\mathcal{L}_{\eta_{t}}$ in view of (36), we need to find a measurable random field $U^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(\cdot):\mathbb{D}_{T,d}\to\mathbb{S}^{2}([t,T],\mathbb{R}^{d})$ such that for any $s\geq t$ and $\xi\in L^{2}(\mathcal{F}_{t},\mathbb{R}^{d}),$
(126)
$\mathcal{Y}^{{\gamma_{t},\eta_{t}},\xi}(s)=\bar{E}[U^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(\bar{\eta}_{t})(s)\bar{\xi}],$
where $(\bar{\eta},\bar{\xi})$ is an independent copy of $(\eta,\xi).$ If (126) holds and, moreover, $Y^{\gamma_{t},{\eta_{t}}}$ is shown to be Fréchet differentiable with respect to $\eta_{t}$ in the sense of (34) and Remark 2.8, then
(127)
$\partial_{\mu_{t}}Y^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(x_{t}):=U^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(x_{t}),\
\ \forall\ x\in\mathbb{D}_{T,d},$
is the vertical derivative of $Y^{\gamma_{t},\mathcal{L}_{\eta_{t}}}$ at
$\mathcal{L}_{\eta_{t}}$. Here and in the following, we write $\partial_{\mu}$
instead of $\partial_{\mathcal{L}_{\eta}}$. In view of (97) and (126), we
formally deduce that
$(U^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(x_{t}),V^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(x_{t}))$
solves the following BSDE
(128)
$\begin{split}U^{{\gamma_{t},\eta_{t}},x_{t}}(s)&=\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{x_{t}})]+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{x_{t}})]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})U^{{\gamma_{t},\eta_{t}},x_{t}}(r)dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}})\partial_{\omega_{t}}\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}}(r)]dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})\tilde{U}^{\tilde{\eta}_{t},x_{t}}(r)]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})V^{{\gamma_{t},\eta_{t}},x_{t}}(r)dr-\int_{s}^{T}V^{{\gamma_{t},\eta_{t}},x_{t}}(r)dB(r),\quad
s\in[t,T],\end{split}$
where ${U}^{\eta_{t},x_{t}}$ solves the associated mean-field BSDE:
(129)
$\begin{split}U^{\eta_{t},x_{t}}(s)&=\tilde{\mathbb{E}}[\partial_{\mu_{t}}\Phi(B^{\eta_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{x_{t}})]+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\mu_{t}}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{x_{t}})]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{y}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})U^{\eta_{t},x_{t}}(r)dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}})\partial_{\omega_{t}}\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}}(r)]dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})\tilde{U}^{\tilde{\eta}_{t},x_{t}}(r)]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{z}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})V^{\eta_{t},x_{t}}(r)dr-\int_{s}^{T}V^{\eta_{t},x_{t}}(r)dB(r).\end{split}$
According to Lemma 3.1, we see that mean-field BSDE (129) is well posed with
$(U^{\eta_{t},x_{t}},V^{\eta_{t},x_{t}})\in\mathbb{S}^{2}([t,T],\mathbb{R}^{d})\times\mathbb{H}^{2}([t,T],\mathbb{R}^{d\times
d}).$ Then BSDE (128) also has a unique solution
$(U^{{\gamma_{t},\eta_{t}},x_{t}},V^{{\gamma_{t},\eta_{t}},x_{t}})\in\mathbb{S}^{2}([t,T],\mathbb{R}^{d})\times\mathbb{H}^{2}([t,T],\mathbb{R}^{d\times
d})$. Moreover, by the uniqueness of solutions to BSDE (129), we see that $U^{\eta_{t},x_{t}}=U^{\gamma_{t},{\eta_{t}},x_{t}}|_{\gamma=\eta}.$
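Numerically, the representation (126) says that the derivative in the direction $\xi$ acts through a kernel averaged against an independent copy $(\bar{\eta}_{t},\bar{\xi})$; with samples, $\bar{E}[U^{\gamma_{t},\mathcal{L}_{\eta_{t}}}(\bar{\eta}_{t})\bar{\xi}]$ is a plain Monte Carlo mean. A toy sketch, where the kernel `U` and the sampled joint law of `(eta_bar, xi_bar)` are illustrative assumptions:

```python
import numpy as np

# Toy illustration of (126): the derivative in direction xi acts as
# E_bar[ U(eta_bar) * xi_bar ] for an independent copy (eta_bar, xi_bar).
# The kernel U and the sampled joint law below are illustrative only.

rng = np.random.default_rng(1)
M = 200_000
eta_bar = rng.normal(size=M)                  # samples of eta_bar
xi_bar = 0.5 * eta_bar + rng.normal(size=M)   # a direction correlated with eta_bar

def U(x):
    """Stand-in for the kernel U^{gamma_t, L_{eta_t}}(x) at a fixed time s."""
    return np.tanh(x)

action = np.mean(U(eta_bar) * xi_bar)         # Monte Carlo mean E_bar[U(eta_bar) xi_bar]
print(f"E[U(eta_bar) xi_bar] ~= {action:.4f}")
```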
Concerning the regularity of $U^{{\gamma_{t},\eta_{t}},x_{t}}$ and
$U^{\eta_{t},x_{t}}$ with respect to $(\gamma,\eta,x)$, we have
###### Lemma 3.13.
For any $x,x^{\prime},\gamma,\gamma^{\prime}\in\mathbb{D}_{T,d},$ and
$\eta,\eta^{\prime}\in\mathbb{M}_{2}^{D},$ we have
(130)
$\displaystyle\|U^{\eta_{t},x_{t}}-U^{\eta^{\prime}_{t},x^{\prime}_{t}}\|_{\mathbb{S}^{2}}\leq
C(\|\eta_{t}-\eta^{\prime}_{t}\|_{\mathbb{S}^{2}}+\|x_{t}-x^{\prime}_{t}\|),\quad\text{and}$
(131)
$\displaystyle\|U^{{\gamma_{t},\eta_{t}},x_{t}}-U^{\gamma^{\prime}_{t},\eta^{\prime}_{t},x^{\prime}_{t}}\|_{\mathbb{S}^{2}}\leq
C(\|\gamma_{t}-\gamma^{\prime}_{t}\|+W_{2}(\mathcal{L}_{\eta_{t}},\mathcal{L}_{\eta^{\prime}_{t}})+\|x_{t}-x^{\prime}_{t}\|),$
with $C$ only depending on
$\|\eta_{t}\|_{\mathbb{S}^{2}}+\|\eta^{\prime}_{t}\|_{\mathbb{S}^{2}}$.
###### Remark 3.14.
Similar to Lemma 3.4, according to estimate (131),
$U^{\gamma_{t},\mathcal{L}_{\eta_{t}},x_{t}}:=U^{{\gamma_{t},\eta_{t}},x_{t}}$
is well-defined.
###### Proof.
In the following we omit the subscript $t$ and write
$(U,V,Y,Z):=(U(r),V(r),Y(r),Z(r))$. Moreover, we only show the proof for (130)
since (131) follows by (130) and similar argument. Denote
$\displaystyle(\Delta U,\Delta
V):=(U^{\eta,x}-U^{\eta^{\prime},x^{\prime}},V^{\eta,x}-V^{\eta^{\prime},x^{\prime}}),$
$\displaystyle\Delta\partial_{\mu}\Phi:=\partial_{\mu_{t}}\Phi(B^{\eta},\mathcal{L}_{B^{\eta}},B^{x})-\partial_{\mu_{t}}\Phi(B^{\eta^{\prime}},\mathcal{L}_{B^{\eta^{\prime}}},B^{x^{\prime}}),$
$\displaystyle\Delta\partial_{\mu}f:=\partial_{\mu_{t}}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}},\tilde{B}^{x_{t}})-\partial_{\mu_{t}}f(\Theta^{\eta^{\prime}}_{r},\mathcal{L}_{\Theta_{r}^{\eta^{\prime}}},\tilde{B}^{x^{\prime}_{t}}),$
$\displaystyle\Delta\partial_{\nu}f^{(1)}:=\partial_{\nu}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}},\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}})-\partial_{\nu}f(\Theta^{\eta^{\prime}}_{r},\mathcal{L}_{\Theta_{r}^{\eta^{\prime}}},\tilde{Y}^{x^{\prime}_{t},\mathcal{L}_{\eta^{\prime}_{t}}}),$
$\displaystyle\Delta\partial_{\nu}f^{(2)}:=\partial_{\nu}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}},\tilde{Y}^{\tilde{\eta}_{t}})-\partial_{\nu}f(\Theta^{\eta^{\prime}}_{r},\mathcal{L}_{\Theta_{r}^{\eta^{\prime}}},\tilde{Y}^{\tilde{\eta}^{\prime}_{t}}),$
$\displaystyle\Delta\partial_{(y,z)}f:=\partial_{(y,z)}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}})-\partial_{(y,z)}f(\Theta^{\eta^{\prime}}_{r},\mathcal{L}_{\Theta_{r}^{\eta^{\prime}}}),\quad\text{and}\quad$
$\displaystyle\Delta\partial_{\omega}\tilde{Y}:=\partial_{\omega_{t}}\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}}-\partial_{\omega_{t}}\tilde{Y}^{x^{\prime}_{t},\mathcal{L}_{\eta^{\prime}_{t}}},$
and we see that $(\Delta U,\Delta V)$ is the unique solution of BSDE
(132) $\begin{split}\Delta
U(s)&=\Delta\partial_{\mu}\Phi+\int_{s}^{T}\tilde{\mathbb{E}}[\Delta\partial_{\mu}f]dr+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}},\tilde{Y}^{\tilde{\eta}})\Delta\tilde{U}]dr\\\
&\ \ \
+\int_{s}^{T}(\partial_{y}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}})\Delta
U+\partial_{z}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}})\Delta
V)dr-\int_{s}^{T}\Delta VdB(r)\\\ &\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[(\Delta\partial_{\nu}f^{(1)})\partial_{\omega}\tilde{Y}^{x^{\prime},\tilde{\eta}^{\prime}}+(\Delta\partial_{\nu}f^{(2)})\tilde{U}^{\tilde{\eta}^{\prime},x^{\prime}}]dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}},\tilde{Y}^{x,\mathcal{L}_{\eta}})\Delta\partial_{\omega}\tilde{Y}^{x,\mathcal{L}_{\eta}}]dr\\\
&\ \ \
+\int_{s}^{T}\left((\Delta\partial_{y}f)U^{\eta^{\prime},x^{\prime}}+(\Delta\partial_{z}f)V^{\eta^{\prime},x^{\prime}}\right)dr.\end{split}$
By the Lipschitz continuity of $(\partial_{(\mu,\nu,y,z)}f,\partial_{\mu}\Phi)$
and the boundedness of $\partial_{(y,z)}f,$ we see that
$\displaystyle\|\Delta\partial_{\mu}\Phi\|^{2}_{L^{2}}+\|\int_{t}^{T}\tilde{\mathbb{E}}[\Delta\partial_{\mu}f]dr\|^{2}_{L^{2}}+\mathbb{E}[\int_{t}^{T}\tilde{\mathbb{E}}(|\Delta\partial_{\nu}f^{(1)}|^{2}+|\Delta\partial_{\nu}f^{(2)}|^{2})dr]$
(133) $\displaystyle\ \ \ \ \
+\mathbb{E}[\int_{t}^{T}|\Delta\partial_{y}f|^{2}dr]+\mathbb{E}[\int_{t}^{T}|\Delta\partial_{z}f|^{2}dr]$
$\displaystyle\ \ \ \ \ \ \ \ \ \ \leq
C(\|\eta_{t}-\eta^{\prime}_{t}\|^{2}_{\mathbb{S}^{2}}+\|x_{t}-x^{\prime}_{t}\|^{2}).$
Moreover, since
$\partial_{\omega}\tilde{Y}^{x^{\prime},\tilde{\eta}^{\prime}},\tilde{U}^{\eta^{\prime},x^{\prime}},\tilde{Y}^{x,\mathcal{L}_{\eta}},\tilde{Y}^{\tilde{\eta}}\in\mathbb{S}^{2}$,
and $V^{\eta^{\prime},x^{\prime}}\in\mathbb{H}^{2},$ from the above estimate
and the Cauchy-Schwarz inequality, we obtain
(134)
$\begin{split}&\|\int_{s}^{T}\tilde{\mathbb{E}}\left[(\Delta\partial_{\nu}f^{(1)})\partial_{\omega}\tilde{Y}^{x^{\prime},\tilde{\eta}^{\prime}}+(\Delta\partial_{\nu}f^{(2)})\tilde{U}^{\tilde{\eta}^{\prime},x^{\prime}}\right]dr\|_{L^{2}}\\\
&\ \ \ \ \
+\|\int_{s}^{T}\left((\Delta\partial_{y}f)U^{\eta^{\prime},x^{\prime}}+(\Delta\partial_{z}f)V^{\eta^{\prime},x^{\prime}}\right)dr\|_{L^{2}}\\\
&\ \ \ \ \ \ \ \ \leq
C(\|\eta_{t}-\eta^{\prime}_{t}\|_{\mathbb{S}^{2}}+\|x_{t}-x^{\prime}_{t}\|).\end{split}$
According to the estimates given in Lemma 3.7 and the boundedness of
$\partial_{\nu}f,$ we check that
(135)
$\|\int_{s}^{T}|\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta}},\tilde{Y}^{x,\mathcal{L}_{\eta}})\Delta\partial_{\omega}\tilde{Y}^{x,\mathcal{L}_{\eta}}]|dr\|_{L^{2}}\leq
C(\|\eta_{t}-\eta^{\prime}_{t}\|_{\mathbb{S}^{2}}+\|x_{t}-x^{\prime}_{t}\|).$
Then, in view of Lemma 3.1 and inequalities (133), (134) and (135), we obtain the
desired inequality (130).
∎
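We remark that the stability estimate invoked from Lemma 3.1 in the last step has, schematically, the standard form for Lipschitz BSDEs (a sketch for orientation; the precise statement and constants are those of Lemma 3.1):
$\|\Delta U\|^{2}_{\mathbb{S}^{2}}+\|\Delta V\|^{2}_{\mathbb{H}^{2}}\leq C\Big(\|\Delta\xi\|^{2}_{L^{2}}+\mathbb{E}\Big[\Big(\int_{t}^{T}|\Delta f(r)|dr\Big)^{2}\Big]\Big),$
where $\Delta\xi$ collects the difference of the terminal data (here $\Delta\partial_{\mu}\Phi$) and $\Delta f$ collects the differences of the drivers, which are exactly the terms bounded in (133)-(135).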
Concerning the SVD $\partial_{\mu_{\tau}}Y^{{\gamma_{t},\eta_{t}},\cdot}$ of
$Y^{\gamma_{t},\mathcal{L}_{\eta_{t}}}$ at $(\tau,t,\mathcal{L}_{\eta})$,
$\tau\leq t,$ in view of Definition 2.9 and BSDE (123), we deduce that it is
the unique solution of the following BSDE: for any $x\in\mathbb{D}_{T,d},$
(136)
$\begin{split}\partial_{\mu_{\tau}}Y^{{\gamma_{t},\eta_{t}},x_{t}}(s)&=\tilde{\mathbb{E}}[\partial_{\mu_{\tau}}\Phi(B^{\gamma_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{x_{t}})]+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\mu_{\tau}}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{x_{t}})]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{y}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\partial_{\mu_{\tau}}Y^{{\gamma_{t},\eta_{t}},x_{t}}(r)dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})\partial_{\mu_{\tau}}\tilde{Y}^{\tilde{\eta}_{t},x_{t}}(r)]dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}})\partial_{\omega_{\tau}}\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}}(r)]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{z}f(\Theta^{\gamma_{t},\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\partial_{\mu_{\tau}}Z^{{\gamma_{t},\eta_{t}},x_{t}}(r)dr\\\
&\ \ \
-\int_{s}^{T}\partial_{\mu_{\tau}}Z^{{\gamma_{t},\eta_{t}},x_{t}}(r)dB(r),\
s\in[t,T],\end{split}$
where $\partial_{\mu_{\tau}}Y^{\eta_{t},x_{t}}$ solves the mean-field BSDE below
(137)
$\begin{split}\partial_{\mu_{\tau}}Y^{\eta_{t},x_{t}}(s)&=\tilde{\mathbb{E}}[\partial_{\mu_{\tau}}\Phi(B^{\eta_{t}},\mathcal{L}_{B^{\eta_{t}}},\tilde{B}^{x_{t}})]+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\mu_{\tau}}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{B}^{x_{t}})]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{y}f(\Theta^{\eta}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\partial_{\mu_{\tau}}Y^{\eta_{t},x_{t}}(r)dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}})\partial_{\omega_{\tau}}\tilde{Y}^{x_{t},\mathcal{L}_{\eta_{t}}}(r)]dr\\\
&\ \ \
+\int_{s}^{T}\tilde{\mathbb{E}}[\partial_{\nu}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}},\tilde{Y}^{\tilde{\eta}_{t}})\partial_{\mu_{\tau}}\tilde{Y}^{\tilde{\eta}_{t},x_{t}}(r)]dr\\\
&\ \ \
+\int_{s}^{T}\partial_{z}f(\Theta^{\eta_{t}}_{r},\mathcal{L}_{\Theta_{r}^{\eta_{t}}})\partial_{\mu_{\tau}}Z^{\eta_{t},x_{t}}(r)dr-\int_{s}^{T}\partial_{\mu_{\tau}}Z^{\eta_{t},x_{t}}(r)dB(r),\
s\in[t,T].\end{split}$
Thanks to Lemma 3.1 again, the mean-field BSDE (137) has a unique solution
$(\partial_{\mu_{\tau}}Y^{\eta_{t},x_{t}},\partial_{\mu_{\tau}}Z^{\eta_{t},x_{t}})\in\mathbb{S}^{2}\times\mathbb{H}^{2}.$
Then the well-posedness of equation (136) follows similarly. Moreover, we have
that, if $\tau=t$,
(138)
$\partial_{\mu_{t}}Y^{{\gamma_{t},\eta_{t}},x_{t}}=U^{{\gamma_{t},\eta_{t}},x_{t}},\
\ \ \partial_{\mu_{t}}Y^{\eta_{t},x_{t}}=U^{\eta_{t},x_{t}},$
and
$\partial_{\mu_{\tau}}Y^{\eta_{t},x_{t}}=\partial_{\mu_{\tau}}Y^{\gamma_{t},\mathcal{L}_{\eta_{t}},x_{t}}|_{\gamma=\eta}.$
Thus the following lemma follows in the same way as Lemma 3.13.
###### Lemma 3.15.
For any $x,x^{\prime},\gamma,\gamma^{\prime}\in\mathbb{D}_{T,d},$ and |
(1) Institut für Physik, Universität Rostock, D-18051 Rostock, Germany
(2) School of Physics, Nanjing University, Nanjing 210093, China
(3) Key Laboratory of Nuclear Physics and Ion-Beam Application (MoE), Institute of Modern Physics, Fudan University, 200433, Shanghai, China
(4) Shanghai Research Center for Theoretical Nuclear Physics, NSFC and Fudan University, 200438, Shanghai, China
(5) School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
(6) Laboratory of Physics, Kanto Gakuin University, Yokohama 236-8501, Japan
(7) Research Center for Nuclear Physics (RCNP), Osaka University, Osaka 567-0047, Japan
(8) College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing, China
# Alpha-like correlations in 20Ne, comparison of quartetting wave function and
THSR approaches
G. Röpke (1), C. Xu (2), B. Zhou (3,4), Z. Z. Ren (5), Y. Funaki (6), H. Horiuchi (7), M. Lyu (8), A. Tohsaki (7), T. Yamada (6)
###### Abstract
20Ne can be considered as a double-magic 16O core nucleus surrounded by four
nucleons, the constituents of an $\alpha$-like quartet. Similar to other
nuclei (212Po, 104Te, etc.) with a quartet on top of a double-magic core
nucleus, significant $\alpha$-like correlations are expected. Correlations in
the ground state of 20Ne are investigated using different approaches. The
quartetting wave function approach (QWFA) predicts a large $\alpha$-like
cluster contribution near the surface of the nuclei. The Tohsaki-Horiuchi-
Schuck-Röpke (THSR) approach describes $\alpha$-like clustering in nuclear
systems. The results of the QWFA in the Thomas-Fermi and shell-model
approximation are compared with THSR calculations for the container model.
Results for the $\alpha$ formation probability and the rms radii are shown.
## 1 Introduction
The liquid-drop model of nuclei, which can be considered as a simple version
of a local density approach, reflects important properties of nuclear
structure, for example the famous Bethe-Weizsäcker mass formula. Other
properties such as the occurrence of magic numbers are explained by the shell
model, which considers nucleonic quasiparticle states, and many properties of
nuclei, including pairing, are studied in this approach, see e.g. the
fundamental book by Ring and Schuck RingSchuck . However, the description of
correlations, in particular of $\alpha$-like clusters in nuclei, requires
going beyond the quasiparticle approach.
The nucleus 212Po shows a significant $\alpha$-like correlation in the skin
region Po14 ; Xu16 ; Xu17 . It can be assumed that it consists of a double-
magic, relatively stable 208Pb core nucleus surrounded by an $\alpha$-like
cluster. This $\alpha$-like quartet experiences a potential pocket for the
center-of-mass (c.m.) motion outside a critical radius $r_{\rm cr}$ where it
can exist as a quasi-bound state. Its intrinsic structure is dissolved at
smaller distances when the nucleon density of the core nucleus exceeds a
critical value $n_{\rm cr}=0.03$ fm${}^{-3}$. The reason for this is the Pauli
principle, which applies to the nucleons that form the $\alpha$ particle.
Their mutual interaction is blocked in the dense medium of the nucleons of the
core nucleus that occupy the Fermi sphere in momentum space. This is a
consequence of the antisymmetrization of the full many-fermion wave function
of the entire nucleonic system. The $\alpha$ particle, which is preformed in a
near-surface pocket, can escape from the 212Po nucleus by tunneling. The
calculations were performed using the quartetting wave function approach (QWFA).
It was found that the calculated $\alpha$ decay half-life agrees well with the
observed data Xu16 ; Xu17 .
A similar behavior is expected for other nuclei consisting of a double magic
core nucleus and an additional $\alpha$ cluster. Calculations were performed
for 104Te Yang20 . The observed half-life of $\alpha$ decay was successfully
reproduced in QWFA for 104Te as well as for additional $\alpha$-decaying
nuclei Yang21 . Improvements of the quartet model have been made in Refs.
Bai19 ; Wang22 , see also Jin23 ; Li23 . Using QWFA, the influence of
$\alpha$-like clustering in nuclei on the nuclear symmetry energy was analyzed
in Ref. Yang23 .
Another nucleus, consisting of a double-magic core nucleus surrounded by an
$\alpha$-like cluster, is 20Ne. In this work, we present calculations within
QWFA and compare them with other approaches. The main results are the preformation fraction of $\alpha$-like clusters and the point rms radius, both determined by the wave function including correlations. We compare the Thomas-Fermi approximation with shell-model calculations.
A consistent description of quartetting ($\alpha$-like correlations) has
recently been developed in the framework of the Tohsaki-Horiuchi-Schuck-Röpke
(THSR) approach THSR . This approach provides an excellent description of low-
density 4$n$ nuclei such as 8Be, the Hoyle state of 12C and excited states of
16O, but has also been applied to more complex nuclei such as 20Ne Bo12 ; Bo13
; Bo14 as well as 4$n$ nuclei with additional nucleons Lyu ; Lyu1 . Recently,
calculations for 20Ne were performed by Bo et al. Bo23 using the two-
parameter container model. A review on microscopic clustering in light nuclei
was presented in Ref. Freer18 .
In this work, we also compare the two approaches. Heavy nuclei with a large
number of nucleons like 212Po are not yet computable with the THSR approach.
The QWFA provides better results for heavier nuclei where a mean-field
approach for the core nucleus is more justified. For 20Ne, both approaches are
feasible. The comparison of the results for the quartetting wave function
approach and THSR calculations allows a better understanding of the
description of correlations in nuclear systems.
We study the c.m. motion of a quartet
$\\{n_{\uparrow},n_{\downarrow},p_{\uparrow},p_{\downarrow}\\}$ as a new
collective degree of freedom and compare the wave functions for both
approaches, the QWFA and the THSR approach. Instead of an uncorrelated Fermi
gas model for the cluster environment, an improvement of the quartet wave
function approach is investigated to achieve a consistent description of
cluster formation in a clustered medium.
After a brief explanation of the QWFA in Sec. 2, we carry out calculations
using the Thomas-Fermi approach in Sec. 3. Calculations with the shell model
are shown in Sec. 4. Comparisons with the THSR approach are discussed in Sec.
5, and concluding remarks are made in Sec. 6.
## 2 The quartet wave equation
The treatment of the interacting many-nucleon system requires some
approximations which can be obtained in a consistent way from a Green’s
function approach. The quartetting wave function approach Po14 ; wir
considers the wave function $\Psi({\bf r}_{1}{\bf r}_{2}{\bf r}_{3}{\bf
r}_{4})$ of the quartet (spin and isospin variables are dropped) which obeys
the in-medium wave equation
$\displaystyle[E_{4}\\!-\\!\hat{h}_{1}\\!-\\!\hat{h}_{2}\\!-\\!\hat{h}_{3}\\!-\\!\hat{h}_{4}]\Psi({\bf
r}_{1},{\bf r}_{2},{\bf r}_{3},{\bf r}_{4})$
$\displaystyle=\int\\!\\!d^{3}{\bf r}_{1}^{\prime}\,d^{3}{\bf
r}_{2}^{\prime}\langle{\bf r}_{1}{\bf
r}_{2}|\hat{B}(1,2)\,\,\hat{V}_{N-N}|{\bf r}_{1}^{\prime}{\bf
r}_{2}^{\prime}\rangle\Psi({\bf r}_{1}^{\prime},{\bf r}_{2}^{\prime},{\bf
r}_{3},{\bf r}_{4})$ $\displaystyle+\int d^{3}{\bf
r}_{1}^{\prime}\,\,d^{3}{\bf r}_{3}^{\prime}\langle{\bf r}_{1}{\bf
r}_{3}|\hat{B}(1,3)\,\,\hat{V}_{N-N}|{\bf r}_{1}^{\prime}{\bf
r}_{3}^{\prime}\rangle\Psi({\bf r}_{1}^{\prime},{\bf r}_{2},{\bf
r}_{3}^{\prime},{\bf r}_{4})$ $\displaystyle+{\rm
four\,\,further\,\,permutations,}$ (1)
with the single-quasiparticle Hamiltonian (single-nucleon shell states
$|n,\nu\rangle$)
$\hat{h}_{i}=\frac{\hbar^{2}\hat{p}_{i}^{2}}{2m}+[1-\hat{f}_{\nu_{i}}]\,V_{\nu_{i}}^{\rm
mf}(\hat{r}),\qquad\hat{f}_{\nu}=\sum_{n}^{{\rm occ.}}|n,\nu\rangle\langle
n,\nu|$ (2)
where $\hat{f}_{\nu}$ denotes the phase space which, according to the Pauli principle, cannot be used for an interaction process of a nucleon with intrinsic quantum state $\nu=\sigma,\,\tau$.
$\hat{V}_{N-N}$, the nucleon-nucleon interaction terms also contain the
blocking operator $\hat{B}(1,2)=[1-\hat{f}_{1}-\hat{f}_{2}]$ for the first
term on the r.h.s. of Eq. (1), and corresponding expressions for the other 5
terms. The mean-field potential $V_{\nu_{i}}^{\rm mf}(\hat{r})$ contains the
strong core-nucleon interaction $V^{\rm ext}(r)$ as well as the Coulomb
potential of the core nucleus. It is considered as an external potential. The
Pauli blocking terms, which are given by the occupation numbers
$\hat{f}_{\nu}$, are not easy to treat as will be explained below. The mean-
field approach treats the motion within the cluster independent of the motion
in the surrounding medium, and neglects any correlations between the two. If
such further correlations exist, clusters with a larger number of nucleons are
formed. This concept is known from the shell model at the one-particle level and from pairing at the two-particle level. We first discuss here the motion of
four nucleons under the influence of an external potential.
A main aspect of the cluster approach is the introduction of the center-of-
mass (c.m.) coordinate $\bf R$ as a new collective degree of freedom, and ${\bf
s}_{j}=\\{\bf S,s,s^{\prime}\\}$ for the intrinsic motion (Jacobi-Moshinsky
coordinates). As shown in Po14 , the normalized quartet wave function
$\Phi({\bf R},{\bf s}_{j})$,
$\int d^{3}R\,\int d^{9}s_{j}\,|\Phi({\bf R},{\bf s}_{j})|^{2}=1,$ (3)
can be decomposed in a unique way (up to a phase factor),
$\Phi({\bf R},{\bf s}_{j})=\varphi^{{\rm intr}}({\bf s}_{j},{\bf
R})\,\Psi^{\rm c.m.}({\bf R})$ (4)
with the individual normalizations
$\int d^{3}R\,|\Psi^{\rm c.m.}({\bf R})|^{2}=1\,,{\rm and}\int
d^{9}s_{j}|\varphi^{{\rm intr}}({\bf s}_{j},{\bf R})|^{2}=1$ (5)
for arbitrary ${\bf R}$.
The Hamiltonian of a four-nucleon cluster can be written as
$H=\left(-\frac{\hbar^{2}}{8m}\nabla_{R}^{2}+T[\nabla_{s_{j}}]\right)\delta^{3}({\bf R}-{\bf R}^{\prime})\delta^{3}({\bf s}_{j}-{\bf s}^{\prime}_{j})+V({\bf R},{\bf s}_{j};{\bf R}^{\prime},{\bf s}^{\prime}_{j})$ (6)
with the kinetic energy of the c.m. motion of the cluster, and the kinetic
energy $T[\nabla_{s_{j}}]$ of the internal motion. The interaction $V({\bf
R},{\bf s}_{j};{\bf R}^{\prime},{\bf s}^{\prime}_{j})$ contains the mutual
interaction $V_{ij}({\bf r}_{i},{\bf r}_{j},{\bf r}^{\prime}_{i},{\bf
r}^{\prime}_{j})$ between the quartet particles as well as the interaction
with an external potential (for instance, the mean-field potential of the core
nucleus), with strict fulfillment of the Pauli principle.
For the c.m. motion we have the wave equation
$-\frac{\hbar^{2}}{8m}\nabla_{R}^{2}\Psi^{\rm c.m.}({\bf R})-\frac{\hbar^{2}}{4m}\int d^{9}s_{j}\,\varphi^{{\rm intr},*}({\bf s}_{j},{\bf R})\,[\nabla_{R}\varphi^{{\rm intr}}({\bf s}_{j},{\bf R})]\cdot[\nabla_{R}\Psi^{\rm c.m.}({\bf R})]-\frac{\hbar^{2}}{8m}\int d^{9}s_{j}\,\varphi^{{\rm intr},*}({\bf s}_{j},{\bf R})[\nabla_{R}^{2}\varphi^{{\rm intr}}({\bf s}_{j},{\bf R})]\,\Psi^{\rm c.m.}({\bf R})+\int d^{3}R^{\prime}\,W({\bf R},{\bf R}^{\prime})\,\Psi^{\rm c.m.}({\bf R}^{\prime})=E\,\Psi^{\rm c.m.}({\bf R}),$
with the c.m. potential
$W({\bf R},{\bf R}^{\prime})=\int d^{9}s_{j}\,d^{9}s^{\prime}_{j}\,\varphi^{{\rm intr},*}({\bf s}_{j},{\bf R})\left[T[\nabla_{s_{j}}]\,\delta^{3}({\bf R}-{\bf R}^{\prime})\delta^{9}({\bf s}_{j}-{\bf s}^{\prime}_{j})+V({\bf R},{\bf s}_{j};{\bf R}^{\prime},{\bf s}^{\prime}_{j})\right]\varphi^{{\rm intr}}({\bf s}^{\prime}_{j},{\bf R}^{\prime})\,.$ (7)
For the intrinsic motion we find the wave equation
$-\frac{\hbar^{2}}{4m}\Psi^{\rm c.m.*}({\bf R})[\nabla_{R}\Psi^{\rm c.m.}({\bf R})]\cdot[\nabla_{R}\varphi^{{\rm intr}}({\bf s}_{j},{\bf R})]-\frac{\hbar^{2}}{8m}|\Psi^{\rm c.m.}({\bf R})|^{2}\nabla_{R}^{2}\varphi^{{\rm intr}}({\bf s}_{j},{\bf R})+\int d^{3}R^{\prime}\,d^{9}s^{\prime}_{j}\,\Psi^{\rm c.m.*}({\bf R})\left[T[\nabla_{s_{j}}]\delta^{3}({\bf R}-{\bf R}^{\prime})\delta^{9}({\bf s}_{j}-{\bf s}^{\prime}_{j})+V({\bf R},{\bf s}_{j};{\bf R}^{\prime},{\bf s}^{\prime}_{j})\right]\Psi^{\rm c.m.}({\bf R}^{\prime})\varphi^{{\rm intr}}({\bf s}^{\prime}_{j},{\bf R}^{\prime})=F({\bf R})\varphi^{{\rm intr}}({\bf s}_{j},{\bf R})\,.$ (8)
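These coupled equations are obtained, schematically, by inserting the decomposition (4) into the stationary equation $H\Phi=E\Phi$ with $H$ from (6): the c.m. Laplacian acting on the product gives
$\nabla_{R}^{2}\big[\varphi^{{\rm intr}}({\bf s}_{j},{\bf R})\,\Psi^{\rm c.m.}({\bf R})\big]=\Psi^{\rm c.m.}\,\nabla_{R}^{2}\varphi^{{\rm intr}}+2\,[\nabla_{R}\varphi^{{\rm intr}}]\cdot[\nabla_{R}\Psi^{\rm c.m.}]+\varphi^{{\rm intr}}\,\nabla_{R}^{2}\Psi^{\rm c.m.},$
and projecting onto $\varphi^{{\rm intr}}$, using the normalization (5), yields the c.m. equation, while projecting onto $\Psi^{\rm c.m.}$ yields the intrinsic equation (8).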
Both the c.m. wave equation above and the intrinsic wave equation (8) are coupled by contributions containing the expression
$\nabla_{R}\varphi^{{\rm intr}}({\bf s}_{j},{\bf R})$. This expression
vanishes in homogeneous matter, and we recover the in-medium Schrödinger
equation for $\alpha$ clusters in matter without external potential. Then, the
eigenvalue $F({\bf R})$ of Eq. (8) represents the bound state energy of the
$\alpha$ particle which is shifted in dense matter because of Pauli blocking.
The contribution of the gradient terms was recently investigated by Yang et
al. Yang23 . It can be shown that the second term of Eq. (8) vanishes. In the
present work, we neglect the contributions of the gradient terms. This
corresponds to a local density approximation, as is often used in many-body
theories.
## 3 Quartets in nuclei in Thomas-Fermi approximation
### 3.1 Mean field for the c.m. motion
We would like to emphasize that in general non-local interactions are
possible. In particular, the Pauli blocking considered in the following is
non-local. To simplify the calculations, local approximations are often used,
$\displaystyle W({\bf R},{\bf R}^{\prime})\approx W({\bf R})\delta^{3}({\bf
R}-{\bf R}^{\prime}),$ $\displaystyle W({\bf R})=W^{\rm ext}({\bf R})+W^{\rm
intr}({\bf R}).$ (9)
$W^{\rm ext}({\bf R})=W^{\rm mf}({\bf R})$ is the contribution of external
potentials, here the mean field of the core nucleons. The interaction within the cluster, according to Eq. (8), gives the contribution $W^{\rm intr}({\bf R})$.
We give a short description, for details see Refs. wir ; R17 ; Ro18 .
If we know the nucleon densities of the core nucleus, the mean fields can be
easily calculated. The mean-field contribution $W^{\rm mf}({\bf R})$ is
obtained by double folding the density distribution of the core nucleus and
the intrinsic density distribution of the quartet at c.m. position $\bf R$
with the interaction potential of the nucleons. An $\alpha$-like Gaussian
density distribution was assumed for the bound quartet.
For the Coulomb interaction we calculate the double-folding potential
$V^{\rm Coul}_{\alpha-{\rm O}}(R)=\int d^{3}r_{1}\int d^{3}r_{2}\rho_{{\rm
O}}({\bf r}_{1})\rho_{\alpha}({\bf r}_{2})\frac{e^{2}}{|{\bf R}-{\bf
r}_{1}+{\bf r}_{2}|}\,.$ (10)
The charge density of the $\alpha$ nucleus according to
$\rho_{\alpha}(r)=0.2114\,\,{\rm fm}^{-3}\,e^{-0.7024\,\,r^{2}/{\rm fm}^{2}}$
(11)
reproduces the measured rms point radius 1.45 fm. For the density distribution
of 16O, the expression Qu2011
$n^{\rm WS}_{B,{\rm O}}(r)=\frac{0.168\,{\rm fm}^{-3}}{1+e^{(r/{\rm
fm}-2.6)/0.45}}$ (12)
was given, which reproduces the rms point radius 2.6201 fm; alternatively, Gaussian parametrizations can be used wir .
The convolution integral (10) is easily evaluated in Fourier representation
and gives, for the parameter values considered here wir ,
$V^{\rm Coul}_{\alpha-{\rm O}}(R)=\frac{16\times 1.44}{R}\,{\rm MeV\,fm}\times\left[{\rm Erf}(0.7683\,R/{\rm fm})-0.9097\,(R/{\rm fm})\,e^{-0.2274\,R^{2}/{\rm fm}^{2}}\right]\,.$ (13)
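As a quick numerical cross-check of this closed form, the following Python sketch (our illustration, not part of the cited calculations) evaluates Eq. (13) and verifies that it approaches the point-charge limit $2\cdot 8\cdot 1.44/R$ MeV fm at large separations:

```python
import math

def v_coul_alpha_O(R_fm):
    """Double-folding Coulomb potential of Eq. (13), in MeV (R in fm)."""
    pref = 16.0 * 1.44 / R_fm  # Z_alpha * Z_O * e^2 = 2 * 8 * 1.44 MeV fm
    return pref * (math.erf(0.7683 * R_fm)
                   - 0.9097 * R_fm * math.exp(-0.2274 * R_fm**2))

for R in (2.0, 4.0, 6.0, 8.0):
    print(f"R = {R:3.1f} fm:  folded {v_coul_alpha_O(R):7.3f} MeV,"
          f"  point charge {16.0 * 1.44 / R:7.3f} MeV")
# At large R the folded potential merges with the point-charge Coulomb tail.
```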
For the nucleon-nucleon contribution to the mean field, a parametrized effective nucleon interaction
$V_{N-N}(s/{\rm fm})=c\,\exp(-4s)/(4s)-d\,\exp(-2.5s)/(2.5s)$ (14)
can be used, which is motivated by the M3Y interaction M3YReview ; here $s$ denotes the distance between the nucleons. The parameters $c,d$ are adapted to reproduce known
data, see Po14 ; Xu16 ; Xu17 for the case of a lead core nucleus. For the
oxygen core nucleus, the parameter values $c,d$ are given below in Eq. (22). As is also common in other mean-field approaches, we fit the mean-field parameters to measured data. The nucleonic contribution $V^{\rm N-N}_{\alpha-{\rm O}}(R)$
to the mean field is calculated in analogy to Eq. (10) replacing the Coulomb
interaction by the nucleon interaction (14). With both contributions, the
mean-field part of the cluster potential is
$W^{\rm ext}({\bf R})=W^{\rm mf}({\bf R})=V^{\rm Coul}_{\alpha-{\rm
O}}(R)+V^{\rm N-N}_{\alpha-{\rm O}}(R).$ (15)
The local approximation $W^{\rm intr}({\bf R})$, Eq. (9), for the intrinsic
contribution to the effective c.m. potential is more involved. It contains the
binding energy of the cluster taking into account the Pauli blocking of the
surrounding matter. The local density approximation neglects any gradient
terms so that homogeneous-matter results can be used.
The intrinsic wave equation (8) describes, in the zero-density limit, the
formation of an $\alpha$ cluster with binding energy $B_{\alpha}=28.3$ MeV. In
homogeneous matter, the binding energy is reduced due to Pauli blocking. The
shift of the binding energy is determined by the baryon density
$n_{B}=n_{n}+n_{p}$ and the asymmetry $\delta=2n_{p}/n_{B}-1$. For the c.m.
momentum ${\bf P}=0$, the Pauli blocking term depends on the baryon density
$n_{B}$ Po14 ; wir as
$W^{\rm Pauli}(n_{B},\delta)\approx 4515.9\,{\rm MeV\,fm}^{3}\,n_{B}-100935\,{\rm MeV\,fm}^{6}\,n_{B}^{2}(1+\delta^{2})+1202538\,{\rm MeV\,fm}^{9}\,n_{B}^{3}(1+3\delta^{2})\,.$ (16)
This approximation formula applies to the density range $n_{B}\leq n_{\rm
crit}=0.02917$ fm${}^{-3}$. In particular, the bound state is dissolved and merges
with the continuum of the scattering states at the critical density $n_{\rm
crit}$ (introduced as Mott density). A more detailed discussion of this ansatz
for the Pauli blocking term will follow below, see section 4. For the
intrinsic wave function of the quartet, we can assume an $\alpha$-like
Gaussian function to describe the bound state. The width parameter of the free
$\alpha$ particle is only weakly changed when it approaches the critical
density, see Ref. Po14 .
Below the critical density, $n_{B}\leq n_{\rm crit}$, the intrinsic potential
$W^{\rm intr}({\bf R})=-B_{\alpha}+W^{\rm Pauli}[n_{B}({\bf R})],\qquad
n_{B}\leq n_{\rm crit}$ (17)
results in a local density approximation. For densities above the critical density, the intrinsic energy of the quartet is minimal if all four nucleons are at the Fermi energy (ideal Fermi gas); for symmetric matter and $n_{B}\geq n_{\rm crit}$,
$W^{\rm intr}({\bf R})=4E_{F}[n_{B}({\bf R})],$ (18)
with
$E_{F}(n_{B})=(\hbar^{2}/2m)(3\pi^{2}n_{B}/2)^{2/3}.$ (19)
### 3.2 Thomas-Fermi rule and results for 20Ne in local density approximation
The quartetting wave function approach for 20Ne in local density approximation
has been considered in Ref. wir . We present some results for the effective potential $W({\bf R})$ and the wave function $\psi({\bf R})$, see Fig. 1. For this purpose, we use empirical data from the nuclei involved.
Figure 1: Effective potential $W^{\rm TF}(R)$ for the center of mass motion of
the quartet on top of 16O. The Thomas-Fermi model has been used. The formation
of a pocket is shown.
The mean-field contribution $W^{\rm ext}(R)$ (15) is given by the double-
folding Coulomb and $N-N$ potentials. Empirical values for the densities of
the $\alpha$ particle (11) and the 16O core nucleus (12) are known from
scattering experiments, such as the rms radii, so that the Coulomb interaction
$V^{\rm Coul}_{\alpha-{\rm O}}(R)$ (13) as well as the nucleon-nucleon
interaction $V^{\rm N-N}_{\alpha-{\rm O}}(R)$ can be calculated.
With respect to $W^{\rm intr}({\bf R})$, the local density approximation is
also very simple: there are two regions, separated by the critical radius $r_{\rm crit}=3.302$ fm at which the density of the 16O core nucleus (12) reaches the critical value $n_{B}(r_{\rm crit})=n_{\rm crit}=0.02917$ fm${}^{-3}$. We obtain
$-B_{\alpha}+W^{\rm Pauli}[n_{B}(r_{\rm crit})]=4E_{F}[n_{B}(r_{\rm crit})]$,
and the bound state merges with the continuum of scattering states.
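These two numbers can be reproduced from Eqs. (12), (16) and (19) alone. The following Python sketch (our illustration; we assume $\hbar^{2}/2m\approx 20.72$ MeV fm${}^{2}$) solves the Mott condition $-B_{\alpha}+W^{\rm Pauli}(n_{B})=4E_{F}(n_{B})$ by bisection and then inverts the 16O profile (12) for the critical radius:

```python
import math

HB2_2M = 20.72  # hbar^2 / (2m) in MeV fm^2 (assumed value)

def w_pauli(n):  # Eq. (16) for symmetric matter (delta = 0), in MeV
    return 4515.9 * n - 100935.0 * n**2 + 1202538.0 * n**3

def e_fermi(n):  # Eq. (19), in MeV
    return HB2_2M * (1.5 * math.pi**2 * n) ** (2.0 / 3.0)

def mott(n):     # -B_alpha + W^Pauli(n) - 4 E_F(n); zero at the Mott density
    return -28.3 + w_pauli(n) - 4.0 * e_fermi(n)

lo, hi = 0.01, 0.04           # bracket in fm^-3; mott(lo) < 0 < mott(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mott(mid) < 0 else (lo, mid)
n_crit = 0.5 * (lo + hi)

# invert the 16O profile (12): n(r) = 0.168 / (1 + exp((r - 2.6) / 0.45))
r_crit = 2.6 + 0.45 * math.log(0.168 / n_crit - 1.0)
print(f"n_crit = {n_crit:.5f} fm^-3, r_crit = {r_crit:.3f} fm")
# expected output close to n_crit = 0.0292 fm^-3 and r_crit = 3.302 fm
```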
For $R>r_{\rm crit}$, the intrinsic part $W^{\rm intr}(R)$ contains the bound
state energy -28.3 MeV of the free $\alpha$ particle, which is shifted due to
Pauli blocking. At $r_{\rm crit}$ the bound state merges with the continuum,
so that we have the condition (symmetric matter)
$W(r_{\rm crit})=W^{\rm ext}(r_{\rm crit})+4E_{F}(n_{\rm crit})=\mu_{4},$ (20)
There, the intrinsic wave function changes from a bound state to four uncorrelated
quasiparticles on top of the Fermi sphere (the states below the Fermi energy
are already occupied).
For $R<r_{\rm crit}$, the Fermi energy $4E_{F}[n(R)]$ appears in addition to
the mean-field contribution $W^{\rm ext}(R)$. In the Thomas-Fermi model, for a
given potential $W^{\rm ext}(R)$ the density is determined by the condition
that $W^{\rm ext}(R)+4E_{F}[n_{B}(R)]$ remains a constant, here $\mu_{4}$. We
find the effective potential $W^{\rm TF}(R)$, which is continuous but has a
kink at $r_{\rm crit}$. It is an advantage of the Thomas-Fermi model that the
condition $W^{\rm TF}(R)=\mu_{4}=$ const applies to the entire range $R<r_{\rm
crit}$, independently of the shape of the mean-field potential $W^{\rm
ext}(R)$ and the corresponding density distribution. We analyze this property
in the following section.
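This construction can be stated as an explicit inversion (our rewriting of the condition above): for $R<r_{\rm crit}$, solving $W^{\rm ext}(R)+4E_{F}[n_{B}(R)]=\mu_{4}$ with Eq. (19) gives the Thomas-Fermi density profile
$n_{B}(R)=\frac{2}{3\pi^{2}}\left[\frac{\mu_{4}-W^{\rm ext}(R)}{4\,\hbar^{2}/2m}\right]^{3/2},$
so that a deeper mean-field potential is compensated by a higher local density, leaving $W^{\rm TF}(R)$ flat inside the core.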
While the Coulomb part of the external potential as well as the intrinsic part
of the effective potential $W^{\rm TF}(R)$ are fixed, the two parameters $c,d$
in Eq. (14) for the $N-N$ part of the external potential can be adjusted such
that measured data are reproduced. In the case of heavy nuclei that are
$\alpha$ emitters, such as 212Po Po14 , two conditions can be formulated:
i) For $\alpha$ emitters, the normalized solution of the c.m. wave equation
(neglecting the decay) gives the energy eigenvalue $E_{\alpha}=E_{\rm
tunnel}$. This eigenvalue should correspond to the measured energy after
decay, which is given by the $Q$ value.
ii) This value $E_{\rm tunnel}$ should coincide with the value $\mu_{4}$. In
the context of the local density approach, this is the value that the four
nucleons must have in order to implement them into the core nucleus. We denote
this condition
$E_{\alpha}=\mu_{4}$ (21)
as the Thomas-Fermi rule wir . With both conditions, the parameters $c,d$ for
the double folding $N-N$ interaction potential are found, and values for the
preformation factor and the half-life of the $\alpha$ decay were determined
for heavy nuclei, see Ref. Po14 ; Xu16 ; Xu17 , where further discussions were
made.
In contrast to the $\alpha$ decay of 212Po where the $Q$ value can be used to
estimate the chemical potential $\mu_{4}$ Po14 , the nucleus 20Ne is stable.
However, we can use the additional binding in the transition from 16O
($B_{{}^{16}{\rm O}}=127.66$ MeV) to 20Ne ($B_{{}^{20}{\rm Ne}}=160.645$ MeV)
by adding the four nucleons. The difference fixes the position of the in-core
effective potential $\mu_{4}=B_{{}^{16}{\rm O}}-B_{{}^{20}{\rm Ne}}=-33.0$
MeV.
Another condition is that the solution of the Schrödinger equation for the
four-nucleon c.m. motion in the effective potential $W(R)$ gives the energy
eigenvalue $E_{\alpha,{\rm bound}}$ at this value -33 MeV, so that the energy
eigenvalue of the $\alpha$-like cluster coincides with the Fermi energy
$\mu_{4}$ (the Thomas-Fermi rule, see also the discussion in Ref. Xu16 ). Both
conditions are used to fix the parameters $c,d$. The values
$c=4650\,\,{\rm MeV\,\,\,\,and}\,\,\,d=1900\,\,{\rm MeV}$ (22)
have been found wir .
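For orientation, the following Python sketch (our illustration) tabulates the effective interaction (14) with the parameters (22); it exhibits a repulsive core below roughly 0.25 fm and an attraction of about -41 MeV at $s=1$ fm:

```python
import math

C, D = 4650.0, 1900.0  # MeV, the parameter values of Eq. (22)

def v_nn(s):
    """Effective N-N interaction of Eq. (14), in MeV (s in fm)."""
    return (C * math.exp(-4.0 * s) / (4.0 * s)
            - D * math.exp(-2.5 * s) / (2.5 * s))

for s in (0.2, 0.5, 1.0, 1.5, 2.0):
    print(f"s = {s:3.1f} fm:  V_NN = {v_nn(s):9.2f} MeV")
```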
The resulting effective potential $W^{\rm TF}(R)$ (17) for the center of mass
motion of the quartet is shown in Fig. 1. One can see the formation of a
pocket near the surface caused by the formation of an $\alpha$-like cluster.
The sharp kink at the critical radius $r_{\rm crit}=3.302$ fm is a consequence
of the local approximation for the Pauli blocking term. A smooth behavior is
expected if the finite extension of the $\alpha$-like cluster is taken into
account so that the kink generated by the local density approximation is
smeared out.
The wave function for the quartet center-of-mass motion $\psi^{\rm TF}_{\rm
c.m.}(R)$ is found as a solution of the Schrödinger equation, mass $4m$, with
the potential $W^{\rm TF}(R)$. The energy eigenvalue is -33.0 MeV. A graph of
$(4\pi)^{1/2}R\,\psi^{\rm TF}_{\rm c.m.}(R)$ is shown in Fig. 2. As a result,
in Ref. wir the rms point radius 2.864 fm for 20Ne was calculated which is in
good agreement with the experimental rms point radius of 2.87 fm. The
normalization is $4\pi\int_{0}^{\infty}r^{2}\psi^{2}_{\rm c.m.}(r)dr=1$.
Integrating from 0 to $r_{\rm crit}=3.302$ fm, the part of the quartet where
the internal structure is the product of free states, comes out at 0.3612. The
remaining part where the internal structure is given by an $\alpha$-like bound
state is 0.6388.
Figure 2: Wave function for the c.m. motion of the quartet. A prefactor
$(4\pi)^{1/2}R$ is introduced so that the integral over $R$ of the squared
quantity is normalized to 1. The solution for the Thomas-Fermi model
$\psi^{\rm TF}_{\rm c.m.}(R)$ (red, dashed) is compared with the non-
interacting shell-model calculation $\psi_{2s^{4}}(R)$ (blue). The shift of
the maximum is caused by the formation of a pocket, see Fig. 1. Dotted line:
$r_{\rm crit}=3.302$ fm.
A further discussion of the optical-model description and the double-folding potential is given in App. A. Note that standard optical-model
potentials have a diverging repulsive potential below $r_{\rm crit}$.
### 3.3 Discussion of the Thomas-Fermi rule $E_{\alpha}=\mu_{4}$
The condition $E_{\alpha}=\mu_{4}$ (21) is a consequence of the Thomas-Fermi
model, which applies to infinite matter: an additional nucleon with given spin
and isospin can be introduced at the corresponding chemical potential
$\mu_{\sigma,\tau}$. At zero temperature, this coincides with the
corresponding Fermi energy (plus the potential energy). For finite systems such as nuclei, the energy levels of the single-nucleon states are discrete. If we add a nucleon to the core nucleus, in which all single-nucleon states below a certain energy are occupied, the next free single-nucleon state lies at a finite distance above the chemical potential. This means that, under these considerations, the quartet cannot be introduced at $\mu_{4}$ but only at a higher value $E_{\alpha}>\mu_{4}$, which is now a new parameter. This aspect has been
worked out already in Xu17 . We do the same here for 20Ne.
We compare our calculations with values for 212Po. The $\alpha$ decay energy
$Q_{\alpha}$ was introduced as the difference between the binding energy of
the mother nucleus (212Po) and the binding energies of the daughter nuclei
(208Pb and $\alpha$). Similarly, we have -4.73 MeV here, so that the energy eigenvalue of the Schrödinger equation comes out as $E^{0}_{\alpha}-Q_{\alpha}=-28.3-4.73$ MeV $=-33.03$ MeV. As a second condition, we used the results for 212Po. If $d=3415.56$ remains the same, this energy eigenvalue of the Schrödinger equation is reproduced with $c=10623$; this results in $\mu_{4}=-32.388$ MeV and $P_{\alpha}=0.72$. If we take $c=11032$ from Po, we get $d=3513.46$ as well as $\mu_{4}=-32.12$ MeV and $P_{\alpha}=0.74$.
We reproduce a large preformation factor $P_{\alpha}$ in both cases. In
contrast to the Thomas-Fermi model, the condition $E_{\alpha}=\mu_{4}$ is not
valid. The value of $\mu_{4}$ is not below $E_{\alpha}$, as expected from the shell-model considerations; instead, $E_{\alpha}<\mu_{4}$. This means that it is
energetically more favorable for the nucleus to form correlated quartets
instead of remaining in uncorrelated single-nucleon (shell model) states. This
will be seen from the THSR calculations, in which the core nucleus 16O also
shows $\alpha$-like correlations.
## 4 Shell model calculations
### 4.1 Comparison with the harmonic oscillator model
The local density approximation (Thomas-Fermi model) is not able to describe
the nuclear structure of the core nucleus. In particular, the Thomas-Fermi
rule must be replaced by a more microscopic approach, see Po14 ; Xu16 ; Xu17 .
However, the behavior of the effective c.m. potential, which remains almost constant within the core nucleus, is also interesting when shell-model states are used. A first attempt was made in Ref. wir with harmonic
oscillator states. The results of the simple Thomas-Fermi model, in particular
the approximate constancy of the c.m. quartetting potential in the core
nucleus and the Thomas-Fermi rule, can be verified. However, the harmonic
oscillator potential is not realistic for nuclei, especially near the surface
of the core nucleus where $\alpha$-like quartets are formed. We present here
calculations with more realistic potentials (units MeV, fm), see also Mirea .
The intrinsic nucleon-nucleon interaction $W^{\rm intr}(R)$, which is
suppressed due to Pauli blocking, is not considered in this section 4.1.
A more systematic way to find a suitable simple basis of single-particle
states is to use the Woods-Saxon potential WS for $Z=N$, see Ref. Ro18 ,
$V_{\rm WS}(r)=\frac{V_{0}(1+3\kappa/A)}{1+\exp[(r-R_{0}A^{1/3})/a]}$ (23)
with $V_{0}=-52.06$ MeV, $\kappa=0.639$, $R_{0}=1.26$ fm, $a=0.662$ fm. The
normalized solution $\psi_{2s}(r)$ for the 2$s$ state is shown in Fig. 3,
eigenvalue $E_{2s}=-9.162$ MeV. For comparison, the harmonic oscillator wave
function
$\psi_{2s}^{\rm HO}(r)=-\left(\frac{a^{\rm HO}}{\pi}\right)^{3/4}e^{-a^{\rm
HO}r^{2}/2}\left(a^{\rm
HO}r^{2}-\frac{3}{2}\right)\left(\frac{2}{3}\right)^{1/2},$ (24)
is also shown, where the parameter $a^{\rm HO}=0.31047$ fm${}^{-2}$ is chosen so that
the values coincide at $r=0$. A scaling of the $r$-axis is considered to make
both coincide, $\psi_{2s}^{\rm HO}(r^{\prime})=\psi_{2s}(r)/(1+0.0024719\,r)$.
(The amplitude correction is necessary to reproduce the correct value of the
minimum). This defines the relationship $r^{\prime}=f_{\rm scal}(r)$ shown in
Fig. 3.
Figure 3: Normalized wave function $\psi_{2s}(r)$ for the Woods-Saxon
potential (23). For comparison, the harmonic oscillator wave function
$\psi_{2s}^{\rm HO}(r)$ is also given, where the potential parameter $a^{\rm
HO}$ is chosen so that $\psi_{2s}(0)$ coincides. The scaling function $f_{\rm
scal}(r)$ gives full coincidence of both wave functions.
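The quoted eigenvalue can be checked with a few lines of code. The following Python sketch (our illustration; we assume a single nucleon with $\hbar^{2}/2m=20.72$ MeV fm${}^{2}$ in the potential (23) for $A=16$, neglecting the Coulomb potential) integrates the radial equation $u''(r)=[V(r)-E]\,u(r)/(\hbar^{2}/2m)$ with the Numerov method and locates the bound $s$ levels from sign changes of $u(r_{\max})$; the second $s$ level should come out near the quoted $E_{2s}=-9.162$ MeV:

```python
import math

HB2_2M = 20.72  # MeV fm^2 (assumed)
V0, KAPPA, R0, A_DIFF, A = -52.06, 0.639, 1.26, 0.662, 16

def v_ws(r):  # Woods-Saxon potential of Eq. (23)
    return V0 * (1 + 3 * KAPPA / A) / (1 + math.exp((r - R0 * A**(1/3)) / A_DIFF))

def u_end(E, rmax=20.0, n=4000):
    """Numerov integration of u'' = q(r) u with u(0) = 0; returns u(rmax)."""
    h = rmax / n
    q = [(v_ws(i * h + 1e-12) - E) / HB2_2M for i in range(n + 1)]
    u_prev, u_cur = 0.0, 1e-6
    for i in range(1, n):
        f0 = 1 - h * h * q[i - 1] / 12
        f1 = 1 + 5 * h * h * q[i] / 12
        f2 = 1 - h * h * q[i + 1] / 12
        u_prev, u_cur = u_cur, (2 * f1 * u_cur - f0 * u_prev) / f2
    return u_cur

levels, E_prev = [], -60.0
s_prev = u_end(E_prev)
for k in range(1, 120):                 # scan E from -60 to -0.5 in 0.5 MeV steps
    E = -60.0 + 0.5 * k
    s = u_end(E)
    if s_prev * s < 0:                  # eigenvalue bracketed; refine by bisection
        a, b = E_prev, E
        for _ in range(50):
            m = 0.5 * (a + b)
            a, b = (m, b) if u_end(a) * u_end(m) > 0 else (a, m)
        levels.append(0.5 * (a + b))
    E_prev, s_prev = E, s
print("bound s levels (MeV):", [round(x, 3) for x in levels])
# the second entry, the 2s state, should lie close to -9.16 MeV
```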
Neglecting any intrinsic interaction, the 2$s$ wave functions can be used to
construct the quartet wave function
$\Phi_{2s^{4}}({\bf R,S,s,s}^{\prime})=\psi_{2s}({\bf
r}_{n,\uparrow})\,\psi_{2s}({\bf r}_{n,\downarrow})\,\psi_{2s}({\bf
r}_{p,\uparrow})\,\psi_{2s}({\bf r}_{p,\downarrow}).$ (25)
The wave function for the c.m. motion follows as (Jacobi-Moshinsky
coordinates ${\bf R,S,s,s}^{\prime}$ wir )
$\psi_{2s^{4}}({\bf R})=\left[\int
d^{3}Sd^{3}sd^{3}s^{\prime}|\Phi_{2s^{4}}({\bf
R,S,s,s}^{\prime})|^{2}\right]^{1/2}\,.$ (26)
The evaluation of the 9-fold integral in (26) is very time-consuming. An
approximation can be given by comparing with the solution for the harmonic oscillator wir
$\varrho_{2s^{4}}^{\rm cm,HO}(a,R)=|\psi^{\rm HO}_{2s^{4}}(R)|^{2}=\left(\frac{a}{\pi}\right)^{3/2}e^{-4aR^{2}}\times\frac{1}{10616832}\,\big(24695649+14905152\,aR^{2}+354818304\,a^{2}R^{4}-876834816\,a^{3}R^{6}+1503289344\,a^{4}R^{8}-1261699072\,a^{5}R^{10}+613416960\,a^{6}R^{12}-150994944\,a^{7}R^{14}+16777216\,a^{8}R^{16}\big)\,.$ (27)
The parameter $a^{\prime\prime}=0.287038$ fm${}^{-2}$ can be chosen to reproduce the
value at $R=0$ (three-fold integral). The scaling $R^{\prime\prime}=f_{\rm
scal}(R)+0.174\,(e^{R/2.924}-1)$ fulfills normalization and improves the
asymptotic behavior for large $R$, so that $\varrho_{2s^{4}}^{\rm
cm}(R)\approx\varrho_{2s^{4}}^{\rm cm,HO}(a^{\prime\prime},R")$. A plot of
$(4\pi R^{2})^{1/2}\psi_{2s^{4}}(R)$ is shown in Fig. 2. The normalization
$\int_{0}^{\infty}4\pi R^{2}\psi^{2}_{2s^{4}}(R)dR=1$ holds.
We reconstruct the effective potential from the wave function
$\psi_{2s^{4}}(R)=(\varrho_{2s^{4}}^{\rm cm}(R))^{1/2}$ wir . If we restrict ourselves to $s$ states ($l=0$) and introduce
$u_{2s^{4}}(R)=(4\pi)^{1/2}R\psi_{2s^{4}}(R)$, we have
$W_{2s^{4}}(R)-E_{2s^{4}}=\frac{\hbar^{2}}{8m}\frac{1}{u_{2s^{4}}(R)}\frac{d^{2}}{dR^{2}}u_{2s^{4}}(R).$
(28)
The result is shown in Fig. 4.
Figure 4: The c.m. potential $W_{2s^{4}}(R)$, Eq. (28), compared with the
Woods-Saxon potential of the quartet.
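The reconstruction (28) is easy to carry out numerically. The following Python sketch (our illustration) applies it to an analytic test function $u(R)=R\,e^{-aR^{2}}$, for which $u''/u=4a^{2}R^{2}-6a$ is known exactly, so the finite-difference result can be verified; for the quartet of mass $4m$ we take $\hbar^{2}/8m\approx 5.18$ MeV fm${}^{2}$:

```python
import numpy as np

HB2_8M = 5.18   # hbar^2 / (8m) in MeV fm^2 for the quartet mass 4m (assumed)
a = 0.3         # fm^-2, width parameter of the Gaussian test function

R = np.linspace(0.1, 8.0, 400)
u = R * np.exp(-a * R**2)            # test function u(R) = R exp(-a R^2)

# Eq. (28): W(R) - E = (hbar^2 / 8m) u''(R) / u(R), with u'' from differences
h = R[1] - R[0]
upp = np.gradient(np.gradient(u, h), h)
w_minus_e = HB2_8M * upp / u

w_exact = HB2_8M * (4 * a**2 * R**2 - 6 * a)   # analytic u''/u for this u
print("max deviation (MeV):", np.abs(w_minus_e - w_exact)[5:-5].max())
# the reconstructed W(R) - E is harmonic, i.e. Eq. (28) recovers the potential
# that generated the wave function, up to discretization error
```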
We conclude from this: The effective c.m. potential $W(R)$ remains almost
constant within the core as expected from the Thomas-Fermi model. The value
$E_{2s^{4}}=-36.65$ MeV is close to the estimate $\mu_{4}=-33$ MeV from the Thomas-Fermi rule. It is slightly increasing near the surface, possibly because the quartet is not localized at a point but smeared out, so that it "feels" the weakening of the potential near the surface. Another reason could be the gradient terms in Eq. (8), which are neglected here. A similar behavior
was also observed for the harmonic oscillator potential in wir . In contrast
to the harmonic oscillator, where the effective potential increases with $R$,
the behavior near the surface is now more realistic. The weakening of the
Thomas-Fermi rule has been shown in Refs. Po14 ; Xu16 ; Xu17 ; wir .
### 4.2 Intrinsic interaction and Pauli blocking
We have introduced an effective c.m. potential $W(R)$, which describes the
influence of the environment (here the core nucleus) on the c.m. motion of the
quartet in mean-field approximation. Specifically, we have simulated a quartet
of 4 uncorrelated nucleons in $2s$ states moving under the influence of the
core nucleus 16O. The corresponding potential $W_{2s^{4}}(R)$ shows
approximately the constancy of the chemical potential required within the
Thomas-Fermi model.
To describe the formation of an $\alpha$-like cluster, we need to consider the
interaction within the quartet. To estimate the intrinsic interaction of the
quartet, we add for $R>r_{\rm crit}$ the energy shift $W^{\rm intr}({\bf R})$,
Eq. (17), which describes the formation of the cluster and the dissolution due
to Pauli blocking, see Fig. 5. The Coulomb potential is added, and the free effective potential of the shell model $W_{2s^{4}}(R)$ is used instead of $W^{\rm ext}$. A harmonic oscillator basis was essentially used here Ro18 . We denote this approximation for the c.m. potential as $W_{\rm
appr}(R)$.
Obviously, this c.m. potential $W_{\rm appr}(R)$ is only a rough
approximation. In particular, the sharp peak due to the sudden switching off
of the intrinsic interaction at $r_{\rm crit}=3.302$ fm does not seem
realistic. A similar peak at $r_{\rm crit}$ was also obtained for the heavy
isotopes Xu16 ; Xu17 , but it was less pronounced than for the light isotope
20Ne.
The behavior for large $R$ is correctly reproduced: the asymptote
$\lim_{R\to\infty}W(R)=-28.3$ MeV is the binding energy of the $\alpha$
particle, and the Coulomb repulsion is well represented. The attractive $N-N$
interaction is also visible, as in other approaches using an optical
potential, see App. A. As the density of the core increases, the binding
energy of the $\alpha$ cluster is weakened due to Pauli blocking, and a pocket
is formed. The behavior for small $R\leq 2$ fm is also well reproduced. The
fluctuations around the Thomas-Fermi value are due to the shell structure.
An improvement of the effective quartet potential is particularly necessary in
the vicinity of the critical density. Instead of a sharp switchover, in which
all correlations above the critical density are omitted, the correlations should decrease continuously. Quartet correlations are also present for $R\leq r_{\rm crit}$.
They can provide a contribution as resonances in the continuum, which
decreases steadily with increasing density. Furthermore, Pauli-blocking is
calculated for uncorrelated nucleons in the environment, which is expressed in
the use of the Fermi function. Correlations in the surrounding matter would
also reduce the Pauli blocking. Taking into account the c.m. movement of the
$\alpha$-cluster, the Pauli blocking is also reduced.
Furthermore, we are dealing with an inhomogeneous system, so that gradient
terms can become important. As an extended system, the $\alpha$-like cluster
is determined not only by the properties of the surrounding matter at the
position of the center of mass, but by the properties within the extension of
the cluster. Finally, the Pauli principle is a non-local effect, which is
treated as local only after some approximations. We have collected several
arguments which show that the effect of Pauli blocking should be treated as a
continuous function of density. This can help to reduce the peak at the
critical radius.
Figure 5: Quartet c.m. potentials $W(R)$. The Thomas-Fermi approximation
$W^{\rm TF}(R)$ is compared with the calculation $W_{\rm appr}(R)$ using
harmonic oscillator shell model states. Note the peak at $r_{\rm crit}=3.302$
fm.
### 4.3 Shell-model calculations
First results using shell-model calculations for 20Ne within the QWFA have been presented in Refs. Yang21 ; Bai19 . We use the widely used Woods-Saxon potential
$V_{\rm WS}\left(r\right)=\frac{V_{0}}{1+\textrm{exp}(\frac{r-R_{0}}{a})},$
(29)
together with the spin-orbit coupling interaction
$V_{\rm so}\left(r\right)=\frac{1}{2\mu^{2}r}\left(\frac{\partial}{\partial
r}\frac{\lambda V_{0}}{1+\textrm{exp}(\frac{r-R_{\rm so}}{a_{\rm
so}})}\right)\bf l\cdot\bf s$ (30)
to determine the shell model wave functions of quartet nucleons in 20Ne. The
strength of the Woods-Saxon potential is parameterized as
$V_{0}=-46\left[1\pm 0.97\left(\frac{N-Z}{A}\right)\right]$ (31)
(“$+$” for protons and “$-$” for neutrons). The parameter $R_{0}$ is
$1.43\,A^{1/3}$ fm for both protons and neutrons while the parameter $R_{\rm
so}$ is $1.37\,A^{1/3}$ fm. The diffusivity parameters $a$ and $a_{\rm so}$ are both chosen to be 0.7 fm. $\mu$ is the reduced mass of the $\alpha$-core system, and the normalization factor of the $ls$ coupling strength $\lambda$ is 37.5 for neutrons and 31 for protons. The
Coulomb potential we adopt is
$V_{C}(r)=\begin{cases}(Z-1)e^{2}\,(3R_{\rm Coul}^{2}-r^{2})/2R_{\rm Coul}^{3},&r\leq R_{\rm Coul},\\ (Z-1)e^{2}/r,&r>R_{\rm Coul},\end{cases}$ (32)
with the radius $R_{\rm Coul}=1.25\,A^{1/3}$ fm. The effective c.m. potential
constructed from the shell model quartet state for 20Ne is shown in Fig. 6.
A general discussion of the Pauli blocking term is necessary to avoid the peak
in Figs. 5 and 6. Various approximations were made when calculating the
effective potential. We mention the neglect of the gradient terms and the non-
local property of the potential $W({\bf R},{\bf R}^{\prime})$, Eq. (9), in
particular due to the Pauli blocking term. We emphasize that Eq. (16) was
derived for $\alpha$-particles in an uncorrelated medium. At zero temperature,
the medium can be strongly correlated and form $\alpha$ matter. A correlated
medium was considered in Ref. Tak04 , and the merging with the continuum was
observed at a slightly higher critical density. If we use this calculation to
construct the Pauli blocking shift, this could possibly lead to a smoother
transition and reduce the peak.
Figure 6: Quartet c.m. potential $W(R)$ for 20Ne using shell model states.
For 20Ne, the probability to find the $\alpha$-particle in the localized shell
model states can be defined as
$\displaystyle{\cal F}_{\alpha}=\int dR\,4\pi R^{2}\rho_{\rm quartet}^{\rm
c.m.}(R)\left|\left\langle\varphi_{\alpha}^{\rm intr}|\varphi_{\rm
quartet}^{\rm intr}\right\rangle(R)\right|^{2},$ (33)
where $\left\langle\varphi_{\alpha}^{\rm intr}|\varphi_{\rm quartet}^{\rm
intr}\right\rangle(R)$ is the overlap between the intrinsic wave functions of
a quartet $\varphi_{\rm quartet}^{\rm intr}$ and a free $\alpha$-particle as a
function of c.m. variable $R$. The density at the c.m. position $\bf R$ is
$\rho_{\rm quartet}^{\rm c.m.}(R)=\mid{\Psi_{\rm quartet}^{\rm c.m.}(\bf
R)}\mid^{2}$. As expected, the probability ${\cal F}_{\alpha}=2.004\times
10^{-3}$ is quite small for 20Ne as the wave function of the quartet is
approximated by a product of shell model states. However, the probability
${\cal F}_{\alpha}$ is significantly enhanced for the $\alpha$ + doubly magic core system 20Ne as compared to its neighboring isotopes. We show in
Fig. 7 the overlap between the wave functions of the quartet and the
$\alpha$-particle as a function of c.m. coordinate $R$ for 20Ne. It is clearly
demonstrated that there exists a peak in the region beyond the critical radius
(i.e. the surface region of the core). Inside the core, the probability to
find the $\alpha$-like state is quite low for 20Ne.
Figure 7: The overlap between the intrinsic wave functions of the quartet and
the $\alpha$-particle as a function of the c.m. coordinate $R$ for the
$\alpha$+doubly magic core system 20Ne.
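Numerically, Eq. (33) is a one-dimensional quadrature once the c.m. density and the overlap are tabulated on a radial grid. A minimal Python sketch (our illustration with synthetic placeholder profiles, not the actual 20Ne wave functions):

```python
import numpy as np

def integrate(y, x):                 # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

R = np.linspace(0.0, 10.0, 500)      # c.m. radius grid in fm

# placeholder profiles (illustrative shapes only, not the 20Ne results):
rho_cm = np.exp(-((R - 3.0) / 1.2) ** 2)            # |Psi^cm(R)|^2, unnormalized
rho_cm /= integrate(4 * np.pi * R**2 * rho_cm, R)   # enforce Eq.-(3)-type norm
overlap = 1.0 / (1.0 + np.exp(-(R - 3.3) / 0.4))    # <phi_alpha|phi_quartet>(R)

# Eq. (33): F_alpha = int dR 4 pi R^2 rho_cm(R) |overlap(R)|^2
f_alpha = integrate(4 * np.pi * R**2 * rho_cm * overlap**2, R)
print(f"F_alpha = {f_alpha:.4f}")
```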
## 5 Comparison with the THSR model and other approaches
### 5.1 Calculations for 20Ne
Figure 8: Variational calculations for the energy of 16O with respect to the
harmonic oscillator parameter $b$ and size parameter $\beta_{0}$ using the
THSR wave function THSR .
The THSR ansatz adeptly describes the low-density regime of $\alpha$ matter as
well as the shell model states, particularly when the c.m. wave function
coincides with the intrinsic wave function. Notably, when four $\alpha$
clusters merge into a ${}^{16}\mathrm{O}$-like configuration, the
antisymmetrization process gives rise to nucleonic $s$ and $p$ orbitals,
especially as the inter-cluster distance approaches zero. Deviations in the
Gaussian width parameters signal the presence of correlations. The $n\alpha$ THSR wave function THSR can be written as
$\Phi_{n\alpha}^{\rm THSR}\\!\\!\propto{\cal A}\
\Big{\\{}\prod_{i=1}^{n}\exp\Big{[}-\frac{2(\vec{X}_{i}-\vec{X}_{G})^{2}}{b^{2}+2\beta_{0}^{2}}\Big{]}\phi(\alpha_{i})\Big{\\}},$
(34)
where $\vec{X}_{i}$ and $\vec{X}_{G}$ are the c.m. coordinate of the $i$-th $\alpha$ cluster and the total c.m. coordinate of the $n\alpha$ system, respectively.
Figure 8 presents a THSR calculation for ${}^{16}\mathrm{O}$, where the
parameter $\beta_{0}$ reflects deviations from shell model behavior. The
observed energy minimum at a finite $\beta_{0}$ (size parameter of the THSR
wave function) signifies the existence of $\alpha$-like correlations even in
the ground state.
The uncorrelated mean-field approximation, often invoked to compute Pauli
blocking effects, may not be universally valid. In particular, $\alpha$ matter
exemplifies a scenario where the medium undergoes a transformation into a
correlated state. Analogous reconfigurations are evident in pairing phenomena
at temperatures descending below the critical value. The Tohsaki-Horiuchi-
Schuck-Röpke (THSR) formalism was conceived to elucidate $\alpha$ clustering
within such tenuous nuclear environments, exemplified by the Hoyle state of
${}^{12}\mathrm{C}$. Here, the environment of an $\alpha$ cluster is composed
of other $\alpha$ clusters, leading to a pronouncedly clustered structure.
This method has been successfully employed to investigate various $4n$ nuclei,
including ${}^{20}\mathrm{Ne}$.
Figure 9: Energy curve of 20Ne with the increase of the size parameter $\beta$
using the intrinsic THSR wave function. The asymptotic value -154.16 MeV for the
binding energy of separated ${}^{16}\mathrm{O}$ and $\alpha$ clusters is also
shown.
The microscopic THSR wave function for the nucleus ${}^{20}\mathrm{Ne}$ can be
written as
${\widehat{\Phi}}_{{\rm THSR}}(\beta)={\cal A}\Big[\exp\Big(-\frac{8r^{2}}{5(b^{2}+2\beta^{2})}\Big)\phi(\alpha)\phi({}^{16}{\rm O})\Big],$ (35)
where ${\bf r}={\bf X}_{1}-{\bf X}_{2}$. ${\bf X}_{1}$ and ${\bf X}_{2}$
represent the center-of-mass coordinates of the $\alpha$ cluster and the
${}^{16}\mathrm{O}$ cluster, respectively. It should be noted that the ${}^{16}\mathrm{O}$ cluster is described by the shell-model wave function.
Fig. 9 shows the energy curve with increasing size parameter $\beta$.
This can be transformed to the energy curve as a function of the inter-cluster
distance. The extracted effective $\alpha$-O potential would be of interest.
It should be noted, however, that the inter-cluster distance between clusters
cannot be precisely defined, especially when clusters are in close proximity,
owing to the effects of antisymmetrization.
It is not directly possible to define the inter-cluster distance $D$ in the THSR approach. Following Matsuse Mat75 , one can introduce the distance $D$ via the relation for the rms radii
$20\langle r^{2}\rangle_{\rm Ne}=16\langle r^{2}\rangle_{\rm O}+4\langle
r^{2}\rangle_{\alpha}+\frac{16}{5}\langle D^{2}\rangle$ (36)
so that
$\langle D^{2}\rangle=\frac{25}{4}\langle r^{2}\rangle_{\rm Ne}-\frac{195}{16}b^{2}$ (37)
follows. We used this quantity $D$ for the distance $r$ in Fig. 11.
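For completeness, (37) is obtained from (36) by solving for $\langle D^{2}\rangle$ and inserting harmonic-oscillator values for the cluster radii (our reconstruction of this step, assuming the c.m.-corrected values $\langle r^{2}\rangle_{\rm O}=\frac{69}{32}b^{2}$ and $\langle r^{2}\rangle_{\alpha}=\frac{9}{8}b^{2}$):
$\langle D^{2}\rangle=\frac{5}{16}\big(20\langle r^{2}\rangle_{\rm Ne}-16\langle r^{2}\rangle_{\rm O}-4\langle r^{2}\rangle_{\alpha}\big)=\frac{25}{4}\langle r^{2}\rangle_{\rm Ne}-\frac{345}{32}b^{2}-\frac{45}{32}b^{2}=\frac{25}{4}\langle r^{2}\rangle_{\rm Ne}-\frac{195}{16}b^{2}.$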
Very recently, the $5\alpha$ clustering structure of ${}^{20}\mathrm{Ne}$ was
scrutinized by Bo et al. Bo23 utilizing the THSR framework, which adopts the
container model. In this model, the intrinsic $\alpha$ cluster width parameter
$b$ is complemented by two additional parameters: $\beta_{1}$ (denoting the
width of the ${}^{16}\mathrm{O}$ core nucleus) and $\beta_{2}$ (representing
the center-of-mass motion of the residual $\alpha$ cluster). As illustrated in
Fig. 10, the energy minimum is observed at $\beta_{1}=1.5$ fm and
$\beta_{2}=3.0$ fm, corresponding to an energy of approximately $-155.3$ MeV.
The GCM calculations yield an energy of $-156.4$ MeV. Additionally, the
calculated rms radius is $2.96$ fm. A notable aspect of the THSR wave function
is its inclusion of the shell model limit, thereby ensuring an accurate
representation of the ground state of the ${}^{16}\mathrm{O}$ core nucleus.
The orthogonality between the additional fifth $\alpha$ particle and the core
states is rigorously preserved. Theoretical calculations yield favorable
comparisons with empirical data for both the binding energy and the root-mean-
square (rms) radius of the ground state. The disparity between the values of
$\beta_{1}$ and $\beta_{2}$ suggests the presence of an $\alpha$ particle atop
the doubly-magic ${}^{16}\mathrm{O}$ core. These results from the $5\alpha$
calculations set the stage for future studies to develop a more realistic
${}^{16}\mathrm{O}$-$\alpha$ effective interaction.
Figure 10: Contour plot for the ground state of ${}^{20}\mathrm{Ne}$ in the
spherical $\beta_{1}$ and $\beta_{2}$ parameter space.
Figure 11: 16O-$\alpha$ effective interaction potential as a function of the
center-of-mass distance $R$. The THSR calculations (blue full line) are
compared with the Thomas-Fermi approximation of the quartetting wave function
(red full line). The total potential (TF, THSR) is shown as well as the
Coulomb contribution (dashed lines). In addition, the Coulomb interaction for
the harmonic oscillator density of the O-core is also shown.
### 5.2 $\alpha$ matter
The equilibrium composition of homogeneous nuclear matter at low densities and
temperatures is a complex problem, since below the saturation density of
symmetric matter $\rho_{\rm sat}=0.15$ fm${}^{-3}$ a thermodynamic instability occurs
and clusters are formed. The highest binding energy per nucleon is found for
the nucleus 56Fe. Here we only consider the formation of $\alpha$-clusters
from the nucleons.
At a fixed baryon density, the mass fraction of the $\alpha$ clusters
increases with decreasing temperature. At a critical temperature, a quantum
condensate can be formed. In analogy to pairing, the $\alpha$-like quartets
are the bosonic components of the condensate. However, they are modified by
the medium Fun08 . As known from pairing, where the Bogoliubov transformation allows one to describe nuclear matter below the critical temperature, below the critical temperature for quartetting we have to consider a correlated medium, the so-called $\alpha$ matter.
In analogy to the THSR approach for low-density nuclei such as 8Be or the
Hoyle state of 12C, calculations for periodic $\alpha$-like structures were
performed in Tak04 . Orthonormal Bloch states were introduced so that Pauli
blocking by nucleons bound in $\alpha$-clusters is strictly realized. One
problem is the separation of the c.m. contribution to the kinetic energy,
which is solved by a simple ansatz based on the energy gap at zero momentum.
As a result, in Ref. Tak04 it was shown that the bound state merges with the
continuum at about $0.2\rho_{\rm sat}$.
We have also performed exploratory calculations with a separable potential
adapted to reproduce the free-$\alpha$ properties, mass and rms radius, see
Appendix B. The difference between the energy per nucleon in the uncorrelated
free-nucleon state and the $\alpha$-matter state is shown in Fig. 12. A value
$\rho_{\rm Mott}=0.04$ fm${}^{-3}$ was found for the dissolution of the bound state.
For comparison, in Fig. 12 also shown is the shift of the binding energy for
uncorrelated matter where the surrounding nucleons occupy free single-particle
states,
$E_{\rm bound}^{\rm uncorr}(n_{B})=-7.07\,{\rm MeV}+W^{\rm
Pauli}(n_{B})-E_{F}(n_{B}).$ (38)
Compared to the Pauli blocking by free nucleons considered in Eq. (16), the
blocking in $\alpha$-matter is smaller because the distribution in momentum
space is spread out, and the blocking is less efficient. As a result, the critical density at which bound states are dissolved comes out to be larger if cluster formation in the surroundings is taken into account. We expect that this modification makes the peak in Figs. 5 and 6 smoother. Further
investigations are necessary to find a better treatment of the dissolution of
clusters due to Pauli blocking.
Figure 12: Shift of the binding energy per nucleon for an $\alpha$-cluster as a function of the nucleon density $n_{B}$. The difference of the energy per
nucleon in $\alpha$-matter and in momentum eigenstates (red) is compared with
the shift (blue) in uncorrelated matter, Eq. (38).
Another approach to show that nuclear matter dissolves into clusters at low
density was presented in Ref. Ebran20 . Restricted Hartree-Fock calculations
were performed that allow the formation of separate cluster structures. Even
with this approach, the strict separation of the kinetic energy of the c.m.
remains open. An unresolved question is whether the disappearance of the
cluster structures and the appearance of a homogeneous phase is a first-order
transition.
### 5.3 Other approaches to $\alpha$-clustering in nuclei
Based on a local density approach with composition and energy shifts derived
from R1 ; R11 , Typel T14 has considered the formation of $\alpha$-particle
correlations at the surface of heavy nuclei to study the neutron skin
thickness of heavy nuclei and the slope of the nuclear symmetry energy. The
$\alpha$ particle density was considered as a function of radius for the tin
isotopes 108Sn to 132Sn, and it was shown that as the neutron density at the
skin increases, the abundance of $\alpha$-particles is suppressed as a result
of Pauli blocking. The experimental evidence for the $\alpha$ cluster
formation in the surface of neutron-rich tin isotopes was given using quasi-
free $\alpha$ cluster-knockout reactions T20 ; T21 . Note that the occurrence
of $\alpha$-clusters at the surface of 48Ca and 208Pb and its impact on the
extraction of the symmetry energy from the skin thickness is also investigated
using QWFA Yang23 . Strong closed shell structure effects and complex
derivative terms of the intrinsic wave function are properly taken into
account in QWFA Yang23 .
The question of $\alpha$ formation in the ground state of heavy nuclei has
been investigated using the AMD approach AMD ; AMD04 in several recent
publications. The AMD approach also describes the suppression of clusters
using the Pauli blocking effect. The manifestation of clustering at the
surface region where the density is low has induced many investigations on
$\alpha$-break-up reactions. We do not give a comprehensive account of various
investigations of specific isotopes Freer18 ; Yos19 ; Chi17 ; Tan21 ; Yos22 ;
Qi10 ; Nak23 ; Kimura22 ; PG11 in this paper. We would like to emphasize that
the approach of the quartet wave function presented here is also of interest
for these examples.
## 6 Conclusions
We investigated the c.m. motion of an $\alpha$-like quartet, which moves under
the influence of a core nucleus, here the 16O nucleus. In local density
approximation, an effective potential $W(R)$ for the quartet c.m. motion is
obtained, which shows a pocket structure near the surface of the nucleus. This
is important for the preformation of $\alpha$ particles near the surface. A
new aspect is the behavior of $W(R)$ inside the core nucleus, i.e. for $R\leq
r_{\rm crit}$, where the quartet bound state is dissolved due to Pauli
blocking. In contrast to earlier studies, which assume an increase in the
effective $\alpha-^{16}$O-potential with decreasing $R$, in a Thomas-Fermi
approach $W^{\rm TF}(R)=\mu_{4}$ remains constant in this range $R\leq r_{\rm
crit}$ Po14 ; Xu16 ; Xu17 ; wir . In the present work, we also show for the
shell model approach that the effective potential $W(R)$ remains almost
constant in the core nucleus. The reason for this is the exchange interaction
or Pauli blocking between the quartet nucleons and the core nucleus.
For large distances, the empirically determined M3Y potential used for $W(R)$
agrees with the optical potentials derived from scattering experiments. Near
the surface of the nucleus the Pauli blocking becomes relevant. A pocket that
is formed for the effective potential $W^{\rm TF}(R)$ is also retained after
the introduction of single-particle shell model states for the core nucleus.
However, the local density approximation for the Pauli blocking should be
improved, and it is expected that sharp peak structures observed for $W(R)$ in
shell model calculations will be smeared out.
Of interest is the comparison with the THSR approach THSR ; Toh17 , which
treats the quartets self-consistently. If a mean-field description for the
surrounding medium based on uncorrelated single-particle states is no longer
possible, correlations in the medium, especially quartetting, should be taken
into account. The full antisymmetrization of the many-body wave function is a
great challenge. The THSR approach offers us such a self-consistent,
antisymmetrized treatment of quartetting of all nucleons. A variational
principle with Gaussian wave functions was used, and nuclei with $A\leq 20$
were treated in this way. Interesting results were obtained for 20Ne Bo ; Bo2
; Bo3 considering the full antisymmetrization of the $\alpha$- and 16O-wave
functions. We have tried to find appropriate observables in the THSR approach
to derive an effective potential $W(R)$ and a wave function $\psi(R)$ for the
quartet c.m. motion, to compare them with the quartetting wave function
approach.
Our general vision is to treat quartetting in nuclear matter self-
consistently, as is the case for pairing. The approaches described in this
paper provide only partial answers to this project. The THSR approach comes
closest to this goal, but it contains some restrictions, so that it is not
generally applicable. Although the quartet wave function approach is generally
applicable, it contains several approximations that still need to be improved.
One main problem is the treatment of Pauli blocking. The local approximation
with a cut-off of $\alpha$-like clusters at the critical density needs to be
improved in future work.
### Acknowledgement
We would like to dedicate this work to the memory of our esteemed colleague
and friend, Peter Schuck, with whom we have had the privilege of collaborating
for many years. Peter’s broad interests and profound insights in nuclear
physics have been an inspiration to us all. We are deeply grateful for his
companionship and contributions throughout the years. He will be dearly
missed. His spirit and dedication to the pursuit of knowledge continue to
guide us, and in his memory, we commit to advancing the work he so
passionately embraced.
C. Xu is supported by the National Natural Science Foundation of China (Grant
No. 12275129). This work was supported in part by the National Natural Science
Foundation of China under contract Nos. 12175042, 12147101. Zhongzhou Ren
thanks the support of National Natural Science Foundation of China with grant
number 12035011. The work of G.R. received support via a joint stipend from
the Alexander von Humboldt Foundation and the Foundation for Polish Science.
## Appendix A Optical model description and double-folding potential
We discuss the effective c.m. potential $W^{\rm TF}(R)$ and compare it with
other approaches, see also wir . In particular, we check whether the choice
(22) for the double-folding potential parameters $c,\,d$ is realistic. Several
approaches to the optical potential are shown in Fig. 13.
The elastic scattering of $\alpha$ particles on the 16O nucleus was
investigated, and the corresponding optical potentials were inferred. There is
a large uncertainty for small values of $R$. A first expression for the real
part of the optical potential is Fadden66
$-\frac{V_{0}}{1+e^{(r-r_{0}A^{1/3})/a}}$ (39)
with $V_{0}=43.9$ MeV, $r_{0}=1.912$ fm and $a=0.451$ fm. Improvements were
made in Ref. Fukui16 considering the 16O (6Li,d) 20Ne transfer reaction, where
the model potential (39) with $r_{0}=1.25$ fm and $a=0.76$ fm was used, and
$V_{0}$ was adjusted to reproduce the binding energy value of 4.73 MeV.
Kumar and Kailas Kumar give the parameter values $V_{0}=142.5$ MeV,
$r_{0}=1.18$ fm and $a_{0}=0.76$ fm.
Another approach Michel was compared with experiments Oertzen ; Oertzen1 .
They used the expression
$-V_{0}\frac{1+\alpha e^{-(r/\rho)^{2}}}{[1+e^{(r-R_{R})/(2a_{R})}]^{2}}$ (40)
with $V_{0}=38$ MeV, $\rho=4.5$ fm, $R_{R}=4.3$ fm, $a_{R}=0.6$ fm, and the
energy-dependent $\alpha=3.625$. More recently, in Ref. Ohkubo a density-
dependent effective M3Y interaction (DDM3Y) was used, and a double-folding
potential was derived (Fig. 3 in Ohkubo ) which reaches about $-110$ MeV at
$R=0$.
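For orientation, the two empirical forms above are straightforward to evaluate numerically. The following Python sketch (a minimal illustration, not code from the original works) tabulates Eqs. (39) and (40) with the quoted parameter sets; the function and variable names are our own.

```python
import numpy as np

def woods_saxon(r, V0, r0, a, A=16):
    """Real part of the optical potential, Eq. (39), in MeV."""
    return -V0 / (1.0 + np.exp((r - r0 * A**(1.0 / 3.0)) / a))

def michel(r, V0=38.0, alpha=3.625, rho=4.5, RR=4.3, aR=0.6):
    """Michel-type potential, Eq. (40), in MeV."""
    return -V0 * (1.0 + alpha * np.exp(-(r / rho)**2)) \
        / (1.0 + np.exp((r - RR) / (2.0 * aR)))**2

r = np.linspace(0.0, 10.0, 101)                   # c.m. distance in fm
V_mcfadden = woods_saxon(r, 43.9, 1.912, 0.451)   # McFadden parameter set
V_kumar = woods_saxon(r, 142.5, 1.18, 0.76)       # Kumar/Kailas parameter set
V_michel = michel(r)                              # Michel parameter set
```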
Figure 13: Optical model potentials from $\alpha$ \- 16O scattering: The
double-folding Coulomb plus nucleon-nucleon interaction and the Thomas-Fermi
approach $W^{\rm TF}$ plus the medium-dependent $\alpha$-binding energy in
comparison with empirical expressions by Michel Michel , McFadden Fadden66 ,
and Kumar Kumar .
Note that $V_{\rm eff}(R)=W(R)+B_{\alpha}\approx V^{\rm Coul}_{\alpha-{\rm
O}}(R)+V^{\rm N-N}_{\alpha-{\rm O}}(R)$ is the mean field relative to the free
$\alpha$ particle. Below $R=5$ fm, Pauli blocking terms occur, see Eq. (17).
The agreement with Michel et al. Michel is quite good. We conclude that the
choice (22) is reasonable. The standard approaches of the optical model
potentials have a diverging repulsive potential below $r_{\rm crit}$.
## Appendix B $\alpha$-shifts
In order to obtain a simple model to reproduce the essential properties of the
$\alpha$-particle, we consider a microscopic model to describe correlations.
With the separable interaction R1
$V(p_{1},p_{2};p^{\prime}_{1},p^{\prime}_{2})=-\frac{\lambda}{\Omega}e^{-\frac{(p_{2}-p_{1})^{2}}{4\gamma^{2}}}e^{-\frac{(p^{\prime}_{2}-p^{\prime}_{1})^{2}}{4\gamma^{2}}}\delta_{p_{1}+p_{2},p^{\prime}_{1}+p^{\prime}_{2}}\delta_{\sigma\tau,\sigma^{\prime}\tau^{\prime}}$
(41)
with $\Omega$ the normalization volume, $\lambda=1449.6$ MeV fm3,
$\gamma=1.152$ fm-1, we solve the $\alpha$ cluster within a variational ansatz
$\Phi_{\alpha}^{\rm Gauss}(p_{1},p_{2},p_{3},p_{4})=\frac{1}{\rm
norm^{2}}e^{-(p_{1}^{2}+p_{2}^{2}+p_{3}^{2}+p_{4}^{2})b^{2}/4}$ (42)
with the c.m. momentum $P=p_{1}+p_{2}+p_{3}+p_{4}$. The norm follows from
${\rm
norm}=\sum_{p}e^{-b^{2}p^{2}/2}=\int\frac{d^{3}p\Omega}{(2\pi)^{3}}e^{-b^{2}p^{2}/2}=\frac{\Omega}{(2\pi
b^{2})^{3/2}}$ (43)
We calculate the kinetic energy
$\displaystyle T=\frac{\hbar^{2}}{2m}\frac{1}{\rm
norm^{4}}\sum_{p}e^{-(p_{1}^{2}+p_{2}^{2}+p_{3}^{2}+p_{4}^{2})b^{2}/2}(p_{1}^{2}+p_{2}^{2}+p_{3}^{2}+p_{4}^{2})$
$\displaystyle=4\frac{\hbar^{2}}{2m}\frac{1}{\rm
norm}\int\frac{d^{3}p\Omega}{(2\pi)^{3}}e^{-b^{2}p^{2}/2}p^{2}=12\frac{\hbar^{2}}{2mb^{2}}$
(44)
From this, 1/4 is connected with the c.m. motion (introducing Jacobi
coordinates,
$p_{1}^{2}+p_{2}^{2}+p_{3}^{2}+p_{4}^{2}=2q_{1}^{2}+\frac{3}{2}q_{2}^{2}+\frac{4}{3}q_{3}^{2}+\frac{1}{4}P^{2}$).
The intrinsic kinetic energy is $9\hbar^{2}/(2mb^{2})$.
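The kinetic-energy decomposition quoted in parentheses can be verified symbolically. Below is a hedged one-dimensional check with sympy, assuming one common convention for the Jacobi momenta (the convention itself is our assumption, not stated in the text).

```python
import sympy as sp

# Symbolic check of p1^2+p2^2+p3^2+p4^2 = 2 q1^2 + (3/2) q2^2 + (4/3) q3^2 + P^2/4
p1, p2, p3, p4 = sp.symbols("p1 p2 p3 p4")
q1 = (p1 - p2) / 2                    # assumed Jacobi-momentum convention
q2 = (p1 + p2 - 2 * p3) / 3
q3 = (p1 + p2 + p3 - 3 * p4) / 4
P = p1 + p2 + p3 + p4
lhs = p1**2 + p2**2 + p3**2 + p4**2
rhs = 2 * q1**2 + sp.Rational(3, 2) * q2**2 + sp.Rational(4, 3) * q3**2 + P**2 / 4
print(sp.simplify(lhs - rhs))         # prints 0
```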
The potential energy is obtained as
$\displaystyle
4^{2}\frac{3}{4}\frac{1}{2}\sum_{12,1^{\prime}2^{\prime}}\phi(p_{1})\phi(p_{2})V(12,1^{\prime}2^{\prime})\phi(p^{\prime}_{1})\phi(p^{\prime}_{2})$
$\displaystyle=-6\lambda\frac{\gamma^{6}b^{3}}{\pi^{3/2}(\gamma^{2}b^{2}+2)^{3}}$
(45)
The total energy has its minimum of $-28.3087$ MeV at $b=1.93354$ fm. The
energy per nucleon is $-7.08$ MeV. The empirical rms point radius is reproduced.
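The quoted minimum can be reproduced with a few lines of Python, combining the intrinsic kinetic energy $9\hbar^{2}/(2mb^{2})$ with the potential energy (45); the numerical value used for $\hbar^{2}/2m$ is our input, not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

hbar2_2m = 20.7355        # hbar^2/(2m) in MeV fm^2 (nucleon mass; our input)
lam, gam = 1449.6, 1.152  # interaction parameters of Eq. (41)

def E_alpha(b):
    """Total variational energy: intrinsic kinetic energy plus Eq. (45)."""
    kinetic = 9.0 * hbar2_2m / b**2
    potential = -6.0 * lam * gam**6 * b**3 \
        / (np.pi**1.5 * (gam**2 * b**2 + 2.0)**3)
    return kinetic + potential

res = minimize_scalar(E_alpha, bounds=(1.0, 3.0), method="bounded")
print(res.x, res.fun)  # ~1.9335 fm and ~-28.31 MeV, i.e. ~-7.08 MeV/nucleon
```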
In a next step, we calculate the energy per nucleon $E^{\rm free}(n_{B})$ of
the symmetric matter, baryon density $n_{B}$, in a cubic box of length $La$
with periodic boundary conditions. The volume is $\Omega=(La)^{3}$. On
average, there is one nucleon of given spin and isospin in the elementary box
$a^{3}$, so that $n_{B}=4/a^{3}$. The total number of $\alpha$-particles is
$N_{\alpha}=L^{3}$, and the total number of nucleons is $4N_{\alpha}$.
Free nucleon states with ${\bf k}=2\pi/(La)\\{n_{x},n_{y},n_{z}\\}$ are
introduced, which are occupied within the Fermi cube with $k_{F}=\pi/a$ in all
three directions $x,y,z$. The kinetic energy is
$\displaystyle T_{\rm cub}=4\times
3\frac{\hbar^{2}}{2m}\int_{-\pi/a}^{\pi/a}\frac{dk_{x}La}{2\pi}k_{x}^{2}\int_{-\pi/a}^{\pi/a}\frac{dk_{y}La}{2\pi}$
$\displaystyle\times\int_{-\pi/a}^{\pi/a}\frac{dk_{z}La}{2\pi}=\frac{\hbar^{2}}{m}2N_{\alpha}\frac{\pi^{2}}{a^{2}}.$
(46)
The kinetic energy per nucleon is $\frac{\hbar^{2}}{m}\frac{\pi^{2}}{2\times
4^{2/3}}n_{B}^{2/3}$. The coefficient $\frac{\pi^{2}}{2\times 4^{2/3}}=1.958$
is slightly larger than the value $\frac{3}{10}(3\pi^{2}/2)^{2/3}=1.808$
obtained for the Fermi sphere instead of the Fermi cube.
The potential energy is
$\displaystyle V_{\rm cub}=4\times
3/2\sum_{12,1^{\prime}2^{\prime}}V(12,1^{\prime}2^{\prime})=-6\frac{\lambda}{\Omega}\int_{-\pi/a}^{\pi/a}\frac{dk_{1}^{x}La}{2\pi}$
$\displaystyle\dots\int_{-\pi/a}^{\pi/a}\frac{dk_{2}^{z}La}{2\pi}e^{-(k_{2}^{x}-k_{1}^{x})^{2}/2\gamma^{2}}\dots
e^{-(k_{2}^{z}-k_{1}^{z})^{2}/2\gamma^{2}}.$ (47)
With
$\displaystyle
I=\int_{-\pi/a}^{\pi/a}dk_{1}^{x}\int_{-\pi/a}^{\pi/a}dk_{2}^{x}\,e^{-(k_{2}^{x}-k_{1}^{x})^{2}/2\gamma^{2}}=2\gamma\left[\left(e^{-2\pi^{2}/(a^{2}\gamma^{2})}-1\right)\gamma+\frac{\sqrt{2}\,\pi^{3/2}}{a}{\rm
erf}\left(\sqrt{2}\,\pi/(a\gamma)\right)\right],$ (48)
so that
$V_{\rm cub}=-6\frac{\lambda\Omega}{(2\pi)^{6}}I^{3},\qquad
a=(4/n_{B})^{1/3},$ (49)
and the potential energy per nucleon is $-6\frac{\lambda}{(2\pi)^{6}n_{B}}I^{3}$.
The energy per nucleon comes out as
$E_{\rm cub}(n_{B})=\frac{\hbar^{2}}{m}\frac{\pi^{2}}{2\times
4^{2/3}}n_{B}^{2/3}-6\frac{\lambda}{(2\pi)^{6}n_{B}}I^{3}.$ (50)
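As a hedged numerical sketch, Eq. (50) can be evaluated directly with the erf expression (48); the constant used for $\hbar^{2}/m$ is our input.

```python
import numpy as np
from scipy.special import erf

hbar2_m = 41.471          # hbar^2/m in MeV fm^2 (our input)
lam, gam = 1449.6, 1.152  # interaction parameters of Eq. (41)

def I_integral(a):
    """Double integral of Eq. (48) in closed form."""
    return 2.0 * gam * ((np.exp(-2.0 * np.pi**2 / (a**2 * gam**2)) - 1.0) * gam
                        + np.sqrt(2.0) * np.pi**1.5 / a
                        * erf(np.sqrt(2.0) * np.pi / (a * gam)))

def E_cub(nB):
    """Energy per nucleon of Eq. (50) at baryon density nB in fm^-3."""
    a = (4.0 / nB)**(1.0 / 3.0)
    kinetic = hbar2_m * np.pi**2 / (2.0 * 4.0**(2.0 / 3.0)) * nB**(2.0 / 3.0)
    potential = -6.0 * lam / ((2.0 * np.pi)**6 * nB) * I_integral(a)**3
    return kinetic + potential

print(E_cub(0.15))  # roughly -18 MeV near saturation density, cf. Fig. 14
```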
We compare this result with standard expressions. The chemical potential
contains the kinetic energy (degeneracy 4, Fermi wave number
$k_{F}=(3\pi^{2}n/2)^{1/3}$)
$E_{\rm
kin,Fermi}(n)=\frac{\hbar^{2}}{2m}k_{F}^{2}=\frac{\hbar^{2}}{2m}\left(\frac{3\pi^{2}}{2}\right)^{2/3}n^{2/3}$
(51)
so that $\mu(n)=E_{\rm kin,Fermi}(n)+\Delta E^{\rm SE}(n)$. The self-energy
shift of the single-nucleon states can be estimated by the Skyrme model,
$\Delta E^{\rm SE}(n)=-\frac{3}{4}1057.3\,n+\frac{3}{16}14463.5\,n^{2}.$ (52)
The energy per nucleon follows as
$\displaystyle E/N=\frac{1}{n}\int_{0}^{n}\mu(n^{\prime})dn^{\prime}$ (53)
$\displaystyle=\frac{3\hbar^{2}}{10m}\left(\frac{3\pi^{2}}{2}\right)^{2/3}n^{2/3}-\frac{3}{8}1057.3n+\frac{1}{16}14463.5n^{2}.$
A better parametrization is given by the RMF approach; the DD2 version gives
$\mu(n)=((mc^{2}-s(n))^{2}+(\hbar ck_{F})^{2})^{1/2}-mc^{2}+v(n)$ (54)
(for a parametrization of $s$ and $v$, see Ref. R ). The minimum occurs at
$E(0.148327\,{\rm fm}^{-3})=-16.2784$ MeV. In the subsaturated range of
density, the three approaches are in reasonable agreement, see Fig. 14.
Figure 14: Energy per nucleon as a function of the nucleon density $n_{B}$.
Results for the separable potential (50) are compared with RMF-DD2 and Skyrme
calculations. The energy per nucleon for $\alpha$-matter (magenta full line)
takes the value $-28.3/4$ MeV at zero density.
Between these two limits, the free $\alpha$-cluster gas at low densities and
the free-nucleon quasiparticle gas at high densities, we consider a Bloch
ansatz Tak04 :
$\phi_{k}(p)=\frac{1}{N_{k}^{1/2}N_{\alpha}^{1/2}}(2\pi
b^{2})^{1/4}\sum_{m=-N_{\alpha}/2}^{N_{\alpha}/2}e^{imka+impa-b^{2}p^{2}/4}$
(55)
The kinetic energy follows as (4 for spin/isospin, 3 for the components
$x,y,z$)
$\displaystyle T_{\rm Bloch}=4\times 3\frac{\hbar^{2}}{2m}\sum_{k}$
$\displaystyle\times\frac{\sum_{m_{1}m_{2}}\sum_{p}e^{i(m_{1}-m_{2})(k^{x}+p)a-b^{2}p^{2}/2}p^{2}}{\sum_{m_{1}m_{2}}\sum_{p^{\prime}}e^{i(m_{1}-m_{2})(k+{p^{\prime}})a-b^{2}{p^{\prime}}^{2}/2}}.$
(56)
After performing the $p,p^{\prime}$ integrals we have
$\displaystyle T_{\rm
Bloch}/N_{B}=3\frac{\hbar^{2}}{2mb^{2}}\int_{-\pi/a}^{\pi/a}\frac{dk^{x}a}{2\pi}$
$\displaystyle\times\frac{\sum_{m=-L/2}^{L/2}e^{imk^{x}a-a^{2}m^{2}/(2b^{2})}\left(1-\frac{m^{2}a^{2}}{b^{2}}\right)}{\sum_{m^{\prime}=-L/2}^{L/2}e^{im^{\prime}ka-a^{2}m^{\prime
2}/(2b^{2})}}.$ (57)
This expression contains the c.m. kinetic energy. To separate the c.m. energy,
it was proposed in Ref. Tak04 to consider the ratio $(x=a/b)$
$N_{c}(x)=1-\frac{1}{4}\left[1-\frac{\sum_{m=-L/2}^{L/2}e^{-x^{2}m^{2}/2}(m^{2}x^{2})}{\sum_{m^{\prime}=-L/2}^{L/2}e^{-x^{2}m^{\prime
2}/2}}\right]$ (58)
as a factor for the kinetic energy to exclude the c.m. kinetic energy in the
low-density region. The evaluation of the potential energy is somewhat
lengthy, so we give only the final result:
$\displaystyle V_{\rm Bloch}=-\frac{3}{2\pi^{3/2}}\frac{\lambda
b^{3}}{(b^{2}+2/\gamma^{2})^{3}}\left(\int_{-\pi/a}^{\pi/a}\frac{dk_{1}a}{2\pi}\int_{-\pi/a}^{\pi/a}\frac{dk_{2}a}{2\pi}\right.$
$\displaystyle\left.\times\frac{\Sigma_{1}^{e}[\Sigma_{2}^{e}\Sigma_{3}^{e}+\Sigma_{2}^{o}\Sigma_{3}^{o}]+\Sigma_{1}^{o}[\Sigma_{2}^{e}\Sigma_{3}^{o}+\Sigma_{2}^{o}\Sigma_{3}^{e}]}{\Sigma_{4}(k_{1})\Sigma_{4}(k_{2})}\right)^{3}$
(59)
where
$\displaystyle\Sigma_{1}^{e}=\sum_{m}e^{i2m(k_{1}+k_{2})a/2-m^{2}x^{2}}$
$\displaystyle\Sigma_{1}^{o}=\sum_{m}e^{i(2m+1)(k_{1}+k_{2})a/2-(2m+1)^{2}x^{2}/4}$
$\displaystyle\Sigma_{2}^{e}=\sum_{m}e^{i2m(k_{2}-k_{1})a/2-2m^{2}a^{2}/(b^{2}+2/\gamma^{2})}$
$\displaystyle\Sigma_{2}^{o}=\sum_{m}e^{i(2m+1)(k_{2}-k_{1})a/2-(2m+1)^{2}a^{2}/(2b^{2}+4/\gamma^{2})}$
$\displaystyle\Sigma_{3}^{e}=\sum_{m}e^{i2m(k_{1}-k_{2})a/2-2m^{2}a^{2}/(b^{2}+2/\gamma^{2})}$
$\displaystyle\Sigma_{3}^{o}=\sum_{m}e^{i(2m+1)(k_{1}-k_{2})a/2-(2m+1)^{2}a^{2}/(2b^{2}+4/\gamma^{2})}$
$\displaystyle\Sigma_{4}(k)=\sum_{m}e^{imka-m^{2}a^{2}/(2b^{2})}.$ (60)
Within our approach one has to search for the minimum of the energy as a
function of the width parameter $b$, see Fig. 15. At low densities, the
minimum occurs at $b=1.934$ fm. This value increases slightly with
increasing density. At the nucleon density $n_{B}=0.0387$ fm-3 it jumps to the
free-nucleon value, see Fig. 15. This is a sharp transition. It is not clear
whether this phase transition is due to approximations such as the
separation of the c.m. kinetic energy or the Gauss ansatz for the wave
function, or whether it is a real sharp quantum phase transition to a
correlated state.
To understand the dissolution of the bound state in the case of a sharp
quantum phase transition, we can use the equilibrium condition of equal
chemical potential $\mu$ in both phases. With the energy per nucleon
$e(n_{B})=E/N_{B}$ we have
$\mu(n_{B})=e(n_{B})+n_{B}\frac{\partial e(n_{B})}{\partial n_{B}}.$ (61)
The disappearance of the $\alpha$-matter phase occurs when its chemical
potential coincides with that of the free-momentum quasiparticle phase.
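Schematically, the construction of Eq. (61) from a tabulated energy per nucleon can be done by finite differences; the sketch below is illustrative only, with `e_alpha` and `e_free` standing in for the computed $\alpha$-matter and free-quasiparticle curves (placeholders, not provided here).

```python
import numpy as np

nB = np.linspace(0.005, 0.06, 200)   # baryon densities in fm^-3

def mu(nB, e):
    """Chemical potential of Eq. (61) from a tabulated e(n_B)."""
    return e + nB * np.gradient(e, nB)

# mu_alpha = mu(nB, e_alpha)  # energy per nucleon of the alpha-matter phase
# mu_free  = mu(nB, e_free)   # energy per nucleon of the free-nucleon phase
# The crossing of the two mu curves marks the dissolution of the alpha phase.
```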
Figure 15: Energy per nucleon as a function of the width parameter $b$ at
different nucleon densities $n_{B}$. The result (50) for the uncorrelated
nuclear matter is shown as a dotted line.
## References
* (1) P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer, Berlin 1980).
* (2) G. Röpke, P. Schuck, Y. Funaki, H. Horiuchi, Zhongzhou Ren, A. Tohsaki, Chang Xu, T. Yamada, and Bo Zhou, Phys. Rev. C 90, 034304 (2014).
* (3) Chang Xu, Zhongzhou Ren, G. Röpke, P. Schuck, Y. Funaki, H. Horiuchi, A. Tohsaki, T. Yamada, and Bo Zhou, Phys. Rev. C 93, 011306(R) (2016).
* (4) Chang Xu, G. Röpke, P. Schuck, Zhongzhou Ren, Y. Funaki, H. Horiuchi, A. Tohsaki, T. Yamada, and Bo Zhou, Phys. Rev. C 95, 061306(R) (2017).
* (5) Shuo Yang, Chang Xu, Gerd Röpke, Peter Schuck, Zhongzhou Ren, Yasuro Funaki, Hisashi Horiuchi, Akihiro Tohsaki, Taiichi Yamada, and Bo Zhou, Phys. Rev. C 101, 024316 (2020).
* (6) Shuo Yang, Chang Xu, and Gerd Röpke, Phys. Rev. C 104, 034302 (2021).
* (7) Dong Bai, Zhongzhou Ren, and Gerd Röpke, Phys. Rev. C 99, 034305 (2019).
* (8) Zhen Wang, Dong Bai, and Zhongzhou Ren, Phys. Rev. C 105, 024327 (2022).
* (9) Zisheng Jin, Mingshuai Yan, Hao Zhou, An Cheng, Zhongzhou Ren, and Jian Liu, Phys. Rev. C 108, 014326 (2023).
* (10) Ruijia Li and Chang Xu, Phys. Rev. C 107, 064301 (2023).
* (11) Shuo Yang, Ruijia Li, and Chang Xu, Phys. Rev. C 108, L021303 (2023).
* (12) A. Tohsaki, H. Horiuchi, P. Schuck, and G. Röpke, Phys. Rev. Lett. 87, 192501 (2001).
* (13) Bo Zhou, Zhongzhou Ren, Chang Xu, Y. Funaki, T. Yamada, A. Tohsaki, H. Horiuchi, P. Schuck, and G. Röpke, Phys. Rev. C 86, 014301 (2012).
* (14) Bo Zhou, Y. Funaki, H. Horiuchi, Zhongzhou Ren, G. Röpke, P. Schuck, A. Tohsaki, Chang Xu, and T. Yamada, Phys. Rev. Lett. 110, 262501 (2013).
* (15) Bo Zhou, Y. Funaki, H. Horiuchi, Zhongzhou Ren, G. Röpke, P. Schuck, A. Tohsaki, Chang Xu, and T. Yamada, Phys. Rev. C 89, 034319 (2014).
* (16) Mengjiao Lyu, Zhongzhou Ren, Bo Zhou, Yasuro Funaki, Hisashi Horiuchi, Gerd Röpke, Peter Schuck, Akihiro Tohsaki, Chang Xu, and Taiichi Yamada, Phys. Rev. C 91, 014313 (2015).
* (17) Mengjiao Lyu, Zhongzhou Ren, Bo Zhou, Yasuro Funaki, Hisashi Horiuchi, Gerd Röpke, Peter Schuck, Akihiro Tohsaki, Chang Xu, and Taiichi Yamada, Phys. Rev. C 93, 054308 (2016).
* (18) Bo Zhou, Yasuro Funaki, Hisashi Horiuchi, Yu-Gang Ma, Gerd Röpke, Peter Schuck, Akihiro Tohsaki, and Taiichi Yamada, Nature Comm. 14 (2023) 8206.
* (19) M. Freer, H. Horiuchi, Y. Kanada-En’yo, D. Lee, and U.-G. Meißner Rev. Mod. Phys. 90, 035004 (2018).
* (20) G. Röpke, P. Schuck, Chang Xu, Zhongzhou Ren, M. Lyu, Bo Zhou, Y. Funaki, H. Horiuchi, A. Tohsaki, and T. Yamada, J. Low. Temp. Phys. 189, 383 (2017) [arXiv:1707.04517 [nucl-th]].
* (21) G. Röpke, P. Schuck, Y. Funaki, H. Horiuchi, Zhongzhou Ren, A. Tohsaki, Chang Xu, T. Yamada, and Bo Zhou, J. Phys. Conf. Ser. 863, 012006 (2017).
* (22) G. Röpke, in Proceedings of the 4th International Workshop on “State of the Art in Nuclear Cluster Physics” (SOTANCP4), AIP Conf. Proc. 2038, 020008-1-020008-10 (AIP, New York, 2018). arXiv:1810.01274 [nucl-th].
* (23) W.W. Qu, G.L. Zhang, X.Y. Le, Nucl. Phys. A 868 1 (2011).
* (24) G.R. Satchler and W.G. Love, Phys. Rep. 55, 183 (1979).
* (25) M. Mirea, Phys. Rev. C 96, 064607 (2017).
* (26) N. Schwierz, I. Wiedenhover, and A. Volya, Parameterization of the Woods-Saxon Potential for Shell-Model Calculations. arXiv:0709.3525 (2007).
* (27) Hiroki Takemoto, Masahiro Fukushima, Satoshi Chiba, Hisashi Horiuchi, Yoshinori Akaishi, and Akihiro Tohsaki, Phys. Rev. C 69, 035802 (2004).
* (28) T. Matsuse, M. Kamimura, and Y. Fukushima, Prog. Theor. Phys. 53, 706 (1975).
* (29) Y. Funaki, H. Horiuchi, G. Röpke, P. Schuck, A. Tohsaki, and T. Yamada, Phys. Rev. C 77, 064312 (2008).
* (30) J.-P. Ebran, M. Girod, E. Khan, R. D. Lasseri, and P. Schuck, Phys. Rev. C 102, 014305 (2020).
* (31) G. Röpke, Phys. Rev. C 79, 014002 (2009).
* (32) G. Röpke, Nucl. Phys. A 867, 66 (2011).
* (33) S. Typel, Phys. Rev. C 89, 064321 (2014).
* (34) Z. Yang, J. Tanaka, S. Typel, T. Aumann, J. Zenihiro, S. Adachi, S. Bai, P. van Beek, D. Beaume, Y. Fujikawa, J. Han, S. Heil, S. Huang, A. Inoue, Y. Jiang, M. Knösel, N. Kobayashi, Y. Kubota, W. Liu, J. Lou, Y. Maeda, Y. Matsuda, K. Miki, S. Nakamura, K. Ogata, V. Panin, H. Scheit, F. Schindler, P. Schrock, D. Symochko, A. Tamii, T. Uesaka, V. Wagner, K. Yoshida, JPS Conf. Proc. 31, 011019 (2020).
* (35) J. Tanaka, Z.H. Yang, S. Typel, S. Adachi, S. Bai, P. Van Beek, D. Beaumel, Y. Fujikawa, J. Han, S. Heil, S. Huang, A. Inoue, Y. Jiang, M. Knösel, N. Kobayashi, Y. Kubota, W. Liu, J. Lou, Y. Maeda, Y. Matsuda, K. Miki, S. Nakamura, K. Ogata, V. Panin, H. Scheit, F. Schindler, P. Schrock, D. Symochko, A. Tamii, T. Uesaka, V. Wagner, K. Yoshida, J. Zenihiro, T. Aumann, Science 371, 200 (2021).
* (36) Y. Kanada-En’yo, Prog. Theor. Phys. 117, 655 (2007).
* (37) A. Ono, H. Horiuchi, Prog. Part. Nucl. Phys. 53 (2004) 501.
* (38) K. Yoshida, Y. Chiba, M. Kimura, Y. Taniguchi, Y. Kanada-En’yo, and K. Ogata, Phys. Rev. C 100, 044601 (2019).
* (39) Y. Chiba and M. Kimura, Prog. Theor. Exp. Phys. 2017, 053D01 (2017).
* (40) Y. Taniguchi, K. Yoshida, Y. Chiba, Y. Kanada-En’yo, M. Kimura, and K. Ogata, PRC 103, L031305 (2021).
* (41) K. Yoshida and J. Tanaka, Phys. Rev. C 106, 014621 (2022).
* (42) C. Qi, A.N. Andreyev, M. Huyse, R.J. Liotta, P. Van Duppen, and R.A. Wyss, Phys. Rev. C 81, 064319 (2010).
* (43) T. Nakatsukasa, N. Hinohara, Phys. Rev. C 108, 014318 (2023).
* (44) Q. Zhao, M. Kimura, B. Zhou, and S. Shin, Phys. Rev. C 106, 054313 (2022).
* (45) P.-G. Reinhard, J. A. Maruhn, A. S. Umar, and V. E. Oberacker, Phys. Rev. C 83, 034312 (2011).
* (46) A. Tohsaki, H. Horiuchi, P. Schuck, and G. Röpke, Rev. Mod. Phys. 89, 011002 (2017).
* (47) Bo Zhou, Zhongzhou Ren, Chang Xu, Y. Funaki, T. Yamada, A. Tohsaki, H. Horiuchi, P. Schuck, and G. Röpke, Phys. Rev. C 86, 014301 (2012).
* (48) Bo Zhou, Y. Funaki, H. Horiuchi, Zhongzhou Ren, G. Röpke, P. Schuck, A. Tohsaki, Chang Xu, and T. Yamada, Phys. Rev. C 89, 034319 (2014).
* (49) Bo Zhou, Y. Funaki, H. Horiuchi, Zhongzhou Ren, G. Röpke, P. Schuck, A. Tohsaki, Chang Xu, and T. Yamada, Phys. Rev. Lett. 110, 262501 (2013).
* (50) L. McFadden and G.R. Satchler, Nucl. Phys. 84, 177 (1966).
* (51) F. Michel, J. Albinski, P. Belery, T. Delbar, G. Grégoire, B. Tasiaux, and G. Reidemeister, Phys. Rev. C 28, 1904 (1983).
* (52) W. von Oertzen, M. Freer, and Y. Kanada-Enyo, Phys. Rep. 432, 43 (2006).
* (53) W. von Oertzen, in: C. Beck (Ed.), Clusters in Nuclei, vol. 1, p.109 (Springer, Berlin 2010).
* (54) Y. Hirabayashi and S. Ohkubo, Phys. Rev. C 88, 014314 (2013).
* (55) T. Fukui, Y. Taniguchi, T. Suhara, Y. Kanada-En’yo, and K. Ogata, Phys. Rev. C 93, 034606 (2016).
* (56) Ashok Kumar and S. Kailas, Nucl. Phys. A 776, 105 (2006).
* (57) G. Röpke, Phys. Rev. C 92, 054001 (2015).
# Measurement of infrared magic wavelength for an all-optical trapping of
40Ca+ ion clock
Yao Huang State Key Laboratory of Magnetic Resonance and Atomic and Molecular
Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of
Sciences, Wuhan 430071, China Key Laboratory of Atomic Frequency Standards,
Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan
430071, China Hua Guan State Key Laboratory of Magnetic Resonance and Atomic
and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese
Academy of Sciences, Wuhan 430071, China Key Laboratory of Atomic Frequency
Standards, Wuhan Institute of Physics and Mathematics, Chinese Academy of
Sciences, Wuhan 430071, China Chengbin Li State Key Laboratory of Magnetic
Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and
Mathematics, Chinese Academy of Sciences, Wuhan 430071, China Key Laboratory
of Atomic Frequency Standards, Wuhan Institute of Physics and Mathematics,
Chinese Academy of Sciences, Wuhan 430071, China Huaqing Zhang State Key
Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan
Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan
430071, China Key Laboratory of Atomic Frequency Standards, Wuhan Institute
of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China
University of Chinese Academy of Sciences, Beijing 100049, China Baolin Zhang
State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics,
Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan
430071, China Key Laboratory of Atomic Frequency Standards, Wuhan Institute
of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China
University of Chinese Academy of Sciences, Beijing 100049, China Miao Wang
State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics,
Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan
430071, China Key Laboratory of Atomic Frequency Standards, Wuhan Institute
of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China
University of Chinese Academy of Sciences, Beijing 100049, China Liyan Tang
State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics,
Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan
430071, China Key Laboratory of Atomic Frequency Standards, Wuhan Institute
of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China
Tingyun Shi State Key Laboratory of Magnetic Resonance and Atomic and
Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy
of Sciences, Wuhan 430071, China Key Laboratory of Atomic Frequency
Standards, Wuhan Institute of Physics and Mathematics, Chinese Academy of
Sciences, Wuhan 430071, China K. Gao<EMAIL_ADDRESS>State Key Laboratory
of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of
Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China Key
Laboratory of Atomic Frequency Standards, Wuhan Institute of Physics and
Mathematics, Chinese Academy of Sciences, Wuhan 430071, China Center for Cold
Atom Physics, Chinese Academy of Sciences, Wuhan 430071, China
(August 27, 2024)
###### Abstract
For the first time, we experimentally determine the infrared magic wavelength
for the 40Ca+ $4s\,^{2}\\!S_{1/2}\rightarrow 3d\,^{2}\\!D_{5/2}$ electric
quadrupole transition by observation of the light shift canceling in a 40Ca+
optical clock. A “magic” magnetic field direction is chosen to make the magic
wavelength insensitive to both the linear polarization purity and the
polarization direction of the laser. The determined magic wavelength for this
transition is 1056.37(9) nm, which is not only in good agreement with
theoretical predictions but also more precise by a factor of about 300. Using
this measured magic wavelength we also derive the differential static
polarizability to be $-44.32(32)$ a.u., which will be an important input for
the evaluation of the blackbody radiation shift at room temperatures. Our work
paves the way for all-optical trapping of a 40Ca+ optical clock.
###### pacs:
32.10.Dk, 06.20.F, 06.30.Ft, 37.10.Ty
With the rapid development of laser technology, state-of-the-art optical clocks
have now reached an accuracy or frequency stability at the level of 10-18 or
higher Ushijima15 ; Huntemann16 ; McGrew18 ; Brewer19 ; Bothwell19 , which is
two orders of magnitude better than the state-of-the-art microwave atomic
clocks. At this level of accuracy, optical clocks can play a critical role in
redefining the second Targat13 , in searching for variation of fundamental
constants Huntemann14 ; Safronova18 , and in chronometric leveling Grotti18 .
For many neutral-atom optical lattice clocks, the ac-Stark shift due to black
body radiation (BBR) or lattice lasers McGrew18 ; Bothwell19 can be a
limiting factor for achieving such high accuracy McGrew18 ; Bothwell19 ; for
ion-based clocks, on the other hand, micromotion shifts Huntemann16 ; Brewer19
may limit the accuracy of some clocks. One way to reduce the micromotion
shifts is to apply the all-optical trapping technique Schneider10 ; Huber14 ;
Lambrecht17 , where the micromotion shift vanishes when the rf field is
switched off.
magic wavelength LeBlanc07 ; Arora11 ; Herold12 ; Holmgren12 , the energy
shift in the relevant transition will be zero and thus the trapping potential
will introduce no shift in the clock transition frequency. Therefore, for a
magic-wavelength optical-trapped ion, both the micromotion and ac-Stark shift
can be greatly suppressed. In addition to the accuracy of a clock, the
frequency stability is also a very important issue when evaluating a clock.
Compared to neutral-atom lattice clocks, the stability of a single-ion clock
is limited by the signal-to-noise ratio. Recently, the optical trapping of
Coulomb ion crystals has been demonstrated Schmidt18 , which sheds light on
the development of all-optical trapping ion clocks using multiple ions to
achieve a better frequency stability.
Precision measurements of magic wavelengths in atoms are also very important
in fundamental studies of atomic structure. For example, a measurement of line
strength ratio by magic wavelength can bring a new perspective for determining
accurate transition matrix elements, which are important in testing atomic
computational methods and in interpreting atomic parity non-conservation
Derevianko00 ; Sahoo06 ; Porsev09 . Precision measurements of magic
wavelengths in ions can be used to derive relevant oscillator strengths and
polarizabilities for clock states Liu15 , which is essential for evaluating
the BBR shift at the 10-18 level at room temperatures.
The magic wavelengths of Ca+ have recently been studied both theoretically
Tang13 ; Kaur15 ; Jiang17 and experimentally Liu15 . Two magic wavelengths
for the $4s_{1/2}\rightarrow 3d_{5/2}$ ($m_{J}$ = 1/2, 3/2) clock transitions
near 395.79 nm have been measured to high accuracy, which are in well
agreement with all existing theoretical predictions. However, these magic
wavelengths are very close to the $4s_{1/2}\rightarrow 4p_{3/2}$ and
$4s_{1/2}\rightarrow 4p_{1/2}$ resonant transitions. The near resonant light
has high spontaneous photon scattering rates that can result in a high heating
process Haycock97 . Thus, these magic wavelengths are not ideal choices for
the optical trapping of the ions. Therefore, in order to do optical trapping
of ions, it is important to search for magic wavelengths far off any resonant
transitions; for 40Ca+ in particular, magic wavelengths in the infrared region
are desirable.
In this Letter, we will report the experimental measurement of an infrared
magic wavelength by observation of the light shift canceling in a 40Ca+
optical clock. The clock has an uncertainty of 2.2 $\times$ 10-17 and a 10-16
level stability at a few seconds Huang19 . The clock is suitable for making a
differential measurement; the clock uncertainty would introduce only a
negligible wavelength measurement uncertainty of $<$ 0.001 nm.
to extract a reduced transition matrix element using our measured magic
wavelength. We will also determine a static differential polarizability that
is an important parameter in evaluating the BBR shift at room temperatures.
Calculating or measuring an infrared magic wavelength is very different from
measuring a near-resonance magic wavelength Liu15 . Briefly speaking, in
theoretical calculations the predicted infrared magic wavelengths have much
larger uncertainties than the near-resonance magic wavelengths; in the
experiments, a near-resonance magic wavelength is much less sensitive to the
magnetic field direction, the laser propagation direction, and the laser
polarization direction. For measuring a far-off-resonance magic wavelength,
however, one needs to carefully control the laser and magnetic field
conditions and carefully evaluate systematic shifts. To set up the experiment,
first of all, a single 40Ca+ ion is trapped in a miniature ring Paul trap and
laser cooled to a temperature of a few mK. To measure the magic
wavelength, the clock laser is locked to the Zeeman components of the clock
transition, and the light shift on the clock transition can be observed by
switching on and off the laser with wavelength around 1050 nm (named Lm laser
for short in the following sections). To keep the Lm laser linearly polarized
during the measurement, a polarizer (Glan-Taylor prism) is placed in the light
path before the ion-light interaction takes place. In doing so, the linear
polarization purity can reach $>$ 99%, which can be derived by analyzing the
incident and transmitted light of the Lm laser. The wavelength of the Lm laser
used in the experiment is measured with a precision of 100 MHz by a wavemeter
(WS-7, HighFinesse GmbH). The power of the Lm laser is measured using a
commercial power meter (S120VC, Thorlabs Inc.) with a power variation within
5%. To increase the measurement accuracy, a “magic” magnetic field direction is
chosen to make the magic wavelength insensitive to both the linear
polarization purity and the polarization direction of the laser.
The ac Stark shift caused by a laser can be written in the form
$\begin{split}\Delta
E_{i}=&-\frac{F^{2}}{2}\bigg{[}\alpha_{i}^{S}(\omega)+A\cos\theta_{k}\frac{m_{J}}{2J}\alpha_{i}^{V}(\omega)\\\
&+\frac{3\cos^{2}\theta_{p}-1}{2}\cdot\frac{3m_{J}^{2}-J(J+1)}{J(2J-1)}\alpha_{i}^{T}(\omega)\bigg{]},\end{split}$
(1)
where $F$ is the strength of the ac electromagnetic field,
$\alpha_{i}^{S}(\omega)$, $\alpha_{i}^{V}(\omega)$, and
$\alpha_{i}^{T}(\omega)$ are, respectively, the scalar, the vector, and the
tensor polarizabilities for quantum state $i$ at frequency $\omega$, and the
tensor component will be taken into account only when $J>1/2$. Also in Eq.
(1), the laser polarization $A$, the angle $\theta_{k}$ between the laser
propagation direction $\hat{k}$ and the magnetic field direction $\hat{B}$,
the angle $\theta_{p}$ between the laser polarization direction and $\hat{B}$
are all important parameters affecting the ac Stark shift. In previous
theoretical calculations Tang13 ; Kaur15 ; Jiang17 , $A=0$ and
$\cos\theta_{p}=1$ were chosen when calculating the polarizabilities and
extracting the magic wavelengths under a linearly polarized laser field.
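As a minimal illustration of Eq. (1), the following sketch evaluates the geometry-dependent factors for given $A$, $\theta_{k}$, and $\theta_{p}$; the polarizability arguments are placeholders for inputs, not measured values from this work.

```python
import numpy as np

def stark_shift(F2_half, mJ, J, alpha_S, alpha_V, alpha_T,
                A=0.0, theta_k=np.pi / 2, theta_p=0.0):
    """ac Stark shift of Eq. (1); F2_half is F^2/2 in the chosen units."""
    vector = A * np.cos(theta_k) * mJ / (2.0 * J) * alpha_V
    tensor = (3.0 * np.cos(theta_p)**2 - 1.0) / 2.0 \
        * (3.0 * mJ**2 - J * (J + 1.0)) / (J * (2.0 * J - 1.0)) * alpha_T
    return -F2_half * (alpha_S + vector + tensor)

# e.g. the 3d_{5/2}, m_J = 3/2 sublevel with placeholder polarizabilities:
# stark_shift(F2_half=1.0, mJ=1.5, J=2.5, alpha_S=95., alpha_V=0., alpha_T=-2.)
```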
We first consider the case where $A=0$ and $\cos\theta_{p}=1$ in our
experiment. Unlike the 395 nm magic wavelength measurement, it is found that
the magic wavelength here is very sensitive to the parameters $A$,
$\theta_{k}$, and $\theta_{p}$. Thus, we have to make sure that these
parameters are very stable and precise. The parameter $A$ is measured to be
0.005(5), which corresponds to an almost purely linear polarization, but the
$A\cos\theta_{k}$ term still affects the measurement because the ac Stark
shifts of the sublevels $m_{J}=-3/2$ and $m_{J}=3/2$ are found to be
different. Setting $\cos\theta_{k}$ to 0 suppresses the effect caused by
the polarization impurity.
In the experimental setup, the Lm laser polarization and propagation
directions are kept unchanged. At the beginning of our measurement, the
background magnetic field at the ion is compensated to zero by adjusting the
currents in the three pairs of Helmholtz coils. The magnetic field amplitude
can be measured by observing the clock transition Zeeman components. By
adjusting the currents in the coils, the relationship between the current in
each pair of coils and the magnetic field it produces is measured. By changing
the currents in the coils, one can produce the magnetic field of any direction
while keeping the amplitude constant. At the end of our measurement, the
compensated background magnetic field is measured again so that the background
magnetic field drift amplitude can be evaluated.
To measure the magic wavelength $\lambda_{m}$, we studied the ac Stark shift
within a few nanometers around $\lambda_{m}$. We measured the ac Stark shifts
at six wavelengths of the Lm laser, each for about 2000 s. The six points were
then fitted linearly and the magic wavelength was obtained as the zero
crossing.
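The extraction step amounts to a linear fit with a zero crossing; a hedged sketch with purely hypothetical data arrays:

```python
import numpy as np

# Hypothetical example data: six Lm wavelengths and measured ac Stark shifts.
wavelengths = np.array([1053., 1054., 1055., 1056., 1057., 1058.])  # nm
shifts = np.array([0.9, 0.6, 0.35, 0.1, -0.2, -0.45])               # Hz

slope, intercept = np.polyfit(wavelengths, shifts, 1)
lambda_magic = -intercept / slope   # wavelength where the shift vanishes
print(lambda_magic)
```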
Evaluation of systematic shifts is of great importance in the measurement of
the infrared magic wavelength since it is sensitive to the above-mentioned
parameters. The systematic shifts caused by the uncertainties in $\theta_{k}$
and $\theta_{p}$, by the laser power, by the broadband laser spectrum, and by
the background magnetic field drift were also evaluated.
For estimating the systematic shift due to $\theta_{p}$, we scanned
$\theta_{p}$ from $-30^{\circ}$ to $30^{\circ}$. We found that the measured
magic wavelength became longer when $\theta_{p}$ was near 0, as observed in
Ref. Jiang17 . Experimentally we can change $\theta_{p}$ until the measured
magic wavelength becomes the longest. Given the experimentally achievable
precision of $\theta_{p}$, it could cause a measurement uncertainty of 0.03
nm. For estimating the systematic shift due to $\theta_{k}$ and $A$, we let
the laser pass through a polarizer before it enters the vacuum chamber.
However, owing to effects such as the viewports slightly changing the
polarization, we can still see strong effects caused by $A$. Technically, the
magnetic field direction can be adjusted to make $\cos\theta_{k}=0$. By
scanning $\theta_{k}$, it was found that the measured magic wavelength was
longer when $\theta_{k}$ was closer to $90^{\circ}$. When measuring the magic
wavelength difference between $m_{J}=3/2$ and
$m_{J}=-3/2$, we found that this difference came to 0 when
$\theta_{k}=90^{\circ}$, indicating that the $A\cos\theta_{k}$ term no longer
contributed to the systematic shift. Experimentally we can change $\theta_{k}$
until the measured magic wavelength difference between $m_{J}=3/2$ and
$m_{J}=-3/2$ becomes 0. The experimental precision of $\theta_{k}$ would cause
a measurement uncertainty of 0.01 nm.
Figure 1: The magic wavelength as a function of $\theta_{p}$, the angle
between the magnetic field direction and the laser polarization. Each data
point shows the average of an experiment lasting 1-4 hours. The error bars
include only the statistical errors; the systematic errors caused by drifts of
the magnetic field, the laser power, and the laser pointing are not included.
The solid curve is a fourth-order polynomial fit to the data.
The background magnetic field may change during the measurement. Since the
measurement was found to be sensitive to the magnetic field direction, the
effects of magnetic field changes should be considered. By measuring the
compensated magnetic field amplitude (which should be about 0) every few
hours, we verified that the background magnetic field changed by less than 30
nT during the whole experiment. Since the applied magnetic field amplitude is
3800 nT, we estimated that both $\theta_{p}$ and $\theta_{k}$ would acquire an
uncertainty of less than $0.5^{\circ}$ due to the background magnetic field
change. According to the relationship between the magic wavelength and those
parameters, magnetic field change during the whole experiment would cause a
magic wavelength measurement uncertainty of 0.08 nm.
Table 1 lists the systematic error budget. Details about the systematic shift
evaluation can be found in the Supplementary Materials.
Table 1: Uncertainty budget for the infrared magic wavelength measurement. Effects with both shift and uncertainty smaller than 0.001 nm are not listed. Units are in nm. Source | Shift | Uncertainty
---|---|---
Statistical | - | 0.02
$\theta_{p}$ | 0 | 0.03
$\theta_{k}$ | 0 | 0.01
Laser power | $-0.03$ | 0.03
Broadband laser spectrum | 0.005 | 0.005
Background magnetic field shift | 0 | 0.08
Total | $-0.04$ | 0.09
Magic wavelength with correction | | 1056.37(9)
With the corrections shown in Table 1, the infrared magic wavelength for
$|m_{J}|=3/2$ is determined as 1056.37(9) nm. To date, there are a few
theoretical calculations on this wavelength Tang13 ; Kaur15 ; Jiang17 , as
listed in Table 2. One can see that our result is in fairly good agreement
with these calculations but with much smaller uncertainty.
Theoretically, using perturbation theory, the dynamic electric dipole
polarizabilities of a given atomic state can be expressed as
$\displaystyle\alpha_{i}^{S}(\omega)=\frac{2}{3(2J_{i}+1)}\sum_{k}\frac{\Delta
E_{ki}|\langle\Psi_{i}||D||\Psi_{k}\rangle|^{2}}{\Delta
E_{ki}^{2}-\omega^{2}},$ (2)
$\displaystyle\alpha_{i}^{V}(\omega)=\sqrt{\frac{24J_{i}}{(J_{i}+1)(2J_{i}+1)}}\sum_{k}(-1)^{(J_{i}+J_{k}+1)}\begin{Bmatrix}J_{i}&1&J_{i}\\\
1&J_{k}&1\end{Bmatrix}\frac{\omega|\langle\Psi_{i}||D||\Psi_{k}\rangle|^{2}}{\Delta
E_{ki}^{2}-\omega^{2}},$
$\displaystyle\alpha_{i}^{T}(\omega)=\sqrt{\frac{40J_{i}(2J_{i}-1)}{3(J_{i}+1)(2J_{i}+1)(2J_{i}+3)}}\sum_{k}(-1)^{(J_{i}+J_{k})}\begin{Bmatrix}J_{i}&2&J_{i}\\\
1&J_{k}&1\end{Bmatrix}\frac{\Delta
E_{ki}|\langle\Psi_{i}||D||\Psi_{k}\rangle|^{2}}{\Delta
E_{ki}^{2}-\omega^{2}},$
where $D$ is the electric dipole transition operator. It is noted that, when
$\omega=0$, $\alpha_{i}^{S}(\omega)$, $\alpha_{i}^{V}(\omega)$, and
$\alpha_{i}^{T}(\omega)$ are referred to, respectively, as the static scalar,
vector, and tensor polarizabilities for state $i$. The uncertainties of the
polarizabilities are governed by the uncertainties of the reduced transition
matrix elements. Under our experimental conditions, the ac Stark shift at the
magic wavelength includes the contributions from $\alpha_{4s}^{S}(\omega)$,
$\alpha_{3d_{5/2}}^{S}(\omega)$, and $\alpha_{3d_{5/2}}^{T}(\omega)$, and the
contribution from $\alpha^{V}(\omega)$ can be neglected.
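For concreteness, the scalar part of Eq. (2) reduces to a short sum-over-states routine; the transition energies and matrix elements in the example call below are placeholders for the dominant $4s\rightarrow 4p$ contributions, not the actual values used in this work.

```python
import numpy as np

def alpha_scalar(omega, J_i, Delta_E, D):
    """Scalar dynamic polarizability of Eq. (2), all quantities in a.u."""
    Delta_E, D = np.asarray(Delta_E), np.asarray(D)
    return 2.0 / (3.0 * (2.0 * J_i + 1.0)) \
        * np.sum(Delta_E * D**2 / (Delta_E**2 - omega**2))

# Placeholder inputs (illustrative only, not the values used in this work):
# alpha_scalar(omega=0.043, J_i=0.5, Delta_E=[0.115, 0.116], D=[2.9, 4.1])
```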
Since the ac Stark shift of the clock transition at the magic wavelength is
zero, the dynamic polarizabilities are the same for both $4s_{1/2}$ and
$3d_{5/2}$ states. Theoretical works Tang13 ; Safronova11 show that the
contributions from the $4s_{1/2}\rightarrow 4p_{1/2}$ and $4s_{1/2}\rightarrow
4p_{3/2}$ transitions dominate the polarizability of the $4s_{1/2}$ state, and
the contributions to the polarizability of the $3d_{5/2}$ state are dominated
by the $3d_{5/2}\rightarrow 4p_{3/2}$ transition that constitutes over 80% of
the polarizability. Based upon the magic wavelength measured here, the energy
levels of atomic states in Ca+ given by NIST Kramida18 , the experimentally
obtained high precision matrix elements for the $4s_{1/2}\rightarrow 4p_{1/2}$
and $4s_{1/2}\rightarrow 4p_{3/2}$ transitions Liu15 , and other reduced
matrix elements from RCC Safronova11 ; Kaur17 and DFCP calculations, the
matrix element $|\langle 3d_{5/2}||D||4p_{3/2}\rangle|$ is extracted to be
3.295(15) a.u.
The BBR shift of the $4s_{1/2}\rightarrow 3d_{5/2}$ clock transition frequency
can be evaluated according to
$\displaystyle\Delta_{\rm BBR}(4s_{1/2}\rightarrow 3d_{5/2})=$
$\displaystyle-\frac{1}{2}(\alpha_{0,4s_{1/2}}-\alpha_{0,3d_{5/2}})$ (3)
$\displaystyle\times(831.9\,{\rm V}/{\rm m})^{2}\bigg{(}\frac{T({\rm
K})}{300}\bigg{)}^{4},$
where $\alpha_{0}$ is the static electric-dipole polarizability. Combining the
matrix element $\left|\langle 3d_{5/2}\left\|D\right\|4p_{3/2}\rangle\right|$
obtained above and other matrix elements from both experiment and theoretical
calculations, the differential static polarizability between the $4s_{1/2}$
and $3d_{5/2}$ states is determined to be $-44.32(32)$ a.u. The corresponding
BBR shift at 300 K is 0.3816(28) Hz. Compared to the existing theoretical
values, as listed in Table 2, the present value agrees with, and is slightly
more precise than, the best previous theoretical calculation of Ref.
Safronova11 .
The fractional uncertainty of BBR shift can now be updated to be 6.8$\times
10^{-18}$. The uncertainty due to the knowledge of the dynamic
polarizabilities can be further reduced with the method in Ref. Barrett19 .
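The arithmetic of Eq. (3) at $T=300$ K can be checked by converting the differential polarizability from atomic units to SI; a minimal sketch:

```python
import scipy.constants as c

dalpha_au = -44.32   # differential static polarizability in a.u. (from text)
# atomic unit of polarizability, 4*pi*eps0*a0^3, in C m^2 V^-1:
au_pol = 4 * c.pi * c.epsilon_0 * c.value("Bohr radius")**3
shift_Hz = -0.5 * dalpha_au * au_pol * 831.9**2 / c.h
print(shift_Hz)      # ~0.3816 Hz, matching the value quoted in the text
```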
Table 2: Comparison of the infrared magic wavelength (nm) and the Ca+ blackbody radiation shift (Hz) at 300 K. | Present | All-order method | DFCP method
---|---|---|---
Magic wavelength | 1056.37(9) | 1052.26 Kaur15 | 1074(26) Tang13 , 1074(32) Jiang17
BBR shift | 0.3816(28) | 0.3811(44) Safronova11 , 0.31(1) Sahoo09 | 0.380(14) Arora07 , 0.368 Mitroy08
In summary, we have performed an experimental determination of the infrared
magic wavelength in Ca+ with an uncertainty of less than $0.1$ nm. Our result
agrees well with theoretical values but is 1-2 orders of magnitude more
precise. Using our measured result, the differential static scalar
polarizability has been determined as $-44.32(32)$ a.u., also in agreement
with the previous theoretical values but with higher accuracy. The blackbody
radiation shift at 300 K has then been evaluated as 0.3816(28) Hz, which is
also in good agreement with our recent measurement Huang19 . It is noted that
the infrared magic wavelength for the $4s_{1/2}\rightarrow 3d_{5/2}$
transition ($m_{J}=1/2$) was also predicted theoretically in Ref. Tang13 . The
matrix element of the $3d_{5/2}\rightarrow 4f_{7/2}$ transition, whose
theoretical uncertainty is 1.1% using the relativistic all-order method, could
be extracted and improved from a further measurement of this magic wavelength,
which can help reduce the BBR shift uncertainty further. Although the
differential static scalar polarizability can be obtained experimentally with
better accuracy by measuring the magic rf field Huang19 , which could result
in a BBR shift with lower uncertainty, this method requires that the
differential static polarizability of the clock transition be negative Huang19
; Dube14 . However, many optical clock candidates, such as Yb+, In+, Sr, and
Yb, do not satisfy this criterion. The scheme in this work, which uses the
magic wavelength to extract the transition matrix elements, can be an
alternative and more general way to determine the differential static
polarizability.
Furthermore, the determination of the infrared magic wavelength is also a very
important step for building an all-optical trapping ion optical clock in the
near future. Long-time all-optical trapping of ions has already been achieved
recently by Schaetz’s group Lambrecht17 . It is found that one can trap an ion
in an optical dipole trap only if the trap depth is higher than the kinetic
energy of the ion motion, and the heating rate of the dipole trap is higher
for relatively near-resonance wavelengths. The ion lifetime in a dipole trap
would be a few ms with lasers red-detuned by a few hundred GHz Schneider10 ;
Huber14 ; yet the lifetime can be extended to a few seconds with far-off-
resonance lasers detuned by a few hundred THz. To realize an ion-based optical
clock with an all-optical trapping scheme, a lifetime of at least 100 ms is
required, and the heating rate should be kept as low as possible in order to
lower the Doppler and Stark shifts. Building a clock with infrared lasers red-
detuned by hundreds of THz is a better choice compared to the 395 nm laser.
Besides, one can easily obtain a fiber laser with high power ($>$ 60 W) at the
Ca+ infrared magic wavelength in the range of $1000\sim 1100$ nm. The all-
optical trapping ion optical clock scheme can be used to trap multiple ions
Schmidt18 , which will potentially increase the clock stability. However, the
magic wavelength is sensitive to the alignment of the beam and its
polarization relative to the magnetic field orientation; in our case, these
effects would limit the precision of the magic wavelength to the 0.1 nm level,
which would in turn limit the accuracy of the optical clocks. From a practical
point of view, building a high-accuracy all-optical ion clock will require
techniques to make the laser pointing and the magnetic field more stable.
We thank Jun Jiang, Yongbo Tang, Fangfei Wu, V. Yudin, A. Taichenachev,
Zongchao Yan, B. Sahoo, and J. Ye for help and fruitful discussions. This work
is supported by the National Key R&D Program of China (Grant Nos.
2018YFA0307500, 2017YFA0304401, 2017YFA0304404, 2017YFF0212003), the Natural
Science Foundation of China (Grant Nos. 11634013, 11622434, 91736310,
11774388), the Strategic Priority Research Program of the Chinese Academy of
Sciences (Grant No. XDB21030100), CAS Youth Innovation Promotion Association
(Grant Nos. 2015274, 2018364), and Hubei Province Science Fund for
Distinguished Young Scholars (Grant No. 2017CFA040).
## References
* (1) Ushijima, I., Takamoto, M., Das, M., Ohkubo, T. & Katori, H. Nat. Photonics 9, 185-189 (2015).
* (2) Huntemann, N., Sanner, C., Lipphardt, B., Tamm, Chr. & Peik, E. Phys. Rev. Lett. 116, 063001 (2016).
* (3) McGrew, W. F., Zhang, X., Fasano, R. J., Schäffer, S. A., Beloy, K., Nicolodi, D., Brown, R. C., Hinkley, N., Milani, G., Schioppo, M., Yoon, T. H. & Ludlow, A. D. Nature 564, 87-90 (2018).
* (4) S. M. Brewer, J.-S. Chen, A. M. Hankin, E. R. Clements, C.W. Chou, D. J. Wineland, D. B. Hume & D. R. Leibrandt Phys. Rev. Lett. 123, 033201 (2019).
* (5) Bothwell, T., Kedar, D., Oelker, E., Robinson, J. M., Bromley, S. L., Tew, W. L., Ye, J. & Kennedy, C. J. Metrologia 56, 065004 (2019).
* (6) R. Le Targat, L. Lorini, Y. Le Coq, M. Zawada, J. Guena, M. Abgrall, M. Gurov, P. Rosenbusch, D. G. Rovera, B. Nagórny, R. Gartman, P. G. Westergaard, M. E. Tobar, M. Lours, G. Santarelli, A. Clairon, S. Bize, P. Laurent, P. Lemonde& J. Lodewyck, Experimental realization of an optical second with strontium lattice clocks, Nat. Commun. 4 2109 (2013).
* (7) M. S. Safronova, D. Budker, D. DeMille, D. F. J. Kimball, A. Derevianko & C. W. Clark, Search for new physics with atoms and molecules, Rev. Mod. Phys. 90, 025008(2018).
* (8) N. Huntemann, B. Lipphardt, Chr. Tamm, V. Gerginov, S. Weyers & E. Peik, Phys. Rev. Lett. 113, 210802 (2014).
* (9) J. Grotti et al., Geodesy and metrology with a transportable optical clock, Nat. Phys. 14, 437 (2018).
* (10) Schneider, Ch., Enderlein, M., Huber, T. & Schaetz, T. Optical trapping of an ion. Nat. Photonics 4, 772-775 (2010).
* (11) Huber, T., Lambrecht, A., Schmidt, J., Karpa, L. & Schaetz, T. Nat. Commun. 5, 5587 (2014).
* (12) Lambrecht, A., Schmidt, J., Weckesser, P., Debatin, M., Karpa, L. & Schaetz, T. Nat. Photonics 11, 704-707 (2017).
* (13) LeBlanc, L. J. & Thywissen, J. H. Species-specific optical lattices. Phys. Rev. A 75, 053612 (2007).
* (14) Arora, B., Safronova, M. S. & Clark, C. W. Tune-out wavelengths of alkali-metal atoms and their applications. Phys. Rev. A 84, 043401 (2011).
* (15) Herold, C. D., Vaidya, V. D., Li, X., Rolston, S. L., Porto, J. V. & Safronova, M. S. Precision Measurement of Transition Matrix Elements via Light Shift Cancellation. Phys. Rev. Lett. 109, 243003 (2012).
* (16) Holmgren, W. F., Trubko, R., Hromada, I. & Cronin, A. D. Measurement of a Wavelength of Light for Which the Energy Shift for an Atom Vanishes. Phys. Rev. Lett. 109, 243004 (2012).
* (17) Schmidt, J., Lambrecht, A., Weckesser, P., Debatin, M., Karpa, L. & Schaetz, T. Optical Trapping of Ion Coulomb Crystals. Phys. Rev. X 8, 021028 (2018).
* (18) Derevianko, A. Reconciliation of the Measurement of Parity Nonconservation in Cs with the Standard Model. Phys. Rev. Lett. 85, 1618-1621 (2000).
* (19) Sahoo, B. K., Chaudhuri, R., Das, B. P. & Mukherjee, D. Relativistic Coupled-Cluster Theory of Atomic Parity Nonconservation: Application to 137Ba+. Phys. Rev. Lett. 96, 163003 (2006).
* (20) Porsev, S. G., Beloy, K. & Derevianko, A. Precision Determination of Electroweak Coupling from Atomic Parity Violation and Implications for Particle Physics. Phys. Rev. Lett. 102, 181601 (2009).
* (21) Liu, P.-L., Huang, Y., Bian, W., Shao, H., Guan, H., Tang, Y.-B., Li, C.-B., Mitroy, J. & Gao, K.-L. Measurement of Magic Wavelengths for the 40Ca+ Clock Transition. Phys. Rev. Lett. 114, 223001 (2015).
* (22) Tang, Y.-B., Qiao, H.-X., Shi, T.-Y. & Mitroy, J. Dynamic polarizabilities for the low-lying states of Ca+. Phys. Rev. A 87, 042517 (2013).
* (23) Kaur, J., Singh, S., Arora, B. & Sahoo, B. K. Magic wavelengths in the alkaline-earth-metal ions. Phys. Rev. A 92,031402 (2015).
* (24) Jiang, J., Jiang, L., Wang, X., Zhang, D.-H., Xie, L.-Y. & Dong, C.-Z. Magic wavelengths of the Ca+ ion for circularly polarized light. Phys. Rev. A 96, 042503 (2017).
* (25) Haycock, D. L., Hamann, S. E., Klose, G. & Jessen, P. S. Atom trapping in deeply bound states of a far-off-resonance optical lattice. Phys. Rev. A 55, R3991(R) (1997).
* (26) Huang, Y., Guan, H., Zeng, M., Tang, L. & Gao, K. 40Ca+ ion optical clock with micromotion-induced shifts below 1$\times$10-18. Phys. Rev. A 99, 011401(R) (2019).
* (27) Safronova, M. S. & Safronova, U. I. Blackbody radiation shift, multipole polarizabilities, oscillator strengths, lifetimes, hyperfine constants, and excitation energies in Ca+. Phys. Rev. A 83, 012503 (2011).
* (28) Kramida, A., Ralchenko, Yu., Reader, J. & NIST ASD Team (2018). NIST Atomic Spectra Database (version 5.6.1), https://physics.nist.gov/asd.
* (29) Kaur, J., Singh, S., Arora, B. & Sahoo, B. K. Annexing magic and tune-out wavelengths to the clock transitions of the alkaline-earth-metal ions. Phys. Rev. A 95, 042501 (2017).
* (30) Arora, B., Safronova, M. S. & Clark, C. W. Blackbody-radiation shift in a 43Ca+ ion optical frequency standard. Phys. Rev. A 76, 064501 (2007).
* (31) Sahoo, B. K., Das, B. P. & Mukherjee D. Relativistic coupled-cluster studies of ionization potentials, lifetimes, and polarizabilities in singly ionized calcium. Phys. Rev. A 79, 052511 (2009).
* (32) Mitroy, J. & Zhang, J. Y. Long range interactions of the Mg+ and Ca+ ions. Eur. Phys. J. D 46, 415-424 (2008).
* (33) Barrett, M. D., Arnold, K. J., & Safronova, M. S. Polarizability assessments of ion-based optical clocks. Phys. Rev. A 100, 043418(R) (2019).
* (34) Dubé, P., Madej, A. A., Tibbo, M. & Bernard, J. E. High-Accuracy Measurement of the Differential Scalar Polarizability of a 88Sr+ Clock Using the Time-Dilation Effect. Phys. Rev. Lett. 112, 173002 (2014).
# The effect of wave dark matter on equal mass black hole mergers
Josu C. Aurrekoetxea<EMAIL_ADDRESS>Astrophysics,
University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH,
United Kingdom Katy Clough Geometry, Analysis and Gravitation, School of
Mathematical Sciences, Queen Mary University of London, Mile End Road, London
E1 4NS, United Kingdom Jamie Bamber Departments of Physics and Astronomy,
University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA Pedro G.
Ferreira Astrophysics, University of Oxford, Denys Wilkinson Building, Keble
Road, Oxford OX1 3RH, United Kingdom
###### Abstract
For dark matter to be detectable with gravitational waves from binary black
holes, it must reach higher than average densities in their vicinity. In the
case of light (wave-like) dark matter, the density of dark matter between the
binary can be significantly enhanced by accretion from the surrounding
environment. Here we show that the resulting dephasing effect on the last ten
orbits of an equal mass binary is maximized when the Compton wavelength of the
scalar particle is comparable to the orbital separation, $2\pi/\mu\sim d$. The
phenomenology of the effect is different to the channels that are usually
discussed, where dynamical friction (along the orbital path) and radiation of
energy and angular momentum drive the dephasing, and is rather dominated by
the radial force (the spacetime curvature in the radial direction) towards the
overdensity between the black holes. Whilst our numerical studies limit us to
scales of the same order, this effect may persist at larger separations and/or
particle masses, playing a significant role in the merger history of binaries.
Introduction.— Gravitational-wave observations provide a unique window that
can be used not only to infer the astrophysical properties of black holes
(BHs), but also to gather information about the environments they live in. The
presence of matter around BHs during a binary merger event results in
modifications to the trajectories, which in turn changes the gravitational-
wave signal in a characteristic way [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18, 19]. Environments may arise from standard matter, such
as accretion discs, or from dark matter (DM). In this work we focus on the
latter case.
Figure 1: Dephasing in the coalescence time $\Delta t_{\mathrm{c}}$ for a
10-orbit binary for different scalar masses $\mu$ and initial densities
$\rho_{0}$. Here $\bar{t}_{\mathrm{c}}$ is the merger (coalescence) time in
the absence of a dark matter cloud. The effect is maximized for $\mu\approx
0.45M^{-1}$, corresponding to a Compton wavelength of the dark matter particle
that is comparable to the initial separation of the orbit
$\lambda_{c}=2\pi/\mu\sim d_{0}\approx 12M$. Whilst we focus on the regime of
small separations for numerical reasons, larger ones may support effects from
smaller mass DM candidates. We also note that the effect persists even at
larger masses that would normally already be showing a more particle-like
behaviour. The effect could therefore be more generic than our study suggests.
The DM energy densities required to give significant effects on the signal are
high relative to the expected average galactic values, with the latter
determined by large scale observations [20, 21, 22, 23, 24]. Therefore the
impact of such effects may be expected to be small [6]. However, average
galactic densities describe DM on large scales only, and its distribution on
small scales (in particular the parsec and below scales relevant for
astrophysical BHs) is not well constrained. There exist several mechanisms
that could create DM overdensities around isolated BH. One well known
possibility is the superradiant instability, in which a bosonic field extracts
energy and angular momentum from a highly spinning black hole via repeated
scattering in the ergoregion [25, 26, 27, 28, 29, 30] (see [31] for a review).
Another more prosaic effect is simply the accretion of dark matter in the
potential well around BHs, which results in the formation of “dark matter
spikes” [32] (a combination of both superradiance and accretion may lead to
even higher densities [33]). Such spikes were originally proposed in the
context of WIMP-like dark matter, but in general their profile is a power law
with an exponent that depends on the effective equation of state of the dark
matter [34, 35, 36, 37, 38, 39, 40]. However, they also occur for low mass,
wave-like DM candidates, with a form that is dependent on the relative Compton
wavelength of the DM particle and the black hole horizon [41, 42, 43, 44, 45,
46]. In both cases, the DM density near the BHs depends on the asymptotic dark
matter environment and on the particle properties.
However, a key question is whether these overdensities around isolated objects
persist during a binary merger event. In the case of heavy (particle-like) DM
[47], N-body simulations have shown that they disperse for equal mass mergers,
meaning that objects close to merger or with a violent merger history are
likely to have lost their DM environment [48, 49, 50]. Dark matter spikes
nevertheless remain relevant for intermediate and extreme mass ratio inspirals
(IMRIs and EMRIs) or primordial black holes, with signatures potentially
detectable in next generation space and ground based observations [51, 52, 53,
54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]. For light or wave-like DM
[66] (see [67, 68, 69] for reviews), much work has focused on the impact of
black holes moving in galactic DM halos [70, 71, 72, 73, 74, 75, 76, 77, 78]
or with superradiant clouds [79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90,
91, 92, 93, 94, 95, 96, 97, 98, 99]. Some of this work has suggested that the
cloud is not completely lost. In a previous publication [100], we demonstrated
that overdensities around equal mass binaries grew into a quasi-stationary
profile that persisted up until the merger (see also [101, 102, 103, 104]).
In this paper, we build on our study to better understand how generic such an
effect is, and to properly quantify the impact that the DM has on the binary
close to merger. We focus on the effect of wave DM on equal mass BH mergers,
and in particular its dependence on the mass of the scalar particle. We
simulate a 10-orbit binary black hole in an initially homogeneous dark matter
environment starting from initial conditions satisfying the Hamiltonian and
momentum constraints. We identify the decay of the orbit (and, as a
consequence, dephasing of the gravitational wave signal) as being a direct
result of the scalar cloud. Our key results are illustrated in Fig. 1, where
we show the dephasing is maximized when the mass of the scalar particle is
such that its Compton wavelength is comparable to the initial separation of
the orbit $\lambda_{c}=2\pi/\mu\sim d_{0}$. In addition, we are able to
quantify the different channels that contribute to the dephasing in our
scenario, finding the dominant effect to be driven not by radiation or
dynamical friction drag forces, as are often discussed, but rather the
attraction of the binary to the central overdensity.
Key background and physical setup.— We consider a minimally coupled massive
complex scalar field $\Phi$ described by the action
$S=\int\mathrm{d}^{4}x\sqrt{-g}\left(\frac{R}{16\pi
G}-\frac{1}{2}\left(\nabla_{\mu}\Phi\right)^{*}\left(\nabla^{\mu}\Phi\right)-V(\Phi)\right),$
(1)
with a quadratic potential
$V(\Phi,\Phi^{*})=\frac{1}{2}\mu^{2}\Phi^{*}\Phi\,,$ (2)
where $\mu$ is a parameter related to the scalar field mass111The parameter
$\mu$ is the inverse length scale $\mu=2\pi/\lambda_{c}=m_{s}c/\hbar$
associated with the scalar field mass $m_{s}$. In Planck units $\mu=m_{s}$, so
it is common to refer to $\mu$ simply as “the scalar mass”. The dynamics of
the scalar field is given by the Klein-Gordon equation coupled to gravity
$\left[\nabla^{\alpha}\nabla_{\alpha}-\mu^{2}\right]\Phi=0\,.$ (3)
Figure 2: Cloud density for two values of $\mu$: one large with
$\lambda_{c}\approx d_{0}$ (right) in which case the binary obtains an
enhanced density cloud, and one smaller $\lambda_{c}\gg d_{0}$ (left) in which
case the pressure coming from the long wavelength of the collective
excitations of the field prevents a high density cloud from forming. We refer
to these as the “cloud” and “no cloud” cases respectively. Simulation movie in
[105].
In the case of a single BH immersed in a reservoir of such scalar DM, the
stationary solution near the black hole is described by the Heun functions
[42, 106, 107, 108], with a power law envelope and characteristic oscillations
in the spatial profile on length scales set by the scalar wavelength. In the
case of a binary no analytic form for a stationary state is known, but
simulations [100] using the numerical codes grchombo [109] and grdzhadzha
[110] have indicated that for a range of initial configurations and within a
few orbits, the scalar matter evolves into a persistent quasi-stationary
profile with density spikes near the black holes and an overdensity between
them.
Ideally we would set this “natural” quasi-stationary DM configuration as an
initial condition, and study the impact the cloud has on the binary merger
using general relativity. However, even if an analytic form were known, a
consistent solution of the GR constraints would lead to changes to the initial
effective masses and momenta of the black holes for different densities and
profiles, making comparisons of the subsequent evolutions difficult to
interpret. In particular, it is hard to know whether the additional dephasing
arises from matter effects or from the increased initial eccentricity of
the orbits. One can mitigate this by applying eccentricity reduction schemes
to the initial data, but the fact that the clouds can be very dense near the
horizon makes this challenging as the eccentricity is extremely sensitive to
small changes.
In this paper we take a simpler approach. Given the short relaxation timescale
of the cloud ($\sim 2$ orbits) compared to the timescale of the merger we are
simulating ($\sim 10$ orbits), we start all simulations from a homogeneous
configuration with fixed initial density
$\rho_{0}=\mu^{2}\phi_{0}^{2}\,,$ (4)
and allow the cloud to build up dynamically during the simulation. To do so,
we choose homogeneous initial conditions for the real and imaginary components
of the scalar field, $\Phi=(\phi_{0},0)$ and
$\partial_{t}\Phi=(0,\mu\phi_{0})$. As we vary $\mu$ (which gives different
cloud configurations, Fig. 2), we adjust $\phi_{0}$ so as to keep the
initial density $\rho_{0}$ unchanged. The initial trajectories of the binary
obtained from solving the initial data are then the same in all cases, which
allows a controlled comparison between different masses (and different scalar
cloud profiles). We still need to be conscious of the effect of the transient
phase, but since this can be clearly identified in the evolution during the
first few orbits, it is easy to separate out.
We decompose the line element in the ADM form [111]
$ds^{2}=-\alpha^{2}\mathrm{d}t^{2}+\gamma_{ij}(\mathrm{d}x^{i}+\beta^{i}\mathrm{d}t)(\mathrm{d}x^{j}+\beta^{j}\mathrm{d}t),$
(5)
where $\gamma_{ij}$ is the three-dimensional spatial metric that we decompose
into a conformal factor and a conformally related metric
$\gamma_{ij}=\bar{\gamma}_{ij}/\chi$. The lapse and shift gauge functions
$\alpha$ and $\beta^{i}$ determine the choice of spatial hyperslicings and
their coordinates, which in numerical relativity are dynamically determined.
The extrinsic curvature tensor
$K_{ij}=(2D_{(i}\beta_{j)}-\partial_{t}\gamma_{ij})/(2\alpha)$ is decomposed
into a trace $K$ and a traceless part $A_{ij}$, i.e.
$K_{ij}=A_{ij}+(1/3)K\gamma_{ij}$.
We solve the Hamiltonian constraint using Bowen-York initial data using the
CTTK hybrid method [112]. In the homogeneous limit, this reduces to choosing
the trace of the extrinsic curvature tensor $K^{2}=24\pi G\rho$ and solving
for a correction of the conformal factor $\chi$ sourced by the traceless
components $A_{ij}$. We use the open-source numerical relativity code grchombo
[109, 113] with adaptive mesh refinement [114] to solve the full Einstein
equations using the CCZ4 formalism [115] with the moving puncture gauge [116,
117, 118, 119, 120]. We use a simulation box length $L=512M$ and $9$ levels of
mesh refinement and impose reflecting boundary conditions at $z=0$, while for
the other boundaries we apply zeroth order extrapolating boundaries to the
scalar field and Sommerfeld boundary conditions to the metric. Further
technical details and the parameters for each case are given in the
Supplemental Material.
Figure 3: Dephasing of the gravitational wave signal due to the accretion,
dynamical friction and emission of wave dark matter around a binary black hole
merger. The top panel shows the real part of the $\psi_{22}$ mode, whilst the
middle and bottom panels show its modulus and phase, respectively. The black
solid line corresponds to $(\mu,\phi_{0})=(68,50)\times 10^{-4}$, which we
refer to as “no cloud”, as the Compton wavelength of the scalar field is much
larger than the binary separation and we do not efficiently excite a DM cloud.
The blue solid line corresponds to $(\mu,\phi_{0})=(4300,0.79)\times 10^{-4}$,
and causes a $\Delta t_{\mathrm{c}}/\bar{t}_{\mathrm{c}}\approx 10\%$ dephasing
of the merger time. The initial density for both of these cases is
$\rho_{0}\approx 10^{-9}M^{-2}$. Movie in [105].
Dephasing of the binary.— We study the tensor gravitational-wave modes
$\psi_{lm}$ emitted by the binary black hole extracting the Newman-Penrose
scalar $\Psi_{4}$ with tetrads proposed by [121], projected into spin-weight
$-2$ spherical harmonics ${}_{-2}Y^{lm}$
$r_{\text{ex}}\psi_{lm}=\oint_{S^{2}}r_{\mathrm{ex}}\Psi_{4}|_{r=r_{\mathrm{ex}}}\left[{}_{-2}\bar{Y}^{lm}\right]\,\mathrm{d}\Omega\,,$
(6)
where $\mathrm{d}\Omega=\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi$ is
the area element on the $S^{2}$ unit sphere222Due to the nonzero flux of
matter at the boundary, we extract gravitational waves in a region that is not
strictly asymptotically flat, but in each constant density case the small
nonzero value of $K$ is the same, meaning that our comparisons are still
meaningful. The merger or coalescence time for our
10-orbit binary in vacuum is $\bar{t}_{\mathrm{c}}\approx 2000\,M$, defined as
when $|\psi_{22}|$ peaks, which is the dominant mode. This is also the case
for small initial densities $\rho_{0}$ since there is less backreaction of the
matter on the binary metric. For a given density, a smaller effect is also
seen for masses $\mu\ll M^{-1}$, as the Compton wavelength $\lambda_{c}\gg d$
and the cloud is not efficiently excited, see Fig. 2. We refer to the case of
small $\mu$ as the “no cloud” configuration (it has the same initial, non zero
density, but no structure forms around the binary), and the higher $\mu$ case
as “with cloud”.
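For concreteness, a minimal numerical sketch of the surface projection defined above: it evaluates the dominant spin-weight $-2$ harmonic ${}_{-2}Y^{22}$ in closed form and integrates $r_{\mathrm{ex}}\Psi_{4}$ against its conjugate over the extraction sphere. The grid resolution and the test field are illustrative choices, not those of the production code.

```python
import numpy as np

def sY22(theta, phi):
    """Spin-weight s=-2, (l,m)=(2,2) spherical harmonic (closed form)."""
    return np.sqrt(5.0 / (64.0 * np.pi)) * (1.0 + np.cos(theta)) ** 2 * np.exp(2j * phi)

def extract_psi22(psi4_on_sphere, r_ex, n_theta=64, n_phi=128):
    """Project r_ex * Psi4, sampled on the extraction sphere, onto -2Y^22."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    integrand = r_ex * psi4_on_sphere(TH, PH) * np.conj(sY22(TH, PH)) * np.sin(TH)
    return np.sum(integrand) * (theta[1] - theta[0]) * (phi[1] - phi[0])

# sanity check: if Psi4 = A * (-2Y^22) / r_ex, the projection returns ~A
A = 0.1 + 0.2j
print(extract_psi22(lambda th, ph: A * sY22(th, ph) / 40.0, r_ex=40.0))
```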
A typical result can be seen in Fig. 3. As expected, the presence of wave dark
matter around the binary results in a dephasing of the gravitational-wave
signal $\Delta t_{\mathrm{c}}\equiv t_{\mathrm{c}}-\bar{t}_{\mathrm{c}}$,
which is caused by effects like accretion and dynamical friction from the
cloud. We will give explanations and order of magnitude estimates for the
various effects in the following section.
In Fig. 1 we compare the dephasing for different DM masses
$\mu\in\{0.0068,\,0.86\}\,M^{-1}$, corresponding to wavelengths
$\lambda_{c}\in\{924,\,7\}\,M$ that span a range above and below the initial
binary separation $d_{0}\approx 12\,M$. We find that the dephasing is
maximized for $\mu\approx 0.45M^{-1}$, corresponding to $\lambda_{c}\approx
14M\approx d_{0}$. For smaller masses, $\mu<0.45M^{-1}$, the cloud quickly
becomes suppressed and the dephasing becomes negligible, but we note that
larger separations earlier in the lifetime of the binary may support clouds at
smaller masses. For larger masses, $\mu>0.45M^{-1}$, the dephasing is
smaller but remains significant, and we still find an efficient excitation of
the cloud. On the one hand, this is not so surprising: even at our highest
mass, we are still in a regime where $\mu\approx M^{-1}$, and so as the merger
radiates gravitational waves and inspirals in, it eventually approaches an
orbital separation comparable to $\lambda_{c}$. On the other hand, in other
studies (see e.g. [80]) one often finds that the behaviour at this limit is
already reasonably well described by the particle limit, and so we might have
expected to see a greater dissipation of the cloud and suppression of the
effect. The fact that this is not the case implies that the mass does not need
to be very finely tuned for the effects to be significant, and motivates a
more detailed study to find the boundary between the wave and particle
regimes.
We also vary the asymptotic energy density to find the value at which the
dephasing is detectable in our simulations during the last 10 orbits, which
gives an indication of the value required for effects to be significant at
merger. See the conclusion section for these values in physical units. Smaller
values may still give detectable effects if observations can happen over a
longer time frame (e.g. by combined LISA and LVK observations), but here we
aim to consider the simplest case where the dephasing is significant in the
merger part of the signal alone.
Quantification of the causes of the dephasing.— To quantify the origins of the
dephasing we want to identify the changes in the energy, angular and radial
momentum of the binary that relate to the presence of the cloud of matter. In
the Newtonian picture, these would include the effects of accretion of the
matter and gravitational forces coming from the uneven distribution of the
surrounding cloud (e.g. the effect of dynamical friction where the object
builds up an overdense tail of matter). In the GR picture these are curvature
effects not forces, but we can still quantify them, up to an unavoidable
slicing dependence333Here we say slicing dependence rather than gauge
dependence because the defined quantities are scalars and in principle do not
depend on the gauge. In practice, however, their definition is with respect to
the coordinate observers of the dynamical evolution and so choosing a
different slicing will result in physically different scalars, with
consequently different values..
We follow the approach of [122, 123, 124, 80], and define a current
$J^{\mu}=\xi^{\nu}T^{\mu}_{\nu}$ in the direction $\xi^{\nu}$ and associated
charge and flux
$Q=-n_{\mu}J^{\mu}\qquad F=\alpha N_{i}J^{i}\,,$ (7)
where $N_{i}$ is the outward normal direction to the surface that bounds the
volume. If $\xi^{\nu}$ is a Killing vector then $\nabla_{\mu}J^{\mu}=0$ and
the change in charge is balanced by a flux through a surface. When this is not
the case the conservation laws require an additional “source” term
$S=\alpha T^{\mu}_{\nu}\nabla_{\mu}\xi^{\nu}\,,$ (8)
describing the exchange of the charge between matter and curvature. It is this
quantity that corresponds to gravitational forces in the Newtonian limit, and
that quantifies the way in which momentum is extracted from the binary by the
matter444In our plots we also include in this quantity the accretion of the
matter charge into spheres around the BHs, since the volume integral is not
defined at the singularity. The split between accretion and the source term
depends strongly on the location of the spheres and the gauge and so is not
very meaningful (see [80, 124] for a more detailed discussion). The total
quantity remains slicing dependent but to a significantly lesser extent. The
conservation law is then written as
$\partial_{t}\left(\int Q\mathrm{d}V\right)=\int S\mathrm{d}V-\int
F\mathrm{d}A\,,$ (9)
where these correspond to the change in charge in the cloud, the exchange
between matter from/to curvature, and the flux to/from infinity. In this work
we will focus on extracting the charges and fluxes related to the energy,
angular and radial momentum, via $\xi_{t}^{\nu}=(1,0,0,0)$,
$\xi_{\phi}^{\nu}=(0,-y,x,0)$ and $\xi_{r}^{\nu}=-(0,x,y,z)/r$. We write the
explicit expressions in terms of the ADM variables in the Supplemental
Material. In Fig. 4 we plot the time integration of each of these quantities.
In each case the black line should equal the sum of the blue and red lines and
provides a check on the error. It is the red line that quantifies the
extraction of the relevant charge from the binary by the matter, and therefore
this is what drives the dephasing. The initial period before $t-r_{ex}\approx
40M$ will contain transient effects related to the growth of the cloud from an
unphysical initial state, but after this the effects are representative of the
quasi-stationary state. We discuss below their evolution and roughly quantify
their effect on the dephasing.
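The following sketch (with synthetic, illustrative time series) shows the bookkeeping implied by the conservation law above: the change in the integrated charge (black line in Fig. 4) should match the time integral of source minus flux (the sum of the red and blue lines).

```python
import numpy as np

def check_conservation(t, Q_vol, S_vol, F_surf):
    """Compare d/dt(int Q dV) with int S dV - oint F dA for sampled time series:
    the black line in Fig. 4 against the sum of the red and blue lines."""
    lhs = Q_vol - Q_vol[0]
    integrand = S_vol - F_surf
    rhs = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return lhs, rhs

# synthetic example: constant source, zero flux -> charge grows linearly
t = np.linspace(0.0, 100.0, 201)
S_vol, F_surf = np.full_like(t, 2e-4), np.zeros_like(t)
lhs, rhs = check_conservation(t, Q_vol=2e-4 * t, S_vol=S_vol, F_surf=F_surf)
print(np.max(np.abs(lhs - rhs)))  # ~0 up to quadrature error
```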
Figure 4: Conservation law for the energy, angular (corotating) and radial
inward momentum currents of the matter in a sphere of radius $40M$ around the
binary, which, as we can see in Fig. 2, contains the main cloud overdensity. The
black line shows the change in the total integrated charge of the cloud: $\int
Q\mathrm{d}V$. The red line describes the time integration of the exchange
between matter and curvature $\int\left(\int S\mathrm{d}V\right)\mathrm{d}t$.
The blue line is (minus) the total flux across the outer bounding surface
$\int\left(\int-F\mathrm{d}A\right)\mathrm{d}t$, positive and negative
representing ingoing and outgoing flow, respectively. We see from this the
transfers of energy and momentum from the curvature to the matter and can
infer the energy accretion onto the binary, the extraction and radiation of
angular momentum and the inwards radial force due to momentum accretion and
the central overdensity.
In the top panel of Fig. 4, we see that the energy of the matter within the
volume increases, due to the flux of matter energy across the outer surface –
this is simply reflecting the fact that the central cloud density grows over
time due to accretion from the environment. The increase is partially offset
by a negative source term, which is mainly driven by the accretion of energy
into the BHs, increasing their masses by approximately 1% over the course of
the merger (we can check that this agrees with the change in their measured
masses from the AH finder). Outside of the BHs there is very little exchange
in the energy between the curvature and the matter cloud since the spacetime
settles into a quasi-stationary state within the first $\approx 250M$ (it
would be exactly zero for a spacetime with a time-like Killing vector). After
the merger, the energy in the cloud around the remnant decreases slightly as
some is accreted onto the remnant, but it does not completely dissipate.
In the second panel, the angular momentum held in the matter cloud initially
increases as the curvature of the binary “stirs up” the cloud during the
transient phase, then reaches a reasonably steady state during which the
rate of extraction of angular momentum from the spacetime curvature (the
stirring) is balanced by its flux out of the outer surface. The result of this
is that the angular momentum of the spacetime of the binary is decreased, and
carried away by scalar waves. After merger there is an increased flux of
radiation from the outer surface which carries away all the angular momentum
built up in the cloud during the inspiral. We can view the source from/to
curvature as a dynamical friction effect – the extraction of the angular
momentum of the spacetime of the binary by the matter. Can this loss account
for the dephasing observed? A very approximate expression for the dephasing
over one period $\Delta T$ as a result of the loss of angular momentum $\Delta
J$ is
$\frac{\Delta T}{T}\sim\frac{\Delta J}{J}\,.$ (10)
This uses the Newtonian expression $L=mr^{2}\omega$ and assumes a roughly
circular orbit, so that over the period
$|\dot{r}\dot{\theta}|\ll|r\ddot{\theta}|$ and $\ddot{r}$ can be neglected.
Assuming a constant rate of dephasing, $\Delta J/J^{\mathrm{bbh}}_{0}$ of the
system should be around 10% to account for the dephasing observed in this
case. We see that this is far from being the case, with the loss only of order
$0.2\%$. Something else is required to explain the dephasing.
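As a quick numerical restatement of this shortfall (numbers taken from the text above, purely illustrative):

```python
# illustrative arithmetic with the numbers quoted in the text
observed_dephasing = 0.10      # Delta t_c / t_c for the "cloud" case
required_dJ_over_J = observed_dephasing   # from Delta T / T ~ Delta J / J
measured_dJ_over_J = 0.002     # extraction seen in the second panel of Fig. 4
print(f"shortfall factor ~ {required_dJ_over_J / measured_dJ_over_J:.0f}")  # ~50
```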
In the third panel, the radial momentum held in the matter cloud is tracked.
Here, we see that the matter cloud initially gains some inward radial
momentum. However, the accretion of inward radial momentum from the outer
surface is roughly balanced by its loss into curvature: partly as a result of
the binary accreting the ingoing momentum, and partly as a result of it being
attracted to the central overdensity. This gives the binary an inward pull
that accelerates during the final plunge. Quantifying the effect is difficult
as it is happening in an extremely nonlinear regime. However, roughly
speaking, the inward radial force on each BH is
$-\mathrm{d}P_{r}/(2\mathrm{d}t)$, i.e. minus the slope of the red line in the
plot. Treating this as a force that is constant in time:
$\frac{\Delta r}{r_{0}}\sim\frac{\Delta P_{r}\,\Delta t}{Md_{0}}\,.$ (11)
Putting in the numbers for the time from $t_{\mathrm{ret}}\approx 500M$ to
$t_{\mathrm{ret}}\approx 1500M$ gives $\Delta r/r_{0}\approx 20\%$. As an
order-of-magnitude estimate, this accounts well for the dephasing that is
observed. After merger, the flux of radial momentum from the outer surface is
balanced by the accretion into the BH remnant and so the total inward momentum
of the matter remains (approximately) constant, as expected for the final
stationary state.
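For reference, inverting Eq. (11) with the numbers quoted above gives the order of magnitude of the radial momentum exchanged with the cloud (an illustrative back-of-the-envelope, not a measured simulation output):

```python
d0, dt, dr_over_r0 = 12.0, 1000.0, 0.20  # separation, duration, quoted Delta r / r_0 (units of M)
dPr = dr_over_r0 * d0 / dt               # implied Delta P_r / M from Eq. (11)
print(f"Delta P_r ~ {dPr:.1e} M,  force per BH ~ {dPr / (2 * dt):.1e}")  # dimensionless in G=c=1
```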
Conclusion.— Using general relativistic simulations of a binary accreting dark
matter, we have shown that the dephasing in the gravitational-wave signal of
an equal mass black hole merger is maximized when the Compton wavelength of
the dark matter particle is comparable to the orbital distance of the binary,
$2\pi/\mu\sim d$. We need the mass of the scalar to be sufficiently large for
a central overdensity to build up – low mass scalars suppress structure on
smaller scales than their Compton wavelength. Converting into physical units,
the optimal scalar mass to induce dephasing in the last 10 orbits of an equal
mass binary with total mass $M$ is then
$\mu\approx 5\times
10^{-17}\,\left(\frac{M}{10^{6}M_{\odot}}\right)^{-1}\,\mathrm{eV}\,,$ (12)
which can result in a $10\%$ dephasing during the last 10 orbits of the binary
(taking the blue line in Fig. 1) for asymptotic densities around the BH of
$\rho_{0}\approx
10^{20}\,\left(\frac{M}{10^{6}M_{\odot}}\right)^{-2}\,\frac{M_{\odot}}{\mathrm{pc}^{3}}\,.$
(13)
This is high relative to the average DM density, but the dephasing that we
obtain is only over a short period of the binary’s lifetime ($\sim 10$
orbits), and has a cumulative effect. Therefore smaller densities could give
sufficient dephasing to be detectable assuming the effect is triggered at
larger separations (which would also allow lower mass candidates to contribute
to the effect). In particular, multi-band observations between the LISA and
LVK detectors should be able to probe the whole range of frequencies the
binary explores, accumulating effects and thus probing smaller densities than
the ones discussed here. There is also potential for direct detection of the
cloud for models with standard model couplings [125].
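A short sketch of the unit conversion behind Eq. (12), using $m_{s}c^{2}=\hbar c\,\mu_{\rm phys}$ with $\mu_{\rm phys}=(\mu M)/(GM/c^{2})$; the constants are rounded, and the result agrees with Eq. (12) to within an $\mathcal{O}(1)$ factor.

```python
HBAR_C = 1.97327e-7        # eV * m
GMSUN_OVER_C2 = 1.47664e3  # m (G M_sun / c^2)

def mu_in_eV(mu_times_M, M_in_Msun):
    """Scalar mass in eV for dimensionless mu*M and a binary mass in solar masses."""
    length_scale = GMSUN_OVER_C2 * M_in_Msun  # G M / c^2 in metres
    return mu_times_M * HBAR_C / length_scale

print(mu_in_eV(0.45, 1e6))  # ~6e-17 eV, consistent with Eq. (12)
```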
The simulations in this work demonstrate that accumulation of wave-like dark
matter between the binary could have a significant effect on the merger
history of binaries, unlike in particle cases where the dark matter tends to
disperse. As recently suggested in [70], it could even go some way to
explaining the final parsec problem. In particular, we highlight the
importance of considering the radial force arising from any central
overdensity that forms, in addition to the radiation of waves carrying angular
momentum and energy away from the binary. As noted above, the effects remain
significant even at the higher end of the masses that we can probe in our
simulations, at which $\mu M\approx 1$. Further investigations should be made
to determine the point at which particle-like behaviour takes effect, and to
study the importance of the relativistic features in our simulations such as
the presence of black hole horizons.
Acknowledgements.— We would like to thank Jean Alexandre, Emanuele Berti,
Gianfranco Bertone, Robin Croft, Giuseppe Ficarra, Thomas Helfer, Charlie Hoy,
Lam Hui, Macarena Lagos, Eugene Lim, Miren Radia, Dina Traykova, Rodrigo
Vicente, Sebastian von Hausegger and Helvi Witek for helpful conversations. We
thank the GRChombo collaboration (www.grchombo.org) for their support and code
development work. JCA acknowledges funding from the Beecroft Trust and The
Queen’s College via an extraordinary Junior Research Fellowship (eJRF). KC
acknowledges funding from the UKRI Ernest Rutherford Fellowship (grant number
ST/V003240/1). JB acknowledges funding from a Science and Technology
Facilities Council (STFC) PhD studentship and funding from National Science
Foundation (NSF) Grant PHY-2006066. PGF acknowledges support from STFC and the
Beecroft Trust.
This work was performed using the DiRAC@Durham facility managed by the
Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility
(www.dirac.ac.uk) under DiRAC RAC15 Grant ACTP316. The equipment was funded by
BEIS capital funding via STFC capital grants ST/P002293/1, ST/R002371/1 and
ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. Part
of this work was performed using the DiRAC Data Intensive service at
Leicester, operated by the University of Leicester IT Services, which forms
part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was
funded by BEIS capital funding via STFC capital grants ST/K000373/1 and
ST/R002363/1 and STFC DiRAC Operations Grant ST/R001014/1. This work also used
the DiRAC@Durham facility managed by the Institute for Computational Cosmology
on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was
funded by BEIS capital funding via STFC Capital Grants ST/P002293/1,
ST/R002371/1 and ST/S002502/1, Durham University and STFC Operations Grant
ST/R000832/1. DiRAC is part of the National e-Infrastructure.
## References
* Barack _et al._ [2019] L. Barack _et al._ , Class. Quant. Grav. 36, 143001 (2019), arXiv:1806.05195 [gr-qc] .
* Barausse _et al._ [2020] E. Barausse _et al._ , Gen. Rel. Grav. 52, 81 (2020), arXiv:2001.09793 [gr-qc] .
* Afshordi _et al._ [2023] N. Afshordi _et al._ (LISA Consortium Waveform Working Group), (2023), arXiv:2311.01300 [gr-qc] .
* Caneva Santoro _et al._ [2023] G. Caneva Santoro, S. Roy, R. Vicente, M. Haney, O. J. Piccinni, W. Del Pozzo, and M. Martinez, (2023), arXiv:2309.05061 [gr-qc] .
* Cardoso and Maselli [2020] V. Cardoso and A. Maselli, Astron. Astrophys. 644, A147 (2020), arXiv:1909.05870 [astro-ph.HE] .
* Barausse _et al._ [2014] E. Barausse, V. Cardoso, and P. Pani, Phys. Rev. D 89, 104059 (2014), arXiv:1404.7149 [gr-qc] .
* Yunes _et al._ [2011] N. Yunes, B. Kocsis, A. Loeb, and Z. Haiman, Phys. Rev. Lett. 107, 171103 (2011), arXiv:1103.4609 [astro-ph.CO] .
* Kocsis _et al._ [2011] B. Kocsis, N. Yunes, and A. Loeb, Phys. Rev. D 84, 024032 (2011), arXiv:1104.2322 [astro-ph.GA] .
* Macedo _et al._ [2013] C. F. B. Macedo, P. Pani, V. Cardoso, and L. C. B. Crispino, Astrophys. J. 774, 48 (2013), arXiv:1302.2646 [gr-qc] .
* Cardoso and Macedo [2020] V. Cardoso and C. F. B. Macedo, Mon. Not. Roy. Astron. Soc. 498, 1963 (2020), arXiv:2008.01091 [astro-ph.HE] .
* Bertone and Tait [2018] G. Bertone and T. Tait, M. P., Nature 562, 51 (2018), arXiv:1810.01668 [astro-ph.CO] .
* Alves Batista _et al._ [2021] R. Alves Batista _et al._ , (2021), arXiv:2110.10074 [astro-ph.HE] .
* Zwick _et al._ [2022] L. Zwick, A. Derdzinski, M. Garg, P. R. Capelo, and L. Mayer, Mon. Not. Roy. Astron. Soc. 511, 6143 (2022), arXiv:2110.09097 [astro-ph.HE] .
* Cardoso and Duque [2020] V. Cardoso and F. Duque, Phys. Rev. D 101, 064028 (2020), arXiv:1912.07616 [gr-qc] .
* Maselli _et al._ [2022] A. Maselli, N. Franchini, L. Gualtieri, T. P. Sotiriou, S. Barsanti, and P. Pani, Nature Astron. 6, 464 (2022), arXiv:2106.11325 [gr-qc] .
* Amaro-Seoane [2018] P. Amaro-Seoane, Living Rev. Rel. 21, 4 (2018), arXiv:1205.5240 [astro-ph.CO] .
* Cardoso _et al._ [2022a] V. Cardoso, K. Destounis, F. Duque, R. Panosso Macedo, and A. Maselli, Phys. Rev. Lett. 129, 241103 (2022a), arXiv:2210.01133 [gr-qc] .
* Cole _et al._ [2023] P. S. Cole, G. Bertone, A. Coogan, D. Gaggero, T. Karydas, B. J. Kavanagh, T. F. M. Spieksma, and G. M. Tomaselli, Nature Astron. 7, 943 (2023), arXiv:2211.01362 [gr-qc] .
* Krolak and Schutz [1987] A. Krolak and B. F. Schutz, Gen. Rel. Grav. 19, 1163 (1987).
* Pato _et al._ [2015] M. Pato, F. Iocco, and G. Bertone, JCAP 12, 001, arXiv:1504.06324 [astro-ph.GA] .
* Nesti and Salucci [2013] F. Nesti and P. Salucci, JCAP 07, 016, arXiv:1304.5127 [astro-ph.GA] .
* Li _et al._ [2020] Z. Li, J. Shen, and H.-Y. Schive, doi:10.3847/1538-4357/ab6598 (2020), arXiv:2001.00318 [astro-ph.GA] .
* De Martino _et al._ [2020] I. De Martino, T. Broadhurst, S. H. H. Tye, T. Chiueh, and H.-Y. Schive, Phys. Dark Univ. 28, 100503 (2020), arXiv:1807.08153 [astro-ph.GA] .
* Ablimit _et al._ [2020] I. Ablimit, G. Zhao, C. Flynn, and S. A. Bird, Astrophys. J. 895, L12 (2020), arXiv:2004.13768 [astro-ph.GA] .
* Zel’Dovich [1971] Y. B. Zel’Dovich, Soviet Journal of Experimental and Theoretical Physics Letters 14, 180 (1971).
* Press and Teukolsky [1972] W. H. Press and S. A. Teukolsky, Nature 238, 211 (1972).
* Zouros and Eardley [1979] T. J. M. Zouros and D. M. Eardley, Annals Phys. 118, 139 (1979).
* Detweiler [1980] S. L. Detweiler, Phys. Rev. D 22, 2323 (1980).
* Cardoso _et al._ [2004] V. Cardoso, O. J. C. Dias, J. P. S. Lemos, and S. Yoshida, Phys. Rev. D 70, 044039 (2004), [Erratum: Phys.Rev.D 70, 049903 (2004)], arXiv:hep-th/0404096 .
* East and Pretorius [2017] W. E. East and F. Pretorius, Phys. Rev. Lett. 119, 041101 (2017), arXiv:1704.04791 [gr-qc] .
* Brito _et al._ [2015] R. Brito, V. Cardoso, and P. Pani, Lect. Notes Phys. 906, pp.1 (2015), arXiv:1501.06570 [gr-qc] .
* Gondolo and Silk [1999] P. Gondolo and J. Silk, Phys. Rev. Lett. 83, 1719 (1999), arXiv:astro-ph/9906391 .
* Hui _et al._ [2023] L. Hui, Y. T. A. Law, L. Santoni, G. Sun, G. M. Tomaselli, and E. Trincherini, Phys. Rev. D 107, 104018 (2023), arXiv:2208.06408 [gr-qc] .
* De Luca and Khoury [2023] V. De Luca and J. Khoury, JCAP 04, 048, arXiv:2302.10286 [astro-ph.CO] .
* Berezhiani _et al._ [2023] L. Berezhiani, G. Cintia, V. De Luca, and J. Khoury, (2023), arXiv:2311.07672 [astro-ph.CO] .
* Sadeghian _et al._ [2013] L. Sadeghian, F. Ferrer, and C. M. Will, Phys. Rev. D 88, 063522 (2013), arXiv:1305.2619 [astro-ph.GA] .
* Gnedin and Primack [2004] O. Y. Gnedin and J. R. Primack, Phys. Rev. Lett. 93, 061302 (2004), arXiv:astro-ph/0308385 .
* Merritt _et al._ [2007] D. Merritt, S. Harfst, and G. Bertone, Phys. Rev. D 75, 043517 (2007), arXiv:astro-ph/0610425 .
* Merritt [2004] D. Merritt, Phys. Rev. Lett. 92, 201304 (2004), arXiv:astro-ph/0311594 .
* Shapiro and Heggie [2022] S. L. Shapiro and D. C. Heggie, Phys. Rev. D 106, 043018 (2022), arXiv:2209.08105 [astro-ph.GA] .
* Clough _et al._ [2019] K. Clough, P. G. Ferreira, and M. Lagos, Phys. Rev. D 100, 063014 (2019), arXiv:1904.12783 [gr-qc] .
* Hui _et al._ [2019] L. Hui, D. Kabat, X. Li, L. Santoni, and S. S. C. Wong, JCAP 06, 038, arXiv:1904.12803 [gr-qc] .
* Bamber _et al._ [2021] J. Bamber, K. Clough, P. G. Ferreira, L. Hui, and M. Lagos, Phys. Rev. D 103, 044059 (2021), arXiv:2011.07870 [gr-qc] .
* Bucciotti and Trincherini [2023] B. Bucciotti and E. Trincherini, (2023), arXiv:2309.02482 [hep-th] .
* de Cesare and Oliveri [2023] M. de Cesare and R. Oliveri, Phys. Rev. D 108, 044050 (2023), arXiv:2305.04970 [gr-qc] .
* Sanchis-Gual _et al._ [2016] N. Sanchis-Gual, J. C. Degollado, P. Izquierdo, J. A. Font, and P. J. Montero, Phys. Rev. D 94, 043004 (2016), arXiv:1606.05146 [gr-qc] .
* Bertone _et al._ [2005] G. Bertone, D. Hooper, and J. Silk, Phys. Rept. 405, 279 (2005), arXiv:hep-ph/0404175 .
* Merritt and Milosavljevic [2002] D. Merritt and M. Milosavljevic, in _4th International Heidelberg Conference on Dark Matter in Astro and Particle Physics_ (2002) pp. 79–89, arXiv:astro-ph/0205140 .
* Bertone and Merritt [2005] G. Bertone and D. Merritt, Phys. Rev. D 72, 103502 (2005), arXiv:astro-ph/0501555 .
* Kavanagh _et al._ [2018] B. J. Kavanagh, D. Gaggero, and G. Bertone, Phys. Rev. D 98, 023536 (2018), arXiv:1805.09034 [astro-ph.CO] .
* Ferreira _et al._ [2017] M. C. Ferreira, C. F. B. Macedo, and V. Cardoso, Phys. Rev. D 96, 083017 (2017), arXiv:1710.00830 [gr-qc] .
* Eda _et al._ [2013] K. Eda, Y. Itoh, S. Kuroyanagi, and J. Silk, Phys. Rev. Lett. 110, 221101 (2013), arXiv:1301.5971 [gr-qc] .
* Coogan _et al._ [2022] A. Coogan, G. Bertone, D. Gaggero, B. J. Kavanagh, and D. A. Nichols, Phys. Rev. D 105, 043009 (2022), arXiv:2108.04154 [gr-qc] .
* Cole _et al._ [2022] P. S. Cole, A. Coogan, B. J. Kavanagh, and G. Bertone, (2022), arXiv:2207.07576 [astro-ph.CO] .
* Kavanagh _et al._ [2020] B. J. Kavanagh, D. A. Nichols, G. Bertone, and D. Gaggero, Phys. Rev. D 102, 083006 (2020), arXiv:2002.12811 [gr-qc] .
* Hannuksela _et al._ [2020] O. A. Hannuksela, K. C. Y. Ng, and T. G. F. Li, Phys. Rev. D 102, 103022 (2020), arXiv:1906.11845 [astro-ph.CO] .
* Polcar _et al._ [2022] L. Polcar, G. Lukes-Gerakopoulos, and V. Witzany, Phys. Rev. D 106, 044069 (2022), arXiv:2205.08516 [gr-qc] .
* Amaro-Seoane _et al._ [2007] P. Amaro-Seoane, J. R. Gair, M. Freitag, M. Coleman Miller, I. Mandel, C. J. Cutler, and S. Babak, Class. Quant. Grav. 24, R113 (2007), arXiv:astro-ph/0703495 .
* Jangra _et al._ [2023] P. Jangra, B. J. Kavanagh, and J. M. Diego, JCAP 11, 069, arXiv:2304.05892 [astro-ph.CO] .
* Li _et al._ [2022] G.-L. Li, Y. Tang, and Y.-L. Wu, Sci. China Phys. Mech. Astron. 65, 100412 (2022), arXiv:2112.14041 [astro-ph.CO] .
* Yue and Cao [2019] X.-J. Yue and Z. Cao, Phys. Rev. D 100, 043013 (2019), arXiv:1908.10241 [astro-ph.HE] .
* Yue and Han [2018] X.-J. Yue and W.-B. Han, Phys. Rev. D 97, 064003 (2018), arXiv:1711.09706 [gr-qc] .
* Becker _et al._ [2022] N. Becker, L. Sagunski, L. Prinz, and S. Rastgoo, Phys. Rev. D 105, 063029 (2022), arXiv:2112.09586 [gr-qc] .
* Takahashi _et al._ [2023] T. Takahashi, H. Omiya, and T. Tanaka, Phys. Rev. D 107, 103020 (2023), arXiv:2301.13213 [gr-qc] .
* Barsanti _et al._ [2023] S. Barsanti, A. Maselli, T. P. Sotiriou, and L. Gualtieri, Phys. Rev. Lett. 131, 051401 (2023), arXiv:2212.03888 [gr-qc] .
* Schive _et al._ [2014] H.-Y. Schive, T. Chiueh, and T. Broadhurst, Nature Phys. 10, 496 (2014), arXiv:1406.6586 [astro-ph.GA] .
* Hui [2021] L. Hui, Ann. Rev. Astron. Astrophys. 59, 247 (2021), arXiv:2101.11735 [astro-ph.CO] .
* Ureña López [2019] L. A. Ureña López, Front. Astron. Space Sci. 6, 47 (2019).
* Niemeyer [2019] J. C. Niemeyer, doi:10.1016/j.ppnp.2020.103787 (2019), arXiv:1912.07064 [astro-ph.CO] .
* Koo _et al._ [2023] H. Koo, D. Bak, I. Park, S. E. Hong, and J.-W. Lee, arXiv e-prints , arXiv:2311.03412 (2023), arXiv:2311.03412 [astro-ph.GA] .
* Boudon _et al._ [2023] A. Boudon, P. Brax, and P. Valageas, Phys. Rev. D 108, 103517 (2023), arXiv:2307.15391 [astro-ph.CO] .
* Zhong _et al._ [2023] Z. Zhong, V. Cardoso, T. Ikeda, and M. Zilhão, Phys. Rev. D 108, 084051 (2023), arXiv:2307.02548 [gr-qc] .
* Cardoso _et al._ [2022b] V. Cardoso, T. Ikeda, R. Vicente, and M. Zilhão, Phys. Rev. D 106, L121302 (2022b), arXiv:2207.09469 [gr-qc] .
* Cardoso _et al._ [2022c] V. Cardoso, T. Ikeda, Z. Zhong, and M. Zilhão, Phys. Rev. D 106, 044030 (2022c), arXiv:2206.00021 [gr-qc] .
* Vicente and Cardoso [2022] R. Vicente and V. Cardoso, Phys. Rev. D 105, 083008 (2022), arXiv:2201.08854 [gr-qc] .
* Annulli _et al._ [2020a] L. Annulli, V. Cardoso, and R. Vicente, Phys. Rev. D 102, 063022 (2020a), arXiv:2009.00012 [gr-qc] .
* Annulli _et al._ [2020b] L. Annulli, V. Cardoso, and R. Vicente, Phys. Lett. B 811, 135944 (2020b), arXiv:2007.03700 [astro-ph.HE] .
* Brax _et al._ [2020] P. Brax, J. A. R. Cembranos, and P. Valageas, Phys. Rev. D 101, 023521 (2020), arXiv:1909.02614 [astro-ph.CO] .
* Cao and Tang [2023] Y. Cao and Y. Tang, (2023), arXiv:2307.05181 [gr-qc] .
* Traykova _et al._ [2023] D. Traykova, R. Vicente, K. Clough, T. Helfer, E. Berti, P. G. Ferreira, and L. Hui, (2023), arXiv:2305.10492 [gr-qc] .
* Chia _et al._ [2023a] H. S. Chia, C. Doorman, A. Wernersson, T. Hinderer, and S. Nissanke, JCAP 04, 018, arXiv:2212.11948 [gr-qc] .
* Baumann _et al._ [2019] D. Baumann, H. S. Chia, and R. A. Porto, Phys. Rev. D 99, 044001 (2019), arXiv:1804.03208 [gr-qc] .
* Baumann _et al._ [2020] D. Baumann, H. S. Chia, R. A. Porto, and J. Stout, Phys. Rev. D 101, 083019 (2020), arXiv:1912.04932 [gr-qc] .
* Ikeda _et al._ [2021] T. Ikeda, L. Bernard, V. Cardoso, and M. Zilhão, Phys. Rev. D 103, 024020 (2021), arXiv:2010.00008 [gr-qc] .
* Cardoso _et al._ [2020] V. Cardoso, F. Duque, and T. Ikeda, Phys. Rev. D 101, 064054 (2020), arXiv:2001.01729 [gr-qc] .
* Hannuksela _et al._ [2019] O. A. Hannuksela, K. W. K. Wong, R. Brito, E. Berti, and T. G. F. Li, Nature Astron. 3, 447 (2019), arXiv:1804.09659 [astro-ph.HE] .
* Baumann _et al._ [2022a] D. Baumann, G. Bertone, J. Stout, and G. M. Tomaselli, Phys. Rev. Lett. 128, 221102 (2022a), arXiv:2206.01212 [gr-qc] .
* Zhang and Yang [2020] J. Zhang and H. Yang, Phys. Rev. D 101, 043020 (2020), arXiv:1907.13582 [gr-qc] .
* Kumar Poddar _et al._ [2020] T. Kumar Poddar, S. Mohanty, and S. Jana, Phys. Rev. D 101, 083007 (2020), arXiv:1906.00666 [hep-ph] .
* Wong [2019] L. K. Wong, Phys. Rev. D 100, 044051 (2019), arXiv:1905.08543 [hep-th] .
* Wong [2020] L. K. Wong, Phys. Rev. D 101, 124049 (2020), arXiv:2004.03570 [hep-th] .
* Kavic _et al._ [2020] M. Kavic, S. L. Liebling, M. Lippert, and J. H. Simonetti, JCAP 08, 005, arXiv:1910.06977 [astro-ph.HE] .
* Chia _et al._ [2023b] H. S. Chia, T. D. P. Edwards, D. Wadekar, A. Zimmerman, S. Olsen, J. Roulet, T. Venumadhav, B. Zackay, and M. Zaldarriaga, (2023b), arXiv:2306.00050 [gr-qc] .
* East [2018] W. E. East, Phys. Rev. Lett. 121, 131104 (2018), arXiv:1807.00043 [gr-qc] .
* Siemonsen and East [2020] N. Siemonsen and W. E. East, Phys. Rev. D 101, 024019 (2020), arXiv:1910.09476 [gr-qc] .
* Tsukada _et al._ [2021] L. Tsukada, R. Brito, W. E. East, and N. Siemonsen, Phys. Rev. D 103, 083005 (2021), arXiv:2011.06995 [astro-ph.HE] .
* Baumann _et al._ [2022b] D. Baumann, G. Bertone, J. Stout, and G. M. Tomaselli, Phys. Rev. D 105, 115036 (2022b), arXiv:2112.14777 [gr-qc] .
* Siemonsen _et al._ [2023] N. Siemonsen, T. May, and W. E. East, Phys. Rev. D 107, 104003 (2023), arXiv:2211.03845 [gr-qc] .
* Tomaselli _et al._ [2023] G. M. Tomaselli, T. F. M. Spieksma, and G. Bertone, JCAP 07, 070, arXiv:2305.15460 [gr-qc] .
* Bamber _et al._ [2023] J. Bamber, J. C. Aurrekoetxea, K. Clough, and P. G. Ferreira, Phys. Rev. D 107, 024035 (2023), arXiv:2210.09254 [gr-qc] .
* Ficarra [2021] G. Ficarra, in _55th Rencontres de Moriond on Gravitation_ (2021) arXiv:2105.05918 [gr-qc] .
* Zhang _et al._ [2023] Y.-P. Zhang, M. Gracia-Linares, P. Laguna, D. Shoemaker, and Y.-X. Liu, Phys. Rev. D 107, 044039 (2023), arXiv:2209.11814 [gr-qc] .
* Choudhary _et al._ [2021] S. Choudhary, N. Sanchis-Gual, A. Gupta, J. C. Degollado, S. Bose, and J. A. Font, Phys. Rev. D 103, 044032 (2021), arXiv:2010.00935 [gr-qc] .
* Yang _et al._ [2018] Q. Yang, L.-W. Ji, B. Hu, Z.-J. Cao, and R.-G. Cai, Res. Astron. Astrophys. 18, 065 (2018), arXiv:1706.00678 [gr-qc] .
* mov [2023] The effect of wave dark matter on equal mass black hole mergers (2023), https://youtu.be/i4gRWCL-ZIk.
* Santos and Herdeiro [2020] N. M. Santos and C. A. R. Herdeiro, Int. J. Mod. Phys. D 29, 2041013 (2020), arXiv:2005.07201 [gr-qc] .
* Vieira _et al._ [2014] H. S. Vieira, V. B. Bezerra, and C. R. Muniz, Annals Phys. 350, 14 (2014), arXiv:1401.5397 [gr-qc] .
* Hortacsu [2012] M. Hortacsu, 23 (2012), arXiv:1101.0471 [math-ph] .
* Andrade _et al._ [2021] T. Andrade _et al._ , J. Open Source Softw. 6, 3703 (2021), arXiv:2201.03458 [gr-qc] .
* Aurrekoetxea _et al._ [2023a] J. C. Aurrekoetxea, J. Bamber, S. E. Brady, K. Clough, T. Helfer, J. Marsden, D. Traykova, and Z. Wang, (2023a), arXiv:2308.08299 [gr-qc] .
* Arnowitt _et al._ [2008] R. L. Arnowitt, S. Deser, and C. W. Misner, Gen. Rel. Grav. 40, 1997 (2008), arXiv:gr-qc/0405109 .
* Aurrekoetxea _et al._ [2023b] J. C. Aurrekoetxea, K. Clough, and E. A. Lim, Class. Quant. Grav. 40, 075003 (2023b), arXiv:2207.03125 [gr-qc] .
* Clough _et al._ [2015] K. Clough, P. Figueras, H. Finkel, M. Kunesch, E. A. Lim, and S. Tunyasuvunakool, Class. Quant. Grav. 32, 245011 (2015), arXiv:1503.03436 [gr-qc] .
* Radia _et al._ [2022] M. Radia, U. Sperhake, A. Drew, K. Clough, P. Figueras, E. A. Lim, J. L. Ripley, J. C. Aurrekoetxea, T. França, and T. Helfer, Class. Quant. Grav. 39, 135006 (2022), arXiv:2112.10567 [gr-qc] .
* Alic _et al._ [2012] D. Alic, C. Bona-Casas, C. Bona, L. Rezzolla, and C. Palenzuela, Phys. Rev. D 85, 064040 (2012), arXiv:1106.2254 [gr-qc] .
* Bona _et al._ [1995] C. Bona, J. Masso, E. Seidel, and J. Stela, Phys. Rev. Lett. 75, 600 (1995), arXiv:gr-qc/9412071 .
* Baker _et al._ [2006] J. G. Baker, J. Centrella, D.-I. Choi, M. Koppitz, and J. van Meter, Phys. Rev. Lett. 96, 111102 (2006), arXiv:gr-qc/0511103 .
* Campanelli _et al._ [2006] M. Campanelli, C. O. Lousto, P. Marronetti, and Y. Zlochower, Phys. Rev. Lett. 96, 111101 (2006), arXiv:gr-qc/0511048 .
* Hannam _et al._ [2007] M. Hannam, S. Husa, B. Bruegmann, J. A. Gonzalez, U. Sperhake, and N. O. Murchadha, J. Phys. Conf. Ser. 66, 012047 (2007), arXiv:gr-qc/0612097 .
* van Meter _et al._ [2006] J. R. van Meter, J. G. Baker, M. Koppitz, and D.-I. Choi, Phys. Rev. D 73, 124011 (2006), arXiv:gr-qc/0605030 .
* Baker _et al._ [2002] J. G. Baker, M. Campanelli, and C. O. Lousto, Phys. Rev. D 65, 044001 (2002), arXiv:gr-qc/0104063 .
* Clough [2021] K. Clough, Class. Quant. Grav. 38, 167001 (2021), arXiv:2104.13420 [gr-qc] .
* Croft [2023] R. Croft, Class. Quant. Grav. 40, 105007 (2023), arXiv:2203.13845 [gr-qc] .
* Traykova _et al._ [2021] D. Traykova, K. Clough, T. Helfer, E. Berti, P. G. Ferreira, and L. Hui, Phys. Rev. D 104, 103014 (2021), arXiv:2106.08280 [gr-qc] .
* Yuan _et al._ [2021] G.-W. Yuan, Z.-Q. Xia, C. Tang, Y. Zhao, Y.-F. Cai, Y. Chen, J. Shu, and Q. Yuan, JCAP 03, 018, arXiv:2008.13662 [astro-ph.HE] .
## Numerical implementation, diagnostics and convergence tests
We evolve the gravity sector solving the Einstein field equations for a line
element that we decompose in the usual ADM form [111]
$ds^{2}=-\alpha^{2}\mathrm{d}t^{2}+\gamma_{ij}(\mathrm{d}x^{i}+\beta^{i}\mathrm{d}t)(\mathrm{d}x^{j}+\beta^{j}\mathrm{d}t),$
(14)
where $\gamma_{ij}$ is the three-dimensional spatial metric that we decompose
into a conformal factor and a conformally related metric
$\gamma_{ij}=\bar{\gamma}_{ij}/\chi$. The lapse and shift gauge functions
$\alpha$ and $\beta^{i}$ determine the choice of spatial hyperslicings and
their coordinates, which in numerical relativity are dynamically determined.
The extrinsic curvature tensor
$K_{ij}=(2D_{(i}\beta_{j)}-\partial_{t}\gamma_{ij})/(2\alpha)$ is decomposed
into a trace $K$ and a traceless part $A_{ij}$, i.e.
$K_{ij}=A_{ij}+(1/3)K\gamma_{ij}$. We evolve using the CCZ4 formulation [115]
and the moving puncture gauge [116, 117, 118, 120] with grchombo [109, 114,
113].
We solve the Hamiltonian constraint using Bowen-York initial data using the
CTTK hybrid method [112] (see table 1 for the binary parameters). In the
homogeneous limit, this reduces to choosing the trace of the extrinsic
curvature tensor $K^{2}=24\pi G\rho$ and solving for a correction of the
conformal factor $\chi$ sourced by the traceless components $A_{ij}$. We
choose $K<0$ so that the scalar field is initially decaying, which yields the
more conservative impact on the merger. The value of $K$ is small, and the effect of
either choice on the overall trends observed is not significant.
$d/M$ | $12.21358$ | $|p_{x}|/M$ | $5.10846\times 10^{-4}$
$M_{\mathrm{BH}}/M$ | $0.48847892320123$ | $|p_{y}|/M$ | $8.41746\times 10^{-2}$
$T/M$ | $271.34$ | $|p_{z}|/M$ | $0$
Table 1: Black hole binary initial parameters. The black holes are initially
aligned along the $x$ axis in the $z=0$ plane, with initial momenta
$\vec{p}_{1}=(-|p_{x}|,+|p_{y}|,0)$ for the BH with initial position
$\vec{r}_{1}=(d/2,0,0)$ and $\vec{p}_{2}=(+|p_{x}|,-|p_{y}|,0)$ for the one at
$\vec{r}_{2}=(-d/2,0,0)$.
We use a simulation box length $L=512M$ and $9$ levels of mesh refinement (See
Fig. 5 for convergence tests). We impose reflecting boundary conditions at
$z=0$, while for the other boundaries we impose either zeroth order
extrapolating boundary conditions (matching the values in the exterior ghost
cells to the radially-directed values just inside the simulation grid) or
Sommerfeld boundary conditions.
Figure 5: Convergence testing of the gravitational-wave phase evolution for
the largest dephasing case: $(\mu,\phi_{0})=(4300,2.5)\times 10^{-3}$, so that
the initial density is $\rho_{0}\approx 10^{-8}M^{-2}$. We use three different
resolutions with $N^{3}$ number of grid points on the coarsest level. The
decrease in the error is consistent with $1^{\mathrm{st}}$ order.
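A standard way to extract the quoted convergence order from runs at three resolutions is a Richardson-style ratio; the sketch below assumes a fixed refinement factor of two between resolutions (an illustrative assumption, the actual resolutions are set by the $N^{3}$ grids mentioned in the caption).

```python
import numpy as np

def convergence_order(q_coarse, q_mid, q_fine, refinement=2.0):
    """Richardson-style estimate of the order from one quantity at three resolutions,
    assuming a fixed refinement factor between them (2 is an assumption here)."""
    return np.log(abs(q_coarse - q_mid) / abs(q_mid - q_fine)) / np.log(refinement)

# synthetic first-order data q(h) = q_exact + C*h, with h halved each time
q_exact, C = 1.0, 0.3
print(convergence_order(q_exact + C * 0.4, q_exact + C * 0.2, q_exact + C * 0.1))  # ~1.0
```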
Following the approach of [122, 123], the energy momentum tensor is decomposed
into the energy density, momentum density and stress-energy density measured
by normal observers as
$T^{\mu\nu}=\rho n^{\mu}n^{\nu}+S^{\mu}n^{\nu}+S^{\nu}n^{\mu}+S^{\mu\nu}.$
(15)
The energy and angular momentum currents are related to the time-like, angular
and radial directions $J^{\mu}_{t}=T^{\mu}_{\nu}\xi^{\nu}_{t}$,
$J^{\mu}_{\phi}=T^{\mu}_{\nu}\xi^{\nu}_{\phi}$ and
$J^{\mu}_{r}=T^{\mu}_{\nu}\xi^{\nu}_{r}$, where $\xi_{t}^{\nu}=(1,0,0,0)$,
$\xi_{\phi}^{\nu}=(0,-y,x,0)$ and $\xi_{r}^{\nu}=-(0,x,y,z)/r$. The respective
charges $Q$ and fluxes $F$ in terms of ADM quantities are then
$\displaystyle Q_{t}=-\alpha\rho+\beta_{k}S^{k}\,,$ (16)
$\displaystyle F_{t}=N_{i}\left(\beta^{i}(\alpha\rho-\beta^{j}S_{j})+\alpha(\beta^{k}S^{i}_{k}-\alpha S^{i})\right)\,,$ (17)
$\displaystyle Q_{\{\phi,r\}}=S_{i}\xi^{i}_{\{\phi,r\}}\,,$ (18)
$\displaystyle F_{\{\phi,r\}}=-N_{i}\beta^{i}S_{j}\xi^{j}_{\{\phi,r\}}+\alpha N_{i}S^{i}_{j}\xi^{j}_{\{\phi,r\}}\,,$ (19)
where $s_{i}=(x,y,z)/r$ and $N_{i}=s_{i}/\sqrt{\gamma^{jk}s_{j}s_{k}}$ is the
normalised (outward) radial unit covector. The source terms are
$\displaystyle S_{t}=-\rho\partial_{t}\alpha+S_{i}\partial_{t}\beta^{i}+\frac{\alpha}{2}S^{ij}\partial_{t}\gamma_{ij}\,,$ (20)
$\displaystyle S_{\{\phi,r\}}=\alpha S^{\mu}_{\nu}\partial_{\mu}\xi^{\nu}_{\{\phi,r\}}+\alpha S^{\mu}_{\nu}\,{}^{(3)}\Gamma^{\nu}_{\mu\sigma}\xi^{\sigma}_{\{\phi,r\}}-S_{\nu}\beta^{i}\partial_{i}\xi^{\nu}_{\{\phi,r\}}+S_{\nu}\xi^{\mu}_{\{\phi,r\}}\partial_{\mu}\beta^{\nu}-\rho\xi^{\mu}_{\{\phi,r\}}\partial_{\mu}\alpha\,,$ (21)
where $\partial_{t}\gamma_{ij}=-2\alpha K_{ij}+D_{i}\beta_{j}+D_{j}\beta_{i}$,
and both $\partial_{t}\alpha$ and $\partial_{t}\beta^{i}$ are given by our
moving puncture gauge conditions. For $\xi_{\phi}^{\nu}$, the quantities
$\partial_{\mu}\xi^{\nu}$ are all zero except $\partial_{x}\xi^{y}=1$ and
$\partial_{y}\xi^{x}=-1$.
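A direct transcription of the charge and flux expressions (16)-(19) into code may be useful; the sketch below assumes a flat spatial metric for raising and lowering indices (an illustrative simplification, the production diagnostics use the full $\gamma_{ij}$).

```python
import numpy as np

def charge_and_flux(rho, S_i, S_ij, alpha, beta, N, xi):
    """Pointwise charges and fluxes, transcribing Eqs. (16)-(19) above.
    Indices are raised/lowered with a flat spatial metric here for simplicity;
    rho: energy density, S_i: momentum density, S_ij: stress tensor,
    alpha/beta: lapse and shift, N: outward unit normal, xi: spatial vector xi^i."""
    Q_t = -alpha * rho + np.dot(beta, S_i)                                    # Eq. (16)
    F_t = np.dot(N, beta * (alpha * rho - np.dot(beta, S_i))
                    + alpha * (S_ij @ beta - alpha * S_i))                    # Eq. (17)
    Q_xi = np.dot(S_i, xi)                                                    # Eq. (18)
    F_xi = -np.dot(N, beta) * np.dot(S_i, xi) + alpha * np.dot(N, S_ij @ xi)  # Eq. (19)
    return Q_t, F_t, Q_xi, F_xi

# illustrative call with toy values (not simulation data)
print(charge_and_flux(rho=1e-9,
                      S_i=np.array([1e-10, 0.0, 0.0]),
                      S_ij=np.eye(3) * 1e-10,
                      alpha=0.9,
                      beta=np.array([0.01, 0.0, 0.0]),
                      N=np.array([1.0, 0.0, 0.0]),
                      xi=np.array([0.0, 1.0, 0.0])))
```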
We also track the flux through inner surfaces that move together with the
black holes, which introduces additional advection terms to the flux
$F^{\mathrm{BH}}=\alpha
N_{i}^{\mathrm{BH}}J^{i}-N_{i}^{\mathrm{BH}}\beta^{i}\left(Q-S/2\right)\,,$
(22)
where $N_{i}^{\mathrm{BH}}$ is the outward unit normal of the inner surface, defined analogously to $N_{i}$ above.
# Communication and Localization with Extremely Large Lens Antenna Array
Jie Yang, Yong Zeng, Shi Jin, Chao-Kai Wen, Pingping Xu Jie Yang, Yong Zeng,
Shi Jin, and Pingping Xu are with the National Mobile Communications Research
Laboratory, Southeast University, Nanjing, China (e-mail:
{yangjie;yong_zeng;jinshi;xpp}@seu.edu.cn). Chao-Kai Wen is with the Institute
of Communications Engineering, National Sun Yat-sen University, Kaohsiung,
804, Taiwan (e-mail: chaokai.wen@mail.nsysu.edu.tw).
###### Abstract
Achieving high-rate communication with accurate localization and wireless
environment sensing has emerged as an important trend of beyond-fifth and
sixth generation cellular systems. Extension of the antenna array to an
extremely large scale is a potential technology for achieving such goals.
However, operating such a super massive number of antennas significantly
increases the computational complexity of the system. Motivated by the inherent advantages
of lens antenna arrays in reducing system complexity, we consider
communication and localization problems with an extremely large lens antenna
array, which we call “ExLens”. Since the radiative near-field property emerges
in this setting, we derive the closed-form array response of the lens antenna
array under the spherical wave model, which includes the array response
obtained on the basis of the uniform plane wave as a special case. Our
derivation reveals a window effect in the energy focusing property of ExLens,
which indicates that ExLens has great potential in position sensing and
multi-user communication. We also propose an effective method for location and
channel parameter estimation, which achieves localization performance close to the
Cramér-Rao lower bound. Finally, we examine the multi-user communication
performance of ExLens that serves coexisting near-field and far-field users.
Numerical results demonstrate the effectiveness of the proposed channel
estimation method and show that ExLens with a minimum mean square error
receiver achieves significant spectral efficiency gains and complexity-and-
cost reductions compared with a uniform linear array.
###### Index Terms:
Array response, extremely large lens antenna array, localization, millimeter-
wave communications, spherical wave-front.
## I Introduction
In comparison with previous generations, the fifth generation (5G) mobile
network is a major breakthrough because of the introduction of massive
multiple-input multiple-output (MIMO), millimeter-wave (mmWave), and
ultra-dense networks [1, 2]. However, realizing the full vision of supporting Internet of
Everything services to connect billions of people and machines remains a
challenge for 5G. Thus, research communities worldwide have implemented
initiatives to conceive the next-generation (e.g., the sixth generation (6G))
mobile communication systems [3, 4, 5, 6, 7]. The requirements of various
applications, such as extended reality, autonomous systems, pervasive health
monitoring, and brain computer interactions, are driving the evolution of 6G
towards a more intelligent and software reconfigurable functionality paradigm
that can provide ubiquitous communications and also sense, control, and even
optimize wireless environments.
To fulfill the visions of 6G for high throughput, massive connectivity, ultra-
reliability, and ultra-low latency, on the one hand, mmWave and Tera-Hertz
(THz) frequencies will be exploited further, and multiple frequency bands
(e.g., microwave/mmWave/THz frequencies) must be integrated to provide
seamless connectivity [8]; on the other hand, antenna deployments will evolve
towards larger apertures and greater numbers of elements, and the extremely
large aperture array has been proposed to further boost spatial diversity
[9, 10, 11]. Moreover, intelligent reflecting surfaces or
reconfigurable intelligent surfaces, artificial intelligence, and integrated
terrestrial-aerial-satellite networks are regarded as promising technologies
towards 6G[12, 13, 14, 15, 16]. However, many open problems need to be solved
to reap the full benefits of the aforementioned techniques. In particular,
when the antenna dimension continues to increase, the range of the radiative
near-field of the antenna array expands, and the user equipment (UE) and
significant scatterers are likely to be located in the near-field of the
array. Consequently, the prominent uniform plane wave assumption will no
longer hold for extremely large antenna arrays [17]. Moreover, the use of
thousands or more active antenna elements will generate prohibitive cost in
terms of hardware implementation, energy consumption, and signal processing
complexity [18].
The radiation field of an antenna array is divided into the near-field region
and the far-field region via the Rayleigh distance [19, 20], which is given as
$R={2D^{2}}/{\lambda},$ where $D$ is the maximum dimension of the antenna
array, and $\lambda$ is the wavelength. When the distance between the UE (or
scatterer) and the base station (BS) is smaller than the Rayleigh distance,
the UE (or scatterer) is located in the near-field region, where the spherical
wave-front over the antenna array is observed. For example, a uniform linear
array (ULA) of 1 meter (m) that operates at $30$ GHz corresponds to a Rayleigh
distance of approximately $200\,$m, invalidating the uniform plane wave-front
model usually assumed in prior research on wireless communications. A few works
have considered the near-field property for modeling and analyzing massive
MIMO channels, proposing the ULA response vector [21] and analyzing the
channel estimation performance [11] under the spherical wave assumption. The
spherical wave-front is also proven to provide an underlying generic
parametric model for estimating the position of UE and scatterers [22, 23].
Several works have started to investigate the localization potential of large
advanced antenna arrays to realize the vision of multi-purpose services for 6G
(joint communication, control, localization, and sensing) [24, 25, 26, 27, 28,
29] in addition to communication capabilities. Concentrated and distributed
large antenna arrays are compared in [24] in terms of localization
performance. The theoretical uplink localization and synchronization
performance is analyzed in [25] for ULA illuminated by spherical waves.
Parameter-based localization methods are developed in [26, 27, 28] for lens
antenna arrays in the far-field, while [29] considers direct localization by
utilizing the near-field property but achieves only coarse localization
accuracy.
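As a quick numerical illustration of the Rayleigh-distance example above, the following minimal Python sketch evaluates $R=2D^{2}/\lambda$ with the constants quoted in the text:

```python
# Rayleigh distance R = 2 D^2 / lambda for the example quoted above:
# a 1 m array operating at 30 GHz.
c = 3e8              # speed of light (m/s)
f = 30e9             # carrier frequency (Hz)
D = 1.0              # maximum array dimension (m)
lam = c / f          # wavelength: 0.01 m
R = 2 * D**2 / lam   # Rayleigh distance
print(f"lambda = {lam} m, R = {R:.0f} m")  # -> R = 200 m
```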
An effective solution to significantly reduce the system complexity and
implementation cost caused by the large numbers of antennas and UEs is to
partition the antenna array into a few disjoint subarrays [10, 30]. In this
work, we propose an alternative solution by using the energy focusing property
of an extremely large lens antenna array denoted as “ExLens”, which can fully
utilize the aperture offered by the large antenna arrays. Recent studies have
confirmed that the signal processing complexity and radio frequency (RF) chain
cost could be significantly reduced without notable performance degradation
for mmWave and massive MIMO systems by utilizing lens antenna arrays [31, 32,
33, 34, 35]. Electromagnetic (EM) lenses can provide variable phase shifting
for EM rays at different points on the lens aperture to achieve angle-
dependent energy focusing property. Therefore, lens antenna arrays can
transform the signal from the antenna space to the beamspace (the latter has
lower dimensions) to reduce the RF chains significantly. In [34, 35], the
array responses of lens antenna arrays have been derived in closed-form as a
“sinc” function of the angle of arrival (AOA)/angle of departure (AOD) of the
impinging/departing signals. However, existing research on lens antenna
arrays is limited to the far-field assumption. To the best of the authors’
knowledge, the array response of an ExLens for the general spherical wave-
front has not been reported in prior works, let alone a study on multi-user
communication with an ExLens when near-field and far-field UEs coexist.
In this study, we explore the property of ExLens illuminated by spherical
waves, including the capabilities of localization and multi-user
communication, on the basis of the inherent localization information carried
by spherical waves and the great potentials of lens antenna arrays in reducing
system complexity. In summary, we derive a closed-form array response of an
ExLens, based on which we develop an effective method to obtain location
parameters together with channel gains. On the one hand, we can realize
localization with the estimated location parameters. On the other hand, we can
design data transmission with the reconstructed channel. Our main
contributions are presented as follows:
* •
Array Response: We first derive the closed-form expression for the array
response of ExLens by considering the general spherical wave-front for two
different EM lens designs, and then reveal that the obtained array response
(derived based on the spherical wave assumption) includes the “sinc”-type
array response [34] (derived based on the uniform plane wave assumption) as a
special case. Next, we analyze the differences in the energy focusing
characteristics of ExLens illuminated by the spherical and plane wave-fronts.
The window focusing property in the near-field of ExLens shows its great
potential for position sensing and multi-user communication. The approximation
error of the derived closed-form array response is verified to be negligible.
* •
Position Sensing: We analyze the uplink localization ability of an ExLens
equipped at the BS. We first study the theoretical localization performance
from a Fisher information perspective and confirm that the localization
performance improves as the aperture of the lens antenna array increases. By
exploring the energy focusing window of ExLens, we propose an effective
parameterized estimation method to obtain location parameters together with
channel gains. Thus, localization can be performed by directly reusing the
communication signals. Comprehensive simulations show that the localization
performance of the proposed method is close to the Cramér-Rao lower bound
(CRLB) and the channel can also be effectively reconstructed.
* •
Multi-user Communication: We investigate the multi-user communication
performance of ExLens with coexisting near-field and far-field UE and
scatterers. Power-based antenna selection is applied to ExLens to reduce the
number of RF chains, together with the maximal ratio combining (MRC)- and
minimum mean square error (MMSE)-based combining schemes to maximize the sum-
rate. The multi-user communication performance of the ExLens with perfect and
estimated channel state information (CSI) is compared. Simulation results
verify the effectiveness of the proposed channel estimation method and show
that the proposed ExLens with an MMSE receiver achieves significant spectral
efficiency gains and complexity-and-cost reductions compared with the
benchmark ULA schemes, when serving coexisting near-field and far-field UE.
The rest of this paper is organized as follows: In Section II, we introduce an
ExLens mmWave system model and derive the closed-form expression of ExLens
array response. The property of ExLens array response is explained in Section
III. In Section IV, we explore the localization capability of ExLens and
propose an effective method to obtain location parameters together with
channel gains. In Section V, we analyze the multi-user communication
performance of ExLens. Our simulation results are presented in Section VI. We
conclude the paper in Section VII.
Notations—In this paper, upper- and lower-case bold letters denote matrices
and vectors, respectively. For a matrix $\mathbf{A}$, $\mathbf{A}^{-1}$,
$\mathbf{A}^{\text{T}}$, and $\mathbf{A}^{\text{H}}$ represent inverse,
transpose, and Hermitian operators, respectively.
$\mbox{blkdiag}(\mathbf{A}_{1},\ldots,\mathbf{A}_{k})$ denotes a block-
diagonal matrix constructed by $\mathbf{A}_{1},\ldots,\mathbf{A}_{k}$. For a
vector $\mathbf{a}$, the L2-norm is signified by $\|\mathbf{a}\|$. For a
complex value $c$, the modulus is represented by $|c|$ and the real part is
denoted by $\mathcal{R}\\{c\\}$. For a real number $a$, $\lfloor a\rfloor$
denotes the largest integer that is not greater than $a$. sinc$(\cdot)$ is the
“sinc” function defined as sinc$(x)=\sin(\pi x)/(\pi x)$.
$\mathbb{E}\\{\cdot\\}$ indicates the statistical expectation.
## II System Model
We consider a BS equipped with an ExLens in the two-dimensional coordinate
system (Fig. 1(a)). The EM lens is placed on the y-axis with physical length
$D_{y}$ and is centered at the origin. The antenna elements are placed on the
focal arc, which is defined as a semi-circle around the center of the EM lens
with radius $F$. As the aperture of an antenna array further increases, UE and
significant scatterers are likely to be located in the near-field of the
array, where the uniform plane wave-front assumption no longer holds.
Therefore, we consider the more general spherical wave-front, which calls for
a new phase-shift design of the EM lens and yields stronger energy focusing on
the lens antenna array compared with the plane wave-front [34].
Figure 1: Different antenna arrays illuminated by a spherical wave-front: (a) ExLens; (b) ULA.
We first investigate the receive array response by assuming that ExLens is
illuminated by a spherical wave-front emitted from a UE located at
$\mathbf{u}=[-d\cos\phi,d\sin\phi]$, where $d$ is the distance between the UE
and the center of the EM lens, and $\phi\in(-{\pi}/{2},{\pi}/{2})$ is the
angle of the UE relative to the x-axis (Fig. 1(a)). 111 We assume that the
signal source is in front of the lens antenna array (i.e., located on the
opposite side of the EM lens from the array elements). This assumption
practically holds because BSs apply the sectorization technique: each antenna
array serves one sector covering a range of $60^{\circ}$ to $120^{\circ}$ in
practice, and multiple lens antenna arrays can be combined to cover the full
$360^{\circ}$. For simplicity, we assume that the UE is equipped with an
omni-directional antenna and is regarded as a point source. The signal
transmitted by the UE is assumed to be $1$, and the signal arriving at any
point $\mathbf{p}=[0,y]$ on the EM lens aperture is given by [21, 24]
$s(\mathbf{u},\mathbf{p})=\eta(\mathbf{u},\mathbf{p})e^{-jk_{0}\parallel\mathbf{u}-\mathbf{p}\parallel},$
(1)
where $k_{0}={2\pi}/{\lambda}$ is the wave number that corresponds to the
signal wavelength $\lambda$, and
$\eta(\mathbf{u},\mathbf{p})={\lambda}/{(4\pi\\!\parallel\\!\mathbf{u}-\mathbf{p}\\!\parallel})$
corresponds to the free space path loss from point $\mathbf{u}$ to point
$\mathbf{p}$. We define $\theta\in(-{\pi}/{2},{\pi}/{2})$, where $\theta$ is
positive below the x-axis and negative above the x-axis (Fig. 1(a)). The
received signal $r(\theta,d,\phi)$ at any point
$\mathbf{b}=[F\cos\theta,-F\sin\theta]$ on the focal arc 222The case in which
the center of the EM lens and the focal arc do not coincide is left for future
investigation. can be expressed as
$r(\theta,d,\phi)=\int_{-D_{y}/2}^{D_{y}/2}{s(\mathbf{u},\mathbf{p})\kappa(\mathbf{p},\mathbf{b})e^{-j\varphi(\mathbf{p},\mathbf{b})}}\,dy=\int_{-D_{y}/2}^{D_{y}/2}{\eta(\mathbf{u},\mathbf{p})e^{-jk_{0}\parallel\mathbf{u}-\mathbf{p}\parallel}\kappa(\mathbf{p},\mathbf{b})e^{-j\varphi(\mathbf{p},\mathbf{b})}}\,dy,$
(2)
where $\mathbf{p}$ is a function of $y$, $\mathbf{u}$ is a function of
$(d,\phi)$, and $\mathbf{b}$ is a function of $\theta$;
$\kappa(\mathbf{p},\mathbf{b})={\lambda}/{(4\pi\\!\parallel\\!\mathbf{p}-\mathbf{b}\\!\parallel})$
accounts for the free-space path loss from point $\mathbf{p}$ on the EM lens
to point $\mathbf{b}$ on the focal arc;
$\varphi(\mathbf{p},\mathbf{b})=\psi(\mathbf{p})+k_{0}||\mathbf{p}-\mathbf{b}||$,
and $\psi(\mathbf{p})$ is the fixed phase shift determined by the EM lens
design. Therefore, $\varphi(\mathbf{p},\mathbf{b})$ is the total phase shift
of the signal by the EM lens and the propagation delay between EM lens and
focal arc. Eq. (2) follows the principle of linear superposition of signals.
Figure 2: Two design approaches for the EM lens. (a) Design 1: an incident plane wave-front perpendicular to the lens surface converges at the focal point, where ${\psi}(\mathbf{p})={\phi_{0}}-{k_{0}}\lVert\mathbf{p}-{\mathbf{b}_{0}}\rVert$. (b) Design 2: an incident spherical wave-front from the left focal point converges at the right focal point, where ${\psi}(\mathbf{p})={\phi_{0}}-{k_{0}}(\lVert{\mathbf{c}_{0}}-\mathbf{p}\rVert+\lVert\mathbf{p}-\mathbf{b}_{0}\rVert)$.
We first review the fundamental principle of operation for EM lenses: the EM
lenses are similar to optical lenses, which can alter the propagation
directions of the EM rays to achieve energy focusing or beam collimation. The
function of EM lens can be effectively realized by appropriately designing
$\psi(\mathbf{p})$ in $\varphi(\mathbf{p},\mathbf{b})$ in (2). In this study,
we consider two different EM lens designs (Fig. 2). In Design 1, where
incident plane wave-front perpendicular to the lens surface converges at the
focal point $\mathbf{b}_{0}=[F,0]$ (Fig. 2(a)), we have
${\phi_{0}}={\psi}(\mathbf{p})+{k_{0}}\lVert\mathbf{p}-{\mathbf{b}_{0}}\rVert$,
where the constant $\phi_{0}$ is the arrived signal phase at the focal point
$\mathbf{b}_{0}$. Hence, we obtain
${\psi}(\mathbf{p})={\phi_{0}}-{k_{0}}\lVert\mathbf{p}-{\mathbf{b}_{0}}\rVert.$
(3)
The total phase shift for a signal from point $\mathbf{p}$ to point
$\mathbf{b}$ is given by
$\varphi(\mathbf{p},\mathbf{b})={\psi}(\mathbf{p})+{k_{0}}\lVert\mathbf{p}-\mathbf{b}\rVert={\phi_{0}}+{k_{0}}\left(\lVert\mathbf{p}-\mathbf{b}\rVert-\lVert\mathbf{p}-\mathbf{b}_{0}\rVert\right).$
(4)
In Design 2, where the incident spherical wave-front from point
$\mathbf{c}_{0}=[F_{0},0]$ converges at the focal point $\mathbf{b}_{0}=[F,0]$
(Fig. 2(b)), we have
${\phi_{0}}={k_{0}}\lVert{\mathbf{c}_{0}}-\mathbf{p}\rVert+{\psi}(\mathbf{p})+{k_{0}}\lVert\mathbf{p}-\mathbf{b}_{0}\rVert$.
Then, we obtain the following:
${\psi}(\mathbf{p})={\phi_{0}}-{k_{0}}(\lVert{\mathbf{c}_{0}}-\mathbf{p}\rVert+\lVert\mathbf{p}-\mathbf{b}_{0}\rVert).$
(5)
We also obtain the total phase shift as follows:
$\varphi(\mathbf{p},\mathbf{b})={\psi}(\mathbf{p})+{k_{0}}\lVert\mathbf{p}-\mathbf{b}\rVert={\phi_{0}}+{k_{0}}(\lVert\mathbf{p}-\mathbf{b}\rVert-\lVert\mathbf{p}-{\mathbf{b}_{0}}\rVert-\lVert{\mathbf{c}_{0}}-\mathbf{p}\rVert).$
(6)
Design 1 can be regarded as a special case of Design 2 with
$F_{0}\rightarrow\infty$. The EM lens is designed according to (3) (Design 1)
or (5) (Design 2) and works in the spherical wave-front scenarios (Fig. 1(a)).
Then, we define the response on point $\mathbf{b}=[F\cos\theta,-F\sin\theta]$
at the focal arc as
$a(\theta,d,\phi)={16\pi^{2}Fd}/({{\lambda^{2}{e^{-j{k_{0}d}}}}})\times r(\theta,d,\phi),$
(7)
the closed-form expression of which can be obtained in the following theorem.
###### Theorem 1.
When illuminated by a spherical wave-front, with the assumption $d,F\gg
D_{y}$, the array response of ExLens on any point
$\mathbf{b}=[F\cos\theta,-F\sin\theta]$ at the focal arc can be approximated
as
$a(\theta,d,\phi)\approx\frac{{\sqrt{\pi}}}{{2\sqrt{\alpha}}}{e^{-j\left({\frac{{\pi^{2}\beta^{2}}}{{\alpha}}-\frac{5\pi}{4}}\right)}}\left({\mathrm{erf}\left({\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)+\mathrm{erf}\left({\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)}\right),$
(8)
where
$\mathrm{erf}\left(x\right)=\frac{2}{{\sqrt{\pi}}}\int_{0}^{x}{{e^{-{t^{2}}}}\,dt},$
(9)
$\beta=(\sin\theta-\sin\phi)/{\lambda}$ and $\alpha$ for the two different
lens designs is given in Table I.
TABLE I: Parameter $\alpha$ for different lens designs.

| | Design 1 | Design 2 |
|---|---|---|
| $\alpha$ | $\dfrac{{\pi{{\sin}^{2}}\theta}}{{\lambda F}}-\dfrac{{\pi{{\cos}^{2}}\phi}}{{\lambda d}}$ | $\dfrac{{\pi{{\sin}^{2}}\theta}}{{\lambda F}}-\dfrac{{\pi{{\cos}^{2}}\phi}}{{\lambda d}}+\dfrac{\pi}{{\lambda{F_{0}}}}$ |
###### Proof.
Please refer to Appendix A. ∎
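For illustration, Theorem 1 can be evaluated directly in a few lines. The following minimal Python sketch (the helper name `array_response` and the default parameter values are illustrative choices) implements (8) for lens Design 2, relying on SciPy's `erf`, which accepts complex arguments:

```python
import numpy as np
from scipy.special import erf  # accepts complex arguments

def array_response(theta, d, phi, Dy=1.0, lam=0.01, F=5.0, F0=15.0):
    """Closed-form ExLens response of Eq. (8), lens Design 2 (Table I)."""
    alpha = (np.pi * np.sin(theta)**2 / (lam * F)
             - np.pi * np.cos(phi)**2 / (lam * d)
             + np.pi / (lam * F0))
    beta = (np.sin(theta) - np.sin(phi)) / lam
    sa = np.sqrt(alpha + 0j)              # alpha may be negative
    rot = np.exp(1j * 3 * np.pi / 4)      # the factor e^{j 3 pi / 4}
    win = (erf((alpha * Dy + 2 * np.pi * beta) / (2 * sa) * rot)
           + erf((alpha * Dy - 2 * np.pi * beta) / (2 * sa) * rot))
    return (np.sqrt(np.pi) / (2 * sa)
            * np.exp(-1j * (np.pi**2 * beta**2 / alpha - 5 * np.pi / 4))
            * win)
```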
According to Theorem 1, the parameter $\alpha$ of Design 2 reduces to that of
Design 1 when $F_{0}\to\infty$, as
$\lim\limits_{F_{0}\to\infty}{\pi}/(\lambda{F_{0}})=0$ in Table I. This again
shows that Design 1 can be regarded as a special case of Design 2 with
$F_{0}\rightarrow\infty$. The energy focusing property of ExLens is determined
by the term
${\mathrm{erf}\left({\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)+\mathrm{erf}\left({\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)}$
in (8) and is further analyzed in Section III. The parameter $\theta$ in
Theorem 1 is a continuous value, whereas $\theta$ should be sampled for a
particular antenna placement. Here, we assume that
$N_{a}=2\lfloor\tilde{D}_{y}\rfloor+1$ antenna elements are placed on the
focal arc of the EM lens [34], where $\tilde{D}_{y}={D_{y}}/{\lambda}$ denotes
the electrical length of the EM lens. For notational convenience, $N_{a}$ is
assumed to be an odd number. Let $\theta_{n}$ signify the angle of the $n$-th
antenna element relative to the x-axis, where $n\in\\{0,\pm 1,\ldots,\pm N\\}$
and $N={(N_{a}-1)}/{2}$. The deployment of the antenna elements obeys the rule
$\sin\theta_{n}={n}/{N}$. Therefore, the array response of the $n$-th antenna
element located at point $\mathbf{b}_{n}=[F\cos\theta_{n},-F\sin\theta_{n}]$
according to (8) can be expressed as
$a_{n}(d,\phi)\approx\frac{{\sqrt{\pi}}}{{2\sqrt{\alpha}}}e^{-j\left({\frac{{\pi^{2}\beta^{2}}}{{\alpha}}-\frac{5\pi}{4}}\right)}\left({\mathrm{erf}\left({\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)+\mathrm{erf}\left({\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)}\right),$
(10)
where $\sin\theta$ in $\alpha$ and $\beta$ is replaced by
$\sin\theta_{n}={n}/{N}$. With the $n$-th element given in (10), the antenna
array response vector $\mathbf{a}(d,\phi)\in\mathbb{C}^{N_{a}\times 1}$ can be
obtained accordingly.
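This sampling rule is straightforward to transcribe; the following minimal sketch builds the response vector of (10) by reusing the illustrative `array_response` helper above:

```python
import numpy as np

def steering_vector(d, phi, Dy=1.0, lam=0.01, **lens):
    """Response vector of Eq. (10): Eq. (8) sampled at sin(theta_n) = n/N."""
    Na = 2 * int(np.floor(Dy / lam)) + 1      # N_a = 2*floor(D_y/lambda) + 1
    N = (Na - 1) // 2
    theta_n = np.arcsin(np.arange(-N, N + 1) / N)
    return np.array([array_response(t, d, phi, Dy=Dy, lam=lam, **lens)
                     for t in theta_n])
```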
## III Property of ExLens Array Response
In this section, we analyze the relationship and differences between the array
responses of the lens antenna array illuminated by spherical wave-fronts
(near-field scenarios) and plane wave-fronts (far-field scenarios). Before
entering the in-depth comparison, we review the array response of the lens
antenna array illuminated by plane wave-fronts. The array response of a lens
antenna array for an element located at the focal arc with angle $\theta$ and
illuminated by a uniform plane wave with AOA $\phi$ is given by [34]
$a(\theta,\phi)={D_{y}}\mathrm{sinc}\left({\tilde{D}_{y}\sin\theta-\tilde{D}_{y}\sin\phi}\right).$
(11)
In the far-field scenarios, the array response follows the “sinc” function as
given in (11). For any incident/departure signal from/to a particular
direction $\phi$, only those antennas located near the focal point would
receive/steer significant power. Notably, the focal point reflects the
information of $\phi$, whereas the information of $d$ cannot be reflected from
(11). Furthermore, the angle resolution of the lens antenna array is
determined by the width of the main lobe of the “sinc” function, which is
$2/\tilde{D}_{y}$. When $\tilde{D}_{y}$ increases, the main lobe becomes
narrower such that other multi-paths can be resolved in the spatial domain.
### III-A Generality Analysis
We reveal the relationship between the array responses of the lens antenna
array illuminated by the spherical wave-front in (8) and plane wave-front in
(11). The following lemma shows that (11) is a special case of the derived
ExLens array response (8).
###### Lemma 1.
When $d$ and $F_{0}$ (for Design 2) tend to infinity, the spherical wave-front
reduces to the plane wave-front, and the array response given in (8) converges
to (11) as
$\lim\limits_{d,F_{0}\to\infty}\frac{{\sqrt{\pi}}}{{2\sqrt{\alpha}}}e^{-j\left({\frac{{\pi^{2}\beta^{2}}}{{\alpha}}-\frac{5\pi}{4}}\right)}\left({\mathrm{erf}\left({\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)+\mathrm{erf}\left({\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)}\right)={D_{y}}\mathrm{sinc}\left({\tilde{D}_{y}\sin\theta-\tilde{D}_{y}\sin\phi}\right).$
(12)
###### Proof.
Refer to Appendix B. ∎
###### Remark 1.
Lemma 1 reveals that the derived array response of the ExLens illuminated by a
spherical wave-front in (8) is a more general result compared to the result in
[34], which means that the derived array response (8) is applicable to far-
field (plane wave-front) and near-field (spherical wave-front) scenarios.
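As a sanity check of Lemma 1, the following sketch evaluates both sides of (12) at the focusing peak $\theta=\phi$ for very large $d$ and $F_{0}$ (illustrative values; it again reuses the `array_response` helper from Section II):

```python
import numpy as np

# Per Lemma 1, both values should be close to Dy = 1 at the focusing peak
# theta = phi, where Eq. (11) gives Dy * sinc(0) = Dy in the far field.
Dy, lam, F = 1.0, 0.01, 5.0
theta = phi = 0.03
near = array_response(theta, d=1e9, phi=phi, Dy=Dy, lam=lam, F=F, F0=1e9)
far = Dy * np.sinc(Dy / lam * (np.sin(theta) - np.sin(phi)))  # Eq. (11)
print(abs(near), far)
```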
### III-B Window Effect
We now analyze the differences in the energy focusing characteristics of the
lens antenna array illuminated by the spherical wave-front in (8) and by the
plane wave-front in (11). Specifically, in the near-field scenarios, the array
response (8) has an evident window effect for the energy focusing property,
which does not exist in the far-field scenarios. To better understand the
window effect, we split the array response (8) into three parts as follows:
$a(\theta,d,\phi)=\underbrace{\frac{{\sqrt{\pi}}}{{2\sqrt{\alpha}}}}_{\left({\rm{a}}\right)}\underbrace{e^{-j\left({\frac{{\pi^{2}\beta^{2}}}{{\alpha}}-\frac{5\pi}{4}}\right)}}_{\left({\rm{b}}\right)}\underbrace{\left({\mathrm{erf}\left({\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)+\mathrm{erf}\left({\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)}\right)}_{\left({\rm{c}}\right)},$
(13)
where part $(a)$ is the amplitude, part $(b)$ is the phase, and part $(c)$ is
the window effect for the energy focusing property in the near-field
scenarios. We denote a “window” function as
${w(\theta,d,\phi)}\buildrel\Delta\over{=}{\mathrm{erf}\left({\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)+\mathrm{erf}\left({\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}\right)}.$
(14)
###### Lemma 2.
Let $v_{1}$ and $v_{2}$ denote the zero points of $\mathrm{erf}(\xi_{1})$ and
$\mathrm{erf}(\xi_{2})$, respectively, where
$\xi_{1}={\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}$
and
$\xi_{2}={\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}$.
We define the center and width of the energy focusing window as
${v_{c}}=({v_{1}}+{v_{2}})/{2}$ and $\Delta v=|v_{1}-v_{2}|$, respectively. In
lens Design 1, the center and width of the energy focusing window are given as
follows:
${v_{c}}=\sin\phi,\qquad \Delta v=\dfrac{{{D_{y}}{{\cos}^{2}}\phi}}{d}.$
(15)
In lens Design 2, the center and width of the energy focusing window are
obtained as follows:
${v_{c}}=\sin\phi,\qquad \Delta v={D_{y}}\left|{\dfrac{1}{{{F_{0}}}}-\dfrac{{{{\cos}^{2}}\phi}}{d}}\right|.$
(16)
###### Proof.
Refer to Appendix C. ∎
###### Remark 2.
The energy focusing properties of the lens antenna array illuminated by the
spherical wave-front in (8) and by the plane wave-front in (11) are similar in
the following sense: the center of the focusing area is approximately equal to
$\sin\phi$, where $\phi$ is the angle of the source point relative to the
x-axis. They differ as follows: the width of the focusing window reflects the
distance information $d$, according to (15) and (16). This
feature implies that a single ExLens has the positioning capability in the
spherical wave-front scenarios. Therefore, the position of the UE can also be
easily extracted from the information of the energy focusing windows according
to the received communication signals. By contrast, the “sinc” function has a
maximum energy point and cannot reflect the information of $d$, according to
(11).
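Lemma 2 amounts to a one-line computation per design; below is a minimal transcription of (15)-(16) (function name and defaults are illustrative):

```python
import numpy as np

def focusing_window(d, phi, Dy=1.0, F0=15.0, design=2):
    """Center v_c and width Delta v of the focusing window, Eqs. (15)-(16)."""
    vc = np.sin(phi)
    if design == 1:
        dv = Dy * np.cos(phi)**2 / d
    else:
        dv = Dy * abs(1.0 / F0 - np.cos(phi)**2 / d)
    return vc, dv

# Example: a UE at d = 7 m, phi = 0 rad under Design 2
print(focusing_window(7.0, 0.0))  # (0.0, Dy * |1/15 - 1/7|)
```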
Next, we explore the changes in the energy focusing properties from the far-
field to the near-field as the aperture size of the lens antenna array increases.
For illustration, we take the lens Design 2 as an example and plot changes in
the energy focusing property by increasing the effective aperture
$\tilde{D}_{y}$ (Fig. 3). Although $\mathrm{erf}(\xi_{1})$ and
$\mathrm{erf}(\xi_{2})$ are complex values, their imaginary parts are close to
zero. Hence we draw the real parts of $\mathrm{erf}(\xi_{1})$ and
$\mathrm{erf}(\xi_{2})$ on the right-hand side of Fig. 3. When $\tilde{D}_{y}$
is small ($\tilde{D}_{y}=5$ and $\tilde{D}_{y}=10$), $d=50\,m$ is much larger
than the Rayleigh distance $R={2D_{y}^{2}}/{\lambda}$, the plane wave-front
assumption holds, and $|\mathrm{erf}(\xi_{1})+\mathrm{erf}(\xi_{2})|$ on the
left-hand side of Fig. 3 takes the shape of the “sinc” function. As
long as the plane wave-front assumption holds, the “sinc” function will become
finer and sharper as $\tilde{D}_{y}$ increases. Thus, the focusing property
and the angle resolution of the lens antenna array improves, as described in
[34]. When $\tilde{D}_{y}$ further increases to a sufficiently large value,
the plane wave-front assumption no longer holds. Furthermore, the “sinc”
function cannot reflect the energy focusing property accurately, say, when
$\tilde{D}_{y}\geqslant 70$. With the further increase in $\tilde{D}_{y}$
($\tilde{D}_{y}=100$ and $\tilde{D}_{y}=200$), the window effect for the
energy focusing property appears. The energy received in the focusing window
is approximately equal across the window, and the energy of the side lobes is
extremely small, as shown in the last two subfigures of Fig. 3. The width of
the energy focusing window increases with $\tilde{D}_{y}$, thereby collecting
more energy but also exhibiting a more pronounced energy diffusion effect. The
aforementioned phenomenon is caused
by the relative distance of the lines of $\mathrm{erf}(\xi_{1})$ and
$\mathrm{erf}(\xi_{2})$. The relative distance of the lines of
$\mathrm{erf}(\xi_{1})$ and $\mathrm{erf}(\xi_{2})$ increases with
$\tilde{D}_{y}$, thereby resulting in different line shapes of
$|\mathrm{erf}(\xi_{1})+\mathrm{erf}(\xi_{2})|$ (Fig. 3). In summary, the
energy focusing property of the lens antenna array is described by the “sinc”
function in the far-field scenarios and the “window” function in the near-
field scenarios.
###### Remark 3.
The derived array response of the ExLens illuminated by a spherical wave-front
in (8) can describe the transition between the far-field and the near-field
scenarios. In the special case of far-field (i.e., $d,F_{0}\to\infty$), the
angle resolution of the lens antenna array is determined by the width of the
main lobe of the “sinc” function, which is $2/\tilde{D}_{y}$. By contrast, the
width $\Delta v$ determines the angle resolution of the ExLens in the near-
field. The angle resolution makes ExLens illuminated by spherical waves also
suitable for multi-user communication.
Figure 3: Changes in the energy focusing property of the lens antenna array as
the effective aperture $\tilde{D}_{y}$ increases, where $\phi=0$ rad,
$d=50\,m$, $F_{0}=15\,m$, $F=5\,m$, and $\lambda=0.01\,m$.
### III-C Approximation Tightness and Antenna Deployment
Figure 4: Differences of the energy focusing properties of the ExLens with two
different lens designs, where $\tilde{D}_{y}=100$, $d=7\,m$, $F_{0}=F=5\,m$,
and $\lambda=0.01\,m$.
The approximation error between the derived closed-form array response and the
original one in integral form is compared in Fig. 4. The solid line denotes
the original array response in the integral-form. The dotted line represents
the approximated closed-form array response given in (8). The solid line
matches well with the dashed line when the array power response is above $-20$
dB, thereby indicating that the approximation error is small. Specifically,
the approximation error can be safely ignored when
$\phi\in[-36^{\circ},36^{\circ}]$. When the array power response is below
$-20$ dB, the approximation error increases slightly with $|\phi|$. Moreover,
the two different lens designs also have dissimilar energy focusing
characteristics (Fig. 4). For Design 1, the width of the energy focusing
window $\Delta v={{{D_{y}}{{\cos}^{2}}\phi}}/{d}$ is smaller with larger
$|\phi|$, and this characteristic means that the energy focusing phenomenon is
evident when $|\phi|$ is large. However, for Design 2, we have $\Delta
v={D_{y}}\left|{{1}/{{{F_{0}}}}-{{{{\cos}^{2}}\phi}}/{d}}\right|$ with
$d>{F_{0}}\cos^{2}\phi$, so the energy focusing phenomenon is evident when
$|\phi|$ is small. This outcome is expected given that the ExLens of Design 1
has good energy focusing property for the uniform plane incident wave. When we
apply this design into spherical wave-front scenarios, the incident spherical
wave-front becomes closer to the plane wave-front as $|\phi|$ increases;
hence, the energy focusing performance improves. However, the ExLens of Design
2 has good energy focusing property when the source point is near the focal
point $\mathbf{c}_{0}=[-F_{0},0]$. When $|\phi|$ becomes larger, the source
point is farther from the focal point $\mathbf{c}_{0}$; thus, the energy
focusing performance deteriorates. As mentioned before, when $F_{0}\to\infty$,
Design 2 reduces to Design 1. Accordingly, the width of the energy focusing
window of Design 2 equals that of Design 1, i.e.,
$\lim\limits_{F_{0}\to\infty}{D_{y}}\left|{{1}/{{{F_{0}}}}-{{{{\cos}^{2}}\phi}}/{d}}\right|={{{D_{y}}{{\cos}^{2}}\phi}}/{d}$.
To be more general, we use the lens Design 2 for analysis in the following
sections. We adopt the antenna elements deployment $\sin\theta_{n}={n}/{N}$,
as mentioned in Section II. The right-hand side of Fig. 4 shows that the
energy focusing window is narrower in the center and wider on the edges.
Therefore, placing denser antenna elements in the center of the array can
prevent the non-detection of strong signals. For the far-field scenario, the
antenna array response reduces to the “sinc” function [34], and this kind of
antenna deployment is applicable.
From the analysis in this section, we gain the insight that the window effect
for the energy focusing property makes an ExLens illuminated by spherical
waves suitable for single-station localization and multi-user communication,
which are analyzed in depth in the following sections.
## IV Position Sensing
In this section, we explore the localization capability of ExLens. Under the
plane wave-front assumption, the signal arriving at different points of the
lens aperture has the same incident angle. However, when the UE is located in
the near-field of ExLens, the signal with a spherical wave-front arrives at
different points of the lens aperture with different incident angles. Thus,
relative to that of the plane wave-front, the received signal
with spherical wave-front contains more abundant angular information that
changes continuously from one edge of the lens aperture to another. According
to the traditional multi-point localization [36], more angular measurements
can ensure more accurate localization. We can thus infer that a single ExLens
has localization capability thanks to the abundant angular information carried
by the spherical wave-front. In the following, we analyze the theoretical
localization capability of ExLens and then propose a parameterized estimation
method to obtain the location parameters.
For ease of exposition, we take one UE with a single antenna for illustration in
this section. The system model can be easily extended to solve the case with
multiple UEs as long as the pilot signals for different UEs are orthogonal in
time. We consider the narrow band mmWave multi-path channel model. Thus, the
uplink channel is given by
$\textbf{h}=\sum\limits_{l=1}^{L}g_{l}\textbf{a}(d_{l},\phi_{l}),$
(17)
where $g_{l}$ is the complex gain of the $l$-th path,
$\textbf{a}(\cdot)\in\mathbb{C}^{N_{a}\times 1}$ is the array response vector
with elements defined in (10), $l=1$ represents the LOS path,
$(d_{1},\phi_{1})$ is a pair of position parameters of the UE, $l>1$
represents the NLOS path, and $(d_{l},\phi_{l})$ is a pair of position
parameters of the $l$-th scatterer. We only consider the last-jump scatterers.
If all-one pilots are used, then the received signal at the ExLens antenna
array is modelled as
$\textbf{r}=\textbf{h}+\textbf{n}=\sum\limits_{l=1}^{L}g_{l}\textbf{a}(d_{l},\phi_{l})+\textbf{n},$
(18)
where $\textbf{n}\in\mathbb{C}^{N_{a}\times 1}$ represents the circularly
symmetric complex Gaussian noise with zero-mean and covariance matrix
$\sigma^{2}\textbf{I}$. Here, we define the receive signal-to-noise ratio
(SNR) as SNR $=10\lg({\textbf{h}^{\text{H}}\textbf{h}}/{(N_{a}\sigma^{2})})$.
Let $\bm{\eta}_{l}=[g_{l},d_{l},\phi_{l}]$,
$\bm{\eta}=[\bm{\eta}_{1},\ldots,\bm{\eta}_{L}]$, and
$\textbf{h}(\bm{\eta})=\sum\limits_{l=1}^{L}g_{l}\textbf{a}(d_{l},\phi_{l})$.
In the localization, we aim at determining $\bm{\eta}$ based on the received
signal r in (18).
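For concreteness, the signal model (17)-(18) and the SNR definition can be sketched as follows (illustrative helper names, reusing the `steering_vector` sketch from Section II):

```python
import numpy as np

rng = np.random.default_rng(0)

def uplink_channel(paths, **lens):
    """h = sum_l g_l a(d_l, phi_l), Eq. (17); paths = [(g, d, phi), ...]."""
    return sum(g * steering_vector(d, phi, **lens) for g, d, phi in paths)

def received(h, sigma2):
    """All-one pilot observation of Eq. (18): r = h + n."""
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal(h.shape)
                               + 1j * rng.standard_normal(h.shape))
    return h + n

def snr_db(h, sigma2):
    """SNR = 10 lg( h^H h / (N_a sigma^2) ), as defined above."""
    return 10 * np.log10(np.vdot(h, h).real / (h.size * sigma2))
```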
### IV-A Theoretical Localization Analysis
According to [37], the $3L\times 3L$ positive definite Fisher information
matrix (FIM) of $\bm{\eta}$ is given by
${\bf{F}}\left(\bm{\eta}\right)=\left[\begin{matrix}{\bf{F}}_{11}\left(\bm{\eta}\right)&\ldots&{\bf{F}}_{1L}\left(\bm{\eta}\right)\\\
\vdots&\ddots&\vdots\\\
{\bf{F}}_{L1}\left(\bm{\eta}\right)&\ldots&{\bf{F}}_{LL}\left(\bm{\eta}\right)\end{matrix}\right],$
(19)
where the $3\times 3$ matrix ${\bf{F}}_{ll^{\prime}}\left({\bm{\eta}}\right)$
is defined by
${\bf{F}}_{ll^{\prime}}\left({\bm{\eta}}\right)=\frac{2}{\sigma^{2}}\mathcal{R}\left\\{\dfrac{\partial\textbf{h}^{\text{H}}(\bm{\eta})}{\partial\bm{\eta}_{l}}\dfrac{\partial\textbf{h}(\bm{\eta})}{\partial\bm{\eta}_{l^{\prime}}}\right\\},$
(20)
The information inequality for the covariance matrix of any unbiased estimate
$\hat{\bm{\eta}}$ reads [37]
$\mathbb{E}\\{(\hat{\bm{\eta}}-\bm{\eta})^{\text{H}}(\hat{\bm{\eta}}-\bm{\eta})\\}\geq{\bf{F}^{-1}}\left(\bm{\eta}\right).$
(21)
Note that ${\bf{F}}\left(\bm{\eta}\right)$ is expressed in polar coordinates.
With $x_{l}=-d_{l}\cos\phi_{l}$ and $y_{l}=d_{l}\sin\phi_{l}$, the position of
the UE or scatterer in Cartesian coordinates is given as
$\mathbf{u}_{l}=[x_{l},y_{l}]$, for $l=1,\ldots,L$. Let
$\mathbf{u}=[\mathbf{u}_{1},\ldots,\mathbf{u}_{L}]$,
$\tilde{\mathbf{u}}_{l}=[g_{l},x_{l},y_{l}]$, and
$\tilde{\mathbf{u}}=[\tilde{\mathbf{u}}_{1},\ldots,\tilde{\mathbf{u}}_{L}]$.
The transformation to the position domain is then achieved as follows: the FIM
of $\tilde{\mathbf{u}}$ is given by
${\bf{F}}\left(\tilde{\mathbf{u}}\right)={\bf{T}}^{\text{T}}{\bf{F}}\left({\bm{\eta}}\right){\bf{T}}$,
where ${\bf{T}}={\rm blkdiag}\\{\mathbf{T}_{1},\ldots,\mathbf{T}_{L}\\}$, and
${\bf{T}}_{l}=[1,0,0;0,x_{l}/d_{l},y_{l}/d_{l};0,y_{l}/d_{l}^{2},-x_{l}/d_{l}^{2}]$
is the Jacobian matrix used to describe the coordinate system transformation,
in which the “;” operator separates rows in a matrix. Then, we define the
position error bound (PEB) from ${\bf{F}}\left(\tilde{\mathbf{u}}\right)$ as
${\text{PEB}}(\mathbf{u})={\sqrt{{\rm trace}([{\bf{F}^{-1}}\left(\tilde{\mathbf{u}}\right)]_{(\\{2:3,5:6,\ldots,3L-1:3L\\},\\{2:3,5:6,\ldots,3L-1:3L\\})})}}.$
(22)
The root mean-square estimation error (RMSE) of an unbiased estimate of
${\mathbf{u}}$ is lower-bounded by ${\text{PEB}}(\mathbf{u})$. To calculate the
PEB given in (22), we need
${{\partial\mathbf{a}(d_{l},\phi_{l})}}/{{\partial d_{l}}}$ and
${{\partial\mathbf{a}(d_{l},\phi_{l})}}/{{\partial\phi_{l}}}$, which are
derived in Appendix D.
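Because the analytic derivatives are deferred to Appendix D, a purely numerical PEB evaluation may help in reproducing results such as Fig. 5. The sketch below approximates the Jacobian of $\textbf{h}(\bm{\eta})$ by central differences and, as a simplification for illustration, treats the gains $g_{l}$ as real:

```python
import numpy as np

def peb(paths, sigma2, eps=1e-6, **lens):
    """Numerical FIM of eta (Eqs. (19)-(20)) and PEB of Eq. (22).
    paths = [(g, d, phi), ...] with real gains for simplicity."""
    L = len(paths)
    eta0 = np.array([v for p in paths for v in p], dtype=float)

    def h(eta):
        return sum(eta[3*l] * steering_vector(eta[3*l+1], eta[3*l+2], **lens)
                   for l in range(L))

    J = np.empty((h(eta0).size, 3 * L), dtype=complex)
    for i in range(3 * L):                  # central-difference Jacobian
        e = np.zeros_like(eta0); e[i] = eps
        J[:, i] = (h(eta0 + e) - h(eta0 - e)) / (2 * eps)
    F = 2 / sigma2 * np.real(J.conj().T @ J)        # Eq. (20)

    T = np.zeros((3 * L, 3 * L))            # polar -> Cartesian Jacobian T_l
    for l, (g, d, phi) in enumerate(paths):
        x, y = -d * np.cos(phi), d * np.sin(phi)
        T[3*l:3*l+3, 3*l:3*l+3] = [[1, 0, 0],
                                   [0, x / d, y / d],
                                   [0, y / d**2, -x / d**2]]
    Fu = T.T @ F @ T
    idx = [i for l in range(L) for i in (3*l + 1, 3*l + 2)]  # position rows
    return np.sqrt(np.trace(np.linalg.inv(Fu)[np.ix_(idx, idx)]))
```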
Fig. 5 shows PEBs as functions of $d$, $\phi$, $D_{y}$, and $F_{0}$ ($F$ shows
a similar property to $F_{0}$). Given that some approximations are made to
derive the closed-form array response (10), we denote the obtained PEB with
the approximated closed-form array response as APEB. We also calculate the PEB
with the original received signal in the integral form denoted as OPEB. We can
evaluate the accuracy of the approximation by comparing the APEB with the
OPEB. Given the minimal approximation error shown in Fig. 5 (the APEB deviates
slightly from the OPEB only when $|\phi|>1.2$ rad or ${D}_{y}>3\,m$), the
theoretical localization analysis based on (10) is accurate.
###### Remark 4.
The localization performance of ExLens degrades with the increase in $d$ and
$|\phi|$. This finding is expected given that the UE transitions from the
near-field to the far-field as $d$ increases, whereupon a single BS loses its
localization capability. The localization performance improves with the
increase in ${D}_{y}$, because a larger ${D}_{y}$ brings richer angle
measurements, which is similar to adding BSs in multi-point localization. By
contrast, the value of $F_{0}$ has little effect on the PEBs. Under the given
configuration in Fig. 5, an ExLens with the
electrical aperture $\tilde{D}_{y}=100$ and SNR $=20$ dB can theoretically
provide around centimeter-level localization accuracy for a UE located with
$d<50\,m$ and $\phi<1$ rad to the BS.
Figure 5: (a) PEB as a function of $d$ and $\phi$ with $D_{y}=1\,m$,
$F_{0}=F=5\,m$, $\lambda=0.01\,m$, $N_{a}=201$, $L=1$, and SNR $=20$ dB. (b)
PEB as a function of ${D}_{y}$ and $F_{0}$ with $d=18\,m$, $\phi=0$ rad,
$F=5\,m$, $\lambda=0.01\,m$, $N_{a}=201$, $L=1$, and SNR $=20$ dB.
### IV-B Location Parameter Estimation Method
In this subsection, we propose a location parameter estimation method to
determine $(d_{l},\phi_{l})$ and the gain $g_{l}$ for $l=1,\ldots,L$. The
maximum likelihood (ML) estimator is given by
$(\bm{\hat{d}},\bm{\hat{\phi}},\bm{\hat{g}})=\mathop{\arg\min}\limits_{\bm{d}\in\mathbb{R}^{L},\bm{\phi}\in(-\frac{\pi}{2},\frac{\pi}{2})^{L},\bm{g}\in\mathbb{C}^{L}}\left\Arrowvert\textbf{r}-\sum\limits_{l=1}^{L}g_{l}\textbf{a}(d_{l},\phi_{l})\right\Arrowvert^{2},$
(23)
where ${\bm{d}}=[d_{1},\ldots,d_{L}]$,
${\bm{\phi}}=[\phi_{1},\ldots,\phi_{L}]$, and ${\bm{g}}=[g_{1},\ldots,g_{L}]$.
333 We assume that only $M_{RF}$ RF chains are available in the ExLens system,
where $M_{RF}<N_{a}$. Thus, the low-complexity power-based antenna selection
method is applied. We let $\textbf{r}\in\mathbb{C}^{N_{a}\times 1}$ denote the
received signal after the antenna selection, which has $M_{RF}$ non-zero
elements. The brute-force search for the optimal estimate of
$(\bm{d},\bm{\phi},\bm{g})$ in the whole continuous domain
($\bm{d}\in\mathbb{R}^{L}$, $\bm{\phi}\in(-{\pi}/{2},{\pi}/{2})^{L}$, and
$\bm{g}\in\mathbb{C}^{L}$) is infeasible. Hence, we propose an effective
localization method that contains three stages: (1) the initialization stage,
where we propose a window-based coarse localization algorithm to determine the
grid search region; (2) the detection stage, in which we find a relatively
accurate estimate of $(d_{l},\phi_{l})$, for $l=1,\ldots,L$, from discrete
grids by the discrete OMP (DOMP) algorithm; and (3) the refinement stage,
where we iteratively refine the location parameters $(d_{l},\phi_{l})$ and
gains $g_{l}$, for $l=1,\ldots,L$, by the Newton algorithm [38, 39].
#### IV-B1 Initialization stage
We utilize the window effect of the energy focusing property of ExLens to narrow
down the search region. Lemma 2 is developed for single-path scenarios. For
multi-path scenarios, we let $v_{1l}$ and $v_{2l}$ denote the window edges for
the $l$-th path, which are affected by the position parameters
$(d_{l},\phi_{l})$, where $l=1,\ldots,L$. In Appendix C, we derive the
relationships between the window edges and the location parameters. After
parameters $v_{1l}$ and $v_{2l}$ are measured, we can obtain a coarse
estimation of $(d_{l},\phi_{l})$ for $l=1,\ldots,L$. We apply power detection
to the received signal by each antenna, and for a given threshold, we can
obtain $\hat{v}_{1l}$ and $\hat{v}_{2l}$ for $l=1,\ldots,L$. According to (67)
and (69), we have the following expression for the $l$-th path:
$\mathbf{g}_{l}=\mathbf{G}\mathbf{q}_{l}+\mathbf{e}_{l},$
(24)
where
$\mathbf{g}_{l}=\left(\left(\hat{v}_{1l}+\frac{F}{D_{y}}\right)^{2}-\left(\frac{F}{D_{y}}\right)^{2}+\frac{F}{F_{0}},\ \left(\hat{v}_{2l}-\frac{F}{D_{y}}\right)^{2}-\left(\frac{F}{D_{y}}\right)^{2}+\frac{F}{F_{0}}\right)^{\text{T}},\quad\mathbf{G}=\begin{pmatrix}\frac{2F}{D_{y}}&F\\\ \frac{-2F}{D_{y}}&F\end{pmatrix},\quad\mathbf{q}_{l}=\begin{pmatrix}\sin\phi_{l}\\\ \frac{\cos^{2}\phi_{l}}{d_{l}}\end{pmatrix},$
(25)
and $\mathbf{e}_{l}$ is the noise vector caused by measurement error. Then,
the least squares estimator is given by
$\mathbf{\hat{q}}_{l}=(\mathbf{G}^{\text{H}}\mathbf{G})^{-1}\mathbf{G}^{\text{H}}\mathbf{g}_{l},$
(26)
with $\mathbf{\hat{q}}_{l}=[\hat{q}_{1l},\hat{q}_{2l}]$. Thus, the position
parameters $(d_{l},\phi_{l})$ can be recovered by
$\hat{\phi}_{l}=\arcsin\hat{q}_{1l},\qquad \hat{d}_{l}=({1-\hat{q}_{1l}^{2}})/{\hat{q}_{2l}}.$
(27)
We denote the sets $\mathbb{S}_{d}=\cup\mathbb{S}_{d}^{l}$ and
$\mathbb{S}_{\phi}=\cup\mathbb{S}_{\phi}^{l}$, for $l=1,\ldots,L$, where
$\mathbb{S}_{d}^{l}=\\{\hat{d}_{l}-\Delta d\leq d\leq\hat{d}_{l}+\Delta d\\}$
and
$\mathbb{S}_{\phi}^{l}=\\{\hat{\phi}_{l}-\Delta\phi\leq\phi\leq\hat{\phi}_{l}+\Delta\phi\\}$.
We generate finite discrete sets by taking $N_{d}$ and $N_{\phi}$ grids on the
obtained sets $\mathbb{S}_{d}$ and $\mathbb{S}_{\phi}$ as
$\mathbb{\bar{S}}_{d}$ and $\mathbb{\bar{S}}_{\phi}$, respectively. The total
search region is initialized as $\mathbb{\bar{S}}_{d}$ and
$\mathbb{\bar{S}}_{\phi}$.
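The initialization amounts to the small linear solve (24)-(27); below is a minimal sketch (function name and defaults are illustrative), assuming the window edges $\hat{v}_{1l}$ and $\hat{v}_{2l}$ have already been measured by power detection:

```python
import numpy as np

def coarse_estimate(v1, v2, Dy=1.0, F=5.0, F0=15.0):
    """Window-based initialization, Eqs. (24)-(27), for one path."""
    g = np.array([(v1 + F / Dy)**2 - (F / Dy)**2 + F / F0,
                  (v2 - F / Dy)**2 - (F / Dy)**2 + F / F0])
    G = np.array([[ 2 * F / Dy, F],
                  [-2 * F / Dy, F]])
    # Eq. (26); for the square invertible G this equals the LS estimator
    q1, q2 = np.linalg.solve(G, g)
    phi_hat = np.arcsin(q1)                 # Eq. (27)
    d_hat = (1 - q1**2) / q2
    return d_hat, phi_hat
```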
#### IV-B2 Detection stage
We apply the DOMP algorithm to detect $d_{l}$ and $\phi_{l}$ from the discrete
sets $\mathbb{\bar{S}}_{d}$ and $\mathbb{\bar{S}}_{\phi}$, respectively, for
$l=1,\ldots,L$. We take the detection of the $l^{\prime}$-th path as an
example for illustration. Let $(\hat{d}_{l},\hat{\phi}_{l},\hat{g}_{l})$, for
$l=1,\ldots,l^{\prime}-1$, denote the estimates of the first $l^{\prime}-1$
paths. Then, the residual measurement is given by
$\textbf{r}_{r}=\textbf{r}-\sum\limits_{l=1}^{l^{\prime}-1}\hat{g}_{l}\textbf{a}(\hat{d}_{l},\hat{\phi}_{l}).$
(28)
We obtain the ML estimates by minimizing the residual power
$\left\Arrowvert\textbf{r}_{r}-g\textbf{a}(d,\phi)\right\Arrowvert^{2}$ or,
equivalently, by maximizing $S(d,\phi,g)$, where
$S(d,\phi,g)=2\mathcal{R}\left\\{\textbf{r}_{r}^{\text{H}}g\textbf{a}(d,\phi)\right\\}-\left\Arrowvert g\textbf{a}(d,\phi)\right\Arrowvert^{2}.$
(29)
The generalized likelihood ratio test estimate of
$(d_{l^{\prime}},\phi_{l^{\prime}})$ of the $l^{\prime}$-th path is the
solution of the following optimization problem
$(\hat{d}_{l^{\prime}},\hat{\phi}_{l^{\prime}})=\mathop{\arg\max}\limits_{d\in\mathbb{\bar{S}}_{d},\phi\in\mathbb{\bar{S}}_{\phi}}|\textbf{a}(d,\phi)^{\text{H}}\textbf{r}_{r}|^{2}/\left\Arrowvert\textbf{a}(d,\phi)\right\Arrowvert^{2}.$
(30)
The corresponding gain of the $l^{\prime}$-th path that maximizes
$S(d,\phi,g)$ is given by
$\hat{g}_{l^{\prime}}=\left(\textbf{a}(\hat{d}_{l^{\prime}},\hat{\phi}_{l^{\prime}})^{\text{H}}\textbf{r}_{r}\right)/\left\Arrowvert\textbf{a}(\hat{d}_{l^{\prime}},\hat{\phi}_{l^{\prime}})\right\Arrowvert^{2}.$
(31)
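A single DOMP detection step, Eqs. (28)-(31), then reads as follows (an illustrative sketch reusing the `steering_vector` helper from Section II):

```python
import numpy as np

def domp_step(r_res, d_grid, phi_grid, **lens):
    """Detect one path from the residual r_res via Eq. (30), gain via (31)."""
    best, arg, a_best = -np.inf, None, None
    for d in d_grid:
        for phi in phi_grid:
            a = steering_vector(d, phi, **lens)
            score = abs(np.vdot(a, r_res))**2 / np.vdot(a, a).real
            if score > best:
                best, arg, a_best = score, (d, phi), a
    g = np.vdot(a_best, r_res) / np.vdot(a_best, a_best).real  # Eq. (31)
    return arg[0], arg[1], g
```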
#### IV-B3 Refinement stage
Given that $d_{l^{\prime}}$ and $\phi_{l^{\prime}}$ can take any value in
$\mathbb{R}$ and $(-{\pi}/{2},{\pi}/{2})$, respectively, we add a refinement
stage by utilizing the Newton algorithm to reduce the off-grid effect and enhance
the estimation accuracy. Let
$(\hat{d}_{l^{\prime}},\hat{\phi}_{l^{\prime}},\hat{g}_{l^{\prime}})$ denote
the current estimates. The Newton refinement is given by
$\begin{pmatrix}\hat{\hat{d}}_{l^{\prime}}\\\ \hat{\hat{\phi}}_{l^{\prime}}\end{pmatrix}=\begin{pmatrix}\hat{{d}}_{l^{\prime}}\\\ \hat{{\phi}}_{l^{\prime}}\end{pmatrix}-\begin{pmatrix}\frac{\partial^{2}S}{\partial d^{2}}&\frac{\partial^{2}S}{\partial d\partial\phi}\\\ \frac{\partial^{2}S}{\partial\phi\partial d}&\frac{\partial^{2}S}{\partial\phi^{2}}\end{pmatrix}^{-1}\begin{pmatrix}\frac{\partial S}{\partial d}\\\ \frac{\partial S}{\partial\phi}\end{pmatrix},$
(32)
where the first-order partial derivatives of $S(d,\phi,g)$ are given by
$\frac{\partial S}{\partial x}=\mathcal{R}\left\\{(\textbf{r}_{r}-g\textbf{a}(d,\phi))^{\text{H}}g\frac{\partial\textbf{a}(d,\phi)}{\partial x}\right\\},$
(33)
where $x$ can be $d$ or $\phi$. The second-order partial derivatives of
$S(d,\phi,g)$ are given by
$\frac{\partial^{2}S}{\partial x\partial y}=\mathcal{R}\left\\{\left(\textbf{r}_{r}-g\textbf{a}(d,\phi)\right)^{\text{H}}g\frac{\partial^{2}\textbf{a}(d,\phi)}{\partial x\partial y}-|g|^{2}\frac{\partial\textbf{a}^{\text{H}}(d,\phi)}{\partial x}\frac{\partial\textbf{a}(d,\phi)}{\partial y}\right\\},$
(34)
where $x$ and $y$ can be $d$ or $\phi$. By referring to (74)-(81) and
performing some tedious calculations, (33) and (34) can be obtained. The gain
is then updated to
$\hat{\hat{g}}_{l^{\prime}}=\left(\textbf{a}(\hat{\hat{d}}_{l^{\prime}},\hat{\hat{\phi}}_{l^{\prime}})^{\text{H}}\textbf{r}_{r}\right)/\left\Arrowvert\textbf{a}(\hat{\hat{d}}_{l^{\prime}},\hat{\hat{\phi}}_{l^{\prime}})\right\Arrowvert^{2}.$
(35)
We accept a refinement only if the new residual power
$\left\Arrowvert\textbf{r}_{r}-\hat{\hat{g}}_{l^{\prime}}\textbf{a}(\hat{\hat{d}}_{l^{\prime}},\hat{\hat{\phi}}_{l^{\prime}})\right\Arrowvert^{2}$
is smaller than the old residual power
$\left\Arrowvert\textbf{r}_{r}-\hat{g}_{l^{\prime}}\textbf{a}(\hat{d}_{l^{\prime}},\hat{\phi}_{l^{\prime}})\right\Arrowvert^{2}$.
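A compact sketch of one refinement step is given below; for illustration it replaces the analytic derivatives (33)-(34) with central differences of $S$, and it keeps the Newton update only when the residual power decreases, as described above:

```python
import numpy as np

def newton_refine(r_res, d, phi, g, eps=1e-5, **lens):
    """One guarded Newton step on S(d, phi, g), Eqs. (32) and (35)."""
    def S(d_, phi_):
        a = steering_vector(d_, phi_, **lens)
        return (2 * np.real(np.vdot(r_res, g * a))
                - abs(g)**2 * np.vdot(a, a).real)        # Eq. (29)

    f0 = S(d, phi)                     # central-difference gradient/Hessian
    gd = (S(d + eps, phi) - S(d - eps, phi)) / (2 * eps)
    gp = (S(d, phi + eps) - S(d, phi - eps)) / (2 * eps)
    Hdd = (S(d + eps, phi) - 2 * f0 + S(d - eps, phi)) / eps**2
    Hpp = (S(d, phi + eps) - 2 * f0 + S(d, phi - eps)) / eps**2
    Hdp = (S(d + eps, phi + eps) - S(d + eps, phi - eps)
           - S(d - eps, phi + eps) + S(d - eps, phi - eps)) / (4 * eps**2)
    step = np.linalg.solve([[Hdd, Hdp], [Hdp, Hpp]], [gd, gp])  # Eq. (32)
    d2, phi2 = d - step[0], phi - step[1]
    a2 = steering_vector(d2, phi2, **lens)
    g2 = np.vdot(a2, r_res) / np.vdot(a2, a2).real              # Eq. (35)
    if (np.linalg.norm(r_res - g2 * a2)
            < np.linalg.norm(r_res - g * steering_vector(d, phi, **lens))):
        return d2, phi2, g2            # accept: residual power decreased
    return d, phi, g                   # otherwise keep the old estimate
```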
Note that, on the one hand, we can realize localization with the estimated
location parameters; on the other hand, we can design data transmission with
the channel between the BS and the UE reconstructed by using (17). The
channels between the BS and all UEs can be reconstructed by using orthogonal
pilot signals. The spectral efficiency based on the reconstructed channel is
also analyzed in Section VI.
## V Multi-user Communication
Figure 6: Multi-user communication with ExLens in spherical-wave scenarios.
The method proposed in Section IV can obtain the location parameters together
with the channel gains. Multiple UEs can be simultaneously served by the
ExLens system with the channels of all UEs reconstructed. In this section, we
analyze the multi-user communication performance of ExLens with limited RF
chains when near-field and far-field UEs coexist.
We assume that $K$ single-antenna UEs are served by a BS, which is equipped
with an ExLens with $N_{a}$ antenna elements (Fig. 6). We consider the multi-
path environment because of the presence of scatterers. For illustration
purposes, we only draw two paths for each UE (one line-of-sight (LoS) path and
one non-line-of-sight (NLoS) path) in Fig. 6 and omit the UE in the far-field.
Each path corresponds to an energy focusing window (FW) at the focal arc, and
the overlap of the FWs of different UEs introduces inter-user-interference
(IUI). We consider the narrow band mmWave multi-path channel model [11].
According to (17), the channel between the BS and the $k$-th UE can be
expressed as
$\textbf{h}_{k}=\sum\limits_{l=1}^{L_{k}}g_{kl}\textbf{a}(d_{kl},\phi_{kl}),$
(36)
where $L_{k}$ is the number of paths of the $k$-th UE, in which $l=1$
corresponds to the LoS path and $1<l\leq L_{k}$ corresponds to the NLoS path,
$g_{kl}$ is the complex path gain for the $l$-th path of the $k$-th UE,
$(d_{k1},\phi_{k1})$ are the position parameters of the $k$-th UE,
$(d_{kl},\phi_{kl})$ are the position parameters of the $l$-th scatterer of
the $k$-th UE ($1<l\leq L_{k}$), and $\textbf{a}(d_{kl},\phi_{kl})$ is
the array response vector with elements given in (10). Let
$x_{k}=\sqrt{p_{k}}s_{k}$ represent the transmitted signal by the $k$-th UE,
where $\sqrt{p_{k}}$ denotes the transmitted power, and $s_{k}$ denotes the
independent information-bearing symbol with $\mathbb{E}\\{|s_{k}|^{2}\\}=1$.
The signal received at the BS is given as
$\tilde{\textbf{r}}=\sum\limits_{k=1}^{K}\mathbf{h}_{k}{x_{k}}+\textbf{n}=\sum\limits_{k=1}^{K}\sum\limits_{l=1}^{L_{k}}g_{kl}\textbf{a}(d_{kl},\phi_{kl}){x_{k}}+\textbf{n},$
(37)
where $\textbf{n}\in\mathbb{C}^{N_{a}\times 1}$ represents the Gaussian noise
with zero-mean and covariance matrix ${\sigma^{2}}\textbf{I}$.
We can reduce the number of RF chains for systems equipped with such an array
by exploiting the energy focusing property of the ExLens in the near-field and
far-field. We assume that only $M_{RF}$ RF chains are available, where
$M_{RF}<N_{a}$. Thus, antenna selection (e.g., by the low-complexity power-
based antenna selection method) must be applied. Let
$\textbf{W}_{\text{RF}}\in\mathbb{R}^{N_{a}\times M_{RF}}$ denote the power-
based antenna selection matrix, where the elements of $\textbf{W}_{\text{RF}}$
are $0$ or $1$. To avoid the scenario in which antenna selection favors nearby
UEs over distant ones, we assume that the channel-inversion based power control
is applied during the antenna selection phase. Thus, the received signals at
the BS from different UEs have comparable strength. The signal received by the
selected antennas can be expressed as
$\textbf{W}_{\text{RF}}^{\text{H}}\tilde{\textbf{r}}=\sum\limits_{k=1}^{K}\textbf{W}_{\text{RF}}^{\text{H}}\mathbf{h}_{k}{x_{k}}+\textbf{W}_{\text{RF}}^{\text{H}}\textbf{n}.$
(38)
Given
$\tilde{\textbf{r}}_{s}\buildrel\Delta\over{=}\textbf{W}_{\text{RF}}^{\text{H}}\tilde{\textbf{r}}$,
$\mathbf{h}_{k,s}\buildrel\Delta\over{=}\textbf{W}_{\text{RF}}^{\text{H}}\mathbf{h}_{k}$,
and
$\textbf{n}_{s}\buildrel\Delta\over{=}\textbf{W}_{\text{RF}}^{\text{H}}\textbf{n}$,
we have
$\tilde{\textbf{r}}_{s}=\sum\limits_{k=1}^{K}\mathbf{h}_{k,s}{x_{k}}+\textbf{n}_{s},$
(39)
which can be rewritten as
$\tilde{\textbf{r}}_{s}=\mathbf{h}_{k,s}{x_{k}}+\sum\limits_{k^{\prime}\neq k}^{K}\mathbf{h}_{k^{\prime},s}{x_{k^{\prime}}}+\textbf{n}_{s},$
(40)
where the term $\sum\limits_{k^{\prime}\neq
k}^{K}\mathbf{h}_{k^{\prime},s}{x_{k^{\prime}}}$ is the IUI for the $k$-th UE,
and $\textbf{n}_{s}\in\mathbb{C}^{M_{RF}\times 1}$ represents the Gaussian
noise at the selected antennas with zero-mean and covariance matrix
${\sigma^{2}}\textbf{I}$. Let $\textbf{u}_{k}\in\mathbb{C}^{M_{RF}\times 1}$
represent the baseband combining vector for the $k$-th UE, where
$||\textbf{u}_{k}||=1$. The bandwidth-normalized achievable rate for the
$k$-th UE is given by
$R_{k}=\log_{2}\left(1+\dfrac{p_{k}|\textbf{u}_{k}^{\text{H}}\mathbf{h}_{k,s}|^{2}}{\sum\limits_{k^{\prime}\neq k}^{K}{p_{k^{\prime}}}|\textbf{u}_{k}^{\text{H}}\mathbf{h}_{k^{\prime},s}|^{2}+\sigma^{2}}\right),$
(41)
and for all $K$ UEs, we obtain the sum-rate as
$R=\sum\limits_{k=1}^{K}R_{k}=\sum\limits_{k=1}^{K}\log_{2}\left(1+\dfrac{p_{k}|\textbf{u}_{k}^{\text{H}}\mathbf{h}_{k,s}|^{2}}{\sum\limits_{k^{\prime}\neq k}^{K}{p_{k^{\prime}}}|\textbf{u}_{k}^{\text{H}}\mathbf{h}_{k^{\prime},s}|^{2}+\sigma^{2}}\right).$
(42)
In near-field and far-field scenarios, the ExLens has the energy focusing
ability. When the incident angles of different UEs are sufficiently separated,
the ExLens can resolve various UEs. For general systems where the BS cannot
resolve all the UEs perfectly, we apply the linear receivers described in the
following subsection to detect the signals from different UEs.
### V-A Linear Receivers
The combining vector $\textbf{u}_{k}$ is applied to (40) to detect $s_{k}$.
First, we consider the MRC scheme, which disregards the IUI term in (40). In
this case, $\textbf{u}_{k}$ is designed to simply maximize the desired signal
power of the $k$-th UE, as given by
$\textbf{u}_{k}^{*}=\mathop{\arg\max}_{\parallel\textbf{u}_{k}\parallel=1}\mid\textbf{u}_{k}^{\text{H}}\mathbf{h}_{k,s}\mid^{2}.$
(43)
The optimal solution to (43) is
$\textbf{u}_{k}^{*}=\dfrac{\mathbf{h}_{k,s}}{\parallel\mathbf{h}_{k,s}\parallel}.$
(44)
The combining vector $\textbf{u}_{k}$ designed by the MRC scheme in (44) is
sub-optimal in general because it ignores the IUI.
To further mitigate the IUI, we apply the MMSE-based combining scheme. The
MMSE scheme accounts for the interference and finds the $\textbf{u}_{k}$ that
minimizes the mean square error between the combined received signal and the
desired signal:
$\textbf{u}_{k}^{*}=\mathop{\arg\min}_{\parallel\textbf{u}_{k}\parallel=1}\mathbb{E}\\{\mid\textbf{u}_{k}^{\text{H}}\tilde{\mathbf{r}}_{s}-s_{k}\mid^{2}\\}.$
(45)
The optimal MMSE solution is
$\textbf{u}_{k}^{*}=\dfrac{\mathbf{R}_{rr}^{-1}\mathbf{R}_{rs}}{\parallel\mathbf{R}_{rr}^{-1}\mathbf{R}_{rs}\parallel},$
(46)
where
$\mathbf{R}_{rr}=\sum\limits_{k=1}^{K}p_{k}\mathbf{h}_{k,s}\mathbf{h}_{k,s}^{\text{H}}+\sigma^{2}\mathbf{I}$
denotes the autocorrelation matrix of the received signal $\tilde{\mathbf{r}}_{s}$
and $\mathbf{R}_{rs}=\sqrt{p_{k}}\mathbf{h}_{k,s}$ denotes the cross-
correlation of the received signal $\tilde{\mathbf{r}}_{s}$ and the desired
signal $s_{k}$.
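The two combiners and the per-user rate (41) take only a few lines each; in the sketch below the effective channels $\mathbf{h}_{k,s}$ are stacked as the columns of a matrix `H` (an illustrative convention):

```python
import numpy as np

def mrc(H, k):
    """MRC combiner of Eq. (44)."""
    return H[:, k] / np.linalg.norm(H[:, k])

def mmse(H, k, p, sigma2):
    """MMSE combiner of Eq. (46); p is the vector of transmit powers."""
    Rrr = (H * p) @ H.conj().T + sigma2 * np.eye(H.shape[0])
    u = np.linalg.solve(Rrr, np.sqrt(p[k]) * H[:, k])   # R_rr^{-1} R_rs
    return u / np.linalg.norm(u)

def rate(H, k, u, p, sigma2):
    """Per-user achievable rate of Eq. (41)."""
    sig = p[k] * abs(np.vdot(u, H[:, k]))**2
    iui = sum(p[j] * abs(np.vdot(u, H[:, j]))**2
              for j in range(H.shape[1]) if j != k)
    return np.log2(1 + sig / (iui + sigma2))
```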
### V-B Benchmark Schemes
We compare the multi-user communication performance of ExLens with that of a
conventional ULA, where both are illuminated by spherical wave-fronts. We
assume that both types of arrays have the same electrical aperture
$\tilde{D}_{y}$ and number of antenna elements
($N_{a}=2\lfloor\tilde{D}_{y}\rfloor+1$). The ULA is placed along the y-axis
centered at the origin, and the spacing between two adjacent antenna elements
is $\Delta d={\lambda}/{2}$ (Fig. 1(b)). Take one UE for example, whose
position parameters are $(d,\phi)$, with $d$ denoting the distance between the
UE and the origin, and $\phi\in(-\pi/2,\pi/2)$ the angle of the UE relative to
the x-axis. According to [22], the array response of the ULA
illuminated by the spherical wave-front is given by
$\textbf{a}(d,\phi)=\left(\dfrac{d}{d_{{}_{-N}}}e^{-jk_{0}(d_{{}_{-N}}-d)},\ldots,\dfrac{d}{d_{{}_{-1}}}e^{-jk_{0}(d_{{}_{-1}}-d)},1,\dfrac{d}{d_{{}_{1}}}e^{-jk_{0}(d_{{}_{1}}-d)},\ldots,\dfrac{d}{d_{{}_{N}}}e^{-jk_{0}(d_{{}_{N}}-d)}\right),$
(47)
where $d_{n}=\sqrt{d^{2}+n^{2}\Delta d^{2}-2nd\Delta d\sin\phi}$ and
$n\in\\{0,\pm 1,\ldots,\pm N\\}$. When $d\to\infty$, we have ${d}/{d_{n}}\to 1$
and $d_{n}-d\to-n\Delta d\sin\phi$, so the array response illuminated by the
spherical wave-front given in (47) reduces to that illuminated by the plane
wave-front. Thus, the array response given in (47) for the ULA can be used in
near-field and far-field scenarios. We obtain the signal
received by the ULA with a spherical wave-front by substituting (47) into (37)
with $K$ UEs. We assume that the ULA is equipped with $M_{RF}$ RF chains;
thus, antenna selection is necessary. However, given that the optimal antenna
selection scheme for the multi-user ULA system illuminated by the spherical
wave-front is unknown in general, we apply two analog combining matrix design
schemes as benchmarks for a meaningful comparison. In the first
benchmark, we adopt the power-based antenna selection because of its
simplicity. Then, we apply the MRC and MMSE-based digital combining vector
design schemes to the ULA system. However, when $M_{RF}$ is small, the
performance for ULA is limited because of the limited array gain with the
small number of antennas selected. Thus, we also consider the second benchmark
by applying the approximate Gram Schmidt-based hybrid precoding scheme to
design the analog combining matrix for ULA [40]. To mitigate the IUI, the
MMSE-based digital combining vector design scheme is then applied.
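For completeness, the benchmark ULA response (47) can be sketched as follows (illustrative defaults; $\lambda/2$ element spacing):

```python
import numpy as np

def ula_response(d, phi, Na=201, lam=0.01):
    """Spherical-wave ULA response of Eq. (47)."""
    k0 = 2 * np.pi / lam
    n = np.arange(-(Na - 1) // 2, (Na - 1) // 2 + 1)
    dd = lam / 2                                  # element spacing
    dn = np.sqrt(d**2 + (n * dd)**2 - 2 * n * d * dd * np.sin(phi))
    return (d / dn) * np.exp(-1j * k0 * (dn - d))
```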
## VI Numerical Results
### VI-A Localization Performance
In this subsection, we discuss the performance of the proposed localization
method. We define
$\mbox{CRLB}(\mathbf{d})=\sqrt{\sum_{k=1}^{L}{\bf{F}}^{-1}(\bm{\eta})_{(2k-1,2k-1)}}$
for comparison, where ${\bf{F}}^{-1}(\bm{\eta})_{(2k-1,2k-1)}$ denotes the
$(2k-1,2k-1)$-th element of the matrix ${\bf{F}}^{-1}(\bm{\eta})$. Similarly,
we define
$\mbox{CRLB}({\bm{\phi}})=\sqrt{\sum_{k=1}^{L}{\bf{F}}^{-1}(\bm{\eta})_{(2k,2k)}}$.
The PEB($\mathbf{u}$) is calculated according to (22). Let $\mathbf{x}$
represent $\mathbf{d}$, ${\bm{\phi}}$, or $\mathbf{u}$, and
$\hat{\mathbf{x}}_{i}$ denote the estimate of $\mathbf{x}$ at the $i$-th Monte
Carlo simulation. The RMSE is defined as
$\mbox{RMSE}(\mathbf{x})=\sqrt{\sum_{i=1}^{T}||\hat{\mathbf{x}}_{i}-\mathbf{x}||^{2}/T}$.
We define the normalized mean square error (NMSE) as
$\sum_{i=1}^{T}||\mathbf{h}-\hat{\mathbf{h}}_{i}||^{2}/||\mathbf{h}||^{2}/T$
for channel estimation, where $\hat{\mathbf{h}}_{i}$ is the estimate of
$\mathbf{h}$ at the $i$-th Monte Carlo simulation. All numerical results
provided in this subsection are obtained from $T=1000$ independent Monte Carlo
simulations. The position parameters $(d,\phi)$ of a UE is generated with
$d\sim\mathcal{U}[7,30]\,m$ and $\phi\sim\mathcal{U}[-\pi/5,\pi/5]$ rad, where
$\mathcal{U}$ denotes the uniform distribution. The settings for the ExLens
are fixed to $D_{y}=1\,m$, $F_{0}=F=5\,m$, $\lambda=0.01\,m$, and $N_{a}=201$.
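The error metrics above transcribe directly; below is a minimal sketch (names ours), assuming `estimates` stacks the $T$ Monte Carlo estimates row-wise:

```python
import numpy as np

def rmse(estimates, truth):
    """RMSE = sqrt( sum_i ||x_hat_i - x||^2 / T )."""
    err = np.asarray(estimates) - np.asarray(truth)
    return np.sqrt(np.mean(np.sum(np.abs(err)**2, axis=-1)))

def nmse(h_hats, h):
    """NMSE = mean_i ||h - h_hat_i||^2 / ||h||^2."""
    h = np.asarray(h)
    return (np.mean([np.linalg.norm(h - hh)**2 for hh in h_hats])
            / np.linalg.norm(h)**2)
```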
Fig. 7 shows (a) the RMSE versus the CRLB for the estimate of $\mathbf{d}$;
(b) the RMSE versus the CRLB for the estimate of ${\bm{\phi}}$; (c) the RMSE
versus the PEB for the estimate of $\mathbf{u}$; and (d) the NMSE of the
channel estimation. First, we observe an improvement in the estimation
accuracy of
the proposed method (denoted as NOMP) over DOMP, where DOMP reaches a
performance floor as the SNR increases. This performance plateau reveals a
fundamental algorithmic limitation of DOMP, and highlights the critical role
of cyclic Newton refinements in NOMP, as explained in [38]. Second, NOMP does
not achieve the CRLB in the estimate of $\mathbf{d}$ (Fig. 7(a)), but it
closely follows the bound for all SNRs. In the estimate of ${\bm{\phi}}$ (Fig.
7(b)), NOMP can achieve the CRLB. The array response of the ExLens is more
sensitive to the changes in $\phi$ than that in $d$ in spherical scenarios.
Accordingly, NOMP performs better in ${\bm{\phi}}$ estimation than
$\mathbf{d}$ estimation. Third, the performance of $\mathbf{u}$ estimates
(Fig. 7(c)) shows similar trends to that of $\mathbf{d}$ (Fig. 7(a)). The
estimation error of position $\mathbf{u}$ is mainly determined by the
estimation error of $\mathbf{d}$ because the estimate of ${\bm{\phi}}$
achieves the CRLB. In the simulation settings, the proposed localization method
can achieve meter-, decimeter-, and centimeter-level accuracies when SNR $>0$,
$>20$, and $>40$ dB, respectively. Moreover, Fig. 7(d) shows that the proposed
method performs well in channel estimation, thereby demonstrating that the
channel and location parameter estimation can be simultaneously performed in
spherical-wave scenarios by directly reusing the communication signals.
Figure 7: (a) RMSE versus CRLB for the estimate of $\mathbf{d}$. (b) RMSE
versus CRLB for the estimate of ${\bm{\phi}}$. (c) RMSE versus PEB for the
estimate of $\mathbf{u}$. (d) NMSE for the estimate of $\mathbf{h}$. The
settings for the ExLens are fixed to $D_{y}=1\,m$, $F_{0}=F=5\,m$,
$\lambda=0.01\,m$, and $N_{a}=201$. In the case $L=1$, $d=16.8837\,m$ and
$\phi=0.0693$ rad. In the case $L=2$, $d_{1}=12.8657\,m$, $\phi_{1}=-0.1935$
rad, $d_{2}=14.4962\,m$, and $\phi_{2}=0.1897$ rad.
### VI-B Multi-user Communication Performance
In this subsection, we compare the multi-user communication performance of
ExLens with that of the conventional ULA antenna array. In the following
simulations, the ExLens and ULA systems serve near-field and far-field UE
simultaneously. We assume that ExLens and ULA have the same electrical
aperture $\tilde{D}_{y}$ and number of antenna elements
$N_{a}=2\lfloor\tilde{D}_{y}\rfloor+1$. For the approximate Gram Schmidt-based
hybrid precoding scheme applied to the benchmark ULA system, the size of the
beamsteering codebook $N_{cb}$ is set to $1024$, and the resolution of the
phase shifters in the analog combining network is assumed to be $10$ bits. All
numerical results provided in this section are obtained from Monte Carlo
simulations with $1000$ independent channel realizations. The position
parameters $(d,\phi)$ of a UE are generated with $d\sim\mathcal{U}[20,320]\,m$
and $\phi\sim\mathcal{U}[-\pi/5,\pi/5]$ rad. The low-complexity power-based
antenna selection method is applied to “LENS MMSE”, “LENS MRC”, “ULA MMSE”,
and “ULA MRC”, and the Gram Schmidt-based analog combining method is applied
to “ULA GS MMSE”.
Figure 8: Comparison of the spectral efficiencies for single-user with
different SNRs, where $K=1$, $M_{RF}=5$, $L=2$, $F=5\,m$, $F_{0}=15\,m$,
$\tilde{D}_{y}=100$, $\lambda=0.01\,m$, and $N_{a}=201$.
Figure 9: Comparison of the spectral efficiencies for multi-user scenarios
with different SNRs, where $K=5$, $M_{RF}=25$, $L_{k}=2$, $F=5\,m$,
$F_{0}=15\,m$, $\tilde{D}_{y}=100$, $\lambda=0.01\,m$, and $N_{a}=201$.
Figs. 8 and 9 compare the spectral efficiencies of different schemes with
varying SNRs. The single-user (Fig. 8 with $K=1$, $L_{k}=2$, $M_{RF}=5$) and
multi-user (Fig. 9 with $K=5$, $L_{k}=2$, $M_{RF}=25$) scenarios are
considered. The other parameters for ExLens are fixed to $F=5\,m$,
$F_{0}=15\,m$, $\tilde{D}_{y}=100$, $\lambda=0.01\,m$, and $N_{a}=201$. In the
benchmark ULA system, we assume that perfect CSI is available at the BS. In
the ExLens system, we consider both cases with perfect and estimated CSIs. In
the ExLens system with a limited number of RF chains, we apply the power-based
antenna selection before channel estimation. The channel is then estimated by
the method proposed in Section IV-B with the received signal from the selected
antennas. First, the ExLens systems outperform the ULA systems in spectral
efficiency because most of the received signal energy is concentrated on the
selected antennas for the ExLens systems with the energy focusing property. By
contrast, the energy in the ULA system is almost evenly spread across each
antenna. The simple power-based antenna selection method causes significant
energy loss for the ULA systems, thereby resulting in poor performance in
terms of spectral efficiency. Second, in single-user scenarios (Fig. 8), the
“ULA GS MMSE” scheme outperforms the simple power-based antenna selection
schemes “ULA MMSE” and “ULA MRC”. This phenomenon is expected because the
approximate Gram Schmidt-based hybrid precoding method considers the channel
characteristic when it is applied to the “ULA GS MMSE” scheme. However, the
performance of the “ULA GS MMSE” scheme with much higher computational
complexity is still worse than that of “LENS MMSE” and “LENS MRC” schemes.
Given the absence of the IUI for single-user scenarios, the MRC and MMSE
schemes have the same performance. In multi-user scenarios (Fig. 9), the
advantages of the ExLens systems are more pronounced over the ULA systems. The
MRC schemes perform worse than the MMSE schemes, especially for high SNRs, due
to the presence of the IUI. Lastly, the performance of the “LENS MMSE” and
“LENS MRC” schemes with the estimated CSI is close to that based on perfect
CSI, thereby showing the effectiveness of the proposed channel estimation
method.
Figure 10: Comparison of the spectral efficiencies for single-user and multi-
user scenarios with different numbers of RF chains, where $L_{k}=2$ for the
$k$-th UE, $F=5\,m$, $F_{0}=15\,m$, $\tilde{D}_{y}=100$, SNR$=10$ dB,
$\lambda=0.01\,m$, and $N_{a}=201$.
Figure 11: Comparison of the spectral efficiencies for multi-user scenarios
with different numbers of UEs, where $L_{k}=2$ for the $k$-th UE, $M_{RF}=20$,
$F=5\,m$, $F_{0}=15\,m$, $\tilde{D}_{y}=100$, SNR$=10$ dB, $\lambda=0.01\,m$,
and $N_{a}=201$.
Then, we compare the spectral efficiencies of different schemes by increasing
the number of RF chains for both single-user ($K=1$) and multi-user ($K=5$)
scenarios (Fig. 10). As it benefits from the energy focusing property of
ExLens, the “LENS MMSE” scheme always outperforms the ULA schemes for
different numbers of RF chains. For single-user scenarios, as $M_{RF}$
increases, the spectral efficiencies of different schemes improve. When
$M_{RF}=15$, the performance of the ExLens schemes almost reaches its maximum.
The ULA schemes require many more RF chains to achieve similar performance.
Therefore, the energy focusing property of ExLens is beneficial for reducing
the number of RF chains, and this outcome helps to significantly reduce the
signal processing complexity and hardware cost without notable performance
degradation. For the ULA schemes, “ULA GS MMSE” shows advantages over other
schemes when $M_{RF}$ is small, but with further increase in $M_{RF}$, the
performance of “ULA GS MMSE” saturates, a situation which is also explained in
[40]. For multi-user scenarios, the number of RF chains required by the “LENS
MMSE” to achieve the optimal performance increases to around $45$ when $K$
increases to $5$. Thus, more RF chains are needed to distinguish more UEs.
However, compared with the total number of antenna elements, the number of RF
chains needed by the ExLens system remains low ($45<201$). The advantage of
the “LENS MMSE” scheme is more evident than the “ULA MMSE” scheme with a
smaller number of RF chains ($M_{RF}<45$).
The spectral efficiency versus the number of served UEs for
different schemes is shown in Fig. 11, by fixing the number of the RF chains
to $20$, the value of SNR to $10$ dB, and parameters for ExLens to $F=5\,m$,
$F_{0}=15\,m$, and $\tilde{D}_{y}=100$. The “LENS MMSE” scheme always has the
highest spectral efficiency among all others. Therefore, the UE resolution of
the ExLens system is greater than that of the ULA system with limited RF
chains. Since the IUI becomes larger as $K$ increases, the performance of the
MRC schemes becomes worse than that of the MMSE schemes. The beamsteering
codebook for the approximate Gram Schmidt scheme is designed for the single-
user systems of the ULA, where the analog combiner designed by the approximate
Gram Schmidt scheme exhibits larger deviation from the real channel as $K$
increases. This larger deviation causes the “ULA GS MMSE” scheme to perform
worse than the “ULA MMSE” scheme when $K>5$. The spectral efficiency of the “LENS
MMSE” scheme initially increases with $K$, then decreases when $K>17$. This
trend arises because the ability of the ExLens system to serve UEs becomes
limited given a fixed number of RF chains.
Figure 12: Comparison of the spectral efficiencies for multi-user scenarios
with different antenna array aperture sizes, where $K=5$, $L_{k}=2$ for the
$k$-th UE, $M_{RF}=5$, $F=5\,m$, $F_{0}=15\,m$, SNR$=10$ dB, and
$\lambda=0.01\,m$.
Finally, we evaluate the spectral efficiencies of different schemes by
increasing the electrical aperture $\tilde{D}_{y}$ of the lens antenna array.
The results are shown in Fig. 12, with $K=5$, $L_{k}=2$ for the $k$-th UE,
$F=5\,m$, $F_{0}=15\,m$, $M_{RF}=5$, and SNR$=10$ dB. Channel inversion-based
power control during the antenna selection phase is applied for all
simulations. With unchanged total received power of all systems, as
$\tilde{D}_{y}$ increases, the total number of antenna elements increases for
ULA systems, and the energy at each antenna element decreases. Thus, with a
fixed number of RF chains, the total received energy decreases accordingly,
thereby leading to lower spectral efficiency for the ULA schemes. However,
ExLens systems present an interesting phenomenon, i.e., the spectral
efficiency of the ExLens schemes shows a trend of first increasing and then
decreasing. This phenomenon can be readily understood from the change of the
energy focusing effects from the far-field to the near-field (Fig. 3). When
$\tilde{D}_{y}$ is very small, the “sinc” function holds, and the width of the
main lobe, $2/\tilde{D}_{y}$, determines the system resolution to the UE.
As $\tilde{D}_{y}$ increases, the system resolution rises. Hence, the spectral
efficiency of the lens systems increases with $\tilde{D}_{y}$ initially. As
$\tilde{D}_{y}$ further increases, the “sinc” function no longer holds, and
the near-field effect becomes obvious. Then, the width of the focusing window
determines the system resolution to the UE. As $\tilde{D}_{y}$ further
increases, the system resolution decreases. Thus, the spectral efficiency of
the lens systems decreases with the further increase of $\tilde{D}_{y}$.
During this process, the ExLens system resolution to the UE will reach the
maximum at some value of $\tilde{D}_{y}$. Moreover, the optimal size of
$\tilde{D}_{y}$ is $60$ under the simulation configuration given in Fig. 12.
These observations are instructive for the design of the electrical aperture
size of the ExLens.
## VII Conclusion
We considered the communication and localization problems with an ExLens.
First, we derived the closed-form antenna array response of ExLens by
considering the spherical wave-front for two different EM lens designs. The
relationship between the antenna array response of ExLens in the near-field
and far-field revealed that the derived near-field array response includes the
existing “sinc” function response as a special case. We further analyzed the
changes in the energy focusing properties from the far-field to the near-field
and the difference of the energy focusing properties of the two EM lens
designs. The window focusing property in the near-field also revealed the
great potential of ExLens for position sensing and multi-user communication.
The theoretical uplink localization ability of an ExLens was analyzed through
the Fisher information. To utilize the window focusing property for position
sensing, an effective location parameter estimation method was next proposed.
The results showed that the localization performance is close to the CRLB and
can be enhanced as the aperture of ExLens increases. In addition, the channel
can be effectively reconstructed by the proposed estimation method. Finally,
the multi-user communication performance of ExLens serving UEs in the near-
field and far-field was investigated with perfect and estimated CSIs.
Simulation results verified the effectiveness of the proposed channel
estimation method and showed that the proposed ExLens with MMSE receiver
achieves significant spectral efficiency gains and complexity-and-cost
reductions compared with the ULA systems.
## Appendix A
In this section, we derive the array response of ExLens illuminated by a
spherical wave-front. For Design 1, by substituting (4) into (2), together with
$||\mathbf{u}-\mathbf{p}||=\sqrt{{d^{2}}+{y^{2}}-2dy\sin\phi}$,
$||\mathbf{p}-\mathbf{b}_{0}||=\sqrt{{F^{2}}+{y^{2}}}$,
$||\mathbf{p}-\mathbf{b}||=\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta}$,
$\eta(\mathbf{u},\mathbf{p})=\lambda/({4\pi\sqrt{{d^{2}}+{y^{2}}-2dy\sin\phi}})$,
and
$\kappa(\mathbf{p},\mathbf{b})={\lambda}/{(4\pi\\!\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta})}$,
we get
$r(\theta,d,\phi)=\int_{-D_{y}/2}^{D_{y}/2}{\frac{\lambda^{2}{e^{-j{k_{0}}\sqrt{{d^{2}}+{y^{2}}-2dy\sin\phi}}}{e^{-j\left({{\phi_{0}}+{k_{0}}\left({\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta}-\sqrt{{F^{2}}+{y^{2}}}}\right)}\right)}}}{{16\pi^{2}\sqrt{{d^{2}}+{y^{2}}-2dy\sin\phi}\,\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta}}}}\,dy.$ (48)
To derive the closed form of (48), we have to make the following assumptions:
(A1) $d\gg y$ and (A2) $F\gg y$, where $y\in[-D_{y}/2,D_{y}/2]$. Given (A1),
we apply a Taylor series approximation to obtain
$\sqrt{{d^{2}}+{y^{2}}-2dy\sin\phi}\approx
d-y\sin\phi+{y^{2}}\frac{{{{\left({\cos\phi}\right)}^{2}}}}{{2d}}.\vspace{-0.25cm}$
(49)
Moreover, for the same reason, we have
$\frac{1}{{\sqrt{{d^{2}}+{y^{2}}-2dy\sin\phi}}}=\frac{1}{d}{\left({1+\frac{{{y^{2}}}}{{{d^{2}}}}-\frac{{2y}}{d}\sin\phi}\right)^{-\frac{1}{2}}}\approx\frac{1}{d}.\vspace{-0.25cm}$
(50)
Then, given (A2), we also apply a Taylor series approximation and obtain
$\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta}-\sqrt{{F^{2}}+{y^{2}}}\approx\sqrt{{F^{2}}+{y^{2}}}\times\left[{\frac{yF\sin\theta}{{F^{2}}+{y^{2}}}}-\frac{1}{2}{{\left({\frac{yF\sin\theta}{{F^{2}}+{y^{2}}}}\right)}^{2}}\right],$
(51)
which, after further simplification, gives
$\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta}-\sqrt{{F^{2}}+{y^{2}}}\approx y\sin\theta-\frac{{{\left({y\sin\theta}\right)}^{2}}}{2F}.$
(52)
Moreover, for the same reason, we have
$\frac{1}{\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta}}=\frac{1}{F}{\left({1+\frac{{{y^{2}}}}{{{F^{2}}}}-\frac{{2y}}{F}\sin\theta}\right)^{-\frac{1}{2}}}\approx\frac{1}{F}.\vspace{-0.25cm}$
(53)
Substituting (49)-(53) into (48), we have
$r(\theta,d,\phi)\approx\int_{-D_{y}/2}^{D_{y}/2}\frac{\lambda^{2}}{16\pi^{2}dF}{e^{-j{k_{0}}\left({d-y\sin\phi+{y^{2}}\frac{\cos^{2}\phi}{2d}}\right)}}e^{-j\left(\phi_{0}+k_{0}y\sin\theta-\frac{k_{0}y^{2}\sin^{2}\theta}{2F}\right)}\,dy.$
(54)
Rewriting (54), we get
$r(\theta,d,\phi)\approx\frac{\lambda^{2}{e^{-j\left({{k_{0}}d+{\phi_{0}}}\right)}}}{16\pi^{2}dF}\int_{-D_{y}/2}^{D_{y}/2}{{e^{j{y^{2}}{k_{0}}\left({\frac{\sin^{2}\theta}{2F}-\frac{\cos^{2}\phi}{2d}}\right)}}{e^{-jy{k_{0}}\left({\sin\theta-\sin\phi}\right)}}}\,dy.$
(55)
Without loss of generality, we assume $\phi_{0}=2\pi$ for the first lens
design. Since $\phi_{0}$ is common for all antenna elements, the phase term
$e^{-j\phi_{0}}$ can be ignored. Denoting
$\alpha=\frac{\pi\sin^{2}\theta}{\lambda F}-\frac{\pi\cos^{2}\phi}{\lambda d}$ and
$\beta=({\sin\theta-\sin\phi})/{\lambda}$, we obtain
$r(\theta,d,\phi)\approx\frac{\lambda^{2}{e^{-j{k_{0}}d}}}{16\pi^{2}dF}\int_{-D_{y}/2}^{D_{y}/2}{{e^{j\alpha{y^{2}}}}{e^{-j2\pi\beta y}}}\,dy.$ (56)
For Design 2, with
$\lVert{\mathbf{c}_{0}}-\mathbf{p}\rVert=\sqrt{{F_{0}}^{2}+{y^{2}}}$, we have
$r(\theta,d,\phi)=\int_{-D_{y}/2}^{D_{y}/2}{\frac{\lambda^{2}{e^{-j{k_{0}}(\sqrt{{d^{2}}+{y^{2}}-2dy\sin\phi}-\sqrt{{F_{0}}^{2}+{y^{2}}})}}{e^{-j\left({{\phi_{0}}+{k_{0}}\left({\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta}-\sqrt{{F^{2}}+{y^{2}}}}\right)}\right)}}}{{16\pi^{2}\sqrt{{d^{2}}+{y^{2}}-2dy\sin\phi}\,\sqrt{{F^{2}}+{y^{2}}+2yF\sin\theta}}}}\,dy.$ (57)
Similarly, without loss of generality, we assume $\phi_{0}-k_{0}F_{0}=2\pi$
for the second lens design. Since $\phi_{0}-k_{0}F_{0}$ is common for all
antenna elements, the phase term $e^{-j(\phi_{0}-k_{0}F_{0})}$ can be ignored.
The received signal then has the same approximate expression as (56) with
$\alpha=\frac{\pi\sin^{2}\theta}{\lambda F}-\frac{\pi\cos^{2}\phi}{\lambda d}+\frac{\pi}{\lambda F_{0}}$,
as summarized in Table I. Letting
${J_{a}}=\int_{-D_{y}/2}^{D_{y}/2}{{e^{j\alpha{y^{2}}}}{e^{-j2\pi\beta y}}}\,dy$, we have
${J_{a}}=\frac{\sqrt{\pi}}{2\sqrt{\alpha}}{e^{-j\left({\frac{{{\left({2\pi\beta}\right)}^{2}}}{4\alpha}-\frac{5\pi}{4}}\right)}}\left({\mathrm{erf}\left({\frac{\alpha{D_{y}}+2\pi\beta}{2\sqrt{\alpha}}{e^{j\frac{3\pi}{4}}}}\right)+\mathrm{erf}\left({\frac{\alpha{D_{y}}-2\pi\beta}{2\sqrt{\alpha}}{e^{j\frac{3\pi}{4}}}}\right)}\right).$
(58)
Hence, we obtain
$r(\theta,d,\phi)\approx\frac{{\lambda^{2}{e^{-j{k_{0}d}}}}}{{16\pi^{2}dF}}J_{a}.\vspace{-0.25cm}$
(59)
Then, we define the effective lens antenna array response at point
$\mathbf{b}=[F\cos\theta,-F\sin\theta]$ on the focal arc as
$a(\theta,d,\phi)=r(\theta,d,\phi)\times{16\pi^{2}dF}/({{\lambda^{2}{e^{-j{k_{0}d}}}}})$.
It then follows from (59) that
$a(\theta,d,\phi)\approx\frac{\sqrt{\pi}}{2\sqrt{\alpha}}{e^{-j\left({\frac{{{\left({2\pi\beta}\right)}^{2}}}{4\alpha}-\frac{5\pi}{4}}\right)}}\left({\mathrm{erf}\left({\frac{\alpha{D_{y}}+2\pi\beta}{2\sqrt{\alpha}}{e^{j\frac{3\pi}{4}}}}\right)+\mathrm{erf}\left({\frac{\alpha{D_{y}}-2\pi\beta}{2\sqrt{\alpha}}{e^{j\frac{3\pi}{4}}}}\right)}\right),$
(60)
where $\beta={{(\sin\theta-\sin\phi)}}/{\lambda}$ and $\alpha$ is given in
Table I for different lens designs.
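For numerical work it can help to evaluate (60) directly. Below is a minimal Python sketch, assuming scipy's erf (which accepts complex arguments) and the Table I expressions for $\alpha$; the function name and default parameter values are ours, chosen to mirror the simulation settings.

```python
import numpy as np
from scipy.special import erf

def array_response(theta, d, phi, Dy=1.0, F=5.0, F0=15.0, lam=0.01, design=2):
    """Approximate a(theta, d, phi) per (60), with alpha taken from Table I."""
    alpha = np.pi * np.sin(theta)**2 / (lam * F) - np.pi * np.cos(phi)**2 / (lam * d)
    if design == 2:                       # Design 2 adds the pi/(lam*F0) term
        alpha += np.pi / (lam * F0)
    beta = (np.sin(theta) - np.sin(phi)) / lam
    sa = np.sqrt(alpha + 0j)              # complex sqrt: alpha may be negative
    rot = np.exp(1j * 3 * np.pi / 4)
    xi1 = (alpha * Dy + 2 * np.pi * beta) / (2 * sa) * rot
    xi2 = (alpha * Dy - 2 * np.pi * beta) / (2 * sa) * rot
    phase = np.exp(-1j * ((2 * np.pi * beta)**2 / (4 * alpha) - 5 * np.pi / 4))
    return np.sqrt(np.pi) / (2 * sa) * phase * (erf(xi1) + erf(xi2))
```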
## Appendix B
In this section, we give the proof of Lemma 1. By substituting the definition
of $\mathrm{erf}(x)$ given in (9) into (8), and after some manipulations, we
have
$a(\theta,d,\phi)=-\left.{{{e^{j\frac{\pi}{4}}}\left({\int\limits_{0}^{\left({\frac{{{D_{y}}\sqrt{\alpha}}}{2}+\frac{{\pi\beta}}{{\sqrt{\alpha}}}}\right){e^{j\frac{{3\pi}}{4}}}}{{e^{-{t^{2}}}}dt}+\int\limits_{0}^{\left({\frac{{{D_{y}}\sqrt{\alpha}}}{2}-\frac{{\pi\beta}}{{\sqrt{\alpha}}}}\right){e^{j\frac{{3\pi}}{4}}}}{{e^{-{t^{2}}}}dt}}\right)}}\middle/{{(\sqrt{\alpha}{e^{j\frac{{{{\left({\pi\beta}\right)}^{2}}}}{\alpha}}})}}\right..\vspace{-0.25cm}$
(61)
The plane wave-front assumption holds when $d\to\infty$ and
$F_{0}\to\infty$ (for the second lens design); together with the assumption
that $F\gg y$, this implies $\alpha\rightarrow 0$ in the far-field. Letting
$x\triangleq\sqrt{\alpha}$, we have
$\lim\limits_{d,F_{0}\to\infty}a(\theta,d,\phi)=\mathop{\lim}\limits_{x\to
0}{\rm{-}}\left.{{{e^{j\frac{\pi}{4}}}\left({\int\limits_{0}^{\left({\frac{{{D_{y}}x}}{2}+\frac{{\pi\beta}}{x}}\right){e^{j\frac{{3\pi}}{4}}}}{{e^{-{t^{2}}}}dt}+\int\limits_{0}^{\left({\frac{{{D_{y}}x}}{2}-\frac{{\pi\beta}}{x}}\right){e^{j\frac{{3\pi}}{4}}}}{{e^{-{t^{2}}}}dt}}\right)}}\middle/{{x{e^{j\frac{{{{\left({\pi\beta}\right)}^{2}}}}{{{x^{2}}}}}}}}\right..\vspace{-0.25cm}$
(62)
Applying L’Hôpital’s rule, (62) can be simplified to
$\begin{array}[]{ll}&\mathop{\lim}\limits_{x\to
0}-\frac{{{e^{j\pi}}\left({\left({\frac{{{D_{y}}}}{2}-\frac{{\pi\beta}}{{{x^{2}}}}}\right){e^{j\left({\frac{{{D_{y}}^{2}{x^{2}}}}{4}+\frac{{{{\left({\pi\beta}\right)}^{2}}}}{{{x^{2}}}}+{D_{y}}\pi\beta}\right)}}+\left({\frac{{{D_{y}}}}{2}+\frac{{\pi\beta}}{{{x^{2}}}}}\right){e^{j\left({\frac{{{D_{y}}^{2}{x^{2}}}}{4}+\frac{{{{\left({\pi\beta}\right)}^{2}}}}{{{x^{2}}}}-{D_{y}}\pi\beta}\right)}}}\right)}}{{{e^{j\frac{{{{\left({\pi\beta}\right)}^{2}}}}{{{x^{2}}}}}}-2j\frac{{{{\left({\pi\beta}\right)}^{2}}}}{{{x^{2}}}}{e^{j\frac{{{{\left({\pi\beta}\right)}^{2}}}}{{{x^{2}}}}}}}}\\\
=&\mathop{\lim}\limits_{x\to
0}-\frac{{{e^{j\pi}}\left({\left({\frac{{{D_{y}}}}{2}{x^{2}}-\pi\beta}\right){e^{j\left({\frac{{{D_{y}}^{2}{x^{2}}}}{4}+{D_{y}}\pi\beta}\right)}}+\left({\frac{{{D_{y}}}}{2}{x^{2}}+\pi\beta}\right){e^{j\left({\frac{{{D_{y}}^{2}{x^{2}}}}{4}-{D_{y}}\pi\beta}\right)}}}\right)}}{{{x^{2}}-2j{{\left({\pi\beta}\right)}^{2}}}},\end{array}\vspace{-0.25cm}$
(63)
and with ${x\to 0}$, we have
$\mathop{\lim}\limits_{x\to
0}-\frac{{{e^{j\pi}}\left({\left({\frac{{{D_{y}}}}{2}{x^{2}}-\pi\beta}\right){e^{j\left({\frac{{{D_{y}}^{2}{x^{2}}}}{4}+{D_{y}}\pi\beta}\right)}}+\left({\frac{{{D_{y}}}}{2}{x^{2}}+\pi\beta}\right){e^{j\left({\frac{{{D_{y}}^{2}{x^{2}}}}{4}-{D_{y}}\pi\beta}\right)}}}\right)}}{{{x^{2}}-2j{{\left({\pi\beta}\right)}^{2}}}}=\frac{{{e^{j{D_{y}}\pi\beta}}-{e^{-j{D_{y}}\pi\beta}}}}{{2j\pi\beta}},\vspace{-0.25cm}$
(64)
since
$({{{e^{j{D_{y}}\pi\beta}}-{e^{-j{D_{y}}\pi\beta}}}})/({{2j\pi\beta}})={D_{y}}\mathrm{sinc}\left({{{{\tilde{D}_{y}}}}\sin\theta-{{{\tilde{D}_{y}}}}\sin\phi}\right)$,
it follows that
$\lim\limits_{d,F_{0}\to\infty}a(\theta,d,\phi)={D_{y}}\mathrm{sinc}\left({{{{\tilde{D}_{y}}}}\sin\theta-{{{\tilde{D}_{y}}}}\sin\phi}\right).\vspace{-0.25cm}$
(65)
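A quick numerical sanity check of this limit is sketched below: integrating $J_{a}$ on a grid for shrinking $\alpha$ (mimicking $d,F_{0}\to\infty$) should reproduce the sinc value in (65). This assumes $\tilde{D}_{y}=D_{y}/\lambda$, consistent with the derivation; numpy's sinc is the normalized $\sin(\pi x)/(\pi x)$, and all parameter values are illustrative.

```python
import numpy as np

Dy, lam = 1.0, 0.01
theta, phi = 0.021, 0.02
beta = (np.sin(theta) - np.sin(phi)) / lam
y = np.linspace(-Dy / 2, Dy / 2, 20001)
dy = y[1] - y[0]
for alpha in [1.0, 0.1, 0.001]:        # alpha -> 0 mimics the far-field limit
    Ja = np.sum(np.exp(1j * alpha * y**2 - 1j * 2 * np.pi * beta * y)) * dy
    print(alpha, abs(Ja))
print("sinc limit:", abs(Dy * np.sinc((Dy / lam) * (np.sin(theta) - np.sin(phi)))))
```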
## Appendix C
In this section, we give the proof of Lemma 2. To analyze the property of
${w(\theta,d,\phi)}$ in (14), we treat $\alpha$ and $\beta$ as continuous
functions of $\theta$. We first review the properties of $\mathrm{erf}(x)$
defined in (9), where $\mathrm{erf}(0)=0$, $\mathrm{erf}(\infty)=1$, and
$\mathrm{erf}(-x)=-\mathrm{erf}(x)$. According to (14), ${w(\theta,d,\phi)}$
is the sum of two $\mathrm{erf}$ functions, and the amplitude of
${w(\theta,d,\phi)}$ is shown in Fig. 3, which is similar to a rectangular
window function in the near-field. The edges of the window are determined by
zero points of the two $\mathrm{erf}$ functions, namely $v_{1}$ and $v_{2}$. Taking
the second lens design as an example, a given received waveform is shown in
the last two subfigures of Fig. 3, where $v_{1}$ is obtained when $\xi_{1}=0$,
and similarly $v_{2}$ is obtained when $\xi_{2}=0$. Note that
$\xi_{1}={\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}$
and
$\xi_{2}={\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}{e^{j\frac{{3\pi}}{4}}}}$.
Denote $v\triangleq\sin\theta$, where $v\in(-1,1)$. Setting
$\xi_{1}=0$, we have
${v^{2}}+\frac{{2F}}{{{D_{y}}}}v+\left({\frac{F}{{{F_{0}}}}-\frac{{2F\sin\phi}}{{{D_{y}}}}-\frac{{F{{\cos}^{2}}\phi}}{d}}\right)=0.\vspace{-0.25cm}$
(66)
Since $v\in(-1,1)$, the only solution of (66) is
${v_{1}}=-\frac{F}{D_{y}}+\sqrt{{{\left({\frac{F}{D_{y}}}\right)}^{2}}-\left({\frac{F}{{F_{0}}}-\frac{2F\sin\phi}{D_{y}}-\frac{F\cos^{2}\phi}{d}}\right)}.$
(67)
Similarly, setting $\xi_{2}=0$, we have
${v^{2}}-\frac{{2F}}{{{D_{y}}}}v+\left({\frac{F}{{{F_{0}}}}+\frac{{2F\sin\phi}}{{{D_{y}}}}-\frac{{F{{\cos}^{2}}\phi}}{d}}\right)=0.\vspace{-0.25cm}$
(68)
Since $v\in(-1,1)$, the only solution of (68) is
${v_{2}}=\frac{F}{D_{y}}-\sqrt{{{\left({\frac{F}{D_{y}}}\right)}^{2}}-\left({\frac{F}{{F_{0}}}+\frac{2F\sin\phi}{D_{y}}-\frac{F\cos^{2}\phi}{d}}\right)}.$
(69)
According to (67) and (69), we can further obtain the center and width of the
focusing window. Letting $v_{c}$ denote the center of the focusing window, we have
${v_{c}}=\frac{{{v_{1}}+{v_{2}}}}{2}=\frac{{\frac{{16F\sin\phi}}{{{D_{y}}}}}}{{\frac{{8F}}{{{D_{y}}}}\left({\sqrt{1-\left({\frac{{{D_{y}}^{2}}}{{F{F_{0}}}}+\frac{{2{D_{y}}\sin\phi}}{F}-\frac{{{D_{y}}^{2}{{\cos}^{2}}\phi}}{{Fd}}}\right)}+\sqrt{1-\left({\frac{{{D_{y}}^{2}}}{{F{F_{0}}}}-\frac{{2{D_{y}}\sin\phi}}{F}-\frac{{{D_{y}}^{2}{{\cos}^{2}}\phi}}{{Fd}}}\right)}}\right)}},\vspace{-0.25cm}$
(70)
since $F,F_{0},d\gg D_{y}$, we obtain
${v_{c}}\approx\left.\left({{\frac{{16F\sin\phi}}{{{D_{y}}}}}}\right)\middle/\left({{\frac{{16F}}{{{D_{y}}}}}}\right)\right.=\sin\phi.\vspace{-0.25cm}$
(71)
Letting $\Delta v$ denote the width of the focusing window, we have
$\begin{array}[]{ll}\Delta v&=\left|{{v_{1}}-{v_{2}}}\right|\\\
&=\left|{\frac{F}{D_{y}}\left({1-\sqrt{1-\left({\frac{{D_{y}}^{2}}{F{F_{0}}}+\frac{2{D_{y}}\sin\phi}{F}-\frac{{D_{y}}^{2}{\cos}^{2}\phi}{Fd}}\right)}}\right)+\frac{F}{D_{y}}\left({1-\sqrt{1-\left({\frac{{D_{y}}^{2}}{F{F_{0}}}-\frac{2{D_{y}}\sin\phi}{F}-\frac{{D_{y}}^{2}{\cos}^{2}\phi}{Fd}}\right)}}\right)}\right|,\end{array}$
(72)
similarly, since $F,F_{0},d\gg D_{y}$, we get
$\begin{array}[]{ll}\Delta
v&\approx\dfrac{{{\left|{\frac{F}{{{D_{y}}}}\left({\frac{{{D_{y}}^{2}}}{{F{F_{0}}}}+\frac{{{\rm{2}}{D_{y}}\sin\phi}}{F}-\frac{{{D_{y}}^{2}{{\cos}^{2}}\phi}}{{Fd}}}\right){\rm{+}}\frac{F}{{{D_{y}}}}\left({\frac{{{D_{y}}^{2}}}{{F{F_{0}}}}-\frac{{{\rm{2}}{D_{y}}\sin\phi}}{F}-\frac{{{D_{y}}^{2}{{\cos}^{2}}\phi}}{{Fd}}}\right)}\right|}}}{2}\\\
&={D_{y}}\left|{\dfrac{1}{{{F_{0}}}}-\dfrac{{{{\cos}^{2}}\phi}}{d}}\right|.\end{array}\vspace{-0.25cm}$
(73)
The same procedure can be applied to obtain the approximate center and width
of the focusing window for the first lens design.
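The window geometry is easy to evaluate numerically. The sketch below computes the exact edges from (67) and (69) for the second lens design and compares them with the approximations (71) and (73); parameter values are illustrative.

```python
import numpy as np

def window_edges(phi, d, Dy=1.0, F=5.0, F0=15.0):
    """Exact focusing-window edges v1, v2 from (67) and (69)."""
    r = F / Dy
    v1 = -r + np.sqrt(r**2 - (F / F0 - 2 * F * np.sin(phi) / Dy - F * np.cos(phi)**2 / d))
    v2 = r - np.sqrt(r**2 - (F / F0 + 2 * F * np.sin(phi) / Dy - F * np.cos(phi)**2 / d))
    return v1, v2

Dy, F0 = 1.0, 15.0
phi, d = 0.1, 10.0
v1, v2 = window_edges(phi, d, Dy=Dy, F0=F0)
print("center:", (v1 + v2) / 2, "vs sin(phi):", np.sin(phi))       # (71)
print("width :", abs(v1 - v2), "vs D_y|1/F0 - cos^2(phi)/d|:",
      Dy * abs(1 / F0 - np.cos(phi)**2 / d))                       # (73)
```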
## Appendix D
For the $n$-th element in $\mathbf{a}(d_{l},\phi_{l})$, we have
$a_{n}(d_{l},\phi_{l})=am_{n}(d_{l},\phi_{l})\times
ph_{n}(d_{l},\phi_{l})\times w_{n}(d_{l},\phi_{l}),\vspace{-0.25cm}$ (74)
where $am_{n}(d_{l},\phi_{l})=\frac{{\sqrt{\pi}}}{{{\rm{2}}\sqrt{\alpha}}}$,
$ph_{n}(d_{l},\phi_{l})=e^{-j\left({\frac{{{{{\pi^{2}\beta^{2}}}}}}{{\alpha}}-\frac{5\pi}{4}}\right)}$,
and $w_{n}(d_{l},\phi_{l})$ is the discrete “window” function derived from
(14) by replacing $\sin\theta$ with $\sin\theta_{n}={n}/{N}$ in $\alpha$ and
$\beta$ for $n\in\\{0,\pm 1,\ldots,\pm N\\}$. We abbreviate
$am_{n}(d_{l},\phi_{l})$, $ph_{n}(d_{l},\phi_{l})$, and $w_{n}(d_{l},\phi_{l})$
as $am_{n}$, $ph_{n}$, and $w_{n}$. Then, we have
$\dfrac{{\partial a_{n}(d_{l},\phi_{l})}}{{\partial d_{l}}}=\dfrac{{\partial
am_{n}}}{{\partial d_{l}}}\times ph_{n}\times
w_{n}+am_{n}\times\dfrac{{\partial ph_{n}}}{{\partial d_{l}}}\times
w_{n}+am_{n}\times ph_{n}\times\dfrac{{\partial w_{n}}}{{\partial
d_{l}}},\vspace{-0.25cm}$ (75)
where
$\dfrac{{\partial am_{n}}}{{\partial
d_{l}}}=\dfrac{\pi\sqrt{\pi}\cos^{2}\phi_{l}}{4\lambda
d_{l}^{2}\alpha\sqrt{\alpha}},\vspace{-0.25cm}$ (76) $\dfrac{{\partial
ph_{n}}}{{\partial
d_{l}}}=e^{-j\left({\frac{{{{{\pi^{2}\beta^{2}}}}}}{{\alpha}}-\frac{5\pi}{4}}\right)}\dfrac{{{e^{j\frac{\pi}{2}}}{\pi^{3}}{\beta^{2}}{{\cos}^{2}}\phi_{l}}}{{\lambda{d_{l}^{2}}{\alpha^{2}}}},\vspace{-0.25cm}$
(77)
and
$\dfrac{{\partial w_{n}}}{{\partial
d_{l}}}=\dfrac{{\sqrt{\pi}{e^{j\frac{{3\pi}}{4}}}{{\cos}^{2}}\phi_{l}}}{{\lambda{d_{l}^{2}}\alpha}}\left({\zeta_{1}{e^{j\zeta^{2}_{2}}}+\zeta_{2}{e^{j\zeta^{2}_{1}}}}\right),\vspace{-0.4cm}$
(78)
where $\zeta_{1}=\frac{{\alpha{D_{y}}+2\pi\beta}}{{2\sqrt{\alpha}}}$ and
$\zeta_{2}=\frac{{\alpha{D_{y}}-2\pi\beta}}{{2\sqrt{\alpha}}}$. Similarly,
$\dfrac{{\partial a_{n}(d_{l},\phi_{l})}}{{\partial\phi_{l}}}$ can be obtained
with
$\dfrac{{\partial
am_{n}}}{{\partial\phi_{l}}}=-\dfrac{\pi\sqrt{\pi}\sin(2\phi_{l})}{4\lambda
d_{l}\alpha\sqrt{\alpha}},\vspace{-0.25cm}$ (79) $\dfrac{{\partial
ph_{n}}}{{\partial\phi_{l}}}=e^{-j\left({\frac{{{{{\pi^{2}\beta^{2}}}}}}{{\alpha}}-\frac{5\pi}{4}}\right)}\left(\dfrac{2\pi^{2}{e^{j\frac{\pi}{2}}}\beta\cos\phi}{\lambda\alpha}+\dfrac{\pi^{3}{e^{j\frac{\pi}{2}}}\beta^{2}\sin(2\phi)}{\lambda
d\alpha^{2}}\right),\vspace{-0.4cm}$ (80)
and
$\dfrac{{\partial
w_{n}}}{{\partial\phi_{l}}}=\dfrac{{\sqrt{\pi}{e^{j\frac{{3\pi}}{4}}}{{\sin}}(2\phi_{l})}}{{\lambda{d_{l}}\alpha}}\left({\zeta_{1}{e^{j\zeta^{2}_{2}}}+\zeta_{2}{e^{j\zeta^{2}_{1}}}}\right)+\dfrac{{2\sqrt{\pi}{e^{j\frac{{3\pi}}{4}}}{{\cos}}\phi_{l}}}{{\lambda\sqrt{\alpha}}}\left({{e^{j\zeta_{2}^{2}}}-{e^{j\zeta_{1}^{2}}}}\right).\vspace{-0.25cm}$
(81)
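Closed-form derivatives like (75)-(81) are error-prone to transcribe, and a central finite difference against the response itself is a cheap cross-check. A minimal sketch, reusing the hypothetical array_response helper from the Appendix A sketch:

```python
# Finite-difference check of the product-rule structure in (75): compare the
# analytic d a / d d_l and d a / d phi_l against central differences.
eps = 1e-6
theta, d_l, phi_l = 0.1, 20.0, 0.05
fd_d = (array_response(theta, d_l + eps, phi_l)
        - array_response(theta, d_l - eps, phi_l)) / (2 * eps)
fd_phi = (array_response(theta, d_l, phi_l + eps)
          - array_response(theta, d_l, phi_l - eps)) / (2 * eps)
print("FD d a/d d_l  :", fd_d)    # to be matched by (75)-(78)
print("FD d a/d phi_l:", fd_phi)  # to be matched by (79)-(81)
```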
## References
* [1] F. Boccardi, R. W. Heath Jr., A. Lozano, T. L. Marzetta, and P. Popovski, “Five disruptive technology directions for 5G,” IEEE Commun. Mag., vol. 52, no. 2, pp. 74-80, Feb. 2014.
* [2] J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. K. Soong, and J. C. Zhang, “What will 5G be?” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1065-1082, Jun. 2014.
* [3] M. Latva-aho and K. Leppänen, Key drivers and research challenges for 6G ubiquitous wireless intelligence. University of Oulu, 2019. [Online]. Available: http://urn.fi/urn:isbn:9789526223544
* [4] Z. Zhang et al., “6G wireless networks: Vision, requirements, architecture, and key technologies,” IEEE Veh. Technol. Mag., vol. 14, no. 3, pp. 28-41, Sept. 2019.
* [5] F. Tariq, M. R. A. Khandaker, K.-K. Wong, M. Imran, M. Bennis, and M. Debbah, “A speculative study on 6G,” [Online]. Available: https://arxiv.org/abs/1902.06700
* [6] W. Saad, M. Bennis, and M. Chen, “A vision of 6G wireless systems: Applications, trends, technologies, and open research problems,” [Online]. Available: https://arxiv.org/abs/1902.10265
* [7] C. Huang et al., “Holographic MIMO surfaces for 6G wireless networks: Opportunities, challenges, and trends,” [Online]. Available: https://arxiv.org/abs/1911.12296v1
* [8] I. F. Akyildiz, C. Han, and S. Nie, “Combating the distance problem in the millimeter wave and terahertz frequency bands,” IEEE Commun. Mag., vol. 56, no. 6, pp. 102-108, Jun. 2018.
* [9] A. Amiri, M. Angjelichinoski, E. De Carvalho, and R. W. Heath, “Extremely large aperture massive MIMO: Low complexity receiver architectures,” in Proc. IEEE Globecom Workshops, Dec. 2018, pp. 1–6.
* [10] H. Wang, A. Kosasih, C. K. Wen, S. Jin, and W. Hardjawana, “Expectation propagation detector for extra-large scale massive MIMO,” IEEE Trans. Wireless Commun., Early Access, 2020.
* [11] Y. Han, S. Jin, C. K. Wen, and X. Ma, “Channel estimation for extremely large-scale massive MIMO systems,” IEEE Wireless Commun. Lett., Early Access, 2020.
* [12] Q. Wu and R. Zhang, “Towards smart and reconfigurable environment: Intelligent reflecting surface aided wireless network,” IEEE Commun. Mag., Early Access, Nov. 2019.
* [13] Y. Han, W. Tang, S. Jin, C.-K. Wen, and X. Ma, “Large intelligent surface-assisted wireless communication exploiting statistical CSI,” IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 8238-8242, Aug. 2019.
* [14] W. Tang et al., “Wireless communications with reconfigurable intelligent surface: Path loss modeling and experimental measurement,” [Online]. Available: https://arxiv.org/abs/1911.05326
* [15] M. Mozaffari, A. Taleb Zadeh Kasgari, W. Saad, M. Bennis, and M. Debbah, “Beyond 5G with UAVs: Foundations of a 3D wireless cellular network,” IEEE Trans. Wireless Commun., vol. 18, no. 1, pp. 357-372, Jan. 2019.
* [16] Y. Zeng, J. Lyu, and R. Zhang, “Cellular-connected UAV: Potential, challenges, and promising technologies,” IEEE Wireless Commun. vol. 26, no. 1, pp. 120-127, Sept. 2018.
* [17] J. Jiang, and M. A. Ingram, “Spherical-wave model for short-range MIMO,” IEEE Trans. Commun., vol. 53, no. 9, pp. 1534-1541, Sept. 2005.
* [18] L. Van der Perre, L. Liu, and E. G. Larsson, “Efficient DSP and circuit architectures for massive MIMO: State of the art and future directions,” IEEE Trans. Signal Process., vol. 66, no. 18, pp. 4717-4736, Sept. 2018.
* [19] J. D. Kraus and R. J. Marhefka, Antennas for All Applications. New York: McGraw-Hill, 2002.
* [20] A. F. Molisch, Wireless Communications, John Wiley & Sons, 2007.
* [21] Z. Zhou, X. Gao, J. Fang, and Z. Chen, “Spherical wave channel and analysis for large linear array in LoS conditions,” in Proc. IEEE Globecom Workshops, Dec. 2015, pp. 1-6.
* [22] B. Friedlander, “Localization of signals in the near-field of an antenna array,” IEEE Trans. Signal Process., vol. 67, no. 15, pp. 3885-3893, Aug. 2019.
* [23] X. Yin, S. Wang, N. Zhang, and B. Ai, “Scatterer localization using large-scale antenna arrays based on a spherical wave-front parametric model,” IEEE Trans. Wireless Commun., vol. 16, no. 10, pp. 6543-6556, Jul. 2017.
* [24] S. Hu, F. Rusek, and O. Edfors, “Beyond massive MIMO: The potential of localization with large intelligent surfaces,” IEEE Trans. Signal Process., vol. 66, no. 7, pp. 1761-1774, Apr. 2018.
* [25] H. Wymeersch, “Near-field joint localization and synchronization,” [Online]. Available: https://arxiv.org/abs/1907.07411v2
* [26] S. A. Shaikh and A. M. Tonello, “Localization based on angle of arrival in EM lens-focusing massive MIMO”, in Proc. IEEE ICCE, Sept. 2016, pp. 124-128.
* [27] S. A. Shaikh and A. M. Tonello, “Radio source localization in multipath channels using EM lens assisted massive antennas arrays”, IEEE Access, vol. 7, pp. 9001-9012, Jan. 2019.
* [28] A. Shahmansoori, B. Uguen, G. Destino, G. S.-Granados, and H. Wymeersch, “Tracking position and orientation through millimeter wave lens MIMO in 5G systems”, IEEE Signal Process. Lett., vol. 26, no. 8, pp. 1222-1226, Aug. 2019.
* [29] D. Dardari and F. Guidi, “Direct position estimation from wavefront curvature with single antenna array,” in Proc. IEEE ICL-GNSS, Jun. 2018, pp. 1-5.
* [30] A. Ali, E. de Carvalho, and R. W. Heath, “Linear receivers in nonstationary massive MIMO channels with visibility regions,” IEEE Wireless Commun. Lett., vol. 8, no. 3, pp. 885-888, Jun. 2019.
* [31] J. Brady, N. Behdad, and A. M. Sayeed, “Beamspace MIMO for millimeter-wave communications: System architecture, modeling, analysis, and measurements,” IEEE Trans. Antennas Propag., vol. 61, no. 7, pp. 3814-3827, Jul. 2013.
* [32] J. Brady and A. Sayeed, “Beamspace MU-MIMO for high-density gigabit small cell access at millimeter-wave frequencies,” in Proc. SPAWC, Jun. 2014, pp. 80-84.
* [33] Y. Zeng, R. Zhang, and Z.-N. Chen, “Electromagnetic lens-focusing antenna enabled massive MIMO: Performance improvement and cost reduction,” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1194-1206, Jun. 2014.
* [34] Y. Zeng and R. Zhang, “Millimeter wave MIMO with lens antenna array: A new path division multiplexing paradigm,” IEEE Trans. Commun., vol. 64, no. 4, pp. 1557-1571, Apr. 2016.
* [35] Y. Zeng, L. Yang, and R. Zhang, “Multi-user millimeter wave MIMO with full-dimensional lens antenna array,” IEEE Trans. Wireless Commun., vol. 17, no. 4, pp. 2800-2814, Feb. 2018.
* [36] J. A. del Peral-Rosado, R. Raulefs, J. A. López-Salcedo, and G. Seco-Granados, “Survey of cellular mobile radio localization methods: From 1G to 5G,” IEEE Commun. Surv. Tutor., vol. 20, no. 2, pp. 1124-1148, May 2018.
* [37] S. M. Kay, Fundamentals of statistical signal processing, Estimation Theory. Englewood Cliffs, NJ, USA: Prentice-Hall, 1993.
* [38] B. Mamandipoor, D. Ramasamy, and U. Madhow, “Newtonized orthogonal matching pursuit: Frequency estimation over the continuum,” IEEE Trans. Signal Process., vol. 64, no. 19, pp. 5066-5081, Oct. 2016.
* [39] Y. Han, Q. Liu, C.-K. Wen, S. Jin, and K.-K. Wong, “FDD massive MIMO based on efficient downlink channel reconstruction,” IEEE Trans. Wireless Commun., vol. 67, no. 6, pp. 4020-4034, Jun. 2019.
* [40] A. Alkhateeb and R. W. Heath Jr, “Frequency selective hybrid precoding for limited feedback millimeter wave systems,” IEEE Trans. Commun., vol. 64, no. 5, pp. 1801-1818, Apr. 2016.
# Trajectory Optimization under Contact Timing Uncertainties
Haizhou Zhao1 , Majid Khadiv1 1Munich Institute of Robotics and Machine
Intelligence, Technical University of Munich, Germany
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Most interesting problems in robotics (e.g., locomotion and manipulation) are
realized through intermittent contact with the environment. Due to the
perception and modeling errors, assuming an exact time for establishing
contact with the environment is unrealistic. On the other hand, handling
uncertainties in contact timing is notoriously difficult as it gives rise to
either handling uncertain complementarity systems or solving combinatorial
optimization problems at run-time. This work presents a novel optimal control
formulation to find robust control policies under contact timing
uncertainties. Our main novelty lies in casting the stochastic problem to a
deterministic optimization over the uncertainty set that ensures robustness
criterion satisfaction of candidate pre-contact states and optimizes for
contact-relevant objectives. This way, we only need to solve a manageable
standard nonlinear programming problem without complementarity constraints or
combinatorial explosion. Our simulation results on multiple simplified
locomotion and manipulation tasks demonstrate the robustness of our
uncertainty-aware formulation compared to the nominal optimal control
formulation.
## I Introduction
Intermittent contact with the world renders locomotion and object manipulation
problems hybrid. When using optimal control to generate plans for these
systems, the resulting problem to solve would be a mixed-integer optimization
problem [1, 2]. Several works have tried to solve the problem by relaxing the
hybrid nature, e.g., smoothing the contact transition by regularizing the
Delasus matrix [3], handling physical consistency as a soft constraint [4], or
relaxing contact with complementarity slackness in the solver [5]. Most recent
efforts to implement MPC for locomotion and manipulation have focused on
solving a hierarchical problem instead of the holistic one and could achieve
impressive behaviors on real hardware [6, 7, 8, 9, 10]. These approaches
consider a fixed contact plan and control the whole body motion for the given
plan.
Figure 1: Illustration of Uncertain Hybrid Systems
They also assume that contact events happen at exact times, i.e., the
predefined switching times. However, in reality, this is a very restrictive
assumption. For instance, the robot’s perception of the environment is always
with some errors. Furthermore, the tracking error of the end-effector
establishing contact can also lead to a mismatch between the planned and
realized time of contact. To handle these situations, the whole-body MPC
frameworks available in the literature either use heuristics [8] or rely on
the intrinsic robustness of MPC through fast replanning to handle
uncertainties in contact events [6, 9, 7]. However, these approaches are very
limited and a more systematic approach is required.
Recently, [11, 12, 13, 14] investigated the use of robust and stochastic
optimal control for contact-rich robotics problems. While these approaches
provide a very concrete understanding of the problem and interesting safety
guarantees, they generally fall short in handling contact timing uncertainty.
[12] has shown that adjusting the end-effector impedance as a function of
disturbances can mitigate the problem of impact when the contact event is
uncertain. In this framework, the contact event is considered to be uncertain
with a known distribution, and the impact is mitigated using a risk-sensitive
optimal controller. However, not adapting desired trajectories can highly
limit the capability of the controller in handling different situations such
as late foot touch-down during locomotion.
The primary contribution of this work is to provide a deterministic
reformulation of the stochastic hybrid optimal control problem with uncertainty
in the switching event that does not add run-time computational complexity
compared to the deterministic optimal control problem. In doing so, we propose
a robust optimal control formulation that accounts for a trajectory of
possible switching states over the uncertainty set. The proposed approach can
be adapted for general contact dynamics from locomotion to manipulation.
Through several simplified examples on locomotion and manipulation problems,
we demonstrate the robustness of our approach compared to the standard nominal
optimal control problem.
The rest of the paper is structured as follows: in Section II, we provide the
necessary ingredients to formulate the problem. In section III, we detail our
proposed formulation. In section IV, we present the results of applying our
formulation to several simplified locomotion and manipulation problems.
Finally, Section V presents the concluding remarks and future work.
## II Preliminaries
In this section, we first define the terminology required for describing our
problem. Then, we present a deterministic optimal control formulation for
hybrid dynamical systems.
### II-A Deterministic Hybrid Systems
Locomotion and manipulation are realized through intermittent contact with the
environment. One way to formalize this problem is through the framework of
hybrid dynamical systems [15]. In this work, we consider the following
definition of hybrid systems [16, 17]
$\mathcal{H}:\bigg{\\{}\begin{array}[]{ll}\dot{\mathbf{x}}=\mathcal{F}_{I}(\mathbf{x},\mathbf{u}),&\mathbf{x}\in\mathcal{D}_{I}\backslash\mathcal{G}_{I}^{J},\mathbf{u}\in\mathcal{U}_{I},\\\
\mathbf{x}^{+}=\mathcal{R}_{I}^{J}(\mathbf{x}^{-}),&\mathbf{x}^{-}\in\mathcal{G}_{I}^{J},\mathbf{x}^{+}\in\mathcal{D}_{J},\\\
\end{array}$ (1)
with $\mathcal{J}=\\{I,J,...\\}$ being the finite set of discrete modes such
that for a mode $I\in\mathcal{J}$,
* •
$\mathcal{F}_{I}$ is the continuous dynamics,
* •
$\mathcal{D}_{I}$ is the domain of states,
* •
$\mathcal{U}_{I}$ is the set of admissible input,
* •
$\mathcal{G}_{I}^{J}:=\\{\mathbf{x}\in\mathcal{D}_{I}|g_{I}^{J}(\mathbf{x})\leq
0\\}$ is the guard (Fig. 1),
* •
$\mathcal{R}_{I}^{J}:\mathcal{G}_{I}^{J}\to\mathcal{D}_{J}$ is the reset map
that projects states in $D_{I}$ to $D_{J}$ when the guard condition
$g_{I}^{J}(\mathbf{x}^{-})\leq 0$ is met.
A simple example of a hybrid robotic system is a jumping 1D hopper. Upon
landing, the robot’s states enter the guard from the aerial phase to stance,
undergoing a reset by an impulsive impact force.
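To make the ingredients of (1) concrete, the following is a minimal Python sketch of simulating one flight phase of such a hopper: integrate the flow until the guard fires, then apply the reset map. The toy dynamics and names are illustrative only.

```python
import numpy as np

def flow(x, dt):
    """Ballistic flight dynamics F_I; state x = [height, vertical velocity]."""
    return np.array([x[0] + x[1] * dt, x[1] - 9.81 * dt])

def guard(x, ground=0.0):
    """g_I^J(x): foot height above ground; the guard set is g(x) <= 0."""
    return x[0] - ground

def reset(x):
    """R_I^J: inelastic touch-down, zeroing the vertical velocity."""
    return np.array([x[0], 0.0])

x, dt = np.array([0.5, 0.0]), 1e-3
while guard(x) > 0.0:        # remain in the flight domain D_I \ G_I^J
    x = flow(x, dt)
x = reset(x)                 # jump into the stance domain D_J
print("post-impact state:", x)
```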
### II-B From Hybrid to Switching systems
Given the sequence of contacts for a hybrid system, the problem can be
simplified to a switching system. In this formulation, the system’s dynamics
are smooth between consecutive switches, while the time of the switch can
still be optimized. Recently, many fast solvers [18, 19, 20] have been
developed for real-time resolution of (2). In the following, we present the
multiple-shooting transcription of the switching system.
Let $\mathcal{S}$ be the set of shooting node indices where a switch is
expected. For a given initial state $\mathbf{x}_{0}$, the time-based direct-
multiple-shooting optimal control problem can be formulated as
$\displaystyle\min_{\mathbf{x},\mathbf{u}}\quad$ $\displaystyle
L_{N}(\mathbf{x}_{N})+\sum_{i=0}^{N-1}L_{i}(\mathbf{x}_{i},\mathbf{u}_{i})$
(2a) s.t. $\displaystyle\forall i\notin\mathcal{S}:$
$\displaystyle\quad\mathbf{f}_{i}(\mathbf{x}_{i},\mathbf{u}_{i},\mathbf{x}_{i+1},\Delta
t_{i})=\mathbf{0},$ (2b) $\displaystyle\quad g_{i}(\mathbf{x}_{i+1})>0,$ (2c)
$\displaystyle\forall i\in\mathcal{S}:$
$\displaystyle\quad\mathbf{f}_{i}(\mathbf{x}_{i},\mathbf{u}_{i},\mathbf{x}_{i+1}^{-},\Delta
t_{i})=\mathbf{0},$ (2d)
$\displaystyle\quad\mathbf{x}_{i+1}=\mathcal{R}_{i}(\mathbf{x}_{i+1}^{-}),$
(2e) $\displaystyle\quad g_{i}(\mathbf{x}_{i+1}^{-})=0,$ (2f)
$\displaystyle\mathbf{h}(\mathbf{x}_{i},\mathbf{u}_{i})\leq 0,$ (2g)
where $\Delta t_{i}$ is the phase-wise timestep, $N$ is the number of shooting
nodes, $L_{N}$ is the terminal cost, $L_{i}$ is the running cost, (2b) is the
non-switching implicit dynamics, (2d) is the pre-switching
continuous dynamics derived from $\mathcal{F}_{(\cdot)}$ in (1), $x^{-}_{i+1}$
denotes the pre-reset state, (2c) and (2f) ensure switching consistency, (2e) is
the state reset equation at the switch, and (2g) collects the state-input
inequality constraints.
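A minimal CasADi sketch of this switching transcription is given below, using a toy 1D falling mass with a single switch node and a fixed timestep for simplicity (the formulation also allows optimizing the phase-wise timesteps); all names and numbers are illustrative, not the paper's models.

```python
import casadi as ca

N, s, dt = 20, 10, 0.05            # shooting nodes, switch index, timestep
opti = ca.Opti()
X = opti.variable(2, N + 1)        # state: [height; vertical velocity]
U = opti.variable(1, N)            # vertical thrust input

opti.subject_to(X[:, 0] == ca.vertcat(1.0, 0.0))      # initial state
for i in range(N):
    x_pred = X[:, i] + dt * ca.vertcat(X[1, i], U[0, i] - 9.81)  # Euler step
    if i == s:                                        # i in S: switching node
        opti.subject_to(x_pred[0] == 0)               # guard condition, cf. (2f)
        opti.subject_to(X[:, i + 1] == ca.vertcat(x_pred[0], 0))  # reset, cf. (2e)
    else:
        opti.subject_to(X[:, i + 1] == x_pred)        # dynamics, cf. (2b)
        if i < s:
            opti.subject_to(x_pred[0] >= 0)           # stay off the guard, cf. (2c)
opti.minimize(ca.sumsqr(U) + 10 * X[1, s] ** 2)       # effort + pre-impact speed
opti.solver('ipopt')
sol = opti.solve()
print(sol.value(X[:, s + 1]))                         # post-reset state
```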
## III Uncertainty-Aware Optimal Control
The formulation in (2) assumes that contact happens at a certain time and
state (where the distance between the end-effector and the environment goes to
zero). However, due to uncertainties in the environment perception and end-
effector tracking errors, it is highly unlikely that the end-effector touches
the ground at the exact pre-defined time. to formalize this situation, we
introduce the following uncertain guard as illustrated in Fig. 1:
$\hat{\mathcal{G}}_{I}^{J}(\delta)=\\{\mathbf{x}\in\mathcal{D}_{I}|{g}_{I}^{J}(\mathbf{x})\leq\delta\\},\delta\in[-d,d],$
(3)
where $\delta$ is the guard uncertainty bounded by $d$. With
$\hat{\mathcal{G}}$, a state cannot be deterministically predicted to incur
switching, leading to uncertain contact timing and thus the switching time
between modes. This is naturally incompatible with the deterministic structure
of (2).
### III-A Issues of the Nominal Approach
Trajectories generated from nominal time-based optimal control with a nominal
guard ($\delta=0$) only ensure that the nominal switching state is feasible.
If the switching does not happen as planned (i.e., early or late contact), the
system may evolve unexpectedly. Typical issues include:
* •
For late contact, the control input is undefined after the nominal contact
time. Problem-specific solutions include reference spreading [21] or a
simplistic zero-order hold of the last input.
* •
For early contact, the system usually encounters unfavorable impact forces. In
such cases, the system may fail due to damage to the mechanical structure or
bouncing of the end-effector on the ground.
* •
Since the nominal problem is only concerned with the exact contact event, it
can lead to highly aggressive motions before or after contact. An example is
that when a trajectory is aggressive for performance, states near its nominal
switching time may be outside the feasibility set of the post-impact problem.
In this section, we introduce our main contribution: an uncertainty-aware
optimal control formulation that resolves the above issues.
### III-B A Deterministic Transcription
Figure 2: Illustration of the difference between (a) the nominal optimal
control and (b) the proposed approach. The proposed method does not switch the
mode but generates a trajectory of feasible switching states over the
uncertainty set.
Intuitively, a robust time-based trajectory that solves the problem in Sec.
III-A should be deterministic until a switch is triggered. Since the switching
may happen at any moment, when the states are within the uncertain region, all
these candidate pre-switching states should not harm the system safety or the
feasibility of the post-switch optimal control. Based on this intuition, we
consider that for a single phase, an uncertain sub-phase (namely the robust
phase) is appended to the pre-switching phase. For its index set
$\mathcal{K}$, the following constraints must be satisfied:
$\displaystyle i=\mathcal{K}_{0},~{}$ $\displaystyle g_{i}(\mathbf{x}_{i})=d,$
(4a) $\displaystyle i=\mathcal{K}_{-1},~{}$ $\displaystyle
g_{i}(\mathbf{x}_{i})=-d,$ (4b) $\displaystyle\forall i\in\mathcal{K},~{}$
$\displaystyle\dot{g}_{i}(\mathbf{x}_{i})\leq 0,$ (4c)
where $\mathcal{K}_{0},\mathcal{K}_{-1}$ denote the earliest and latest
indices in $\mathcal{K}$, respectively. An uncertainty-aware optimal control
problem can then be formulated as a parameterized optimization problem as
introduced in [22]:
$\displaystyle\min_{\mathbf{x},\mathbf{u},\Delta t,d}\quad$
$\displaystyle\sum_{i=0}^{N-1}L_{i}(\mathbf{x}_{i},\mathbf{u}_{i})+\sum_{i\in\mathcal{K}}L_{K}(\mathbf{x}_{i},\mathbf{p}_{i})$
(5a) s.t.
$\displaystyle\mathbf{h}_{K}(\mathbf{x}_{i},\mathbf{p}_{i})\leq\mathbf{0},\forall
i\in\mathcal{K},$ (5b) $\displaystyle\Delta t_{i}\in[\Delta t_{\min},\Delta
t_{\max}],d\in[d_{\min},d_{\max}],$ (5c) $\displaystyle\forall
i\notin\mathcal{K},\text{(2c)},$ $\displaystyle\forall
i\in\mathcal{K},\text{(4)},$
$\displaystyle\text{(2b)},\text{(2g)}$
where $\mathcal{K}_{0}=N$, $L_{K}$ is the contact-related objective,
$\mathbf{h}_{K}$ is the contact-related constraint, and $\mathbf{p}_{i}$ is
the collection of auxiliary variables including the timesteps and uncertainty.
Notice that $\Delta t$ and $d$ are decision variables in this new formulation,
bounded by (5c), which can be crucial for the feasibility and convergence of
the optimization problem. For instance, depending on the problem, if $d$ is set
to a large fixed value, there might be no feasible solution that can satisfy
all the constraints for all the possible contact events. Also, since the
number of nodes in the robust phase is fixed, optimizing time is important to
regulate the robot’s behavior in the uncertain region.
The _robust phase_ in (4) ensures that the trajectory traverses the uncertain
region, constituting a continuous collection of possible switching states. To
show the effectiveness of our proposed formulation in (5) in generating robust
trajectories, we can adapt $L_{K},\mathbf{h}_{K}$ in (5b) in the following
ways: we can model safety-related or feasibility-related criteria as
inequality constraints $\mathbf{h}_{K}$; we can also adapt $L_{K}$ to reach
various goals such as robustness maximization and impact minimization. We will
show the flexibility of our formulation in different case studies in the next
section.
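As a concrete illustration, the snippet below encodes the robust-phase conditions (4) in CasADi with the uncertainty $d$ as a decision variable, as in (5). Here `G[i]` is a stand-in for the guard values $g_{i}(\mathbf{x}_{i})$ (in a full problem these would be functions of the state trajectory), (4c) is discretized as non-increasing differences, and the bounds and cost are illustrative, not the paper's.

```python
import casadi as ca

opti = ca.Opti()
K = 8                                     # number of robust-phase nodes
G = opti.variable(K)                      # guard values g_i(x_i) at robust nodes
d = opti.variable()                       # uncertainty half-width, cf. (5c)

opti.subject_to(G[0] == d)                # (4a): enter the uncertain region
opti.subject_to(G[K - 1] == -d)           # (4b): traverse to its far edge
for i in range(K - 1):
    opti.subject_to(G[i + 1] - G[i] <= 0) # discrete analogue of (4c)
opti.subject_to(opti.bounded(0.01, d, 0.10))  # d in [d_min, d_max]
opti.minimize(-d)                         # e.g., uncertainty maximization, cf. (12a)
opti.solver('ipopt')
sol = opti.solve()
print("optimized uncertainty:", sol.value(d))
```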
###### Remark 1
(Uncertainty optimization) In (5), uncertainty $d$ is also a decision
variable. Depending on the specific problem setting, this handling enables
finding the maximum possible uncertainty, where either the uncertainty can be
increased to gain better robustness or decreased to show the maximum feasible
value.
###### Remark 2
(Optimality) For long-term optimality, the formulation in (5) can further be
extended to a parallelizable tree-structured optimal control problem [23] that
branches at each shooting node in $\mathcal{K}$. Nevertheless, we only focus
in this paper on the transcription of the uncertainty into a robust phase
without trying to achieve long-term optimality.
## IV Case Studies
In this section, we show case studies of various locomotion and manipulation
tasks based on the proposed optimal control formulation. We also compare the
results of our proposed robust formulation to the nominal case. All examples
are implemented using the Opti stack of CasADi [24] and IPOPT [25].
### IV-A Impact Minimization of a Hopping Robot
Figure 3: Illustration of the planar two-link point-footed robot. (a) The
robot has two joints (hip and knee) and a 2-DoF base joint. The black dot
denotes the whole-body center of mass (CoM). (b) When landing, the ground
position is uncertain. Figure 4: Simulation data during the robust phase. The
weights for maximizing the uncertainty in (a),(c) are respectively 1000x that
in (b),(d). ’mi’ denotes the impact minimization over the given uncertainty
[-0.05, 0.05]m. ’mu_(x)’ denotes uncertainty maximization for known impact
limits x (unit: N), where the flat region denotes the uncertainty. ’nom’
denotes the nominal optimal control data. Flat parts of ’mu_(x)’ denote the
optimized uncertainty region where the impact limits are satisfied.
One of the classical examples in robot locomotion control is impact
minimization for jumping robots [26]. In this task, a planar two-link point-
feet hopper jumps continuously while the height of the support surface can
suddenly change within a bound, as shown in Fig. 3. The robot has a 2-DoF X-Y
base joint. The task is to perform in-place hopping to reach a desired height.
#### IV-A1 Dynamic Model
Let $\mathbf{q}=[y_{b},\theta_{h},\theta_{k}]^{\top}$, and
$\dot{\mathbf{q}}=[\dot{y}_{b},\dot{\theta}_{h},\dot{\theta}_{k}]^{\top}$.
$y_{b}$ is the base height, and $\theta_{h},\theta_{k}$ are the hip and knee
angles, respectively. The dynamics of the system can be written as
$\displaystyle\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{H}(\mathbf{q},\dot{\mathbf{q}})$
$\displaystyle=\mathbf{S}\bm{\tau}+\mathbf{J}_{c}^{T}\mathbf{F}_{c},\,$ (6a)
where $\mathbf{M}\in\mathbb{R}^{3\times 3}$ is the joint-space inertia matrix,
$\mathbf{H}\in\mathbb{R}^{3}$ is the vector of nonlinear effects,
$\mathbf{S}=\begin{bmatrix}\mathbf{0}_{2\times
2}&\mathbf{I}_{2}\end{bmatrix}^{\top}$ is the selection matrix,
$\bm{\tau}\in\mathbb{R}^{2}$ is the joint torques,
$\mathbf{J}_{c}\in\mathbb{R}^{2\times 3}$ is the foot contact jacobian,
$\mathbf{F}_{c}=[F_{y},F_{x}]^{\top}\in\mathbb{R}^{2}$ is the contact force
subject to the following constraints:
$\displaystyle 0<F_{y}$ $\displaystyle~{}\bot~{}y_{f}-y_{g}>0,$ (7a)
$\displaystyle F_{y}$ $\displaystyle\geq\mu|F_{x}|,$ (7b)
where (7a) is the contact complementarity constraint, and $y_{f},y_{g}$ are the
foot and the ground height, respectively. Equation (7b) encodes the planar
friction cone constraint. We assume a purely inelastic impact, i.e., zero post-
impact foot velocity. Based on the maximum dissipation principle, the impact
impulse $\bm{\lambda}=[\lambda_{x},\lambda_{y}]^{\top}$ can be modeled as
$\displaystyle\bm{\lambda}=$
$\displaystyle\operatorname*{arg\,min}~{}||\mathbf{J}_{c}^{T}\dot{\mathbf{q}}^{+}||^{2}$
(8a) s.t.
$\displaystyle\mathbf{M}(\dot{\mathbf{q}}^{+}-\dot{\mathbf{q}}^{-})=\mathbf{J}_{c}^{\top}\bm{\lambda}+[\mathbf{S}\bm{\tau}-H(q,\dot{\mathbf{q}}^{-})]\Delta
t,$ (8b) $\displaystyle\lambda_{y}\geq\mu|\lambda_{x}|,$ (8c)
$\displaystyle\mathbf{J}_{c}^{N}\dot{\mathbf{q}}^{+}=0,$ (8d)
where the superscripts $N$ and $T$ denote the normal and tangential components
of the velocity w.r.t. the ground, and $\Delta t$ denotes the impact duration,
which is set to 2 ms in our tests. Note that the impulse is used instead of the
force to improve the numerical conditioning.
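For intuition, consider the special case of (8) where the friction cone (8c) is inactive and the foot fully sticks, i.e., $\mathbf{J}_{c}\dot{\mathbf{q}}^{+}=\mathbf{0}$. Eliminating the post-impact velocity from (8b) then gives a closed-form impulse, sketched below with toy numbers; this is a simplification of the QP in (8), not the general model.

```python
import numpy as np

def sticking_impulse(M, Jc, qd_minus, b, dt=2e-3):
    """lambda = -(Jc M^-1 Jc^T)^-1 Jc (qd^- + M^-1 b dt), with b = S tau - H."""
    Minv_b = np.linalg.solve(M, b)
    A = Jc @ np.linalg.solve(M, Jc.T)          # Delassus matrix of the contact
    lam = -np.linalg.solve(A, Jc @ (qd_minus + Minv_b * dt))
    qd_plus = qd_minus + np.linalg.solve(M, Jc.T @ lam) + Minv_b * dt
    return lam, qd_plus

M = np.diag([5.0, 0.6, 0.3])                   # toy inertia for [y_b, th_h, th_k]
Jc = np.array([[1.0, 0.3, 0.1], [0.0, 0.4, 0.2]])
lam, qd_plus = sticking_impulse(M, Jc, np.array([-1.0, 0.0, 0.0]), np.zeros(3))
print("impulse:", lam)
print("post-impact contact velocity:", Jc @ qd_plus)   # ~ 0 by construction
```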
#### IV-A2 Nominal Optimal Control
In the form of (2), a hopping loop is divided into three phases: take-off
(stance), ascendance, and falling. A terminal constraint of the base height is
added to the ascendance phase to ensure the base reaches the desired position.
The guard is chosen as
$g_{i}:=y_{f}-y_{g}.$ (9)
Let $\mathbf{r}_{f}=[x_{f},y_{f}]^{\top}$. To maintain the discretized contact
constraint during the stance phase, we add the velocity-level stabilization at
each shooting node:
$k_{f}\dot{\mathbf{r}}_{f}+\mathbf{r}_{f}=\mathbf{r}_{f}^{0},$ (10)
where $k_{f}=10^{3}$ in our setting and $\mathbf{r}_{f}^{0}$ is the initial foot
position. The center-of-mass (CoM) of the robot is set to be right above the
foot during the whole procedure for in-place hopping. Upon switching, (8) is
added and the horizontal post-impact velocity of the foot is constrained to be
zero as a terminal constraint of the falling phase to avoid slip. Torques,
joint positions, and velocities are also constrained according to the hardware
implementation of the robot for realistic settings.
#### IV-A3 Robust Formulation
The robust formulation is the same as the nominal one except that an extra
robust phase is added. The guard (9) is used in the form of (4). Two realistic
scenarios are tested to show the flexibility of our method:
* •
Minimizing impact force for the worst-case uncertainty within a given range.
In this case, we minimize the upper bound of vertical impact
$\bar{\lambda}_{y}$ in the robust phase, i.e.,
$\displaystyle L_{K}$ $\displaystyle:=w_{\lambda}\bar{\lambda}_{y},$ (11a)
$\displaystyle h_{K}$ $\displaystyle:=\lambda_{y}<\bar{\lambda}_{y}.$ (11b)
Note that $d$ is a parameter and $\bar{\lambda}_{y}$ is a decision variable.
* •
Maximizing uncertainty based on the worst-feasible impact force. This is the
safety-critical case when the maximum tolerable impact by the structure of the
robot $\bar{\lambda}_{y}$ is obtained from mechanical design. In this case,
for the robust phase, we have:
$\displaystyle L_{K}$ $\displaystyle:=-w_{d}d,$ (12a) $\displaystyle h_{K}$
$\displaystyle:=\lambda_{y}<\bar{\lambda}_{y}.$ (12b)
Note that $\bar{\lambda}_{y}$ is a parameter and $d$ is a decision variable.
#### IV-A4 Result and Discussion
Two desired heights (0.65 m, 0.8 m) are tested for the nominal approach and the two scenarios of the robust approach. The friction coefficient is set to 0.7. The data are shown in Fig. 4. In terms of impact minimization, the robust method achieves up to 30%-50% improvement over the nominal method across about 70% of the uncertain region. For uncertainty maximization, a wide range of the weight $w_{d}$ is considered to generate diverse solutions. For low $w_{d}$, as the impact limit $\bar{\lambda}_{y}\to 0$, the robust solution converges to the nominal case. For high $w_{d}$, the feasible uncertainty can be larger, at the cost of higher impact forces outside the uncertain region. The low-$w_{d}$ cases can also be interpreted as reducing the uncertainty to obtain a better average improvement over the nominal method, i.e., a larger percentage of the original uncertain region with lower impact forces than the nominal solution.
### IV-B Object Catching
Figure 5: Illustration of how the manipulator catches a free-falling object.
The manipulator (a) lifts its end-effector (EF) to a high position and then
(b) lowers its EF to reduce the velocity w.r.t. the object.
Figure 6: Optimization and simulation data of the manipulator object-catching task. (a) The optimized uncertainty w.r.t. the initial $x_{\text{obj}}$ for different initializations. (b), (c) The y-position and y-velocity trajectories of the object and the EF with the 'init L' initialization, where the number denotes $x_{\text{obj}}$. The manipulator follows the strategy of reducing the velocity difference at possible impacts.
This task shows a torque-controlled manipulator catching an object whose shape is uncertain. It is a typical safety-critical case, as the object can be fragile and may break if the impact upon contact is high. The setup is shown in Fig. 5. For simplicity, instead of using the impulse as the safety criterion, it is assumed that the object will crack if the velocity difference between the EF and the object at impact exceeds a maximal value.
Let $y_{\text{obj}},y_{\text{EF}}$ be, respectively, the y-positions of the nominal bottom of the object and of the EF. The uncertain guard in (4) is chosen as
$g_{i}:=y_{\text{obj}}-y_{\text{EF}},$ (13)
with the following constraints on their x-positions to ensure consistent geometry during catching:
$\forall i\in\mathcal{K},\;x_{\text{EF}}=x_{\text{obj}},\;\dot{x}_{\text{EF}}=0.$ (14)
The initial state of the manipulator is set the same for all tests. The
shoulder joint (the first joint attached to the fixed base) is located at the
origin. The object falls from $y_{\text{obj}}=1$m and different
$x_{\text{obj}}$. The results are shown in Fig. 6 for the following optimizer
initializations:
* •
’init L’: initialization using the initial state where $y_{\text{EF}}<0$,
i.e., the EF is lower than the shoulder joint.
* •
’init H’: initialization using the state where $y_{\text{EF}}>0$, i.e., the EF
is higher than the shoulder joint.
These lead to distinct optimized uncertainties, as shown in Fig. 6(a). Optimization with 'init L' is infeasible for low $x_{\text{obj}}$. Solutions from the two initializations diverge from each other for $x_{\text{obj}}\lessapprox 0.28$, as represented by the discontinuity (black dashed line). The y-position and velocity plots in Fig. 6(b,c) further illustrate this: for 'init L', the trajectories with $x_{\text{obj}}\in\{0.1,0.2\}$ differ from those with $x_{\text{obj}}\in\{0.3,0.4,0.5\}$. This indicates that the uncertainty optimization is affected by the non-convexity of the original problem. As can be seen in Fig. 6(b,c), the manipulator reduces the velocity difference between its end-effector and the object to reduce the impact force.
### IV-C Cart-Pole With a Rigid Wall
Figure 7: Illustration of the cart-pole system recovering balance. (a) The pole angular velocity is disturbed. Since the cart input is limited, the cart moves toward the wall to seek an impact that will reverse the direction of the pole velocity. (b) After the impact, the cart-pole recovers its balance and position.
In this case, we test a task similar to [27], in which a cart-pole system can use contact with a wall to stabilize itself under disturbance. As shown in Fig. 7, the pole bounces when colliding with the rigid wall, and the cart position is limited.
#### IV-C1 Dynamic Model
Let $\mathbf{q}=[x,\theta]$ where $x$ is the cart position and $\theta$ is the
counterclockwise pole angle. Similar to the hopping robot, the equation of
motion of the cart-pole can be written as
$\displaystyle\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{H}(\mathbf{q},\dot{\mathbf{q}})=[1,0]^{\top}\tau+\mathbf{J}_{c}^{\top}\mathbf{F}_{c},$ (15a)
$\displaystyle\mathbf{M}=\begin{bmatrix}m_{c}+m_{p}&m_{p}lc_{\theta}\\ m_{p}lc_{\theta}&m_{p}l^{2}\end{bmatrix},\qquad\mathbf{H}=-m_{p}ls_{\theta}\begin{bmatrix}\dot{\theta}^{2}\\ g\end{bmatrix},$ (15b)
where $m_{c},m_{p}$ are the masses of the cart and the pole, respectively, $l$ is the pole length (its CoM is assumed to be at its end), $c_{\theta},s_{\theta}$ are the cosine and sine of the pole angle, $\tau$ is the cart's linear driving force, $\mathbf{J}_{c}\in\mathbb{R}^{2\times 2}$ is the contact Jacobian of the pole, and $\mathbf{F}_{c}\in\mathbb{R}^{2}$ is the wall reaction force.
When the pole is upright, $\theta=\pi$. The impact model is similar to (8), except that for the normal velocity w.r.t. the wall, $v_{N}$, we assume a restitution coefficient $C$ such that
$v_{N}^{+}=-Cv_{N}^{-}.$ (16)
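A minimal numpy sketch of the terms in (15) and the restitution law (16); the parameter values are illustrative placeholders, not those of our experiments.

```python
import numpy as np

def cartpole_MH(q, qd, mc=1.0, mp=0.2, l=0.5, g=9.81):
    """Return M(q) and H(q, qd) of Eq. (15b); parameters are placeholders."""
    _, th = q
    c, s = np.cos(th), np.sin(th)
    M = np.array([[mc + mp, mp * l * c],
                  [mp * l * c, mp * l**2]])
    H = -mp * l * s * np.array([qd[1]**2, g])
    return M, H

def wall_restitution(vN_minus, C=0.8):
    """Eq. (16): the normal velocity w.r.t. the wall is reflected and scaled."""
    return -C * vN_minus
```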
#### IV-C2 Nominal Optimal Control
The nominal optimal control comprises the pre-impact and post-impact phases.
The pole is constrained to collide with the wall at the terminal node of the
pre-impact phase. In the cases of early contact, the nominal optimal control
is degraded into a single-phase problem with the post-impact states as its
initial state. For late contact cases, the wall position is updated to the
actual value if the nominal contact is not triggered.
#### IV-C3 Proposed Method
(a) Convex hull
(b) Fitting error (m)
Figure 8: Approximation of the feasible set of the cart-pole system. The colors represent (a) the stopping distance (unit: m) and (b) the fitting error of the quadratic approximation. Figure 9: Optimized uncertainty for different disturbed angular velocities and position limits. Only feasible solutions are plotted.
Since the cart-pole is an unstable and constrained system, it is important that the robot's state after the impact remains in a set from which there exists a solution that stabilizes the system under the constraints (a.k.a. viability). In general, finding this set is very difficult and beyond the scope of this paper. Here, we present a simple brute-force approach to approximate it. We used a grid search to sample a small batch of pre-impact states $\mathbf{x}=[\dot{x},\theta,\dot{\theta}]^{\top}$ and approximated the feasible ones by a convex hull, as shown in Fig. 8a. The stopping distance, i.e., the maximum position of the cart during balancing, is approximated by a quadratic function $\phi$ of the pre-impact state, as shown in Fig. 8b. These two approximations are sufficient for robust optimization with different $x_{\max}$.
Let $\mathbf{A}\mathbf{x}+\mathbf{b}\leq\mathbf{0}$ be the convex hull. The
constraints of the robust phase can be designed as
$\displaystyle\mathbf{h}_{K}^{\text{cvxh}}:=\mathbf{A}\mathbf{x}+\mathbf{b}+\mathbf{s},$ (17a)
$\displaystyle h_{K}^{\text{dist}}:=\phi(\mathbf{x})-x_{\max},$ (17b)
where $\mathbf{s}$ is the conservativeness parameter, (17a) is the convex hull constraint, and (17b) is the maximum stopping-distance constraint.
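A minimal sketch, assuming scipy, of the two approximations just described: the sampled feasible states are wrapped in a convex hull ($\mathbf{A}\mathbf{x}+\mathbf{b}\leq\mathbf{0}$ inside) and the stopping distance is fitted by a quadratic $\phi$. The sampled data below are random placeholders.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))        # sampled [xdot, theta, thetadot]
stop_dist = rng.uniform(0, 0.5, size=200)    # placeholder stopping distances

hull = ConvexHull(X)                         # hull.equations rows are [A | b]
A, b = hull.equations[:, :-1], hull.equations[:, -1]
inside = np.all(A @ X[0] + b <= 1e-9)        # membership test for one state

# Quadratic features for phi(x): 1, x_i, and x_i x_j with i <= j.
def quad_features(x):
    i, j = np.triu_indices(3)
    return np.concatenate(([1.0], x, np.outer(x, x)[i, j]))

Phi = np.vstack([quad_features(x) for x in X])
coef, *_ = np.linalg.lstsq(Phi, stop_dist, rcond=None)
phi = lambda x: quad_features(x) @ coef      # approximated stopping distance
```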
###### Remark 3
(Conservativeness) Since the convex hull is merely an approximation of the feasible sample set, states close to its boundary may still be infeasible. The conservativeness parameter $\mathbf{s}$ shrinks the boundary, pushing such states into the interior to improve robustness.
#### IV-C4 Results and Discussion
Figure 10: Success-failure plot for the comparison experiment. 'NominalSuccess' denotes success achieved by the nominal approach alone. 'RobustSuccess' denotes success achieved by the robust method in addition to 'NominalSuccess'; note that the robust method also succeeds in the 'NominalSuccess' settings. The dashed lines denote that the nominal method can find a nominal solution for the given setting. Note that the blocks do not include the nominal settings (zero uncertainty).
The nominal and robust methods are tested over various $\dot{\theta}(0)$ and $x_{\max}$ settings. The restitution coefficient in (16) is 0.8 and the friction coefficient is 0.7. The optimized uncertainties are shown in Fig. 9; they are monotonically decreasing w.r.t. $\dot{\theta}(0)$ and monotonically increasing w.r.t. $x_{\max}$.
The comparison results are shown in Fig. 10. The robust approach has a higher success rate for both early and late contact, where the nominal approach can fail. This shows that the robustness of the nominal approach is limited by its potentially aggressive solutions. Nevertheless, the robust approach cannot always ensure success, since the feasibility of the original problem varies between settings, independently of the uncertainty.
## V Conclusions and future work
In this work, we present an uncertainty-aware optimal control formulation that
takes the uncertainty in contact events into account using the notion of
guards in hybrid systems and enables tractable resolution of the problem. Our
proposed formulation features constraint satisfaction and uncertainty
optimization within a robust phase, making it applicable to various problems
in robotics with uncertain contact events. Several case studies showed that,
in addition to generating robust trajectories, uncertainty optimization is
important to avoid failure.
In the future, we plan to extend the uncertainty-aware approach to parallelized tree-structured optimal control for applications that emphasize long-term optimality. We also plan to implement a fast parameterized and constrained optimal control solver and to validate the approach in real-world experiments.
## References
* [1] R. Deits and R. Tedrake, “Footstep planning on uneven terrain with mixed-integer convex optimization,” in 2014 IEEE-RAS international conference on humanoid robots, pp. 279–286, IEEE, 2014.
* [2] M. A. Toussaint, K. R. Allen, K. A. Smith, and J. B. Tenenbaum, “Differentiable physics and stable modes for tool-use and manipulation planning,” 2018.
* [3] Y. Tassa, T. Erez, and E. Todorov, “Synthesis and stabilization of complex behaviors through online trajectory optimization,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4906–4913, IEEE, 2012.
* [4] I. Mordatch, E. Todorov, and Z. Popović, “Discovery of complex behaviors through contact-invariant optimization,” ACM Transactions on Graphics (ToG), vol. 31, no. 4, pp. 1–8, 2012.
* [5] M. Posa, C. Cantu, and R. Tedrake, “A direct method for trajectory optimization of rigid bodies through contact,” The International Journal of Robotics Research, vol. 33, no. 1, pp. 69–81, 2014.
* [6] M. Toussaint, J. Harris, J.-S. Ha, D. Driess, and W. Hönig, “Sequence-of-constraints mpc: Reactive timing-optimal control of sequential manipulation,” in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 13753–13760, IEEE, 2022.
* [7] C. Mastalli, W. Merkt, G. Xin, J. Shim, M. Mistry, I. Havoutis, and S. Vijayakumar, “Agile maneuvers in legged robots: a predictive control approach,” IEEE Transactions on Robotics, 2023.
* [8] R. Grandia, F. Jenelten, S. Yang, F. Farshidian, and M. Hutter, “Perceptive locomotion through nonlinear model predictive control,” IEEE Transactions on Robotics, 2023.
* [9] A. Meduri, P. Shah, J. Viereck, M. Khadiv, I. Havoutis, and L. Righetti, “Biconmp: A nonlinear model predictive control framework for whole body motion planning,” IEEE Transactions on Robotics, 2023.
* [10] H. Zhu, A. Meduri, and L. Righetti, “Efficient object manipulation planning with monte carlo tree search,” in 2023 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, 2023.
* [11] L. Drnach and Y. Zhao, “Robust trajectory optimization over uncertain terrain with stochastic complementarity,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1168–1175, 2021.
* [12] B. Hammoud, M. Khadiv, and L. Righetti, “Impedance optimization for uncertain contact interactions through risk sensitive optimal control,” IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 4766–4773, 2021.
* [13] A. Gazar, M. Khadiv, S. Kleff, A. Del Prete, and L. Righetti, “Nonlinear stochastic trajectory optimization for centroidal momentum motion generation of legged robots,” in Robotics Research, pp. 420–435, Springer Nature Switzerland Cham, 2023.
* [14] A. Gazar, M. Khadiv, A. Del Prete, and L. Righetti, “Multi-contact stochastic predictive control for legged robots with contact locations uncertainty,” arXiv preprint arXiv:2309.04469, 2023.
* [15] E. R. Westervelt, J. W. Grizzle, and D. E. Koditschek, “Hybrid zero dynamics of planar biped walkers,” IEEE transactions on automatic control, vol. 48, no. 1, pp. 42–56, 2003.
* [16] A. M. Johnson, S. A. Burden, and D. E. Koditschek, “A hybrid systems model for simple manipulation and self-manipulation systems,” The International Journal of Robotics Research, vol. 35, no. 11, pp. 1354–1392, 2016.
* [17] N. J. Kong, C. Li, G. Council, and A. M. Johnson, “Hybrid iLQR Model Predictive Control for Contact Implicit Stabilization on Legged Robots,” IEEE Transactions on Robotics, vol. 39, no. 6, pp. 4712–4727, 2023.
* [18] F. Farshidian et al., “OCS2: An open source library for optimal control of switched systems.” [Online]. Available: https://github.com/leggedrobotics/ocs2.
* [19] C. Mastalli, R. Budhiraja, W. Merkt, G. Saurel, B. Hammoud, M. Naveau, J. Carpentier, L. Righetti, S. Vijayakumar, and N. Mansard, “Crocoddyl: An Efficient and Versatile Framework for Multi-Contact Optimal Control,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 2536–2542, 2020.
* [20] C. Mastalli, S. P. Chhatoi, T. Corbéres, S. Tonneau, and S. Vijayakumar, “Inverse-dynamics mpc via nullspace resolution,” IEEE Transactions on Robotics, vol. 39, no. 4, pp. 3222–3241, 2023.
* [21] J. v. Steen, G. v. d. Brandt, N. v. d. Wouw, J. Kober, and A. Saccon, “Quadratic programming-based reference spreading control for dual-arm robotic manipulation with planned simultaneous impacts,” IEEE Transactions on Robotics, pp. 1–14, 2024.
* [22] A. Oshin, M. D. Houghton, M. J. Acheson, I. M. Gregory, and E. A. Theodorou, “Parameterized Differential Dynamic Programming,” in Robotics: Science and Systems 2022, 2022.
* [23] G. Frison and M. Diehl, “HPIPM: a high-performance quadratic programming framework for model predictive control,” IFAC-PapersOnLine, vol. 53, no. 2, pp. 6563–6569, 2020.
* [24] J. Andersson, J. Gillis, G. Horn, J. Rawlings, and M. Diehl, “CasADi: a software framework for nonlinear optimization and optimal control,” Mathematical Programming Computation, vol. 11, 07 2018.
* [25] A. Wächter and L. T. Biegler, “On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming,” Mathematical Programming, vol. 106, pp. 25–57, 2006.
* [26] M. Bogdanovic, M. Khadiv, and L. Righetti, “Learning variable impedance control for contact sensitive tasks,” IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 6129–6136, 2020.
* [27] A. Aydinoglu and M. Posa, “Real-time multi-contact model predictive control via admm,” in 2022 IEEE/RAS International Conference on Robotics and Automation (ICRA), pp. 3414–3421, 05 2022.
Nine models were evaluated as candidate glomerular filtration rate (GFR) reference standards in three datasets using [$^{51}$Cr(EDTA)]$^-$ or [$^{169}$Yb(DTPA)]$^{2-}$ anions in 98 studies. Noncompartmental methods formed an upper limit for estimating the mass excreted, and voluntary urine collection formed a lower limit. For current models and methods, reduced GFR in adults resulted in inflated clearance estimates. Two different logarithmic models with exponential tails were created and may have underestimated reduced clearance. The logarithmic formulae can be used with only two plasma samples, and fit 13 studies totalling 162 plasma samples drawn from 5 min to 24 h with an 8% standard deviation of residuals, compared to a 20% error for monoexponentials. For shorter times (4 or 5 h) the fit errors decreased, but the error ratio remained circa 2.5 times smaller for the logarithmic than for the monoexponential models. Adaptively regularised gamma variate (Tk-GV) models, which are well documented but not in common use, were largely contained within the reference extreme values, were unbiased across levels of clearance, and were the only models whose clearance was uncorrelated with the volume of distribution (from mean residence time) divided by body weight. Using Tk-GV as a candidate reference standard, potentially better methods for routine clinical usage are discussed. Prospective clinical testing and metabolic scaling of decreased renal function are advised for potential changes to patient triage.
$\mathbf{Keywords}$: Glomerular Filtration Rate; Radiopharmaceuticals; Injections, Intravenous; Plasma; Reference Standards
§ INTRODUCTION
Glomerular filtration rate, GFR, can be measured as the volume of arterial blood plasma per unit time totally cleared of nonindigenous, entirely solvated, sufficiently low-molecular-weight inert markers that are freely eliminated by renal filtration alone. GFR is widely considered to be the most useful measure of renal function [1]. This usefulness is likely due to a homeostatic balance between normal glomerular elimination of the products of metabolism and the metabolic rate itself, such that reduced GFR signifies increased plasma concentrations of a host of metabolites [2]. This work presents and tests new and well-known bolus intravenous GFR plasma models for use with venous sampling of radiochelates and other nonindigenous GFR markers, for the purpose of stratifying models as to their relevance with respect to GFR reference standards. The bounds used for reference standards were noncompartmental plasma modelling and voluntary urinary drug mass collections. Moore et al. found noncompartmental methods with an additional plasma-volume concentration estimate at $t=0$ to overestimate renal clearance by circa 10% at 4 h [3]. Unfortunately, those authors did not test whether renal clearance should be used as a reference standard. Most bolus intravenous injection pharmacokinetic models are venous plasma concentration sampling models of two principal types. The simplest and most commonly used type is the washout model: monotonically decreasing functions of time with maximum concentration initially, at $t=0$. Models of the second type allow for increasing concentration from an initial zero concentration at a peripheral sampling site, i.e., $C(0)=0$, and typically require more early data for fitting than washout models. This work reports on several new washout models based on logarithmic functions having exponential tails, and compares the results of multiple model types from three different series and two different radiopharmaceuticals.
§.§ The Schloerb challenge
In 1960, Schloerb [4]
published the results of intravenous infusion of tritiated water, urea, and creatinine in nephrectomised dogs. Schloerb noted that the plasma concentration of creatinine decreased with elapsing time and appeared to come to equilibrium after 4 hours, but then noted that this was only an apparent equilibrium, as the expected complete equilibrium with total body water had not been achieved even at 24 h. He concluded that a near-infinite number of compartments would need to be invoked to explain his results. That is, if we were to fit a monoexponential (E1) to Schloerb's disappearance curves, we would obtain a finite AUC, whereas the AUC would have to be infinite to be consistent with the actual renal clearance of zero in a nephrectomised animal. Thus, monoexponentials and their sums, fit to concentration curves from an infusion with data acquired over a short time, exaggerate clearance. Moreover, most current models of plasma and renal clearance, be they from bolus intravenous injections, constant infusions, or subcutaneous injections, do not reliably quantify renal insufficiency, defined here as clearance less than or equal to 25 ml/min for an adult. We refer to this problem as the Schloerb challenge: to find a plasma disappearance curve model having a limiting infinite AUC, and hence zero plasma clearance, as renal clearance goes to zero.
Typical clinical measurements using monoexponential (E1) models collect two or more time-samples between 2 and 4 hours. However, in severe renal insufficiency and/or fluid overload (ascites, tumour), the first time-sample should be collected at 2 or 5 h and the last at 24 h [5, 6, 7], and even then the E1 results from 2 h to 24 h sample-times required correction for AUC underestimation [7]. One way to address the Schloerb challenge is to ignore plasma concentration models and instead measure GFR markers in urine. As Schloerb predicted, comparative measurements of E1 $\geq$ 2 h plasma clearance models against renal (urine) clearance have shown that exponential plasma models predict substantial clearance values when renal clearance was zero, i.e., causing an irreducible intercept error of, e.g., 11.3 ml$\cdot$min$^{-1}$ [8].
Current correction methods do not address the overestimation of zero renal clearance by plasma E1 models. For example, the Chantler-Barratt and Brøchner-Mortensen corrections of E1 clearance ($\text{CL}_{\text{E1}\geq2\,\text{h}}$) lack the appropriate nonlinearity at zero renal clearance to correct for a linear model's irreducible intercept, being respectively $\text{CL}\approx 0.87\, \text{CL}_{\text{E1}\geq2\,\text{h}}$ and $\text{CL}\approx 0.990778\, \text{CL}_{\text{E1}\geq2\,\text{h}}-0.001218\,\text{CL}_{\text{E1}\geq2\,\text{h}}^2$ [9, 10, 11]. Other formulas (Fleming, Jødal, Ng [12, 13, 14]) of the form $\text{CL}\approx \text{CL}_{\text{E1}\geq2\,\text{h}}/(1-f\cdot\text{CL}_{\text{E1}\geq2\,\text{h}})$ are asymptotically $\text{CL}\simeq \text{CL}_{\text{E1}\geq2\,\text{h}}$ as clearance goes to zero and thus offer no correction for renal insufficiency. Specifically, to reconcile the negative intercept of a line equation relating $\text{CL}_{\text{E1}\geq2\,\text{h}}$ plasma clearance to renal clearance, one requires a nonlinear equation whose slope at the origin is asymptotically zero; in the contrary case, linear conversion risks returning negative numbers for low renal clearance values. Therefore, renal clearance is not being properly estimated, and it is clear that reference standards, including renal clearance, need to be investigated.
A conversion of GFR by the factor 1.73 m$^2$ divided by estimated body surface area (eBSA) is often performed. Although one can argue that the creatinine plasma level scales approximately as BSA (circa weight to the 2/3 power), GFR certainly does not (circa weight to the 3/4 power) [15, 2]. Another difficulty occurs in acute renal failure, which can be defined clinically by creatinine levels (which, however, take days to build up); by loss of GFR (presumably as GFR-indices from creatinine levels); or by 12 h of anuria or 24 h of severe oliguria of < 0.3 ml$\cdot$h$^{-1}$ per kg body weight [16]. In anuria, or severe oliguria, urine collection volumes are inadequate.
This, and other factors, have led to a divergence between pharmacokinetics and nephrology, with current nephrology guidelines suggesting multiple timed voluntary urine collections as a noisy, underestimating, approximately body-surface-area-normalised renal clearance reference standard from subcutaneous injections of ($^{125}$I)iothalamate, a marker with circa 18% non-renal clearance [17]; see Uprob in the Methods section. That standard is currently recommended for calibrating a heuristic endogenous plasma creatinine GFR index [18]. Creatinine, in turn, is a mixed GFR and tubular extraction marker, and overestimates renal filtration in a variety of clinical conditions, most notoriously in liver failure and renal insufficiency [19]. Pharmacokinetics, on the other hand, is concerned with drug effects, most often correlated with venous plasma drug concentrations (GFR is arterial); it utilises plasma (not renal) models tailored to the route of administration, may body-scale per kilogram body mass for veterinary work or occasionally BSA-scale for dose calculations, and would not likely claim that an 18% non-renally cleared marker is a GFR marker. Thus, it is important to answer the Schloerb challenge, as neither the nephrologist nor the pharmacokineticist has accurate methodology to offer the renally insufficient patient.
§.§ Answering the Schloerb challenge
Our first attempt to answer the Schloerb challenge produced the more accurate measurement of GFR obtained using the Tikhonov adaptively regularised gamma variate fitting (Tk-GV) method, which smooths the data to obtain the flattened curve that best reduces the relative error of propagation of the rate parameter of a gamma variate [20, 7, 21]. Because of this curve flattening, which becomes severe in renal failure, the Tk-GV algorithm is not a curve-fitting method in the ordinary sense. Compared to Tk-GV GFR-values, E1 and biexponential (E2) GFR-values are larger, especially in severe renal insufficiency, because exponential methods overall underestimate both early and late concentrations [22, 20, 7]. The use of the Tk-GV algorithm for measuring GFR was unique enough that patents were granted in the USA and multiple other jurisdictions [23].
For bolus intravenous injections, mixing takes a long time; thus the logarithm of concentration does not decrease linearly with time. Indeed, in a prior publication, concentration before 2 to 4 h following a bolus injection of a GFR marker was more accurately back-extrapolated as a logarithm of time than as an area-underestimating exponential or an area-overestimating power function [24]. The intent here was to characterise and test multiple models, and to develop bounds for GFR reference standards, especially for reduced renal function.
§ THEORY: THE LINEAR-LOGARITHM HYPOTHESIS
For a very long time it has been supposed that, as a first approximation, the logarithm of the concentration of an intravenously injected GFR marker decreases linearly with time. That supposition implies an instantly achieved static volume of distribution with a drug concentration that changes in time. An additional requirement is sometimes referred to as instant mixing, but strictly speaking the requirement is that the mean concentration within that volume is what is eliminated. In 2015, it was noted that during the first few hours following intravenous injection of a GFR marker, concentration decreased less exponentially, i.e., the logarithm of concentration was less linear in time, and decreased more linearly with the logarithm of time [24].
It would be better physiology to assume that early concentration is logarithmic, as this assigns a starting volume of zero, but then to modify the logarithm to later become exponential, allowing for a terminal volume of drug distribution. In general, the family of functions having $t=0$ asymptotes that are logarithms while being asymptotic to zero concentration in the tail is the negative-logarithm-of-sigmoid-function family. Standard sigmoid functions have a slope of 1 at the origin and approach 1 from below in the right tail. Not all sigmoid functions are standard; some have slopes not equal to 1 at the origin. We examined two negative logarithmic sigmoid functions with exponential tails.[The two new formulas, LCE and ln-coth, are from a more general model $C(t)=c\ln \big(\frac{\alpha}{e^{\beta \,t}-1}+1\big).$ Setting $\alpha=1$ yields $-c\ln\left(1-e^{-\beta\, t}\right)$, which is the LCE function, and for $\alpha=2$ the general model reduces to $c\ln \big[\coth \big(\frac{\beta \, t}{2}\big)\big]$, the ln-coth model.] Of the many such formulas, one assigns concentration as proportional to $-\ln(1-e^{-\beta\,t})$, called the logarithm of cumulative exponential (LCE) function, and another is $\ln\big[\coth(\frac{\beta\,t}{2})\big]$, called the ln-coth function. These functions correspond to the model formulas presented in Table <ref>, whose derivations appear in the sec:appendix section. The LCE model is potentially the more useful one, such that more information is presented for it than for ln-coth. One can write the LCE model in pharmacokinetic form using a constant of proportionality, $c=\text{AUC}\frac{6\, \beta }{\pi ^2}$,
\begin{equation}\label{eq4}
C(t)=-c\ln \left(1-e^{-\beta\, t}\right);\;\;\;
\text{AUC}=c\frac{\pi ^2}{6\, \beta }\;\;\;,
\end{equation}
called the LCE model as $1-e^{-\beta\, t}$ is the Cumulative Exponential distribution. Similarly, one can write the ln-coth pharmacokinetic model as,
\begin{equation}\label{coth}
C(t)=c\ln \bigg[\coth \bigg(\frac{\beta\, t}{2}\bigg)\bigg];\;\;\;
\text{AUC}=c\frac{\pi ^2}{4\, \beta }\;\;\;.
\end{equation}
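As a numerical sanity check, the following is a minimal Python sketch, assuming scipy, that verifies the closed-form AUCs of the LCE and ln-coth models by quadrature; $c$ and $\beta$ are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import quad

c, beta = 1.0, 0.01                              # arbitrary units

lce = lambda t: -c * np.log(1.0 - np.exp(-beta * t))
ln_coth = lambda t: c * np.log(1.0 / np.tanh(beta * t / 2.0))

auc_lce, _ = quad(lce, 0.0, np.inf)              # integrable log singularity at 0
auc_coth, _ = quad(ln_coth, 0.0, np.inf)
print(auc_lce, c * np.pi**2 / (6.0 * beta))      # both ~164.493
print(auc_coth, c * np.pi**2 / (4.0 * beta))     # both ~246.740
```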
Comparison of the ln-coth and Logarithm of Cumulative Exponential (LCE) distributions.$^\text{a}$
Type: ln-coth, washout; LCE, washout. (Both monotonically decreasing.)
Parameters: ln-coth, $\beta>0$, rate; LCE, $\beta>0$, rate. (Rate is 1/scale.)
Support: ln-coth, $t\in[0,\infty)$; LCE, $t\in[0,\infty)$. (Semi-infinite support.)
Density function $f(t)$: ln-coth, $\frac{4\,\beta}{\pi^2}\ln\left[\coth(\frac{\beta\,t}{2})\right]$; LCE, $-\frac{6\,\beta}{\pi^2}\ln\left(1-e^{-\beta\,t}\right)$. (Probability $f(t)$ only: PDF.)
CDF $F(t)$:$^\text{b}$ ln-coth, $\frac{4}{\pi^2}\left[\ln(y)\ln(y+1)+\text{Li}_2(1-y)\right]$ with $y=\coth(\frac{\beta\,t}{2})$; LCE, $1-\frac{6}{\pi^2}\text{Li}_2\left(e^{-\beta\,t}\right)$. (Li$_n(z)$ is the polylogarithm; Li$_2(z)$ is a dilogarithm.)
$t_{m}:F(t_m)=\frac{1}{2}$: ln-coth, $\approx\frac{0.526862}{\beta}$; LCE, $\approx\frac{0.415389}{\beta}$. (Median residence time.)
$\lim_{t\to0}f(t)$: ln-coth, $-\ln(\frac{\beta\,t}{2})$; LCE, $-\ln(\beta\,t)$. (Asymptotically logarithmic as $t\to0$.)
$\lim_{t\to\infty}f(t)$: ln-coth, $2e^{-\beta\,t}$; LCE, $e^{-\beta\,t}$. (Asymptotically exponential as $t\to\infty$.)
$t_{x}$, where the limits are equal: ln-coth, $\frac{W(2)}{\beta}\approx\frac{0.852606}{\beta}$; LCE, $\frac{W(1)}{\beta}=\frac{\Omega}{\beta}\approx\frac{0.567143}{\beta}$. (The asymptotes intersect at $t_x$; $\Omega$ is Lambert's $W(1)$.)
MRT $=\int_0^\infty t\,f(t)\,dt$: ln-coth, $\frac{7\,\zeta(3)}{\pi^2\,\beta}\approx\frac{0.852557}{\beta}$; LCE, $\frac{6\,\zeta(3)}{\pi^2\,\beta}\approx\frac{0.730763}{\beta}$. ($\zeta(n)$ is the zeta function.)
V$_\text{MRT}=\text{CL}\cdot\text{MRT}$: ln-coth, $\frac{\text{CL}}{\beta}\frac{7\,\zeta(3)}{\pi^2}$; LCE, $\frac{\text{CL}}{\beta}\frac{6\,\zeta(3)}{\pi^2}$. (In pharmacokinetics, V$_\text{SS}$, the volume at steady state.)
$V_\text{d}(t)$:$^\text{c}$ ln-coth, $0\leq\text{CL}\,\frac{1-F(t)}{f(t)}\leq\frac{\text{CL}}{\beta}$; LCE, $0\leq-\frac{\text{CL}}{\beta}\frac{\text{Li}_2\left(e^{-\beta\,t}\right)}{\ln\left(1-e^{-\beta\,t}\right)}\leq\frac{\text{CL}}{\beta}$. ($V_\text{d}(0)\leq V_\text{d}(t)\leq V_\text{d}(\infty)$.)
$M_{\text{urine}}(t)$: ln-coth, $M_0\,F(t)$; LCE, $M_0\,F(t)$. (Dose $M_0$ in urine at time $t$.)
$^\text{a}$ By definition a density function $f(t)\myeq\frac{C(t)}{\text{AUC}}$, thus $C(t)=\text{AUC}\,f(t)$; also see the sec:appendix section.
$^\text{b}$ The CDF, the cumulative distribution function, is the integral of the density function, i.e., $F(t)=\int_0^t f(x)\,dx$.
$^\text{c}$ $V_d(t)$ for the ln-coth model is listed in unsubstituted (general) form, as its $F(t)$ is a long formula.
As shown in Figure <ref>, the LCE and ln-coth models, Eqs. (<ref>) and (<ref>), each have two convergent asymptotes; the first a logarithm as $t\to0$ and the second an exponential as $t\to\infty$. There is a time when these asymptotes are equal, which for the LCE model is $\beta\, t$ such that,
$$-c\ln (\beta \,t)\equiv c\,e^{-\beta \,t}\;\;.$$
Let $u=\beta\, t$; then, as $c$ cancels, this equation becomes $-\ln (u)=e^{-u}$, whose solution is $u=\Omega$, where $\Omega$ is Lambert's Omega, or $W(1)\approx0.567143$. Also called the product logarithm, Lambert's $W(z)$ satisfies $w\, e^w=z$; in this case, $\Omega\, e^{\Omega}=1$, and we can write the intersection time of the asymptotes as
$$t_x=\Omega\,\beta^{-1}\approx 0.567143\,\beta^{-1}\;\;,$$
where $t_{x}$ is a time before which the LCE is predominantly a logarithmic function, and after which the LCE is relatively more exponential. From Table <ref> and the sec:appendix section, for the LCE model $t_m<t_{x}<\text{MRT}$. That is, the median residence time ($t_{m}\approx 0.415389\,\beta^{-1}$) occurs while the LCE density is still predominantly a logarithmic function of time, whereas its mean residence time (MRT $\approx 0.730763\,\beta^{-1}$) occurs when the LCE is more exponential.
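A minimal scipy sketch of these characteristic times, using the product logarithm for $t_x$, the zeta function for the MRT, and the identity $\text{Li}_2(z)=\text{spence}(1-z)$ for the median residence time; $\beta$ is an arbitrary illustrative value.

```python
import numpy as np
from scipy.special import lambertw, zeta, spence
from scipy.optimize import brentq

beta = 0.01                                       # arbitrary rate, 1/min
t_x = float(np.real(lambertw(1.0))) / beta        # asymptote intersection, ~0.567143/beta
mrt = 6.0 * zeta(3) / (np.pi**2 * beta)           # mean residence time, ~0.730763/beta

# LCE CDF: F(t) = 1 - (6/pi^2) Li2(exp(-beta t)), with Li2(z) = spence(1 - z).
F = lambda t: 1.0 - (6.0 / np.pi**2) * spence(1.0 - np.exp(-beta * t))
t_m = brentq(lambda t: F(t) - 0.5, 1e-9 / beta, 10.0 / beta)
print(beta * t_x, beta * t_m, beta * mrt)         # ~0.5671, ~0.4154, ~0.7308
```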
The intersection of the asymptotes of the ln-coth model occurs when $- \ln (\frac{\beta \,t}{2})\equiv 2e^{-\beta \,t}$, that is, at $t_x=W(2)/\beta$ (Table <ref>). The ln-coth model has a more abrupt transition between its logarithmic and exponential asymptotes than the more gradually transitioning LCE model; see Figure <ref>. The ln-coth model is a member of a larger family of functions: coth is the hyperbolic cotangent, i.e., the reciprocal of the hyperbolic tangent, and the hyperbolic tangent is a standard sigmoid function; it passes through the origin with a slope of 1 and later approaches 1 from below. Any sigmoid function, $\textit{sf}\,(t)$, can be used to construct a terminal tail for a logarithm, since $\lim_{t\to\infty}\ln[\frac{1}{\textit{sf}(t)}]\to0^+$; that is, as $\textit{sf}\,(t)$ approaches 1 from below $(1^-)$, its negative logarithm approaches zero from above ($0^+$), which causes the concentration to be asymptotic to the late time axis.
Panel a shows an LCE model, $C(t)=-c\ln(1-e^{-\beta \,t})$, as a red-coloured plot of concentration versus logarithmically scaled time. Panel b shows an ln-coth model, $C(t)=c\ln \big[\coth \big(\frac{\beta\, t}{2}\big)\big]$, in red. In both panels the logarithmic asymptotes are black and dotted, and the exponential asymptotes are black and dashed. For the ln-coth model, the intersections of its logarithmic and exponential functions lie vertically closer to the model itself, i.e., the three curves shown overlap more closely in panel b than in panel a. From fits to the same time-samples, the intersection times $t_x$ were similar but not identical: 523 and 576 min in panels a and b, respectively.
Other sigmoid functions, e.g., the error function or the Gudermannian function, could be used in the same fashion to make tails that decay faster or slower than an exponential (in statistical terms, lighter or heavier tails). The LCE, ln-coth and Tk-GV models[where GV is a gamma variate; $C(t)=c\,t^{\alpha-1}e^{-\beta\,t}$, and the Tk-GV algorithm minimises the relative error of $\beta$.] (when $\alpha<1$) have zero initial volume of distribution, which requires an infinite concentration at $t=0$. For the Tk-GV model, this is accomplished by adaptive fitting that yields $\alpha<1$. For all three models the infinity is integrable; it better mimics arterial concentration before the first sample times for small molecules like EDTA and DTPA chelates (less so for inulin [25]), and is our preferred method of adjusting venous sampling to arterial GFR conditions.
For $-$log-sigmoid models and sums of exponential terms (SET) models, the constants of proportionality are equal to the models' concentrations at particular times. For SETs, the total concentration at $t=0$ is $C(0)=c_1+c_2+c_3+\cdots+c_n$. For the LCE model, the time at which its concentration equals $c$ is $t=\ln\big(\frac{e}{e-1}\big)\,\beta^{-1}\approx 0.458675\,\beta^{-1}$. As per Table <ref> and Figure <ref>, the LCE and ln-coth models have a zero initial volume of distribution, $V_d(0)=0$, unlike the SET value $V_c>0$, the central (i.e., initial, non-zero) volume of distribution. For the LCE model, the volume of drug distribution at which the concentration curve shape becomes more exponential, occurring at time $t_x = \Omega\,\beta^{-1}$, is 81% of $V_z$, the terminal volume, and thus a substantial portion of it. This follows from the LCE volume equation, $V_d(t)$, as follows:
\begin{equation}
V_\Omega=-\frac{\text{Li}_2\left(e^{-\Omega}\right)}{\ln \left(1-e^{-\Omega}\right)}\;V_z\approx 0.81000437\;V_z\;\;,
\end{equation}
where $V_\Omega$ is $V_d(t_x)$ and is almost exactly 81% of the LCE $V_z$. For SETs, $V_c>0$, and $V_c$ is such that the mean concentration in that volume is assumed to be instantly presented for exchange between any compartments and sources of elimination. This unphysical assumption does not pertain to the Tk-GV, ln-coth and LCE models, whose initial volumes of distribution are zero; e.g., see $V_d(t)$ in Table <ref> and the LCE curve in Figure <ref>.
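The 81% figure can be checked numerically with the same dilogarithm identity, noting that $e^{-\Omega}=\Omega$:

```python
import numpy as np
from scipy.special import lambertw, spence

omega = float(np.real(lambertw(1.0)))            # Lambert's Omega, ~0.567143
# V_Omega / V_z = -Li2(exp(-Omega)) / ln(1 - exp(-Omega)), Li2(z) = spence(1 - z)
ratio = -spence(1.0 - np.exp(-omega)) / np.log(1.0 - np.exp(-omega))
print(ratio)                                     # ~0.81000437
```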
Shown is a plot of LCE volume of distribution as a function of time, $V_d(t)$ (Table <ref>), with reuse of the same parameters used to create Figure <ref>.
Note that $V_\Omega \approx 0.81V_z $, where $V_\Omega $ occurs at $t_{x}=\Omega\,\beta^{-1}$.
§ METHODS
§.§ Datasets 1-3
Dataset 1 was a group of 13 adult liver transplant candidates, most having ascites, who underwent bolus intravenous [$^{51}$Cr(EDTA)]$^-$ injections followed by collection of a total of 162 plasma time-samples drawn at 5 min to 24 h for routine assessment of renal function. Approval was obtained from the Royal Free Hospital Research Ethics Committee for the required extra blood sampling (REC reference number 07/H07211/70). The time-samples were obtained at circa 5, 10, 15, 20, 30, 40, 50, 60, 90, 120, 180, 240, 360, 480, 720, and 1440 min. The results of E1 and Tk-GV renal modelling have appeared elsewhere [20, 7].
Dataset 2 was from 44 adults with cirrhosis and moderate to tense ascites from a project approved by the Ethics Committee for Medical Research in Copenhagen (J. nr. (KF) 11-110/02), i.e., group I of reference [26]. These subjects underwent bolus [$^{51}$Cr(EDTA)]$^-$ intravenous injection followed by plasma collection of a total of 555 time-samples drawn at 5 min to 5 h, as well as circa 5 h of voluntary urine collection with assay of accumulated urinary drug activity. Time-samples were acquired at 0, 5, 10, 15, 30, 60, 90, 120, 150, 180, 240, and 300 min.
Dataset 3 contains 328 plasma samples of the [$^{169}$Yb(DTPA)]$^{2-}$ anion from 41 adult studies in which time-samples were drawn at 10 min to 4 h following bolus intravenous injection. The eight time-samples in each study were collected at circa 10, 20, 30, 45, 60, 120, 180, and 240 min. These data are from an older study predating routine publication of ethics committee identification numbers, but they were nevertheless ethically obtained [27]. At that time, there were problems with DTPA-chelate plasma protein binding [28], likely due to improper pH buffering in certain commercial DTPA chelation kits, and the [$^{169}$Yb(DTPA)]$^{2-}$ gamma-count time-samples were corrected for plasma protein binding using ultrafiltration. This group had subjects whose renal function varied from renal failure to normal renal function, without evidence of fluid disturbance.
§.§ Urinary reference standards
Current nephrology guidelines recommend using a variation of voluntary urine collection data as a reference standard for the calibration of GFR [29]. Fortunately, the data here use a better marker, [$^{51}$Cr(EDTA)]$^-$, and a better route of injection (intravenous) than the iothalamate and subcutaneous route[The subcutaneous route may have been chosen in an attempt to mimic constant infusion.] used for creatinine formula calibration. The classical renal clearance formula, used when constant infusion of a marker has reached a steady-state plasma concentration, is $\text{CL}=\frac{\text{U}\,\text{V}}{\text{P}}$, where U is the urinary concentration of an exogenous plasma marker during a short time interval (e.g., 20 min) some hours after the infusion has begun, V is the volume of urine collected during that brief test interval, and P is the constant plasma concentration during that short collection time. Note that the product U$\times$V is the marker mass accumulated during the urine collection. In their classical work, Walser and Bodenlos, using bolus intravenous E1 models, noted an unexpected 30 to 90 min delay between the disappearance of radiolabeled urea from plasma and its appearance in urine [30]. This should serve as a reminder that $\text{CL}=\frac{\text{U V}}{\text{P}}$ is only defined for P (plasma concentration) under steady-state conditions. Dataset 2 lists the total urinary drug mass (in our case radioactivity) collected during the entire circa 300 min following injection. This has the advantage of being more accurate in the sense of comprising a lot of data rather than a short collection time. The disadvantage, however, is that the bolus intravenous plasma concentration curve changes in time and is not any particular constant value, which prevents us from calculating a clearance without also knowing the exact shape of the plasma concentration curve. To be clear, the cumulative plasma concentration curve appropriate for a bolus-experiment $\frac{\text{U V}}{\text{P}}$ calculation would be different for each different curve model. It is possible to back-calculate the renal CL-values for each plasma model, but that would not tell us which renal CL-value is correct. Accordingly, a different calculation was used for reference-value testing. The objective of testing different plasma concentration curve models was accomplished by comparing the urinary drug mass collected (U V) with the mass predicted to be excreted by each bolus plasma concentration model ($M_0 F(t)$, Table <ref>). Even then, there were further considerations.
The plasma-concentration sampling-time correction accounting for the delay between zero time and the marker's first appearance in urine during a bolus experiment has been estimated at circa four min on average, with literature estimates of average times of 2.5-8 min [31]. However, this time is longer with dilated urine collecting structures, e.g., renal pelvises and ureters, and for other reasons, e.g., renal insufficiency or intermittent obstructive disease. This time delay includes the circulatory mixing time: renal glomeruli filter arterial, not venous, blood, and all of the plasma samples in this report are venous. Cousins et al. showed negative arteriovenous differences for individual inulin and [$^{99m}$Tc(DTPA)$]^{2-}$ time-samples at 30 min and beyond [25]. Thus, the concentration appropriate as a divisor for the U V mass product (urine drug concentration times volume of urine) is a later, smaller venous plasma concentration than the venous plasma concentration at the time of urine collection; otherwise, renal clearance will be underestimated.
There are multiple other accuracy problems with voluntary urine collection: neglecting to save a voided volume [32]; post-void residual urine in the adult bladder [33]; worse and more variable residuals in the elderly from genitourinary pathology (including uterine prolapse and prostatic hypertrophy) [34]; and bladder resorption of x-ray contrast [35] and other drugs, with resorption made worse by long elapsed times between voids [36, 37]. A review of 24 h urine collections suggested that catheterisation avoids neglecting to save a voided volume and avoids bladder drug resorption. Moreover, bladder catheterisation may correct some of the problems of post-void residual urine in the bladder. However, even with catheterisation, improper catheter placement itself led to residual bladder urine 26% of the time [38]. A further problem is that there can be so little urine output in severe renal insufficiency that a small bladder residual can render renal clearance based upon urine collection problematic.
In Dataset 2, case 6 of 44 had 6.5% more urine mass collected (4.21$\times 10^{6}$ cpm) than administered (3.9533$\times 10^{6}$ cpm), which is unphysical. That case was excluded from the mass balance comparisons. The other 43 cases were processed in two stages. Initial screening showed acceptable confidence-interval agreement of mass balance between the urine drug mass collected and the LCE and other methods of predicting urine drug mass. Subsequently, to test whether that agreement was only a statistical aberration, the LCE prediction was adjusted to occur four minutes earlier as per [31], and the voided volume was augmented by a positional average post-void bladder residual of 13.014 ml as per [33][13.014 ml is the straight average of five average residual bladder urine volumes from men and women after voiding in various positions.], followed by discarding those voided volumes that were less than 70% of predicted, as recommended [32], wherein the frequency of incomplete urine collections was noted as 6% to 47%. This procedure was repeated after dropping the initial time-samples, which revealed that LCE urine-mass predictions from models whose first time-sample started at > 14 min agreed slightly better with the urinary mass calculations.
§.§ Noncompartmental reference standards
Noncompartmental exponential reference standards (NC) of clearance are often used by pharmacokineticists and were originally defined by Purves [39]. The procedure consists of solving for the exponential functions that connect each pair of adjacent plasma time-samples, then extrapolating using exponential functions fitted to the last three or four samples; when the concentration is increasing, linear functions were recommended. For use here, the linear solutions and curve fitting were replaced with solutions through the first or last sample and the weighted average of the next two or prior two samples. This provides two points, one natural and one averaged, for an exact continuous solution that avoids curve discontinuities at the extreme sample times.
Consider, for example, that if at 300 min we had two different concentrations, one measured and one from a fitted function, the urinary drug mass excreted at 300 min would be ambiguous. Solving for an extrapolating function that at 300 min has the same concentration as the time-sample itself obviates that problem and works better.
The formula for predicting the drug mass (as cpm) excreted in urine at elapsed time $t_U$ following bolus intravenous injection is approximately $M_U=\text{CL}\int_0^{t_U}C(t)\,dt$, where for noncompartmental (NC) methods $C(t)$ is the piecewise-defined concentration supported on $t=0$ to $\infty$.
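A minimal numpy sketch of this mass prediction for exponential segments between adjacent samples, the core of the NC construction; the extrapolations to $t=0$ and to infinity described above are omitted, and the sample data are synthetic placeholders.

```python
import numpy as np

t = np.array([5, 10, 15, 30, 60, 120, 180, 240, 300.0])    # sample times, min
C = 10.0 * np.exp(-0.008 * t) + 2.0 * np.exp(-0.002 * t)   # synthetic concentrations

def auc_exp_segments(t, C):
    """AUC of the exponential segments joining adjacent samples."""
    lam = np.log(C[:-1] / C[1:]) / (t[1:] - t[:-1])         # per-segment rate
    return np.sum((C[:-1] - C[1:]) / lam)                   # exact segment integrals

CL = 20.0                                                   # ml/min, illustrative
M_U = CL * auc_exp_segments(t, C)                           # predicted excreted mass
print(M_U)
```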
§.§ Summary of models used in this work
Table <ref> shows a summary of the models used in this work. Not all of the models were applied to all three datasets. In some cases this is because they cannot be; for example, the E1 $\geq 5$ h model proposed by Brøchner-Mortensen and Freund [5] can only be used for Dataset 1, which is the only one having enough temporal data for its application. Dataset 2 was particularly demanding, as mass-equivalent modelling was needed rather than renal clearance modelling. Renal clearance is best defined for steady-state conditions following long-term constant infusion, not bolus intravenous conditions.
Summary of models used in this work
Model $C(t)$ Description Dataset $^a$
E1 $c\, e^{-\lambda\,t}$ Monoexponential 1, 2, 3
E1 $\geq$ 2 h " E1 with time-samples $\geq$ 2 h 1, 2, 3
E1 $\geq$ 5 h " E1 with time-samples $\geq$ 5 h (24 h data only) 1
E2 $c_1\,e^{-\lambda_1\,t}+c_2\,e^{-\lambda_2\,t}$ Biexponential 1, 2, 3
LCE $-c\,\ln \left(1-e^{-\beta\, t}\right)$ Logarithm of cumulative exponential 1, 2, 3
LCE > 14 min " LCE with time-samples > 14 min 2
ln-coth $c\,\ln \left[\coth (\frac{\beta\, t}{2})\right]$ Log hyperbolic cotangent 1, 2, 3
NC$^\text{ b}$ $-----$ Noncompartmental plasma model for excretion prediction 2
Tk-GV $c\,t^{\alpha-1}e^{-\beta\,t}$ Tikhonov minimised relative error of $\beta$. 1, 2, 3
Urine U$\cdot$V as (cpm/ml)$\cdot$(ml) Drug mass (as cpm) in $\sim$300 min urine collection 2
$^\text{a }$For Dataset 2, the mass expected to be cleared is calculated at the end of the urine collection time with the exception of LCE > 14 min, which used a time 4 min earlier than that.
$^\text{b }$See the NCprob section for the procedure.
The analysis for Dataset 2 includes three models not used elsewhere: (1) the noncompartmental plasma-model prediction of cumulative urinary drug mass (as radioactivity), (2) the adjusted LCE > 14 min excreted-drug demonstration model, and (3) Urine, the total excreted drug mass calculation.
§.§ Statistical methods
§.§.§ Regression analysis
For each dataset, several regression targets were tested for accuracy, including ordinary least squares (OLS), $\frac{1}{C_{obs}}$-weighted OLS, $\frac{1}{C_{obs}^2}$-weighted OLS, and OLS regression of log-log transformed $C_{obs}$ and sample times, where $C_{obs}$ are the observed concentrations. Of the regression targets tested, the $\frac{1}{C_{obs}^2}$-weighted OLS, also called proportional error modelling, proved the most accurate, with the exception that log-log transformed regression is native to the Tk-GV clearance method and is not very different from proportional error modelling; see Eq. (39) and the surrounding text in reference [40]. For the Tk-GV method, the regression target is not curve fitting but minimisation of the propagated proportional error of either the clearance (CL) or the exponential rate parameter ($\beta$) of a gamma distribution. Apart from the Tk-GV results, only the proportional minimum-norm results are presented here. The regression method used for all targets was Nelder-Mead, which is more robust for absolute minimisation than gradient descent and most other methods, and is a popular numericist's choice for regression analysis. Some pharmacokineticists prefer an adaptation of the maximum-likelihood regression method from random variate minimisation; however, that was not tested here. The implementation used the Mathematica 13.2.1.0 language on an Apple M1 iMac. All LCE model regressions converged rapidly, e.g., for Dataset 1 in 156.2 iterations at 52 milliseconds per case (mean values). For biexponentials, in one case of 57 the convergence was to a degenerate model; this 1.75% failure rate is consistent with the circa 2% failure rate reported elsewhere [41, 21]. That model was of the $\lambda_2=\infty$ type; Dataset 2, case 19, 1470 iterations, 725 milliseconds, $C(t)= 0.100126 e^{-0.00755689 \,t}+0.0148127$, where $+0.0148127$ is a non-zero asymptotic value leading to CL$=0$. No other method yielded a zero CL for this case, the range being approximately 38.9 to 49.4 ml/min.
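For concreteness, the following is a minimal Python sketch, assuming scipy rather than Mathematica, of the proportional-error ($\frac{1}{C_{obs}^2}$-weighted) Nelder-Mead fit of the LCE model to synthetic data; the clearance then follows from CL = dose/AUC with a unit dose.

```python
import numpy as np
from scipy.optimize import minimize

t = np.array([5, 10, 20, 40, 60, 120, 240, 480, 1440.0])   # minutes
rng = np.random.default_rng(1)
c_true, b_true = 5.0, 0.004                                 # synthetic truth
C_obs = (-c_true * np.log(1 - np.exp(-b_true * t))
         * (1 + 0.05 * rng.standard_normal(t.size)))        # 5% proportional noise

def weighted_sse(p):
    c, b = p
    if c <= 0 or b <= 0:
        return np.inf                       # keep the search in the feasible region
    C_fit = -c * np.log(1 - np.exp(-b * t))
    return np.sum(((C_obs - C_fit) / C_obs) ** 2)   # 1/C_obs^2 weighting

res = minimize(weighted_sse, x0=[1.0, 0.01], method="Nelder-Mead")
c_hat, b_hat = res.x
CL = 1.0 / (c_hat * np.pi**2 / (6.0 * b_hat))       # CL = dose/AUC, unit dose
print(c_hat, b_hat, CL)
```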
Widely used for clinical laboratory assay calibration, Passing-Bablok type I linear regression was applied to the results, including the comparison of predicted and observed urine mass [42]. Passing-Bablok type I regressions are used to evaluate same-scale replacement methods and are bivariate nonparametric regressions. Specifically, these regressions find least squares in $x$ and $y$ where the regression target is replacement, that is, a best linear functional relationship, whereas ordinary (OLS) regression yields a minimum-error line for predicting $y$-values. This is done to mitigate what in econometrics is called omitted variable bias for bivariate data, and in statistics is called regression dilution [43, 44]. It corrects the flattening of slope (magnitude) that occurs when a least-error predictor of $y$ alone, like ordinary least squares in $y$, is used to estimate a bivariate functional relationship, an effect exaggerated for small-magnitude correlations. Passing-Bablok regression works very accurately and with good precision when comparing methods on the same scale, i.e., with slopes near 1, but it does so by discarding all possible two-point negative-slope combinations within the sample and then finding the median slope of the myriad positive-slope combinations between any two points. Obviously, if the true slope were actually zero, Passing-Bablok would return a positive slope, so for slopes that are small in magnitude or negative, the discards should not be performed. Passing-Bablok regression without the negative-slope discard is Theil-Sen line regression, which is both more robust to outliers and more accurate for bivariate problems than least squares in $y$, while not being completely unbiased for predicting bivariate linear relationships [45]. Theil-Sen lines were used to examine how the differences between models behaved at various levels of renal function, for which slopes can be zero or negative, i.e., Theil-Sen was used in those cases for which Passing-Bablok regression is not appropriate.
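A minimal sketch of a Theil-Sen line via scipy (Passing-Bablok without the negative-slope discard), of the kind used here for model differences whose true slope may be zero or negative; the data are placeholders.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(2)
x = rng.uniform(5, 120, 40)                      # e.g., clearance level, ml/min
y = 0.0 * x + rng.normal(0, 2, 40)               # method difference, near-zero slope
slope, intercept, lo, hi = theilslopes(y, x)     # robust slope with 95% CI
print(slope, intercept, (lo, hi))
```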
§.§.§ Moving average and extrapolation testing
For residual analysis, i.e., of the differences between model values and time-sample concentrations, there is a need to examine how the models perform on average. As there are multiple plasma samples drawn at the same time following injection, one can take a number of the earliest time-samples and average them to create a mean prediction for all models of the same type. Next, one can drop an averaged time-sample from that group and bring in another averaged value from the next later group of time-samples, assigning the new group to a new averaged time. This is repeated until all the time-samples have been averaged. This may seem contrived; however, if one were to drop and include unaveraged concentration values in each sample-time group, one would create a curve whose shape depends on an arbitrary selection order of the time-sample concentrations dropped or included. Finally, as each averaged value is from the same number of averaged time-samples, it is equally weighted over the whole curve, making it possible to perform statistical analysis, such as finding a reliable standard deviation that shows how well model curve shapes match those of noise-reduced data; the procedure is asymptotically correct as the number of samples increases.
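A minimal numpy sketch of this stationary same-sample-time averaging: residuals are grouped by nominal sample time, each group is averaged, and a fixed-width window of the group means is then averaged while sliding one group at a time. The shapes are illustrative placeholders.

```python
import numpy as np

resid = np.random.default_rng(3).normal(0, 0.05, size=(12, 16))  # cases x times
group_means = resid.mean(axis=0)          # mean residual per nominal sample time
t_nominal = np.arange(16)                 # placeholder nominal sample times

window = 12                               # number of group means per average
n = group_means.size - window + 1
mov_avg = np.array([group_means[i:i + window].mean() for i in range(n)])
t_avg = np.array([t_nominal[i:i + window].mean() for i in range(n)])
print(t_avg, mov_avg)
```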
Extrapolation testing is done without withholding data, by testing with Wilcoxon one-sample signed-rank differences from zero the first and last groups of time-sample residuals from all of the curves in a dataset. Small probabilities indicate that the model is unlikely to extrapolate properly.
§.§.§ Correlation of clearance to volume divided by weight
The reason for establishing that the volume of distribution divided by weight is relatively constant irrespective of body habitus is that CL is spuriously correlated with the volume of distribution via a third, controlling variable: body mass. That is, mice, with their low body mass, and smaller children have lesser clearance than elephants and larger children with larger body mass. It would appear that V/W is a normalisation that should be uncorrelated with clearance for a given population, with certain exceptions. In ascites there is increased V/W, but within a given dataset of ascitic patients there should still not be much, if any, covariation of V/W with CL. In renal failure it is possible to have increased sodium and body fluid in those patients who are not adequately controlled medically, which could lead to a negative correlation between CL and V/W. However, V/W > 1, as well as positive correlations between CL and V/W, would not be so easily explained.
As evidence that the volume of distribution of extracellular fluid[For plasma models, V is the volume of drug distribution, not to be confused with the Volume of urine (also V) of a renal model.] (V) divided by weight (W) is relatively constant, we review a paper in which obese children were misleadingly claimed to have an expanded extracellular fluid space compared to controls (Battistini 1995) [46]. This claim was made based on the relatively reduced lean body mass of obese children, which, as shown next, is irrelevant. Those authors did not examine the volume of distribution (V) by the bromine method divided by body mass (W), i.e., V/W. V/W in that paper was 12.3 litres for 56.8 kg obese children, or 0.217 l/kg ($n=21$). For 18 controls, 8.9 litres corresponded to 41.0 kg body weight, also 0.217 l/kg. That is, there was no difference to three significant figures between the V/W values of obese versus control children.[Battistini et al. used oral dosing of bromide, which is not as defensible as long-term constant infusion, e.g., see Schwartz et al. [47]; thus, although their average obese and normal V/W values are the same, both values may be underestimations.] As the density of human fat tissue is $0.9000 \pm 0.00068$ (mean $\pm$ standard deviation) [48], to have the same extracellular water content per kilogram as denser tissues, there has to be less extracellular water per litre of fat, so there is in no sense an expanded extracellular water content in fatty tissue. What there is, is a relatively reduced intracellular water content in fat cells, because water and fat are not very miscible. The authors neglected to appreciate that the ratio of V to ICW (intracellular water) increases not because V increases disproportionately (it does not, as above) but because ICW relatively decreases as relative fat content increases.
§ RESULTS
§.§ Dataset 1 results
Figure <ref> shows two competing plot types for viewing Dataset 1. The overall linear grouping of Figure <ref>a can be interpreted as concentration propagating in time as a negative logarithm. However, negative logarithms would eventually yield negative concentrations; thus, at some point in time, the logarithm should convert to an $x$-axis asymptote. Panel b shows relatively smooth but pronounced early-time log convexity,
Dataset 1 had 13 data series collected between 5 min and 24 h. These are shown as connected line segments, and plotted in two different ways. Panel a shows the cases plotted as linear concentration versus time on a logarithmic scale. Note the near linearity until late time of the line segments. Panel b shows semilog plots of the same data. Note the early time curvilinearity of the connected line segments.
meaning the early-time data are not linear, and therefore not exponential, on semilog plotting. The curve-fitting errors for the methods compared, using proportional error modelling, are displayed as residual plots in Figure <ref>. Even though Dataset 1 has 13 cases, only 12 cases have 5 min time-samples and 12 have 24 h time-samples. A stationary adaptation of a so-called moving average of same sample-time averages was used, as per the Methods subsection on residual analysis. The standard deviation of those averages increased from a 1.83% mean fitting error for the ln-coth models to a 2.38% error for the LCE models, a 2.87% error for the E2 models and a 14.17% error for the E1 models. For the LCE and ln-coth models, the 12 earliest and 12 latest time-sample errors were insignificantly different from zero (respectively, $p=\{0.364,0.124\}$ and $p=\{1,\,0.675\}$), and very significantly different for the E1 and E2 models (respectively, $p=\{0.002,0.002\}$ and $p=\{0.004,0.002\}$).
Shown are Dataset 1 residuals for two-parameter models in panel a (LCE models) and panel b (E1 models). Panel c shows four-parameter biexponential model residuals. The circles are proportional modelling errors. The heavy black curves are 12-sample moving averages. The probabilities are the likelihoods of the earliest and latest 12 samples having no fit error. The ln-coth fits are similar to panel a; see the text for details.
This suggests that, on average, for accuracy of curve fitting the ln-coth and LCE models with only two parameters outperformed E1 and E2, despite the latter having two extra fit parameters. The standard deviations of the residuals themselves worsen in a different order: 5.69% for E2, 8.01% for ln-coth, 8.42% for LCE, and 20.07% for E1. Thus, the E2 fits, compared to the LCE and ln-coth fits, are overfit. Overfitting can cause a spurious reduction of error under the curve and does cause erroneous extrapolation [49], which, given the significant underestimation of the earliest and latest time-samples, causes underestimation of AUC and overestimation of CL.
The results in Table <ref> show the MRT values longer than the 24 h of data (>1440 min) in bold font. The number of MRT-values longer than 24 h decreased in the order LCE, ln-coth, Tk-GV, E2, E1, having respectively 7, 4, 4, 2, 1 of 13 total. The longer MRT-values led to larger AUC-values and smaller clearances. The number of CL-values in the severe renal insufficiency range ($<20\text{ ml}\cdot\text{min}^{-1}$, bold type) decreased as LCE, ln-coth, Tk-GV, E2, E1, having respectively 5, 4, 3, 3, 1 such values. The smallest CL-value, LCE: 2.4 ml$\cdot$min$^{-1}$, had the longest MRT: 32735 min.
Dataset 1, some LCE, ln-coth, Tk-GV, biexponential (E2) and monoexponential (E1) model results.$^{\text{a}}$

MRT (min):
                 LCE      ln-coth   Tk-GV    E2      E1
Min              389      373       373      373     349
1st Quartile     451      431       453      437     365
Median           1454     1087      830      812     598
3rd Quartile     2873     1999      1995     1189    863
Max              32735    20003     8395     4251    2285
Mean             4096     2662      1595     1115    731

CL (ml$\cdot$min$^{-1}$):
                 LCE      ln-coth   Tk-GV    E2      E1
Min              2.4      3.1       4.0      7.6     12.0
1st Quartile     14.4     18.7      18.7     18.8    20.8
Median           36.6     40.5      37.5     40.2    45.9
3rd Quartile     51.5     47.1      49.8     51.0    52.6
Max              84.4     81.6      79.7     80.3    86.6
Mean             37.3     38.3      37.6     39.7    42.7

$V_\text{MRT}$ (L):
                 LCE      ln-coth   Tk-GV    E2      E1
Min              20.3     18.8      17.6     16.5    13.2
1st Quartile     24.8     21.8      20.7     20.3    17.1
Median           33.1     30.8      26.8     27.9    20.3
3rd Quartile     51.6     42.2      39.9     35.3    30.5
Max              78.9     61.8      59.7     44.2    35.4
Mean             38.4     32.6      31.0     28.4    23.6

$^{\text{a}}$ AUC is unit-dose scaled. Results corresponding to MRT > 24 h and CL < 20 $\text{ml}\cdot\text{min}^{-1}$ are in bold font type.
The volumes of distribution (as $V_\text{MRT}$) decreased overall in the sequence LCE, ln-coth, Tk-GV, E2, E1.
As mentioned in the Introduction, in severe renal insufficiency and/or fluid overload, there are two published suggestions for discarding early time-samples to form better E1 model CL-predictions using 24 h of data. The Wickham et al. E1 $\geq$ 2 h method [7] would have us discard data before 2 h to improve CL-values overall, and the Brøchner-Mortensen and Freund E1 $\geq$ 5 h method would have us discard data before 5 h to better predict severe renal insufficiency CL-values [5]. We compared proportional error regression for E1 models having time-samples $>0$, $\geq2$, or $\geq5$ h with the LCE and Tk-GV CL results. Table <ref> shows the Passing-Bablok regression line predictions of the three CL$_{\text{E1}}$ models against the CL$_{\text{LCE}}$ and CL$_{\text{Tk-GV}}$ values. In that table, as the earliest E1 data are increasingly ignored, the intercepts decrease in magnitude, but the slopes increase.
Dataset 1, Passing-Bablok regression line, $y=m\,x+b$, and confidence intervals (CI) of CL-values of LCE and Tk-GV ($x$, $\text{ml}\cdot\text{min}^{-1}$) versus E1 ($y$) models with various first time-samples, and correlations ($r$).

$x$      $y$            $b$, 95% CI $\left(\frac{\text{ml}}{\text{min}}\right)$   $m$, 95% CI                $r$
LCE      E1             11.63, 8.24 to 14.0          0.823, 0.709 to 0.992      0.97903
         E1 $\geq$ 2 h  6.886, 2.38 to 9.11          0.988, 0.903 to 1.103      0.99091
         E1 $\geq$ 5 h  4.362, $-$0.327 to 6.65      1.144, 1.011 to 1.269      0.98635
Tk-GV    E1             7.386, 2.26 to 9.75          0.899, 0.789 to 1.079      0.97393
         E1 $\geq$ 2 h  1.813, $-$2.34 to 4.86       1.113, 1.007 to 1.252      0.99049
         E1 $\geq$ 5 h  $-$2.410, $-$7.07 to 2.18    1.303, 1.128 to 1.442      0.98723
None of the E1 model types tested have both slopes of 1 and intercepts of 0 with confidence, which means that those E1 models differ from the LCE and Tk-GV models. Moreover, most of the intercepts are positive which, if true, means that to predict LCE or Tk-GV CL-values, negative intercepts would have to be added to the rescaled E1 CL-values.[Note that the equations in Table <ref> can be solved for $x=m^*y+b^*$, where $m^*=1/m$ and $b^*=-b/m$, only because the regressions are Passing-Bablok type. In general, least squares in $y$ does not agree in that fashion with least squares in $x$.] Such intercepts are ill-conditioned as correction formulae because they may produce negative CL-values when CL is reduced. To avoid negatives, E1 correction formulas should be non-linear and pass through the origin with slope zero at the origin when their Table <ref> intercepts are positive.
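A simplified sketch of Passing-Bablok regression follows, assuming numpy; tie handling and the rank-based confidence intervals of the full estimator are omitted, and the function name is an assumption:
\begin{verbatim}
import numpy as np

def passing_bablok(x, y):
    # Slope: shifted median of all pairwise slopes, discarding slopes
    # of exactly -1 and offsetting the median index by the count of
    # slopes below -1; intercept: median of y - m*x.
    x, y = np.asarray(x, float), np.asarray(y, float)
    S = []
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            if x[j] != x[i]:
                s = (y[j] - y[i]) / (x[j] - x[i])
                if s != -1.0:
                    S.append(s)
    S = np.sort(S)
    N, K = len(S), int(np.sum(S < -1.0))
    if N % 2:
        m = S[(N + 1) // 2 + K - 1]
    else:
        m = 0.5 * (S[N // 2 + K - 1] + S[N // 2 + K])
    return m, np.median(y - m * x)
\end{verbatim}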
One quick way to check LCE and Tk-GV accuracy is to divide their mean CL-values by the mean E1 CL, which yields ratios of 0.873 for LCE and 0.879 for Tk-GV. Those ratios agree with the Chantler-Barratt [9] E1 correction factor of 0.87, so the LCE and Tk-GV mean CL-values, at least, are not implausible. However, as our objective was to explore the entire range of CL-values with special attention to decreased renal function, it behoved us to do the same thing that Chantler and Barratt did: compare with urinary drug mass excreted. Thus, we next analysed Dataset 2, which has that information.
§.§ Dataset 2 results
Table <ref> shows Passing-Bablok regression slopes and intercepts with 95% confidence intervals for Dataset 2's 43 useful urinary masses at circa 300 min, compared to the predicted amounts from 8 plasma models.
Dataset 2, Passing-Bablok regression lines, $y=m\,x+b$, and confidence intervals (CI) for 8 models versus urine [$^{51}$Cr(EDTA)]$^-\,\times10^6$ cpm, number of cases ($n$) and correlations ($r$).

Model vs urine ($10^6\cdot$cpm)   $b$, 95% CI                 $m$, 95% CI               $n$    $r$
LCE > 14 min$^{\text{a}}$         0.129, $-$0.188 to 0.527    1.002, 0.893 to 1.119     36     0.95429
LCE                               0.367, $-$0.191 to 0.835    1.070, 0.903 to 1.232     43     0.90124
ln-coth                           0.604, 0.147 to 1.258       1.107, 0.896 to 1.271     43     0.88820
NC                                0.853, 0.487 to 1.548       1.036, 0.838 to 1.203     43     0.88431
Tk-GV                             0.910, 0.402 to 1.337       1.018, 0.877 to 1.168     43     0.89411
E1 $\geq 2$ h                     0.989, 0.537 to 1.490       0.991, 0.826 to 1.149     43     0.88570
E2                                1.098, 0.624 to 1.649       1.027, 0.849 to 1.179     42     0.88507
E1                                1.676, 0.820 to 2.065       1.046, 0.846 to 1.248     43     0.87096

$^{\text{a}}$ LCE > 14 min was adjusted to 4 min earlier than the urine collection time. This was the only model compared to $\sim$13 ml (residual) augmented urine volume (and drug mass), with 7 cases discarded that had < 70% of the predicted urinary drug mass.
Most of these regressed models appear in Figure <ref>. Only the LCE and LCE > 14 min models had 95% confidence intervals for intercepts that included zero, but all 8 plasma models had slope confidence intervals that included one.
As used for clinical laboratory assay calibration, Passing-Bablok type I regressions were used to evaluate equivalent or replacement same-scale methods for Dataset 2's voluntarily collected urine drug mass, measured as 10$^6$ counts per min (cpm) of [$^{51}$Cr(EDTA)]$^-$ activity. Panel a shows mono- and bi-exponential (E1 & E2), Tk-GV and LCE urinary mass predictions. Panel b shows bladder-residual-adjusted urine mass versus LCE urinary mass predicted 4 min earlier from fits starting with > 14 min plasma data, after discarding 7 cases with less than 70% of the LCE-predicted urine drug mass. Only the LCE and LCE > 14 min models had no significant difference (95% CIs) from slopes of 1 and intercepts of 0 compared to urinary drug mass collected. The ln-coth model is not shown to avoid overlap, but appears in Table <ref>.
The LCE > 14 min model served to further demonstrate that the error between the LCE model and the urine mass collected was negligible; the average error was reduced to 0.4% by making multiple corrections. Those were: correction of the urine count rate for a 13.014 ml expected bladder residual; correction for a urine transit time of four min; a slight improvement in the LCE fits obtained by dropping early time-samples, leaving a start time of > 14 min (LCE > 14 min); and, finally, discarding the 7 recalculated urine samples with less than 70% of the then-adjusted LCE-predicted activity, to adjust for missing urine collections. This yielded tighter confidence intervals and better correlation, and is illustrated in Figure <ref>b. It is not known in absolute terms whether the voluntary urine collections used here were incomplete [26], and the literature is quite clear that a 70% cutoff is heuristic [32]. The histogram of ratios of corrected urine drug mass collected to corrected LCE > 14 min predicted mass in Figure <ref> shows no strong evidence of two separate populations, but the sample size is small.
Shown is a histogram of counts per minute (cpm), the radioactive equivalent of mass, in adjusted urine volume divided by the LCE > 14 min model predicted cpm at 4 min earlier than the end time of total urine collection. Note the blue line showing the discard upper limit of 0.7 used to create Figure <ref>b. Even though there are no values in the bin containing the 0.7 cut-point, there is no significant grouping into two populations: one with, and one without, missed urine collection. The two apparent large-ratio outliers may be due to underestimation of mass excreted by the LCE > 14 min models.
Nonetheless, one expects renal drug collection to be mass deficient at any particular elapsed time compared to pre-renal loss of drug mass at that same time due to numerous problems, as per the Uprob Methods subsection, including: possible missed collections of urine, urinary system transit delay time, possible bladder resorption of drug, possible increased urine dead spaces, and possible intermittent urinary obstruction. Thus, renal clearance appears to be a lower limit for reference clearance values. To complete the analysis, an upper limit for reference values was explored.
As we have seen for monoexponentials and biexponentials, the first and last time-sample concentrations are almost always underestimated, leading to overestimation of clearance. Consequently, the reference standard in common usage in pharmacokinetics, the noncompartmental plasma clearance method [39], which also uses exponential functions to extrapolate concentrations, is, as per the methods used here, exact at the extreme sample times but still underestimates extrapolated and back-extrapolated concentrations [3, 22, 24, 50]. Thus, the two standards in common usage, renal clearance and noncompartmental clearance, can be used to establish respective lower and upper bounds for reference standard values, to then explore which, if any, of the other curve-fitting methods examined produce results within those bounds. To explore this, the differences between NC and other model results were examined using Theil-Sen lines rather than Passing-Bablok regression, as the latter is not useful for difference functions; see the RA Methods subsection. Figure <ref>a shows Theil-Sen regression lines fit to the
Shown are various models' values minus each paired noncompartmental (NC) value. In panel a, Theil-Sen lines are shown from fits to those pair-wise differences. The grey area between NC-estimated and (Urine) measured drug fraction illustrates the upper and lower bounds for a reference standard. Note that the curve fit methods are less accurate (with the exception of Tk-GV) when the mass excreted and clearance were reduced, with E1 and E2 overestimating NC clearance. Panel b shows the non-parametric statistics as boxplots, including median, quartiles, confidence intervals and outliers for each difference. The worst percentages of values within the bounds of the standard ranges were seen for E1 (0%) and E2 (22%), and were >50% for all other fit functions. Note the increased variability of urinary drug mass collected minus NC drug mass excreted (Urine).
models' predicted mass excreted with the paired noncompartmental values subtracted out. Using NC as the basis for this calculation, rather than drug mass in urine, reduces noise, if for no other reason than that plasma-sample models are more alike to each other than any of them are to urinary drug mass measurements. Interestingly, the Theil-Sen regression slope of the urine mass minus the NC-predicted mass excreted at that same time is minuscule (0.00653). Assuming that a proper reference method should lie between the NC-predicted mass and the measured drug mass in urine at that time, only two fit models' regression lines meet that criterion: the Tk-GV and ln-coth models. Overall, the models performed worse for reduced renal function than for normal function, as illustrated by their fan-shaped divergence to the left of Figure <ref>a. In the reduced-function range, the models ranked from overestimating to underestimating as E1, E2, E1$\,\geq\,$2 h, NC, Tk-GV, ln-coth, urine mass, LCE and LCE$\,>\,$14 min. The E1 and E2 model lines did not cross into the reference standard range at any level of function. Figure <ref>b shows the sequentially decreasing medians of the model-minus-NC pairs and the percentage of values for each method falling between the actual individual-case values of the upper and lower reference standards. Of these, as likewise for Figure <ref>a, the best fit-function behaviour overall is from Tk-GV, having the least slope for a fit function (0.0109), the second-least overall variability, the best symmetry, and a good percentage of values within the reference standard range (69%). The LCE$\,>\,$14 min fit model had the largest percentage of values within the reference standard range at 74%, followed by LCE fit to all time-samples at 71%; 55% of the E1$\,\geq\,$2 h results were within the reference range.
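The Theil-Sen line and its slope confidence interval are available directly in scipy; a minimal sketch with illustrative values (these are not Dataset 2 values):
\begin{verbatim}
import numpy as np
from scipy.stats import theilslopes

# Pairwise differences (model minus NC mass excreted) regressed on
# NC mass excreted; all numbers are illustrative.
nc_mass = np.array([0.9, 2.1, 3.4, 4.8, 6.2, 7.9, 9.5])
diff    = np.array([0.4, 0.3, 0.2, 0.1, 0.1, 0.0, -0.1])

slope, intercept, lo, hi = theilslopes(diff, nc_mass)
print(f"slope {slope:.4f} (95% CI {lo:.4f} to {hi:.4f}), "
      f"intercept {intercept:.4f}")
\end{verbatim}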
There were few results in the renal insufficiency range in Dataset 2: the least plasma CL-values for LCE, Tk-GV, E2 and E1 were 13.0, 24.3, 25.0 and 26.4 ml$\cdot$min$^{-1}$, respectively, with (uncorrected) LCE having three CL-values less than 20.0 ml$\cdot$min$^{-1}$. Tk-GV clearance was 3.33 ml$\cdot$min$^{-1}$ below the NC value (mean, $p = 0.0001$, two-sided t-test), and its Passing-Bablok intercept was 5.47 ml$\cdot$min$^{-1}$ below the NC value. The Chantler-Barratt style correction factor (OLS regression through the origin for E1$\,\geq\,$2 h to predict LCE CL) for Dataset 2 was 0.781.
§.§ Dataset 3 results
Dataset 3 consists of 41 adult studies using the [$^{169}$Yb(DTPA)]$^{2-}$ anion with eight time-samples collected from 10 min to 4 h. Of interest for this dataset was how the formulae behaved 1) for a different GFR marker, 2) for subjects who did not have evidence of fluid disturbance and 3) for severe renal insufficiency. Upon LCE identification of nominal CL-values $<20\text{ ml}\cdot$min$^{-1}$, the dataset was sorted into cases with and without evidence of severe renal insufficiency. This is shown in Figure <ref> as a clear difference between the behaviour of those two
Dataset 3 linear-log plots shown for all cases in panel a, without renal failure in panel b and with failure in panel c.
groups of studies. That is, the suspected severely renal insufficient cases changed only slightly in concentration over 4 h (Figure <ref>c), with linearly decreasing concentration against elapsed time on a logarithmic scale. The more normal renal cases, Figure <ref>b, approached the $t$-axis in late time as a group, with sometimes slight asymptotic flattening in late time. The LCE model fit error for all 41 cases (Figure <ref>a) was significantly greater than the fit error of the eight renal insufficient cases (4.88%). The E1 models' error of fitting to these cases was 6.87%, and significantly more variable than the LCE fit error (Conover $p=0.003$).
Figure <ref> shows plots of the minimum and maximum plasma clearance cases for LCE and E1, where the LCE
Dataset 3 linear-log plots of the greatest (case 15, panels a and b) and least (case 19, panels c and d) plasma CL-values from LCE and E1 models. Panel a, greatest CL LCE model. Panel b, greatest CL E1 model. Panel c, renal failure LCE model, and Panel d, renal failure E1 model. In panels a & c the solid lines are the LCE models, the straight dot-dashed lines are the logarithmic early asymptotes and the dashed lines are the terminal exponentials. In Panel c, the early asymptote and model curve are superimposed.
CL-values ranged from $9.27\times10^{-10}$ to 163.7 ml$\cdot$min$^{-1}$, and for E1 from 4.30 to 176.1 ml$\cdot$min$^{-1}$, respectively for cases 19 and 15. Overall, the fits for the LCE models have a 4.85% standard deviation of proportional error, compared to 10.10% for E1. Note that these errors are approximately half of the values seen for Dataset 1, whose data were acquired for six times as long, i.e., 24 h versus 4 h. Figure <ref>a shows an asymptotic approach to the time-axis after $t_x$, the intersection of the exponential curve and the early-time asymptote, which is a straight line on linear-log plotting. However, in Figure <ref>c, the LCE model and its logarithmic asymptote are superimposed and the exponential (dashed) is flattened. In this worst case, the asymptotes intersected at a geologically long time: 4979 millennia.
Dataset 3 renal failure candidates' LCE, ln-coth, Tk-GV, E2 & E1 model CLs (ml$\cdot\text{min}^{-1}$).

Study No   LCE                   ln-coth               Tk-GV   E2     E1
19         9.27$\cdot10^{-10}$   1.24$\cdot10^{-9}$    1.24    2.60   4.30
6          1.19$\cdot10^{-6}$    1.58$\cdot10^{-6}$    2.85    5.56   7.05
36         0.0312                0.0416                6.29    5.63   18.2
41         0.406                 0.504                 10.0    11.7   22.3
3          1.06                  1.41                  9.49    13.9   20.3
31         1.13                  1.48                  5.72    8.30   17.3
18         2.89                  3.83                  27.2    43.5   48.7
40         3.17                  4.18                  17.0    20.7   30.0
In Table <ref>, the largest LCE CL-value for these suspected renal failure cases, 3.17 ml$\cdot\text{min}^{-1}$, had the shortest $t_x$ at 7.27 days, still largely beyond the capacity for validation of most experiments. The E1 model only identified half of the eight severe renal insufficiency candidates of the LCE models. Proper identification of renal failure from E1 model usage is implausible, as all 41 E1 models of Dataset 3 underestimated the concentrations of the first sample-times and 39 of 41 underestimated their last sample-time concentrations (Wilcoxon one-sample two-tailed $p\ll0.0001$), which corresponds to systematic overestimation of CL, just as Schloerb observed. Similarly, the E2 first and last time-samples were significantly underestimated. The Tk-GV model identified seven of the eight cases having LCE CL $<20$ ml$\cdot\text{min}^{-1}$, but at multiples of the LCE-predicted plasma clearance values. The Chantler-Barratt style correction factor for Dataset 3 was 0.810 using LCE as the reference standard, and 0.819 using Tk-GV.
§.§ Results, all datasets
For the total of 98 subjects analysed, there were 16, 13, 10, 9, and 6 having GFR-values < 20 ml$\cdot$min$^{-1}$, respectively, for the LCE, ln-coth, Tk-GV, E2 and E1 models. The 95% reference intervals for GFR were: for the LCE model, 0.015 to 167.9 ml$\cdot$min$^{-1}$; for the ln-coth model, 0.020 to 172.7; for the Tk-GV model, 3.38 to 163.9; for the E2 model, 5.59 to 174.0; and for E1, 9.40 to 182.2 ml$\cdot$min$^{-1}$, which explains the methods' frequencies of detection of GFR-values < 20 ml$\cdot$min$^{-1}$; e.g., E1 was unlikely to return a GFR value lower than 9.40 ml$\cdot$min$^{-1}$. Figure <ref>
Superimposed are the Q-Q plots for LCE (open circles) and E1 (open triangles) GFR-values. The solid grey lines give the locations of normally distributed values. The LCE values become abnormal very close to zero clearance. However, the E1 GFR-values transition beginning at approximately 20 ml$\cdot$min$^{-1}$, which suggests why it is difficult to measure GFR < 20 ml$\cdot$min$^{-1}$ using current methods.
shows how this occurred by quantile-quantile (Q-Q) plotting of all 98 GFR measurements for the LCE and E1 models. This type of plot shows how measured values depart from the theoretical distribution used, in this case the normal distribution. If one supposes that GFR-values are normally distributed, a problem occurs because normal distributions extend from negative to positive infinity, but GFR values cannot be less than zero. In practice, that means that there should be a departure from normally distributed GFR-values in the region near zero GFR. Indeed, there is an abrupt departure from a normal distribution for the LCE model CL-values near zero, and a more gradual transition for the E1 CL-values. To investigate how abrupt this change should be, the correlations between CL and fluid volume divided by weight, $\mfrac{V_\text{MRT}}{W}$, were examined; see Table <ref>. Referring to that table, it is not obvious why the pattern of significance differs for Dataset 2. The difference in pattern implies procedural or population differences between datasets, such that the 98 studies were not correlation-tested as a single group. Instead, a weighted average of the correlations obtained in each dataset was used to rank the correlations of each CL method with its V$_\text{MRT}/$W from greatest to least as E1, E2, Tk-GV, ln-coth, and LCE.
Correlations of CL with V$_{\text{MRT}}$/W

Dataset    1         2                    3         all ($n$-weighted mean)
$n$        13        44 (E2 43)           41
E1         0.38      0.52$^{\text{a}}$    0.27      0.40
E2         0.22      0.33                 0.12      0.23
Tk-GV      $-$0.09   0.18                 $-$0.01   0.06
ln-coth    $-$0.57   $-$0.01              $-$0.53   $-$0.30
LCE        $-$0.67   $-$0.15              $-$0.54   $-$0.38

$^{\text{a}}$ Significant results ($p<0.05$) in red.
For the three datasets, only Tk-GV had zero significant correlations. Taking this at face value, it would seem that the Tk-GV models yielded the more reliable volumes of drug distribution. As a further example, for Dataset 2 the noncompartmental reference standard CL-values were significantly correlated with their tediously calculated V$_\text{MRT}/$W, R = 0.36, with a 95% confidence interval of 0.07 to 0.59, a significant result comparable to that of the E2 models.
Both Kruskal-Wallis rank testing and one-way ANOVA showed significantly different central measures of clearance between the hepatorenal-compromised subjects in Dataset 1 (mean 37.3 ml$\cdot$min$^{-1}$) and the other datasets. However, there was no significant difference between Datasets 2 and 3 (means 73.6 and 77.0 ml$\cdot$min$^{-1}$, respectively) for LCE (or Tk-GV) CL-values from the [$^{51}$Cr(EDTA)]$^-$ and [$^{169}$Yb(DTPA)]$^{2-}$ anions, despite moderate to tense ascites in the former and the lack of fluid disturbance in the latter (Dataset 3): "Patients with edema ... were excluded from the study." [27] Moreover, that clinical history can be examined retrospectively using the Tk-GV measures of V$_\text{MRT}/$W with one-way ANOVA or the Kruskal-Wallis test, the results of which were in agreement; the ANOVA results are easier to follow. The mean Tk-GV V$_\text{MRT}/$W values for Datasets 1, 2, and 3 were 0.386, 0.293 and 0.248 l/kg, respectively. Normal extracellular fluid volume following 7.5 h (mean, $n=7$) of constant infusion of thiocyanate anions was found to be 0.246 l/kg (mean) by Schwartz et al., Table I [47], such that the nearly identical Dataset 3 Tk-GV mean of 0.248 l/kg seems to be in the normal range despite the methodological differences between studies. However, by Dunnett contrasts, V$_\text{MRT}/$W was significantly increased in Datasets 1 and 2 compared to Dataset 3. In other words, there is no evidence of fluid disturbance in Dataset 3, whereas Datasets 1 and 2 have significant relative fluid disturbance.
Seven of the ten Tk-GV CL-values less than 20 ml$\cdot$min$^{-1}$ were from Dataset 3, none were from Dataset 2 and three were from Dataset 1, such that if there were negative correlations between CL and V$_\text{MRT}/$W for the Tk-GV values, they would be seen in Datasets 1 and 3. There were small-magnitude, insignificantly negative correlations between CL and V$_\text{MRT}/$W from Tk-GV processing in Datasets 1 and 3; see Table <ref>. Thus, one can say that the Tk-GV values for V$_\text{MRT}/$W are apparently consistent with the clinical history.
On the other hand, LCE and ln-coth had significantly negative correlations between CL and V$_\text{MRT}/$W for Datasets 1 and 3, with some V$_\text{MRT}/$W values > 1 at reduced CL. That type of physiologic behaviour cannot be ruled out with certainty, but at face value it seems less plausible than the results from Tk-GV.
Finally, the Chantler-Barratt style E1$\,\geq\,$2 h correction factor using the LCE model as the standard for all 98 cases was 0.800, and using Tk-GV CL-values it was 0.824.
§ DISCUSSION
The initial concentration at a peripheral venous sampling site is zero at the time of a bolus intravenous injection in a different vein. Modelling the entire concentration curve, including the zero initial concentration, requires more early data, processing and theory than are typically used for routine drug assessment [51, 40]. The alternative is to use incomplete models that do not model the very rapidly changing early vascular effects, with the caveat that the first time-sample be drawn some minutes or hours following the time of peak venous concentration. How many minutes or hours following injection one should wait to take samples depends on the model. For the Tk-GV model, 5 min is enough. For bolus injections of inulin and [$^{99m}$Tc(DTPA)]$^{2-}$ anions, 25 min in adult humans was the time at which arteriovenous concentration differences equalised [25]. The LCE model produced possibly slightly better results with sampling times starting at 15 min rather than 5 min for Dataset 2, compared to start times beginning at 2 or 5 h and continuing to 24 h for E1, as suggested by Wickham et al. [7] and Brøchner-Mortensen and Freund [5], respectively. Table <ref> shows this effect for Dataset 1, the only dataset with 24 h data. Compared to the LCE and Tk-GV model CL-values, using an E1 model with 24 h data beginning at 2 or 5 h proved more accurate than fitting E1 to the complete data beginning at 5 min, but this comes at the cost of having to acquire 24 h data and still having to use correction formulas (Chantler-Barratt, Brøchner-Mortensen, and others).
Not unexpectedly, the results showed that attempts to fit E1 or E2 to time-limited data resulted in poor-quality fits of the AUC-underestimating type, attributable to the curve shape of the data being more linear-logarithmic than exponential. This was the same problem for all three datasets, and is shown for Dataset 1 in Figure <ref>. Change in concentration apportioned logarithmically in time is not unknown. For instance, in Datasets 1 and 3 above, the time-samples were independently selected to be drawn at times that form a nearly geometric progression, where, for example, a perfectly geometric progression would be a doubling time: 7.5, 15, 30, 60, 120, 240, 480,$\dots$ min. Such a scale is equidistant when its logarithm is taken, and the motive for doing so is to acquire data such that the change in concentration is more or less linear and detectable between time-samples. So, clearly, equal log-time sample spacing is appreciated by some experimentalists. The search for incorporating that observation into a plausible model, one forming a better basis for quantifying concentration curves than exponentials, yielded the several models here, and potentially many others. For a more general model, $C(t)=c\ln \big(\frac{\alpha}{e^{\beta \,t}-1}+1\big)$, Lambert's $W$ solves $t_x=\frac{W(\alpha)}{\beta}$ as the time at which the asymptotes are equal. For example, the LCE model results from setting $\alpha=1$, and the ln-coth model results when $\alpha=2$. In even more general terms, the asymptotes of the negative logarithms of sigmoid functions may not intersect at all. For example, $-c\ln[\text{erf}(\beta\,t)]$, where erf is the error function, has a tail whose decay is so fast (a light tail, in statistical terms) that an intersection of its asymptotes[The asymptotes are $c\ln \left(\frac{\sqrt{\pi }}{2 \beta \,t}\right)$ and $-c\ln \left(1-\frac{e^{-(\beta \,t)^2}}{\sqrt{\pi } \beta \,t}\right)$.] does not occur. However, even in that case, there is a local minimum of the concentration difference between those asymptotes that signals when the character of the curve changes from its predominantly logarithmic shape to its tail shape.
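A minimal numeric sketch of $t_x=W(\alpha)/\beta$, assuming scipy's Lambert $W$ and an illustrative $\beta$:
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def t_x(alpha, beta):
    # Intersection time of the early logarithmic and late exponential
    # asymptotes of C(t) = c*ln(alpha/(exp(beta*t) - 1) + 1).
    return float(np.real(lambertw(alpha))) / beta

beta = 0.01                   # per minute, illustrative
print(t_x(1.0, beta))         # LCE:     Omega/beta ~ 56.7 min
print(t_x(2.0, beta))         # ln-coth: W(2)/beta  ~ 85.3 min
\end{verbatim}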
The behaviour of negative log-sigmoid functions is every bit as complicated as that of biexponentials. For a biexponential, without loss of generality, one assumes $\lambda_1>\lambda_2$, and there are two compartments: a central compartment with concentration $C_C(t)=c_1e^{-\lambda_1 t}+c_2e^{-\lambda_2 t}$, and a peripheral compartment with concentration $C_P(t)=\frac{c_1\, \lambda_2+c_2 \,\lambda_1}{\lambda_1-\lambda_2}(e^{-\lambda_2 t}-e^{-\lambda_1 t})$. The time at which those concentration curves are equal is $\frac{\ln \left(\lambda_1/\lambda_2 \right)}{\lambda_1 -\lambda_2 }$. Sometimes (if rarely) called the time of pseudoequilibrium, that is also the time at which the $C_P$ concentration peaks; before it lies what is called the distribution phase, and after it the elimination phase. So, recapping, the $-$log-sigmoid functions have distinct curve phases just as biexponentials do, but with a larger selection of tail behaviours, whereas biexponentials have twice the number of parameters for little gain in goodness of fit.
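A short numeric check of the pseudoequilibrium time, with illustrative parameters:
\begin{verbatim}
import numpy as np

def pseudoequilibrium_time(lam1, lam2):
    # Time at which C_P(t) peaks for a biexponential; lam1 > lam2 > 0.
    return np.log(lam1 / lam2) / (lam1 - lam2)

lam1, lam2, c1, c2 = 0.1, 0.01, 5.0, 1.0   # illustrative values
k = (c1 * lam2 + c2 * lam1) / (lam1 - lam2)
t = pseudoequilibrium_time(lam1, lam2)     # ~25.6 time units here
# C_P'(t) = k*(lam1*exp(-lam1*t) - lam2*exp(-lam2*t)) vanishes at t.
print(t, k * (lam1 * np.exp(-lam1 * t) - lam2 * np.exp(-lam2 * t)))
\end{verbatim}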
Biexponentials, and higher-order mammalian models, have compartments that imply a diffusion or osmotic barrier type of exchange predicated upon flow being proportional to a concentration difference across semipermeable membranes (forward osmosis), whereas many capillary membranes are washed, and undergo some combination of reverse osmotic and bulk pore flow with solutes being carried away on both membrane sides, such that, as a first approximation, GFR markers are transported physically due to pressure differences, not concentration differences. In the kidney this is called ultrafiltration. To put it another way, when clearance is held constant, we typically assume that renal filtration of a solute is proportional to the concentration of that solute, and as much of the small-molecule transport to the interstitium occurs in those capillaries that have similar architectures, pressure differences and functionality [52], there is no special call for diffusion barrier modelling, at least not for GFR markers. Schwartz et al. (1949) [47] thought that inulin (circa 3,500$-$5,500 Da) was a better extracellular fluid marker than thiocyanate or bromide, based on the assumption that diffusion into intracellular water was occurring for the smaller molecules. The concept of molecular sieving through capillary pores dates from Pappenheimer et al. (1951) [53, 54] and provides an alternative explanation. That concept implies that inulin's volume of distribution is smaller because the molecule is so large that its flow through capillary pores is partly impeded, which would not be the case for smaller molecules like the EDTA and DTPA chelates, thiocyanate and bromide. Electron microscopic examination of renal glomerular and other capillaries has demonstrated the existence of these high-flow-rate pores in some but not all tissues [52]. Finally, those tissues without high-flow-rate pores still have reverse osmosis as an anion transport mechanism.
An alternative explanation for biexponential behaviour with greater generality does not invoke compartments. That is the variable volume model, as first proposed by Niazi and later extended to all concentration-scaled semi-infinite density functions [55, 50]. Indeed, examination of how drug volume changes in time shows that biexponential and other summed exponentials all have a large initial volume of distribution, which is unphysical, and is unlike Figure <ref>, which is a variable-volume-of-drug-distribution plot that starts with zero initial drug volume by virtue of having an unboundedly large initial concentration from an LCE model. It is possible to extend biexponentials and other washout models to have zero initial concentration and zero initial drug volume by convolution modelling, but that is not a feature of simple washout models. George Box, a statistician, is well known for having written "Essentially, all models are wrong, but some are useful" [56]. Indeed, sums-of-exponential-term models are used almost to the exclusion of everything else. However, E2 models are all too frequently unphysical, and E2 was the only model type that was not robust when applied to the data. For example, Dataset 2 subject 19 had zero E2 clearance, while the other models gave 38-49 ml/min. Finally, E2 models were outperformed by noncompartmental methods.
The development of clinical reference standards is important [57]. There is a need to refine reference standards for measured GFR. All too frequently, a GFR reference standard is assumed without preamble to be a true gold standard. For example, one of the E1$\,\geq\,$2 h clearance correction papers cited above used the word true to describe an E2 model no less than two dozen times, the same model shown here to produce inflated CL-values compared to noncompartmental methods. In turn, noncompartmental methods yielded inflated CL-values compared to renal clearance, which some authors assume to be true clearance [3]. Measurement standards evolve only through extensive testing. Gone, for example, is the circa 1901 platinum-iridium standard reference kilogram [58]. Finally, in 2019, following an effort lasting four decades, the kilogram was redefined by fixing the numerical value of Planck's constant, making one kilogram proportional to that constant multiplied by the $^{133}$Cs hyperfine transition frequency and divided by the speed of light squared; the redefinition is precise and accurate to within several parts per billion. Compared to that, the median difference of 14 parts per hundred between the NC and urinary drug standards seems imprecise.
Of the many tests performed in this paper, several stand out as critical to our understanding of what a plausible reference standard for measured GFR should be. An important result was the test for correlation of CL with volume of distribution divided by weight, Table <ref>. An unanticipated outcome was that the least correlated results were for the Tk-GV models, which were without significant correlation for all three datasets. This reflects the assumption that CL and the drug volume of distribution should be largely uncorrelated. In that same vein, Figure <ref> is particularly revealing. In Figure <ref>a, we noted that many of the eight models compared to the noncompartmental models had more error measuring decreased cleared mass than the more normal-range values did. This is an increased error of absolute drug mass cleared, not just a relative or percentage value. That result implicates reduced clearance as meriting special attention in the evaluation of reference standards, and clinicians should be aware that reduced GFR measurements obtained from most current methods tend to underestimate the severity of the reduction. Of the eight methods compared to noncompartmental methods in Figure <ref>, only two were not slope-biased at the lower end of renal function: the Tk-GV plasma clearance method and the renal clearance method. The most moderate, and the most plausible, of these methods was the Tk-GV method, which is unfortunate because it is not widely available; those interested in using it should contact the author. Compared to the NC method, the Tk-GV results were the second least variable (Figure <ref>b). That is not too surprising, because the least variable (by a hair) was the E2 model, which structurally is closely related to the NC reference standard to which all other models were compared in that figure. However, the E2 models yielded so few results (22%) between those of the NC method and the actual urine mass of drug collected that they are not plausibly accurate measurements. The most frequent results within the reference range, 74%, were from LCE fits starting at > 14 min.
§.§ Discussion, clinical relevance
Using current methods, few measured plasma and renal GFR clinical studies are performed for patients having less than 20 ml$\cdot$min$^{-1}$; e.g., there were none in Dataset 2. Renal clearance was well emulated by plasma CL$_{\text{LCE}\,>\,14\;\text{min}}$. However, neither measure included reduced CL-values, and a patient having a 10 ml$\cdot$min$^{-1}$ Tk-GV clearance may merit different management than one with 0.406 to 22.3 ml$\cdot$min$^{-1}$, i.e., the range of CL-values for study 41 in Table <ref>. In lieu of a direct measurement of GFR, a current practice is, for example, to use the average of creatinine and urea renal CL-values, or 24 h creatinine renal CL-values, as well as urinary albumin levels, as rough indicators of appropriate clinical management [5]. Even using exogenous radiotracers, bolus injection urinary collection measurements are problematic; see the Uprob Methods subsection for details. For example, an oliguric patient may have undetectable renal clearance values. In prior work, Tk-GV clearances were more accurate and precise than E2 clearances [21]. Most current plasma clearance methods fail the Schloerb challenge to quantify a lack of renal function, but the Tk-GV method apparently succeeded. However, only prospective studies can determine how a method agrees with other patient management indicators, including selecting patients for dialysis, or reliably detecting even moderate loss of renal function from chemotherapy, radiation therapy, surgery, or disease.
To use Tk-GV as a reference standard for conversion of commonly performed procedures, a new Chantler-Barratt formula was constructed using E1$\,\geq\,$2 h clearance values; see Figure <ref>. This yielded
Shown are 98 cases used with two models to predict Tk-GV GFR results as a reference standard. Panel a shows conversions of E1$\,\geq\,$2 h to Tk-GV clearances using a new Chantler-Barratt slope (red line) and the exponential of a quadratic polynomial of logarithms of E1$\,\geq\,$2 h (black curve). Panel b shows prediction of Tk-GV clearances using the pairwise sum of weighted E1 and LCE CL-values obtained using all samples.
\begin{equation*}
\begin{aligned}
&\text{CL}_\text{Tk-GV}=0.8243\,\text{CL}_{\text{E1}\geq2\text{ h}}\;,\\
&\text{R}=0.9908,\;\;\text{SE}=5.89\text{ ml/min},\;\;\text{Relative error}=39.0\%\;,
\end{aligned}
\end{equation*}
where R is the ANOVA correlation, SE is the standard error of the measurements, and the relative error is one standard deviation of proportional error. The new formula yields GFR-values that are $0.8243/0.87=0.9475$ times the old correction factor's GFR-values because of the new reference standard used, i.e., CL$_{\text{Tk-GV}}$. This provides a crude indication of how decreased CL$_{\text{Tk-GV}}$ values are ($0.9475\times$) compared to using corrected CL$_{\text{E1}\geq2\text{ h}}$ values. However, the relative error is intractably large, as follows. Below a raw CL$_{\text{E1}\geq2\text{ h}}$ of 60.7 ml/min, which corresponds to a corrected GFR of 50 ml/min, the proportional error is 66.7%. This is largely due to the 12.5% of clearances < 50 ml/min that are inflated by 33% to 307%. Thus, corrected CL$_{\text{E1}\geq2\text{ h}}$-values are not reliable measures of reduced clearance, which provides a justification for clinicians not relying on such measurements. Chantler and Barratt found an actual regression-line slope of 0.77 when the line was constrained to go through the origin, and then added 0.10 to make 0.87 as a correction for venous rather than arterial sampling, but did so without any supporting results to validate that hypothesis [9]. The finding of 0.82 here is the average of 0.77 and 0.87, and is supported by results. Chantler and Barratt also found problems with nonlinearity, especially for low or very high GFR-values. This is not unexpected given that their unconstrained regression line was CL$_{\text{renal}}\approx 0.70\,\text{CL}_{\text{E1}\geq2\text{ h}}+7.2$ (1.73 m$^2\cdot$eBSA$^{-1}\cdot$ml$\cdot$min$^{-1}$). In the Introduction, it was mentioned that a nonlinear correction for E1$\,\geq\,$2 h clearance values should have a slope of zero as it approaches the origin, to eliminate the constant appearing in an unconstrained linear regression. To account for the nonlinearity at the origin, a formula with asymptotically zero slope as CL$\to0$ (there are many) was obtained by fitting a quadratic, $P_2(x)=a_0+a_1 x+a_2 \,x^2$, to the logarithms of the 98 Tk-GV and E1$\,\geq\,$2 h clearances, provided that $a_2<0$. As
\[\ln \text{CL}_{\text{Tk-GV}}=P_2(\ln \text{CL}_{\text{E1}\geq2\text{ h}})=-1.088+1.398 \ln \text{CL}_{\text{E1}\geq2\text{ h}}-0.04374\, (\ln \text{CL}_{\text{E1}\geq2\text{ h}})^2\;\;,\]
has $a_2=-0.04374<0$, that condition indeed holds. To then predict Tk-GV clearances, one takes the exponential of both sides of the above to obtain
\begin{equation*}
\begin{aligned}
\text{CL}_{\text{Tk-GV}}&=\exp\left(-1.088+1.398 \ln \text{CL}_{\text{E1}\geq2\text{ h}}-0.04374\, \ln^2 \text{CL}_{\text{E1}\geq2\text{ h}}\right),\\
\text{R}&=0.9912,\;\;\text{SE}=5.76\text{ ml/min},\;\;\text{Relative error}=27.8\%\;,
\end{aligned}
\end{equation*}
which is improved compared to the errors of the new Chantler-Barratt regression. This is the solid black non-linear curve in Figure <ref>a. Notice in the inset of Figure <ref>a that the five smallest GFR values form a pattern that looks like the number 5 on a die, i.e., uncorrelated and disappointingly variable; none of those values are less than 5 ml/min. However, correcting nonlinearity by brute force is unnecessarily complicated. Instead, Tk-GV CL-values can be estimated by weighted averaging of CL$_\text{LCE}$ and $\text{CL}_{\text{E1}\geq2\text{ h}}$, i.e., a weighted average of CL-values from two independent models fit to the same data:
\begin{equation*}
\begin{aligned}
&\text{CL}_{\text{Tk-GV}} = 0.2673\,\text{CL}_{\text{LCE}}+0.6105\, \text{CL}_{\text{E1}\geq2\text{ h}}\;,\\
&\text{R}=0.9919,\;\;\text{SE}=5.53\text{ ml/min},\;\;\text{Relative error}=26.6\%\;.
\end{aligned}
\end{equation*}
This equation could be applied to convert E1$\,\geq\,$2 h clearances to approximate Tk-GV clearances. However, the relative error, at 26.6%, is still large, and the correction is still suboptimal. No matter how E1$\,\geq\,$2 h CL-values are corrected, they are too noisy to be used for measuring reduced CL-values. This can be improved upon by using methods that do not back-extrapolate over 2 h. Note in Figure <ref>a that the large-magnitude negative slope of an E1 model for increasing renal function is almost perfectly mirrored by a strong positive slope for the LCE model. If this is correct, one would expect some linear combination of E1 and LCE clearances to approximate clearance better than either E1 or LCE taken separately. This is simpler in that, theoretically, only two samples would be needed for E1-LCE averaging. The potential advantage is that, similar to a single-sample method, one would then need only two sessions with a subject: one just after (flush) bolus intravenous injection of the GFR marker, with an early sample drawn at 5 min, and one later. However, the three datasets used here are perhaps not the best ones for exploring two-plasma-sample modelling. Instead, all the samples were included, which yielded
\begin{equation*}
\text{R}=0.9930,\;\;\text{SE}=5.06\text{ ml/min},\;\;\text{Relative error}=11.7\%\;,
\end{equation*}
which, although it has only slightly better R- and SE-values than correcting E1$\,\geq\,$2 h CL-values, is a major improvement in relative error. At 11.7%, the relative error, although substantial, is no longer intractably large. To examine how this improvement occurred, see the inset of Figure <ref>b, which shows that the lower CL-values are now better linearised, and there are two values below 5 ml/min, whereas there were none when estimating Tk-GV CL from E1$\,\geq\,$2 h CL-values. The fit, moreover, needed no intercept: the intercept's partial probability, $p=0.3$, indicated discard. If Tk-GV CL-values are reliable, then there should be other ways of predicting them. Indeed, substituting E2 CL-values for the E1 values above yielded
\begin{equation*}
\begin{aligned}
&\text{CL}_{\text{Tk-GV}}=0.3196\,\text{CL}_{\text{LCE}}+0.6452\,\text{CL}_{\text{E2}}\;,\\
&\text{R}=0.9954,\;\;\text{SE}=4.15\text{ ml/min},\;\;\text{Relative error}=10.4\%\;,
\end{aligned}
\end{equation*}
where discarding the insignificant intercept ($-0.60$ ml$\cdot$min$^{-1}$, $p=0.5$) improved the standard error slightly. This gives a method of converting E2 model CL-values to Tk-GV CL-values with fairly good precision and accuracy. That such methods exist implies that Tk-GV CL, and the V$_\text{MRT}$ investigated above, are plausible reference standards. From this latest formula, the 30 estimates that were less than 50 ml/min had a standard error of only 2.13 ml/min but a relative error of 16.4%; the 67 results > 50 ml/min had a standard error of 4.79 ml/min and a relative error of only 6.17%. It is important when comparing methods to inspect the range of GFR values being analysed, as there are many methods that are unreliable below 50-60 ml/min, for example CL$_{\text{E1}\geq2\,\text{h}}$ as above, and the single-sample methods of the literature that use CL$_{\text{E1}\geq2\,\text{h}}$ as a reference standard [59].
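For convenience, a sketch collecting the conversion formulas above (coefficients as given above; the input clearances are illustrative):
\begin{verbatim}
import numpy as np

def tkgv_from_e1_2h_linear(cl):
    # New Chantler-Barratt style factor, through the origin.
    return 0.8243 * cl

def tkgv_from_e1_2h_quadratic(cl):
    # Exponential of a quadratic in ln CL; slope -> 0 as CL -> 0.
    u = np.log(cl)
    return np.exp(-1.088 + 1.398 * u - 0.04374 * u**2)

def tkgv_from_lce_e2(cl_lce, cl_e2):
    # Weighted average of LCE and E2 clearances (all time-samples).
    return 0.3196 * cl_lce + 0.6452 * cl_e2

for cl in (5.0, 20.0, 60.7, 120.0):   # ml/min, illustrative
    print(cl, tkgv_from_e1_2h_linear(cl),
          tkgv_from_e1_2h_quadratic(cl))
\end{verbatim}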
§.§ Limitations
Long-term, steady-state, constant-infusion renal clearance modelling with bladder catheterisation, e.g., see [47], can overcome some of the problems associated with bolus, i.e., dynamic, renal clearance. Such data were, unfortunately, not available. Much of the mathematical exploration and statistical testing performed to generate this report has been omitted in order to present the most important observations without undue burden placed upon the reader. For example, the LCE and ln-coth density functions were identified by simplification of a four-parameter model, and proportional error modelling was selected as best of class from four methods. The regression types tested were ordinary least squares (OLS), $1/y$-weighted OLS, $1/y^2$-weighted OLS, and log-log OLS. The Tk-GV model is the only one for which log-log regression is (mathematically) needed; all the other regressions presented were $1/y^2$-weighted OLS. Many formulas, e.g., for constant infusion and for half-life of volume and concentration as functions of time, were similarly omitted. Alternatives to exponential tails were not extensively tested. Clearance was assumed to be constant in time without proof. The Appendix outlines those derivations specific to this report; a more complete set of equations is merely a routine application of the calculus to the more complete sets of general equations previously presented and applied, respectively, for the gamma and gamma-Pareto distributions in [50, 40].
The Tk-GV method has been applied in a clinical setting both retrospectively and prospectively. Four time-samples can be obtained at 5, 20, 60 and 240 min following flush bolus intravenous injection of a good GFR marker. Unlike for E2, nine time-samples obtained up to 8 h post-injection produced CL-values that did not significantly differ from the four-time-sample, 4 h results when the Tikhonov relative fit error of $\beta$ of a gamma-variate, $K t^{\alpha-1}e^{-\beta\,t}$, was minimised ($n=412$) [21]. For quality assurance, only results for which $\alpha<1$ are accepted as correct. In extensive simulation studies using leave-outs, $\alpha>1$ can occur when the first time-sample is obtained later than 10 min or the last sample is obtained earlier than 180 min; this has not occurred clinically. When a saline flush is not used, it is not uncommon to create a subcutaneous depot of radioactivity [60]. In one prospective clinical case, a second time-sample was drawn from a partially infiltrated injection site. This led to a spuriously higher concentration at 20 min than at 5 min, and an $\alpha>1$. The incidence of quality assurance failures has been approximately one in 500.
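A plain gamma-variate fit with the $\alpha<1$ quality-assurance check can be sketched as below; note that this omits the Tikhonov regularisation of $\beta$ that defines the Tk-GV method proper, and the data are synthetic:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, K, alpha, beta):
    # Concentration model K * t**(alpha - 1) * exp(-beta * t).
    return K * t**(alpha - 1.0) * np.exp(-beta * t)

# Synthetic 5, 20, 60 and 240 min samples; sigma=y gives the 1/y^2
# (proportional error) weighting used elsewhere in this report.
t = np.array([5.0, 20.0, 60.0, 240.0])
y = np.array([0.069, 0.045, 0.024, 0.0030])
(K, alpha, beta), _ = curve_fit(gamma_variate, t, y,
                                p0=(0.1, 0.8, 0.01), sigma=y)
assert alpha < 1.0, "quality-assurance failure: alpha >= 1"
\end{verbatim}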
The LCE and ln-coth models have been presented as fits to multiple samples between approximately 5 to 15 min and 4, 5 and 24 h. Clinical application should be investigated for more minimalistic curve solutions using only two plasma samples, possibly one at 5 to 15 min and another at 4 h. However, the three datasets evaluated in this paper are not optimal for such a study, and any such modelling is left for future work, as it requires the development of a normal range for fractal [61] renal metabolic scaling [2], with preliminary results suggesting a smaller negative fraction and better accuracy for segregating abnormal from normal GFR [62].
Scaling of measured GFR is needed to classify sufficiency of renal function versus metabolic demand, and should be done with respect to normal measured GFR by 1) normalising powers of variables like volume of distribution and body weight, over 2) at least an 8-fold weight range, as well as over 3) a range of abnormal fluid balance, e.g., see [2, 20]. As volume of distribution raised to a power is by far the single most predictive variable for metabolically expected renal function, obtaining volume values uncorrelated with measured, and possibly abnormal, clearance is highly desirable, but this was not determined for the new models of the Discussion section. Such scaling would then provide a reference standard for calculating estimating formulas for creatinine, cystatin-C and any other endogenous metabolite; this has not been done in this introductory paper. Clinical correlation, as well as body scaling and normal-range calibration, are needed for final interpretation of the value of these results.
§ CONCLUSIONS
The working hypothesis that there are better GFR models than Tk-GV remained unconfirmed. Methods that appear to have potential applicability to the reduced renal function measurement problem are the Tk-GV method and the weighted average of E1 or E2 clearances with LCE clearance values for predicting Tk-GV values. These appear to meet the Schloerb challenge of quantifying anephric conditions. The Tk-GV method produced values that are frequently within the reference standard range, and was the only plasma clearance method tested that was consistently uncorrelated with its weight normalised V$_\text{MRT}$.
§ ACKNOWLEDGEMENTS
The editors and reviewers, especially unnamed Reviewer 1, are thanked for the extensive improvements made during the preparation of this paper. Prof. Geoffrey T. Tucker of the University of Sheffield, Sheffield, UK is thanked for his suggestions concerning this manuscript. Maria T. Burniston and coauthors in the UK [20] are thanked for graciously providing Dataset 1. Prof. Jens H. Henriksen of the University of Copenhagen, Denmark is thanked for providing Dataset 2. Prof. Charles D. Russell of the University of Alabama at Birmingham is thanked for providing Dataset 3. Surajith N. Wanasundara is thanked for his help with computer implementation of an earlier version of the Tk-GV processing program.
§ APPENDIX
Concentration models have a finite area under the curve (AUC) from $t=0$ to $t=\infty$, i.e., AUC$\,\myeq\int_0^\infty C(t)\,dt$. Density functions, $f(t)$, have a total area of one, that is, $\int_0^\infty f(t)\,dt=1$, and are found by applying the definition $f(t)=\frac{C(t)}{\text{AUC}}$. Multiplying both sides of that definition by AUC and reversing the order of equality yields
\begin{equation}
\label{eq1}C(t)=\text{AUC}\,f(t)\;\;.
\end{equation}
To be clear, AUC is from curve fitting but is the area under the entire curve, not just the data from the first to last time-samples. For example, for E1, let,
$$f(t)=\lambda\, e^{-\lambda\,t},\;\;\;C(t)=\text{AUC}\,\lambda \,e^{-\lambda\,t},$$
where setting $c=\text{AUC}\,\lambda$ yields the more common notation $C(t)=c \,e^{-\lambda\,t}$. However, $c$ is a redundant parameter, i.e., it is unnecessary. In addition to extracting AUC-values immediately from data fitting, identifying the density function makes the rules for its manipulation immediately available. One such rule concerns the cumulative distribution function, CDF, also written as $F(t)$, where $F(t) \myeq \int_0^t f(\tau)\,d\tau$, i.e., the 0 to $t$ integral of $f(t)$, such that $\lim_{t\to\infty}F(t)=1$. The CDF of an exponential density, $\lambda\,e^{-\lambda\,t}$, is thus
$$F(t)= \int_0^t \lambda\,e^{-\lambda\,\tau}\,d\tau=1-e^{-\lambda\,t}\;\;.$$
As the inside of $-\ln(1-e^{-\beta\,t})$, i.e., $1-e^{-\beta\,t}$, is a cumulative exponential, $-\ln(1-e^{-\beta\,t})$ is a negative Logarithm of a Cumulative Exponential, or LCE as an acronym. To make the LCE into a density function, $-\ln(1-e^{-\beta\,t})$ is multiplied by a constant that makes its total area equal to one, $\lim_{t\to\infty}F(t)=1$. That is,
\begin{equation}\label{eq2}
f(t)=-\frac{6\, \beta }{\pi ^2}\ln \left(1-e^{-\beta\, t}\right)\;\;,
\end{equation}
where that constant is $\mfrac{6\, \beta }{\pi ^2}$. Combining Eqs. (<ref>) and (<ref>) yields the fit equation for the LCE model used in this manuscript,
\begin{equation}\label{eq3}
C(t)=\text{AUC}\cdot f(t)=-\text{AUC}\,\frac{6\, \beta }{\pi ^2}\ln \left(1-e^{-\beta\, t}\right)\;\;.
\end{equation}
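As a minimal sketch, the fit equation above can be regressed by $1/y^2$-weighted least squares, e.g., with scipy; the data below are synthetic, and a unit dose is assumed so that CL $=1/\text{AUC}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lce(t, auc, beta):
    # C(t) = -AUC * (6*beta/pi^2) * ln(1 - exp(-beta*t)).
    return -auc * 6.0 * beta / np.pi**2 * np.log(1.0 - np.exp(-beta * t))

rng = np.random.default_rng(1)
t = np.array([5.0, 15.0, 30.0, 60.0, 120.0, 240.0, 480.0, 1440.0])
y = lce(t, 3.0e4, 0.004) * (1.0 + 0.02 * rng.standard_normal(t.size))

# sigma=y makes this a 1/y^2-weighted (proportional error) fit.
(auc, beta), _ = curve_fit(lce, t, y, p0=(2.0e4, 0.003), sigma=y)
cl = 1.0 / auc   # unit-dose-scaled clearance, CL = Dose/AUC
\end{verbatim}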
Theorem. The density function for $-\ln \left(1-e^{-\beta\, t}\right)$ is $f(t)=-\dfrac{6\, \beta }{\pi ^2}\ln \left(1-e^{-\beta\, t}\right)$, i.e., the log cumulative exponential (LCE) distribution.
Proof. We first note that the derivative that yields $-\ln \left(1-e^{-\beta\, t}\right)$ is,
\begin{equation}\label{eqA1}
\frac{d}{d\,t}\left[-\frac{1}{\beta}\text{Li}_2\left(e^{-\beta\, t}\right)\right]=-\ln \left(1-e^{-\beta\, t}\right)\;\;,
\end{equation}
where $\text{Li}_n(z)=\sum _{k=1}^{\infty } \frac{z^k}{k^n}$ is the polylogarithm function, here of order $n=2$. $\Big($Hint: let $u=e^{-\beta\,t}$, then $\frac{d}{d\,t}\text{Li}_2\left(e^{-\beta\,t}\right)=\frac{d}{d\,u}\text{Li}_2(u)\,\frac{d\,u}{d\,t}$, and $\frac{d}{d\,u}\text{Li}_2(u)=-\frac{\ln(1-u)}{u}.\Big)$ Next, we scale $-\ln \left(1-e^{-\beta\, t}\right)$ to be a density function by dividing by its total area from 0 to $\infty$, obtained from the antiderivative. That is, since
\begin{equation}\int_0^{\infty}\frac{d}{d\,t}\left[-\frac{1}{\beta}\text{Li}_2\left(e^{-\beta\, t}\right)\right]dt=\lim_{t\to \infty}\left[-\frac{1}{\beta}\text{Li}_2\left(e^{-\beta\, t}\right)\right]+\frac{1}{\beta}\text{Li}_2\left(e^{-\beta\cdot 0}\right)=0+\frac{\pi ^2}{6\, \beta}\;\;,\end{equation}
$$f(t)=-\ln \left(1-e^{-\beta\, t}\right)\bigg/ \frac{\pi ^2}{6\, \beta}=-\frac{6\, \beta }{\pi ^2}\ln \left(1-e^{-\beta\, t}\right)\;\;.\qed$$
Corollary. Similarly, the CDF and the complementary CDF (CCDF $=1-$ CDF) follow from the antiderivative evaluated between 0 and $t$,
\begin{equation}\label{Ft}
F(t)=1-\frac{6 }{\pi ^2}\text{Li}_2\left(e^{-\beta\, t}\right),\;\;\;S(t)=\frac{6 }{\pi ^2}\text{Li}_2\left(e^{-\beta\, t}\right)\;\;,
\end{equation}
The CCDF is symbolised $S(t)$ here, even though survival functions $S(t)$ are technically defined from mass functions, not density functions. For example, in the formula for volume of distribution in Table <ref>, V$_{\text{d}}(t)=\text{CL} \frac{1-F(t)}{f(t)}\leftrightarrow \text{Dose} \frac{S(t)}{C(t)}$, i.e., the volume of distribution is the surviving dose in the body at a given time divided by the concentration at that same time. Note that how long it takes for $F(t)$ to converge to 1 depends on a single parameter, $\beta$; the smaller $\beta$ is, the longer it takes.
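These closed forms are easy to verify numerically; the following is a sketch (not part of the analysis pipeline) using mpmath, whose polylog(2, x) is Li$_2(x)$. The value of $\beta$ is an arbitrary assumption.

```python
# Check that the LCE density integrates to one and that
# F(t) = 1 - (6/pi^2)*Li2(exp(-beta*t)) equals the running integral of f.
from mpmath import mp, quad, log, exp, pi, polylog

mp.dps = 30
beta = 0.7  # arbitrary test value

f = lambda t: -6 * beta / pi**2 * log(1 - exp(-beta * t))
F = lambda t: 1 - 6 / pi**2 * polylog(2, exp(-beta * t))

print(quad(f, [0, mp.inf]))        # ~ 1.0 (total area of the density)
print(quad(f, [0, 2.5]), F(2.5))   # the two values agree
```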
The mean residence time of the LCE density function, MRT $=\int_0^\infty t\,f(t)\,dt$, was found by evaluating the antiderivative of $t\,f(t)$ between $t=0$ and $t\to\infty$,
$$\text{MRT}=\frac{6\, \zeta (3)}{\pi ^2\, \beta}\approx\frac{0.730763}{\beta }\;\;,$$
where the value of the zeta function, $\zeta(3)$, is approximately 1.20206. Note that the ratio of MRT$_\text{LCE}$ to $t_x$ is a constant equal to $\mfrac{6\, \zeta (3)}{\pi ^2\, \Omega}$. That is, MRT$_\text{LCE}$ occurs at a time approximately 1.2885 times later than $t_x$, by which time the tail is already predominantly an exponential function.
The median residence time ($t_m$, the LCE half-survival time) was calculated by the Newton–Raphson method: find $u$ such that $S(u)=\frac{6 }{\pi ^2}\text{Li}_2\left(e^{-u}\right)=\frac{1}{2}$, then set $u=\beta\,t_{m}$ and solve for $t_{m}$, which yields,
$$t_{m}\approx \frac{0.415389}{\beta}\;\;.$$
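Both residence-time constants can be reproduced in a few lines (again a numerical sketch, with mpmath assumed, rather than the authors' implementation):

```python
# Reproduce MRT*beta = 6*zeta(3)/pi^2 and the median constant u = beta*t_m,
# where S(u) = (6/pi^2)*Li2(exp(-u)) = 1/2, via Newton-Raphson with
# S'(u) = (6/pi^2)*ln(1 - exp(-u)).
from mpmath import mp, polylog, zeta, pi, exp, log

mp.dps = 20
print(6 * zeta(3) / pi**2)   # 0.730763..., so MRT = 0.730763/beta

u = mp.mpf("0.5")            # initial guess
for _ in range(30):
    g = 6 / pi**2 * polylog(2, exp(-u)) - mp.mpf(1) / 2
    u -= g / (6 / pi**2 * log(1 - exp(-u)))
print(u)                     # 0.415389..., so t_m = 0.415389/beta
```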
Theorem. For ln-coth, $f(t) = \mfrac{4 \beta }{\pi ^2}\ln \bigg[\coth \left(\mfrac{\beta \, t}{2}\right)\bigg]$ is the density function and the CDF is
$$F(t)=\frac{4 }{\pi ^2}\big[\text{Li}_2(1-y)+\text{Li}_2(-y)+\ln (y+1) \ln (y)\big]+\frac{4}{3},\;\;\text{ where }y=\coth \bigg(\frac{\beta \, t}{2}\bigg)\;\;.$$
Proof. Differentiate. (Hint: $F'(t)=F'(y)\,\dfrac{dy}{dt}$, where $F'(y)=-\dfrac{8 \ln (y)}{\pi ^2 \left(y^2-1\right)}$ and $\dfrac{dy}{dt}=-\mfrac{\beta}{2} \, \text{csch}^2\left(\mfrac{\beta \, t}{2}\right)$; substitute, use $\coth^2(x)-1=\text{csch}^2(x)$, and simplify.) Note that $F(0)=0$ and $\lim_{t\to\infty}F(t)=1$, i.e., $f(t)$ is indeed the ln-coth density function of $F(t).\;\;\qed$
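The ln-coth pair can be checked the same way (a numerical sketch; the value of $\beta$ is arbitrary):

```python
# Compare the closed-form ln-coth CDF, with y = coth(beta*t/2), against
# direct numerical integration of its density.
from mpmath import mp, quad, log, coth, pi, polylog

mp.dps = 25
beta = 1.3  # arbitrary test value

f = lambda t: 4 * beta / pi**2 * log(coth(beta * t / 2))

def F(t):
    y = coth(beta * t / 2)
    return (4 / pi**2 * (polylog(2, 1 - y) + polylog(2, -y)
                         + log(y + 1) * log(y)) + mp.mpf(4) / 3)

for t in (0.5, 1.0, 3.0, 10.0):
    print(quad(f, [0, t]), F(t))   # pairs agree; F(t) -> 1 as t grows
```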
[1]
Stevens LA, Coresh J, Greene T, Levey AS.
Assessing kidney function—measured and estimated glomerular
filtration rate.
N Engl J Med. 2006;354(23):2473–2483.
Available from: <https://doi.org/10.1056/NEJMra054415>.
[2]
Wesolowski CA, Babyn PS, Puetter RC.
An improved method for determining renal sufficiency using volume of
distribution and weight from bolus $^{99m}$Tc-DTPA, two blood sample,
paediatric data.
Nucl Med Commun. 2006;27(12):963–970.
[3]
Moore AE, Park-Holohan SJ, Blake GM, Fogelman I.
Conventional measurements of GFR using $^{51}$Cr-EDTA overestimate
true renal clearance by 10 percent.
Eur J Nucl Med Mol Imaging. 2003;30(1):4–8.
[4]
Schloerb PR.
Total body water distribution of creatinine and urea in
nephrectomized dogs.
American Journal of Physiology-Legacy Content. 1960;199(4):661–665.
[5]
Brøchner-Mortensen J, Freund LG.
Reliability of routine clearance methods for assessment of glomerular
filtration rate in advanced renal insufficiency.
Scand J Clin Lab. 1981;41(1):91–97.
Available from: <https://doi.org/10.3109/00365518109092020>.
[6]
Brøchner-Mortensen J.
Current status on assessment and measurement of glomerular filtration rate.
Clin Physiol. 1985;5(1):1–17.
[7]
Wickham F, Burniston MT, Xirouchakis E, et al.
Development of a modified sampling and calculation method for isotope
plasma clearance assessment of the glomerular filtration rate in patients
with cirrhosis and ascites.
Nucl Med Commun. 2013;34(11):1124–1132.
Available from: <https://doi.org/10.1097/MNM.0b013e32836529ab>.
[8]
LaFrance ND, Drew HH, Walser M.
Radioisotopic measurement of glomerular filtration rate in severe
chronic renal failure.
J Nuc Med. 1988;29(12):1927–1930.
[9]
Chantler C, Barratt TM.
Estimation of glomerular filtration rate from plasma clearance of
51-chromium edetic acid.
Arch Dis Child. 1972;47(254):613–617.
[10]
Bröchner-Mortensen J.
A simple method for the determination of glomerular filtration rate.
Scand J Clin Lab. 1972;30(3):271–274.
[11]
Murray AW, Barnfield MC, Waller ML, Telford T, Peters AM.
Assessment of glomerular filtration rate measurement with plasma
sampling: a technical review.
J Nucl Med Technol. 2013;41(2):67–75.
Available from: <https://doi.org/10.2967/jnmt.113.121004>.
[12]
Fleming JS.
An improved equation for correcting slope–intercept measurements of
glomerular filtration rate for the single exponential approximation.
Nucl Med Commun. 2007;28(4):315–320.
[13]
Jødal L, Brøchner-Mortensen J.
Reassessment of a classical single injection $^{51}$Cr-EDTA
clearance method for determination of renal function in children and adults.
Part I: Analytically correct relationship between total and one-pool clearance.
Scand J Clin Lab. 2009;69(3):305–313.
[14]
Ng DKS, Schwartz GJ, Jacobson LP, et al.
Universal GFR determination based on two time points during plasma
iohexol disappearance.
Kidney Int. 2011;80(4):423–430.
[15]
Adolph EF.
Quantitative relations in the physiological constitutions of mammals.
Science. 1949;109(2841):579–585.
[16]
Hilton R.
Defining acute renal failure.
CMAJ. 2011;183(10):1167–1169.
Available from: <https://doi.org/10.1503/cmaj.081170>.
[17]
Prueksaritanont T, Lui CY, Lee MG, Chiou WL.
Renal and non-renal clearances of iothalamate.
Biopharm Drug Dispos. 1986;7(4):347–355.
[18]
Delgado C, Baweja M, Crews DC, et al.
A unifying approach for GFR estimation: recommendations of the
NKF-ASN task force on reassessing the inclusion of race in diagnosing
kidney disease.
Am J Kidney Dis. 2022;79(2):268–288.
Available from: <https://doi.org/10.1053/j.ajkd.2021.08.003>.
[19]
Gault MH, Longerich LL, Harnett JD, Wesolowski CA.
Predicting glomerular function from adjusted serum creatinine.
Nephron. 1992;62:249–256.
Available from: <https://doi.org/10.1159/000187054>.
[20]
Wesolowski CA, Ling L, Xirouchakis E, et al.
Validation of Tikhonov adaptively regularized gamma variate fitting
with 24-h plasma clearance in cirrhotic patients with ascites.
Eur J Nucl Med Mol Imaging. 2011;38(12):2247–2256.
Available from: <https://doi.org/10.1007/s00259-011-1887-9>.
[21]
Wanasundara SN, Wesolowski MJ, Barnfield MC, et al.
Accurate and precise plasma clearance measurement using four
$^{99m}$Tc-DTPA plasma samples over 4 h.
Nucl Med Commun. 2016;37(1):79.
Available from: <https://doi.org/10.1097/MNM.0000000000000405>.
[22]
Wesolowski CA, Puetter RC, Ling L, Babyn PS.
Tikhonov adaptively regularized gamma variate fitting to assess
plasma clearance of inert renal markers.
J Pharmacokinet Pharmacodyn. 2010;37(5):435–474.
Available from: <https://doi.org/10.1007/s10928-010-9167-z>.
[23]
Wesolowski CA, Babyn PS, Puetter RC, inventors; Carl A. Wesolowski, assignee.
Method for evaluating renal function.
US Patent 8,738,345; 2014.
[24]
Wanasundara SN, Wesolowski MJ, Puetter RC, et al.
The early plasma concentration of $^{51}$Cr-EDTA in patients with
cirrhosis and ascites: a comparison of three models.
Nucl Med Commun. 2015;36(4):392.
Available from: <http://dx.doi.org/10.1097/MNM.0000000000000255>.
[25]
Cousins C, Gunasekera RD, Mubashar M, et al.
Comparative kinetics of microvascular inulin and
$^{99m}$Tc-labelled diethylenetriaminepenta-acetic acid exchange.
Clin Sci. 1997;93(5):471–477.
Available from: <https://doi.org/10.1042/cs0930471>.
[26]
Henriksen UL, Hansen HB, Ring-Larsen H, Bendtsen F, Henriksen JH.
Total plasma clearance versus urinary plasma clearance of
$^{51}$Cr-EDTA in patients with cirrhosis with and without fluid retention.
Scand J Clin Lab. 2015;75(1):64–72.
Available from: <https://doi.org/10.3109/00365513.2014.980313>.
[27]
Russell CD, Bischoff PG, Kontzen FN, et al.
Measurement of glomerular filtration rate: single injection plasma
clearance method without urine collection.
J Nuc Med. 1985;26(11):1243–1247.
[28]
Carlsen JE, Møller ML, Lund JO, Trap-Jensen J.
Comparison of four commercial Tc-99m (Sn) DTPA preparations
used for the measurement of glomerular filtration rate: concise communication.
J Nuc Med. 1980;21(2):126–129.
[29]
Meeusen JW, Kasozi RN, Larson TS, Lieske JC.
Clinical impact of the refit CKD-EPI 2021 creatinine-based eGFR equations.
Clin Chem. 2022;68(4):534–539.
[30]
Walser M, Bodenlos LJ.
Urea metabolism in man.
J Clin Investig. 1959;38(9):1617–1626.
[31]
Ekins RP, Nashat FS, Portal RW, Sgherzi AM.
The use of labelled vitamin B12 in the measurement of glomerular
filtration rate.
J Physiol. 1966;186(2):347–362.
[32]
John KA, Cogswell ME, Campbell NR, et al.
Accuracy and usefulness of select methods for assessing complete
collection of 24-hour urine: a systematic review.
J Clin Hypertens. 2016;18(5):456–467.
Available from: <https://doi.org/10.1111/jch.12763>.
[33]
Ünsal A, Çimentepe E.
Voiding position does not affect uroflowmetric parameters and
post-void residual urine volume in healthy volunteers.
Scand J Clin Lab. 2004;38(6):469–471.
Available from: <https://doi.org/10.1080/00365590410018675>.
[34]
Griffiths DJ, Harrison G, Moore K, McCracken P.
Variability of post-void residual urine volume in the elderly.
Urol Res. 1996;24(1):23–26.
Available from: <https://doi.org/10.1007/BF00296729>.
[35]
Currarino G, Weinberg A, Putnam R.
Resorption of contrast material from the bladder during
cystourethrography causing an excretory urogram.
Radiology. 1977;123(1):149–150.
Available from: <https://doi.org/10.1148/123.1.149>.
[36]
Dalton JT, Wientjes MG, Au JLS.
Effects of bladder resorption on pharmacokinetic data analysis.
J Pharmacokinet Biopharm. 1994;22(3):183–205.
Available from: <https://doi.org/10.1007/bf02353328>.
[37]
Wood JH, Leonard TW.
Kinetic implications of drug resorption from the bladder.
Drug Metab Rev. 1983;14(3):407–423.
[38]
Stoller ML, Millard RJ.
The accuracy of a catheterized residual urine.
J Urol. 1989;141(1):15–16.
Available from: <https://doi.org/10.1016/s0022-5347(17)40572-6>.
[39]
Purves RD.
Optimum numerical integration methods for estimation of
area-under-the-curve (AUC) and area-under-the-moment-curve (AUMC).
J Pharmacokinet Biopharm. 1992;20:211–226.
[40]
Wesolowski CA, Wanasundara SN, Babyn PS, Alcorn J.
Comparison of the gamma-Pareto convolution with conventional
methods of characterising metformin pharmacokinetics in dogs.
J Pharmacokinet Pharmacodyn. 2020;47(1):19–45.
Available from: <http://dx.doi.org/10.1007/s10928-019-09666-z>.
[41]
Russell CD, Taylor Jr AT, Dubovsky EV.
A Bayesian regression model for plasma clearance.
J Nuc Med. 2002;43(6):762.
[42]
Passing H, Bablok W.
A new biometrical procedure for testing the equality of
measurements from two different analytical methods. Application of linear
regression procedures for method comparison studies in Clinical
Chemistry, Part I.
J Clin Chem Clin Biochem. 1983;21:709–720.
[43]
Clarke KA.
The phantom menace: Omitted variable bias in econometric research.
Confl Manag Peace Sci. 2005;22(4):341–352.
[44]
Frost C, Thompson SG.
Correcting for regression dilution bias: comparison of methods for a
single predictor variable.
Journal of the Royal Statistical Society Series A: Statistics in
Society. 2000;163(2):173–189.
[45]
Wilcox R.
A note on the Theil-Sen regression estimator when the regressor
is random and the error term is heteroscedastic.
Biom J. 1998;40(3):261–268.
[46]
Battistini N, Virgili F, Severi S, et al.
Relative expansion of extracellular water in obese vs. normal subjects.
Journal of Applied Physiology. 1995;79(1):94–96.
Available from: <https://doi.org/10.1152/jappl.1995.79.1.94>.
[47]
Schwartz IL, Schachter D, Freinkel N.
The measurement of extracellular fluid in man by means of a constant
infusion technique.
The Journal of Clinical Investigation. 1949;28(5):1117–1125.
Available from: <https://doi.org/10.1172/JCI102145>.
[48]
Fidanza F, Keys A, Anderson JT.
Density of body fat in man and other mammals.
Journal of Applied Physiology. 1953;6(4):252–256.
Available from: <https://doi.org/10.1152/jappl.1953.6.4.252>.
[49]
Hawkins DM.
The problem of overfitting.
J Chem Inf Comput Sci. 2004;44(1):1–12.
Available from: <https://doi.org/10.1021/ci0342472>.
[50]
Wesolowski CA, Wesolowski MJ, Babyn PS, Wanasundara SN.
Time varying apparent volume of distribution and drug half-lives
following intravenous bolus injections.
PLoS ONE. 2016;11(7):e0158798.
Available from: <https://doi.org/10.1371/journal.pone.0158798>.
[51]
Wesolowski CA, Wanasundara SN, Wesolowski MJ, Erbas B, Babyn PS.
A gamma-distribution convolution model of $^{99m}$Tc-MIBI thyroid
time-activity curves.
EJNMMI Physics. 2016;3(1):31.
Available from: <https://doi.org/10.1186/s40658-016-0166-z>.
[52]
Rostgaard J, Qvortrup K.
Electron microscopic demonstrations of filamentous molecular sieve
plugs in capillary fenestrae.
Microvascular research. 1997;53(1):1–13.
Available from: <https://doi.org/10.1006/mvre.1996.1987>.
[53]
Pappenheimer JR, Renkin EM, Borrero LM.
Filtration, diffusion and molecular sieving through peripheral
capillary membranes: a contribution to the pore theory of capillary
permeability.
American Journal of Physiology-Legacy Content. 1951;167(1):13–46.
[54]
Pappenheimer JR.
Passage of molecules through capillary walls.
Physiological Reviews. 1953;33(3):387–423.
[55]
Niazi S.
Volume of distribution as a function of time.
J Pharm Sci. 1976;65(3):452–454.
[56]
Box GEP, Draper NR, et al.
Empirical model-building and response surfaces. vol. 424.
New York: Wiley; 1987.
[57]
Schimmel H, Zegers I.
Performance criteria for reference measurement procedures and
reference materials.
Clin Chem Lab Med. 2015;53(6):899–904.
[58]
Richard P, Fang H, Davis R.
Foundation for the redefinition of the kilogram.
Metrologia. 2016;53(5):A6.
[59]
Ptáčník V, Terš J, Šámal M, et al.
The importance of sampling time in radionuclide measurement of
glomerular filtration rate in adults using single blood sample.
Clinical and Translational Imaging. 2023:1–12.
[60]
Wesolowski CA, Hogendoorn P, Vandierendonck R, Driedger AA.
Bolus injections of measured amounts of radioactivity.
J Nucl Med Technol. 1988;16(1):1–4.
[61]
West GB, Brown JH, Enquist BJ.
The fourth dimension of life: fractal geometry and allometric scaling
of organisms.
Science. 1999;284(5420):1677–1679.
[62]
Wesolowski CA, Babyn PS, Puetter RC.
A power law for determining renal sufficiency using volume of
distribution and weight from bolus $^{99m}$Tc-DTPA, two blood sample,
pediatric data.
In: 2006 IEEE Nuclear Science Symposium Conference Record. vol. 4.
IEEE; 2006. p. 1986–1994.
# The $p$-arithmetic homology of mod $p$ representations of
$\text{GL}_{2}(\mathbb{Q}_{p})$
Guillem Tarrach
###### Abstract.
We compute the non-Eisenstein systems of Hecke eigenvalues contributing to the
$p$-arithmetic homology of irreducible smooth mod $p$ representations $\pi$ of
$\text{GL}_{2}(\mathbb{Q}_{p})$ and to the cohomology of their duals. We show
that in most cases they are associated to odd irreducible 2-dimensional Galois
representations whose local component at $p$ corresponds under the mod $p$
local Langlands correspondence to a smooth representation that contains $\pi$
as a subrepresentation.
###### Contents
1. 1 Introduction
2. 2 Preliminaries
3. 3 $p$-arithmetic homology of $\pi(r,\lambda,\chi)$
4. 4 Preparation for the non-generic cases
5. 5 Proof of Theorem 1.1
## 1\. Introduction
Let $p\geq 5$ be a prime number and $N\geq 3$ an integer coprime to $p$. Let
$\Gamma_{1}^{p}(N)$ be the subgroup of matrices in
$\text{GL}_{2}(\mathbb{Z}[1/p])$ that have positive determinant and are
congruent modulo $N\mathbb{Z}[1/p]$ to a matrix of the form
$\begin{pmatrix}*&*\\\ 0&1\end{pmatrix}$. The goal of this article is to
compute the systems of Hecke eigenvalues contributing to the homology of
$\Gamma_{1}^{p}(N)$ with coefficients in the irreducible mod $p$
representations of $\text{GL}_{2}(\mathbb{Q}_{p})$ and the cohomology of their
duals. More specifically, we prove the following local-global compatibility
result.
###### Theorem 1.1.
Let $\pi$ be an irreducible smooth mod $p$ representation of
$\text{GL}_{2}(\mathbb{Q}_{p})$ over $\overline{\mathbb{F}}_{p}$, $\pi^{\vee}$
its abstract $\overline{\mathbb{F}}_{p}$-dual. Then:
1. (i)
The $p$-arithmetic homology $H_{*}(\Gamma_{1}^{p}(N),\pi)$ and cohomology
$H^{*}(\Gamma_{1}^{p}(N),\pi^{\vee})$ are finite-dimensional and vanish in
degrees outside the range $[0,3]$.
2. (ii)
To any system of Hecke eigenvalues in these spaces, one can associate a
2-dimensional odd semisimple mod $p$ Galois representation of
$\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ satisfying the usual relations
at primes not dividing $pN$.
3. (iii)
Let
$\rho\colon\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\longrightarrow\text{GL}_{2}(\overline{\mathbb{F}}_{p})$
be a 2-dimensional odd irreducible Galois representation. Then, $\rho$
contributes to $H_{*}(\Gamma_{1}^{p}(N),\pi)$ and
$H^{*}(\Gamma_{1}^{p}(N),\pi^{\vee})$ if and only if $N$ is a multiple of the
minimal level $N(\rho)$ attached to $\rho$ by Serre [Ser87], and one of the
following is satisfied:
1. (a)
$\pi$ is a subrepresentation of the representation associated to the
restriction of $\rho$ at a decomposition group $\mathcal{G}_{p}$ at $p$ by the
mod $p$ local Langlands correspondence for $\text{GL}_{2}(\mathbb{Q}_{p})$. In
this case, $\rho$ contributes to (co)homology in degrees 1, 2 and 3, unless
$\pi$ is a twist of the Steinberg representation, in which case $\rho$
contributes to cohomology in degrees 1 and 2.
2. (b)
$\pi$ is a character, say $\pi=\chi\circ\det$, and $\rho|_{\mathcal{G}_{p}}$
is an extension of $\chi\omega^{-1}$ by $\chi$, where we have identified
$\chi$ with a character of $\mathcal{G}_{p}$ via local class field theory and
$\omega$ denotes the mod $p$ cyclotomic character. In this case, $\rho$
contributes to (co)homology in degrees 2 and 3.
The proof of the theorem is obtained by combining the explicit construction of
the irreducible mod $p$ representations of $\text{GL}_{2}(\mathbb{Q}_{p})$ due
to Barthel–Livné [BL94] and Breuil [Bre03], a result relating $p$-arithmetic
homology to arithmetic homology in the spirit of [KS12] and [Tar22], and
classical results on the weight part of Serre’s conjecture. These are already
enough to prove the generic case where $\pi$ is supersingular or principal
series. The cases of (twists of) the trivial and Steinberg representations
require more work, and involve the group cohomological analogue of
multiplication of mod $p$ modular forms by the Hasse invariant studied by
Edixhoven–Khare [EK03] and an interpretation of this map in terms of the
representation theory of the local group $\text{GL}_{2}(\mathbb{Q}_{p})$.
### 1.1. Notation and conventions
Write $G=\text{GL}_{2}(\mathbb{Q}_{p})$, $K=\text{GL}_{2}(\mathbb{Z}_{p})$ and
$Z$ for the center of $G$, so that $Z\simeq\mathbb{Q}_{p}^{\times}$. Let
$\alpha=\begin{pmatrix}1&0\\\ 0&p\end{pmatrix}\in G$ and
$\beta=\begin{pmatrix}p&0\\\ 0&p\end{pmatrix}\in Z$. Write also $B$ for the
subgroup of upper-triangular matrices in $G$ and $I=K\cap\alpha K\alpha^{-1}$
for the Iwahori subgroup of matrices in $K$ that are upper-triangular modulo
$p$. Let $G^{+}\subseteq G$ be the submonoid of matrices whose entries lie in
$\mathbb{Z}_{p}$, and $G^{-}=(G^{+})^{-1}$. We will write $\omega$ for the
character $\mathbb{Q}_{p}^{\times}\longrightarrow\mathbb{F}_{p}^{\times}$
defined by $x\longmapsto x|x|\mod p$. Write
$k=\overline{\mathbb{F}}_{p}$.
We normalise local class field theory so that uniformisers correspond to
geometric Frobenii, and for each prime $\ell$ we let $\text{Frob}_{\ell}$ be
the geometric Frobenius corresponding to $\ell$. Choose embeddings
$\overline{\mathbb{Q}}\xhookrightarrow{}\overline{\mathbb{Q}}_{\ell}$ for all
$\ell$, and let $\mathcal{G}_{\ell}$ denote the corresponding decomposition
group at $\ell$ in $\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. We will use
our normalisation of local class field theory to identify characters of
$\mathcal{G}_{\ell}$ and $\mathbb{Q}_{\ell}^{\times}$ without further comment. Write
$\varepsilon\colon\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\longrightarrow
k^{\times}$ for the mod $p$ cyclotomic character; it satisfies
$\varepsilon(\text{Frob}_{\ell})=\ell^{-1}\mod p$ and
$\varepsilon(\text{Frob}_{p})=1$. Its restriction to $\mathcal{G}_{p}$ at $p$
corresponds to $\omega$ under local class field theory. Write
$\mathcal{I}_{p}\subseteq\mathcal{G}_{p}$ for the inertia subgroup. Let
$\omega_{2}\colon\mathcal{I}_{p}\longrightarrow\mu_{p^{2}-1}(\overline{\mathbb{Q}}_{p}^{\times})\subseteq\overline{\mathbb{F}}_{p}^{\times}$
be Serre’s fundamental character of level 2, defined by
$\omega_{2}(g)=g\left(p^{1/(p^{2}-1)}\right)/p^{1/(p^{2}-1)}$, and for $0\leq s\leq p$
let $\operatorname{Ind}(\omega_{2}^{s})$ be the irreducible representation of
$\mathcal{G}_{p}$ over $k$ with determinant $\omega^{s}$ and
$\operatorname{Ind}(\omega_{2}^{s})|_{\mathcal{I}_{p}}=\omega_{2}^{s}\oplus\omega_{2}^{ps}$.
All irreducible 2-dimensional representations of $\mathcal{G}_{p}$ over $k$
are of the form $\operatorname{Ind}(\omega_{2}^{s})\otimes\chi$ for some $s$
as above and character $\chi$. Given a two-dimensional odd and irreducible
representation $\rho$ of $\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ over
$k$, we will write $N(\rho)$ for the minimal level attached to $\rho$ by Serre
in [Ser87].
Given $b\in k$, we will write $\mathrm{unr}_{{b}}$ for the $k$-valued
unramified characters of $\mathbb{Q}_{p}^{\times}$ and of
$\text{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})$ sending $p$ and
$\text{Frob}_{p}$ respectively to $b$. Thus, all continuous characters
$\mathbb{Q}_{p}^{\times}\longrightarrow k^{\times}$ are of the form
$\omega^{a}\mathrm{unr}_{{b}}$ for some $b\in k$ and $0\leq a\leq p-2$. If $V$
is any representation of $G$ (resp. $K$) and $\chi$ is a $k$-valued continuous
character of $\mathbb{Q}_{p}^{\times}$ (resp. $\mathbb{Z}_{p}^{\times}$), we
will write $V\otimes\chi$ instead of $V\otimes(\chi\circ\det)$.
### Acknowledgements
The author is grateful to Jack Thorne for suggesting the question of studying
the $p$-arithmetic cohomology of smooth mod $p$ representations of
$\text{GL}_{2}(\mathbb{Q}_{p})$ and for his comments on earlier drafts of this
article.
## 2\. Preliminaries
### 2.1. Arithmetic and $p$-arithmetic (co)homology
Let $U^{p}\subseteq\text{GL}_{2}(\mathbb{A}^{p\infty})$ be a compact open
subgroup of the form $\prod_{\ell\nmid
N}\text{GL}_{2}(\mathbb{Z}_{\ell})\times U_{N}$ for some $N\geq 1$ coprime to
$p$ and open compact subgroup $U_{N}\subseteq\prod_{\ell\mid
N}\text{GL}_{2}(\mathbb{Q}_{\ell})$. Assume $U^{p}$ is neat. Let
$\text{GL}_{2}(\mathbb{R})^{\circ}$ be the subgroup of
$\text{GL}_{2}(\mathbb{R})$ consisting of matrices with positive determinant.
The (arithmetic) homology of level $U^{p}K$ of a left $k[K]$-module $M$, is
defined as
$H_{i}(U^{p}K,M):=\text{Tor}_{i}^{k[\text{GL}_{2}(\mathbb{Q})\times
U^{p}\times
K\times\text{GL}_{2}(\mathbb{R})^{\circ}]}(k[\text{GL}_{2}(\mathbb{A})],M).$
Here, we view $M$ as a $\text{GL}_{2}(\mathbb{Q})\times U^{p}\times
K\times\text{GL}_{2}(\mathbb{R})^{\circ}$ module by letting
$\text{GL}_{2}(\mathbb{Q})\times U^{p}\times\text{GL}_{2}(\mathbb{R})^{\circ}$
act trivially, and $k[\text{GL}_{2}(\mathbb{A})]$ is acted on by
$\text{GL}_{2}(\mathbb{Q})$ by multiplication on the left and by $U^{p}\times
K\times\text{GL}_{2}(\mathbb{R})^{\circ}$ by multiplication on the right. The
(arithmetic) cohomology of $M$ in level $U^{p}K$ is defined analogously as
$H^{i}(U^{p}K,M):=\text{Ext}^{i}_{k[\text{GL}_{2}(\mathbb{Q})\times
U^{p}\times
K\times\text{GL}_{2}(\mathbb{R})^{\circ}]}(k[\text{GL}_{2}(\mathbb{A})],M).$
Similarly, if $M$ is a $k[G]$-module, the $p$-arithmetic homology and
cohomology of $M$ in level $U^{p}$ are defined as
$\displaystyle H_{i}(U^{p},M)$
$\displaystyle:=\text{Tor}_{i}^{k[\text{GL}_{2}(\mathbb{Q})\times
U^{p}\times\text{GL}_{2}(\mathbb{Q}_{p})\times\text{GL}_{2}(\mathbb{R})^{\circ}]}(k[\text{GL}_{2}(\mathbb{A})],M),$
$\displaystyle H^{i}(U^{p},M)$
$\displaystyle:=\text{Ext}^{i}_{k[\text{GL}_{2}(\mathbb{Q})\times
U^{p}\times\text{GL}_{2}(\mathbb{Q}_{p})\times\text{GL}_{2}(\mathbb{R})^{\circ}]}(k[\text{GL}_{2}(\mathbb{A})],M).$
For both arithmetic and $p$-arithmetic homology (and similarly for
cohomology), one can canonically define complexes computing them as in [Tar22,
Section 5.1], where they were denoted $C^{\text{ad}}_{\bullet}(U^{p}K,M)$ and
$C^{\text{ad}}_{\bullet}(U^{p},M)$; here we will denote them by
$C_{\bullet}(U^{p}K,M)$ and $C_{\bullet}(U^{p},M)$ respectively. One can also
speak of arithmetic and $p$-arithmetic hyperhomology and hypercohomology of
complexes of $k[K]$ or $k[G]$-modules; these are just the derived tensor
products and derived Hom corresponding to the Tor and Ext modules above in
their corresponding derived category. In this article we will only be
interested in the case where
$U^{p}=U^{p}_{1}(N):=\prod_{\ell\nmid
N}\text{GL}_{2}(\mathbb{Z}_{\ell})\times\prod_{\ell\mid
N}\left\\{g\in\text{GL}_{2}(\mathbb{Z}_{\ell}):g\equiv\begin{pmatrix}*&*\\\
0&1\end{pmatrix}\mod\ell^{v_{\ell}(N)}\right\\}$
and $N\geq 3$. In this case, there are canonical isomorphisms
$\displaystyle H_{*}(U^{p}_{1}(N)K,{-})$ $\displaystyle\simeq
H_{*}(\Gamma_{1}(N),{-}),$ $\displaystyle H^{*}(U^{p}_{1}(N)K,{-})$
$\displaystyle\simeq H^{*}(\Gamma_{1}(N),{-}),$ $\displaystyle
H_{*}(U^{p}_{1}(N),{-})$ $\displaystyle\simeq H_{*}(\Gamma^{p}_{1}(N),{-}),$
$\displaystyle H^{*}(U^{p}_{1}(N),{-})$ $\displaystyle\simeq
H^{*}(\Gamma^{p}_{1}(N),{-}),$
where the right-hand sides denote group homology or cohomology,
$\Gamma_{1}(N)\subseteq\text{SL}_{2}(\mathbb{Z})$ is the usual congruence
subgroup and $\Gamma^{p}_{1}(N)$ is defined as in the introduction. The
arithmetic (resp. $p$-arithmetic) (co)homology groups vanish in degrees
outside the range $[0,1]$ (resp. $[0,3]$).
### 2.2. Hecke operators
Let $H$ be any locally profinite group, $H_{0}$ a compact open subgroup and
$H_{+}\subseteq H$ a submonoid containing $H_{0}$, and write
$H_{-}=H_{+}^{-1}$. If $M$ is a (left) $k[H_{-}]$-module (resp.
$k[H_{+}]$-module), then the $H_{0}$-coinvariants $M_{H_{0}}$ (resp.
$H_{0}$-invariants $M^{H_{0}}$) are naturally a right (resp. left) module for
the Hecke algebra $\mathcal{H}(H_{+},H_{0})_{k}$, the algebra of smooth
compactly supported $H_{0}$-biinvariant functions $H_{+}\longrightarrow k$
under convolution.
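For instance (a standard example, recorded here only for orientation): taking $H=G$, $H_{+}=KZ$ and $H_{0}=K$, a function in $\mathcal{H}(KZ,K)_{k}$ is determined by its values on the cosets $\beta^{n}K$ for $n\in\mathbb{Z}$, since $\beta$ is central, so $\mathcal{H}(KZ,K)_{k}\simeq k[S^{\pm 1}]$ with $S$ the characteristic function of $\beta K$; this makes visible why the operator $S$ introduced in Section 2.3 below is invertible.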
This discussion applies in particular to arithmetic and $p$-arithmetic
(co)homology. Let $U^{p}$ and $N$ be as in the previous section. Let
$\mathbb{T}(pN)$ denote the abstract unramified Hecke algebra for
$\text{GL}_{2}$ away from $pN$ with coefficients in $\mathbb{Z}$, that is, the
restricted tensor product of the local Hecke algebras
$\mathcal{H}(\text{GL}_{2}(\mathbb{Q}_{\ell}),\text{GL}_{2}(\mathbb{Z}_{\ell}))$
with $\ell\nmid pN$. It is a commutative algebra freely generated by Hecke
operators $T_{\ell}$, corresponding to the double coset of
$\begin{pmatrix}1&0\\\ 0&\ell\end{pmatrix}$, and invertible operators
$S_{\ell}$, corresponding to the double coset of $\begin{pmatrix}\ell&0\\\
0&\ell\end{pmatrix}$, where $\ell\nmid pN$. Fix also a submonoid
$G_{+}\subseteq G$ containing $K$. Let $V$ be a representation of $G_{-}$
(resp. $G_{+}$) on a $k$-vector space. Then, the arithmetic homology
$H_{*}(U^{p}K,V)$ (resp. the arithmetic cohomology $H^{*}(U^{p}K,V)$) is the
$U^{p}K$-coinvariants (resp. invariants) of a representation of
$\text{GL}_{2}(\mathbb{A}^{p\infty})\times G_{+}$, and is thus endowed with
commuting actions of $\mathbb{T}(pN)$ and $\mathcal{H}(G_{+},K)_{k}$, the
latter being a right (resp. left) action. For us, $\mathcal{H}(G_{+},K)_{k}$
will always be a commutative algebra so we will not distinguish between left
and right actions. Similarly, if $V$ is a representation of $G$ on a
$k$-vector space, then the $p$-arithmetic homology $H_{*}(U^{p},V)$ (resp. the
$p$-arithmetic cohomology $H^{*}(U^{p},V)$) is endowed with an action of
$\mathbb{T}(pN)$. If $V$ is a representation of $G_{-}$ (resp. $G$) and
$V^{\vee}$ denotes its (abstract) contragredient, then there are
$\mathbb{T}(pN)\otimes\mathcal{H}(G_{+},K)$-equivariant (resp.
$\mathbb{T}(pN)$-equivariant) isomorphisms $H^{*}(U^{p}K,V^{\vee})\simeq
H_{*}(U^{p}K,V)^{\vee}$ (resp. $H^{*}(U^{p},V^{\vee})\simeq
H_{*}(U^{p},V)^{\vee}$). In particular, the systems of eigenvalues in both
spaces are the same, so all the statements for cohomology in Theorem 1.1
follow from the corresponding statements for homology. For this reason, we
will now work almost exclusively with homology.
Let $\rho$ be a $k$-valued continuous representation of
$\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ unramified outside $N$, and
consider the maximal ideal $\mathfrak{m}_{\rho}$ of $\mathbb{T}(pN)$ defined
by the kernel of the homomorphism $\mathbb{T}(pN)\longrightarrow k$ defined by
$T_{\ell}\longmapsto\operatorname{tr}\rho(\text{Frob}_{\ell})$ and $\ell
S_{\ell}\longmapsto\det\rho(\text{Frob}_{\ell})$ for $\ell\nmid p$, where
$\text{Frob}_{\ell}$ is a geometric Frobenius at $\ell$. For example,
$\mathfrak{m}_{1\oplus\varepsilon^{-1}}$ is generated by
$T_{\ell}-(1+\ell),\ell S_{\ell}-\ell$. Given a $\mathbb{T}(pN)$-module $M$,
we will say that $\rho$ contributes to, or appears in, $M$ if the localisation
$M_{\mathfrak{m}_{\rho}}$ is non-zero. We will sometimes write $M_{\rho}$
instead of $M_{\mathfrak{m}_{\rho}}$. If $V$ is an irreducible representation
of $K$, then any system of Hecke eigenvalues in $H^{*}(\Gamma_{1}(N),V)$
corresponds to a semisimple 2-dimensional Galois representation as above. For
such a $V$, the cohomology $H^{0}(\Gamma_{1}(N),V)$ vanishes unless $V$ is the
trivial representation, in which case it is one-dimensional and the Hecke
operators act via $T_{\ell}=1+\ell,S_{\ell}=1$. In particular, the only
semisimple Galois representation $\rho$ that can contribute to
$H^{0}(\Gamma_{1}(N),V)$ is $1\oplus\varepsilon^{-1}$. Consequently,
irreducible Galois representations can contribute to arithmetic
cohomology in level $\Gamma_{1}(N)$ only in degree 1.
Given a character $\chi\colon\mathbb{Q}_{p}^{\times}\longrightarrow
k^{\times}$, write $k(\chi)$ for the $\mathbb{T}(pN)$-module whose underlying
module is $k$ and where $T_{\ell}$ (resp. $S_{\ell}$) act via $\chi(\ell)$
(resp. $\chi(\ell)^{2}$). For any $\mathbb{T}(pN)$-module $M$, write
$M(\chi):=M\otimes_{k}k(\chi)$ for the twist of $M$ by $k(\chi)$.
### 2.3. Irreducible mod $p$ representations of
$\text{GL}_{2}(\mathbb{Q}_{p})$
In this section we will recall the construction of the smooth irreducible mod
$p$ representations of $G$ and some facts about them. Given $0\leq r\leq p-1$,
consider the representation $\operatorname{Sym}^{r}(k^{2})^{\vee}$ of $K$ over
$k$. Note that $\operatorname{Sym}^{r}(k^{2})$ naturally extends to a
representation of the monoid $G^{+}$ of matrices with entries in
$\mathbb{Z}_{p}$, and also (perhaps less naturally) to a representation of
$KZ$ where $\beta$ acts trivially. In particular,
$\operatorname{Sym}^{r}(k^{2})^{\vee}$ extends to representations of $G^{-}$
and of $KZ$. The first action defines an action of $\mathcal{H}(G^{+},K)$ on
the compact induction
$\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee})$ and the
second defines an action of $\mathcal{H}(KZ,K)$. We let $T$ denote the
operator on
$\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee})$
corresponding to the double coset of $\alpha$ under the first action and $S$
the operator corresponding to the double coset of $\beta$ under the second,
which is an invertible operator. Explicitly, these operators are described as
follows. Given $g\in G$ and $v\in\operatorname{Sym}^{r}(k^{2})^{\vee}$, let
$[g,v]$ denote the element of
$\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee})$ that is
supported on $gK$ and maps $g$ to $v$. We will use the same notation for
compact inductions from other groups and of other representations. The Hecke
operators above are then defined by the formulas
$\displaystyle T[g,v]$ $\displaystyle=\sum_{x\in
K/I}[gx\alpha,\alpha^{-1}x^{-1}v],$ $\displaystyle S[g,v]$
$\displaystyle=[\beta g,v].$
Given a continuous character $\chi\colon\mathbb{Q}_{p}^{\times}\longrightarrow
k^{\times}$ and $\lambda\in k$, define
$\pi(r,\lambda,\chi):=\frac{\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee})}{(T-\lambda,S-1)}\otimes_{k}\chi\omega^{r}$
(we have included the twist by $\omega^{r}$ in order for this notation to
match that of the existing literature).
###### Remark 2.1.
We will later need to consider a variation of this definition that is defined
in families. Let $R$ be a $k$-algebra. One can define Hecke operators $T$ and
$S$ on
$\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\chi)$
for any character $\chi\colon G\longrightarrow R^{\times}$ in the same way as
for $R=k$ and $\chi=1$, and the natural isomorphism
$\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\chi)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee})\otimes_{k}\chi$
intertwines $T$ and $S$ on the source with $T\otimes 1$ and $S\otimes 1$
respectively on the target. Moreover, if $b\in R^{\times}$ and $\lambda\in R$,
then there are isomorphisms of representations of $G$
$\displaystyle\frac{\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}R)}{(T-\lambda,S-1)}\otimes_{R}(\chi\mathrm{unr}_{{b}})$
$\displaystyle\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\frac{\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}(\chi\mathrm{unr}_{{b}}))}{(T-\lambda,S-1)}$
$\displaystyle\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\frac{\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\chi)}{(T-\lambda
b,S-b^{2})}.$
Let us define, for $\tau\in R$, $\sigma\in R^{\times}$ and $s\in\mathbb{Z}$,
$\widetilde{\pi}(r,\tau,\sigma,s)_{R}:=\frac{\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}\otimes_{k}R)}{(T-\tau,S-\sigma)}.$
In particular, when $R=k$, then
$\pi(r,\lambda,\omega^{a}\mathrm{unr}_{{b}})\simeq\widetilde{\pi}(r,\tau,\sigma,s)_{k}$
for $\tau=\lambda b,\sigma=b^{2}$ and $s=a+r$.
The following results are due to Barthel–Livné [BL94] and Breuil [Bre03].
###### Theorem 2.2.
1. (i)
If $(r,\lambda)\neq(0,\pm 1),(p-1,\pm 1)$, then $\pi(r,\lambda,\chi)$ is irreducible.
2. (ii)
If $\lambda\neq 0$ and $(r,\lambda)\neq(0,\pm 1)$ then there is a canonical
isomorphism
$\pi(r,\lambda,\chi)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{Ind}_{B}^{G}(\chi\mathrm{unr}_{{\lambda^{-1}}}\otimes\chi\mathrm{unr}_{{\lambda}}\omega^{r}).$
Here, $\operatorname{Ind}_{B}^{G}({-})$ denotes smooth parabolic induction.
The representation $\pi(r,\lambda,\chi)$ is said to be a principal series
representation.
3. (iii)
If $r=0$ and $\lambda=\pm 1$ there is a canonical homomorphism
$\pi(r,\lambda,\chi)\longrightarrow\operatorname{Ind}_{B}^{G}(\chi\mathrm{unr}_{{\lambda^{-1}}}\otimes\chi\mathrm{unr}_{{\lambda}})$
with kernel and cokernel isomorphic to
$\operatorname{St}\otimes\chi\mathrm{unr}_{{\lambda}}$ and image isomorphic to
$\chi\mathrm{unr}_{{\lambda}}\circ\det$. Here, $\operatorname{St}$ is the
Steinberg representation of $G$ over $k$, i.e. the quotient of
$\operatorname{Ind}_{B}^{G}(k)$ by the trivial representation $k$.
4. (iv)
If $\lambda=0$, then $\pi(r,0,\chi)$ is not isomorphic to a principal series
representation; it is said to be supersingular.
5. (v)
Any irreducible representation of $G$ over $k$ is isomorphic to a principal
series, a supersingular, a twist of the Steinberg, or a twist of the trivial
representation.
6. (vi)
If $\lambda\neq 0$, the only isomorphisms between various of these
representations are
$\pi(r,\lambda,\chi)\simeq\pi(r,-\lambda,\chi\mathrm{unr}_{{-1}})$
and for $\lambda\neq\pm 1$
$\pi(0,\lambda,\chi)\simeq\pi(p-1,\lambda,\chi).$
7. (vii)
If $\lambda=0$, the isomorphisms between various of these representations are
$\displaystyle\pi(r,0,\chi)$
$\displaystyle\simeq\pi(r,0,\chi\mathrm{unr}_{{-1}})$
$\displaystyle\simeq\pi(p-1-r,0,\chi\omega^{r})$
$\displaystyle\simeq\pi(p-1-r,0,\chi\omega^{r}\mathrm{unr}_{{-1}}).$
In particular, there is a non-zero map, unique up to scalar,
$\pi(p-1,1,\chi)\longrightarrow\pi(0,1,\chi)$ (resp.
$\pi(0,1,\chi)\longrightarrow\pi(p-1,1,\chi)$), which factors through a twist
of the Steinberg (resp. trivial) representation. In Section 4, we will
describe these maps explicitly.
### 2.4. The mod $p$ Langlands correspondence for
$\text{GL}_{2}(\mathbb{Q}_{p})$
Throughout this section, we assume $p\geq 5$. We will use the same conventions
for the mod $p$ local Langlands correspondence as in [Eme], however we also
include twists of extensions of the cyclotomic character by the trivial
character in our discussion. Let $\operatorname{MF}$ be Colmez’s magical
functor, defined by $\operatorname{MF}(\pi):=\mathbf{V}(\pi)\otimes\omega$ for
$\mathbf{V}$ as in [Col10].
###### Theorem 2.3.
Let
$\rho\colon\text{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})\longrightarrow\text{GL}_{2}(k)$
be a continuous representation. Then, there exists a finite length smooth
representation of $G$ over $k$, unique up to isomorphism, satisfying the
following properties:
1. (i)
$\operatorname{MF}(\pi)\simeq\rho$,
2. (ii)
$\pi$ has central character corresponding to $(\det\rho)\omega$ under local
class field theory,
3. (iii)
$\pi$ has no finite-dimensional $G$-invariant subrepresentations or quotients.
More specifically, $\pi$ can be described as follows:
1. (i)
If $\rho$ is irreducible, say
$\rho=\operatorname{Ind}(\omega_{2}^{r+1})\otimes\chi$ with $0\leq r\leq p-1$,
then $\pi=\pi(r,0,\chi\omega)$ is supersingular.
2. (ii)
If $\rho\simeq\begin{pmatrix}\chi_{1}&*\\\ 0&\chi_{2}\end{pmatrix}$ with
$\chi_{1}\neq\chi_{2}\omega^{\pm 1}$, then $\pi$ is an extension
$0\longrightarrow\pi(r,\lambda,\chi)\longrightarrow\pi\longrightarrow\pi([p-3-r],\lambda^{-1},\chi\omega^{r+1})\longrightarrow
0,$
where
$\displaystyle\chi_{1}$
$\displaystyle=\omega^{r}\chi\mathrm{unr}_{{\lambda}},$
$\displaystyle\chi_{2}$
$\displaystyle=\omega^{-1}\chi\mathrm{unr}_{{\lambda^{-1}}}$
with $0\leq r\leq p-1$, and $[p-3-r]$ is the unique integer between $0$ and
$p-2$ that is congruent to $p-3-r$ modulo $p-1$. This extension is split if
and only if $\rho$ is semisimple.
3. (iii)
If $\rho$ is a non-split extension $\begin{pmatrix}\chi&*\\\
0&\chi\omega^{-1}\end{pmatrix}$, then $\pi$ has a unique Jordan–Hölder series,
which is of the form
$0\subseteq\pi_{1}\subseteq\pi_{2}\subseteq\pi$
where
$\pi_{1}\simeq\operatorname{St}\otimes\chi,\pi_{2}/\pi_{1}\simeq\chi\circ\det$
and $\pi/\pi_{2}\simeq\pi(p-3,1,\chi\omega)$.
4. (iv)
If $\rho$ is an extension $\begin{pmatrix}\chi\omega^{-1}&*\\\
0&\chi\end{pmatrix}$, then $\pi$ is an extension
$0\longrightarrow\pi(p-3,1,\chi\omega)\longrightarrow\pi\longrightarrow\operatorname{St}\otimes\chi\longrightarrow
0.$
This extension is split if and only if $\rho$ is semisimple. On both the
Galois side and the $\text{GL}_{2}(\mathbb{Q}_{p})$ side, there is a unique
class of non-trivial extensions, so this property determines $\pi$.
Proof. All the statements follow from the work of Colmez [Col10, Section
VII.4] except for case (iv), which follows from the end of the proof of [Paš13,
Lemma 10.35] by taking into account that there is only one isomorphism class
of Galois representations that are non-split extensions of 1 by $\omega^{-1}$.
∎
For $\rho$ as in the theorem, we will say that $\pi$ is the representation
corresponding to $\rho$ under the mod $p$ local Langlands correspondence. One
could argue (for example, following [CEG+18, Remark 7.7]) that the “true” mod
$p$ local Langlands correspondence in case (iv) should be (up to isomorphism)
a non-trivial extension of $\pi$ as above by two copies of the trivial
character. However, we are only interested in the socle of $\pi$, which
remains the same and can be easily described in general by the following
result.
###### Proposition 2.4.
Let $\rho$ be a representation
$\text{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})\longrightarrow\text{GL}_{2}(k)$
and let $\pi$ be the corresponding representation of
$\text{GL}_{2}(\mathbb{Q}_{p})$. Then, the following statements hold.
1. (i)
The representation $\pi(r,0,\chi)$ is a subrepresentation of $\pi$ if and only
if $\rho\simeq\operatorname{Ind}(\omega_{2}^{r+1})\otimes\chi\omega^{-1}$.
2. (ii)
For $\lambda\neq 0$ and $(r,\lambda)\neq(0,\pm 1),(p-1,\pm 1)$,
$\pi(r,\lambda,\chi)$ is a subrepresentation of $\pi$ if and only if
$\rho\simeq\begin{pmatrix}\omega^{r}\mathrm{unr}_{{\lambda}}&*\\\
0&\omega^{-1}\mathrm{unr}_{{\lambda^{-1}}}\end{pmatrix}\otimes\chi.$
3. (iii)
$\operatorname{St}\otimes\chi$ is a subrepresentation of $\pi$ if and only if
$\rho\simeq\begin{pmatrix}1&*\\\ 0&\omega^{-1}\end{pmatrix}\otimes\chi.$
4. (iv)
$\chi\circ\det$ is never a subrepresentation of $\pi$.
Proof. This follows from Theorem 2.3 and Theorem 2.2 (vi). ∎
### 2.5. The weight part of Serre’s conjecture
In this section we will recall the answer to the following question: given a
continuous representation
$\rho\colon\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\longrightarrow\text{GL}_{2}(k)$,
for what $0\leq r\leq p-1$ and $s$ does $\rho$ contribute to
$H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{r}(k^{2})\otimes\omega^{-s})$ and,
when it does, what are its eigenvalues for the Hecke operators at $p$? What we
mean by Hecke operators at $p$ is the following. One can define the action of
Hecke operators $T$ and $S$ on the arithmetic homology complex
$C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s})$
for any tame level $U^{p}\subseteq\text{GL}_{2}(\mathbb{A}^{p\infty})$ as we
explained in Section 2.2 by using the actions of $G^{-}$ and $KZ$ on
$\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}$, and similarly
for the cohomology of the dual. When $s=0$, these operators correspond under
the Eichler–Shimura isomorphism followed by reduction mod $p$ to the usual
Hecke operators $T_{p}$ and $\langle p\rangle$ respectively for modular forms
of level $\Gamma_{1}(N)$ and weight $r+2$.
The answer to the question is contained in the proof of [BDJ10, Theorem 3.17].
We warn the reader that our conventions are different to those in [BDJ10]:
$\rho$ contributes to the cohomology of a Serre weight $V$ in the sense of
this article if and only if $\rho^{\vee}$ is modular of weight $V$ in the
sense of [BDJ10] (equivalently, if and only if $\rho$ is modular of weight
$V^{\vee}\otimes\omega^{-1}$ in the sense of [BDJ10]). In particular, it
follows from [BDJ10, Corollary 2.11] that $\rho$ contributes to the cohomology
of $V\otimes\omega^{a}$ if and only if $\rho\varepsilon^{a}$ contributes to
the cohomology of $V$ for any $a$.
###### Theorem 2.5.
Let
$\rho\colon\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\longrightarrow\text{GL}_{2}(k)$
be an odd irreducible Galois representation. Let $0\leq r\leq p-1$ and
$a\in\mathbb{Z}$. Let $\lambda\in k,b\in k^{\times}$ and set $\tau=\lambda b$,
$\sigma=b^{2}$ and $s=a+r$. Then, $\rho$ contributes to the
$(T=\tau,S=\sigma)$-eigenspace in
$H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{r}(k^{2})\otimes\omega^{-s})$ if and
only if $N(\rho)$ divides $N$ and one of the following holds:
1. (i)
$\lambda=0$ and
$\rho|_{\mathcal{G}_{p}}\simeq\operatorname{Ind}(\omega_{2}^{r+1})\otimes\omega^{a-1}\mathrm{unr}_{{b}}$,
2. (ii)
$\lambda\neq 0$, $(r,\lambda)\neq(0,\pm 1)$ and
$\rho|_{\mathcal{G}_{p}}\simeq\begin{pmatrix}\omega^{r}\mathrm{unr}_{{\lambda}}&*\\\
0&\omega^{-1}\mathrm{unr}_{{\lambda^{-1}}}\end{pmatrix}\otimes\omega^{a}\mathrm{unr}_{{b}}.$
3. (iii)
$r=0$, $\lambda=\pm 1$ and
$\rho|_{\mathcal{G}_{p}}\simeq\begin{pmatrix}1&*\\\
0&\omega^{-1}\end{pmatrix}\otimes\omega^{a}\mathrm{unr}_{{\lambda b}}.$
where $*$ denotes a peu ramifiée extension.
Proof. According to [Edi92, Theorem 2.5 and Theorem 2.6], only $\rho$ whose
restriction to $\mathcal{G}_{p}$ is irreducible (resp. reducible) can
contribute to the eigenspaces where $T=0$ (resp. $T\neq 0$). By [Edi92,
Theorem 2.6], the representations $\rho$ appearing in the
$(T=0,S=b^{2})$-eigenspace of
$H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{r}(k^{2})\otimes\omega^{-s})$ are
those satisfying
$\rho|_{\mathcal{I}_{p}}\omega^{-s}\simeq(\omega_{2}^{r+1}\oplus\omega_{2}^{p(r+1)})\otimes\omega^{-r-1}$
and $(\det\rho)(\text{Frob}_{p})=b^{2}$ (recall that $\text{Frob}_{p}$ is a
Frobenius mapping to $p$ under class field theory, so it is a well-defined
element of $\mathcal{G}_{p}^{\mathrm{ab}}$). In particular, the restriction to
$\mathcal{G}_{p}$ of such a representation has determinant
$\omega^{2s-r-1}\mathrm{unr}_{{b^{2}}}$, so it must be isomorphic to
$\operatorname{Ind}(\omega_{2}^{r+1})\otimes\omega^{s-r-1}\mathrm{unr}_{{b}}\simeq\operatorname{Ind}(\omega_{2}^{r+1})\otimes\omega^{a-1}\mathrm{unr}_{{b}}.$
For the case where $\lambda\neq 0$, [Edi92, Theorem 2.5] states that, given a
system of Hecke eigenvalues in the $(T=\tau,S=\sigma)$-eigenspace of
$H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{r}(k^{2})\otimes\omega^{-s})$, then
$\rho|_{\mathcal{G}_{p}}\omega^{-s}\simeq\begin{pmatrix}\mathrm{unr}_{{\tau}}&*\\\
0&\omega^{-r-1}\mathrm{unr}_{{\tau^{-1}\sigma}}\end{pmatrix}.$
If $\tau=\lambda b$ and $\sigma=b^{2}$, this is equivalent to
$\rho|_{\mathcal{G}_{p}}\simeq\begin{pmatrix}\omega^{r}\mathrm{unr}_{{\lambda}}&*\\\
0&\omega^{-1}\mathrm{unr}_{{\lambda^{-1}}}\end{pmatrix}\otimes\omega^{a}\mathrm{unr}_{{b}}.$
Conversely, [BDJ10, Theorem 3.17] and its proof show that a representation of
this form does contribute to
$H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{r}(k^{2})\otimes\omega^{-s})$
provided that the extension is peu ramifiée whenever $r=0$ and $\lambda=\pm 1$
(and, in this case, it never contributes if the extension is très ramifiée).
It remains to show that it contributes to the $(T=\lambda
b,S=b^{2})$-eigenspace. When $r\neq p-2$ or $\rho|_{\mathcal{G}_{p}}$ is non-
split, [Edi92, Theorems 2.5 and 2.6] again show that this is the only eigenspace
for Hecke operators at $p$ to which $\rho$ can contribute, so it must do so.
In the case when $r=p-2$ and
$\rho|_{\mathcal{G}_{p}}\simeq\omega^{a-1}\mathrm{unr}_{{\lambda
b}}\oplus\omega^{a-1}\mathrm{unr}_{{\lambda^{-1}b}}$, the same results show
that $\rho$ can only appear in the eigenspaces for $(T=\lambda b,S=b^{2})$ and
$(T=\lambda^{-1}b,S=b^{2})$. We know that $\rho$ contributes to at least one
of these, and we must show that $\rho$ contributes to both. When $\lambda=\pm
1$ this is clear, since both systems of eigenvalues are actually the same.
When $\lambda\neq\pm 1$, this follows from [Gro90, Theorem 13.10]. ∎
## 3\. $p$-arithmetic homology of $\pi(r,\lambda,\chi)$
### 3.1. $p$-arithmetic and arithmetic homology
In this section we will relate the $p$-arithmetic homology of the
representations $\pi(r,\lambda,\chi)$ to the arithmetic homology of Serre
weights $\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes\omega^{s}$. The argument
is the same as that of [Tar22]. As in Remark 2.1, $R$ is a $k$-algebra.
###### Lemma 3.1.
For any $0\leq r\leq p-1$ and $s\in\mathbb{Z}$, $\tau\in R$ and $\sigma\in
R^{\times}$, we have
$\text{Tor}^{i}_{R[T,S]}(R[T,S]/(T-\tau,S-\sigma),\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}\otimes_{k}R))=0$
for $i>0$.
Proof. This follows from the fact that $(S-\sigma,T-\tau)$ is a regular
sequence for the $k[T,S]$-module
$\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}\otimes_{k}R)$,
which can be seen by studying how $T$ and $S$ modify the support of a function
using the Cartan decomposition. See [CEG+18, Lemma 4.10] for the details. ∎
This shows that the $G$-module $\widetilde{\pi}(r,\tau,\sigma,s)_{R}$ from
Remark 2.1 can be written not just as the eigenquotient
$\displaystyle\operatorname{c-Ind}_{K}^{G}$
$\displaystyle(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}\otimes_{k}R)/(T-\tau,S-\sigma)$
$\displaystyle\simeq\frac{R[T,S]}{(T-\tau,S-\sigma)}\otimes_{R[T,S]}\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}\otimes_{k}R)$
but also as a _derived_ eigenquotient: there is an isomorphism in the derived
category of (abstract111In our arguments involving homological algebra, we
will always work with categories of abstract representations (of $G$ or other
groups) and never with categories of smooth representations)
$R[G][T,S]$-modules
$\widetilde{\pi}(r,\tau,\sigma,s)_{R}\simeq\frac{R[G][T,S]}{(T-\tau,S-\sigma)}\otimes^{\mathbb{L}}_{R[G][T,S]}\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}\otimes_{k}R).$
Fix $U^{p}$ and $N$ as in Section 2.1. We can define an action of Hecke
operators $T$ and $S$ in arithmetic homology over $R$ in the same way as
described in the beginning of Section 2.5, and the arguments in [Tar22,
Section 5.8] show that we have an isomorphism in the derived category of
$\mathbb{T}(pN)\otimes_{\mathbb{Z}}R[T,S]$-modules for the $p$-arithmetic
homology complex
$\displaystyle C_{\bullet}(U^{p},\widetilde{\pi}(r,\tau,\sigma,s)_{R})$
$\displaystyle\simeq
C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}\otimes_{k}R)\otimes^{\mathbb{L}}_{R[T,S]}\frac{R[T,S]}{(T-\tau,S-\sigma)}.$
Moreover,
$\displaystyle
C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}\otimes_{k}R)\simeq
C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s})\otimes^{\mathbb{L}}_{k}R.$
Thus, in fact
$\displaystyle C_{\bullet}(U^{p},\widetilde{\pi}(r,\tau,\sigma,s)_{R})$
$\displaystyle\simeq
C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s})\otimes^{\mathbb{L}}_{k[T,S]}\frac{R[T,S]}{(T-\tau,S-\sigma)}.$
###### Remark 3.2.
The reason why we have considered representations in families is the
following. Assume that $R=k[\tau,\sigma,\sigma^{-1}]$ for two indeterminate
variables $\tau$ and $\sigma$. Then,
$\displaystyle C_{\bullet}(U^{p},\widetilde{\pi}(r,\tau,\sigma,s)_{R})$
$\displaystyle\simeq
C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s})\otimes^{\mathbb{L}}_{k[T,S]}\frac{R[T,S]}{(T-\tau,S-\sigma)}$
$\displaystyle\simeq
C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s})\otimes^{\mathbb{L}}_{k[T,S,S^{-1}]}\frac{R[T,S,S^{-1}]}{(T-\tau,S-\sigma)}$
$\displaystyle\simeq
C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s})\otimes^{\mathbb{L}}_{k[T,S,S^{-1}]}\frac{k[T,S,S^{-1},\tau,\sigma,\sigma^{-1}]}{(T-\tau,S-\sigma)}$
$\displaystyle\simeq
C_{\bullet}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s}),$
where the last term is viewed as an $R$-module by letting $\tau$ act as $T$
and $\sigma$ as $S$. In other words, the $p$-arithmetic homology over $R$
coincides with the corresponding arithmetic homology, and not just a (derived)
eigenquotient of it.
###### Proposition 3.3.
There is a spectral sequence converging to
$H_{*}(U^{p},\widetilde{\pi}(r,\tau,\sigma,s)_{R})$ whose $E^{2}$ page is
$E^{2}_{i,j}=\text{Tor}_{i}^{k[T,S]}\left(\frac{R[T,S]}{(T-\tau,S-\sigma)},H_{j}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{s})\right).$
In particular, there is a spectral sequence converging to
$H_{*}(U^{p},\pi(r,\lambda,\omega^{a}\mathrm{unr}_{{b}}))$ whose $E^{2}$ page
is
$E^{2}_{i,j}=\text{Tor}_{i}^{k[T,S]}\left(\frac{k[T,S]}{(T-\lambda
b,S-b^{2})},H_{j}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{a+r})\right).$
###### Remark 3.4.
When $R=k$, the $k[T,S]$-module resolution
$k[T,S]\xrightarrow{(S-\sigma)\oplus(\tau-T)}k[T,S]^{2}\xrightarrow{(T-\tau,S-\sigma)}k[T,S]$
of $k[T,S]/(T-\tau,S-\sigma)$ shows that for any $k[T,S]$-module $V$ that is
finite-dimensional as a $k$-vector space,
$\text{Tor}^{k[T,S]}_{i}(k[T,S]/(T-\tau,S-\sigma),V)$ vanishes in degrees
outside the range $[0,2]$, and is isomorphic in degree 2 (resp. degree 0) to
the $(T=\tau,S=\sigma)$-eigenspace (resp. eigenquotient) of $V$. Moreover, the
Tor modules in degree 1 lie in a short exact sequence
$\displaystyle 0$
$\displaystyle\longrightarrow\frac{k[T]}{(T-\tau)}\otimes_{k[T]}\text{Hom}_{k[S]}\left(\frac{k[S]}{(S-\sigma)},V\right)$
$\displaystyle\longrightarrow\text{Tor}^{k[T,S]}_{1}\left(\frac{k[T,S]}{(T-\tau,S-\sigma)},V\right)$
$\displaystyle\longrightarrow\text{Hom}_{k[T]}\left(\frac{k[T]}{(T-\tau)},\frac{k[S]}{(S-\sigma)}\otimes_{k[S]}V\right)\longrightarrow
0.$
In particular, the Tor groups vanish if and only if they vanish in at least
one of the degrees 0, 1 or 2. The dimensions $d_{0}$ and $d_{2}$ of the Tor
spaces in degrees 0 and 2 are equal, and the dimension in degree 1 is
$2d_{0}=2d_{2}$.
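For instance (an illustrative check of this numerology, with hypothetical scalars): if $V=k$ is one-dimensional with $T$ and $S$ acting by scalars $\tau^{\prime}$ and $\sigma^{\prime}$, then tensoring the above resolution with $V$ over $k[T,S]$ yields the complex
$$k\xrightarrow{\;a\,\longmapsto\,\left((\sigma^{\prime}-\sigma)a,\;(\tau-\tau^{\prime})a\right)\;}k^{2}\xrightarrow{\;(a,b)\,\longmapsto\,(\tau^{\prime}-\tau)a+(\sigma^{\prime}-\sigma)b\;}k,$$
whose homology has dimensions $(d_{0},d_{1},d_{2})=(1,2,1)$ if $(\tau^{\prime},\sigma^{\prime})=(\tau,\sigma)$, since all differentials then vanish, and $(0,0,0)$ otherwise, in agreement with $d_{1}=2d_{0}=2d_{2}$.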
### 3.2. Proof of Theorem 1.1 in the generic case
Parts (i) and (ii) of Theorem 1.1 follow immediately from Proposition 3.3 for
supersingular and principal series representations, as well as their analogue
for the reducible representations $\pi(0,\pm 1,\chi)$ and $\pi(p-1,\pm
1,\chi)$ (by the corresponding results for arithmetic homology).
Moreover, if $\rho$ is an odd irreducible 2-dimensional Galois
representation, then the localisation at $\rho$ of the spectral sequence from
Proposition 3.3 satisfies $(E^{2}_{i,j})_{\rho}=0$ for $j\neq 1$, so we may
conclude that
$H_{i+1}(\Gamma^{p}_{1}(N),\pi(r,\lambda,\omega^{a}\mathrm{unr}_{{b}}))_{\rho}\simeq\text{Tor}_{i}^{k[T,S]}\left(\frac{k[T,S]}{(T-\lambda
b,S-b^{2})},H_{1}(U^{p}K,\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes_{k}\omega^{a+r})_{\rho}\right).$
In particular, taking into account Remark 3.4, the following are equivalent:
1. (i)
$\rho$ contributes to the $p$-arithmetic homology
$H_{*}(\Gamma^{p}_{1}(N),\pi(r,\lambda,\omega^{a}\mathrm{unr}_{{b}}))$, and it
does exactly in degrees 1, 2 and 3,
2. (ii)
$\rho$ contributes to the degree 1 $p$-arithmetic homology
$H_{1}(\Gamma^{p}_{1}(N),\pi(r,\lambda,\omega^{a}\mathrm{unr}_{{b}}))$,
3. (iii)
$\rho$ contributes to the $p$-arithmetic homology
$H_{*}(\Gamma^{p}_{1}(N),\pi(r,\lambda,\omega^{a}\mathrm{unr}_{{b}}))$,
4. (iv)
$\rho$ contributes to the $(T=\lambda b,S=b^{2})$-eigenspace of
$H_{1}(\Gamma_{1}(N),\operatorname{Sym}^{r}(k^{2})^{\vee}\otimes\omega^{a+r})$,
5. (v)
$\rho$ contributes to the $(T=\lambda b,S=b^{2})$-eigenspace of
$H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{r}(k^{2})\otimes\omega^{-a-r})$.
By Theorem 2.5 and Proposition 2.4, when
$\pi(r,\lambda,\omega^{a}\mathrm{unr}_{{b}})$ is irreducible, these are
equivalent to $N(\rho)$ dividing $N$ and this representation appearing in the
socle of the smooth representation of $\text{GL}_{2}(\mathbb{Q}_{p})$
associated to $\rho|_{\mathcal{G}_{p}}$ by the mod $p$ local Langlands
correspondence of Theorem 2.3, which proves part (iii) of Theorem 1.1 in this
case. For later reference, we also record the following proposition, which
follows in the same way from Proposition 3.3 and Theorem 2.5.
###### Proposition 3.5.
Let
$\rho\colon\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\longrightarrow\text{GL}_{2}(k)$
be an odd irreducible representation. Then, the space
$H_{1}(\Gamma^{p}_{1}(N),\pi(p-1,1,\chi))$ (resp.
$H_{1}(\Gamma^{p}_{1}(N),\pi(0,1,\chi))$) is finite-dimensional, and any
system of Hecke eigenvalues in it has an attached Galois representation.
Moreover, $\rho$ contributes to this space if and only if $N(\rho)$ divides
$N$, and $\rho|_{\mathcal{G}_{p}}$ is isomorphic to an extension (resp. a peu
ramifiée extension)
$\begin{pmatrix}1&*\\\ 0&\omega^{-1}\end{pmatrix}\otimes\chi.$
## 4\. Preparation for the non-generic cases
In order to deal with the Steinberg and trivial cases, we will need a few
preliminaries on the non-zero maps
$\pi(p-1,1,\chi)\longrightarrow\pi(0,1,\chi)$ and
$\pi(0,1,\chi)\longrightarrow\pi(p-1,1,\chi)$ and the corresponding maps on
$p$-arithmetic homology. They turn out to be related to the degeneracy maps
from modular forms of level $\Gamma_{1}(N)$ to level
$\Gamma_{1}(N)\cap\Gamma_{0}(p)$ induced by $\tau\longmapsto\tau$ and
$\tau\longmapsto p\tau$ and to the group cohomological avatar of
multiplication by the Hasse invariant studied by Edixhoven–Khare in [EK03].
Our next goal is to study these maps.
### 4.1. The map $\pi(p-1,1,1)\longrightarrow\pi(0,1,1)$
Recall from Section 2.3 that there is a unique-up-to-scalar non-zero map
$\pi(p-1,1,\chi)\longrightarrow\pi(0,1,\chi)$. The goal of this section is to
give an explicit description of this map.
We may assume that $\chi=1$. First, let us observe that there is a $K$-module
isomorphism
$k\oplus\operatorname{Sym}^{p-1}(k^{2})^{\vee}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{Map}(\mathbb{P}^{1}(\mathbb{F}_{p}),k)$,
which identifies the trivial representation with the subrepresentation of
constant functions $\mathbb{P}^{1}(\mathbb{F}_{p})\longrightarrow k$ and
$\operatorname{Sym}^{p-1}(k^{2})^{\vee}$ with the subrepresentation of
functions whose total sum equals 0. As usual, one can also identify
$\operatorname{Sym}^{p-1}(k^{2})^{\vee}$ with the space of homogeneous
polynomial functions of degree $p-1$ in two variables. The former
identification is then given by sending a homogeneous polynomial function $Q$
of degree $p-1$ in two variables to the function $(x:y)\longmapsto Q(x,y)$, which is well-defined since $Q(\lambda x,\lambda y)=\lambda^{p-1}Q(x,y)=Q(x,y)$ for all $\lambda\in\mathbb{F}_{p}^{\times}$.
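As a sanity check on the existence of such an isomorphism, the dimensions match:
$\dim_{k}\operatorname{Map}(\mathbb{P}^{1}(\mathbb{F}_{p}),k)=p+1=1+p=\dim_{k}k+\dim_{k}\operatorname{Sym}^{p-1}(k^{2})^{\vee},$
since $\mathbb{P}^{1}(\mathbb{F}_{p})$ has $p+1$ points and the space of homogeneous polynomials of degree $p-1$ in two variables has dimension $p$.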
In fact, this can be upgraded to a $G^{-}$-equivariant isomorphism in a
natural way. The action of the monoid $G^{+}$ on $\mathbb{F}_{p}^{2}$ descends
to an action on
$\mathbb{F}_{p}^{2}/\mathbb{F}_{p}^{\times}=\mathbb{P}^{1}(\mathbb{F}_{p})\cup\\{0\\}$.
It is easy to check (for example, using the Cartan decomposition of $G$) that
an element $g\in G^{+}$ can act in three ways: invertibly (if $g\in K$), by
sending everything to 0 (if all the entries of $g$ are multiples of $p$), or
by mapping 0 and one point of $\mathbb{P}^{1}(\mathbb{F}_{p})$ to 0 and all
other points of $\mathbb{P}^{1}(\mathbb{F}_{p})$ to another (fixed) point of
$\mathbb{P}^{1}(\mathbb{F}_{p})$. Thus, we get an action of $G^{-}$ on
$\operatorname{Map}(\mathbb{P}^{1}(\mathbb{F}_{p})\cup\\{0\\},k)$, and it
follows from the previous sentence that the subspace of functions such that
$f(0)=\sum_{P\in\mathbb{P}^{1}(\mathbb{F}_{p})}f(P)$ is stable under this
action. This space can be naturally identified with
$\operatorname{Map}(\mathbb{P}^{1}(\mathbb{F}_{p}),k)$ by restriction, and the
resulting action of $G^{-}$ on this space makes the isomorphisms in the
previous paragraph $G^{-}$-equivariant. Naturally, these isomorphisms are also
$KZ$-equivariant when we instead extend the action of $K$ to one of $KZ$ by
letting $\beta$ act trivially. There is a $K$-equivariant isomorphism
$\operatorname{Map}(\mathbb{P}^{1}(\mathbb{F}_{p}),k)\longrightarrow\operatorname{c-Ind}_{I}^{K}(k)$
given by sending a function $f$ to $\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\longmapsto f(a:c)$, and we will view the target as a
$G^{-}$-module and a $KZ$-module (whose underlying $K$-module structures
agree) by transport of structure. In particular, compactly inducing to $G$ we
obtain actions of Hecke operators $T$ and $S$ as usual, and the maps above
induce $k[T,S]$-module isomorphisms
$\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{p-1}(k^{2})^{\vee})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{c-Ind}_{K}^{G}(\operatorname{c-Ind}_{I}^{K}(k)).$
Consider the following two maps
$\phi_{1},\phi_{2}\colon\operatorname{c-Ind}_{I}^{G}(k)\longrightarrow\operatorname{c-Ind}_{K}^{G}(k)$.
The first is given simply by $\phi_{1}([g,a])=[g,a]$. The second map
$\phi_{2}$ is defined by $\phi_{2}([g,a])=[g\alpha,a]$. It will be useful to
view this map as the composition of the map
$[g,a]\longmapsto[g,a]\colon\operatorname{c-Ind}_{I}^{G}(k)\longrightarrow\operatorname{c-Ind}_{\alpha
K\alpha^{-1}}^{G}(k)$ and the intertwining isomorphism
(4.1) $\displaystyle\begin{split}\operatorname{c-Ind}_{\alpha
K\alpha^{-1}}^{G}(k)&\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{c-Ind}_{K}^{G}(k)\\\
[g,a]&\longmapsto[g\alpha,a].\end{split}$
It is tedious, but straightforward, to check that the resulting maps
$\operatorname{c-Ind}_{K}^{G}(\operatorname{c-Ind}_{I}^{K}(k))\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{c-Ind}_{I}^{G}(k)\rightrightarrows\operatorname{c-Ind}_{K}^{G}(k)$
are $k[T,S]$-equivariant. One can also check that the composition
$\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{p-1}(k^{2})^{\vee})\longrightarrow\operatorname{c-Ind}_{K}^{G}(\operatorname{c-Ind}_{I}^{K}(k))\xrightarrow{\phi_{1}\oplus\phi_{2}}\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{K}^{G}(k)$
is of the form $\begin{pmatrix}1&0\\\ T&\phi\end{pmatrix},$ where $\phi$ is
also $k[T,S]$-equivariant.
###### Lemma 4.2.
The reduction mod $(T-1,S-1)$ of the homomorphism $\phi$ is a non-zero map
$\pi(p-1,1,1)\longrightarrow\pi(0,1,1)$.
Proof. The following argument is a representation theoretic analogue of the
proof of [EK03, Lemma 2] (in fact, this lemma can be deduced literally from
loc. cit., for example as a consequence of Proposition 4.6 below). Write
${}^{\circ}G=\\{g\in G:v_{p}(\det(g))=0\\}$. Then, by [Ser80, II.1.4 Theorem
3], ${}^{\circ}G$ is the amalgamated product of $K$ and $\alpha K\alpha^{-1}$
along $I$. Thus, there is a Mayer-Vietoris exact sequence of $k[G]$-modules,
$0\longrightarrow\operatorname{c-Ind}_{I}^{G}(k)\longrightarrow\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{\alpha
K\alpha^{-1}}^{G}(k)\longrightarrow\operatorname{c-Ind}_{{}^{\circ}G}^{G}(k)\longrightarrow
0.$
Composing with the intertwining isomorphism 4.1, we obtain an exact sequence
(4.3) $\displaystyle
0\longrightarrow\operatorname{c-Ind}_{K}^{G}(\operatorname{c-Ind}_{I}^{K}(k))\stackrel{{\scriptstyle\phi_{1}\oplus\phi_{2}}}{{\longrightarrow}}\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{K}^{G}(k)\longrightarrow\operatorname{c-Ind}_{{}^{\circ}G}^{G}(k)\longrightarrow
0$
where the last map is given by
$([g_{1},a_{1}],[g_{2},a_{2}])\longmapsto[g_{1},a_{1}]-[g_{2}\alpha^{-1},a_{2}]$.
This exact sequence is $k[T,S]$-equivariant if we endow
$\operatorname{c-Ind}_{{}^{\circ}G}^{G}(k)$ with the action of $T$ (resp. $S$)
given by acting by $\alpha$ (resp. $\beta$). Taking the quotient of the exact
sequence above by the ideal $(T-1,S-1)$, we obtain an exact sequence
$\pi(0,1,1)\oplus\pi(p-1,1,1)\longrightarrow\pi(0,1,1)\oplus\pi(0,1,1)\longrightarrow
k\longrightarrow 0,$
where the first map is given by $\begin{pmatrix}1&0\\\
1&\overline{\phi}\end{pmatrix}$, where $\overline{\phi}$ is the map induced by
$\phi$. Looking at the Jordan-Hölder constituents of the terms in the exact
sequence, it is clear that $\overline{\phi}$ cannot be zero. ∎
Let us also remark that if $R$ is a $k$-algebra, $\tau\in R,\sigma\in
R^{\times}$ and $s\in\mathbb{Z}$, then tensoring $\phi$ with
$\omega^{s}\otimes_{k}R$ and quotienting by $(T-\tau,S-\sigma)$ we get a map
$\widetilde{\pi}(p-1,\tau,\sigma,s)_{R}\longrightarrow\widetilde{\pi}(0,\tau,\sigma,s)_{R}$.
### 4.2. The map $\pi(0,1,1)\longrightarrow\pi(p-1,1,1)$
There is also a unique-up-to-scalar non-zero map
$\pi(0,1,\chi)\longrightarrow\pi(p-1,1,\chi)$, which factors through
$\chi\circ\det$. The goal of this section is to show that this map comes from
specialising a map
$\widetilde{\pi}(0,\tau,\sigma,s)_{R}\longrightarrow\widetilde{\pi}(p-1,\tau,\sigma,s)_{R}$
as in the setting at the end of the previous section. As there, this is
essentially equivalent to the existence of a lift of the map
$\pi(0,1,1)\longrightarrow\pi(p-1,1,1)$ to a $k[T,S]$-equivariant map
$\operatorname{c-Ind}_{K}^{G}(k)\longrightarrow\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{p-1}(k^{2})^{\vee}).$
We will construct such a map by dualising the procedure of the previous
section. Consider the composition
(4.4)
$\displaystyle\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{K}^{G}(k)\xrightarrow{\text{id}\oplus\lx@cref{creftypecap~refnum}{eqn:
intertwining
isomorphism}^{-1}}\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{\alpha
K\alpha^{-1}}^{G}(k)\longrightarrow\operatorname{c-Ind}_{I}^{G}(k),$
where the last map is the sum of inclusions. It is $k[T,S]$-equivariant and
the resulting map
$\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{K}^{G}(k)\longrightarrow\operatorname{c-Ind}_{K}^{G}(k)\oplus\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{p-1}(k^{2})^{\vee})$
is of the form $\begin{pmatrix}1&S^{-1}T\\\ 0&-\psi\end{pmatrix}$. Explicitly,
$\psi([g,1])=\sum_{x\in K/I}[\beta^{-1}gx\alpha,e^{*}]$, where $e^{*}$ is the
element of $\operatorname{Sym}^{p-1}(k^{2})^{\vee}$ corresponding to the
polynomial function $Q(x,y)=x^{p-1}$.
###### Lemma 4.5.
The reduction mod $(T-1,S-1)$ of the homomorphism $\psi$ is a non-zero map
$\pi(0,1,1)\longrightarrow\pi(p-1,1,1)$.
Proof. As for Lemma 4.2, this follows from another result for group cohomology
(namely, Proposition 4.10 below), but we give another proof in a more
representation-theoretic spirit. Note that $\alpha^{-1}x^{-1}e^{*}$ is equal
to $e^{*}$ for any $x\in K$ which is not in the same left $I$-coset as
$w:=\begin{pmatrix}0&1\\\ 1&0\end{pmatrix}$, and vanishes if $x\in wI$. In
particular, $\psi([g,1])=T[\beta^{-1},e^{*}]+[\beta^{-1}w\alpha,e^{*}]$.
Hence, it is enough to show that $[1,e^{*}]+[w\alpha,e^{*}]$ defines a non-zero
element of $\pi(p-1,1,1)$. To do this, we will check that its image under the
map of Theorem 2.2 (ii) is non-zero. This map is defined in [BL94, Section
6.2] and sends $[g,Q]$ to $h\longmapsto Q(x(1:0))$ where we have written
$g^{-1}h=xb$ with $x\in K$ and $b\in B$. The image of
$[1,e^{*}]+[w\alpha,e^{*}]$ maps $w$ to $1$, so in particular it is non-zero
(in fact, as we would expect, it is the constant function with value 1). ∎
### 4.3. The resulting maps on arithmetic cohomology
###### Proposition 4.6.
Let $R=k[\tau,\sigma,\sigma^{-1}]$ be as in Remark 3.2 and $s\in\mathbb{Z}$.
Then, the map $\widetilde{\phi}\colon
H_{*}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee}\otimes\omega^{s})\longrightarrow
H_{*}(\Gamma_{1}(N),\omega^{s}\circ\det)$ induced from the map
$\widetilde{\pi}(p-1,\tau,\sigma,s)_{R}\longrightarrow\widetilde{\pi}(0,\tau,\sigma,s)_{R}$
defined at the end of Section 4.1 is a twist of the dual of the map in [EK03,
Lemma 2]. In particular, if $p\geq 5$, this map is surjective in degree 1.
Proof. By [BDJ10, Corollary 2.11], we may assume $s=0$. The first sentence
follows from the constructions of both maps, and the second follows from
[EK03, Lemma 2]. ∎
As mentioned above, the proof of Lemma 4.2 is a representation-theoretic
analogue of the proof of [EK03, Lemma 2]. The latter can then be recovered by
taking $p$-arithmetic homology of the exact sequence 4.3. To see this, we need
the following result.
###### Lemma 4.7.
If $p\geq 5$, the $p$-arithmetic homology
$H_{1}(\Gamma^{p}_{1}(N),\operatorname{c-Ind}_{{}^{\circ}G}^{G}(k))$ vanishes.
Proof. As $G=\Gamma_{1}^{p}(N){}^{\circ}G$ and
$\Gamma_{1}^{p}(N)\cap{}^{\circ}G=\Gamma_{1}^{p}(N)\cap\text{SL}_{2}(\mathbb{Q})$,
the natural restriction map
$\operatorname{c-Ind}_{{}^{\circ}G}^{G}(k)\longrightarrow\operatorname{c-Ind}_{\Gamma_{1}^{p}(N)\cap\text{SL}_{2}(\mathbb{Q})}^{\Gamma_{1}^{p}(N)}(k)$
is an isomorphism of representations of $\Gamma^{p}_{1}(N)$. The lemma follows
by Shapiro’s lemma and [EK03, Proof of Lemma 1]. ∎
Thus, when $p\geq 5$, we have a commutative diagram: the Shapiro isomorphism
$H_{1}(\Gamma_{1}(N)\cap\Gamma_{0}(p),k)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}H_{1}(\Gamma_{1}(N),k)\oplus H_{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee})$
identifies the natural map
$H_{1}(\Gamma_{1}(N)\cap\Gamma_{0}(p),k)\longrightarrow H_{1}(\Gamma_{1}(N),k)\oplus H_{1}(\Gamma_{1}(N),k)$
with the map $\begin{pmatrix}\text{id}&0\\\ T&\widetilde{\phi}\end{pmatrix}.$
In particular, $\widetilde{\phi}$ is surjective, so we have indeed recovered
[EK03, Lemma 2]. Moreover, we see that the kernel of $\widetilde{\phi}$ is
isomorphic to the kernel of the map
(4.8) $\displaystyle H_{1}(\Gamma_{1}(N)\cap\Gamma_{0}(p),k)\twoheadrightarrow
H_{1}(\Gamma_{1}(N),k)\oplus H_{1}(\Gamma_{1}(N),k).$
This is the homomorphism induced by the maps between open modular curves
determined by $\tau\longmapsto\tau$ and $\tau\longmapsto p\tau$ on the upper-
half plane. A generalisation by Wiles of a lemma of Ribet determines the
Galois representations that contribute to this kernel.
###### Proposition 4.9.
Let $\rho$ be a 2-dimensional odd irreducible representation of
$\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ over $k$ such that $N(\rho)$
divides $N$ and $\rho|_{\mathcal{G}_{p}}\simeq\begin{pmatrix}1&*\\\
0&\omega^{-1}\end{pmatrix}\otimes\mathrm{unr}_{{b}}$. Then, the localisation
at the maximal ideal $\mathfrak{m}_{\rho}$ of $\mathbb{T}(pN)$ corresponding
to $\rho$ of the kernel of 4.8 is non-zero.
Proof. If the extension $*$ in the statement is très ramifiée, then it is
clear that $\rho$ contributes to the kernel as it contributes to the source
but not the target. Therefore, we may assume that the extension is peu
ramifiée. Let $f$ be a normalised newform of weight 2, level $\Gamma_{1}(N)$
and character $\chi$ whose associated Galois representation over $k$ is
isomorphic to $\rho$. If $a_{p}$ is its $p$-th Fourier coefficient, let
$\alpha$ be the root of $x^{2}-a_{p}x+\chi(p)p$ that is a $p$-adic unit (there
should be no confusion with our previous use of the letter $\alpha$). Then
$\alpha^{2}\equiv a_{p}^{2}\equiv\chi(p)\mod p$, the second congruence
following from [Edi92, Theorem 2.5].
There is an eigenclass in $H_{1}(\Gamma_{1}(N),\overline{\mathbb{Z}}_{p})$ for
the Hecke operators away from $N$ with the same system of eigenvalues as $f$
and whose reduction to $H_{1}(\Gamma_{1}(N),k)$ is non-zero. As $\rho$ is
irreducible, the localisation of $H_{1}(\Gamma_{1}(N),k)$ at
$\mathfrak{m}_{\rho}$ is isomorphic to
$(k\otimes_{\mathbb{F}_{p}}J_{1}(N)[p])_{\mathfrak{m}_{\rho}}$, where
$J_{1}(N)$ is the Jacobian of the compactified modular curve of level
$\Gamma_{1}(N)$. Write $P$ for the element of
$(k\otimes_{\mathbb{F}_{p}}J_{1}(N)[p])_{\mathfrak{m}_{\rho}}$ corresponding
to the reduction of the eigenclass above. Similarly,
$H_{1}(\Gamma_{1}(N)\cap\Gamma_{0}(p),k)_{\mathfrak{m}_{\rho}}\simeq(k\otimes_{\mathbb{F}_{p}}J_{1}(N,p)[p])_{\mathfrak{m}_{\rho}}$,
where $J_{1}(N,p)$ is the Jacobian of the compactified modular curve of level
$\Gamma_{1}(N)\cap\Gamma_{0}(p)$. Consider now the image of
$(P,0)=(P,-(a_{p}-\alpha)P)$ under the map
$(k\otimes_{\mathbb{F}_{p}}J_{1}(N)[p])_{\mathfrak{m}_{\rho}}\oplus(k\otimes_{\mathbb{F}_{p}}J_{1}(N)[p])_{\mathfrak{m}_{\rho}}\longrightarrow(k\otimes_{\mathbb{F}_{p}}J_{1}(N,p)[p])_{\mathfrak{m}_{\rho}}$
induced by the morphisms between modular curves determined by
$\tau\longmapsto\tau$ and $\tau\longmapsto p\tau$ on the upper-half plane. The
image of $(P,0)$ is then one of the $p$-stabilisations of $P$: it is an
eigenvector for all Hecke operators $T_{\ell}$ for $\ell\nmid pN$ (with
eigenvalues determined by $\rho$), as well as the operator $U_{p}$ (resp.
$\langle n\rangle$ for any $n\in\mathbb{Z}$ with $n\equiv p\mod N$ and
$n\equiv 1\mod p$) with eigenvalue $\alpha$ (resp. $\chi(p)$). It is also non-
zero (for example, by Proposition 4.10 and its proof below, or by the
injectivity of [Wil95, (2.10)]). Hence, by [Wil95, Lemma 2.3], it defines
(under the isomorphisms above between group homology and $p$-torsion points in
Jacobians) a non-zero element of the localisation at $\mathfrak{m}_{\rho}$ of
the kernel of 4.8. ∎
Finally, we turn to the map from Section 4.2.
###### Proposition 4.10.
Let $R=k[\tau,\sigma,\sigma^{-1}]$ be as in Remark 3.2 and $s\in\mathbb{Z}$.
Let $\rho$ be a 2-dimensional odd irreducible representation. Then, the map
$H_{*}(\Gamma_{1}(N),\omega^{s}\circ\det)_{\rho}\longrightarrow
H_{*}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee}\otimes_{k}\omega^{s})_{\rho}$
induced from the map
$\widetilde{\pi}(0,\tau,\sigma,s)_{R}\longrightarrow\widetilde{\pi}(p-1,\tau,\sigma,s)_{R}$
is injective in degree 1. If $N(\rho)$ divides $N$ and
$\rho|_{\mathcal{G}_{p}}\simeq\begin{pmatrix}1&*\\\
0&\omega^{-1}\end{pmatrix}\otimes\omega^{s}\mathrm{unr}_{{b}}$, then the
cokernel is non-zero.
Proof. The proposition follows from Proposition 4.6 and Proposition 4.9 by
Poincaré duality. Let us spell out the details. Again, we may assume that
$s=0$. Given a $\mathbb{T}(pN)$-module $M$, let us write $M^{*}$ for the base
change of $M$ along the ring isomorphism
$\mathbb{T}(pN)\longrightarrow\mathbb{T}(pN)$ mapping $T_{\ell}\longmapsto
S_{\ell}^{-1}T_{\ell}$ and $S_{\ell}\longmapsto S_{\ell}^{-1}$. Thus, $\rho$
contributes to $M$ if and only if $\rho^{\vee}\otimes\varepsilon^{-1}$
contributes to $M^{*}$. Let us also write $M^{*}_{\rho}:=(M^{*})_{\rho}$. As
$\rho$ is irreducible, there are Poincaré duality isomorphisms
$\displaystyle H_{1}(\Gamma_{1}(N),k)_{\rho}$
$\displaystyle\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}H^{1}(\Gamma_{1}(N),k)^{*}_{\rho}$
$\displaystyle
H_{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee})_{\rho}$
$\displaystyle\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee})^{*}_{\rho}$
$\displaystyle H_{1}(\Gamma_{1}(N)\cap\Gamma_{0}(p),k)_{\rho}$
$\displaystyle\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}H^{1}(\Gamma_{1}(N)\cap\Gamma_{0}(p),k)^{*}_{\rho}.$
On the one hand, the $k$-linear dual of 4.8 is the direct sum of the pullbacks
in cohomology of the maps from the modular curve of level
$\Gamma_{1}(N)\cap\Gamma_{0}(p)$ to the modular curve of level $\Gamma_{1}(N)$
determined by $\tau\longmapsto\tau$ and $\tau\longmapsto p\tau$ on the upper-
half plane. On the other hand, the map 4.4 induces on $p$-arithmetic homology
a map
$H_{1}(\Gamma_{1}(N),k)\oplus H_{1}(\Gamma_{1}(N),k)\longrightarrow
H_{1}(\Gamma_{1}(N)\cap\Gamma_{0}(p),k),$
that is (by construction) the direct sum of the transfer homomorphisms in the
homology of the modular curves above induced by the same pair of maps. Now,
under Poincaré duality, pullbacks in cohomology correspond to transfer
homomorphisms in homology, and thus the two maps correspond (after localising
at $\rho$) under the above Poincaré duality isomorphism.
Fix an isomorphism
$\operatorname{Sym}^{p-1}(k^{2})^{\vee}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{Sym}^{p-1}(k^{2})$.
It also determines an isomorphism between
$\operatorname{Map}(\mathbb{P}^{1}(\mathbb{F}_{p}),k)$ and its dual, and by
Shapiro’s lemma an isomorphism
$H^{1}(\Gamma_{1}(N)\cap\Gamma_{0}(p),k)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}H^{1}(\Gamma_{1}(N),k)\oplus
H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee})$
that is compatible under Poincaré duality with the similar isomorphism for
homology. In conclusion, we have a commutative square
with top row
$H_{1}(\Gamma_{1}(N),k)_{\rho}\oplus H_{1}(\Gamma_{1}(N),k)_{\rho}\longrightarrow H_{1}(\Gamma_{1}(N),k)_{\rho}\oplus H_{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee})_{\rho},$
bottom row
$H^{1}(\Gamma_{1}(N),k)^{*}_{\rho}\oplus H^{1}(\Gamma_{1}(N),k)^{*}_{\rho}\longrightarrow H^{1}(\Gamma_{1}(N),k)^{*}_{\rho}\oplus H^{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee})^{*}_{\rho},$
and whose vertical maps are the Poincaré duality isomorphisms above,
where the top horizontal map is obtained from taking the $p$-arithmetic
homology of 4.4 and the bottom horizontal map is (the Poincaré dual of) the
$k$-linear dual of 4.8. Thus, the map
$H_{1}(\Gamma_{1}(N),k)_{\rho}\longrightarrow
H_{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee})_{\rho}$ from the
statement of the proposition corresponds under these isomorphisms to the dual of the
map from Proposition 4.6. Thus, dualising Proposition 4.6 and Proposition 4.9
gives the result. ∎
## 5\. Proof of Theorem 1.1
In Section 3.2 we proved the generic case of Theorem 1.1; in this section
we deal with the non-generic cases of twists of the trivial and
Steinberg representations. We assume throughout that $p\geq 5$.
### 5.1. The Steinberg case
The case of twists of the Steinberg representation will follow from
Proposition 3.5 and Proposition 4.10 and some formal algebraic manipulations.
Consider a two-term complex $C_{1}\longrightarrow C_{0}$ of representations of
$G$ such that $H_{0}(C_{\bullet})\simeq H_{1}(C_{\bullet})$. Taking the
$p$-arithmetic hyperhomology of this complex induces two spectral sequences
converging to the same abutment, one $E$ with $E^{2}$ page given by
$E^{2}_{i,j}=H_{i}(\Gamma^{p}_{1}(N),H_{j}(C_{\bullet}))$
and the other ${}^{\prime}E$ with ${}^{\prime}E^{1}$ page given by
${}^{\prime}E^{1}_{i,j}=H_{j}(\Gamma^{p}_{1}(N),C_{i}).$
This spectral sequence degenerates at the ${}^{\prime}E^{2}$ page, so the
systems of Hecke eigenvalues appearing in this page are the same as those in
the abutment. A spectral sequence argument shows that these systems of Hecke
eigenvalues are the same as those appearing in $E^{2}$. Indeed, if $i_{0}$ is
the smallest degree for which a fixed system of eigenvalues appears in
$H_{i_{0}}(\Gamma^{p}_{1}(N),H_{0}(C_{\bullet}))$, then the localisation at
this system of eigenvalues of the $E^{2}_{i_{0},0}$ term survives to the
$E^{\infty}$ page of the localised spectral sequence, so the localisation of
the abutment in degree $i_{0}$ will be non-zero.
$H_{j}(\Gamma^{p}_{1}(N),C_{i})$ are finite-dimensional, then so is the
abutment, and the same type of spectral sequence argument shows that so are
the terms in $E^{2}_{i,j}$.
Let us now specialise to our case of interest. The complex we will be
considering is given by the map
$C_{1}:=\pi(0,1,\chi)\longrightarrow\pi(p-1,1,\chi):=C_{0}$
from Lemma 4.5, so that $H_{0}(C_{\bullet})\simeq
H_{1}(C_{\bullet})\simeq\operatorname{St}\otimes\chi$. Thus, the previous
paragraph and Proposition 3.5 show that Theorem 1.1 (i) and (ii) are satisfied
for twists of the Steinberg representation. Let us analyse the systems of
eigenvalues appearing in homology. Proposition 3.5 shows that the only odd
irreducible Galois representations $\rho$ which can contribute to the
${}^{\prime}E^{1}$ page above, and hence to
$H_{*}(\Gamma^{p}_{1}(N),\operatorname{St}\otimes\chi)$, are those such that
$N(\rho)$ divides $N$ and $\rho|_{\mathcal{G}_{p}}$ is an extension of
$\chi\omega^{-1}$ by $\chi$. Fix such a $\rho$ and write
$\chi=\omega^{a}\mathrm{unr}_{{b}}$. By the argument in [Tar22, Proposition
5.8], and using that derived tensor products commute with mapping cones, the
$p$-arithmetic hyperhomology of $C_{\bullet}$ is isomorphic (in the derived
category of $\mathbb{T}(pN)\otimes_{\mathbb{Z}}k[T,S]$-modules) to the derived
tensor product over $k[T,S]$ of $k[T,S]/(T-b,S-b^{2})$ and
$\displaystyle\left[C_{\bullet}(\Gamma_{1}^{p}(N),\operatorname{c-Ind}_{K}^{G}(\omega^{a}))\longrightarrow
C_{\bullet}(\Gamma_{1}^{p}(N),\operatorname{c-Ind}_{K}^{G}(\operatorname{Sym}^{p-1}(k^{2})^{\vee}\otimes\omega^{a}))\right]$
$\displaystyle\simeq\left[C_{\bullet}(\Gamma_{1}(N),\omega^{a}\circ\det)\longrightarrow
C_{\bullet}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee}\otimes\omega^{a})\right],$
where $\left[{-}\right]$ denotes mapping cones and the isomorphism follows
from Shapiro’s lemma [Tar22, Proposition 5.3]. Using that $\rho$ contributes
to arithmetic homology only in degree 1, we see that the localisation of the
above hyperhomology at $\rho$ is
$\displaystyle\frac{k[T,S]}{(T-b,S-b^{2})}\otimes^{\mathbb{L}}_{k[T,S]}\left[H_{1}(\Gamma_{1}(N),\omega^{a}\circ\det)_{\rho}[1]\longrightarrow
H_{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee}\otimes\omega^{a})_{\rho}[1]\right]$
$\displaystyle\simeq\frac{k[T,S]}{(T-b,S-b^{2})}\otimes^{\mathbb{L}}_{k[T,S]}U_{a,\rho}[1]$
where the map in the first line is that of Proposition 4.10 and
$U_{a}=\operatorname{coker}\left(H_{1}(\Gamma_{1}(N),\omega^{a}\circ\det)\longrightarrow
H_{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee}\otimes\omega^{a})\right).$
In conclusion, the abutment of the localised spectral sequences $E$ and
${}^{\prime}E$ is given in degree $i$ by
$\text{Tor}^{k[T,S]}_{i-1}\left(\frac{k[T,S]}{(T-b,S-b^{2})},U_{a,\rho}\right).$
In particular, analysing the $E^{2}$ page shows that
$\displaystyle H_{0}(\Gamma^{p}_{1}(N),\operatorname{St}\otimes\chi)_{\rho}$
$\displaystyle=H_{3}(\Gamma^{p}_{1}(N),\operatorname{St}\otimes\chi)_{\rho}=0,$
$\displaystyle H_{1}(\Gamma^{p}_{1}(N),\operatorname{St}\otimes\chi)_{\rho}$
$\displaystyle\simeq\frac{k[T,S]}{(T-b,S-b^{2})}\otimes_{k[T,S]}U_{a,\rho},$
$\displaystyle H_{2}(\Gamma^{p}_{1}(N),\operatorname{St}\otimes\chi)_{\rho}$
$\displaystyle\simeq\text{Hom}_{k[T,S]}\left(\frac{k[T,S]}{(T-b,S-b^{2})},U_{a,\rho}\right).$
Moreover, Proposition 4.10 shows that the homology in degrees 1 and 2 is
always non-zero, since $\rho$ contributes to $U_{a}$ and can only contribute
to its $(T=b,S=b^{2})$-eigenspace by [Edi92, Theorem 2.5]. Together with
Proposition 2.4, this completes the proof of Theorem 1.1 (iii) in this case.
### 5.2. The trivial case
The case of twists of the trivial representation is completely analogous to
the case of twists of the Steinberg representation, this time applying the
above analysis to
$C_{1}:=\pi(p-1,1,\chi)\longrightarrow\pi(0,1,\chi):=C_{0},$
where the map is that of Lemma 4.2. It satisfies $H_{0}(C_{\bullet})\simeq
H_{1}(C_{\bullet})\simeq\chi\circ\det$, so Theorem 1.1 (i) and (ii) hold for
these representations. The corresponding spectral sequences and Proposition
4.6 show that if $\chi=\omega^{a}\mathrm{unr}_{{b}}$ and $\rho$ is an odd
irreducible representation of $\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$,
$\displaystyle H_{0}(\Gamma^{p}_{1}(N),\chi\circ\det)_{\rho}$
$\displaystyle=H_{1}(\Gamma^{p}_{1}(N),\chi\circ\det)_{\rho}=0,$
$\displaystyle H_{2}(\Gamma^{p}_{1}(N),\chi\circ\det)_{\rho}$
$\displaystyle\simeq\frac{k[T,S]}{(T-b,S-b^{2})}\otimes_{k[T,S]}V_{a,\rho},$
$\displaystyle H_{3}(\Gamma^{p}_{1}(N),\chi\circ\det)_{\rho}$
$\displaystyle\simeq\text{Hom}_{k[T,S]}\left(\frac{k[T,S]}{(T-b,S-b^{2})},V_{a,\rho}\right),$
where
$V_{a}:=\ker\left(H_{1}(\Gamma_{1}(N),\operatorname{Sym}^{p-1}(k^{2})^{\vee}\otimes\omega^{a})\longrightarrow
H_{1}(\Gamma_{1}(N),\omega^{a}\circ\det)\right).$
By Proposition 4.9 (and [BDJ10, Corollary 2.11]), the odd irreducible Galois
representations $\rho$ contributing to the $(T=b,S=b^{2})$-eigenspace of
$V_{a}$ are those such that $N(\rho)$ divides $N$ and
$\rho|_{\mathcal{G}_{p}}$ is an extension of $\chi\omega^{-1}$ by $\chi$.
These representations therefore appear in the $p$-arithmetic homology of
$\chi\circ\det$ exactly in degrees 2 and 3. There are no Galois
representations satisfying condition (a) of Theorem 1.1 (iii), as finite-
dimensional representations never appear in the socle of representations in
the image of the mod $p$ local Langlands correspondence for
$\text{GL}_{2}(\mathbb{Q}_{p})$. Therefore, Theorem 1.1 (iii) holds, which
completes the proof of Theorem 1.1.
###### Remark 5.1.
In the proof of Proposition 4.10 we showed using Poincaré duality for
arithmetic cohomology that $U_{a}=(V_{-a}^{\vee})^{*}$ (with the notation
introduced in that proof). The Poincaré duality isomorphisms in that proof
intertwine $T$ with $S^{-1}T$ and $S$ with $S^{-1}$. This, together with the
above computations, implies that for $\rho$ as above there are “Poincaré
duality” isomorphisms
$\displaystyle H^{i}(\Gamma_{1}^{p}(N),\chi\circ\det)_{\rho}$
$\displaystyle\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}H_{4-i}(\Gamma_{1}^{p}(N),\operatorname{St}\otimes\chi)^{*}_{\rho}.$
###### Remark 5.2.
Assume that $\chi$ is the trivial character for simplicity. Then, the fact
that Galois representations as above contribute to
$H_{*}(\Gamma_{1}^{p}(N),k)$ and $H^{*}(\Gamma_{1}^{p}(N),k)$ but not in
degree 1 (unlike in the other cases) is related to the fact that, if $\pi$ is
the smooth representation of $\text{GL}_{2}(\mathbb{Q}_{p})$ corresponding to
$\rho|_{\mathcal{G}_{p}}$ under the mod $p$ Langlands correspondence, then
$\text{Hom}_{k[G]}(k,\pi)=0$, but $\text{Ext}^{1}_{k[G]}(k,\pi)\neq 0$ (at
least when $\rho|_{\mathcal{G}_{p}}$ is non-split). This can be made precise
by relating $p$-arithmetic cohomology to completed cohomology and taking into
account Emerton’s local-global compatibility results [Eme], but we do not
pursue this here.
## References
* [BDJ10] Kevin Buzzard, Fred Diamond, and Frazer Jarvis. On Serre’s conjecture for mod $\ell$ Galois representations over totally real fields. Duke Math. J., 155(1):105–161, 2010.
* [BL94] L. Barthel and R. Livné. Irreducible modular representations of ${\rm GL}_{2}$ of a local field. Duke Math. J., 75(2):261–292, 1994.
* [Bre03] Christophe Breuil. Sur quelques représentations modulaires et $p$-adiques de ${\rm GL}_{2}(\mathbf{Q}_{p})$. I. Compositio Math., 138(2):165–188, 2003.
* [CEG+18] Ana Caraiani, Matthew Emerton, Toby Gee, David Geraghty, Vytautas Paškūnas, and Sug Woo Shin. Patching and the $p$-adic Langlands program for ${\rm GL}_{2}(\mathbb{Q}_{p})$. Compos. Math., 154(3):503–548, 2018.
* [Col10] Pierre Colmez. Représentations de ${\rm GL}_{2}(\mathbf{Q}_{p})$ et $(\phi,\Gamma)$-modules. Astérisque, (330):281–509, 2010.
* [Edi92] Bas Edixhoven. The weight in Serre's conjectures on modular forms. Invent. Math., 109(1):563–594, 1992.
* [EK03] Bas Edixhoven and Chandrashekhar Khare. Hasse invariant and group cohomology. Doc. Math., 8:43–50, 2003.
* [Eme] Matthew Emerton. Local-global compatibility in the $p$-adic Langlands programme for ${\rm GL}_{2/\mathbb{Q}}$.
* [Gro90] Benedict H. Gross. A tameness criterion for Galois representations associated to modular forms (mod $p$). Duke Math. J., 61(2):445–517, 1990.
* [KS12] Jan Kohlhaase and Benjamin Schraen. Homological vanishing theorems for locally analytic representations. Math. Ann., 353(1):219–258, 2012.
* [Paš13] Vytautas Paškūnas. The image of Colmez’s Montreal functor. Publ. Math. Inst. Hautes Études Sci., 118:1–191, 2013.
* [Ser80] Jean-Pierre Serre. Trees. Springer-Verlag, Berlin-New York, 1980. Translated from the French by John Stillwell.
* [Ser87] Jean-Pierre Serre. Sur les représentations modulaires de degré $2$ de $\mathrm{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$. Duke Math. J., 54(1):179–230, 1987.
* [Tar22] Guillem Tarrach. $S$-arithmetic (co)homology and $p$-adic automorphic forms, 2022. Preprint available at https://arxiv.org/abs/2207.04554v1.
* [Wil95] Andrew Wiles. Modular elliptic curves and Fermat’s last theorem. Ann. of Math. (2), 141(3):443–551, 1995.
# Perturbative computations of neutron-proton scattering observables using
renormalization-group invariant $\chi$EFT up to N3LO
Oliver Thim, Andreas Ekström, and Christian Forssén
Department of Physics, Chalmers University of Technology, SE-412 96, Göteborg,
Sweden
###### Abstract
We predict neutron-proton scattering cross-sections and polarization
observables up to next-to-next-to-next-to-leading order in a renormalization-
group invariant description of the strong nucleon-nucleon interaction. Low-
energy constants are calibrated to phase shifts, sub-leading corrections are
computed in distorted-wave perturbation theory, and we employ momentum-cutoff
values of 500 and 2500 MeV. We find a steady order-by-order convergence and
realistic descriptions of scattering observables up to a laboratory scattering
energy of approximately 100 MeV. We also compare perturbative and non-
perturbative calculations for phase shifts and cross sections and quantify how
unitarity is gradually restored at higher orders. The perturbative approach
offers an important diagnostic tool for any power counting and our results
suggest that the breakdown scale in chiral effective field theory might be
significantly lower than estimates obtained in non-perturbative calculations.
## I Introduction
Nuclear potentials used in ab initio [1] computations of atomic nuclei [2] are
almost exclusively derived using chiral effective field theory ($\chi$EFT) [3,
4, 5] based on Weinberg power counting (WPC) [6, 7]. Such potentials [8, 9,
10, 11, 12, 13, 14], now derived up to the fifth chiral order [15, 16, 17],
have furnished a wide range of structure and reaction predictions across the
nuclear chart [18, 19], but at the same time they grapple with the
renormalization challenge inherent to chiral nuclear forces [20]. Indeed,
numerical studies [21] of the nucleon-nucleon scattering amplitude have shown
that, already at leading order (LO) in WPC, the contact operators accounting
for unresolved short-range physics are not sufficient to renormalize the
singular nature [22] of the one-pion-exchange potential. Consequently, LO
predictions based on WPC exhibit an unphysical dependence on the cutoff
$\Lambda$ that regularizes the amount of high-momentum (or short-range)
physics that is resolved.
Several power countings (PCs) leading to renormalization-group (RG) invariant nucleon-nucleon
amplitudes have been proposed in the past two decades [23, 24, 25, 26, 27, 28,
29, 30, 31, 32, 33, 34, 35]. They can collectively be referred to as modified
Weinberg power countings (MWPCs). However, we typically know very little about
their predictive power for nuclei beyond the lightest-mass systems [36]. The
one exception is the recent study by Yang _et al._ [37] that presented the
first ab initio predictions of binding energies in 4He, 6Li, and 16O using
$\chi$EFT potentials up to next-to-leading order (NLO) in several different
MWPCs. The calculations in that work revealed an $\alpha$-decay instability in
the ground states in 6Li and 16O. Subsequent analyses brought forward probable
causes for this instability as originating in ($i$) overfitting of the low-
energy constants (LECs) that parameterize the short-range interactions [38]
and ($ii$) underestimating the importance of few-nucleon forces [39] at LO in
MWPC.
The notable absence of MWPC-based predictions for heavier-mass nuclei is
likely due to a variety of factors. Firstly, potentials based on WPC are
easier to implement in present ab initio computer codes as one
straightforwardly sum leading and sub-leading corrections to the potential
before solving the Schrödinger equation, whereas in MWPC sub-leading
corrections should be added in perturbation theory [40]. Secondly, there
exists several widely available computer codes for evaluating matrix elements
of chiral nucleon-nucleon and three-nucleon potentials, as well as currents,
to very high orders in WPC. Finally, it is currently prohibitively costly to
converge ab initio predictions of nuclear properties at the large values of
the cutoff required for analyzing RG-invariance in MWPC.
In light of these facts we certainly see the utility of WPC, which might
provide a consistent EFT framework provided that renormalization is
interpreted in a fashion where the cutoff never exceeds the order of the
breakdown scale [41, 42, 43, 44]. However, the existence of MWPCs, where
renormalization does allow for the cutoff to be taken far beyond the breakdown
scale, calls for a continued effort. Given the fundamental importance of RG-
invariance it should be seriously explored whether MWPC approaches can furnish
a realistic and predictive framework for ab initio nuclear physics.
In this paper, we contribute to the meager list of quantitative predictions
grounded in RG-invariant formulations of $\chi$EFT. To the best of our
knowledge, and somewhat surprisingly, nucleon-nucleon scattering observables
have not been computed in MWPC beyond LO [41]. Here, we present predictions
for integrated and differential cross-sections, as well as polarization
observables, for elastic neutron-proton ($np$) scattering up to next-to-next-
to-next-to-leading order (N3LO) in the MWPC of Long and Yang [30, 45, 32],
where higher-order corrections to the potential are treated perturbatively
[21, 40]. This work serves as an important step in the development and
uncertainty quantification of any model of the nuclear interaction [46, 47,
48, 49, 50].
In Section II we review how to construct potentials in the PC of Long and
Yang, describe how to numerically compute the scattering amplitude in
distorted-wave perturbation theory, and explain how we calibrated LEC values.
In Section III we present results for scattering observables up to N3LO, and
we summarize and conclude in Section IV.
## II Formalism
In $\chi$EFT, scattering amplitudes are expanded in a dimensionless ratio
$(Q/\Lambda_{b})^{\nu}$. Here, $\nu$ indicates the chiral order, $\Lambda_{b}$
is the underlying high-momentum scale of $\chi$EFT, and $Q$ denotes the
relevant low-energy scale. For nucleon-nucleon scattering, we assume
$Q\approx\text{max}(p,m_{\pi})$, where $p$ is the relative momentum in the
center of mass (c.m.) frame of the interacting nucleons, and the pion mass
$m_{\pi}$ is the relevant low-energy mass scale. In this work we adopt a
nomenclature where LO scales as $\left(Q/\Lambda_{b}\right)^{0}$ while sub-
leading orders are denoted by their relative scaling to LO. As such, NLO
scales as $\left(Q/\Lambda_{b}\right)^{1}$, next-to-next-to-leading order
(N2LO) as $\left(Q/\Lambda_{b}\right)^{2}$ and so on. In what follows, we
summarize relevant details regarding the MWPC that we use in this work, define
the potential terms $V^{(\nu)}$ entering at each chiral order, and explain how
we performed the perturbative calculations of scattering amplitudes.
### II.1 The nucleon-nucleon interaction potential in the Long and Yang power
counting
We employ the MWPC of Long and Yang [30, 32, 51, 40], which adheres to the
following overarching principles:
* •
The chiral order of a pion-exchange diagram, along with the necessary
counterterms for renormalizing pion loops, is determined by the naive
dimensional analysis (NDA) of its non-analytic part. This follows the same
principle as in WPC.
* •
Counterterms are promoted to lower chiral order only when needed to fulfill
the requirement of RG-invariance.
* •
All corrections to the potential beyond LO are included perturbatively to
obtain RG-invariant amplitudes.
One-pion exchange (OPE) enters at LO in $\chi$EFT and must be treated non-
perturbatively, at least in the low partial waves where it is sufficiently
strong. The singular nature of OPE is increasingly alleviated by the
centrifugal barrier. Thus, at some point in the partial-wave expansion there
is sufficient angular momentum $\ell$ to furnish a perturbative treatment of
OPE [29, 52, 53] and consider it sub-leading.
At LO in the MWPC by Long and Yang, the OPE potential $V^{(0)}_{1\pi}$ is
considered non-perturbative in the ${}^{1}S_{0}$, ${}^{3}P_{0}$,
${}^{1}P_{1}$, ${}^{3}P_{1}$, ${}^{3}S_{1}\mathrm{-}^{3}D_{1}$ and
${}^{3}P_{2}\mathrm{-}^{3}F_{2}$ channels. OPE is attractive in ${}^{3}P_{0}$
and ${}^{3}P_{2}$. Renormalization requires promotion of counterterms to the
corresponding channels of the LO contact potential $V^{(0)}_{\mathrm{ct}}$
[21], thereby extending it beyond the canonical non-derivative ${}^{1}S_{0}$
and ${}^{3}S_{1}$ counterterms. At sub-leading orders ($\nu>0$), two pion-
exchange, $V_{2\pi}^{(\nu)}$, as well as higher-order contact potentials,
$V_{\text{ct}}^{(\nu)}$, enter perturbatively according to the principles
presented in the beginning of this subsection. The contributions to the
potential up to N3LO in the ${}^{1}S_{0}$, ${}^{3}P_{0}$, ${}^{1}P_{1}$,
${}^{3}P_{1}$, ${}^{3}S_{1}\mathrm{-}^{3}D_{1}$ and
${}^{3}P_{2}\mathrm{-}^{3}F_{2}$ channels are listed in the third column of
Table 1, labeled “non-perturbative (at LO) channels”.
See Appendix A for detailed expressions of the potentials appearing in Table
1. Following Long and Yang, we do not consider any higher-order corrections to
OPE and employ potential expressions where pion loops are treated in
dimensional regularization. For the sub-leading two-pion exchange potential
$V^{(3)}_{2\pi}$ we use pion-nucleon LECs $c_{1},c_{3},c_{4}$ with central
values from the Roy-Steiner analysis in Ref. [54].
Table 1: Potential contributions at each order in channels where OPE is treated non-perturbatively (column three) and perturbatively (column four). Detailed expressions for the potentials can be found in Appendix A.

order | potential | non-perturbative (at LO) channels | purely perturbative channels
---|---|---|---
LO | $V^{(0)}$ | $V^{(0)}_{1\pi}+V^{(0)}_{\mathrm{ct}}$ | 0
NLO | $V^{(1)}$ | $V^{(1)}_{\mathrm{ct}}$ | $V^{(0)}_{1\pi}$
N2LO | $V^{(2)}$ | $V^{(2)}_{2\pi}+V^{(2)}_{\mathrm{ct}}$ | 0
N3LO | $V^{(3)}$ | $V^{(3)}_{2\pi}+V^{(3)}_{\mathrm{ct}}$ | $V^{(2)}_{2\pi}$
Let us now turn to the channels with $\ell>1$ (and without any coupling to
$\ell\leq 1$). For these channels we consider OPE to be perturbative and
consequently set it to zero at LO. We follow Ref. [52] and suppress two-pion
exchanges by the same chiral power as OPE. Up to N3LO, there are no contact
potentials in the perturbative channels, and the contributions are listed in
the last column of Table 1. Other suggestions for the PC in perturbative
channels are discussed by, e.g., Pavón Valderrama _et al._ [27].
### II.2 A perturbative treatment of nucleon-nucleon scattering amplitudes
The perturbative computation of nucleon-nucleon scattering amplitudes proceeds
in two steps. First, we solve the Lippmann-Schwinger (LS) equation for the LO
amplitude in the ${}^{1}S_{0}$, ${}^{3}P_{0}$, ${}^{1}P_{1}$, ${}^{3}P_{1}$,
${}^{3}S_{1}\mathrm{-}^{3}D_{1}$ and ${}^{3}P_{2}\mathrm{-}^{3}F_{2}$
channels. Note that the LO potential is identically zero in all other
channels. Second, we perturbatively include higher-order potential corrections
to the amplitude, accounting for the distortion due to the non-perturbative LO
solution where necessary. In the following, we explain this procedure in
detail, see also Refs. [53, 32, 30].
The neutron-proton Hamiltonian in the center-of-mass (c.m.) frame can be
written
$H=\frac{\bm{p}^{2}}{m_{N}}+V_{\mathrm{I}}+V_{\mathrm{II}},$ (1)
where $\bm{p}$ denotes the c.m. momentum and $m_{N}=2m_{n}m_{p}/(m_{n}+m_{p})$
the nucleon mass. The projectile energy in the laboratory frame will be
denoted $T_{\mathrm{lab}}$. Furthermore, $V_{\mathrm{I}}$ denotes the LO
potential, and $V_{\mathrm{II}}$ denotes the sum of all sub-leading
potentials, which formally can be infinitely many. The PC helps us identify
important and less important contributions to the scattering amplitude $T$ and
therefore facilitates a meaningful truncation of $V_{\mathrm{II}}$. With the
notation for the chiral potentials $V^{(\nu)}$ introduced in Section II.1,
$V_{\mathrm{I}}$ and $V_{\mathrm{II}}$ read
$\displaystyle V_{\mathrm{I}}$ $\displaystyle=V^{(0)},$ (2) $\displaystyle
V_{\mathrm{II}}$ $\displaystyle=\sum_{\nu=1}^{\infty}V^{(\nu)}.$ (3)
The LO amplitude, $T^{(0)}$, is obtained (non-perturbatively) by solving the
LS-equation
$T^{(0)}=V^{(0)}+V^{(0)}G^{+}_{0}T^{(0)},$ (4)
where the free resolvent is given by
$G^{+}_{0}=\left(E-H_{0}+i\epsilon\right)^{-1},$ (5)
and $H_{0}=\bm{p}^{2}/m_{N}$. We use a notation where we suppress the explicit
dependence on the c.m. scattering energy, $E$, for the resolvents and
amplitudes.
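To make this step concrete, here is a minimal Python sketch that solves Eq. 4 for one uncoupled partial wave by Gauss-Legendre quadrature with the standard principal-value subtraction, in the normalization $T(p^{\prime},p)=V(p^{\prime},p)+\int_{0}^{\infty}dq\,q^{2}\,V(p^{\prime},q)\,\frac{m_{N}}{p_{0}^{2}-q^{2}+i\epsilon}\,T(q,p)$. It is an illustration only: the potential function V (assumed already regulated), the mesh parameters, and the quoted nucleon-mass value are placeholders, not the code used for the results in this paper.

import numpy as np

def solve_ls_onshell(V, p0, mN=938.9, qmax=8000.0, n=200):
    # Gauss-Legendre mesh on [0, qmax] (momenta in MeV); the nodes are
    # interior and generically avoid the on-shell point p0
    x, w = np.polynomial.legendre.leggauss(n)
    q, wq = 0.5 * qmax * (x + 1.0), 0.5 * qmax * w
    k = np.append(q, p0)                     # mesh plus on-shell point

    # complex quadrature weights representing G0 = mN/(p0^2 - q^2 + i eps),
    # with the pole handled by principal-value subtraction
    u = np.empty(n + 1, dtype=complex)
    u[:n] = wq * q**2 * mN / (p0**2 - q**2)
    u[n] = (mN * p0**2 * (np.log((qmax + p0) / (qmax - p0)) / (2.0 * p0)
                          - np.sum(wq / (p0**2 - q**2)))
            - 0.5j * np.pi * mN * p0)

    # Nystrom discretization of T = V + V G0 T  ->  (1 - V G0) T = V
    Vm = np.array([[V(a, b) for b in k] for a in k], dtype=complex)
    T = np.linalg.solve(np.eye(n + 1) - Vm * u, Vm)
    return T[n, n], u                        # on-shell amplitude and weights

Coupled channels are handled in the same way by stacking the $\ell^{\prime}\ell$ blocks of Eq. 15 into a single matrix.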
In WPC, higher-order corrections are accounted for non-perturbatively by
solving the LS-equation for the sum $V_{\mathrm{I}}+V_{\mathrm{II}}$. In MWPC,
however, potentials beyond LO, i.e., the corrections ($V_{\mathrm{II}}$),
enter in perturbation theory to obtain RG invariant results [40]. Indeed,
higher-order corrections should be amenable to a perturbative treatment. If
not, they are non-perturbative in nature and belong at LO.
Distorted-wave perturbation theory has been applied to compute scattering
amplitudes in several previous studies, see, e.g., Refs. [28, 53, 30, 32, 51,
55]. The perturbation series for the scattering amplitude can be derived and
expressed in various ways. The one that we find most instructive follows Refs.
[56, 57]. First, using the two-potential trick, the $T$-operator for the
Hamiltonian in Eq. 1 is written in the form
$T=T^{(0)}+\Omega^{\dagger}_{-}V_{\mathrm{II}}\sum_{n=0}^{\infty}\left(G^{+}_{1}V_{\mathrm{II}}\right)^{n}\Omega_{+},$
(6)
where the Møller wave operators are defined as
$\displaystyle\Omega_{+}$ $\displaystyle=\mathds{1}+G^{+}_{0}T^{(0)},$ (7)
$\displaystyle\Omega^{\dagger}_{-}$
$\displaystyle=\mathds{1}+T^{(0)}G^{+}_{0},$ (8)
and the full LO resolvent reads
$G^{+}_{1}=\Omega_{+}G^{+}_{0}.$ (9)
Inserting Eq. 3 in Eq. 6 gives for the full $T$-operator
$T=T^{(0)}+\Omega^{\dagger}_{-}\left[\sum_{\nu=1}^{\infty}V^{(\nu)}\right]\sum_{n=0}^{\infty}\left[G^{+}_{1}\left(\sum_{\nu^{\prime}=1}^{\infty}V^{(\nu^{\prime})}\right)\right]^{n}\Omega_{+}.$
(10)
Expanding both sums and organizing terms according to their chiral orders
$\nu$ yields the expressions for the first-, second-, and third-order
corrections to the LO amplitude as
$\displaystyle T^{(1)}$ $\displaystyle=\Omega^{\dagger}_{-}V^{(1)}\Omega_{+}$
(11) $\displaystyle T^{(2)}$
$\displaystyle=\Omega^{\dagger}_{-}\left(V^{(2)}+V^{(1)}G^{+}_{1}V^{(1)}\right)\Omega_{+}$
(12) $\displaystyle T^{(3)}$
$\displaystyle=\Omega^{\dagger}_{-}\Big{(}V^{(3)}+V^{(2)}G^{+}_{1}V^{(1)}+V^{(1)}G^{+}_{1}V^{(2)}+$
$\displaystyle+V^{(1)}G^{+}_{1}V^{(1)}G^{+}_{1}V^{(1)}\Big{)}\Omega_{+}.$ (13)
A diagrammatic representation of amplitudes up to NLO is presented in Fig. 1.
Note that the full amplitude at, e.g., third order (N3LO) is given by the sum
$T^{(0)}+T^{(1)}+T^{(2)}+T^{(3)}$. Clearly, the distorted-wave corrections in
Eqs. 11, 12 and 13 simplify dramatically when applied to the channels where
OPE is perturbative such that $T^{(0)}=0$, $\Omega_{+}=\mathds{1}$, and
$\Omega^{\dagger}_{-}=\mathds{1}$. In these channels we therefore recover
ordinary perturbation theory.
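Given the LO amplitude and the weights u from a solver like the one sketched above, the distorted-wave corrections of Eqs. 11, 12 and 13 reduce to matrix algebra: in that discretization an operator product $AG^{+}_{0}B$ becomes A @ np.diag(u) @ B. A minimal sketch, with the potential matrices V1, V2, V3 as placeholder arrays on the same mesh:

import numpy as np

def dwpt_amplitudes(T0, V1, V2, V3, u):
    # T0, V1, V2, V3: kernels on the mesh (including the on-shell point)
    U, I = np.diag(u), np.eye(len(u))
    Om = I + T0 @ U      # Omega_-^dagger = 1 + T0 G0, Eq. (8)
    Op = I + U @ T0      # Omega_+       = 1 + G0 T0, Eq. (7)
    G1 = U + U @ T0 @ U  # G1 = Omega_+ G0 = G0 + G0 T0 G0, Eq. (9)
    T1 = Om @ V1 @ Op                                    # Eq. (11)
    T2 = Om @ (V2 + V1 @ G1 @ V1) @ Op                   # Eq. (12)
    T3 = Om @ (V3 + V2 @ G1 @ V1 + V1 @ G1 @ V2
               + V1 @ G1 @ V1 @ G1 @ V1) @ Op            # Eq. (13)
    return T1, T2, T3

Computing and storing Om, Op, and G1 once, as discussed in Section II.3, keeps the number of matrix multiplications small.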
Figure 1: Diagrammatic representation of the LO neutron-proton amplitude
$T^{(0)}$ (hatched oval), obtained by solving the LS-equation, as well as the
first correction $T^{(1)}$ given in Eq. 11. The grey (black) solid blobs
represent the potentials $V^{(0)}$ ($V^{(1)}$).
The distorted-wave corrections to the amplitudes $T^{(\nu>0)}$ can
alternatively be obtained as solutions to a set of modified LS-type equations,
discussed in more detail in Refs. [58, 59], which read
$T^{(\nu)}=V^{(\nu)}+\sum_{i=1}^{\nu}V^{(i)}G^{+}_{0}T^{(\nu-i)}+V^{(0)}G^{+}_{0}T^{(\nu)}.$
(14)
We use this formulation to verify our numerical implementation of Eqs. 11, 12
and 13. We note that the alternative approach of modified LS-equations
requires a matrix inversion at each order, whereas the distorted-wave approach
requires matrix multiplications only. However, the number of matrix
multiplications increases rapidly as the chiral order is increased. For
example, at $\nu=10$, Eqs. 11, 12 and 13 require an order of magnitude more
matrix multiplications than the modified LS equations in Eq. 14. In this study
we only go to $\nu=3$ for which the number of matrix multiplications of the
two formulations are similar.
### II.3 Numerical implementation
We project potentials and amplitudes to a partial-wave basis of states
$\ket{p,\ell,s,j}$ following the prescription in Ref. [60]111Note the mistake
in Eq. (4.22) pointed out in Ref. [4].. Here, $p=|\bm{p}|$, while $s,\ell,j$
denote the quantum numbers of the two-nucleon spin, orbital angular momentum,
and total angular momentum, respectively. Partial-wave matrix elements are
denoted by
$V^{js}_{\ell^{\prime}\ell}(p^{\prime},p)=\braket{p^{\prime},\ell^{\prime},s,j}{V}{p,\ell,s,j},$
(15)
where the conserved quantum numbers $s$ and $j$ are given as superscripts.
In the LS-equation, as well as in Eqs. 11, 12 and 13, infinite momentum
integrals appear and all potentials are regulated according to
$V^{js}_{\ell^{\prime}\ell}(p^{\prime},p)\to f_{\Lambda}(p^{\prime})\
V^{js}_{\ell^{\prime}\ell}(p^{\prime},p)f_{\Lambda}(p),$ (16)
where we choose a regulator function
$f_{\Lambda}(p)=\exp\left[-\frac{p^{6}}{\Lambda^{6}}\right]$ (17)
at all orders up to N3LO. In the calibration of the LECs, we use the cutoff
values $\Lambda=500$ MeV and $\Lambda=2500$ MeV.
Using Eqs. 7, 8 and 9, the terms in Eqs. 11, 12 and 13 can be expanded to sums
of products of the form $A_{1}G^{+}_{0}A_{2}$, of varying length. The
$A_{i}$’s are either $T^{(0)}$ or $V^{(\nu)}$ with $\nu=1,2,3$. For example,
the NLO correction in Eq. 11 reads
$\displaystyle T^{(1)}$
$\displaystyle=V^{(1)}+T^{(0)}G^{+}_{0}V^{(1)}+V^{(1)}G^{+}_{0}T^{(0)}$
$\displaystyle+T^{(0)}G^{+}_{0}V^{(1)}G^{+}_{0}T^{(0)}.$ (18)
Clearly, the fundamental matrix elements that need to be evaluated at sub-
leading orders are always of the form
$\bra{p^{\prime},\ell^{\prime}}A_{1}G^{+}_{0}A_{2}\ket{p,\ell},$ (19)
where we omit the $s$ and $j$ quantum numbers that are identical for the ket
and the bra. In Appendix B we show how to evaluate Eq. 19 using ordinary
matrix products and Gauss-Legendre quadrature. Longer products, e.g., of the
form $A_{1}G^{+}_{0}A_{2}G^{+}_{0}A_{3}$, are straightforwardly reduced to the
form in Eq. 19 by the associativity of matrix products. Knowing this, and the
distributive property with respect to addition, we can also reduce the
computational complexity of evaluating the perturbation series for $T$ by
computing and storing the composite operators $\Omega^{\dagger}_{-}$,
$\Omega_{+}$, and $G^{+}_{1}$.
For separable potentials of Yamaguchi type [61], both the distorted-wave
series and the LS equation can be solved analytically. We exploit this to
verify our numerical implementation and to inspect the stability of the
perturbative expansion. Numerical and analytical results for semi-realistic
and separable Yamaguchi potentials in the ${}^{1}S_{0}$ and
${}^{3}S_{1}\mathrm{-}^{3}D_{1}$ channels agree to at least single precision.
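For reference, the closed form underlying this check is standard: for a rank-one separable potential $V(p^{\prime},p)=\lambda\,g(p^{\prime})g(p)$ with Yamaguchi form factor $g(p)=(p^{2}+\beta^{2})^{-1}$ (the strength $\lambda$ and range $\beta$ are free parameters), the LS-equation (4) gives
$T(p^{\prime},p)=\frac{\lambda\,g(p^{\prime})g(p)}{1-\lambda\displaystyle\int_{0}^{\infty}dq\,q^{2}\,\frac{m_{N}\,g(q)^{2}}{p_{0}^{2}-q^{2}+i\epsilon}},$
and when the sub-leading potentials share the same form factor, every product in Eqs. 11, 12 and 13 collapses to powers of the same bubble integral, so each $T^{(\nu)}$ is likewise available in closed form.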
### II.4 Calibrating the low-energy constants
Our focus in this work is to predict and analyze the description of $np$
scattering observables in MWPC and specifically the PC of Long and Yang. To
enable quantitative calculations, we calibrate the values of the unknown LECs
using the same approach as Long and Yang, i.e., by tuning the contact LECs to
achieve a good reproduction of the low-energy Nijmegen phase shifts [62] at
selected scattering energies.
Before discussing the details of the calibration, it is important to remember
that the order-by-order amplitudes
$T=T^{(0)}+T^{(1)}+T^{(2)}+\ldots$ (20)
are computed perturbatively and their sum is unitary only up to perturbative
corrections. To obtain real-valued phase shifts in the calibration of the LECs
we must compute phase shifts perturbatively by expanding the $np$ $S$-matrix
and matching to chiral orders, see Appendix C for details. If one instead
solves for the partial-wave $S$-matrix non-perturbatively from the order-by-
order sum of $T^{(\nu)}$ amplitudes, the corresponding phase shifts will have
a non-zero imaginary part that increases with scattering energy. Indeed,
Figure 2 shows phase shifts computed perturbatively and non-perturbatively in
the two channels ${}^{1}D_{2}$ and ${}^{3}D_{2}$. There are no LECs that need
to be calibrated in these channels at the orders considered in this work. The
imaginary part of the non-perturbative phase shift increases with scattering
energy. As that happens, the real part of the phase shift and the (real-
valued) perturbative phase shift differ progressively. This is consistent with
observations in Ref. [63].
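Schematically, for an uncoupled partial wave the matching amounts to writing $S=e^{2i\delta}$ with $\delta=\delta^{(0)}+\delta^{(1)}+\delta^{(2)}+\dots$ and collecting chiral orders on both sides,
$S^{(0)}=e^{2i\delta^{(0)}},\qquad S^{(1)}=2i\delta^{(1)}S^{(0)},\qquad S^{(2)}=\Big{(}2i\delta^{(2)}+\frac{1}{2}\big{(}2i\delta^{(1)}\big{)}^{2}\Big{)}S^{(0)},$
so that, e.g., $\delta^{(1)}=S^{(1)}/(2iS^{(0)})$; the coupled case, with mixing angles, is treated in Appendix C.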
Figure 2: $np$ scattering phase shifts in the ${}^{1}D_{2}$ (top row) and
${}^{3}D_{2}$ (bottom row) channels at NLO, N2LO, and N3LO using a momentum
cutoff $\Lambda=2500$ MeV. Phase shifts computed using the perturbative method
are shown with black solid lines. The red dashed and dot-dashed lines show the
real and imaginary parts, respectively, of the phase shift computed by summing
the $T$-matrix contribution and using the non-perturbative relation between
phase shifts and the $S$-matrix. The black dashed lines show phase shifts from
the Nijmegen analysis [62].
In the calibration of LECs, we do not account for uncertainties stemming from
the Nijmegen phase shifts or the truncation of the $\chi$EFT expansion. While
we are aware of the potential risk of overfitting in doing so, we opted for a
simple approach to establish a first quantitative potential and a baseline
understanding. The application of Bayesian inference methods [47, 48, 49] to
quantify the posterior probability distributions for the values of the LECs in
MWPC [38], though more robust, requires considerably more effort. In this
work, we focus on studying the effectiveness of MWPC for realistic description
of elastic $np$ scattering.
The $T_{\mathrm{lab}}$ values of the Nijmegen phase shifts used as calibration
data are listed in Table 2 for each channel and order. The calibrated LECs up
to N3LO are compiled in Table 3 in Appendix A. We use a naming-convention
where capital letters $C,D,E,\ldots$ denote LECs with dimension MeV-2, MeV-4,
MeV${}^{-6},\ldots$, respectively.
Each LEC receives perturbative corrections at subsequent orders from where it
was first introduced. As an example, the LO LEC $C_{{}^{1}S_{0}}$ is expanded
into contributions
$C_{{}^{1}S_{0}}=C^{(0)}_{{}^{1}S_{0}}+C^{(1)}_{{}^{1}S_{0}}+C^{(2)}_{{}^{1}S_{0}}+\dots,$
(21)
where the superscript enumerates the perturbative correction and not the
chiral order. In the following we will exemplify the calibration procedure by
discussing in detail how we calibrated the LECs in the ${}^{1}S_{0}$ channel.
Table 2: Laboratory scattering energies $T_{\mathrm{lab}}$ (in MeV) of the Nijmegen phase shifts [62] used to calibrate the values of the LECs at each chiral order. In total, we employed 33 single-energy phase shifts, the same as the total number of contact LECs in the chiral expansion of Long and Yang up to N3LO.

Channel | LO | NLO | N2LO | N3LO
---|---|---|---|---
${}^{1}S_{0}$ | 5 | - | 5, 25 | 5, 25, 50 | 5, 25, 50, 75
${}^{3}P_{0}$ | 25 | - | 25, 50 | 75, 100
${}^{1}P_{1}$ | - | - | 50 | 50
${}^{3}P_{1}$ | - | - | 50 | 50
${}^{3}S_{1}\mathrm{-}^{3}D_{1}$ | ${}^{3}S_{1}$: 30 | - | ${}^{3}S_{1}$: 30, 50; $\epsilon_{1}$: 50 | ${}^{3}S_{1}$: 30, 50; $\epsilon_{1}$: 50
${}^{3}P_{2}\mathrm{-}^{3}F_{2}$ | ${}^{3}P_{2}$: 30 | - | ${}^{3}P_{2}$: 30, 50; $\epsilon_{2}$: 50 | ${}^{3}P_{2}$: 30, 50; $\epsilon_{2}$: 50
At LO we calibrate the LEC $C^{(0)}_{{}^{1}S_{0}}$ such that the LO
${}^{1}S_{0}$ phase shift, $\delta^{(0)}$, reproduces the Nijmegen phase shift
at $T_{\mathrm{lab}}=5$ MeV. Two LECs are present in the ${}^{1}S_{0}$ channel
of the NLO potential: $D^{(0)}_{{}^{1}S_{0}}$ and $C^{(1)}_{{}^{1}S_{0}}$. The
latter is a perturbative correction to the LO LEC. These two LECs are
calibrated such that the LO phase shift plus the perturbative NLO correction,
i.e., $\delta^{(0)}+\delta^{(1)}$, reproduce the Nijmegen phase shifts at
$T_{\mathrm{lab}}=5$ and 25 MeV. The role of $C^{(1)}_{{}^{1}S_{0}}$ is to
ensure that the NLO correction vanishes for $T_{\mathrm{lab}}=5$ MeV. At N2LO
we have the LECs $C^{(2)}_{{}^{1}S_{0}},\ D^{(1)}_{{}^{1}S_{0}},\
E^{(0)}_{{}^{1}S_{0}}$ calibrated to phase shifts at energies
$T_{\mathrm{lab}}=5,25$ and 50 MeV. Finally, at N3LO the LECs
$C^{(3)}_{{}^{1}S_{0}},\ D^{(2)}_{{}^{1}S_{0}},\ E^{(1)}_{{}^{1}S_{0}},\
F^{(0)}_{{}^{1}S_{0}}$ are calibrated to reproduce the phase shifts at
$T_{\mathrm{lab}}=5,25,50$ and 75 MeV. An analogous scheme is employed for the
remaining partial waves and LECs. We calibrate all LECs for two different
momentum cutoffs: $\Lambda=500$ and 2500 MeV.
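Schematically, this hierarchical calibration is a sequence of small root-finding problems, one per order and channel. A minimal sketch, in which the function delta_sum and the target values are placeholders rather than the actual data or code used in this work:

from scipy.optimize import fsolve

# placeholder Nijmegen-style targets (degrees) at the 1S0 energies in Table 2
targets = {5.0: 62.0, 25.0: 50.9, 50.0: 40.5, 75.0: 32.9}

def calibrate_order(delta_sum, fixed_lecs, n_new, energies):
    # delta_sum(lecs, Tlab): accumulated perturbative phase shift
    # delta^(0) + ... + delta^(nu) at the current chiral order
    def residual(new_lecs):
        lecs = list(fixed_lecs) + list(new_lecs)
        return [delta_sum(lecs, T) - targets[T] for T in energies]
    return fsolve(residual, x0=[0.0] * n_new)

# e.g. NLO in 1S0: solve for (C^(1), D^(0)) with C^(0) kept from LO
# lecs_nlo = calibrate_order(delta_nlo, lecs_lo, 2, [5.0, 25.0])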
For the channels where OPE is perturbative there are no LECs present that need
to be calibrated. As a consistency check we compute and reproduce the
scattering phase shifts of Ref. [52]. Figure 3 shows our fit of the phase
shifts in the channels where OPE is non-perturbative. The bands indicate the
variation due to the two different cutoff values. There is an overall order-
by-order convergence in all channels up to around $T_{\mathrm{lab}}=100$ MeV
and we can reproduce the known results of [30, 32, 51]. The degree of cutoff
sensitivity varies notably among different channels. For instance, channels
like ${}^{1}P_{1}$ and ${}^{3}F_{2}$ show minimal sensitivity to the cutoff
value, while ${}^{3}P_{2}$ and $\epsilon_{1}$ demonstrate a more pronounced
dependency. The calibration in the ${}^{3}P_{0}$ channel was particularly
challenging at the higher chiral orders and the calibration energies needed to
be shifted to relatively high values at N3LO, as seen in Table 2.
Figure 3: Phase shifts in the channels where OPE is non-perturbative and the
amplitudes are computed using full distorted-wave perturbation theory. The
bands indicate the envelope of the variation due to the two different cutoff
values; 500 MeV (dashed line) and 2500 MeV (solid line). Note that LO and NLO
results coincide for all channels except ${}^{1}S_{0}$, which is why the blue
NLO band appears to be missing in several panels. The black solid lines show
phase shifts from the Nijmegen analysis [62] and the diamond markers indicate
the calibration data at $T_{\mathrm{lab}}$ values from Table 2.
## III Neutron-Proton Scattering Observables
Here we predict selected $np$ scattering observables up to
$T_{\mathrm{lab}}\approx 100$ MeV using the potentials that were defined and
calibrated in Section II. We compute scattering observables from the partial-
wave amplitudes by first constructing the spin-scattering matrix, $M$, by [64,
65, 56]
$\displaystyle M^{s}_{m^{\prime}_{s}m_{s}}$
$\displaystyle(p_{0},\theta_{\mathrm{cm}},\phi)=\frac{\sqrt{4\pi}}{2ip_{0}}\sum_{j,\ell,\ell^{\prime}}i^{\ell-\ell^{\prime}}(2j+1)\sqrt{2\ell+1}$
$\displaystyle\times\begin{pmatrix}\ell^{\prime}&s&j\\\
m_{s}-m^{\prime}_{s}&m^{\prime}_{s}&-m_{s}\end{pmatrix}\begin{pmatrix}\ell&s&j\\\
0&m_{s}&-m_{s}\end{pmatrix}$ (22) $\displaystyle\times
Y^{\ell^{\prime}}_{m_{s}-m^{\prime}_{s}}(\theta_{\mathrm{cm}},\phi)\left(S^{(\nu)js}_{\ell^{\prime}\ell}(p_{0},p_{0})-\delta_{\ell^{\prime}\ell}\right).$
The angles $\theta_{\mathrm{cm}}\in[0,\pi]$ and $\phi\in[0,2\pi]$ are the polar and azimuthal scattering angles, respectively; the latter is set to zero by cylindrical symmetry. The on-shell scattering momentum, $p_{0}$, is obtained from the laboratory scattering energy $T_{\mathrm{lab}}$ using Eq. 44 in
Appendix B. We compute $S^{(\nu)js}_{\ell^{\prime}\ell}(p_{0},p_{0})$, i.e.,
the $S$-matrix for a potential up to some chiral order $\nu$, by summing the
perturbatively computed $T$-matrix amplitudes to order $\nu$. Using the
conventions applied in this work, the partial-wave relation between the on-shell $S$- and $T$-matrix elements is thus given by
$\displaystyle
S^{(\nu)js}_{\ell^{\prime}\ell}(p_{0},p_{0})=\delta_{\ell^{\prime}\ell}-i\pi
m_{N}p_{0}$
$\displaystyle\times\left[T^{(0)js}_{\ell^{\prime}\ell}(p_{0},p_{0})+\dots+T^{(\nu)js}_{\ell^{\prime}\ell}(p_{0},p_{0})\right].$
(23)
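As a minimal numerical sketch of Eq. 23 (the function name and the default average nucleon mass $m_{N}=938.918$ MeV are our own choices, and the on-shell amplitudes are assumed to be available as plain complex numbers in units such that $m_{N}p_{0}T$ is dimensionless):

```python
import numpy as np

def s_from_t(T_corrections, p0, mN=938.918, diagonal=True):
    """On-shell S-matrix element from summed T-matrix corrections, Eq. (23).

    T_corrections : sequence of complex on-shell amplitudes T^(0), ..., T^(nu)
    p0            : on-shell momentum in MeV
    mN            : nucleon mass in MeV (an assumed average value)
    diagonal      : True for l' = l, i.e., delta_{l'l} = 1
    """
    delta = 1.0 if diagonal else 0.0
    return delta - 1j * np.pi * mN * p0 * sum(T_corrections)
```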
We focus our discussion on the differential $np$ scattering cross section and
two selected polarizations, and calculate these from the spin-scattering
matrix as
$\displaystyle\frac{d\sigma}{d\Omega}$
$\displaystyle=\frac{1}{4}\operatorname{Tr}{MM^{\dagger}}$ (24)
$\displaystyle\frac{d\sigma}{d\Omega}\times P_{b}$
$\displaystyle=\frac{1}{4}\operatorname{Tr}{M\bm{\sigma}_{1n}M^{\dagger}}$
(25) $\displaystyle\frac{d\sigma}{d\Omega}\times A_{yy}$
$\displaystyle=\frac{1}{4}\operatorname{Tr}{M\bm{\sigma}_{1n}\bm{\sigma}_{2n}M^{\dagger}}$
(26)
where $\bm{\sigma}_{in}\equiv\bm{\sigma}_{i}\cdot\hat{\bm{n}}$ for nucleon $i$, $\bm{\sigma}_{i}$ denotes the vector of Pauli spin matrices, and $\hat{\bm{n}}$ is the unit normal to the scattering plane.
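The traces in Eqs. 24–26 are straightforward to evaluate; a minimal sketch, assuming $M$ is supplied as a $4\times 4$ array in the product spin basis with nucleon 1 as the first tensor factor (the function name and basis ordering are our own assumptions):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def observables(M, n_hat):
    """dsigma/dOmega, P_b and A_yy from the spin-scattering matrix via the
    traces in Eqs. (24)-(26)."""
    sigma_n = n_hat[0] * SX + n_hat[1] * SY + n_hat[2] * SZ
    s1n = np.kron(sigma_n, I2)   # sigma_1 . n-hat (acts on nucleon 1)
    s2n = np.kron(I2, sigma_n)   # sigma_2 . n-hat (acts on nucleon 2)
    Md = M.conj().T
    dsdo = 0.25 * np.trace(M @ Md).real
    Pb = 0.25 * np.trace(M @ s1n @ Md).real / dsdo
    Ayy = 0.25 * np.trace(M @ s1n @ s2n @ Md).real / dsdo
    return dsdo, Pb, Ayy
```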
Figure 4: Selection of $np$ scattering observables in the energy interval
$T_{\mathrm{lab}}=10$ to 100 MeV. Experimental data from Refs. [66, 67]. The
bands indicate cutoff variation in the same way as in Fig. 3.
Figure 4 shows our prediction for these scattering observables in the energy
range $T_{\mathrm{lab}}=10$ to 100 MeV for the two cutoffs $\Lambda=500$ MeV
and $\Lambda=2500$ MeV. For the lower scattering energies
($T_{\mathrm{lab}}\lesssim 60$ MeV) we observe an order-by-order improvement
for all considered observables. Interestingly, the N3LO predictions do not always perform better, but in general they perform at least as well as N2LO.
Indeed, for $T_{\text{lab}}\approx$ 100 MeV (rightmost panels of Fig. 4), it
appears that the order-by-order improvement in the predictions of the
differential cross section and $P_{b}$ polarization deteriorates and N2LO can
perform better than N3LO. This effect is visible also at the level of phase
shifts shown in Fig. 3. It is not clear at the moment whether this is due to overfitting and/or an underlying issue with the MWPC that we employ. Our
N3LO predictions are certainly influenced by the adopted values of sub-leading
$\pi N$ LECs [54]. Calculations of other scattering observables show that the order-by-order convergence demonstrated in Fig. 4 is representative of all elastic $np$ scattering observables in the PC by Long and Yang. Two-
pion exchange is clearly important for achieving a realistic description of
scattering observables with $T_{\mathrm{lab}}\lesssim 100$ MeV.
The total cross section can be straightforwardly computed from the
differential cross section as
$\sigma_{\mathrm{tot}}(p_{0})=2\pi\int_{-1}^{1}d(\cos\theta_{\mathrm{cm}})\
\frac{d\sigma}{d\Omega}(p_{0},\theta_{\mathrm{cm}}),$ (27)
and predictions for scattering energies up to $T_{\mathrm{lab}}=150$ MeV are
shown in Fig. 5. Also for this observable, the agreement with experimental
data typically improves order-by-order, at least up to N2LO. The improvement
of N3LO over N2LO is not obvious. At very low energies, the higher-order
predictions for the total cross section are much better than the lower-order
predictions. This result is somewhat peculiar for a low-energy EFT and likely
due to overfitting at the phase shift level. For $T_{\mathrm{lab}}\gtrsim 100$
MeV, roughly corresponding to 220 MeV relative momentum, the agreement with
data even deteriorates at N3LO. This is analogous to what was found for the
angular-differential observables shown in Fig. 4 and consistent with the
observation in Fig. 3 that the phase shifts at N3LO might suffer from
overfitting at the higher energies. Alternatively, the observed decline in
predictive power might indicate the presence of an additional mass scale at
200–300 MeV. Thus, it will be very interesting to study the effects of
accounting for the $\Delta(1232)$-isobar in two-pion exchange in this MWPC.
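The quadrature in Eq. 27 is elementary; a minimal sketch, assuming a vectorized callable for the differential cross section (the function name and default node count are ours):

```python
import numpy as np

def sigma_tot(dsdo, p0, n_angles=64):
    """Total cross section by Gauss-Legendre quadrature of Eq. (27).

    dsdo : vectorized callable dsdo(p0, theta_cm) for dsigma/dOmega
    """
    x, w = np.polynomial.legendre.leggauss(n_angles)  # nodes in cos(theta_cm)
    return 2.0 * np.pi * np.sum(w * dsdo(p0, np.arccos(x)))
```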
Figure 5: Total $np$ cross sections computed by integrating the differential
cross sections (27). Panel $(a)$ shows cross sections for a large interval of
scattering energies, $T_{\mathrm{lab}}=5\text{--}150$ MeV. Panels $(b)$ and
$(c)$ expand results at low- and high-energy intervals, respectively. The
bands indicate cutoff variation as in Fig. 3. Experimental data from Refs.
[66, 67].
Next, we analyze how the perturbative breaking of unitarity in $\chi$EFT
affects the predictions of total cross sections. Indeed, the computation of
$S$-matrix elements using Eq. 23, where the order-by-order contributions of
the scattering amplitudes are summed directly to the $S$-matrix, leads to a
perturbative breaking of unitarity. In contrast, amplitudes computed non-
perturbatively, i.e., when the potential terms are summed before solving for
the scattering amplitude (as is done in WPC), are unitary by construction. In
this case, the probability flux in the scattering process is also conserved
exactly and the optical theorem can be safely used to compute the total cross
section as, e.g.,
$\sigma_{\mathrm{tot}}(p_{0})=\frac{2\pi}{p_{0}}\operatorname{Im}\left[a(\theta_{\mathrm{cm}}=0)+b(\theta_{\mathrm{cm}}=0)\right],$
(28)
where $a(\theta_{\mathrm{cm}})$ and $b(\theta_{\mathrm{cm}})$ are Saclay-
amplitudes computed from the $M$-matrix [68].
We use the difference between total cross sections calculated using Eq. 27 and
Eq. 28 to measure the effects of unitarity breaking. In Fig. 6 we show the
relative difference between the cross sections computed using exact
integration and the optical theorem as a function of scattering energy. The
figure demonstrates how unitarity is restored perturbatively as we go to
higher chiral orders. Indeed, the relative difference between the two cross
section calculations is limited to 10% for scattering energies up to 40 MeV at
NLO, 70 MeV at N2LO, and 120 MeV at N3LO, respectively. The bands in the
figure reflect differences coming from using two cutoff values 500 MeV and
2500 MeV. The bands for NLO and N2LO increase smoothly with the scattering
energy. The band at N3LO shows an artifact from the two different calculations
for $\Lambda=2500$ MeV intersecting at some energies leading to very small
relative errors. We also note that the cutoff dependencies for the N2LO and
N3LO calculations do not vanish as the scattering energy approaches zero.
Figure 6: The relative difference between total $np$ cross sections ($\sigma$)
computed by integrating of the differential cross section (27) and the optical
theorem (28). The bands indicate cutoff variation as in Fig. 3. The color
coding for the orders is the same as Fig. 3. The horizontal dashed line marks
a $10$% difference.
We can also discuss this result in terms of the EFT truncation error. For a
given chiral order, we argue that the results from the two different cross
section calculations should not become significantly different until we reach
an energy where the next (omitted) order in the chiral low-energy expansion
becomes relevant. This should correspond to the scattering energy for which
the truncation error is significant. Breaking unitarity implies that the norm
of the partial-wave $S$-matrix in Eq. 23 deviates from unity as
$\left(S^{(\nu)}\right)^{\dagger}S^{(\nu)}=1-\mathcal{C}(Q/\Lambda_{b})^{\nu+1}$,
where we also expect $\mathcal{C}$ to be of natural size. This scaling of
unitarity breaking should be revisited when probability distributions of the
LEC values and the hyperparameters of the EFT truncation error have been
inferred using a Bayesian approach.
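As an illustration of this scaling, the unitarity defect $1-|S^{(\leq\nu)}|^{2}$ of a partial-wave $S$-matrix truncated at successive orders can be monitored with a few lines (a sketch; the function name is ours):

```python
import numpy as np

def unitarity_defect(S_corrections):
    """1 - |S|^2 for the partial-wave S-matrix truncated at each chiral
    order; the defect should scale like C (Q / Lambda_b)^(nu + 1) and
    therefore shrink order by order."""
    S_truncated = np.cumsum(np.asarray(S_corrections, dtype=complex))
    return 1.0 - np.abs(S_truncated) ** 2
```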
## IV Summary and outlook
This work presents a comprehensive analysis of $np$ scattering observables
(cross sections and polarizations) utilizing an RG-invariant formulation of
$\chi$EFT by Long and Yang. We calibrated the LECs by reproducing Nijmegen
phase shifts at specific scattering energies, and carried out calculations up
to N3LO for two values of the momentum-space cutoffs, $500$ MeV and $2500$
MeV. The PC that we employed is fairly representative of a broad class of
MWPCs in which corrections beyond LO, based on one-pion exchange, are included
perturbatively and the short-range contact potential incorporates counterterms
promoted to renormalize the long-range pion contributions to the scattering
amplitudes. A key result of this paper was a quantitative demonstration that
RG-invariant $\chi$EFT exhibits a steady order-by-order convergence in the
description of scattering observables, starting already at LO. A second key
result was the realistic reproduction of experimental scattering data in an
energy range up to $T_{\mathrm{lab}}=100$ MeV at N2LO. We also found that N3LO
predictions do not always improve over N2LO.
A perturbative approach exposes the deficiencies of any PC, not only the
possible lack of RG-independence. In fact, using a perturbative approach we
found that the accuracy of our N3LO predictions for the total $np$ cross
section declines as one approaches $T_{\mathrm{lab}}\gtrsim 100$ MeV. This
corresponds to a relative scattering momentum of 220 MeV and might suggest the
presence of an additional mass scale at 200–300 MeV. This finding is in
accordance with the known mass splitting between the nucleon and the
$\Delta$(1232) resonance, but is markedly lower than conventional estimates of
the breakdown scale of $\chi$EFT residing in the vicinity of the $\rho$-meson
mass. The latter estimate has also been corroborated in a Bayesian study of
non-perturbative WPC predictions of nucleon-nucleon scattering observables
[69].
Based on our comparison of perturbative and non-perturbative calculations of
phase shifts, we speculated that the magnitudes of the imaginary component of
the non-perturbative phase shift and the $\chi$EFT truncation error are
linked. We also investigated the breaking of unitarity at the level of total
$np$ cross sections. The connection between perturbative unitarity breaking
and the truncation error deserves further attention.
Future work will focus on quantifying posterior probability distributions for
the LECs and the EFT truncation error, making predictions beyond the two-
nucleon system, and the effects of including the $\Delta$(1232) resonance in
the two-pion exchange potential. Fast and accurate emulators [70], adapted to
perturbative computations, will likely be essential for rigorous testing of
RG-invariant $\chi$EFT against nuclear data and to address critical questions
regarding, e.g., the construction of LO, the importance of promoting higher-
order pion exchanges and many-nucleon forces as one increases the mass number,
and the level of fine-tuning in $\chi$EFT.
###### Acknowledgements.
O.T. thanks C.-J. Yang, B. Long, and R. Peng for helpful discussions and for
providing detailed benchmarks. The authors also thank Daniel Phillips for
feedback on a draft version of the manuscript. This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 758027) and by the Swedish Research Council (Grants No. 2017-04234, No. 2020-05127 and No. 2021-04507).
The computations were enabled by resources provided by the Swedish National
Infrastructure for Computing (SNIC) partially funded by the Swedish Research
Council through Grant Agreement No. 2018-05973.
## References
* Ekström _et al._ [2023] A. Ekström, C. Forssén, G. Hagen, G. R. Jansen, W. Jiang, and T. Papenbrock, What is ab initio in nuclear theory?, Front. Phys. 11, 1129094 (2023), arXiv:2212.11064 [nucl-th] .
* Hergert [2020] H. Hergert, A Guided Tour of $ab$ $initio$ Nuclear Many-Body Theory, Front. in Phys. 8, 379 (2020), arXiv:2008.05061 [nucl-th] .
* Epelbaum _et al._ [2009] E. Epelbaum, H.-W. Hammer, and U.-G. Meissner, Modern Theory of Nuclear Forces, Rev. Mod. Phys. 81, 1773 (2009), arXiv:0811.1338 [nucl-th] .
* Machleidt and Entem [2011] R. Machleidt and D. R. Entem, Chiral effective field theory and nuclear forces, Phys. Rept. 503, 1 (2011), arXiv:1105.2919 [nucl-th] .
* Hammer _et al._ [2020] H. W. Hammer, S. König, and U. van Kolck, Nuclear effective field theory: status and perspectives, Rev. Mod. Phys. 92, 025004 (2020), arXiv:1906.12122 [nucl-th] .
* Weinberg [1990] S. Weinberg, Nuclear forces from chiral Lagrangians, Phys. Lett. B 251, 288 (1990).
* Weinberg [1991] S. Weinberg, Effective chiral Lagrangians for nucleon - pion interactions and nuclear forces, Nucl. Phys. B 363, 3 (1991).
* Entem and Machleidt [2003] D. R. Entem and R. Machleidt, Accurate charge dependent nucleon nucleon potential at fourth order of chiral perturbation theory, Phys. Rev. C 68, 041001 (2003), arXiv:nucl-th/0304018 .
* Ekström _et al._ [2013] A. Ekström _et al._ , Optimized Chiral Nucleon-Nucleon Interaction at Next-to-Next-to-Leading Order, Phys. Rev. Lett. 110, 192502 (2013), arXiv:1303.4674 [nucl-th] .
* Gezerlis _et al._ [2013] A. Gezerlis, I. Tews, E. Epelbaum, S. Gandolfi, K. Hebeler, A. Nogga, and A. Schwenk, Quantum Monte Carlo Calculations with Chiral Effective Field Theory Interactions, Phys. Rev. Lett. 111, 032501 (2013), arXiv:1303.6243 [nucl-th] .
* Piarulli _et al._ [2015] M. Piarulli, L. Girlanda, R. Schiavilla, R. Navarro Pérez, J. E. Amaro, and E. Ruiz Arriola, Minimally nonlocal nucleon-nucleon potentials with chiral two-pion exchange including $\Delta$ resonances, Phys. Rev. C 91, 024003 (2015), arXiv:1412.6446 [nucl-th] .
* Carlsson _et al._ [2016] B. D. Carlsson, A. Ekström, C. Forssén, D. F. Strömberg, G. R. Jansen, O. Lilja, M. Lindby, B. A. Mattsson, and K. A. Wendt, Uncertainty analysis and order-by-order optimization of chiral nuclear interactions, Phys. Rev. X 6, 011019 (2016), arXiv:1506.02466 [nucl-th] .
* Ekström _et al._ [2015] A. Ekström, G. R. Jansen, K. A. Wendt, G. Hagen, T. Papenbrock, B. D. Carlsson, C. Forssén, M. Hjorth-Jensen, P. Navrátil, and W. Nazarewicz, Accurate nuclear radii and binding energies from a chiral interaction, Phys. Rev. C 91, 051301 (2015), arXiv:1502.04682 [nucl-th] .
* Jiang _et al._ [2020] W. G. Jiang, A. Ekström, C. Forssén, G. Hagen, G. R. Jansen, and T. Papenbrock, Accurate bulk properties of nuclei from $A=2$ to $\infty$ from potentials with $\Delta$ isobars, Phys. Rev. C 102, 054301 (2020), arXiv:2006.16774 [nucl-th] .
* Reinert _et al._ [2018] P. Reinert, H. Krebs, and E. Epelbaum, Semilocal momentum-space regularized chiral two-nucleon potentials up to fifth order, Eur. Phys. J. A 54, 86 (2018), arXiv:1711.08821 [nucl-th] .
* Entem _et al._ [2017] D. R. Entem, R. Machleidt, and Y. Nosyk, High-quality two-nucleon potentials up to fifth order of the chiral expansion, Phys. Rev. C 96, 024004 (2017), arXiv:1703.05454 [nucl-th] .
* Epelbaum _et al._ [2020] E. Epelbaum, H. Krebs, and P. Reinert, High-precision nuclear forces from chiral EFT: State-of-the-art, challenges and outlook, Front. in Phys. 8, 98 (2020), arXiv:1911.11875 [nucl-th] .
* Tews _et al._ [2020] I. Tews, Z. Davoudi, A. Ekström, J. D. Holt, and J. E. Lynn, New Ideas in Constraining Nuclear Forces, J. Phys. G 47, 103001 (2020), arXiv:2001.03334 [nucl-th] .
* Tews _et al._ [2022] I. Tews _et al._ , Nuclear Forces for Precision Nuclear Physics: A Collection of Perspectives, Few Body Syst. 63, 67 (2022), arXiv:2202.01105 [nucl-th] .
* van Kolck [2020] U. van Kolck, The Problem of Renormalization of Chiral Nuclear Forces, Front. in Phys. 8, 79 (2020), arXiv:2003.06721 [nucl-th] .
* Nogga _et al._ [2005] A. Nogga, R. G. E. Timmermans, and U. van Kolck, Renormalization of one-pion exchange and power counting, Phys. Rev. C 72, 054006 (2005), arXiv:nucl-th/0506005 .
* Frank _et al._ [1971] W. Frank, D. J. Land, and R. M. Spector, Singular potentials, Rev. Mod. Phys. 43, 36 (1971).
* Pavon Valderrama and Ruiz Arriola [2006] M. Pavon Valderrama and E. Ruiz Arriola, Renormalization of NN interaction with chiral two pion exchange potential: Non-central phases, Phys. Rev. C 74, 064004 (2006), [Erratum: Phys.Rev.C 75, 059905 (2007)], arXiv:nucl-th/0507075 .
* Pavon Valderrama and Ruiz Arriola [2005] M. Pavon Valderrama and E. Ruiz Arriola, Renormalization of the deuteron with one pion exchange, Phys. Rev. C 72, 054002 (2005), arXiv:nucl-th/0504067 .
* Pavon Valderrama [2011] M. Pavon Valderrama, Perturbative Renormalizability of Chiral Two Pion Exchange in Nucleon-Nucleon Scattering: P- and D-waves, Phys. Rev. C 84, 064002 (2011), arXiv:1108.0872 [nucl-th] .
* Long [2013] B. Long, Improved convergence of chiral effective field theory for 1S0 of NN scattering, Phys. Rev. C 88, 014002 (2013), arXiv:1304.7382 [nucl-th] .
* Pavón Valderrama _et al._ [2017] M. Pavón Valderrama, M. Sánchez Sánchez, C. J. Yang, B. Long, J. Carbonell, and U. van Kolck, Power Counting in Peripheral Partial Waves: The Singlet Channels, Phys. Rev. C 95, 054001 (2017), arXiv:1611.10175 [nucl-th] .
* Valderrama [2011] M. P. Valderrama, Perturbative renormalizability of chiral two pion exchange in nucleon-nucleon scattering, Phys. Rev. C 83, 024003 (2011), arXiv:0912.0699 [nucl-th] .
* Birse [2006] M. C. Birse, Power counting with one-pion exchange, Phys. Rev. C 74, 014003 (2006), arXiv:nucl-th/0507077 .
* Long and Yang [2012a] B. Long and C. J. Yang, Short-range nuclear forces in singlet channels, Phys. Rev. C 86, 024001 (2012a), arXiv:1202.4053 [nucl-th] .
* Sánchez Sánchez _et al._ [2018] M. Sánchez Sánchez, C. J. Yang, B. Long, and U. van Kolck, Two-nucleon ${}^{1}S_{0}$ amplitude zero in chiral effective field theory, Phys. Rev. C 97, 024001 (2018), arXiv:1704.08524 [nucl-th] .
* Long and Yang [2012b] B. Long and C.-J. Yang, Renormalizing chiral nuclear forces: Triplet channels, Phys. Rev. C 85, 034002 (2012b).
* Yang [2016] C. J. Yang, Chiral potential renormalized in harmonic-oscillator space, Phys. Rev. C 94, 064004 (2016), arXiv:1610.01350 [nucl-th] .
* Mishra _et al._ [2022] C. Mishra, A. Ekström, G. Hagen, T. Papenbrock, and L. Platter, Two-pion exchange as a leading-order contribution in chiral effective field theory, Phys. Rev. C 106, 024004 (2022), arXiv:2111.15515 [nucl-th] .
* Peng _et al._ [2022] R. Peng, S. Lyu, S. König, and B. Long, Constructing chiral effective field theory around unnatural leading-order interactions, Phys. Rev. C 105, 054002 (2022), arXiv:2112.00947 [nucl-th] .
* Song _et al._ [2017] Y.-H. Song, R. Lazauskas, and U. van Kolck, Triton binding energy and neutron-deuteron scattering up to next-to-leading order in chiral effective field theory, Phys. Rev. C 96, 024002 (2017), [Erratum: Phys.Rev.C 100, 019901 (2019)], arXiv:1612.09090 [nucl-th] .
* Yang _et al._ [2021] C. J. Yang, A. Ekström, C. Forssén, and G. Hagen, Power counting in chiral effective field theory and nuclear binding, Phys. Rev. C 103, 054304 (2021), arXiv:2011.11584 [nucl-th] .
* Thim _et al._ [2023] O. Thim, E. May, A. Ekström, and C. Forssén, Bayesian analysis of chiral effective field theory at leading order in a modified Weinberg power counting approach, Phys. Rev. C 108, 054002 (2023), arXiv:2302.12624 [nucl-th] .
* Yang _et al._ [2023] C. J. Yang, A. Ekström, C. Forssén, G. Hagen, G. Rupak, and U. van Kolck, The importance of few-nucleon forces in chiral effective field theory, Eur. Phys. J. A 59, 233 (2023), arXiv:2109.13303 [nucl-th] .
* Long and van Kolck [2008] B. Long and U. van Kolck, Renormalization of Singular Potentials and Power Counting, Annals Phys. 323, 1304 (2008), arXiv:0707.4325 [quant-ph] .
* Epelbaum and Meissner [2013] E. Epelbaum and U. G. Meissner, On the Renormalization of the One-Pion Exchange Potential and the Consistency of Weinberg's Power Counting, Few Body Syst. 54, 2175 (2013), arXiv:nucl-th/0609037 .
* Epelbaum and Gegelia [2009] E. Epelbaum and J. Gegelia, Regularization, renormalization and ’peratization’ in effective field theory for two nucleons, Eur. Phys. J. A 41, 341 (2009), arXiv:0906.3822 [nucl-th] .
* Epelbaum _et al._ [2018] E. Epelbaum, A. M. Gasparyan, J. Gegelia, and U.-G. Meißner, How (not) to renormalize integral equations with singular potentials in effective field theory, Eur. Phys. J. A 54, 186 (2018), arXiv:1810.02646 [nucl-th] .
* Gasparyan and Epelbaum [2022] A. M. Gasparyan and E. Epelbaum, Is the RG-invariant EFT for few-nucleon systems cutoff independent?, (2022), arXiv:2210.16225 [nucl-th] .
* Long and Yang [2011a] B. Long and C.-J. Yang, Renormalizing chiral nuclear forces: A case study of ${}^{3}\phantom{\rule{-1.60004pt}{0.0pt}}{P}_{0}$, Phys. Rev. C 84, 057001 (2011a).
* Furnstahl _et al._ [2015] R. J. Furnstahl, N. Klco, D. R. Phillips, and S. Wesolowski, Quantifying truncation errors in effective field theory, Phys. Rev. C 92, 024005 (2015), arXiv:1506.01343 [nucl-th] .
* Wesolowski _et al._ [2019] S. Wesolowski, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips, Exploring bayesian parameter estimation for chiral effective field theory using nucleon-nucleon phase shifts, J. Phys. G 46, 045102 (2019).
* Wesolowski _et al._ [2021] S. Wesolowski, I. Svensson, A. Ekström, C. Forssén, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips, Rigorous constraints on three-nucleon forces in chiral effective field theory from fast and accurate calculations of few-body observables, Phys. Rev. C 104, 064001 (2021), arXiv:2104.04441 [nucl-th] .
* Svensson _et al._ [2022] I. Svensson, A. Ekström, and C. Forssén, Bayesian parameter estimation in chiral effective field theory using the hamiltonian monte carlo method, Phys. Rev. C 105, 014004 (2022).
* Svensson _et al._ [2023] I. Svensson, A. Ekström, and C. Forssén, Inference of the low-energy constants in delta-full chiral effective field theory including a correlated truncation error, (2023), arXiv:2304.02004 [nucl-th] .
* Long and Yang [2011b] B. Long and C. J. Yang, Renormalizing chiral nuclear forces: a case study of 3P0, Phys. Rev. C 84, 057001 (2011b), arXiv:1108.0985 [nucl-th] .
* Wu and Long [2019] S. Wu and B. Long, Perturbative $nn$ scattering in chiral effective field theory, Phys. Rev. C 99, 024003 (2019).
* Peng _et al._ [2020] R. Peng, S. Lyu, and B. Long, Perturbative chiral nucleon–nucleon potential for the ${}^{3}P_{0}$ partial wave, Commun. Theor. Phys. 72, 095301 (2020), arXiv:2011.13186 [nucl-th] .
* Siemens _et al._ [2017] D. Siemens, J. Ruiz de Elvira, E. Epelbaum, M. Hoferichter, H. Krebs, B. Kubis, and U. G. Meißner, Reconciling threshold and subthreshold expansions for pion–nucleon scattering, Phys. Lett. B 770, 27 (2017), arXiv:1610.08978 [nucl-th] .
* Barford and Birse [2003] T. Barford and M. C. Birse, A Renormalization group approach to two-body scattering in the presence of long range forces, Phys. Rev. C 67, 064006 (2003), arXiv:hep-ph/0206146 .
* Newton [1982] R. G. Newton, _Scattering theory of waves and particles_ (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, New York, 10010, U.S.A., 1982).
* Hussein and Canto [2012] M. S. Hussein and L. F. Canto, _Scattering Theory Of Molecules, Atoms And Nuclei_ (World Scientific Publishing Company, Singapore, SINGAPORE, 2012).
* Griesshammer [2022] H. W. Griesshammer, What Can Possibly Go Wrong?, Few Body Syst. 63, 44 (2022), arXiv:2111.00930 [nucl-th] .
* Vanasse [2013] J. Vanasse, Fully Perturbative Calculation of $nd$ Scattering to Next-to-next-to-leading-order, Phys. Rev. C 88, 044001 (2013), arXiv:1305.0283 [nucl-th] .
* Erkelenz _et al._ [1971] K. Erkelenz, R. Alzetta, and K. Holinde, Momentum space calculations and helicity formalism in nuclear physics, Nucl. Phys. A 176, 413 (1971).
* Yamaguchi [1954] Y. Yamaguchi, Two nucleon problem when the potential is nonlocal but separable. 1., Phys. Rev. 95, 1628 (1954).
* Stoks _et al._ [1993] V. G. J. Stoks, R. A. M. Klomp, M. C. M. Rentmeester, and J. J. de Swart, Partial wave analysis of all nucleon-nucleon scattering data below 350-MeV, Phys. Rev. C 48, 792 (1993).
* Odell _et al._ [2023] D. Odell, D. R. Phillips, and U. van Kolck, Effective Field Theory for the Bound States and Scattering of a Heavy Charged Particle and a Neutral Atom, (2023), arXiv:2307.13103 [nucl-th] .
* Blatt and Biedenharn [1952] J. M. Blatt and L. C. Biedenharn, The Angular Distribution of Scattering and Reaction Cross Sections, Rev. Mod. Phys. 24, 258 (1952).
* Glöckle [1983] W. Glöckle, _The Quantum Mechanical Few-body Problem_ (Springer-Verlag, Berlin Heidelberg, 1983).
* Navarro Pérez _et al._ [2013] R. Navarro Pérez, J. E. Amaro, and E. Ruiz Arriola, Partial-wave analysis of nucleon-nucleon scattering below the pion-production threshold, Phys. Rev. C 88, 024002 (2013).
* Pérez _et al._ [2013] R. N. Pérez, J. E. Amaro, and E. R. Arriola, Coarse-grained potential analysis of neutron-proton and proton-proton scattering below the pion production threshold, Phys. Rev. C 88, 064002 (2013).
* Bystricky _et al._ [1978] J. Bystricky, F. Lehar, and P. Winternitz, Formalism of Nucleon-Nucleon Elastic Scattering Experiments, J. Phys. (France) 39, 1 (1978).
* Melendez _et al._ [2017] J. A. Melendez, S. Wesolowski, and R. J. Furnstahl, Bayesian truncation errors in chiral effective field theory: nucleon-nucleon observables, Phys. Rev. C 96, 024003 (2017), arXiv:1704.03308 [nucl-th] .
* Duguet _et al._ [2023] T. Duguet, A. Ekström, R. J. Furnstahl, S. König, and D. Lee, Eigenvector Continuation and Projection-Based Emulators, (2023), arXiv:2310.19419 [nucl-th] .
* Epelbaum _et al._ [1998] E. Epelbaum, W. Gloeckle, and U.-G. Meissner, Nuclear forces from chiral Lagrangians using the method of unitary transformation. 1. Formalism, Nucl. Phys. A 637, 107 (1998), arXiv:nucl-th/9801064 .
* Epelbaum _et al._ [2000] E. Epelbaum, W. Gloeckle, and U.-G. Meissner, Nuclear forces from chiral Lagrangians using the method of unitary transformation. 2. The two nucleon system, Nucl. Phys. A 671, 295 (2000), arXiv:nucl-th/9910064 .
* Haftel and Tabakin [1970] M. I. Haftel and F. Tabakin, Nuclear saturation and the smoothness of nucleon-nucleon potentials, Nucl. Phys. A 158, 1 (1970).
* Landau [1990] R. H. Landau, _Quantum mechanics. Vol. 2: A second course in quantum theory_ (1990).
* Hoppe _et al._ [2017] J. Hoppe, C. Drischler, R. J. Furnstahl, K. Hebeler, and A. Schwenk, Weinberg eigenvalues for chiral nucleon-nucleon interactions, Phys. Rev. C 96, 054002 (2017), arXiv:1707.06438 [nucl-th] .
* Stapp _et al._ [1957] H. P. Stapp, T. J. Ypsilantis, and N. Metropolis, Phase shift analysis of 310-MeV proton proton scattering experiments, Phys. Rev. 105, 302 (1957).
## Appendix A Nuclear potentials in the Long and Yang power counting
The orders at which potentials appear in the Long and Yang PC in channels
where OPE is treated non-perturbatively are shown in Table 1. Similarly, for
the channels where OPE is treated perturbatively, we follow the PC of Ref.
[52], also shown in Table 1. In this appendix, we list the expressions for the
potentials appearing in Table 1. The potential contributions will be listed
using the following decomposition convention [4]
$\displaystyle V(\bm{p}^{\prime},\bm{p})$
$\displaystyle=V_{C}+\bm{\tau}_{1}\cdot\bm{\tau}_{2}W_{C}$ (29)
$\displaystyle+\left[V_{S}+\bm{\tau}_{1}\cdot\bm{\tau}_{2}W_{S}\right]\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}$
$\displaystyle+\left[V_{LS}+\bm{\tau}_{1}\cdot\bm{\tau}_{2}W_{LS}\right]\left(-i\bm{S}\cdot(\bm{q}\times\bm{k})\right)$
$\displaystyle+\left[V_{T}+\bm{\tau}_{1}\cdot\bm{\tau}_{2}W_{T}\right]\bm{\sigma}_{1}\cdot\bm{q}\,\bm{\sigma}_{2}\cdot\bm{q}$
$\displaystyle+\left[V_{\sigma L}+\bm{\tau}_{1}\cdot\bm{\tau}_{2}W_{\sigma
L}\right]\bm{\sigma}_{1}\cdot(\bm{q}\times\bm{k})\,\bm{\sigma}_{2}\cdot(\bm{q}\times\bm{k}),$
where
$\bm{q}=\bm{p}-\bm{p}^{\prime},\quad\bm{k}=\frac{1}{2}\left(\bm{p}+\bm{p}^{\prime}\right),\quad\bm{S}=\frac{1}{2}\left(\bm{\sigma}_{1}+\bm{\sigma}_{2}\right)$
(30)
and $\bm{\sigma}_{i}$ denotes the Pauli spin matrix for the respective
nucleon.
The one-pion exchange potential takes the form
$\displaystyle V^{(0)}_{1\pi}$
$\displaystyle=\left(\bm{\tau}_{1}\cdot\bm{\tau}_{2}\right)\left(\bm{\sigma}_{1}\cdot\bm{q}\bm{\sigma}_{2}\cdot\bm{q}\right)W_{T},$
(31) $\displaystyle W_{T}$
$\displaystyle=-\left(\frac{g_{A}}{4f_{\pi}}\right)^{2}\frac{1}{q^{2}+m^{2}_{\pi}},$
(32)
where $g_{A}=1.29$ is the axial coupling, $f_{\pi}=92.1$ MeV is the pion decay constant, $m_{\pi}=138.039$ MeV is the average pion mass, and $q=|\bm{q}|$. For
the two-pion exchange potentials, we employ expressions computed with
dimensional regularization (DR). The leading two-pion exchange potential takes
the form [71, 72, 4]
$\displaystyle V^{(2)}_{2\pi}$
$\displaystyle=\bm{\tau}_{1}\cdot\bm{\tau}_{2}W_{C}+\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}V_{S}+\bm{\sigma}_{1}\cdot\bm{q}\bm{\sigma}_{2}\cdot\bm{q}V_{T},$
(33) $\displaystyle W_{C}$
$\displaystyle=-\frac{L(q)}{384\pi^{2}f^{4}_{\pi}}\Bigg{[}4m^{2}_{\pi}\left(5g^{4}_{A}-4g^{2}_{A}-1\right)+q^{2}\left(23g^{4}_{A}-10g^{2}_{A}-1\right)+\frac{48g^{4}_{A}m^{4}_{\pi}}{w^{2}}\Bigg{]},$
(34) $\displaystyle V_{S}$
$\displaystyle=\frac{3g^{4}_{A}L(q)q^{2}}{64\pi^{2}f^{4}_{\pi}},$ (35)
$\displaystyle V_{T}$
$\displaystyle=-\frac{1}{q^{2}}V_{S}=-\frac{3g^{4}_{A}L(q)}{64\pi^{2}f^{4}_{\pi}},$
(36)
with
$L(q)=\frac{w}{q}\ln\frac{w+q}{2m_{\pi}},\quad w=\sqrt{4m^{2}_{\pi}+q^{2}}.$
(37)
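For reference, the scalar functions of Eqs. 31–37 are easy to evaluate numerically; a minimal sketch with the constants quoted above (function names are ours, $L(q)$ requires $q>0$, and the OPE prefactor is taken exactly as printed in Eq. 32):

```python
import numpy as np

GA, FPI, MPI = 1.29, 92.1, 138.039  # g_A, f_pi and m_pi (MeV), as quoted above

def W_T_ope(q):
    """Isovector tensor function of the OPE potential, Eq. (32); q = |q| in MeV."""
    return -(GA / (4.0 * FPI)) ** 2 / (q ** 2 + MPI ** 2)

def L_loop(q):
    """Loop function L(q) of Eq. (37); q > 0 assumed (L -> 1 as q -> 0)."""
    w = np.sqrt(4.0 * MPI ** 2 + q ** 2)
    return (w / q) * np.log((w + q) / (2.0 * MPI))

def W_C_tpe(q):
    """Central isovector function of the leading TPE, Eq. (34)."""
    w2 = 4.0 * MPI ** 2 + q ** 2
    return -(L_loop(q) / (384.0 * np.pi ** 2 * FPI ** 4)) * (
        4.0 * MPI ** 2 * (5.0 * GA ** 4 - 4.0 * GA ** 2 - 1.0)
        + q ** 2 * (23.0 * GA ** 4 - 10.0 * GA ** 2 - 1.0)
        + 48.0 * GA ** 4 * MPI ** 4 / w2)

def V_S_tpe(q):
    """Spin-spin function of the leading TPE, Eq. (35); V_T = -V_S / q^2."""
    return 3.0 * GA ** 4 * L_loop(q) * q ** 2 / (64.0 * np.pi ** 2 * FPI ** 4)
```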
The sub-leading two-pion exchange potential takes the form of Eqs. (4.13)–(4.20) in [4]. We apply the power counting
$\left(Q/m_{N}\right)=\left(Q/\Lambda_{b}\right)^{2}$ for $(1/m_{N})$
corrections, which means that all terms proportional to $1/m_{N}$ vanish at
order $\left(Q/\Lambda_{b}\right)^{3}$ (N3LO). The non-zero contributions read
$\displaystyle V^{(3)}_{2\pi}$
$\displaystyle=V_{C}+\left(\bm{\tau}_{1}\cdot\bm{\tau}_{2}\right)\left(\bm{\sigma}_{1}\cdot\bm{q}\bm{\sigma}_{2}\cdot\bm{q}\right)W_{T},$
(38) $\displaystyle V_{C}$ $\displaystyle=-\frac{3g^{2}_{A}}{16\pi
f^{4}_{\pi}}\Big{[}2m^{2}_{\pi}(2c_{1}-c_{3})-q^{2}c_{3}\Big{]}\tilde{w}^{2}A(q),$
(39) $\displaystyle W_{T}$
$\displaystyle=-\frac{1}{q^{2}}W_{S}=-\frac{g^{2}_{A}A(q)}{32\pi
f^{4}_{\pi}}c_{4}w^{2},$ (40)
with
$A(q)=\frac{1}{2q}\arctan\frac{q}{2m_{\pi}},\quad\tilde{w}=\sqrt{2m^{2}_{\pi}+q^{2}}.$
(42)
For the $\pi N$ LECs $c_{1},c_{3},c_{4}$, appearing in $V^{(3)}_{2\pi}$, we employ numerical values determined in a Roy-Steiner analysis at NLO: $c_{1}=-0.74$ GeV$^{-1}$, $c_{3}=-3.61$ GeV$^{-1}$ and $c_{4}=2.44$ GeV$^{-1}$ [54].
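Continuing the previous sketch, the sub-leading functions of Eqs. 38–42 can be evaluated with the same constants and the Roy-Steiner $\pi N$ LECs converted to MeV$^{-1}$ (again, all names are ours):

```python
import numpy as np

GA, FPI, MPI = 1.29, 92.1, 138.039           # same constants as above
C1, C3, C4 = -0.74e-3, -3.61e-3, 2.44e-3     # pi-N LECs converted to MeV^-1

def A_loop(q):
    """Loop function A(q) of Eq. (42); q > 0 assumed."""
    return np.arctan(q / (2.0 * MPI)) / (2.0 * q)

def V_C_sub(q):
    """Central function of the sub-leading TPE, Eq. (39)."""
    wt2 = 2.0 * MPI ** 2 + q ** 2            # tilde-w squared
    return -(3.0 * GA ** 2 / (16.0 * np.pi * FPI ** 4)) \
        * (2.0 * MPI ** 2 * (2.0 * C1 - C3) - q ** 2 * C3) * wt2 * A_loop(q)

def W_T_sub(q):
    """Tensor function of the sub-leading TPE, Eq. (40)."""
    w2 = 4.0 * MPI ** 2 + q ** 2             # w squared
    return -(GA ** 2 * A_loop(q) / (32.0 * np.pi * FPI ** 4)) * C4 * w2
```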
The potential contributions at each order in the channels where OPE is treated
non-perturbatively are listed in Table 3. We denote counterterms in coupled
channels by a $2\times 2$ matrix representing $\ell^{\prime}=j\mp 1$ (rows)
and $\ell=j\mp 1$ (columns). Table 3 expands upon Table I in Ref. [30] to also
explicitly show the perturbative corrections to LECs present at each order.
Table 4 summarizes the number of LECs present at each order, excluding the
three $\pi N$ LECs at N3LO from the total number.
Table 3: Potential contributions at each chiral order in the channels where OPE is treated non-perturbatively. This table complements the information in Table 1. Order | Pion contribution | Contact terms
---|---|---
LO | $V^{(0)}_{1\pi}$ | $V^{(0)}_{\mathrm{ct}}:$
| | $C^{(0)}_{{}^{1}S_{0}}$, $\begin{pmatrix}C^{(0)}_{{}^{3}S_{1}}&0\\\ 0&0\end{pmatrix}$, $D^{(0)}_{{}^{3}P_{0}}p^{\prime}p$, $\begin{pmatrix}D^{(0)}_{{}^{3}P_{2}}p^{\prime}p&0\\\ 0&0\end{pmatrix}$
NLO | - | $V^{(1)}_{\mathrm{ct}}$:
| | $D^{(0)}_{{}^{1}S_{0}}(p^{\prime 2}+p^{2})$, $C^{(1)}_{{}^{1}S_{0}}$
N2LO | $V_{2\pi}^{(2)}$ | $V^{(2)}_{\mathrm{ct}}$:
| | $E^{(0)}_{{}^{1}S_{0}}p^{\prime 2}p^{2}$, $D^{(1)}_{{}^{1}S_{0}}(p^{\prime 2}+p^{2})$, $C^{(2)}_{{}^{1}S_{0}}$,
| | $\begin{pmatrix}D^{(0)}_{{}^{3}S_{1}}(p^{\prime 2}+p^{2})&D^{(0)}_{SD}p^{2}\\\ D^{(0)}_{SD}p^{\prime 2}&0\end{pmatrix}$, $\begin{pmatrix}C^{(1)}_{{}^{3}S_{1}}&0\\\ 0&0\end{pmatrix}$,
| | $E^{(0)}_{{}^{3}P_{0}}p^{\prime}p(p^{\prime 2}+p^{2})$ , $D^{(1)}_{{}^{3}P_{0}}p^{\prime}p$,
| | $p^{\prime}p\begin{pmatrix}E^{(0)}_{{}^{3}P_{2}}(p^{\prime 2}+p^{2})&E^{(0)}_{PF}p^{2}\\\ E^{(0)}_{PF}p^{\prime 2}&0\end{pmatrix}$, $\begin{pmatrix}D^{(1)}_{{}^{3}P_{2}}p^{\prime}p&0\\\ 0&0\end{pmatrix}$,
| | $D^{(0)}_{{}^{1}P_{1}}p^{\prime}p$, $D^{(0)}_{{}^{3}P_{1}}p^{\prime}p$
N3LO | $V_{2\pi}^{(3)}$ (includes | $V^{(3)}_{\mathrm{ct}}$:
| $\pi N$ LECs: $c_{1},c_{3},c_{4}$) | $F^{(0)}_{{}^{1}S_{0}}p^{\prime 2}p^{2}(p^{\prime 2}+p^{2})$, $E^{(1)}_{{}^{1}S_{0}}p^{\prime 2}p^{2}$, $D^{(2)}_{{}^{1}S_{0}}(p^{\prime 2}+p^{2})$, $C^{(3)}_{{}^{1}S_{0}}$,
| | $\begin{pmatrix}D^{(1)}_{{}^{3}S_{1}}(p^{\prime 2}+p^{2})&D^{(1)}_{SD}p^{2}\\\ D^{(1)}_{SD}p^{\prime 2}&0\end{pmatrix}$, $\begin{pmatrix}C^{(2)}_{{}^{3}S_{1}}&0\\\ 0&0\end{pmatrix}$,
| | $E^{(1)}_{{}^{3}P_{0}}p^{\prime}p(p^{\prime 2}+p^{2})$, $D^{(2)}_{{}^{3}P_{0}}p^{\prime}p$,
| | $p^{\prime}p\begin{pmatrix}E^{(1)}_{{}^{3}P_{2}}(p^{\prime 2}+p^{2})&E^{(1)}_{PF}p^{2}\\\ E^{(1)}_{PF}p^{\prime 2}&0\end{pmatrix}$, $\begin{pmatrix}D^{(2)}_{{}^{3}P_{2}}p^{\prime}p&0\\\ 0&0\end{pmatrix}$,
| | $D^{(1)}_{{}^{1}P_{1}}p^{\prime}p$, $D^{(1)}_{{}^{3}P_{1}}p^{\prime}p$
Table 4: The number of LECs at each order in the Long and Yang PC. Chiral order | New LECs | Pert. correction | Total up to order
---|---|---|---
LO | 4 | – | 4
NLO | 1 | 1 | 6
N2LO | 8 | 5 | 19
N3LO | 1 (+3; the sub-leading $\pi N$ LECs $c_{1},c_{3},c_{4}$ are excluded from the total in the last column) | 13 | 33
## Appendix B Numerical implementation of distorted-wave perturbation theory
This appendix gives some more details regarding the implementation of the
equations for higher-order corrections to the scattering amplitude in Eqs. 11,
12 and 13. Since all operator products reduce to the form in Eq. 19, the
implementation can be done in complete analogy with the solution of the
partial-wave Lippmann-Schwinger equation using Gauss-Legendre quadrature [73,
74].
In this appendix we suppress the conserved quantum numbers $s$ and $j$, and
write the resolution of identity in the partial wave basis as
$\mathds{1}=\sum_{\ell}\int_{0}^{\infty}dk\ k^{2}\ket{k,\ell}\bra{k,\ell}.$
(43)
Furthermore, for a stationary proton (mass $m_{p}$) and an incoming neutron
(mass $m_{n}$) with kinetic energy $T_{\mathrm{lab}}$ in the laboratory frame
of reference, the modulus of the c.m. momentum, $p_{0}$, is given by
$p_{0}^{2}=\frac{m_{p}^{2}T_{\mathrm{lab}}(2m_{n}+T_{\mathrm{lab}})}{(m_{n}+m_{p})^{2}+2m_{p}T_{\mathrm{lab}}}.$
(44)
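A direct transcription of Eq. 44 (the function name and the mass values are our own assumptions):

```python
import numpy as np

MP, MN_NEUTRON = 938.272, 939.565  # assumed proton and neutron masses in MeV

def p0_from_tlab(tlab):
    """On-shell c.m. momentum (MeV) for a neutron beam on a stationary
    proton, Eq. (44); tlab is the laboratory kinetic energy in MeV."""
    num = MP ** 2 * tlab * (2.0 * MN_NEUTRON + tlab)
    den = (MN_NEUTRON + MP) ** 2 + 2.0 * MP * tlab
    return np.sqrt(num / den)
```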
By inserting the resolution of identity in Eq. 19 and discretizing the
integral using Gauss-Legendre quadrature with momentum points and weights,
$\\{k_{i},w_{i}\\}_{i=1}^{N}$, we obtain
$\displaystyle\braket{p^{\prime},\ell^{\prime}}{A_{1}G^{+}_{0}A_{2}}{p,\ell}$
$\displaystyle=\sum_{\ell^{\prime\prime},\ell^{\prime\prime\prime}}\int_{0}^{\infty}dk_{1}\
k_{1}^{2}\int_{0}^{\infty}dk_{2}\
k^{2}_{2}\braket{p^{\prime},\ell^{\prime}}{A_{1}}{k_{1},\ell^{\prime\prime}}\braket{k_{1},\ell^{\prime\prime}}{G^{+}_{0}}{k_{2},\ell^{\prime\prime\prime}}\braket{k_{2},\ell^{\prime\prime\prime}}{A_{2}}{p,\ell}=$
$\displaystyle=\sum_{\ell^{\prime\prime}}\int_{0}^{\infty}dk_{1}\
k_{1}^{2}\braket{p^{\prime},\ell^{\prime}}{A_{1}}{k_{1},\ell^{\prime\prime}}\frac{m_{N}}{p_{0}^{2}-k_{1}^{2}+i\epsilon}\braket{k_{1},\ell^{\prime\prime}}{A_{2}}{p,\ell}=$
(45)
$\displaystyle=\sum_{\ell^{\prime\prime}}\sum_{i=1}^{N}k^{2}_{i}w_{i}\braket{p^{\prime},\ell^{\prime}}{A_{1}}{k_{i},\ell^{\prime\prime}}\frac{m_{N}}{p_{0}^{2}-k_{i}^{2}+i\epsilon}\braket{k_{i},\ell^{\prime\prime}}{A_{2}}{p,\ell}.$
(46)
Here, $p_{0}$ denotes the on-shell momentum for a given scattering energy
$T_{\mathrm{lab}}$ given by Eq. 44. Doing some manipulations and converting
the $+i\epsilon$ prescription to a principal value we obtain [65, 74]
$\displaystyle\braket{p^{\prime},\ell^{\prime}}{A_{1}G^{+}_{0}A_{2}}{p,\ell}$
$\displaystyle=\sum_{l^{\prime\prime}}\sum_{i=1}^{N}k_{i}^{2}w_{i}\braket{p^{\prime},\ell^{\prime}}{A_{1}}{k_{i},\ell^{\prime\prime}}\frac{m_{N}}{p_{0}^{2}-k_{i}^{2}}\braket{k_{i},\ell^{\prime\prime}}{A_{2}}{p,\ell}$
$\displaystyle-\braket{p^{\prime},\ell^{\prime}}{A_{1}}{p_{0},\ell^{\prime\prime}}\braket{p_{0},\ell^{\prime\prime}}{A_{2}}{p,\ell}\left[m_{N}p_{0}^{2}\sum_{i=1}^{N}\frac{w_{i}}{p_{0}^{2}-k_{i}^{2}}+\frac{i\pi
m_{N}p_{0}}{2}-m_{N}p_{0}\
\mathrm{arctanh}\left(\frac{p_{0}}{\tilde{\Lambda}}\right)\right].$ (47)
All potentials are regulated using Eq. 16 and at sufficiently high momentum,
$\tilde{\Lambda}$, all potential matrix elements are essentially zero. This
means that the integral in Eq. 46 is well represented by the discretized sum
where the momentum points and weights $\\{k_{i},w_{i}\\}_{i=1}^{N}$ are chosen
using Gauss-Legendre quadrature in the interval $[0,\tilde{\Lambda}]$. The
last term in the bracket in Eq. 47 implements the principal-value integral on
the interval $[\tilde{\Lambda},\infty)$ analytically, since the quadrature grid only covers $[0,\tilde{\Lambda}]$ [75]. It is possible to have a
grid that extends to numerical infinity, but this generally leads to slower
convergence with $N$. For the calculations in this study, we employ
$\tilde{\Lambda}=\Lambda+1500$ MeV, for both $\Lambda=500$ MeV and
$\Lambda=2500$ MeV, which we find sufficient for numerical convergence.
Equation 47 can be expressed in a simpler form using matrix products, which
speeds up the computations. We define the propagator matrix as
$[G^{+}_{0}]_{ij}=\delta_{ij}F_{i},\quad
F_{i}=\begin{cases}\frac{m_{N}}{p_{0}^{2}-k_{i}^{2}},\quad i=1,...,N\\\
-f(p_{0}),\quad i=N+1,\end{cases}$ (48)
where
$f(p_{0})=m_{N}p_{0}^{2}\sum_{i=1}^{N}\frac{w_{i}}{p_{0}^{2}-k_{i}^{2}}+\frac{i\pi
m_{N}p_{0}}{2}-m_{N}p_{0}\
\mathrm{arctanh}\left(\frac{p_{0}}{\tilde{\Lambda}}\right).$ (49)
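A sketch of the propagator diagonal of Eqs. 48–49 (names are ours; the quadrature nodes must avoid the on-shell point $p_{0}$, and $p_{0}<\tilde{\Lambda}$ is required for the arctanh term):

```python
import numpy as np

def propagator_diag(k, w, p0, lam_tilde, mN=938.918):
    """Diagonal of the (N+1) x (N+1) propagator matrix, Eqs. (48)-(49).

    k, w : Gauss-Legendre nodes and weights on [0, lam_tilde]
    """
    F = np.empty(len(k) + 1, dtype=complex)
    F[:-1] = mN / (p0 ** 2 - k ** 2)
    f_p0 = (mN * p0 ** 2 * np.sum(w / (p0 ** 2 - k ** 2))
            + 0.5j * np.pi * mN * p0
            - mN * p0 * np.arctanh(p0 / lam_tilde))
    F[-1] = -f_p0
    return F
```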
Similarly, we make the following definitions of matrices for $A_{\mu}$,
$\mu=1,2$,
$\displaystyle[A_{\mu}^{\ell^{\prime}\ell}]_{i,j}$
$\displaystyle=k_{i}\sqrt{w_{i}}\braket{k_{i},\ell^{\prime}}{A_{\mu}}{k_{j},\ell}k_{j}\sqrt{w_{j}},\quad
i,j=1,\dots,N$ (50) $\displaystyle[A_{\mu}^{\ell^{\prime}\ell}]_{i,j=N+1}$
$\displaystyle=k_{i}\sqrt{w_{i}}\braket{k_{i},\ell^{\prime}}{A_{\mu}}{p_{0},\ell},\quad
i=1,\dots,N$ (51) $\displaystyle[A_{\mu}^{\ell^{\prime}\ell}]_{i=N+1,j}$
$\displaystyle=\braket{p_{0},\ell^{\prime}}{A_{\mu}}{k_{j},\ell}k_{j}\sqrt{w_{j}},\quad
j=1,\dots,N$ (52) $\displaystyle[A_{\mu}^{\ell^{\prime}\ell}]_{i=N+1,j=N+1}$
$\displaystyle=\braket{p_{0},\ell^{\prime}}{A_{\mu}}{p_{0},\ell},$ (53)
effectively including an extra momentum-grid point $k_{N+1}\equiv p_{0}$ with
weight $\sqrt{w_{N+1}}=p_{0}^{-1}$. Using these definitions and defining
$D=A_{1}G^{+}_{0}A_{2}$, Eq. 47 can be written using $(N+1)\times(N+1)$ matrix
products
$[D^{\ell^{\prime}\ell}]_{ij}=\sum_{\ell^{\prime\prime}}\sum_{n,m=1}^{N+1}[A_{1}^{\ell^{\prime}\ell^{\prime\prime}}]_{in}[G^{+}_{0}]_{nm}[A_{2}^{\ell^{\prime\prime}\ell}]_{mj},\quad
i,j=1,\dots,N+1.$ (54)
For coupled channels, we further eliminate the sum over $\ell^{\prime\prime}$
in Eq. 54 by defining $(2N+2)\times(2N+2)$ block-matrices, which for $A_{1}$
reads
$[\bm{A}_{1}]=\begin{pmatrix}[A_{1}^{--}]&[A_{1}^{-+}]\\\
[A_{1}^{+-}]&[A_{1}^{++}]\end{pmatrix}.$ (55)
The $\pm$ notation represents $\ell=j\pm 1$. The propagator is diagonal in
$\ell$ and can be written as
$[\bm{G}^{+}_{0}]=\begin{pmatrix}[G^{+}_{0}]&0\\\
0&[G^{+}_{0}]\end{pmatrix}.$
We can finally write Eq. 54 as
$[\bm{D}]=[\bm{A}_{1}][\bm{G}^{+}_{0}][\bm{A}_{2}].$ (57)
Note that the simplification of Eq. 47 to an ordinary matrix product in Eq. 57
is only possible due to the specific structure of having $G^{+}_{0}$ in
between $A_{1}$ and $A_{2}$. This structure gives rise to the last “on-shell”
term in (47) that can be incorporated by adding the grid point
$k_{N+1}=p_{0}$, which then extends the sum in Eq. 47 to $N+1$. Equation 54
can now be used recursively to compute longer products such as
$\braket{p^{\prime},\ell^{\prime}}{A_{1}G^{+}_{0}A_{2}G^{+}_{0}A_{3}}{p,\ell}$.
As an example, the first-order correction to the $T$-matrix in Eq. 11 can be
expressed as the matrix equation
$[\bm{T}^{(1)}]=\left(\mathds{1}+[\bm{T}^{(0)}][\bm{G}^{+}_{0}]\right)[\bm{V}^{(1)}]\left(\mathds{1}+[\bm{G}^{+}_{0}][\bm{T}^{(0)}]\right).$
(58)
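Putting Eqs. 50–58 together, a minimal sketch of the on-shell embedding and the first-order correction for an uncoupled channel (the kernel callable and function names are assumptions; note that the combined factor for the extra point is $k_{N+1}\sqrt{w_{N+1}}=p_{0}\cdot p_{0}^{-1}=1$):

```python
import numpy as np

def embed_onshell(V, k, w, p0):
    """(N+1) x (N+1) matrix of Eqs. (50)-(53) for an operator with a
    vectorized momentum-space kernel V(kp, kk) = <kp|A|kk>; the last
    row/column holds the on-shell point k_{N+1} = p0 with scale factor 1."""
    kk = np.append(k, p0)
    scale = np.append(k * np.sqrt(w), 1.0)
    Kp, K = np.meshgrid(kk, kk, indexing="ij")
    return scale[:, None] * V(Kp, K) * scale[None, :]

def t1_correction(T0, V1, G0_diag):
    """First-order T-matrix correction, Eq. (58), with all factors given as
    (N+1) x (N+1) matrices and G0_diag from Eqs. (48)-(49)."""
    one = np.eye(len(G0_diag))
    G0 = np.diag(G0_diag)
    return (one + T0 @ G0) @ V1 @ (one + G0 @ T0)
```

Applying `t1_correction`-style products recursively reproduces the longer chains such as $A_{1}G^{+}_{0}A_{2}G^{+}_{0}A_{3}$ mentioned above.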
## Appendix C Perturbative phase shifts
In this appendix we discuss how to obtain phase shifts given perturbative
corrections to the $T$-matrix computed from Eqs. 11, 12 and 13. We will follow
the method outlined in Ref. [32] and add some additional details. For
uncoupled scattering channels, the $1\times 1$ $S$-matrix can be parameterized
by
$S=\exp\left(2i\delta\right),$ (59)
where $\delta$ is the phase shift. We expand both the phase shifts and the on-
shell $S$-matrix with the contributions at each chiral order obtaining
$\displaystyle S^{(0)}+S^{(1)}+S^{(2)}+S^{(3)}+\mathcal{O}(Q^{4})=$ (60)
$\displaystyle\exp\left(2i\left[\delta^{(0)}+\delta^{(1)}+\delta^{(2)}+\delta^{(3)}+\mathcal{O}(Q^{4})\right]\right).$
(61)
Performing a Taylor expansion of both sides, and matching chiral orders, gives
$\displaystyle S^{(0)}$ $\displaystyle=\exp\left(2i\delta^{(0)}\right)$ (62)
$\displaystyle S^{(1)}$
$\displaystyle=2i\delta^{(1)}\exp\left(2i\delta^{(0)}\right)$ (63)
$\displaystyle S^{(2)}$
$\displaystyle=\left[2i\delta^{(2)}-2\left(\delta^{(1)}\right)^{2}\right]\exp\left(2i\delta^{(0)}\right)$
(64) $\displaystyle S^{(3)}$
$\displaystyle=\left[2i\delta^{(3)}-4\delta^{(1)}\delta^{(2)}-\frac{4i}{3}\left(\delta^{(1)}\right)^{3}\right]\exp\left(2i\delta^{(0)}\right)$
(65)
From these equations, we straightforwardly obtain explicit expressions for the
LO phase shift $\delta^{(0)}$ (trivial), and all corrections
$\\{\delta^{(\nu)}\\}_{\nu>0}$. We note that all corrections are real valued.
To obtain the total phase shift at, e.g., N2LO, one has to sum
$\delta^{(0)}+\delta^{(1)}+\delta^{(2)}$. The $S$-matrix corrections are
obtained from the $T$-matrix corrections as
$S^{(\nu)}_{\ell^{\prime}\ell}=-i\pi
m_{N}p_{0}T^{(\nu)}_{\ell^{\prime}\ell},\quad\nu>0,$ (66)
for a given on-shell momentum, $p_{0}$.
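Inverting Eqs. 62–65 order by order gives closed-form expressions for the corrections; a minimal sketch (the function name is ours, and the imaginary parts of $\delta^{(\nu)}$ for $\nu>0$ should vanish to numerical precision, consistent with the corrections being real valued):

```python
import numpy as np

def uncoupled_phase_shifts(S0, S1, S2, S3):
    """Order-by-order phase-shift corrections for an uncoupled channel,
    obtained by inverting Eqs. (62)-(65). Returns (d0, d1, d2, d3)."""
    d0 = np.log(S0) / 2j
    e = np.exp(-2j * d0)                         # equals 1 / S0
    d1 = S1 * e / 2j
    d2 = (S2 * e + 2.0 * d1 ** 2) / 2j
    d3 = (S3 * e + 4.0 * d1 * d2 + (4j / 3.0) * d1 ** 3) / 2j
    return d0, d1, d2, d3
```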
For coupled channels we use the Stapp-parametrization [76] for the on-shell
$2\times 2$ $S$-matrix
$S=\begin{pmatrix}\cos(2\epsilon)e^{2i\delta_{1}}&i\sin(2\epsilon)e^{i(\delta_{1}+\delta_{2})}\\\
i\sin(2\epsilon)e^{i(\delta_{1}+\delta_{2})}&\cos(2\epsilon)e^{2i\delta_{2}}\end{pmatrix},$
(67)
where the three phase shifts $\delta_{1}$, $\delta_{2}$ and $\epsilon$
parameterize the amplitude for a given channel. We now proceed completely analogously to the uncoupled case, dividing the $S$-matrix and phase shifts into
chiral orders as
$S=\sum_{\nu=0}^{\infty}S^{(\nu)},\quad\delta_{1}=\sum_{\nu=0}^{\infty}\delta^{(\nu)}_{1},\quad\delta_{2}=\sum_{\nu=0}^{\infty}\delta^{(\nu)}_{2},\quad\epsilon=\sum_{\nu=0}^{\infty}\epsilon^{(\nu)}.$
(68)
For convenience, we define the functions
$\displaystyle f_{11}(\epsilon,\delta_{1})$
$\displaystyle=\cos(2\epsilon)e^{2i\delta_{1}},$ (69) $\displaystyle
f_{12}(\epsilon,\delta_{1},\delta_{2})$
$\displaystyle=i\sin(2\epsilon)e^{i(\delta_{1}+\delta_{2})},$ (70)
$\displaystyle f_{22}(\epsilon,\delta_{2})$
$\displaystyle=\cos(2\epsilon)e^{2i\delta_{2}},$ (71)
which are the constituents of the matrix in Eq. 67. Inserting the expansions
in Eq. 68 into Eq. 67, Taylor expanding and matching chiral orders, gives the
perturbative corrections to the phase shifts. Expanding the upper left matrix
element of $S$ gives
$\displaystyle S_{11}^{(0)}$ $\displaystyle=f_{11}$ (72) $\displaystyle
S_{11}^{(1)}$
$\displaystyle=\partial_{\epsilon}f_{11}\times\epsilon^{(1)}+\partial_{\delta}f_{11}\times\delta^{(1)}$
(73) $\displaystyle S_{11}^{(2)}$
$\displaystyle=\partial_{\epsilon}f_{11}\times\epsilon^{(2)}+\partial_{\delta}f_{11}\times\delta^{(2)}$
$\displaystyle+g^{(2)}_{11}(\epsilon^{(1)},\delta^{(1)})$ (74) $\displaystyle
S_{11}^{(3)}$
$\displaystyle=\partial_{\epsilon}f_{11}\times\epsilon^{(3)}+\partial_{\delta}f_{11}\times\delta^{(3)}$
$\displaystyle+g^{(3)}_{11}(\epsilon^{(1)},\delta^{(1)},\epsilon^{(2)},\delta^{(2)})$
(75)
where the functions $g^{(\nu)}_{11}$ are introduced to capture all non-linear
terms in the expansion
$\displaystyle g^{(2)}_{11}(\epsilon^{(1)},\delta^{(1)})$
$\displaystyle=\frac{1}{2}\partial^{2}_{\epsilon}f_{11}\times\left(\epsilon^{(1)}\right)^{2}+\frac{1}{2}\partial^{2}_{\delta}f_{11}\times\left(\delta^{(1)}\right)^{2}+\partial_{\epsilon}\partial_{\delta}f_{11}\times\delta^{(1)}\epsilon^{(1)}$
(76) $\displaystyle
g^{(3)}_{11}(\epsilon^{(1)},\delta^{(1)},\epsilon^{(2)},\delta^{(2)})$
$\displaystyle=\partial^{2}_{\epsilon}f_{11}\left(\epsilon^{(1)}\epsilon^{(2)}\right)+\partial_{\epsilon}\partial_{\delta}f_{11}\left(\epsilon^{(1)}\delta^{(2)}+\epsilon^{(2)}\delta^{(1)}\right)$
$\displaystyle+\partial^{2}_{\delta}f_{11}\left(\delta^{(1)}\delta^{(2)}\right)+\frac{1}{6}\partial^{3}_{\epsilon}f_{11}\left(\epsilon^{(1)}\right)^{3}+\frac{1}{2}\partial_{\delta}\partial^{2}_{\epsilon}f_{11}\left(\epsilon^{(1)}\right)^{2}\delta^{(1)}$
$\displaystyle+\frac{1}{2}\partial^{2}_{\delta}\partial_{\epsilon}f_{11}\epsilon^{(1)}\left(\delta^{(1)}\right)^{2}+\frac{1}{6}\partial^{3}_{\delta}f_{11}\left(\delta^{(1)}\right)^{3}.$
(77)
Since $f_{11}$ depends only on $\epsilon$ and $\delta_{1}$, the index on $\delta_{1}$ is suppressed in these expressions. The function $f_{11}$ and all its derivatives are evaluated at $(\epsilon^{(0)},\delta^{(0)}_{1})$.
For the lower right matrix element described by $f_{22}$ the expressions are
completely analogous to Eqs. 76 and 77, but with $\delta_{2}$ instead of
$\delta_{1}$. For the off-diagonal elements we get
$\displaystyle S_{12}^{(0)}$ $\displaystyle=f_{12}$ (78) $\displaystyle
S_{12}^{(1)}$
$\displaystyle=\partial_{\epsilon}f_{12}\times\epsilon^{(1)}+\partial_{\delta_{1}}f_{12}\times\delta^{(1)}_{1}+\partial_{\delta_{2}}f_{12}\times\delta^{(1)}_{2}$
(79) $\displaystyle S_{12}^{(2)}$
$\displaystyle=\partial_{\epsilon}f_{12}\times\epsilon^{(2)}+\partial_{\delta_{1}}f_{12}\times\delta^{(2)}_{1}+\partial_{\delta_{2}}f_{12}\times\delta^{(2)}_{2}+g^{(2)}_{12}(\epsilon^{(1)},\delta^{(1)}_{1},\delta^{(1)}_{2})$
(80) $\displaystyle S_{12}^{(3)}$
$\displaystyle=\partial_{\epsilon}f_{12}\times\epsilon^{(3)}+\partial_{\delta_{1}}f_{12}\times\delta^{(3)}_{1}+\partial_{\delta_{2}}f_{12}\times\delta^{(3)}_{2}+g^{(3)}_{12}(\epsilon^{(1)},\delta^{(1)}_{1},\delta^{(1)}_{2},\epsilon^{(2)},\delta^{(2)}_{1},\delta^{(2)}_{2}),$
(81)
where the functions $g^{(\nu)}_{12}$ capture the non-linear terms
$\displaystyle g^{(2)}_{12}(\epsilon^{(1)},\delta^{(1)}_{1},\delta^{(1)}_{2})$
$\displaystyle=\frac{1}{2}\partial^{2}_{\epsilon}f_{12}\times\left(\epsilon^{(1)}\right)^{2}+\frac{1}{2}\partial^{2}_{\delta_{1}}f_{12}\times\left(\delta^{(1)}_{1}\right)^{2}+\frac{1}{2}\partial^{2}_{\delta_{2}}f_{12}\times\left(\delta^{(1)}_{2}\right)^{2}$
$\displaystyle+\partial_{\epsilon}\partial_{\delta_{1}}f_{12}\epsilon^{(1)}\delta^{(1)}_{1}+\partial_{\epsilon}\partial_{\delta_{2}}f_{12}\epsilon^{(1)}\delta^{(1)}_{2}+\partial_{\delta_{1}}\partial_{\delta_{2}}f_{12}\delta^{(1)}_{1}\delta^{(1)}_{2}$
(82) $\displaystyle
g^{(3)}_{12}(\epsilon^{(1)},\delta^{(1)}_{1},\delta^{(1)}_{2},\epsilon^{(2)},\delta^{(2)}_{1},\delta^{(2)}_{2})$
$\displaystyle=\partial^{2}_{\epsilon}f_{12}\epsilon^{(1)}\epsilon^{(2)}+\partial^{2}_{\delta_{1}}f_{12}\delta^{(1)}_{1}\delta^{(2)}_{1}+\partial^{2}_{\delta_{2}}f_{12}\delta^{(1)}_{2}\delta^{(2)}_{2}$
$\displaystyle+\partial_{\epsilon}\partial_{\delta_{1}}f_{12}\left(\epsilon^{(1)}\delta^{(2)}_{1}+\epsilon^{(2)}\delta^{(1)}_{1}\right)+\partial_{\epsilon}\partial_{\delta_{2}}f_{12}\left(\epsilon^{(1)}\delta^{(2)}_{2}+\epsilon^{(2)}\delta^{(1)}_{2}\right)$
$\displaystyle+\partial_{\delta_{1}}\partial_{\delta_{2}}f_{12}\left(\delta^{(1)}_{1}\delta^{(2)}_{2}+\delta^{(2)}_{1}\delta^{(1)}_{2}\right)$
$\displaystyle+\frac{1}{2}\partial^{2}_{\epsilon}\partial_{\delta_{1}}f_{12}\left(\epsilon^{(1)}\right)^{2}\delta^{(1)}_{1}+\frac{1}{2}\partial^{2}_{\epsilon}\partial_{\delta_{2}}f_{12}\left(\epsilon^{(1)}\right)^{2}\delta^{(1)}_{2}$
$\displaystyle+\frac{1}{2}\partial_{\epsilon}\partial^{2}_{\delta_{1}}f_{12}\epsilon^{(1)}\left(\delta^{(1)}_{1}\right)^{2}+\frac{1}{2}\partial_{\delta_{2}}\partial^{2}_{\delta_{1}}f_{12}\delta^{(1)}_{2}\left(\delta^{(1)}_{1}\right)^{2}$
$\displaystyle+\frac{1}{2}\partial_{\epsilon}\partial^{2}_{\delta_{2}}f_{12}\epsilon^{(1)}\left(\delta^{(1)}_{2}\right)^{2}+\frac{1}{2}\partial_{\delta_{1}}\partial^{2}_{\delta_{2}}f_{12}\delta^{(1)}_{1}\left(\delta^{(1)}_{2}\right)^{2}$
$\displaystyle+\partial_{\epsilon}\partial_{\delta_{1}}\partial_{\delta_{2}}f_{12}\epsilon^{(1)}\delta^{(1)}_{1}\delta^{(1)}_{2}$
$\displaystyle+\frac{1}{6}\partial^{3}_{\epsilon}f_{12}\left(\epsilon^{(1)}\right)^{3}+\frac{1}{6}\partial^{3}_{\delta_{1}}f_{12}\left(\delta^{(1)}_{1}\right)^{3}+\frac{1}{6}\partial^{3}_{\delta_{2}}f_{12}\left(\delta^{(1)}_{2}\right)^{3}.$
(83)
The function $f_{12}$ and all its derivatives are evaluated at
$(\epsilon^{(0)},\delta^{(0)}_{1},\delta^{(0)}_{2})$. Note that all the
functions $g^{(\nu)}_{**}$ vanish if the NLO corrections
$(\delta^{(1)}_{1},\delta^{(1)}_{2},\epsilon^{(1)})$ are zero. This is the
case for all coupled channels where OPE is treated non-perturbatively as seen
in Table 3. Furthermore, in all channels where OPE is treated perturbatively
the LO phase shifts are all zero, which makes many of the terms in the
expressions for $g^{(\nu)}_{**}$ vanish due to vanishing derivatives. Thus, in
both the perturbative and non-perturbative cases, Eqs. 76, 77, 82 and 83 can
be simplified substantially. The phase shift corrections $(\epsilon^{(\nu)},\delta^{(\nu)}_{1},\delta^{(\nu)}_{2})$ for $\nu=1,2,3$ are finally obtained by solving a system of linear equations
$\displaystyle\mathrm{\textbf{NLO}}:$
$\displaystyle\quad\begin{pmatrix}S_{11}^{(1)}\\\ S_{12}^{(1)}\\\
S_{22}^{(1)}\end{pmatrix}$
$\displaystyle=\begin{pmatrix}\partial_{\epsilon}f_{11}&\partial_{\delta_{1}}f_{11}&0\\\
\partial_{\epsilon}f_{12}&\partial_{\delta_{1}}f_{12}&\partial_{\delta_{2}}f_{12}\\\
\partial_{\epsilon}f_{22}&0&\partial_{\delta_{2}}f_{22}\end{pmatrix}\begin{pmatrix}\epsilon^{(1)}\\\
\delta^{(1)}_{1}\\\ \delta^{(1)}_{2}\end{pmatrix}$ (84)
$\displaystyle\mathrm{\textbf{N${}^{2}$LO}}:$
$\displaystyle\quad\begin{pmatrix}S_{11}^{(2)}-g^{(2)}_{11}\\\
S_{12}^{(2)}-g^{(2)}_{12}\\\ S_{22}^{(2)}-g^{(2)}_{22}\end{pmatrix}$
$\displaystyle=\begin{pmatrix}\partial_{\epsilon}f_{11}&\partial_{\delta_{1}}f_{11}&0\\\
\partial_{\epsilon}f_{12}&\partial_{\delta_{1}}f_{12}&\partial_{\delta_{2}}f_{12}\\\
\partial_{\epsilon}f_{22}&0&\partial_{\delta_{2}}f_{22}\end{pmatrix}\begin{pmatrix}\epsilon^{(2)}\\\
\delta^{(2)}_{1}\\\ \delta^{(2)}_{2}\end{pmatrix}$ (85)
$\displaystyle\mathrm{\textbf{N${}^{3}$LO}}:$
$\displaystyle\quad\begin{pmatrix}S_{11}^{(3)}-g^{(3)}_{11}\\\
S_{12}^{(3)}-g^{(3)}_{12}\\\ S_{22}^{(3)}-g^{(3)}_{22}\end{pmatrix}$
$\displaystyle=\begin{pmatrix}\partial_{\epsilon}f_{11}&\partial_{\delta_{1}}f_{11}&0\\\
\partial_{\epsilon}f_{12}&\partial_{\delta_{1}}f_{12}&\partial_{\delta_{2}}f_{12}\\\
\partial_{\epsilon}f_{22}&0&\partial_{\delta_{2}}f_{22}\end{pmatrix}\begin{pmatrix}\epsilon^{(3)}\\\
\delta^{(3)}_{1}\\\ \delta^{(3)}_{2}\end{pmatrix}.$ (86)
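A sketch of these linear solves, with the derivative matrix evaluated at the LO parameters (all names are ours; the right-hand side is the corresponding column of $S^{(\nu)}$ values with the $g^{(\nu)}$ terms subtracted for $\nu>1$):

```python
import numpy as np

def jacobian(eps0, d1_0, d2_0):
    """Matrix of first derivatives of f11, f12, f22 of Eqs. (69)-(71),
    evaluated at (eps^(0), delta1^(0), delta2^(0)); cf. Eqs. (84)-(86)."""
    e1, e2 = np.exp(2j * d1_0), np.exp(2j * d2_0)
    e12 = np.exp(1j * (d1_0 + d2_0))
    c, s = np.cos(2.0 * eps0), np.sin(2.0 * eps0)
    return np.array([
        [-2.0 * s * e1, 2j * c * e1, 0.0],   # d f11 / d(eps, d1, d2)
        [2j * c * e12, -s * e12, -s * e12],  # d f12 / d(eps, d1, d2)
        [-2.0 * s * e2, 0.0, 2j * c * e2],   # d f22 / d(eps, d1, d2)
    ])

def coupled_corrections(rhs, eps0, d1_0, d2_0):
    """Solve one of Eqs. (84)-(86) for (eps^(nu), delta1^(nu), delta2^(nu))."""
    return np.linalg.solve(jacobian(eps0, d1_0, d2_0), np.asarray(rhs, dtype=complex))
```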
# PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large
Language Models
Tao Fan1, 2, Yan Kang2, Weijing Chen2, Hanlin Gu2, Yuanfeng Song2,
Lixin Fan2, Kai Chen1, Qiang Yang1,2
1 Hong Kong University of Science and Technology, China
2 WeBank, China
Correspondence: <EMAIL_ADDRESS>
###### Abstract
In the context of real-world applications, leveraging large language models
(LLMs) for domain-specific tasks often faces two major challenges: domain-
specific knowledge privacy and constrained resources. To address these issues,
we propose PDSS, a privacy-preserving framework for step-by-step distillation
of LLMs. PDSS works on a server-client architecture, wherein the client transmits perturbed prompts to the server’s LLM for rationale generation. The generated rationales are then decoded by the client and used to enrich the training of a task-specific small language model (SLM) within a multi-task learning paradigm.
PDSS introduces two privacy protection strategies: the Exponential Mechanism
Strategy and the Encoder-Decoder Strategy, balancing prompt privacy and
rationale usability. Experiments demonstrate the effectiveness of PDSS in
various text generation tasks, enabling the training of task-specific SLM with
enhanced performance while prioritizing data privacy protection.
## 1 Introduction
Large Language Models (LLMs), boasting billions of parameters and remarkable
text generation abilities, have risen as a revolutionary force in artificial
intelligence. Prominent models, such as GPT-4 OpenAI (2023), LLaMA Touvron et al. (2023), and Qwen Bai et al. (2023), have garnered the attention of
researchers and practitioners alike, demonstrating unparalleled proficiency
across numerous tasks. Nevertheless, the sheer size of these models presents
significant obstacles for real-world deployment, particularly in environments
with limited resources. Meanwhile, as LLMs gain escalating popularity and
widespread utilization, privacy concerns have moved to the forefront,
especially when it comes to user data and model inference. In contrast, Small
Language Models (SLMs) often exhibit superior computational efficiency and
faster convergence rates, rendering them perfectly suited for real-time
applications or resource-constrained environments. Nonetheless, SLMs also
possess certain drawbacks stemming from their performance limitations. The
question then arises: How can we effectively combine the predictive prowess of
LLMs with the nimbleness of SLMs, all while adhering to privacy requirements?
To address these challenges, we introduce PDSS, a privacy-preserving framework
for step-by-step distillation of LLMs. In our envisioned setup, there’s a
high-powered server capable of deploying an LLM, paired with a client with more limited computational resources running an SLM. The challenge
lies in maintaining the privacy of client data while leveraging the server’s
LLM to aid in training the client’s SLM for text generation tasks, thereby
elevating its performance. PDSS aims to bridge this gap, enabling secure and
efficient knowledge transfer between LLM and SLM, and ultimately enhancing the
capabilities of the SLM without compromising privacy.
As illustrated in Figure 1, within our framework, the process works as
follows. Initially, the client transmits perturbed prompts to the server’s
LLM, which are protected by the PDSS prompt encoder module, thus ensuring
privacy protection. Subsequently, the server’s LLM generates perturbed
rationales from these prompts through the Chain of Thought (COT) approach Wei
et al. (2022) and relays them back to the client. Upon receiving these
perturbed rationales, the client’s rationales decoder module reconstructs them
into their original, aligned form corresponding to the raw prompt. Ultimately,
the client incorporates these rationales as supplementary and enriching
information for training its Task-Specific SLM within a multi-task learning
paradigm Wei et al. (2022); Hsieh et al. (2023); Zhang and Yang (2021). These
rationales justify the predicted labels and serve as insightful guidance for
training smaller and domain-specific models.
Within the PDSS framework, to achieve a balance between preserving the privacy
of user prompts and enhancing the usability of rationales, we introduce two
privacy protection strategies incorporated into the prompt encoder module
and the rationales decoder module: the Exponential Mechanism Strategy and the
Encoder-Decoder Strategy. In the Exponential Mechanism Strategy, we utilize an
exponential mechanism to obfuscate the prompts Tong et al. (2023), followed by
decoding the perturbed rationales through In-Context Learning (ICL) Dong et
al. (2022). In the Encoder-Decoder strategy, we utilize an Encoder-Decoder SLM
specifically designed to encode raw prompts into perturbed prompts and
subsequently decode perturbed rationales back into their original form. To
effectively train this unified Encoder-Decoder SLM, we utilize a multi-task
learning paradigm Zhang and Yang (2021), encompassing both the encoding and
decoding training processes.
Our contributions are summarized as follows:
* •
Privacy-Preserving Framework for LLM Distillation. We propose PDSS, a novel
framework that facilitates secure and efficient knowledge transfer from LLM to
SLM in resource-constrained environments while adhering to privacy
requirements. PDSS addresses the challenges posed by the massive size of LLMs
for real-world deployment and the privacy concerns surrounding user data. By
utilizing perturbed prompts and rationales, PDSS ensures data privacy while
leveraging the predictive prowess of LLMs to enhance the performance of SLMs.
* •
Innovative Privacy Protection Strategies. Within PDSS, we introduce two
privacy protection strategies: the Exponential Mechanism Strategy and the
Encoder-Decoder Strategy. The former utilizes an exponential mechanism to
obfuscate user prompts, while the latter employs a specialized Encoder-Decoder
SLM to encode and decode perturbed prompts and rationales. These strategies
effectively balance user privacy and the usability of rationales, allowing for
secure and enhanced training of the client’s SLM without compromising on
privacy concerns.
* •
Empirical Evaluation and Enhanced Performance of Task-Specific SLM. Through
experiments on various text generation tasks, PDSS demonstrates the
effectiveness of its framework in training task-specific SLMs with enhanced performance. By harnessing the rationales generated by the server-side LLM, PDSS provides valuable task-specific knowledge to SLMs, enabling them to
achieve significant improvements with the support of the LLM while
prioritizing data privacy protections.
Figure 1: Overview of our proposed PDSS workflow.
Figure 2: Privacy-Preserving Rationales Generation Example.
## 2 Related Work
### 2.1 Chain of Thought in Large Language Models
The Chain of Thought (COT) approach has recently garnered significant attention
in the realm of LLMs, thanks primarily to its remarkable ability to enhance
the reasoning capabilities of these models. This innovative concept was first
introduced by Wei et al. (2022). Their research demonstrated that by prompting
LLMs to produce a sequence of intermediary reasoning steps (rationales), the
models’ performance in handling intricate reasoning tasks could be notably
boosted. This groundbreaking study opened the door for further explorations
into COT. Since the introduction of COT, several studies have delved into its
extensions and variations. For example, Kojima et al. (2022) proposed the use of zero-shot COT, where the model is prompted to generate reasoning steps (rationales) without relying on prior examples. COT has also been applied to various domains, including arithmetic reasoning Cobbe et al. (2021) and commonsense reasoning Klein and Nabi (2020).
Nonetheless, despite the impressive feats achieved by LLMs, the adoption of
LLMs in domain-specific applications with constrained resources poses a
significant challenge Fan et al. (2023); Kang et al. (2023). Recent studies by Hsieh et al. (2023); Ho et al. (2022); Li et al. (2023) have capitalized on the
generated rationales as a form of insightful supervision to train smaller and
domain-specific models. However, previous studies have not addressed the
domain-specific data privacy issue that arises when LLMs and domain-specific
smaller models are deployed across different parties. In our work, we endeavor
to address this significant challenge.
### 2.2 Privacy Preserving LLM Inference
With the escalating popularity and widespread utilization of LLMs, privacy
concerns have taken center stage, particularly regarding user data and model
inference. Previous research efforts aimed at preserving privacy during LLM
inference have predominantly focused on several key techniques, including
differential privacy (DP) Dwork (2006), fully homomorphic encryption (FHE) Gentry (2009), and secure multi-party computation (MPC) Yao (1986) protocols.
Numerous studies have delved into the intricacies of LLM inference leveraging
DP techniques. Notably, methods like SANTEXT+ Yue et al. (2021), CUSTEXT+ Chen
et al. (2022), TextObfuscator Zhou et al. (2023) and InferDPT Tong et al.
(2023) have harnessed differential privacy to sequentially replace sensitive
words in the text with semantically similar alternatives from a predefined
word adjacency list.
FHE and MPC techniques have also garnered attention as viable methods for
ensuring privacy during LLM inference. For instance, CipherGPT Hou et al.
(2023) proposes a secure matrix multiplication and a novel protocol for
securely computing GELU within transformer architecture using FHE and MPC
protocols to facilitate secure two-party GPT inference. Likewise, Puma Dong et
al. (2023) has adopted FHE and MPC in its transformer architecture for secure
third-party LLM inference. While FHE and MPC can be utilized for privacy-
preserving text generation tasks, their practical applications remain limited
primarily due to significant computational and communication overheads.
The advancements in privacy-preserving techniques, such as differential
privacy, FHE, and MPC, offer promising solutions to mitigate privacy risks
associated with LLM inference. However, balancing privacy and efficiency
remains a challenge that requires further exploration and refinement.
## 3 The Proposed PDSS Framework
### 3.1 Overview
In this section, we introduce PDSS, an innovative privacy-preserving framework
specifically designed for distilling step-by-step LLMs. The PDSS framework can
enhance the performance of SLMs while maintaining privacy, leveraging the
capabilities of LLM. We illustrate the PDSS in Figure 1 and describe the
associated training algorithm in Algorithm 1. The workflow of PDSS is outlined
as follows:
1. 1.
In the client, the Prompt Encoder Module perturbs the raw prompts before sending them to the server-side LLM.
2. 2.
In the server, the server-side LLM generates perturbed rationales based on these perturbed prompts and sends them back to the client.
3. 3.
In the client, the Rationales Decoder Module decodes the perturbed rationales.
4. 4.
In the client, the Task-Specific SLM Training Module employs both the original label data and the decoded rationales for multi-task learning.
### 3.2 Prompt Encoder Module
In the prompt encoder module, as illustrated in Figure 3, we propose two
privacy protection strategies:
1. 1.
Exponential Mechanism Encoder Strategy. In the first strategy, we utilize an exponential mechanism McSherry and Talwar (2007); Tong et al. (2023), which satisfies $\epsilon$-DP. This strategy works by replacing each token in the prompt with a semantically similar one sampled, via the exponential mechanism, from either a predetermined adjacency list or a randomly generated adjacency list. (A minimal sampling sketch is given at the end of this subsection.)
The Definition of the Exponential Mechanism Tong et al. (2023). For a given scoring function $u:X\times Y\to\mathbb{R}$, a randomized mechanism $M(X,u,Y)$ is $\epsilon$-DP compliant if it satisfies:
$\displaystyle P_{r}[y|x]\propto\exp\left(\frac{\epsilon\cdot u(x,y)}{2\Delta u}\right)$ (1)
where the sensitivity $\Delta u$ is defined as:
$\displaystyle\Delta u=\max_{x,x^{\prime}\in X,\,y\in Y}|u(x,y)-u(x^{\prime},y)|$ (2)
2. 2.
Encoder-Decoder Encoder Strategy. The tokens within a prompt differ significantly in their importance and degree of privacy, so applying a uniform privacy budget $\epsilon$ across all tokens may not be optimal. To further improve the privacy-utility balance, we propose an Encoder-Decoder strategy, built upon the exponential mechanism above. In the Encoder-Decoder strategy, we utilize an Encoder-Decoder SLM specifically designed to encode raw prompts into perturbed prompts and subsequently decode perturbed rationales back into their original form. This strategy involves two training processes: an encoding training process and a decoding training process. In this section, we focus on the encoding training process, as illustrated in Figure 3.
Initially, an encoding training process is required for the Encoder-Decoder SLM. Formally, let us denote a public dataset as $P=\left\\{(p_{i},p^{\epsilon}_{i})\right\\}^{N}_{i=1}$, where $p_{i}$ represents a raw private prompt and $p^{\epsilon}_{i}$ represents the perturbed prompt generated using the exponential mechanism with a privacy budget of $\epsilon$. In the encoding training process, we train the Encoder-Decoder SLM: $g_{\phi}(p_{i})\to p^{\epsilon}_{i}$. The details of the encoding training process are given in Algorithm 1.
The Encoder objective can be formulated as follows:
$\displaystyle\mathcal{L}_{\text{Encoder}}(\phi;\mathcal{P})=\mathbb{E}_{(p,p^{\epsilon})\sim\mathcal{P}}\ell_{\text{CE}}(g_{\phi}(p),p^{\epsilon})$
(3)
where $\ell_{\text{CE}}$ is the cross-entropy loss.
As illustrated in Figure 2, we can observe an exemplary comparison between the
original input and its perturbed input in Step 1 and Step 2. This perturbed
prompt serves as the new, privacy-enhanced input for further processing.
By incorporating this perturbation mechanism, we ensure that the privacy of
the original prompt is preserved. This approach not only satisfies the privacy
requirements but also enables effective data utilization for downstream tasks,
striking a balance between privacy and utility.
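To make the sampling step concrete, the following minimal Python sketch draws a replacement token according to Eq. (1). The scoring function, the tiny adjacency list, and the similarity values below are illustrative assumptions, not the paper's exact construction (which follows Tong et al. (2023)).

```python
import math
import random

def exp_mechanism_replace(token, candidates, score, epsilon, delta_u):
    """Sample a replacement for `token` from `candidates` with probability
    proportional to exp(epsilon * u(token, y) / (2 * Delta u)), as in Eq. (1)."""
    weights = [math.exp(epsilon * score(token, y) / (2.0 * delta_u))
               for y in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Hypothetical usage: perturb a prompt token by token.
# `adjacency` maps each token to semantically similar alternatives (including
# itself), and `sim` scores similarity in [0, 1], so Delta u = 1.
adjacency = {"beaver": ["beaver", "otter", "rodent"],
             "river": ["river", "stream", "lake"]}
sim = lambda x, y: 1.0 if x == y else 0.5
perturbed = [exp_mechanism_replace(t, adjacency.get(t, [t]), sim,
                                   epsilon=3.0, delta_u=1.0)
             for t in "beaver river".split()]
```

A larger $\epsilon$ concentrates the sampling distribution on the original token, recovering the privacy-utility trade-off discussed above.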
Figure 3: Prompt Encoder Module.
### 3.3 Generating Perturbed Rationales from LLM
When the server-side LLM receives the perturbed prompt, we leverage the Chain-
of-Thought (CoT) prompting technique introduced by Wei et al. (2022) to
generate rationales from the LLM using this perturbed prompt. These generated
rationales, which are also perturbed, are then transmitted to the client. For
instance, as illustrated in Figure 2, given a perturbed prompt in the Step 2,
the LLM generates perturbed rationales in the Step 3.
### 3.4 Rationales Decoder Module
Once the client receives the perturbed rationales from the server-side LLM, it must run a "decoding" process within the rationales decoder module to recover the rationales. In the rationales decoder module, as illustrated in Figure 4, we propose two strategies corresponding to the two protection strategies of the prompt encoder module:
1. 1.
Exponential Mechanism Decoder Strategy. The first decoding strategy corresponds to the Exponential Mechanism Encoder strategy. Here, we utilize In-Context Learning (ICL) Dong et al. (2022); Tong et al. (2023) with the SLM to decode the perturbed rationales. We input a sample $x_{i}=(p,p^{p},r^{p})_{i}$ into the SLM to prompt the generation of rationales, where $p$ represents the raw private prompt, $p^{p}$ the perturbed prompt, and $r^{p}$ the perturbed rationales generated by the LLM. The pair $(p^{p},r^{p})_{i}$ serves as an in-context example for the SLM. This allows the SLM to generate rationales $r_{i}$ that are aligned with the original, unperturbed prompt.
2. 2.
Encoder-Decoder Decoder Strategy. The second decoding strategy corresponds to the Encoder-Decoder Encoder strategy; the rationales decoder module reuses the same Encoder-Decoder SLM as in Section 3.2.
Initially, a decoding training process is required for the Encoder-Decoder SLM. Formally, let us denote a public dataset as $R=\left\\{(x_{i},r_{i})\right\\}^{N}_{i=1}$, where $x_{i}=(p,p^{p},r^{p})_{i}$ is an input in which $p$ represents the raw private prompt, $p^{p}$ the perturbed prompt generated by the Encoder-Decoder SLM, and $r^{p}$ the perturbed rationales generated by the LLM; $r_{i}$ represents the raw rationale of the raw prompt $p$, generated by the LLM. In the decoding training process, we train the Encoder-Decoder SLM: $g_{\phi}(x_{i})\to r_{i}$. The details of the decoding training process are given in Algorithm 1.
The Decoder objective can be formulated as follows:
$\displaystyle\mathcal{L}_{\text{Decoder}}(\phi;\mathcal{R})=\mathbb{E}_{(x,r)\sim\mathcal{R}}\ell_{\text{CE}}(g_{\phi}(x),r)$
(4)
where $\mathcal{L}_{\text{Decoder}}$ is the rationale decoder loss and $\ell_{\text{CE}}$ is the cross-entropy loss.
Once the decoding training of the Encoder-Decoder SLM is finished, we can input a sample $x_{i}=(p,p^{p},r^{p})_{i}$ into the SLM, where $r^{p}$ represents the perturbed rationales generated by the LLM. This allows the SLM to generate rationales $r_{i}$ that are aligned with the original, unperturbed prompt.
We approach the training of the Encoder-Decoder SLM as a multi-task learning
problem encompassing both the encoding and decoding training processes. The
multi-task learning objective can be formulated as follows:
$\displaystyle\mathcal{L}_{1}=\alpha\mathcal{L}_{\text{Encoder}}+(1-\alpha)\mathcal{L}_{\text{Decoder}}$
(5)
where $\alpha$ is a hyperparameter that controls the relative weight of the encoder and decoder losses.
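As a concrete illustration, a single training step combining the two objectives might look as follows. This is a minimal sketch assuming a HuggingFace-style model whose forward pass returns a cross-entropy `.loss` when `labels` are supplied; it is not the authors' exact training code.

```python
def encoder_decoder_step(model, optimizer, enc_batch, dec_batch, alpha=0.5):
    """One multi-task update for the Encoder-Decoder SLM g_phi, Eq. (5):
    L_1 = alpha * L_Encoder + (1 - alpha) * L_Decoder."""
    # Encoding task: map raw prompt p to perturbed prompt p^eps (Eq. (3)).
    loss_enc = model(input_ids=enc_batch["input_ids"],
                     attention_mask=enc_batch["attention_mask"],
                     labels=enc_batch["labels"]).loss
    # Decoding task: map (p, p^p, r^p) to the aligned rationale r (Eq. (4)).
    loss_dec = model(input_ids=dec_batch["input_ids"],
                     attention_mask=dec_batch["attention_mask"],
                     labels=dec_batch["labels"]).loss
    loss = alpha * loss_enc + (1 - alpha) * loss_dec
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```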
As illustrated in Figure 2, we can observe an exemplary comparison between the
perturbed rationales from LLM and its decoded rationales from SLM in Step 3
and Step 4. It’s worth noting that although the SLM has the ability to
generate aligned rationales independently, the quality often falls short due
to its limited capabilities. By leveraging the perturbed rationales, we
effectively transfer the powerful capabilities of the server-side LLM to
enhance the Encoder-Decoder SLM, thereby improving the overall quality of the
generated rationales.
Figure 4: Rationales Decoder Module.
Algorithm 1 PDSS
1:
2:$T$: total number of rounds;
3:$\mathcal{P}$: encoding training datasets;
4:$\mathcal{R}$: decoding training datasets;
5:$\mathcal{D}$: task-specific training datasets;
6:$\eta_{\phi}$: learning rate of Encoder-Decoder SLM;
7:$\eta_{\omega}$: learning rate of Task-Specific SLM.
8:$g_{\phi}$, $f_{\omega}$.
9:$\triangleright$ Multi-Task Training for Encoder-Decoder SLM based on Public
Datasets $\mathcal{P}$ and $\mathcal{R}$.
10:for each epoch $t\in[T]$ do
11: $\phi^{t+1}\leftarrow\phi^{t}-\eta_{\phi}\nabla\mathcal{L}_{1}$.
12:end for
13:$\triangleright$ Generate $p^{p}$ using the updated Encoder.
14:$p^{p}=SLM_{Encoder}(p)$.
15:$\triangleright$ Generate perturbed rationales from the LLM on the server.
16:$r^{p}=LLM(p^{p})$.
17:$\triangleright$ Decode the perturbed rationales using the updated Encoder-Decoder SLM.
18:$r=SLM_{Decoder}(r^{p})$.
19:$\triangleright$ Multi-Task Training for Task-Specific SLM based on
Datasets $\mathcal{D}$.
20:for each epoch $t\in[T]$ do
21:
$\omega^{t+1}\leftarrow\omega^{t}-\eta_{\omega}\nabla\mathcal{L}_{2}$.
22:end for
### 3.5 Task-Specific SLM Training Module
In our work, we undertake the training of the client’s Task-Specific SLM
tailored for text generation tasks. Initially, we elaborate on the prevalent
framework for learning task-specific models. Leveraging this established
framework, we enhance it by integrating rationales produced by the rationales decoder module into the training process. Formally, let us denote a
dataset as $D=\left\\{(x_{i},(y_{i},r_{i}))\right\\}^{N}_{i=1}$, where $x_{i}$
represents an input, $y_{i}$ represents the associated expected output label,
and $r_{i}$ is the corresponding desired rationale.
We conceptualize learning with rationales as a multi-task learning problem, as
illustrated in Figure 5. Specifically, we train the model
$f_{\omega}(x_{i})\to(y_{i},r_{i})$ to accomplish not just the prediction of
task labels but also the generation of the corresponding rationales based on
textual inputs. This multi-task training ensures that our model not only
produces accurate predictions but also provides insightful justifications for
its decisions. By doing so, we enhance the transparency and explainability of
the model. The multi-task learning objective can be formulated as follows:
$\displaystyle\mathcal{L}_{2}=\beta\mathcal{L}_{\text{Label}}+(1-\beta)\mathcal{L}_{\text{Rationale}}$
(6)
where $\mathcal{L}_{\text{Label}}$ is the label prediction loss:
$\displaystyle\mathcal{L}_{\text{Label}}(\omega;\mathcal{D})=\mathbb{E}_{(x,y)\sim\mathcal{D}}\ell_{\text{CE}}(f_{\omega}(x),y)$
(7)
and $\mathcal{L}_{\text{Rationale}}$ is the rationale generation loss:
$\displaystyle\mathcal{L}_{\text{Rationale}}(\omega;\mathcal{D})=\mathbb{E}_{(x,r)\sim\mathcal{D}}\ell_{\text{CE}}(f_{\omega}(x),r)$
(8)
where $\ell_{\text{CE}}$ is the cross-entropy loss, $f_{\omega}(\cdot)$ is the Task-Specific SLM, and $\beta$ is a hyperparameter that controls the relative weight of the label prediction and rationale generation losses.
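For illustration, one common way to realize this multi-task setup, in the spirit of Hsieh et al. (2023), is to train the same model on two views of each example distinguished by a task prefix. The prefixes below are hypothetical markers, not the paper's exact tokens.

```python
def make_multitask_examples(x, y, r):
    """Build (input, target) pairs for the two tasks of Eq. (6).
    The "[label]"/"[rationale]" prefixes are illustrative placeholders."""
    return [
        ("[label] " + x, y),      # contributes to L_Label, Eq. (7)
        ("[rationale] " + x, r),  # contributes to L_Rationale, Eq. (8)
    ]

# The per-batch loss is then beta * L_Label + (1 - beta) * L_Rationale,
# computed with the same cross-entropy machinery as the sketch in Section 3.4.
```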
Figure 5: Task-Specific SLM Training Module.
## 4 Experiments
### 4.1 Setup
We have established a scenario to evaluate the performance of the PDSS
framework across a range of text generation tasks. This setup involves a client-server architecture, where the client holds two downstream SLMs: an Encoder-Decoder SLM, which specializes in encoder-decoder functionalities, and a Task-Specific SLM, tailored for specific tasks. On the server side, we host an LLM for more general and powerful text generation capabilities. Specifically, we choose Qwen-14B Bai et al. (2023) as the LLM, while both SLMs are Qwen-0.5B Bai et al. (2023). Table 1 outlines the detailed configurations of both the LLM and the SLMs.
Setting | Server | Client | Client
---|---|---|---
Model Type | LLM | Encoder-Decoder SLM | Task-Specific SLM
Model Name | Qwen-14B | Qwen-0.5B | Qwen-0.5B
Parameters(Billion) | 14 | 0.5 | 0.5
Table 1: LLM and SLMs Setting of PDSS.
Datasets and Evaluation Metrics. We conduct a comprehensive evaluation of PDSS
on 4 QA datasets. Specifically, we include CommonsenseQA (CQA) Talmor et al. (2018), OpenBookQA (OBQA) Mihaylov et al. (2018), BoolQ Clark et al. (2019), and ARC-E Clark et al. (2018). For these datasets, we primarily use Accuracy as the
evaluation metric.
Baselines. Since we incorporate two distinct strategies in the prompt encoder
module and rationales decoder module, we denote PDSS method with the
Exponential Mechanism Strategy as PDSS-EM and PDSS method with the Encoder-
Decoder Strategy as PDSS-ED. We conduct a comparative analysis to evaluate the
performance of our PDSS framework, which comprises both PDSS-EM and PDSS-ED.
These baselines included:
* •
FewShot-LLM, which represents the few-shot capabilities of LLM on the server;
* •
FewShot-SLM, which represents the few-shot performance of SLM on the client;
* •
Standalone, where the client independently fine-tunes its local model using
its own private dataset;
* •
DSS Hsieh et al. (2023), where the client fine-tunes its local model via the distilling step-by-step method without privacy preservation.
### 4.2 Overall Performance Evaluation
In this section, we undertake a comprehensive analysis of the task performance
of PDSS. We assess both the PDSS-EM and PDSS-ED methods against other
baselines on Task-Specific SLM across various privacy budgets, denoted by
$\epsilon$.
The results, as presented in Table 2, clearly illustrate that both PDSS-EM and
PDSS-ED exhibit significantly better performance when compared to FewShot-SLM
and Standalone methods. As the privacy budget $\epsilon$ increases, the performance of both PDSS-EM and PDSS-ED rises notably. Furthermore, PDSS-ED demonstrates notably superior performance compared to PDSS-EM under the same privacy budget $\epsilon$. Specifically, under a privacy budget of
$\epsilon=3$, PDSS-EM surpasses the Standalone method by 3.4% and 17% in the
CQA and OBQA datasets, respectively, while PDSS-ED outperforms it by 5.2% and
22.4%. Similarly, when the privacy budget is increased to $\epsilon=10$, PDSS-
EM exceeds the Standalone approach by 6.3% and 21.6% within the CQA and OBQA
datasets, respectively, and PDSS-ED beats it by 7.2% and 28.6%. Remarkably,
across all datasets evaluated, when the privacy budget is set to
$\epsilon=10$, PDSS achieves comparable performance to DSS, highlighting its
efficacy and versatility in balancing privacy and utility.
Method | CQA | OBQA | BoolQ | ARC-E
---|---|---|---|---
FewShot-LLM | 80.9 | 82.8 | 85.2 | 80.3
FewShot-SLM | 25.7 | 28.6 | 59.7 | 40.7
Standalone | 55.7 | 43.4 | 78.4 | 50.3
DSS | 59.3 | 55.1 | 80.5 | 57.6
PDSS-EM($\epsilon=1$) | 57.7 | 49.2 | 80.1 | 52.3
PDSS-EM($\epsilon=3$) | 57.6 | 50.8 | 79 | 52.6
PDSS-EM($\epsilon=5$) | 58.8 | 53.2 | 80 | 55.3
PDSS-EM($\epsilon=10$) | 59.2 | 52.8 | 80.2 | 56.2
PDSS-ED($\epsilon=1$) | 58.2 | 50.8 | 80.3 | 56.4
PDSS-ED($\epsilon=3$) | 58.6 | 53.1 | 80.2 | 56.5
PDSS-ED($\epsilon=5$) | 58.3 | 53.4 | 80.4 | 56.3
PDSS-ED($\epsilon=10$) | 59.7 | 55.8 | 80.7 | 57.9
Table 2: We compare the performance of Task-Specific SLM trained with PDSS-EM
and PDSS-ED across different privacy budgets $\epsilon$ against the Task-
Specific SLM trained using baseline methods.
### 4.3 Reducing Training Data Evaluation
In this section, we conduct an in-depth analysis to explore the influence of
training data size on model performance. We compare the PDSS method with the
Standalone approach, varying the amount of training data used. Table 3
provides a clear illustration of how PDSS (with $\epsilon=3$) consistently
outperforms the Standalone method.
Remarkably, PDSS achieves superior performance even with significantly fewer
training samples compared to Standalone. More specifically, when trained on
merely 75% of the complete CQA, OBQA, and BoolQ datasets, both PDSS-EM and PDSS-ED surpass the performance of Standalone fine-tuning trained on the entirety of these datasets. Likewise, by using only 50% of the
full ARC-E dataset, PDSS-EM exceeds the results achieved by Standalone fine-
tuning on the complete dataset. Furthermore, PDSS-ED exhibits significantly
better performance than PDSS-EM across various dataset sizes (ranging from 25%
to 100%). The results indicate that PDSS is capable of extracting more
valuable information from smaller datasets, making it a promising approach in
data-scarce environments.
Task | Method | 25% | 50% | 75% | 100%
---|---|---|---|---|---
CQA | PDSS-EM | 49 | 53.5 | 56.7 | 57.6
PDSS-ED | 54.2 | 54.6 | 56.1 | 58.6
Standalone | - | - | - | 55.7
OBQA | PDSS-EM | 34.8 | 42.2 | 45.6 | 50.8
PDSS-ED | 41.4 | 43.6 | 50.6 | 53.1
Standalone | - | - | - | 44.2
BoolQ | PDSS-EM | 63 | 74 | 78.7 | 79
PDSS-ED | 72.8 | 77.6 | 79.1 | 80.2
Standalone | - | - | - | 78.4
ARC-E | PDSS-EM | 45.3 | 52.2 | 53.1 | 53.8
PDSS-ED | 48 | 49.7 | 55.9 | 56.5
Standalone | - | - | - | 50.3
Table 3: We compare the performance of Task-Specific SLM trained with PDSS-
EM($\epsilon=3$) and PDSS-ED($\epsilon=3$) against Standalone, across a range
of dataset sizes from 25% to 100%. The '-' indicates that a method was not evaluated at the corresponding dataset sizes.
### 4.4 Perturbed Rationales Evaluation
In this section, we analyze the quality of the perturbed rationales ($r^{p}$) generated by the LLM from the perturbed prompts under the PDSS-EM and PDSS-ED methods, and compare them with the rationales ($r$) generated by the LLM from the raw prompts. To evaluate the similarity between $r^{p}$ and $r$, we use the TokenRatio metric. A higher TokenRatio indicates a greater degree of similarity between the perturbed and original rationales. For more details about TokenRatio, please refer to Appendix C.
As shown in Table 4, with an increase in the privacy budget $\epsilon$ and a corresponding decrease in perturbation, the TokenRatio of both PDSS-EM and PDSS-ED rises notably. Furthermore, in most tasks, the TokenRatio of PDSS-ED is higher than that of PDSS-EM at the same level of privacy budget $\epsilon$. The experimental results confirm that the TokenRatio of the perturbed rationales produced by both PDSS-EM and PDSS-ED positively correlates with the privacy budget $\epsilon$. This suggests that as the
privacy constraints are relaxed (higher $\epsilon$ values), the perturbed
rationales become more similar to the original rationales. This finding is
significant as it demonstrates the trade-off between privacy protection and
the utility of the generated rationales.
Method | CQA | OBQA | BoolQ | ARC-E
---|---|---|---|---
PDSS-EM($\epsilon=1$) | 19.8 | 26.2 | 26.6 | 24.6
PDSS-EM($\epsilon=3$) | 29.2 | 37.2 | 35.5 | 33.9
PDSS-EM($\epsilon=5$) | 48.8 | 59.6 | 55.2 | 53.9
PDSS-EM($\epsilon=10$) | 69.7 | 72 | 74.6 | 68.2
PDSS-ED($\epsilon=1$) | 26.7 | 33.1 | 29.7 | 31
PDSS-ED($\epsilon=3$) | 33.1 | 40.9 | 40.4 | 42.9
PDSS-ED($\epsilon=5$) | 49.6 | 61 | 57.5 | 63.5
PDSS-ED($\epsilon=10$) | 57.2 | 68.3 | 68 | 74.2
Table 4: We conduct a comparative analysis to assess the perturbed rationales
produced by PDSS-EM and PDSS-ED methods against the original, unperturbed
(raw) rationales that are directly generated from the raw prompt of the LLM.
### 4.5 Decoded Rationales Evaluation
In this section, we delve into the quality analysis of the decoded rationales
produced by the rationales decoder module based on PDSS-EM and PDSS-ED
methods. We compare these decoded rationales against those generated directly
from raw prompt of the LLM. We utilize the TokenRatio metric to assess their
similarities.
As shown in Table 5, in contrast to FewShot-SLM, it is apparent that the quality of the decoded rationales under the PDSS-EM and PDSS-ED methods does not rely solely on the local decoding SLM: the perturbed rationales crafted by the LLM indeed fulfill their intended purpose. When juxtaposed with Table 4, it is clear that at comparable $\epsilon$ levels, the TokenRatio of the decoded rationales surpasses that of the perturbed rationales under the PDSS-EM and PDSS-ED methods. This underscores the effectiveness of the rationales decoder module in the PDSS-EM and PDSS-ED methods. Furthermore, as the privacy budget $\epsilon$ increases, the TokenRatio of the decoded rationales generated by both PDSS-EM and PDSS-ED increases significantly. This
suggests that as the privacy constraints are relaxed (higher $\epsilon$
values), the decoded rationales become more similar to the original
rationales. For more details about comparative analysis of perturbed
rationales and decoded rationales, please refer to Appendix D.
Method | CQA | OBQA | BoolQ | ARC-E
---|---|---|---|---
FewShot-SLM | 43.3 | 43.4 | 51.9 | 42.6
PDSS-EM($\epsilon=1$) | 38.3 | 37.1 | 38.4 | 41.5
PDSS-EM($\epsilon=3$) | 41.9 | 41.3 | 41.7 | 45.6
PDSS-EM($\epsilon=5$) | 53.1 | 54 | 55 | 58.3
PDSS-EM($\epsilon=10$) | 71.1 | 63 | 73.6 | 70.4
PDSS-ED($\epsilon=1$) | 57.2 | 53.4 | 45.2 | 57.5
PDSS-ED($\epsilon=3$) | 59 | 55.1 | 48 | 59.4
PDSS-ED($\epsilon=5$) | 59.8 | 59.5 | 55.7 | 65.5
PDSS-ED($\epsilon=10$) | 62 | 62.3 | 63.4 | 70.1
Table 5: We conduct a comparative analysis to assess the decoded rationales
produced by PDSS-EM and PDSS-ED methods against the original, unperturbed
(raw) rationales that are directly generated from the raw prompt of the LLM.
## 5 Conclusions
We introduced PDSS, a privacy-preserving framework for LLM distillation,
addressing domain-specific knowledge privacy and resource constraints. PDSS
employs a server-client architecture with prompt encoding, rationale generation, rationale decoding, and task-specific SLM training, bridging the
gap between LLM and SLM while maintaining data privacy. Experiments on various
text generation tasks demonstrate PDSS’s ability to enhance SLM performance
with LLM support while prioritizing data privacy.
## Limitations
Our current study faces limitations due to computational and storage
constraints, which hinder our ability to experiment with larger model sizes.
Additionally, our evaluation of PDSS has been restricted to the Qwen model
architecture, leaving the possibility that PDSS may need to be further
explored in other model architectures. We intend to tackle these issues in
future research endeavors.
## References
* Bai et al. (2023) Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. _arXiv preprint arXiv:2309.16609_.
* Chen et al. (2022) Huimin Chen, Fengran Mo, Yanhao Wang, Cen Chen, Jian-Yun Nie, Chengyu Wang, and Jamie Cui. 2022. A customized text sanitization mechanism with differential privacy. _arXiv preprint arXiv:2207.01193_.
* Clark et al. (2019) Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. _arXiv preprint arXiv:1905.10044_.
* Clark et al. (2018) Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. _arXiv preprint arXiv:1803.05457_.
* Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. _arXiv preprint arXiv:2110.14168_.
* Dong et al. (2022) Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A survey on in-context learning. _arXiv preprint arXiv:2301.00234_.
* Dong et al. (2023) Ye Dong, Wen-jie Lu, Yancheng Zheng, Haoqi Wu, Derun Zhao, Jin Tan, Zhicong Huang, Cheng Hong, Tao Wei, and Wenguang Cheng. 2023. Puma: Secure inference of llama-7b in five minutes. _arXiv preprint arXiv:2307.12533_.
* Dwork (2006) Cynthia Dwork. 2006. Differential privacy. In _International colloquium on automata, languages, and programming_ , pages 1–12. Springer.
* Fan et al. (2023) Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, and Qiang Yang. 2023. Fate-llm: A industrial grade federated learning framework for large language models. _arXiv preprint arXiv:2310.10049_.
* Gentry (2009) Craig Gentry. 2009. _A fully homomorphic encryption scheme_. Stanford university.
* Ho et al. (2022) Namgyu Ho, Laura Schmid, and Se-Young Yun. 2022. Large language models are reasoning teachers. _arXiv preprint arXiv:2212.10071_.
* Hou et al. (2023) Xiaoyang Hou, Jian Liu, Jingyu Li, Yuhan Li, Wen-jie Lu, Cheng Hong, and Kui Ren. 2023. Ciphergpt: Secure two-party gpt inference. _Cryptology ePrint Archive_.
* Hsieh et al. (2023) Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. 2023. Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. _arXiv preprint arXiv:2305.02301_.
* Kang et al. (2023) Yan Kang, Tao Fan, Hanlin Gu, Lixin Fan, and Qiang Yang. 2023. Grounding foundation models through federated transfer learning: A general framework. _arXiv preprint arXiv:2311.17431_.
* Klein and Nabi (2020) Tassilo Klein and Moin Nabi. 2020. Contrastive self-supervised learning for commonsense reasoning. _arXiv preprint arXiv:2005.00669_.
* Kojima et al. (2022) Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. _Advances in neural information processing systems_ , 35:22199–22213.
* Lhoest et al. (2021) Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Li et al. (2023) Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, and Yejin Choi. 2023. Symbolic chain-of-thought distillation: Small models can also "think" step-by-step. _arXiv preprint arXiv:2306.14050_.
* McSherry and Talwar (2007) Frank McSherry and Kunal Talwar. 2007. Mechanism design via differential privacy. In _48th Annual IEEE Symposium on Foundations of Computer Science (FOCS’07)_ , pages 94–103. IEEE.
* Mihaylov et al. (2018) Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. _arXiv preprint arXiv:1809.02789_.
* OpenAI (2023) OpenAI. 2023. Gpt-4.
* Talmor et al. (2018) Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. _arXiv preprint arXiv:1811.00937_.
* Tong et al. (2023) Meng Tong, Kejiang Chen, Yuang Qi, Jie Zhang, Weiming Zhang, and Nenghai Yu. 2023. Privinfer: Privacy-preserving inference for black-box large language model. _arXiv preprint arXiv:2310.12214_.
* Touvron et al. (2023) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_.
* Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. _Advances in neural information processing systems_ , 35:24824–24837.
* Yao (1986) Andrew Chi-Chih Yao. 1986. How to generate and exchange secrets. In _27th annual symposium on foundations of computer science (Sfcs 1986)_ , pages 162–167. IEEE.
* Yue et al. (2021) Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, and Sherman SM Chow. 2021. Differential privacy for text analytics via natural text sanitization. _arXiv preprint arXiv:2106.01221_.
* Zhang and Yang (2021) Yu Zhang and Qiang Yang. 2021. A survey on multi-task learning. _IEEE Transactions on Knowledge and Data Engineering_ , 34(12):5586–5609.
* Zhou et al. (2023) Xin Zhou, Yi Lu, Ruotian Ma, Tao Gui, Yuran Wang, Yong Ding, Yibo Zhang, Qi Zhang, and Xuan-Jing Huang. 2023. Textobfuscator: Making pre-trained language model a privacy protector via obfuscating word representations. In _Findings of the Association for Computational Linguistics: ACL 2023_ , pages 5459–5473.
## Appendix A Rationales Generation through COT
We utilize the rationales data generated by server-side LLM through chain-of-
thought (CoT)Wei et al. (2022)Hsieh et al. (2023) technique to enhance the
performance of the client’s task-specific SLM. These rationales justify the
predicted labels and serve as insightful guidance for training smaller and
domain-specific models. Consider the following example: when asked “Question: A beaver is known for building prowess, their supplies come from where? Answer
Choices: (a) british columbia (b) body of water (c) wooded area (d) pay debts
(e) zoo”. Utilizing the chain-of-thought (CoT) technique, the LLM can generate
intermediate rationales like, “The answer must be the place where beavers get
their supplies. Of the above choices, only wooded areas have the supplies that
beavers need.” Such rationales bridge the gap between the input and the final
answer, often encapsulating valuable task-related knowledge. This knowledge
would traditionally require extensive data for smaller and task-specific
models to acquire. Therefore, we harness these rationales as enriched training
material for small language models, employing a multi-task training paradigm that encompasses both the label prediction task and the rationale prediction task.
## Appendix B More on Experimental Details
### B.1 Hyperparameter Settings
SLM Parameters. During the training process for both the Encoder-Decoder SLM
and the Task-Specific SLM, we specifically configured the parameters. We set
the batch size to 32 and employed the AdamW optimizer. The maximum number of
training steps ranged from 400 to 1500. Additionally, we assigned the values
of 0.5 to both $\alpha$ and $\beta$. Furthermore, the learning rates for
$\eta_{\phi}$ and $\eta_{\omega}$ were established at 5e-5.
### B.2 Data Splitting
For the datasets CQA/OBQA/BoolQ/ARC-E, all splits (training, validation, and test) were downloaded from HuggingFace Lhoest et al. (2021). During the
training of the Encoder-Decoder SLM, we randomly divided the training data
into two equal parts. One part was designated as the public dataset, while the
other part was allocated as the private dataset for the client.
### B.3 Dataset Licenses
The datasets CQA/OBQA/BoolQ/ARC-E were downloaded from HuggingFace Lhoest et al. (2021) and are released under the Apache License, Version 2.0.
### B.4 Machine Configuration
The experiments were conducted on machines equipped with 4 Nvidia V100 32GB GPUs.
## Appendix C The Definition of TokenRatio Metric
TokenRatio($r^{\prime},r$). This metric collects the set of unique words ($u$) in $r^{\prime}$ and counts how many of these words are also present in $r$, denoted by $i$. The TokenRatio is then calculated as $i$ divided by the total number of unique words in $r^{\prime}$ ($|u|$).
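A direct implementation of the metric, assuming simple whitespace tokenization (the paper does not specify the tokenizer), is:

```python
def token_ratio(r_prime: str, r: str) -> float:
    """TokenRatio(r', r): fraction of unique words of r' that also appear in r."""
    u = set(r_prime.lower().split())         # unique words of r'
    if not u:
        return 0.0
    i = len(u & set(r.lower().split()))      # unique words of r' present in r
    return i / len(u)

# token_ratio("only wooded areas have the supplies",
#             "beavers get supplies from wooded areas")
# -> 0.5, since 3 of the 6 unique words ({wooded, areas, supplies}) occur in r.
```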
Figure 6: Comparative Analysis of Perturbed Rationales and Decoded Rationales.
## Appendix D Comparative Analysis of Perturbed Rationales and Decoded
Rationales
As shown in Figure 6, we conduct a comparison of the quality between the
perturbed rationales and the decoded rationales, employing both the PDSS-EM
and PDSS-ED methods across various privacy budgets denoted by $\epsilon$. For
clarity, we designate the perturbed rationales generated using the PDSS-EM and
PDSS-ED methods as P-PDSS-EM and P-PDSS-ED, respectively. Similarly, the
decoded rationales derived from these methods are denoted as D-PDSS-EM and
D-PDSS-ED. It’s clear that at comparable $\epsilon$ levels, the TokenRatio for
decoded rationales consistently surpasses that of perturbed rationales in most
tasks, when utilizing the PDSS-EM and PDSS-ED methods. This finding
underscores the remarkable effectiveness of the rationales decoder module
within both the PDSS-EM and PDSS-ED frameworks.
Hessenberg–Toeplitz Matrix Determinants with Schröder and Fine Number Entries
Taras Goy
Faculty of Mathematics and Computer Science
Vasyl Stefanyk Precarpathian National University
Ivano-Frankivsk 76018
Ukraine
<EMAIL_ADDRESS>
Mark Shattuck
Department of Mathematics
University of Tennessee
Knoxville, TN 37996
USA
<EMAIL_ADDRESS>
###### Abstract
In this paper, we find determinant formulas of several Hessenberg–Toeplitz
matrices whose nonzero entries are derived from the small and large Schröder
and Fine number sequences. Algebraic proofs of these results can be given
which make use of Trudi’s formula and the generating function of the
associated sequence of determinants. We also provide direct arguments of our
results that utilize various counting techniques, among them sign-changing
involutions, on combinatorial structures related to classes of lattice paths
enumerated by the Schröder and Fine numbers. As a consequence of our results,
we obtain some new formulas for the Schröder and Catalan numbers as well as
for some additional sequences from the OEIS in terms of determinants of
certain Hessenberg–Toeplitz matrices.
## 1 Introduction
Let $S_{n}$ denote the $n$-th _large_ Schröder number given by
$S_{n}=\frac{1}{n}\sum_{k=1}^{n}2^{k}\binom{n}{k}\binom{n}{k-1},\qquad n\geq
1,$
with $S_{0}=1$. The _small_ Schröder number $s_{n}$ is defined as
$s_{n}=\frac{1}{2}S_{n}$ for $n\geq 1$, with $s_{0}=1$. The $n$-th Fine
number, denoted here by $t_{n}$, is given by
$t_{n}=3\sum_{k=1}^{\lfloor\frac{n+1}{2}\rfloor}\binom{2n-2k}{n-1}-\binom{2n}{n},\qquad
n\geq 1,$
with $t_{0}=0$. The first several terms of the sequences $s_{n}$ and $t_{n}$
for $n\geq 0$ are as follows:
$\\{s_{n}\\}_{n\geq
0}=\\{1,1,3,11,45,197,903,4279,20793,103049,518859,\ldots\\}$
and
$\\{t_{n}\\}_{n\geq 0}=\\{0,1,0,1,2,6,18,57,186,622,2120,\ldots\\}.$
In our proofs, we will make use of the (ordinary) generating functions for
$S_{n}$, $s_{n}$ and $t_{n}$, which are given respectively by
$\displaystyle\sum_{n\geq 0}S_{n}x^{n}$
$\displaystyle=\frac{1-x-\sqrt{1-6x+x^{2}}}{2x},$ $\displaystyle\sum_{n\geq
0}s_{n}x^{n}$ $\displaystyle=\frac{1+x-\sqrt{1-6x+x^{2}}}{4x},$
$\displaystyle\sum_{n\geq 0}t_{n}x^{n}$
$\displaystyle=\frac{1+2x-\sqrt{1-4x}}{2(2+x)}.$
Let $C_{n}=\frac{1}{n+1}\binom{2n}{n}$ denote the $n$-th Catalan number (see
A000108 in [17]) and recall
$\sum_{n\geq 0}C_{n}x^{n}=\frac{1-\sqrt{1-4x}}{2x}.$
The preceding three sequences are closely aligned with $C_{n}$. For example,
we have for $n\geq 1$,
$S_{n}=\sum_{k=0}^{n}\binom{n+k}{2k}C_{k}\quad\text{ and }\quad
t_{n+1}=\frac{1}{2}\sum_{k=2}^{n}\frac{C_{k}}{(-2)^{n-k}}.$
Additional relations for the Fine numbers are given by $C_{n}=2t_{n+1}+t_{n}$
and
$t_{n+1}=C_{n}-\sum_{k=0}^{n-1}C_{k}t_{n-k}.$
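As a quick sanity check, the relations just stated determine the Fine numbers from the Catalan numbers; the following Python snippet (assuming nothing beyond these relations) reproduces the terms listed above.

```python
from math import comb

def catalan(n: int) -> int:
    """Catalan number C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

N = 10
t = [0] * (N + 1)                      # t_0 = 0
for n in range(N):                     # t_{n+1} = C_n - sum_{k<n} C_k t_{n-k}
    t[n + 1] = catalan(n) - sum(catalan(k) * t[n - k] for k in range(n))

print(t)                               # [0, 1, 0, 1, 2, 6, 18, 57, 186, 622, 2120]
assert all(catalan(n) == 2 * t[n + 1] + t[n] for n in range(N))  # C_n = 2t_{n+1} + t_n
```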
The sequences $S_{n}$, $s_{n}$ and $t_{n}$ arise in various settings in
enumerative and algebraic combinatorics and give the cardinalities of some
important classes of first quadrant lattice paths [2, 3, 4, 5, 6, 16]. See
entries A006318, A001003 and A000957, respectively, in [17] for further
information. Here, we will be interested in some new combinatorial properties
of these numbers related to their occurrence in certain Hessenberg–Toeplitz
matrices.
Many relations for Schröder and Fine numbers have previously been found (see,
e.g., [6, 19, 20] and references contained therein), and determinants of
matrices with Schröder or Fine number entries and their generalizations have
attracted recent attention. A couple of basic results in this direction
involve Hankel determinants for the Schröder numbers, namely,
$\det\\!\big{(}S_{i+j}\big{)}_{i,j=0}^{n-1}=2^{\binom{n}{2}}$ and
$\det\\!\big{(}S_{i+j+1}\big{)}_{i,j=0}^{n-1}=2^{\binom{n+1}{2}}$. The
comparable formulas for the Fine numbers (see [6]) are given by
$\det\\!\big{(}t_{i+j+1}\big{)}_{i,j=0}^{n-1}=1$ and
$\det\\!\big{(}t_{i+j+2}\big{)}_{i,j=0}^{n-1}=1-n$. These results have been
generalized in different ways by considering various families of Catalan-like
sequences; see, e.g., [7, 14] and references contained therein.
In [18], Qi presents negativity results for a class of Toeplitz–Hessenberg
determinants whose elements contain the products of the factorial and the
large Schröder numbers. By using Cramer’s rule together with a generating
function approach, Deutsch [5] obtained the following Fine–Catalan determinant
relation
$t_{n}=(-1)^{n-1}\left|\begin{array}[]{cccccc}C_{0}&1&0&\cdots&0&0\\\ C_{1}&C_{0}&1&\cdots&0&0\\\ C_{2}&C_{1}&C_{0}&\cdots&0&0\\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\\ C_{n-2}&C_{n-3}&C_{n-4}&\cdots&C_{0}&1\\\ C_{n-1}&C_{n-2}&C_{n-3}&\cdots&C_{1}&C_{0}\end{array}\right|,$ (1)
which was later rediscovered by the authors in [8] (formula (2.21) with
$a=-1$). In [8], the authors found determinants of several families of
Toeplitz–Hessenberg matrices having various subsequences of the Catalan
sequence for the nonzero entries. These determinant formulas may also be
rewritten equivalently as identities involving sums of products of Catalan
numbers and multinomial coefficients. Comparable results featuring
combinatorial arguments have been found for the generalized Fibonacci
(Horadam), tribonacci and tetranacci numbers in [9, 10, 11]. See also [1],
where further results for Horadam number permanents and determinants are
obtained using an algebraic approach.
The organization of this paper is as follows. In the next section, we establish, via algebraic arguments, determinant formulas for several Hessenberg–Toeplitz matrices whose nonzero entries are given by $S_{n}$, $s_{n}$, $t_{n}$ or translates
thereof. As a consequence of our results, one obtains new determinant
expressions, and hence combinatorial interpretations, for $S_{n}$, $s_{n}$ and
$C_{n}$, as well as for several additional sequences occurring in [17].
Further, equivalent multi-sum versions of these determinant identities may be
obtained using Trudi’s formula (see Lemma 1 below). In the third section, we
provide combinatorial proofs of the preceding formulas upon making use of
various counting techniques such as direct enumeration, bijections between
equinumerous structures and sign-changing involutions. In doing so, we draw
upon the well-known combinatorial interpretations of $S_{n}$, $C_{n}$, $s_{n}$
and $t_{n}$ as enumerators of certain classes of first quadrant lattice paths.
## 2 Schröder and Fine number determinant formulas
A Hessenberg–Toeplitz matrix is one having the form
$A_{n}:=A_{n}(a_{0};a_{1},\ldots,a_{n})=\left(\begin{array}[]{cccccc}a_{1}&a_{0}&0&\cdots&0&0\\\ a_{2}&a_{1}&a_{0}&\cdots&0&0\\\ a_{3}&a_{2}&a_{1}&\cdots&0&0\\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\\ a_{n-1}&a_{n-2}&a_{n-3}&\cdots&a_{1}&a_{0}\\\ a_{n}&a_{n-1}&a_{n-2}&\cdots&a_{2}&a_{1}\end{array}\right),$ (2)
where $a_{0}\neq 0$. The following multinomial expansion of $\det(A_{n})$ in
terms of a sum of products of the $a_{i}$ is known as _Trudi’s formula_ (see,
e.g., [13, Theorem 1]).
###### Lemma 1.
Let $n$ be a positive integer. Then
$\det(A_{n})=\sum_{\widetilde{v}=n}(-a_{0})^{n-|v|}{|v|\choose
v_{1},\ldots,v_{n}}a_{1}^{v_{1}}a_{2}^{v_{2}}\cdots a_{n}^{v_{n}},$ (3)
where
${|v|\choose v_{1},\ldots,v_{n}}=\frac{|v|!}{v_{1}!v_{2}!\cdots
v_{n}!},\quad\widetilde{v}=v_{1}+2v_{2}+\cdots+nv_{n},\quad|v|=v_{1}+v_{2}+\cdots+v_{n},\,\,v_{i}\geq
0.$
Equivalently, we have
$\det(A_{n})=\sum_{k=1}^{n}(-a_{0})^{n-k}\sum\limits_{\begin{smallmatrix}i_{1},\ldots,i_{k}\geq
1\\\ i_{1}+i_{2}+\cdots+i_{k}=n\end{smallmatrix}}a_{i_{1}}a_{i_{2}}\cdots
a_{i_{k}}.$
It is seen that the sum in (3) may be regarded as a sum over the set of partitions of the positive integer $n$. The special case of Trudi’s formula when $a_{0}=1$ is known as _Brioschi’s formula_ [15]. Here, we will focus on some
cases of $\det(A_{n})$ when $a_{0}=\pm 1$. For the sake of brevity, we denote
$\det\big{(}A_{n}(\pm 1;a_{1},a_{2},\ldots,a_{n})\big{)}$ by
$D_{\pm}(a_{1},a_{2},\ldots,a_{n})$.
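The equivalent composition form of Trudi’s formula is straightforward to evaluate by machine. The sketch below, offered purely as a computational aid, enumerates the $2^{n-1}$ compositions of $n$ and sums their weighted products.

```python
from itertools import product

def trudi_det(a0, a):
    """det(A_n) via the composition form of Trudi's formula:
    sum over compositions (i_1,...,i_k) of n of (-a0)^(n-k) * a_{i_1}...a_{i_k}.
    Here a[i] holds a_i for i = 1..n (a[0] is unused)."""
    n = len(a) - 1
    total = 0
    for cuts in product((0, 1), repeat=n - 1):  # cut points encode a composition
        parts, last = [], 0
        for pos, c in enumerate(cuts, start=1):
            if c:
                parts.append(pos - last)
                last = pos
        parts.append(n - last)
        w = (-a0) ** (n - len(parts))
        for i in parts:
            w *= a[i]
        total += w
    return total

# e.g., n = 3 with a0 = 1 gives a1^3 - 2*a1*a2 + a3:
assert trudi_det(1, [0, 2, 3, 5]) == 2**3 - 2 * 2 * 3 + 5
```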
There is the following inversion theorem involving $a_{i}$ and the
corresponding sequence of Hessenberg–Toeplitz determinants when $a_{0}=1$ (see
[12, Lemma 4]):
###### Lemma 2.
Let $(b_{n})_{n\geq 0}$ be defined by $b_{n}=\det(A_{n})$ for $n\geq 1$, where
$A_{n}$ is given by (2) with $a_{0}=b_{0}=1$. If $B_{n}$ denotes the
Hessenberg–Toeplitz matrix associated with $b_{0},\ldots,b_{n}$, then
$a_{n}=\det(B_{n})$ for $n\geq 1$.
We have the following determinant formulas involving the large and small Schröder numbers.
###### Theorem 3.
We have
$\displaystyle D_{+}(S_{1},S_{2},\ldots,S_{n})$
$\displaystyle=(-1)^{n-1}S_{n-1},\qquad n\geq 2,$ (4) $\displaystyle
D_{-}(S_{1},S_{2},\ldots,S_{n})$ $\displaystyle=2\cdot A134425[n-1],$ (5)
$\displaystyle D_{+}(S_{0},S_{1},\ldots,S_{n-1})$
$\displaystyle=(-1)^{n-1}s_{n-1},$ (6) $\displaystyle
D_{-}(S_{0},S_{1},\ldots,S_{n-1})$ $\displaystyle=s_{n},\qquad$ (7)
$\displaystyle D_{+}(s_{1},s_{2},\ldots,s_{n})$
$\displaystyle=(-1)^{n-1}S_{n-1},$ (8) $\displaystyle
D_{-}(s_{1},s_{2},\ldots,s_{n})$ $\displaystyle=A225887[n-1],$ (9)
$\displaystyle D_{+}(s_{0},s_{1},\ldots,s_{n-1})$
$\displaystyle=(-1)^{n-1}A114710[n-1],$ (10) $\displaystyle
D_{-}(s_{0},s_{1},\ldots,s_{n-1})$ $\displaystyle=S_{n-1},\qquad$ (11)
$\displaystyle D_{+}(s_{2},s_{3},\ldots,s_{n+1})$
$\displaystyle=(-1)^{n-1}S_{n-1},\qquad n\geq 2,\qquad$ (12)
where all formulas hold for $n\geq 1$ unless stated otherwise.
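These formulas are easy to confirm numerically. The sketch below computes the determinants exactly using the standard first-row cofactor expansion for Hessenberg–Toeplitz matrices, $\det(A_m)=\sum_{k=1}^{m}(-1)^{k-1}a_{0}^{k-1}a_{k}\det(A_{m-k})$ with $\det(A_{0}):=1$ (a recurrence not stated in the text but readily derived from (2)), and checks identities (4) and (7).

```python
def schroder(n_max):
    """Large Schroder numbers S_0..S_{n_max} via S_n = S_{n-1} + sum_k S_k S_{n-1-k}."""
    S = [1]
    for n in range(1, n_max + 1):
        S.append(S[-1] + sum(S[k] * S[n - 1 - k] for k in range(n)))
    return S

def hess_dets(a0, a):
    """det(A_1), ..., det(A_n) for the matrix (2), via the first-row cofactor
    recurrence det(A_m) = sum_k (-1)^(k-1) a0^(k-1) a_k det(A_{m-k})."""
    n = len(a) - 1                       # a[k] = a_k for k = 1..n
    d = [1]                              # d[m] = det(A_m), det(A_0) := 1
    for m in range(1, n + 1):
        d.append(sum((-1) ** (k - 1) * a0 ** (k - 1) * a[k] * d[m - k]
                     for k in range(1, m + 1)))
    return d

N = 9
S = schroder(N)
s = [1] + [v // 2 for v in S[1:]]        # small Schroder numbers, s_n = S_n / 2
Dp = hess_dets(1, [0] + S[1:N + 1])      # D_+(S_1, ..., S_n)
Dm = hess_dets(-1, [0] + S[0:N])         # D_-(S_0, ..., S_{n-1})
assert all(Dp[n] == (-1) ** (n - 1) * S[n - 1] for n in range(2, N + 1))  # (4)
assert all(Dm[n] == s[n] for n in range(1, N + 1))                        # (7)
```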
Making use of Lemma 1 yields the following multinomial identities for the two
kinds of Schröder numbers.
###### Theorem 4.
We have
$\displaystyle\sum_{\widetilde{v}=n}(-1)^{|v|-1}{|v|\choose
v_{1},\ldots,v_{n}}S_{1}^{v_{1}}S_{2}^{v_{2}}\cdots S_{n}^{v_{n}}$
$\displaystyle=S_{n-1},\quad n\geq 2,$ (13)
$\displaystyle\sum_{\widetilde{v}=n}{|v|\choose
v_{1},\ldots,v_{n}}S_{1}^{v_{1}}S_{2}^{v_{2}}\cdots S_{n}^{v_{n}}$
$\displaystyle=2\cdot A134425[n-1],$ (14)
$\displaystyle\sum_{\widetilde{v}=n}(-1)^{|v|-1}{|v|\choose
v_{1},\ldots,v_{n}}S_{0}^{v_{1}}S_{1}^{v_{2}}\cdots S_{n-1}^{v_{n}}$
$\displaystyle=s_{n-1},$ (15) $\displaystyle\sum_{\widetilde{v}=n}{|v|\choose
v_{1},\ldots,v_{n}}S_{0}^{v_{1}}S_{1}^{v_{2}}\cdots S_{n-1}^{v_{n}}$
$\displaystyle=s_{n},$ (16)
$\displaystyle\sum_{\widetilde{v}=n}(-1)^{|v|-1}{|v|\choose
v_{1},\ldots,v_{n}}s_{1}^{v_{1}}s_{2}^{v_{2}}\cdots s_{n}^{v_{n}}$
$\displaystyle=S_{n-1},$ (17) $\displaystyle\sum_{\widetilde{v}=n}{|v|\choose
v_{1},\ldots,v_{n}}s_{1}^{v_{1}}s_{2}^{v_{2}}\cdots s_{n}^{v_{n}}$
$\displaystyle=A225887[n],$ (18)
$\displaystyle\sum_{\widetilde{v}=n}(-1)^{|v|-1}{|v|\choose
v_{1},\ldots,v_{n}}s_{0}^{v_{1}}s_{1}^{v_{2}}\cdots s_{n-1}^{v_{n}}$
$\displaystyle=A114710[n-1],$ (19)
$\displaystyle\sum_{\widetilde{v}=n}{|v|\choose
v_{1},\ldots,v_{n}}s_{0}^{v_{1}}s_{1}^{v_{2}}\cdots s_{n-1}^{v_{n}}$
$\displaystyle=S_{n-1},$ (20)
$\displaystyle\sum_{\widetilde{v}=n}(-1)^{|v|-1}{|v|\choose
v_{1},\ldots,v_{n}}s_{2}^{v_{1}}s_{3}^{v_{2}}\cdots s_{n+1}^{v_{n}}$
$\displaystyle=S_{n-1},\quad n\geq 2,$ (21)
where all formulas hold for $n\geq 1$ unless stated otherwise.
The identities in Theorems 3 and 4 are seen to be equivalent by (3), so we
need only prove the former.
###### Proof.
Let $f(x)=\sum_{n\geq 1}\det(A_{n})x^{n}$, where $A_{n}$ is given by (2). Then
rewriting (3) in terms of generating functions implies
$f(x)=\sum_{n\geq 1}(-a_{0}x)^{n}\sum_{\widetilde{v}=n}{|v|\choose
v_{1},\ldots,v_{n}}\left(-\frac{a_{1}}{a_{0}}\right)^{v_{1}}\cdots\left(-\frac{a_{n}}{a_{0}}\right)^{v_{n}}=\frac{g(x)}{1-g(x)},$
where $g(x)=\sum_{i\geq 1}(-a_{0})^{i-1}a_{i}x^{i}$.
We consider several cases on $a_{i}$. First let $a_{i}=S_{i}$ for $i\geq 1$.
Then
$g(x)=\sum_{i\geq
1}(-a_{0})^{i-1}S_{i}x^{i}=\frac{1+3a_{0}x-\sqrt{1+6a_{0}x+a_{0}^{2}x^{2}}}{2a_{0}^{2}x}.$
If $a_{0}=1$, then
$\displaystyle f(x)$
$\displaystyle=\frac{g(x)}{1-g(x)}=\frac{1+3x-\sqrt{1+6x+x^{2}}}{-1-x+\sqrt{1+6x+x^{2}}}=2x-\frac{1}{2}\left(1+3x-\sqrt{1+6x+x^{2}}\right)$
$\displaystyle=2x+\sum_{n\geq 2}(-1)^{n-1}S_{n-1}x^{n},$
which implies (4). If $a_{0}=-1$, then
$\displaystyle f(x)$
$\displaystyle=\frac{g(x)}{1-g(x)}=\frac{1-3x-\sqrt{1-6x+x^{2}}}{-1+5x+\sqrt{1-6x+x^{2}}}=\frac{4x}{1-7x+\sqrt{1-6x+x^{2}}}$
$\displaystyle=\sum_{n\geq 1}2\cdot A134425[n-1]x^{n},$
which implies (5), upon recalling the formula $\sum_{n\geq 0}A134425[n]x^{n}=\frac{2}{1-7x+\sqrt{1-6x+x^{2}}}$ (see A134425 in [17]).
Now let $a_{i}=S_{i-1}$ for $i\geq 1$. In this case, we have
$g(x)=\sum_{i\geq
1}(-a_{0})^{i-1}S_{i-1}x^{i}=\frac{-1-a_{0}x+\sqrt{1+6a_{0}x+a_{0}^{2}x^{2}}}{2a_{0}}.$
If $a_{0}=1$, then
$f(x)=\frac{-1-x+\sqrt{1+6x+x^{2}}}{3+x-\sqrt{1+6x+x^{2}}}=\frac{-1+x+\sqrt{1+6x+x^{2}}}{4}=\sum_{n\geq
1}(-1)^{n-1}s_{n-1}x^{n},$
which gives (6), whereas if $a_{0}=-1$, then
$f(x)=\frac{1-x-\sqrt{1-6x+x^{2}}}{1+x+\sqrt{1-6x+x^{2}}}=\frac{1-3x-\sqrt{1-6x+x^{2}}}{4x}=\sum_{n\geq
1}s_{n}x^{n},$
which gives (7).
Similar proofs may be given for (8)–(11). Alternatively, formulas (8) and (11)
follow from (7) and (6), respectively, upon applying Lemma 2 since
$\displaystyle D_{+}(s_{1},\ldots,s_{n})=(-1)^{n-1}S_{n-1}\text{ if and only
if }$ $\displaystyle
D_{-}(S_{0},\ldots,S_{n-1})=D_{+}(S_{0},-S_{1},\ldots,(-1)^{n-1}S_{n-1})=s_{n}$
and
$\displaystyle D_{+}(S_{0},\ldots,S_{n-1})=(-1)^{n-1}s_{n-1}\text{ if and only
if }$ $\displaystyle
D_{-}(s_{0},\ldots,s_{n-1})=D_{+}(s_{0},-s_{1},\ldots,(-1)^{n-1}s_{n-1})=S_{n-1}.$
Finally, to show (12), let $a_{i}=s_{i+1}$ for $i\geq 1$ and $a_{0}=1$ to get
$g(x)=\sum_{i\geq
1}(-1)^{i-1}s_{i+1}x^{i}=\frac{-1-3x+4x^{2}+\sqrt{1+6x+x^{2}}}{4x^{2}}.$
Thus,
$\displaystyle f(x)$
$\displaystyle=\frac{g(x)}{1-g(x)}=\frac{-1-3x+4x^{2}+\sqrt{1+6x+x^{2}}}{1+3x-\sqrt{1+6x+x^{2}}}$
$\displaystyle=3x+\frac{1}{2}\left(-1-3x+\sqrt{1+6x+x^{2}}\right)=3x+\sum_{n\geq
2}(-1)^{n-1}S_{n-1}x^{n},$
which completes the proof. ∎
We have the following Fine number determinant formulas.
###### Theorem 5.
We have
$\displaystyle D_{+}(t_{1},t_{2},\ldots,t_{n})$ $\displaystyle=u_{n},$ (22)
$\displaystyle D_{-}(t_{1},t_{2},\ldots,t_{n})$ $\displaystyle=C_{n-1},$ (23)
$\displaystyle D_{+}(t_{2},t_{3},\ldots,t_{n+1})$
$\displaystyle=(-1)^{n-1}C_{n-1},\quad n\geq 2,$ (24) $\displaystyle
D_{-}(t_{2},t_{3},\ldots,t_{n+1})$ $\displaystyle=A137398[n],$ (25)
$\displaystyle D_{+}(t_{3},t_{4},\ldots,t_{n+2})$
$\displaystyle=(-1)^{n-1}A030238[n-1],$ (26) $\displaystyle
D_{+}(t_{4},t_{5},\ldots,t_{n+3})$ $\displaystyle=(-1)^{n-1}C_{n-1},\qquad
n\geq 3,$ (27)
where all formulas hold for $n\geq 1$ unless stated otherwise and $u_{n}$
denotes the sequence defined recursively by
$u_{n}=u_{n-1}+\sum_{i=1}^{n-2}(-1)^{i+1}C_{i}u_{n-i-1}$ if $n\geq 3$ with
$u_{1}=u_{2}=1$.
###### Proof.
Proofs comparable to those given for (4)–(12) may also be given for (22)–(27).
We illustrate using formula (27). First note that
$\sum_{n\geq 1}t_{n+3}x^{n}=\sum_{n\geq
3}t_{n+1}x^{n-2}=\frac{1}{x^{2}}\left(\frac{2}{1+2x+\sqrt{1-4x}}-1-x^{2}\right),$
and hence we have
$g(x)=\sum_{n\geq
1}(-1)^{n-1}t_{n+3}x^{n}=-\frac{1}{x^{2}}\left(\frac{1+2x-x^{2}+2x^{3}-(1+x^{2})\sqrt{1+4x}}{1-2x+\sqrt{1+4x}}\right).$
This gives
$\displaystyle\sum_{n\geq 3}\det(A_{n})x^{n}$
$\displaystyle=\frac{g(x)}{1-g(x)}-\det(A_{1})x-\det(A_{2})x^{2}$
$\displaystyle=\frac{-1-2x+x^{2}-2x^{3}+(1+x^{2})\sqrt{1+4x}}{1+2x-\sqrt{1+4x}}-2x+2x^{2}$
$\displaystyle=\frac{-1-4x-x^{2}+2x^{3}+(1+2x-x^{2})\sqrt{1+4x}}{1+2x-\sqrt{1+4x}}$
$\displaystyle=\frac{\left(-1-4x-x^{2}+2x^{3}+(1+2x-x^{2})\sqrt{1+4x}\right)\left(1+2x+\sqrt{1+4x}\right)}{4x^{2}}$
$\displaystyle=\frac{-2x^{2}\left(1+2x-2x^{2}-\sqrt{1+4x}\right)}{4x^{2}}=x\left(\frac{1-\sqrt{1+4x}}{-2x}-1+x\right)$
$\displaystyle=x\sum_{n\geq 2}C_{n}(-x)^{n}=\sum_{n\geq
3}(-1)^{n-1}C_{n-1}x^{n},$
which implies (27). ∎
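A numerical check of (23) and (24), using the same cofactor recurrence as in the sketch following Theorem 3 and assuming only the defining relations of $C_{n}$ and $t_{n}$:

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def hess_dets(a0, a):
    """det(A_1..A_n) via det(A_m) = sum_k (-1)^(k-1) a0^(k-1) a_k det(A_{m-k})."""
    n = len(a) - 1
    d = [1]
    for m in range(1, n + 1):
        d.append(sum((-1) ** (k - 1) * a0 ** (k - 1) * a[k] * d[m - k]
                     for k in range(1, m + 1)))
    return d

N = 9
t = [0] * (N + 3)
for n in range(N + 2):                   # t_{n+1} = C_n - sum_{k<n} C_k t_{n-k}
    t[n + 1] = catalan(n) - sum(catalan(k) * t[n - k] for k in range(n))

Dm = hess_dets(-1, [0] + t[1:N + 1])     # D_-(t_1, ..., t_n)
Dp = hess_dets(1,  [0] + t[2:N + 2])     # D_+(t_2, ..., t_{n+1})
assert all(Dm[n] == catalan(n - 1) for n in range(1, N + 1))                    # (23)
assert all(Dp[n] == (-1) ** (n - 1) * catalan(n - 1) for n in range(2, N + 1))  # (24)
```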
## 3 Combinatorial proofs
In this section, we provide combinatorial proofs of formulas (4)–(12) and
(22)–(27). Let us first recall the combinatorial interpretations of the sequences $S_{n}$, $s_{n}$ and $t_{n}$ that we will use in our proofs, and define some related terms. Let $\mathcal{P}_{n}$ denote the set of lattice
paths (called _Schröder_ paths) from the origin to the point $(2n,0)$ that
never go below the $x$-axis using $u=(1,1)$, $d=(1,-1)$ and $h=(2,0)$ steps.
Then $S_{n}=|\mathcal{P}_{n}|$ for all $n\geq 0$, where $\mathcal{P}_{0}$ is
understood to consist of the empty path of length zero. Half the horizontal
distance traversed by a Schröder path $\lambda$ will be referred to here as
the _length_ of $\lambda$ and is denoted by $|\lambda|$. Note that $|\lambda|$
equals the sum of numbers of $u$ and $h$ steps in $\lambda$. (We remark that
the term _semi-length_ is often used in the literature, instead of length, for
the quantity indicated, though we prefer the latter due to brevity.) An $h$
step connecting two points with $y$-coordinate $\ell\geq 0$ is said to be of
_height_ $\ell$. A _low_ $h$ step will refer to an $h$ step of height $0$. The
subset of $\mathcal{P}_{n}$ whose members contain no low $h$ steps will be
denoted by $\mathcal{Q}_{n}$, with its members referred to as _restricted_
Schröder paths. Then it is well-known that $s_{n}=|\mathcal{Q}_{n}|$ for
$n\geq 0$. Hence, since $S_{n}=2s_{n}$ if $n\geq 1$, we have that exactly half
the members of $\mathcal{P}_{n}$ are restricted.
Let $\mathcal{D}_{n}$ denote the subset of $\mathcal{P}_{n}$ whose members
contain no $h$ steps. Members of $\mathcal{D}_{n}$ are referred to as _Dyck_
paths, with $|\mathcal{D}_{n}|=C_{n}$ for $n\geq 0$. A member of
$\mathcal{D}_{n}$ is said to have a _peak_ of height $i$ where $1\leq i\leq n$
if there exists a $u$ directly followed by a $d$ in which the $u$ has ending
height $i$. Let $\mathcal{E}_{n}$ denote the subset of $\mathcal{D}_{n}$ whose
members contain no peaks of height $1$. Then it is well-known that $t_{n}=|\mathcal{E}_{n-1}|$ for $n\geq 1$, with $t_{0}=0$.
By a _return_ within a member of $\mathcal{P}_{n}$, we mean an $h$ or $u$ step
that terminates on the $x$-axis. A _terminal_ return is one that has endpoint
$(2n,0)$, with all other returns being referred to as _non-terminal_. By a
_unit_ within $\lambda\in\mathcal{P}_{n}$, we mean a subpath of $\lambda$
occurring between two adjacent returns or prior to the first return. Note that
a low $h$ step comprises its own unit with all other units of the form
$u\sigma d$ for some possibly empty Schröder path $\sigma$. Within members of
$\mathcal{E}_{n}$, all units must have length at least two, whereas members of
$\mathcal{Q}_{n}$ can also contain units of the form $ud$, but not $h$.
Finally, a member of $\mathcal{P}_{n}$ having no non-terminal returns is said
to be _primitive_. A primitive member $\lambda\in\mathcal{P}_{n}$ for $n\geq
2$ is necessarily of the form $\lambda=u\sigma d$, where
$\sigma\in\mathcal{P}_{n-1}$, and hence belongs to $\mathcal{Q}_{n}$.
We compute the determinant of an $n\times n$ Hessenberg–Toeplitz matrix using
the definition of a determinant as a signed sum over the set of permutations
$\sigma$ of $[n]$. In doing so, one need only consider those $\sigma$ whose
cycles when expressed disjointly each comprise a set of consecutive integers.
Such $\sigma$ are clearly in one-to-one correspondence with the compositions
of $n$, upon identifying the various cycle lengths with parts of a
composition. This implies that the determinant of a matrix $A_{n}$ of the form
(2) may be regarded as a weighted sum over the set of compositions of $n$. If
$a_{0}=1$ in this sum, then each part of size $r\geq 1$ has (signed) weight
given by $(-1)^{r-1}a_{r}$ (regardless of its position) and the weight of a
composition is the product of the weights of its constituent parts. One can
then define the sign of a composition as $(-1)^{n-m}$, where $m$ denotes the
number of its parts. On the other hand, when $a_{0}=-1$, every part of size
$r$ now contributes $a_{r}$ towards the weight of the composition. Thus,
assuming $a_{i}\geq 0$ for $i\geq 1$, each term in the determinant sum for
$A_{n}$ is non-negative in this case. Note that computing $\det(A_{n})$ where
$a_{0}=-1$ is equivalent to finding the permanent of the matrix obtained from
$A_{n}$ by replacing $a_{0}=-1$ with $a_{0}=1$.
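This expansion is straightforward to implement. Assuming that $D_{\pm}(a_{1},\ldots,a_{n})$ abbreviates $\det(A_{n})$ with superdiagonal entry $a_{0}=\pm 1$, which is how we read the description above (the precise form (2) appears earlier in the paper), the following sketch evaluates the composition sum and checks it against two identities read off from the combinatorial proofs below:

```python
def D(a, a0):
    """Composition expansion of det(A_n): each composition
    (r_1, ..., r_m) of n contributes (-a0)^(n-m) a[r_1] ... a[r_m].
    Here a = [None, a_1, ..., a_n]; the sum is evaluated via the
    equivalent recurrence det_m = sum_r (-a0)^(r-1) a_r det_{m-r}."""
    memo = {0: 1}
    def det(m):
        if m not in memo:
            memo[m] = sum((-a0) ** (r - 1) * a[r] * det(m - r)
                          for r in range(1, m + 1))
        return memo[m]
    return det(len(a) - 1)

# Large and little Schroeder numbers (s_n = S_n / 2 for n >= 1)
S = [1, 2, 6, 22, 90, 394, 1806, 8558]
s = [1] + [x // 2 for x in S[1:]]

for n in range(2, 8):
    assert D([None] + S[1:n + 1], 1) == (-1) ** (n - 1) * S[n - 1]   # cf. (4)
    assert D([None] + S[:n], -1) == s[n]                             # cf. (7)
```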
We now provide combinatorial proofs of the formulas from Theorems 3 and 5
above.
Proofs of (4), (5), (8) and (9):
Let $\mathcal{A}_{n}$ denote the set of marked Schröder paths of length $n$ in
which returns to the $x$-axis may be marked and whose final return is always
marked. Define the sign of $\lambda\in\mathcal{A}_{n}$ by
$(-1)^{n-\mu(\lambda)}$, where $\mu(\lambda)$ denotes the number of marked
returns of $\lambda$. Let $\mathcal{A}_{n}^{\prime}\subseteq\mathcal{A}_{n}$
consist of those members of $\mathcal{A}_{n}$ in which there are no low $h$
steps (marked or unmarked). Then $D_{+}(S_{1},\ldots,S_{n})$ and
$D_{+}(s_{1},\ldots,s_{n})$ give the sum of the signs of all members of
$\mathcal{A}_{n}$ and $\mathcal{A}_{n}^{\prime}$, respectively. To see this,
first suppose $\tau$ is a member of $\mathcal{A}_{n}$ or
$\mathcal{A}_{n}^{\prime}$ and is derived from the (weighted) composition
$\sigma$ in either determinant expansion. That is, $\tau$ is obtained from
$\sigma$ by overlaying a member of $\mathcal{P}_{r}$ or $\mathcal{Q}_{r}$ on
each part of $\sigma$ of size $r$ for every $r$, marking the final return of
each path and finally concatenating the paths in the same order as the parts
of $\sigma$. Then the sequence of part sizes of $\sigma$ corresponds to the
sequence of lengths of the subpaths occurring between adjacent marked returns
of $\tau$ (or prior to the first marked return), and, in particular, the
number of parts of $\sigma$ equals the number of marked returns of $\tau$.
Thus, the sign of $\sigma$ in the determinant expansion equals
$(-1)^{n-\mu(\tau)}$, the sign of $\tau$, and considering all $\tau$ associated with each $\sigma$
implies $D_{+}(S_{1},\ldots,S_{n})$ and $D_{+}(s_{1},\ldots,s_{n})$ give the
sum of the signs of the members of $\mathcal{A}_{n}$ and
$\mathcal{A}_{n}^{\prime}$, respectively, as claimed.
We define a sign-changing involution on $\mathcal{A}_{n}$ by identifying the
leftmost non-terminal return and either marking it or removing the marking
from it. The set of survivors of this involution consists of the _primitive_
members of $\mathcal{A}_{n}$. If $n\geq 2$, then there are $S_{n-1}$ primitive
members of $\mathcal{A}_{n}$, each of sign $(-1)^{n-1}$, which implies (4).
Since the survivors of the involution all belong to
$\mathcal{A}_{n}^{\prime}$, this establishes (8) as well.
On the other hand, it is seen from the preceding that
$D_{-}(S_{1},\ldots,S_{n})$ and $D_{-}(s_{1},\ldots,s_{n})$ give the
cardinalities of the sets $\mathcal{A}_{n}$ and $\mathcal{A}_{n}^{\prime}$,
respectively, since when $a_{0}=-1$ the sign of $\sigma$ is cancelled out by
the product of the superdiagonal $-1$ factors in the term corresponding to
$\sigma$ in the determinant expansion. We first show (9). Let
$\mathcal{P}_{n}^{*}$ denote the set of colored members of $\mathcal{P}_{n}$
wherein each low $h$ step is colored in one of three ways. Recall that one of the
combinatorial interpretations of $A225887[n]$ is that it gives the cardinality
of $\mathcal{P}_{n}^{*}$ for $n\geq 0$. Thus, to complete the proof of (9), it
suffices to define a bijection
$\phi:\mathcal{P}_{n-1}^{*}\rightarrow\mathcal{A}_{n}^{\prime}$. Let $h_{a}$,
$h_{b}$, $h_{c}$ denote the three kinds of colored low $h$ steps within
$\lambda\in\mathcal{P}_{n-1}^{*}$. We decompose $\lambda$ as
$\lambda=\lambda^{(1)}\cdots\lambda^{(r)}$ for some $r\geq 1$, where each
subpath $\lambda^{(i)}$ for $1\leq i\leq r-1$ ends in either $h_{a}$ or
$h_{b}$, with all other low $h$ steps in $\lambda$ (if there are any) equal to
$h_{c}$, and $\lambda^{(r)}$ is possibly empty. Note that if $\lambda$
contains no $h_{a}$ or $h_{b}$ steps, then we take $r=1$ and
$\lambda=\lambda^{(1)}$; further, if $\lambda$ ends in $h_{a}$ or $h_{b}$,
then $r\geq 2$ with $\lambda^{(r)}$ understood to be empty in this case. If
$1\leq i\leq r-1$ and $\lambda^{(i)}$ ends in $h_{a}$ with
$\lambda^{(i)}=\alpha_{i}h_{a}$, where $\alpha_{i}$ is possibly empty, then
let $\overline{\lambda}^{(i)}=u\alpha_{i}d$, where the final $d$ is marked
(i.e., the return to the $x$-axis associated with this $d$ is marked). If
$\lambda^{(i)}=\beta_{i}h_{b}$, then let
$\overline{\lambda}^{(i)}=u\beta_{i}d$, where the final $d$ is unmarked.
Finally, let $\overline{\lambda}^{(r)}=u\lambda^{(r)}d$, where the final $d$
is marked. Define
$\phi(\lambda)=\overline{\lambda}^{(1)}\cdots\overline{\lambda}^{(r)}$ as the
concatenation of the lattice paths $\overline{\lambda}^{(i)}$. Note that
$\phi$ can be reversed, and hence is bijective, as desired, upon considering
the positions of the returns and whether or not they are marked. Further, it
is seen that the number of $h_{c}$ steps within $\lambda$ equals the number of
$h$ steps of height $1$ within $\phi(\lambda)$ for all $\lambda$.
We now show (5). Let $\mathcal{\widetilde{P}}_{n}$ denote the set derived from
members of $\mathcal{P}_{n}$ by stipulating that the low $h$ steps come in one
of four kinds, denoted by $h^{(i)}$ for $1\leq i\leq 4$. Recall that
$A134425[n]$ gives $|\mathcal{\widetilde{P}}_{n}|$ for $n\geq 0$, so for (5),
we need to prove $|\mathcal{A}_{n}|=2|\mathcal{\widetilde{P}}_{n-1}|$ for
$n\geq 1$. We proceed inductively, noting that the $n=1$ case of the equality
is clear. Let $n\geq 2$ and we consider the following cases on members
$\lambda\in\mathcal{A}_{n}$: (i) $\lambda=\lambda^{\prime}h$, (ii)
$\lambda=\lambda^{\prime}\alpha$, where $\alpha\neq h$ is a unit and
$\lambda^{\prime}$ is nonempty, with the final return of $\lambda^{\prime}$
marked, or (iii) $\lambda=\lambda^{\prime}\beta$, where $\beta\neq h$ is a
unit and either $\lambda^{\prime}=\varnothing$ or
$\lambda^{\prime}\neq\varnothing$ with the final return of $\lambda^{\prime}$
not marked. We partition $\rho\in\mathcal{\widetilde{P}}_{n-1}$ as follows:
(I) $\rho$ ends in $h^{(1)}$ or $h^{(2)}$, (II) $\rho=\rho^{\prime}\alpha$,
where $\alpha\neq h^{(i)}$ for any $i$ is a unit and $\rho^{\prime}$ is
possibly empty, or (III) $\rho$ ends in $h^{(3)}$ or $h^{(4)}$.
We now demonstrate for each of (i)–(iii) that there are twice as many members
$\lambda\in\mathcal{A}_{n}$ as there are
$\rho\in\mathcal{\widetilde{P}}_{n-1}$ in the corresponding case (I)–(III).
Upon considering whether or not the final return in $\lambda^{\prime}$ is
marked, it is seen by the induction hypothesis that there are twice as many
$\lambda\in\mathcal{A}_{n}$ for which (i) applies as there are
$\rho\in\mathcal{\widetilde{P}}_{n-1}$ for which (I) applies. The same holds
true of (ii) and (II) as $\lambda^{\prime}$ in (ii) has length one greater
than that of $\rho^{\prime}$ in (II), with $\alpha$ the same in both cases. To
show that the same holds for cases (iii) and (III) above, observe first that
the number of possible $\rho\in\mathcal{\widetilde{P}}_{n-1}$ in (III) is
given by $2|\mathcal{\widetilde{P}}_{n-2}|$. Thus, to complete the proof of
(5), it is enough to prove that there are $2|\mathcal{A}_{n-1}|$ possible
$\lambda\in\mathcal{A}_{n}$ in (iii).
Let $\lambda=\lambda^{\prime}\beta\in\mathcal{A}_{n}$, where $\beta\neq h$ is
a unit and $\lambda^{\prime}$ does not have a marked final return. If
$\lambda^{\prime}=\varnothing$, i.e., $\lambda$ is primitive, then write
$\beta=u\beta^{\prime}d$ and regard $\beta^{\prime}$ as a member of
$\mathcal{A}_{n-1}$ in which only the final return is marked. Otherwise,
consider cases based on the length $\ell$ of $\beta$, where $1\leq\ell\leq
n-1$. If $\ell=1$, i.e., $\beta=ud$, then regard $\lambda^{\prime}$ as a
member of $\mathcal{A}_{n-1}$ by marking its last return. If $\ell\geq 2$,
then let $\beta=u\beta^{\prime}d$, where $\beta^{\prime}$ is nonempty. Then
form the lattice path $\sigma=\lambda^{\prime}\beta^{\prime}$ of length $n-1$,
wherein the last return of $\lambda^{\prime}$ and of $\beta^{\prime}$ are now
both marked (here, it is understood that all other returns of
$\lambda^{\prime}$ remain of the same status regarding whether or not they are
marked and that all non-terminal returns of $\beta^{\prime}$, if any, are
unmarked). Note that $\sigma\in\mathcal{A}_{n-1}$ with $\sigma$ containing at
least two marked returns. It is seen then that each member of
$\mathcal{A}_{n-1}$ arises exactly twice when one performs the operations
described above on the various members of $\mathcal{A}_{n}$ for which (iii)
applies, upon considering whether or not a member of $\mathcal{A}_{n-1}$
contains two or more marked returns, and if it does, additionally taking into
account the position of the rightmost non-terminal marked return. This
establishes the desired equality
$|\mathcal{A}_{n}|=2|\mathcal{\widetilde{P}}_{n-1}|$ for all $n\geq 1$, which
completes the proof of (5). ∎
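Each member of $\mathcal{A}_{n}$ is a Schröder path together with a choice of marks on its non-terminal returns, so $|\mathcal{A}_{n}|=\sum_{\lambda\in\mathcal{P}_{n}}2^{\operatorname{ret}(\lambda)-1}$; likewise $|\mathcal{\widetilde{P}}_{n-1}|=\sum_{\lambda\in\mathcal{P}_{n-1}}4^{\ell(\lambda)}$ and $|\mathcal{P}_{n-1}^{*}|=\sum_{\lambda\in\mathcal{P}_{n-1}}3^{\ell(\lambda)}$, where $\operatorname{ret}$ counts returns and $\ell$ counts low $h$ steps (our notation). The two equalities just established can therefore be confirmed numerically; a sketch reusing schroder_paths and has_low_h from the start of the section:

```python
def num_returns(p):
    """Number of steps of p (d's and low h's) ending on the x-axis."""
    y = r = 0
    for step in p:
        y += {'u': 1, 'd': -1, 'h': 0}[step]
        r += (y == 0)
    return r

def num_low_h(p):
    """Number of h steps of p at height 0."""
    y = c = 0
    for step in p:
        c += (step == 'h' and y == 0)
        y += {'u': 1, 'd': -1, 'h': 0}[step]
    return c

for n in range(1, 7):
    # |A_n| = 2 |P~_{n-1}|, the equality giving (5)
    A = sum(2 ** (num_returns(p) - 1) for p in schroder_paths(n))
    assert A == 2 * sum(4 ** num_low_h(p) for p in schroder_paths(n - 1))
    # |A'_n| = |P*_{n-1}|, the bijection phi giving (9)
    Ap = sum(2 ** (num_returns(p) - 1)
             for p in schroder_paths(n) if not has_low_h(p))
    assert Ap == sum(3 ** num_low_h(p) for p in schroder_paths(n - 1))
```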
Proofs of (6), (7), (10) and (11):
Let $\mathcal{A}_{n}$ be as in the previous proof and let
$\mathcal{B}_{n}\subseteq\mathcal{A}_{n}$ consist of those members in which
all marked returns (including the final return) correspond to low $h$ steps.
Let $\mathcal{B}_{n}^{\prime}\subseteq\mathcal{B}_{n}$ consist of those
members in which no low $h$ is unmarked. Define the sign of
$\lambda\in\mathcal{B}_{n}$ by $(-1)^{n-\mu(\lambda)}$, where $\mu(\lambda)$
denotes the number of marked low $h$’s. Reasoning as in the prior proof, we
have that $D_{+}(S_{0},\ldots,S_{n-1})$ and $D_{+}(s_{0},\ldots,s_{n-1})$ give
the sum of the signs of all members of $\mathcal{B}_{n}$ and
$\mathcal{B}_{n}^{\prime}$, respectively. To show (6), we define a sign-
changing involution on $\mathcal{B}_{n}$ by identifying the leftmost non-
terminal low $h$ and either marking it or removing the marking from it. This
involution fails to be defined for paths of the form $\lambda=\alpha h$, where
$\alpha\in\mathcal{Q}_{n-1}$ and $h$ is marked. Thus, there are $s_{n-1}$
survivors of the involution, each of sign $(-1)^{n-1}$, which implies (6). For
(10), we define an involution on $\mathcal{B}_{n}^{\prime}$ by identifying the
leftmost non-terminal (marked) low $h$ step or peak of height $1$ (i.e., unit of
the form $ud$) and replacing one option with the other. This involution is not
defined on members $\rho=\beta h$, where $\beta\in\mathcal{P}_{n-1}$ contains
no low $h$ steps or peaks of height $1$. Note that there are $A114710[n-1]$
such $\rho$ for all $n\geq 1$, each with sign $(-1)^{n-1}$, which implies
(10).
On the other hand, we have that $D_{-}(S_{0},\ldots,S_{n-1})$ and
$D_{-}(s_{0},\ldots,s_{n-1})$ give the cardinalities of the sets
$\mathcal{B}_{n}$ and $\mathcal{B}_{n}^{\prime}$, respectively. To show (7),
consider decomposing $\rho\in\mathcal{B}_{n}$ as
$\rho=\rho^{(1)}\cdots\rho^{(r)}$ for some $r\geq 1$, where each $\rho^{(i)}$
ends in a marked low $h$ step and contains no other marked steps. Write
$\rho^{(i)}=\alpha_{i}h$ for $1\leq i\leq r$, where $\alpha_{i}$ is possibly
empty. Define $\overline{\rho}^{(i)}=u\alpha_{i}d$ and let
$\overline{\rho}=\overline{\rho}^{(1)}\cdots\overline{\rho}^{(r)}$. Then the
mapping $\rho\mapsto\overline{\rho}$ is seen to define a bijection between
$\mathcal{B}_{n}$ and $\mathcal{Q}_{n}$ (to reverse it, consider positions of
the returns in members of $\mathcal{Q}_{n}$), and hence
$|\mathcal{B}_{n}|=s_{n}$, which implies (7). Finally, members of
$\mathcal{B}_{n}^{\prime}$ and $\mathcal{P}_{n-1}$ are seen to be synonymous,
upon removing the marking from all low $h$’s and disregarding the final $h$ in
members of the former, which implies (11). ∎
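Members of $\mathcal{B}_{n}$ are Schröder paths ending in a (marked) low $h$ step, with an arbitrary subset of the remaining low $h$ steps marked, so both counts above reduce to weighted sums over $\mathcal{P}_{n}$ that can be checked directly. A sketch, reusing schroder_paths, num_low_h and the lists S, s and the helper D from the earlier sketches:

```python
for n in range(1, 7):
    ending_in_low_h = [p for p in schroder_paths(n) if p.endswith('h')]
    B = sum(2 ** (num_low_h(p) - 1) for p in ending_in_low_h)
    assert B == s[n]                          # |B_n| = s_n, giving (7)
    assert len(ending_in_low_h) == S[n - 1]   # |B'_n| = |P_{n-1}|, giving (11)
    assert D([None] + s[:n], -1) == S[n - 1]  # (11) via the expansion of D_-
```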
Proof of (12):
Let $\mathcal{J}_{n,k}$ for $1\leq k\leq n$ denote the set of ordered
$k$-tuples $\lambda=(\lambda_{1},\ldots,\lambda_{k})$ wherein each
$\lambda_{i}$ is a restricted Schröder path having length at least two such
that $\sum_{i=1}^{k}|\lambda_{i}|=n+k$. Define the sign of
$\lambda\in\mathcal{J}_{n,k}$ by $(-1)^{n-k}$ and let
$\mathcal{J}_{n}=\cup_{k=1}^{n}\mathcal{J}_{n,k}$. Then we have that
$D_{+}(s_{2},\ldots,s_{n+1})$ gives the sum of the signs of all members of
$\mathcal{J}_{n}$. We define a sign-changing involution of $\mathcal{J}_{n}$
which makes use of several cases as follows. First suppose that the final
component $\lambda_{k}$ of $\lambda\in\mathcal{J}_{n,k}$ is _not_ primitive.
If $\lambda_{k}=u\sigma d\tau$, where $\sigma$ is a possibly empty Schröder
path and $|\tau|\geq 2$, then replace $\lambda_{k}$ with the two components
$\lambda_{k}=\tau$ and $\lambda_{k+1}=u\sigma dud$, leaving all other
components of $\lambda$ unchanged. We perform the reverse operation, i.e.,
fusing the last two components and dropping $ud$, if the last component
consists of a unit followed by $ud$. This pairs all members of
$\mathcal{J}_{n}$ in which the final component is not primitive except for
those belonging to $\mathcal{J}_{n,1}$ where $\lambda_{1}=u\sigma dud$ for
some $\sigma$.
Now suppose $\lambda_{k}$ within $\lambda$ is primitive. First assume
$|\lambda_{k}|\geq 3$, and we consider the following further subcases:
$\displaystyle(\text{i})~{}$ $\displaystyle\lambda_{k}=u\sigma d,\text{ with
}\sigma\text{ containing no low }h\text{'s}\text{ and }|\sigma|\geq 2,$
$\displaystyle(\text{ii})~{}$
$\displaystyle\lambda_{k}=u\sigma^{\prime}h\sigma^{\prime\prime}d,\text{ with
}\sigma^{\prime}\neq\varnothing\text{ and containing no low }h\text{'s}\text{
and }\sigma^{\prime\prime}\text{ possibly empty},$
$\displaystyle(\text{iii})~{}$ $\displaystyle\lambda_{k}=uh\sigma d,\text{
with }\sigma\neq\varnothing,$
where $\sigma,\sigma^{\prime},\sigma^{\prime\prime}$ denote Schröder paths.
(Note that by $\sigma$ or $\sigma^{\prime}$ not containing a low $h$ in the
preceding, we mean when $\sigma$ or $\sigma^{\prime}$ is viewed by itself
starting from the origin.) Now suppose
$\rho=(\rho_{1},\ldots,\rho_{k})\in\mathcal{J}_{n,k}$, with $\rho_{k}$
primitive and $|\rho_{k}|=2$. We consider the following subcases: (I)
$\rho_{k}=u^{2}d^{2}$, (II) $\rho_{k}=uhd$, with $\rho_{k-1}$ not primitive,
or (III) $\rho_{k}=uhd$, with $\rho_{k-1}$ primitive. Note that $n\geq 2$
implies $k\geq 2$ in (I)–(III) and hence a penultimate component exists in
each case. We now perform the following operations on the members of
$\mathcal{J}_{n,k}$ in (i)–(iii) above (leaving all other components
unchanged):
$\displaystyle(\text{a})~{}$ $\displaystyle\lambda_{k}=u\sigma
d\leftrightarrow\lambda_{k}=\sigma,~{}\lambda_{k+1}=u^{2}d^{2},$
$\displaystyle(\text{b})~{}$
$\displaystyle\lambda_{k}=u\sigma^{\prime}h\sigma^{\prime\prime}d\leftrightarrow\lambda_{k}=\sigma^{\prime}u\sigma^{\prime\prime}d,~{}\lambda_{k+1}=uhd,$
$\displaystyle(\text{c})~{}$ $\displaystyle\lambda_{k}=uh\sigma
d\leftrightarrow\lambda_{k}=u\sigma d,~{}\lambda_{k+1}=uhd.$
Note that the assumptions on $\sigma,\sigma^{\prime},\sigma^{\prime\prime}$ in
(i)–(iii) imply that these operations are well-defined and it is seen that
they are reversible in each case. Hence, they provide bijections between the
members of $\mathcal{J}_{n}$ satisfying (i), (ii) or (iii) and those
satisfying (I), (II) or (III), respectively. Since the number of components
changes by one in all cases, each member of $\mathcal{J}_{n}$ whose final
component is primitive is paired with another of opposite sign. Thus, when
taken together with the pairing defined in the preceding paragraph, we have
that all members of $\mathcal{J}_{n}$ are paired except for
$\lambda=(\lambda_{1})\in\mathcal{J}_{n,1}$ such that $\lambda_{1}=u\sigma
dud$ for some $\sigma\in\mathcal{P}_{n-1}$. There are $S_{n-1}$ possibilities
for these $\lambda$, each having sign $(-1)^{n-1}$, which implies formula
(12). ∎
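Assuming formula (12) reads $D_{+}(s_{2},\ldots,s_{n+1})=(-1)^{n-1}S_{n-1}$, as the count of unpaired members indicates, it can be confirmed with the composition-sum helper D and the lists S, s from the earlier sketches:

```python
for n in range(2, 7):
    assert D([None] + s[2:n + 2], 1) == (-1) ** (n - 1) * S[n - 1]
```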
Proofs of (22) and (23):
We first find a combinatorial interpretation for $D_{-}(t_{1},\ldots,t_{n})$.
A _short_ unit within a member of $\mathcal{D}_{n}$ will refer to a unit
having length one (i.e., equal to $ud$), with all other units being referred
to as _long_. Let $\mathcal{D}_{n}^{\prime}$ denote the subset of
$\mathcal{D}_{n}$ whose members end in a short unit; hence
$|\mathcal{D}_{n}^{\prime}|=C_{n-1}$ for $n\geq 1$. Suppose $\rho$ is a
(weighted) composition of $n$ with $m$ parts occurring in the expansion of
$D_{-}(t_{1},\ldots,t_{n})$. On a part of size $r$ within $\rho$, we overlay
$\alpha\in\mathcal{E}_{r-1}$ followed by $ud$. We do this for each part of
$\rho$ and concatenate the resulting lattice paths $\alpha ud$ to obtain a
member of $\mathcal{D}_{n}^{\prime}$ in which there are $m$ short units
altogether. Upon considering all possible $m$, we have that
$D_{-}(t_{1},\ldots,t_{n})$ gives the cardinality of
$\mathcal{D}_{n}^{\prime}$, which implies (23).
To show (22), first note that $D_{+}(t_{1},\ldots,t_{n})$ gives the sum of the
signs of all $\lambda\in\mathcal{D}_{n}^{\prime}$, where the sign of $\lambda$
is defined as $(-1)^{n-\nu(\lambda)}$ and $\nu$ denotes the statistic
recording the number of short units. Let $r_{n}=D_{+}(t_{1},\ldots,t_{n})$ for
$n\geq 1$; clearly, we have $r_{1}=r_{2}=1$, so we may assume $n\geq 3$. Let
$\rho\in\mathcal{D}_{n}^{\prime}$. If the first unit of $\rho$ has length
$i+1$ for some $1\leq i\leq n-2$, then the contribution towards the sum of
signs is given by $(-1)^{i+1}C_{i}r_{n-i-1}$. Summing over all $i$ yields a
total contribution of $\sum_{i=1}^{n-2}(-1)^{i+1}C_{i}r_{n-i-1}$ for members
of $\mathcal{D}_{n}^{\prime}$ whose first unit is long. On the other hand, if
the first unit is short, then there are $r_{n-1}$ possibilities as no
adjustment for the sign is required when prepending a short unit to a member
of $\mathcal{D}_{n-1}^{\prime}$. Combining the prior cases of $\rho$ implies
$r_{n}$ satisfies the desired recurrence and completes the proof. ∎
Proofs of (24) and (25):
Let $\mathcal{L}_{n}$ denote the set of marked members of $\mathcal{E}_{n}$
wherein the first unit is not marked and all other units may be marked. Define
the sign of $\lambda\in\mathcal{L}_{n}$ by $(-1)^{n-\mu(\lambda)}$, where
$\mu(\lambda)$ denotes the number of unmarked units of $\lambda$. Then
$D_{+}(t_{2},\ldots,t_{n+1})$ and $D_{-}(t_{2},\ldots,t_{n+1})$ are seen to
give the sum of signs and cardinality, respectively, of the members of
$\mathcal{L}_{n}$. To show (24), define an involution on $\mathcal{L}_{n}$ by
marking or unmarking the second unit, if it exists. This operation is not
defined on the primitive members of $\mathcal{L}_{n}$, each of which has sign
$(-1)^{n-1}$. Since the primitive members of $\mathcal{L}_{n}$ have
cardinality $C_{n-1}$ for $n\geq 2$, formula (24) is established.
To show (25), let $b_{n}=D_{-}(t_{2},\ldots,t_{n+1})$ for $n\geq 1$ and note
$b_{n}=C_{n-1}+2\sum_{k=1}^{n-3}C_{k}b_{n-k-1},\qquad n\geq 3,$ (28)
with $b_{1}=0$ and $b_{2}=1$, upon considering whether or not a member of
$\mathcal{L}_{n}$ is primitive and, if not, taking into account the length
$k+1$ of the first unit, where $1\leq k\leq n-3$. Here, the factor of 2
accounts for the choice concerning whether or not the second unit is marked in
the latter case. In order to establish $b_{n}=A137398[n]$, we must show that
$b_{n}$ satisfies the defining recurrence for $A137398[n]$, i.e.,
$b_{n}=2b_{n-1}+2b_{n-2}+\sum_{k=1}^{n-3}C_{k}b_{n-k-1},\qquad n\geq 4.$ (29)
Comparing (28) and (29), to complete the proof of (25), it suffices to show
$C_{n-1}+\sum_{k=2}^{n-3}C_{k}b_{n-k-1}=2b_{n-1}+b_{n-2},\qquad n\geq 4.$ (30)
We may assume $n\geq 5$ in (30) since it is seen to hold for $n=4$.
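Before turning to the combinatorial argument for (30), we note that the consistency of (28), (29) and (30) is easy to confirm numerically; in the following sketch (ours, for illustration) the values $b_{n}$ are generated from (28):

```python
from math import comb

C = lambda n: comb(2 * n, n) // (n + 1)      # Catalan numbers

b = {1: 0, 2: 1}
for n in range(3, 16):                       # recurrence (28)
    b[n] = C(n - 1) + 2 * sum(C(k) * b[n - k - 1] for k in range(1, n - 2))

for n in range(4, 16):
    # the defining recurrence (29) for A137398
    assert b[n] == 2 * b[n - 1] + 2 * b[n - 2] + \
        sum(C(k) * b[n - k - 1] for k in range(1, n - 2))
    # the reduction (30)
    assert C(n - 1) + sum(C(k) * b[n - k - 1] for k in range(2, n - 2)) \
        == 2 * b[n - 1] + b[n - 2]
```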
To prove (30), we describe a combinatorial structure enumerated by the left
side of the identity and show that this structure is also enumerated by the
right. We will make use of the same descriptors short and long as before when
referring to units of varying length. Let $\mathcal{Y}_{n}$ denote the set of
all marked Dyck paths of length $n$ containing at least one short unit wherein
long units occurring to the right of the rightmost short unit (if there are
any) may be marked, but where the first such long unit is always unmarked.
Further, we require that the rightmost short unit within a member of
$\mathcal{Y}_{n}$ correspond to the $(2i-1)$-st and $(2i)$-th steps for some
$i\geq 3$. Note that there are $C_{n-1}$ members of $\mathcal{Y}_{n}$ ending
in a short unit, upon appending $ud$ to any member of $\mathcal{D}_{n-1}$.
Otherwise, $\lambda\in\mathcal{Y}_{n}$ is expressible as
$\lambda=\lambda^{\prime}ud\lambda^{\prime\prime}$, where $\lambda^{\prime}$
is any Dyck path with $|\lambda^{\prime}|\geq 2$ and $\lambda^{\prime\prime}$
is nonempty and consists of long units that may be marked, except for the
first, which is always unmarked. Then there are $C_{k}b_{n-k-1}$ possibilities
for $\lambda$ in which $|\lambda^{\prime}|=k$ and considering all possible
$k\in[2,n-3]$ implies that there are $\sum_{k=2}^{n-3}C_{k}b_{n-k-1}$ members
of $\mathcal{Y}_{n}$ that end in a long unit. Thus, we have that the left-hand
side of (30) gives $|\mathcal{Y}_{n}|$.
We now show that $2b_{n-1}+b_{n-2}$ also gives $|\mathcal{Y}_{n}|$. First let
us take two copies of each $\alpha\in\mathcal{L}_{n-1}$, where it is assumed
for now that $\alpha$ contains at least one marked unit. Then write
$\alpha=\alpha_{1}\cdots\alpha_{\ell-1}\alpha_{\ell}\cdots\alpha_{r}$, where
the $\alpha_{i}$ denote the units of $\alpha$, the leftmost marked unit is
$\alpha_{\ell}$ and $2\leq\ell\leq r$. Within the first copy of $\alpha$, we
insert $ud$ directly between the units $\alpha_{\ell-1}$ and $\alpha_{\ell}$.
Within the second copy of $\alpha$, we replace $\alpha_{\ell-1}$ with
$ud\alpha_{\ell-1}^{\prime}ud$, where
$\alpha_{\ell-1}=u\alpha_{\ell-1}^{\prime}d$. In both cases, we remove the
mark from the unit $\alpha_{\ell}$ and leave all other units of $\alpha$
undisturbed. On the other hand, if $\alpha\in\mathcal{L}_{n-2}$ contains a
marked unit and is decomposed into units as above, then we insert $udud$
between the units $\alpha_{\ell-1}$ and $\alpha_{\ell}$ and remove the mark
from $\alpha_{\ell}$. Note that the operations described in this paragraph
yield uniquely all members of $\mathcal{Y}_{n}$ not ending in $ud$ and can be
reversed by considering the position of the rightmost short unit and taking
into account whether there are one or more short units. If there are more than
one, then consider further whether or not the leftmost and rightmost short
units are adjacent.
So it remains to show
$2|\\{\alpha\in\mathcal{L}_{n-1}:\alpha\text{ has no marked
units}\\}|+|\\{\alpha\in\mathcal{L}_{n-2}:\alpha\text{ has no marked
units}\\}|$
equals the number of members of $\mathcal{Y}_{n}$ ending in $ud$ (recall that
this number is $C_{n-1}$). Note that this equality is equivalent to the known
relation $2t_{n}+t_{n-1}=C_{n-1}$ for $n\geq 2$; for a combinatorial proof, we
refer the reader to [6, Section 3]. This completes the proof of (30), as
desired. ∎
Proof of (26):
Let $\mathcal{M}_{n,k}$ for $1\leq k\leq n$ denote the set of ordered
$k$-tuples $(\lambda_{1},\ldots,\lambda_{k})$ such that each $\lambda_{i}$ is
a nonempty Dyck path all of whose units are long, with
$\sum_{i=1}^{k}|\lambda_{i}|=n+k$. Let members of $\mathcal{M}_{n,k}$ have
sign $(-1)^{n-k}$. Then it is seen that $D_{+}(t_{3},\ldots,t_{n+2})$ gives
the sum of the signs of all members of $\mathcal{M}_{n}$, where
$\mathcal{M}_{n}=\cup_{k=1}^{n}\mathcal{M}_{n,k}$. Before defining an
involution on $\mathcal{M}_{n}$, let us recall a definition. By a _valley_ of
height $j$ within a Dyck path where $j\geq 0$, we mean a $d$ directly followed
by a $u$ step in which the $u$ has starting height $j$. A _special_ valley
will refer to one of height $1$. Let
$\lambda=(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{M}_{n,k}$ and suppose
first that the component $\lambda_{k}$ contains at least one special valley.
We decompose $\lambda_{k}$ as $\lambda_{k}=\alpha\bf{du}\beta$, where $\alpha$
and $\beta$ contain $2a$ and $2b$ steps respectively and $\bf{du}$ denotes the
rightmost special valley. Note that $a,b\geq 1$, with $|\lambda_{k}|=a+b+1$.
Let $\lambda^{*}$ be obtained from $\lambda$ by replacing $\lambda_{k}$ with
the two components $\lambda_{k}=\alpha d^{2}$ and $\lambda_{k+1}=u^{2}\beta$,
keeping all other components of $\lambda$ the same. One may verify
$\lambda_{k}\in\mathcal{E}_{a+1}$, $\lambda_{k+1}\in\mathcal{E}_{b+1}$, and
hence $\lambda^{*}\in\mathcal{M}_{n,k+1}$, with $\lambda_{k+1}$ containing no
special valleys. If it is the case that $\lambda\in\mathcal{M}_{n,k}$ for some $k>1$
with $\lambda_{k}$ containing no special valleys, then $\lambda^{*}$ is
obtained from $\lambda$ by reversing the operation described above. The
mapping $\lambda\mapsto\lambda^{*}$ is an involution of $\mathcal{M}_{n}$
which always changes the sign and is not defined on
$\mathcal{M}_{n}^{\prime}\subseteq\mathcal{M}_{n}$ consisting of those
$\lambda=(\rho)\in\mathcal{M}_{n,1}$ such that $\rho$ contains no special
valleys.
To enumerate the members of $\mathcal{M}_{n}^{\prime}$, note that $\rho$ can
be decomposed into units as $\rho=\rho_{1}\cdots\rho_{j}$ for some $j\geq 1$,
where $\rho_{i}=u^{2}\rho_{i}^{\prime}d^{2}$ for each $i$ with
$\rho_{i}^{\prime}$ possibly empty. Let $a(n,j)$ denote the number of members
of $\mathcal{D}_{n}$ that have $j$ returns. Then removal of the initial $u$
and the final $d$ from each unit $\rho_{i}$ within $\rho$ implies that there
are $a(n+1-j,j)$ possible $\rho$, and summing over all $j$ yields
$|\mathcal{M}_{n}^{\prime}|=\sum_{j=1}^{\lfloor(n+1)/2\rfloor}a(n+1-j,j)$.
Recall that one of the combinatorial interpretations of $A030238[n]$ is that it is
given explicitly as $\sum_{j=1}^{\lfloor(n+2)/2\rfloor}a(n+2-j,j)$. Hence,
$|\mathcal{M}_{n}^{\prime}|=A030238[n-1]$ for $n\geq 1$. Since each member of
$\mathcal{M}_{n}^{\prime}$ has sign $(-1)^{n-1}$, the proof of (26) is
complete. ∎
Proof of (27):
Let $\mathcal{T}_{n,k}$ denote the set of ordered $k$-tuples
$(\lambda_{1},\ldots,\lambda_{k})$ such that each $\lambda_{i}$ is a Dyck path
of length at least three all of whose units are long, with
$\sum_{i=1}^{k}|\lambda_{i}|=n+2k$. Let members of $\mathcal{T}_{n,k}$ have
sign $(-1)^{n-k}$ and let $\mathcal{T}_{n}=\cup_{k=1}^{n}\mathcal{T}_{n,k}$.
Then we have that $D_{+}(t_{4},\ldots,t_{n+3})$ gives the sum of signs of all
members of $\mathcal{T}_{n}$. Let
$\mathcal{T}_{n}^{\prime}\subseteq\mathcal{T}_{n}$ consist of
$(\lambda_{1})\in\mathcal{T}_{n,1}$ such that $\lambda_{1}$ is expressible as
$\lambda_{1}=u^{2}d^{2}\alpha$, where $\alpha$ is a unit. Note that $n\geq 3$
implies $|\alpha|\geq 3$ and hence $\alpha$ is long, as required. As there are
$C_{n-1}$ possibilities for $\lambda_{1}$, we have
$\sigma(\mathcal{T}_{n}^{\prime})=(-1)^{n-1}C_{n-1}$, where $\sigma(S)$
denotes the sum of the signs of the members of a subset $S$ of
$\mathcal{T}_{n}$. Below, we define in several steps a sign-changing
involution on the entirety of $\mathcal{T}_{n}-\mathcal{T}_{n}^{\prime}$ when
$n\geq 3$, which implies (27).
We first partition $\mathcal{T}_{n}-\mathcal{T}_{n}^{\prime}$ into three
subsets $\mathcal{U}_{n}$, $\mathcal{V}_{n}$ and $\mathcal{W}_{n}$ given by
$\displaystyle(\text{i})~{}$
$\displaystyle\mathcal{U}_{n}=\\{(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{T}_{n}-\mathcal{T}_{n}^{\prime}:\lambda_{k}\
\text{ not primitive}\\},$ $\displaystyle(\text{ii})~{}$
$\displaystyle\mathcal{V}_{n}=\\{(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{T}_{n}-\mathcal{T}_{n}^{\prime}:\lambda_{k}\
\text{ primitive and contains no special peaks}\\},$
$\displaystyle(\text{iii})~{}$
$\displaystyle\mathcal{W}_{n}=\\{(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{T}_{n}-\mathcal{T}_{n}^{\prime}:\lambda_{k}\
\text{ primitive and contains at least one special peak}\\},$
where $k\geq 1$ in each case and a _special_ peak is one of height two. We
first define involutions on $\mathcal{U}_{n}$ and $\mathcal{V}_{n}$. Let
$(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{U}_{n}$ and suppose
$\lambda_{k}=\alpha\beta$, where $|\alpha|\geq 2$ and $\beta$ is a unit. Then we
replace the component $\lambda_{k}$ with the two components
$\lambda_{k}=\alpha$ and $\lambda_{k+1}=u^{2}d^{2}\beta$, if $|\alpha|\geq 3$,
or perform the inverse operation if $|\alpha|=2$ (i.e., $\alpha=u^{2}d^{2}$).
Note that the possible case where $k=1$ and $\lambda_{1}=u^{2}d^{2}\beta$ has
been excluded from consideration since such members of $\mathcal{T}_{n}$
belong to $\mathcal{T}_{n}^{\prime}$. Thus, the two operations defined above
taken together yield an involution, which we will denote by $\phi$, that is
defined on all of $\mathcal{U}_{n}$.
Now suppose $(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{V}_{n}$. Then either
$|\lambda_{k}|\geq 4$ and is primitive with no special peaks or
$\lambda_{k}=u^{3}d^{3}$. In the former case, we decompose $\lambda_{k}$ as
$\lambda_{k}=u\alpha d$, where $|\alpha|\geq 3$. If $|\lambda_{k}|\geq 4$, then
replace the component $\lambda_{k}=u\alpha d$ with the two components
$\lambda_{k}=\alpha$ and $\lambda_{k+1}=u^{3}d^{3}$, keeping all other
components the same. Note that $\lambda_{k}$ containing no special peaks
implies that the penultimate component $\alpha$ in the resulting member of
$\mathcal{T}_{n}$ contains no short units, as required. If the final component
$\lambda_{k}$ equals $u^{3}d^{3}$, then perform the inverse operation, noting
that $n\geq 3$ implies $k\geq 2$ in this case. Thus, the two operations taken
together yield an involution, which we will denote by $\psi$, that is defined
on all of $\mathcal{V}_{n}$.
Define the subset $\mathcal{W}_{n}(1)$ of $\mathcal{W}_{n}$ as follows:
$\displaystyle\mathcal{W}_{n}(1)=\\{(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{W}_{n}:$
$\displaystyle\,\lambda_{k}=u\alpha ud\beta d,\text{ where }|\alpha|\geq
1\text{ and }\beta\text{ contains only long units }$ $\displaystyle\text{ and
is possibly empty}\\}.$
In Lemma 6 below, it is shown that $\sigma(\mathcal{W}_{n}(1))=0$.
Now define the subset $\mathcal{W}_{n}(2)$ of $\mathcal{W}_{n}$ as consisting
of those $(\lambda_{1},\ldots,\lambda_{k})$ such that one of the following two
conditions holds:
$\displaystyle(\text{a})~{}$ $\displaystyle k\geq 1\text{ and
}\lambda_{k}=u(ud)\beta d,\text{ where }\beta\text{ consists of two or more
long units, or}$ $\displaystyle(\text{b})~{}$ $\displaystyle k\geq 2\text{ and
}\lambda_{k}=u(ud)\tau d,\text{ where }\tau\text{ is a single long unit, and
}\lambda_{k-1}=u(ud)\beta d,\text{ where }\beta\text{ consists }$
$\displaystyle\text{of one or more long units}.$
Define an involution of $\mathcal{W}_{n}(2)$ by breaking apart or combining
the final two components as indicated:
$\lambda_{k}=u(ud)\beta
d\leftrightarrow\lambda_{k}=u(ud)\beta^{\prime}d,\,\lambda_{k+1}=u(ud)\tau d,$
where $\beta$ consists of two or more long units, the first of which is
denoted by $\tau$, and $\beta^{\prime}=\beta-\tau$.
Let
$\mathcal{W}_{n}^{\prime}=\mathcal{W}_{n}-\mathcal{W}_{n}(1)-\mathcal{W}_{n}(2)$.
Note that $(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{W}_{n}^{\prime}$
implies $\lambda_{k}=u(ud)\tau d$, where $\tau$ is a long unit. We decompose
$\mathcal{W}_{n}^{\prime}$ as
$\mathcal{W}_{n}^{\prime}=\cup_{i=1}^{4}\mathcal{W}_{n}^{\prime}(i)$, where
$\mathcal{W}_{n}^{\prime}(i)$ for $1\leq i\leq 4$ consists of those
$(\lambda_{1},\ldots,\lambda_{k})$ in $\mathcal{W}_{n}^{\prime}$ satisfying
respectively
$\displaystyle(\text{1})~{}$ $\displaystyle k=1,$ $\displaystyle(\text{2})~{}$
$\displaystyle k\geq 2\text{ and }\lambda_{k-1}\text{ is not primitive},$
$\displaystyle(\text{3})~{}$ $\displaystyle k\geq 2\text{ and
}\lambda_{k-1}\text{ is primitive with no special peaks, or}$
$\displaystyle(\text{4})~{}$ $\displaystyle k\geq 2\text{ and
}\lambda_{k-1}=u\alpha(ud)\beta d,\text{ where }|\alpha|\geq 1\text{ and
}\beta,\text{ possibly empty, consists of long units}.$
Below, it is shown in Lemma 7 that $\sigma(\mathcal{W}_{n}^{\prime})=0$ using
the cases above, and hence $\sigma(\mathcal{W}_{n})=0$. This implies
$\sigma(\mathcal{T}_{n}-\mathcal{T}_{n}^{\prime})=0$, as desired. ∎
###### Lemma 6.
If $n\geq 2$, then $\sigma(\mathcal{W}_{n}(1))=0$.
###### Proof.
The result is readily shown if $n=2$, so we may assume $n\geq 3$. We pair
members of $\mathcal{W}_{n}(1)$ of opposite sign by either breaking apart the
last component or combining the last two components as indicated:
$\lambda_{k}=u\alpha ud\beta d,\,|\alpha|\geq
2\leftrightarrow\lambda_{k}=u\alpha d,\,\lambda_{k+1}=u(ud)^{2}\beta d.$
The set of survivors of this involution consists of those $k$-tuples
$(\lambda_{1},\ldots,\lambda_{k})$ such that either (i) $k\geq 2$ and
$\lambda_{k}=u(ud)^{2}\beta d$, with $\beta$ consisting of long units if
nonempty and $\lambda_{k-1}$ not primitive, or (ii) $k=1$ and
$\lambda_{1}=u(ud)^{2}\beta d$, with $\beta$ as in (i). Note that $n\geq 3$
implies $\beta\neq\varnothing$ in the latter case. On the survivors satisfying
condition (i), we apply the involution $\phi$ defined above to the
$(k-1)$-tuple comprising the first $k-1$ components and then append
$\lambda_{k}$ to the resulting vector. Thus, all members satisfying (i) are
paired except for those in which $k=2$ with $\lambda_{1}=u^{2}d^{2}\tau$ and
$\lambda_{2}=u(ud)^{2}\beta d$, where $\beta$ consists of long units and
$\tau$ is a single (long) unit.
Suppose $|\tau|=i+1$ in the decomposition of $\lambda_{1}$. This implies
$|\beta|=(n+4)-|\lambda_{1}|-3=n-2-i$
in $\lambda_{2}$, and thus $\beta\in\mathcal{E}_{n-2-i}$. Hence summing over
all $i$ yields $\sum_{i=1}^{n-2}C_{i}t_{n-1-i}$ possible ordered pairs
$(\lambda_{1},\lambda_{2})$. Further, the survivors in case (ii) above have
cardinality $t_{n}$ since $\beta$ has length $n-1$ and contains only long
units. Thus, the sum of the signs of the remaining unpaired members of
$\mathcal{W}_{n}(1)$ is given by
$(-1)^{n-2}\sum_{i=1}^{n-2}C_{i}t_{n-1-i}+(-1)^{n-1}t_{n}=0,$
as desired, upon observing the recurrence
$t_{n}=\sum_{i=1}^{n-2}C_{i}t_{n-1-i}$ for $n\geq 3$. Note that this
recurrence may be easily realized combinatorially by considering the length
$i+1$ of the first unit within a member of $\mathcal{E}_{n-1}$. Thus, if
desired, it is straightforward to pair the remaining members of
$\mathcal{W}_{n}(1)$ of opposite sign upon considering the position of the
first return within a member of $\mathcal{E}_{n-1}$. ∎
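The recurrence $t_{n}=\sum_{i=1}^{n-2}C_{i}t_{n-1-i}$ used in the last step is easily confirmed for small $n$ (a sketch reusing the Catalan helper C above; the $t$-values can be recomputed with the Dyck-path sketch from the start of the section):

```python
t = [0, 1, 0, 1, 2, 6, 18, 57, 186]          # t_n = |E_{n-1}|, t_0 = 0
for n in range(3, 9):
    assert t[n] == sum(C(i) * t[n - 1 - i] for i in range(1, n - 1))
```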
###### Lemma 7.
If $n\geq 3$, then
$\sigma(\cup_{i=1}^{3}\mathcal{W}_{n}^{\prime}(i))=-\sigma(\mathcal{W}_{n}^{\prime}(4))=(-1)^{n-1}C_{n-2}$,
and hence $\sigma(\mathcal{W}_{n}^{\prime})=0$.
###### Proof.
We consider several cases on
$\lambda=(\lambda_{1},\ldots,\lambda_{k})\in\mathcal{W}_{n}^{\prime}$ whose
last component $\lambda_{k}$ is given by $\lambda_{k}=u(ud)\tau d$, where
$\tau$ is a long unit. If $\lambda\in\mathcal{W}_{n}^{\prime}(1)$, then $k=1$
implies $|\tau|=n$ and thus
$\sigma(\mathcal{W}_{n}^{\prime}(1))=(-1)^{n-1}C_{n-1}$. If
$\lambda\in\mathcal{W}_{n}^{\prime}(2)$, we apply the mapping $\phi$ defined
above to $\lambda^{\prime}=(\lambda_{1},\ldots,\lambda_{k-1})$ and then append
$\lambda_{k}$ to $\phi(\lambda^{\prime})$. This operation yields an involution
on $\mathcal{W}_{n}^{\prime}(2)$ that is not defined for those members in
which $k=2$ with $\lambda_{1}=u^{2}d^{2}\sigma$ and $\sigma$ is a unit. Upon
considering $|\sigma|=i+1$ for $1\leq i\leq n-3$, one gets
$\sum_{i=1}^{n-3}C_{i}C_{n-2-i}=C_{n-1}-2C_{n-2}$
unpaired members of $\mathcal{W}_{n}^{\prime}(2)$, by the recurrence for the
Catalan numbers. If $\lambda\in\mathcal{W}_{n}^{\prime}(3)$, we apply the
mapping $\psi$ defined above to $\lambda^{\prime}$ and then append
$\lambda_{k}$ to $\psi(\lambda^{\prime})$. This operation yields an involution
on $\mathcal{W}_{n}^{\prime}(3)$ except for those members where $k=2$ and
$\lambda_{1}=u^{3}d^{3}$, of which there are $C_{n-2}$ possibilities.
Combining the contributions from $\mathcal{W}_{n}^{\prime}(i)$ for $1\leq i\leq 3$
yields
$\sigma(\cup_{i=1}^{3}\mathcal{W}_{n}^{\prime}(i))=(-1)^{n-1}C_{n-1}+(-1)^{n-2}(C_{n-1}-2C_{n-2})+(-1)^{n-2}C_{n-2}=(-1)^{n-1}C_{n-2}.$
For the second statement, let $T$ denote the subset of
$\mathcal{W}_{n}^{\prime}(4)$ consisting of those members where $k=2$ and
$\lambda_{1}=u(ud)^{2}d$. Since $\sigma(T)=(-1)^{n-2}C_{n-2}$, we need to show
$\sigma(\mathcal{W}_{n}^{\prime}(4)-T)=0$. Note that within the final
component $\lambda_{k}=u(ud)\tau d$ of
$\lambda\in\mathcal{W}_{n}^{\prime}(4)-T$, we must have $2\leq|\tau|\leq n-2$.
We may then apply the involution from the proof of Lemma 6, denoted here by $g$, to $\lambda^{\prime}$ (as
$|\tau|\leq n-2$), and to the resulting vector $g(\lambda^{\prime})$, we
append the component $\lambda_{k}$. This operation is seen to yield a sign-
changing involution of $\mathcal{W}_{n}^{\prime}(4)-T$, which completes the
proof. ∎
## References
* [1] H. Belbachir, A. Belkhir and I.-E. Djellas, Permanent of Toeplitz–Hessenberg matrices with generalized Fibonacci and Lucas entries, Appl. Appl. Math. 17(2) (2022), 558–570.
* [2] D. Callan, Some bijections and identities for the Catalan and Fine numbers, Sém. Lothar. Combin. 53 (2006), Article B53e.
* [3] Z. Chen and H. Pan, Identities involving weighted Catalan, Schröder and Motzkin paths, Adv. Appl. Math. 86 (2017), 81–98.
* [4] E. Y. P. Deng and W.-J. Yan, Some identities on the Catalan, Motzkin and Schröder numbers, Discrete Appl. Math. 156(14) (2008), 2781–2789.
* [5] E. Deutsch, Dyck path enumeration, Discrete Math. 204 (1999), 167–202.
* [6] E. Deutsch and L. Shapiro, A survey of the Fine numbers, Discrete Math. 241 (2001), 241–265.
* [7] M. Elouafi, A unified approach for the Hankel determinants of classical combinatorial numbers, J. Math. Anal. Appl. 431(2) (2015), 1253–1274.
* [8] T. Goy and M. Shattuck, Determinant formulas of some Toeplitz–Hessenberg matrices with Catalan entries, Proc. Indian Acad. Sci. Math. Sci. 129(4) (2019), Article 46.
* [9] T. Goy and M. Shattuck, Determinants of Toeplitz–Hessenberg matrices with generalized Fibonacci entries, Notes Number Theory Discrete Math. 25(4) (2019), 83–95.
* [10] T. Goy and M. Shattuck, Determinant identities for Toeplitz–Hessenberg matrices with tribonacci number entries, Trans. Comb. 9(2) (2020), 89–109.
* [11] T. Goy and M. Shattuck, Some Toeplitz–Hessenberg determinant identities for the tetranacci numbers, J. Integer Seq. 23 (2020), Article 20.6.8.
* [12] T. Komatsu and J. L. Ramírez, Some determinants involving incomplete Fubini numbers, An. Şt. Univ. Ovidius Constanţa Ser. Mat. 26 (2018), 143–170.
* [13] M. Merca, A note on the determinant of a Toeplitz–Hessenberg matrix, Spec. Matrices 1 (2013), 10–16.
* [14] L. Mu and Y. Wang, Hankel determinants of shifted Catalan-like numbers, Discrete Math. 340(6) (2017), 1389–1396.
* [15] T. Muir, The Theory of Determinants in the Historical Order of Development, Vol. 3, Dover Publications, 1960.
* [16] L. V. Shapiro and C. J. Wang, A bijection between $3$-Motzkin paths and Schröder paths with no peak at odd height, J. Integer Seq. 12 (2009), Article 09.3.2.
* [17] N. J. A. Sloane et al., The On-Line Encyclopedia of Integer Sequences. Available at https://oeis.org, 2020.
* [18] F. Qi, On negativity of Toeplitz–Hessenberg determinants whose elements contain large Schröder numbers, Palestine J. Math. 11(4) (2022), 373–378.
* [19] F. Qi and B.-N. Guo, Explicit and recursive formulas, integral representations, and properties of the large Schröder numbers, Kragujevac J. Math. 41(4) (2017), 121–141.
* [20] F. Qi, X.-T. Shi and B.-N. Guo, Two explicit formulas of the Schröder numbers, Integers 16 (2016), #A23.
2020 Mathematics Subject Classification: Primary 05A19; Secondary 11C20,
15B05.
_Keywords:_ Hessenberg–Toeplitz matrix, Trudi’s formula, Schröder number, Fine
number, Catalan number, Schröder path, Dyck path.
# Thick embeddings of graphs into symmetric spaces via coarse geometry
Benjamin Barrett and David Hume
with an appendix by Larry Guth and Elia Portnoy
###### Abstract
We prove estimates for the optimal volume of thick embeddings of finite graphs
into symmetric spaces, generalising results of Kolmogorov-Barzdin and Gromov-
Guth for embeddings into Euclidean spaces. We distinguish two very different
behaviours depending on the rank of the non-compact factor. For rank at least
2, we construct thick embeddings of $N$-vertex graphs with volume $CN\ln(1+N)$
and prove that this is optimal. For rank at most $1$ we prove lower bounds of
the form $cN^{a}$ for some (explicit) $a>1$ which depends on the dimension of
the Euclidean factor and the conformal dimension of the boundary of the non-
compact factor. The main tool is a coarse geometric analogue of a thick
embedding called a coarse wiring, with the key property that the minimal
volume of a thick embedding is comparable to the “minimal volume” of a coarse
wiring for symmetric spaces of dimension at least $3$. In the appendix it is
proved that for each $k\geq 3$ every bounded degree graph admits a coarse
wiring into $\mathbb{R}^{k}$ with volume at most $CN^{1+\frac{1}{k-1}}$. As a
corollary, the same upper bound holds for real hyperbolic space of dimension
$k+1$ and in both cases this result is optimal.
## 1 Introduction
The focus of this paper is on thick embeddings of graphs as considered by
Kolmogorov-Barzdin and Gromov-Guth [KB93, GG12]. By a graph, we mean a pair
$\Gamma=(V\Gamma,E\Gamma)$ where $V\Gamma$ is a set whose elements are called
vertices, and $E\Gamma$ is a set of unordered pairs of distinct elements of
$V\Gamma$. Elements of $E\Gamma$ are called edges. The topological realisation
of a graph is the topological space obtained from a disjoint union of unit
intervals indexed by $e\in E\Gamma$, whose end points we label using the two
elements contained in $e$. We then identify all endpoints which are labelled
by the same element of $V\Gamma$. We will use $\Gamma$ to refer to both the
graph and its topological realisation.
The idea behind thick embeddings of graphs is that they are the appropriate
embeddings to consider in situations where the graph models a physical object
(i.e. vertices and edges are “thick” and therefore need to remain a prescribed
distance apart). Two key examples are: a brain, where neurons are represented
by vertices and axons by edges; and an electronic network, where components
are vertices and wires are edges. We briefly summarise the relevant results
from [KB93, GG12] in the following two theorems.
###### Theorem 1.1.
Let $\Gamma$ be a finite graph with maximal degree $d$. For each $k\geq 3$,
there is a topological embedding $f_{k}:\Gamma\to\mathbb{R}^{k}$ and a
constant $C=C(d,k)$ with the following properties:
1. $(i)$
$d_{\mathbb{R}^{k}}(f_{k}(x),f_{k}(y))\geq 1$ whenever $x,y$ are: two distinct
vertices; an edge and a vertex not contained in that edge; or two disjoint
edges.
2. $(ii)$
$\textup{diam}(f_{3}):=\textup{diam}(\textup{im}(f_{3}))\leq C|\Gamma|^{1/2}$.
3. $(iii)$
$\textup{diam}(f_{k})\leq C|\Gamma|^{1/(k-1)}\ln(1+|\Gamma|)^{4}$.
Let $Z$ be a metric space. We say a topological embedding $g:\Gamma\to Z$ is
$\varepsilon$-thick if it satisfies the inequality
$d_{Z}(g(x),g(y))\geq\varepsilon$ whenever $x,y$ are as in condition $(i)$.
###### Theorem 1.2.
Let $k\geq 3$. For every $\delta,\varepsilon>0$ and $d\in\mathbb{N}$ there is
a constant $c>0$ such that given any finite graph $\Gamma$ with maximal degree
$d$ and Cheeger constant (cf. Definition 5.2) $\geq\delta$ and any
$\varepsilon$-thick topological embedding $g:\Gamma\to\mathbb{R}^{k}$, we have
$\textup{diam}(g)\geq c^{-1}|\Gamma|^{1/(k-1)}-c$.
When $Z$ admits a measure, we define the volume $\textup{vol}(g)$ of an
$\varepsilon$-thick topological embedding $g:\Gamma\to Z$ to be the measure of
the $1$-neighbourhood of its image111The choice of $1$-neighbourhood is
arbitrary: for measure spaces with controlled growth (cf. Definition 4.1),
replacing it by another positive real changes the volume by at most a uniform
multiplicative constant.. From Theorem 1.1 we get obvious upper bounds on the
volume of $1$-thick embeddings into $\mathbb{R}^{k}$. Namely,
$\textup{vol}(f_{3})\leq C^{\prime}|\Gamma|^{3/2}$ and
$\textup{vol}(f_{k})\leq C^{\prime}|\Gamma|^{k/(k-1)}\ln(1+|\Gamma|)^{4k}$.
In the main paper, we prove versions of Theorems 1.1 and 1.2 for thick
embeddings into symmetric spaces. The goal of the appendix is to provide sharp
upper bounds for thick embeddings into Euclidean spaces. The main result there
is a complete proof of an optimal version of Theorem 1.1(iii). Such an
argument had previously been sketched by Guth.
###### Theorem 1.3.
Let $d,k\in\mathbb{N}$ with $k\geq 3$. There is a constant $C=C(d,k)$ such
that for every finite graph $\Gamma$ with maximal degree $d$, there is a
$1$-thick topological embedding $f_{k}:\Gamma\to\mathbb{R}^{k}$ which
satisfies
$\textup{diam}(f_{k})\leq
C|\Gamma|^{1/(k-1)}\quad\textrm{and}\quad\textup{vol}(f_{k})\leq
C|\Gamma|^{1+1/(k-1)}.$
### 1.1 Thick embeddings into symmetric spaces
Our main results are analogues of Theorems 1.1 and 1.2 for more general simply
connected Riemannian symmetric spaces. Constructing graph embeddings into a
range of symmetric spaces has applications for machine learning (see [Lo21]
and references therein). In what follows we will assume that our symmetric
spaces are simply connected and Riemannian. The rank of a symmetric space is
the maximal dimension of an isometrically embedded Euclidean subspace. We
recall that each symmetric space $X$ decomposes as a direct product of
symmetric spaces $K\times\mathbb{R}^{d}\times N$ where $K$ is compact and $N$
has no non-trivial compact or Euclidean factor. In the literature, $N$ is
often referred to as the non-compact factor. Our results show a striking
contrast between the situation where the non-compact factor has rank at least
$2$ and the situation where it has rank at most $1$.
We begin with the case where the rank of $N$ is at least $2$, where we provide
matching upper and lower bounds.
###### Theorem 1.4.
Let $X$ be a symmetric space whose non-compact factor has rank $\geq 2$ and
let $d\in\mathbb{N}$. There are constants $\varepsilon,C>0$ which depend on
$X$ and $d$ such that for any finite graph $\Gamma$ with maximal degree at
most $d$, there is an $\varepsilon$-thick topological embedding of $\Gamma$
into $X$ with diameter $\leq C\ln(1+|\Gamma|)$ and volume $\leq
C|\Gamma|\ln(1+|\Gamma|)$.
###### Theorem 1.5.
Let $X$ be a symmetric space whose non-compact factor has rank $\geq 2$. For
any $d\in\mathbb{N}$ and $\varepsilon,\delta>0$ there is a constant
$c=c(d,\varepsilon,\delta)>0$ with the following property. For any finite
graph $\Gamma$ with maximal degree $d$ and Cheeger constant
$h(\Gamma)\geq\delta$, every $\varepsilon$-thick222Unlike for topological
embeddings into Euclidean space, there does not seem to be an obvious way to
relate the volumes of optimal topological embeddings with different thickness
parameters. topological embedding $g:\Gamma\to X$ satisfies
$\textup{vol}(g)\geq c|\Gamma|\ln(1+|\Gamma|)$.
Now we turn to the case where the rank of $N$ is at most $1$. When $N$ is a
real hyperbolic space, we also provide upper and lower bounds which match
except in the case of the hyperbolic plane where there is a sublogarithmic
gap.
###### Theorem 1.6.
Let $X=\mathbb{R}^{r}\times\mathbb{H}_{\mathbb{R}}^{q}$ where $q+r\geq 3$. Let
$d\in\mathbb{N}$. There is a constant $C=C(X,d)$ such that for any finite
graph $\Gamma$ with maximal degree at most $d$ there is a $1$-thick
topological embedding of $\Gamma$ into $X$ with volume
$\leq C|\Gamma|^{1+1/(q+r-2)}.$
For $q+r\geq 4$ this follows by composing the topological embedding from
Theorem 1.3 with a suitable coarse embedding
$\mathbb{R}^{r}\times\mathbb{R}^{q-1}\to\mathbb{R}^{r}\times\mathbb{H}_{\mathbb{R}}^{q}$
where $\mathbb{R}^{q-1}$ embeds as a horosphere in
$\mathbb{H}_{\mathbb{R}}^{q}$. The case $q+r=3$ is new and is treated
separately (cf. Theorem 1.8).
The lower bound we prove holds more generally. The rank one symmetric spaces
of non-compact type are real, complex and quaternionic hyperbolic spaces of
dimension at least $2$ ($\mathbb{H}^{q}_{\mathbb{R}}$,
$\mathbb{H}^{q}_{\mathbb{C}}$ and $\mathbb{H}^{q}_{\mathbb{H}}$ respectively)
and the Cayley plane $\mathbb{H}^{2}_{\mathbb{O}}$. These spaces are all
Gromov-hyperbolic, and as such they have a naturally defined boundary. The
conformal dimension of the boundary of $\mathbb{H}^{q}_{F}$ is
$Q=(q+1)\dim_{\mathbb{R}}(F)-2$ [Pan89]. We will not define conformal
dimension in this paper as we do not require it.
###### Theorem 1.7.
Let $X=K\times\mathbb{R}^{r}\times\mathbb{H}^{q}_{F}$, where $K$ is compact
and $q\dim_{\mathbb{R}}(F)+r\geq 3$, and let $Q$ be the conformal dimension
of the boundary of $\mathbb{H}^{q}_{F}$. For any $d\in\mathbb{N}$ and
$\varepsilon,\delta>0$ there is a constant $c=c(d,\varepsilon,\delta)>0$
with the following property. For any graph $\Gamma$ with maximal degree $d$
and Cheeger constant $h(\Gamma)\geq\delta$, every $\varepsilon$-thick
topological embedding $g:\Gamma\to X$ has volume
$\geq\left\\{\begin{array}[]{lll}c|\Gamma|^{1+1/r}\ln(1+|\Gamma|)^{-1/r}&\textup{if}&Q=1,\\\
c|\Gamma|^{1+1/(Q+r-1)}&\textup{if}&Q\geq 2.\end{array}\right.$
This “gap” between the rank at most $1$ and the higher rank case is similar in
flavour to the gap in the separation profiles of symmetric spaces found in
[HMT22]. This is no coincidence. The lower bounds on the volumes of
topological embeddings found in Theorems 1.5 and 1.7 are inverse functions of
the separation profiles of the symmetric spaces333By the separation profile of
a symmetric space we mean either the $1$-Poincaré profile of the symmetric
space as defined in [HMT20] or equivalently, the separation profile as defined
in [BST12] of any graph quasi-isometric to the symmetric space., and our
approach to prove both of these theorems utilises separation profiles in a
crucial way. In order to use separation profiles, we will reformulate the
above theorems in terms of carefully chosen continuous maps (called coarse
wirings) between bounded degree graphs.
We present one further result in this section, which provides upper bounds for
thick embeddings into $\mathbb{H}^{3}_{\mathbb{R}}$ and
$\mathbb{H}^{2}_{\mathbb{R}}\times\mathbb{R}$. The first is asymptotically
optimal and the second is optimal up to a sublogarithmic error (the lower
bounds being provided by Theorem 1.7); in both cases the bounds do not depend
on the degree of the graph.
###### Theorem 1.8.
There are $1$-thick topological embeddings of $K_{M}$ (the complete graph on
$M$ vertices) into $\mathbb{H}^{3}_{\mathbb{R}}$ with diameter $\leq
C\ln(1+M)$ and volume $\leq CM^{2}$, and into
$\mathbb{H}^{2}_{\mathbb{R}}\times\mathbb{R}$ with diameter $\leq CM$ and
volume $\leq CM^{2}$, for some $C$ which does not depend on $M$.
### 1.2 Coarse $k$-wirings
###### Definition 1.9.
Let $\Gamma,\Gamma^{\prime}$ be graphs. A wiring of $\Gamma$ into
$\Gamma^{\prime}$ is a continuous map $f:\Gamma\to\Gamma^{\prime}$ such that
the image of each vertex is a vertex and the image of each edge is a walk in
$\Gamma^{\prime}$.
A wiring $f$ is a coarse $k$-wiring if
1. 1.
the preimage of each vertex in $\Gamma^{\prime}$ contains at most $k$ vertices
in $\Gamma$; and
2. 2.
each edge $e$ in $\Gamma^{\prime}$ is contained in the image of at most $k$
edges in $\Gamma$.
We consider the image of a wiring $\textup{im}(f)$ to be the subgraph of
$\Gamma^{\prime}$ consisting of all vertices in $f(V\Gamma)$ and all the walks
which are the images of edges under $f$. The diameter of a wiring
$\textup{diam}(f)$ is the diameter of its image (measured with respect to the
shortest path metric in $\Gamma^{\prime}$), the volume of a wiring
$\textup{vol}(f)$ is the number of vertices in its image.
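Definition 1.9 is easy to operationalise. The following Python sketch (our own encoding, for illustration; the vmap/walks layout and helper names are not from the paper) checks the two coarse $k$-wiring conditions and computes the volume of a wiring:

```python
from collections import Counter

def is_coarse_k_wiring(vmap, walks, k):
    """Check Definition 1.9 for a wiring of a graph G into a graph G'.
    vmap : dict, vertex of G -> vertex of G'.
    walks: dict, edge of G (a frozenset {x, y}) -> walk in G', given as
           a vertex sequence joining vmap[x] to vmap[y].  (Adjacency of
           consecutive walk vertices in G' is assumed, not checked.)"""
    # condition 1: at most k vertices of G over each vertex of G'
    if vmap and max(Counter(vmap.values()).values()) > k:
        return False
    # condition 2: each edge of G' lies in the image of at most k edges of G
    load = Counter()
    for e, walk in walks.items():
        x, y = tuple(e)
        assert {walk[0], walk[-1]} == {vmap[x], vmap[y]}, "not a wiring"
        for step in {frozenset(pair) for pair in zip(walk, walk[1:])}:
            load[step] += 1           # each edge of G counted once per walk
    return all(c <= k for c in load.values())

def wiring_volume(vmap, walks):
    """Number of vertices in the image subgraph im(f)."""
    verts = set(vmap.values())
    for walk in walks.values():
        verts.update(walk)
    return len(verts)

# Example: wire the triangle on {a, b, c} into the path 0 -- 1 -- 2.
vmap = {'a': 0, 'b': 1, 'c': 2}
walks = {frozenset('ab'): [0, 1],
         frozenset('ac'): [0, 1, 2],
         frozenset('bc'): [1, 2]}
print(is_coarse_k_wiring(vmap, walks, 2))   # True: {0,1}, {1,2} each carry 2
print(is_coarse_k_wiring(vmap, walks, 1))   # False
print(wiring_volume(vmap, walks))           # 3
```

Here the triangle admits a coarse $2$-wiring into the path but not a coarse $1$-wiring, since two of the three edge-walks must share an edge of the path.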
Under mild hypotheses on the target space (cf. Definition 4.1), we can convert
a thick topological embedding into a coarse $k$-wiring.
###### Proposition 1.10.
Let $M$ be a Riemannian manifold with controlled growth and let $Y$ be a graph
quasi-isometric to $M$, let $d\in\mathbb{N}$ and let $T>0$. There exist
constants $C$ and $k$ such that for every finite graph $\Gamma$ with maximal degree
$d$ the following holds:
If there is a $T$-thick topological embedding $\Gamma\to M$ with diameter $D$
and volume $V$ then there is a coarse $k$-wiring of $\Gamma$ into $Y$ with
diameter at most $CD$ and volume at most $CV$.
With stronger hypotheses we are able to convert coarse $k$-wirings into thick
topological embeddings.
###### Theorem 1.11.
Let $M$ be a compact Riemannian manifold of dimension $n\geq 3$, let $Y$ be a
graph quasi-isometric to the universal cover $\widetilde{M}$ of $M$ and let
$k,d\in\mathbb{N}$. There exist constants $C$ and $\varepsilon>0$ such that
the following holds:
If there is a coarse $k$-wiring of a finite graph $\Gamma$ with maximal degree
$d$ into $Y$ with diameter $D$ and volume $V$ then there is an
$\varepsilon$-thick embedding of $\Gamma$ into $\widetilde{M}$ with diameter
at most $CD$ and volume at most $CV$.
All of the symmetric spaces we consider are universal covers of compact
Riemannian manifolds. The reason for working with universal covers of compact
manifolds is to use compactness to deduce that finite families of curves which
are disjoint are at least a uniform positive distance apart. We then use deck
transformations of the universal cover to translate these curves and preserve
this uniform disjointness.
Using Proposition 1.10 and Theorem 1.11 we can prove Theorems 1.4, 1.5, 1.6
and 1.7 purely in terms of coarse wirings. We introduce wiring profiles in
order to discuss coarse wirings between infinite graphs.
###### Definition 1.12.
Let $\Gamma$ be a finite graph and let $Y$ be a graph. We denote by
$\textup{wir}^{k}(\Gamma\to Y)$ the minimal volume of a coarse $k$-wiring of
$\Gamma$ into $Y$. If no such coarse $k$-wiring exists, we say
$\textup{wir}^{k}(\Gamma\to Y)=+\infty$.
Let $X$ and $Y$ be graphs. The $k$-wiring profile of $X$ into $Y$ is the
function
$\textup{wir}^{k}_{X\to Y}(n)=\max\left\\{\textup{wir}^{k}(\Gamma\to Y)\
\left|\ \Gamma\leq X,\ |\Gamma|\leq n\right.\right\\}.$
A simple example of a situation where $\textup{wir}^{k}(\Gamma\to Y)=+\infty$
is when $\Gamma$ has a vertex whose degree is greater than $k^{2}$ times the
maximal degree of $Y$.
The reason for working with wiring profiles is that they have three very
useful properties. Firstly, wirings between graphs can be composed and there
is a natural inequality which controls the volume of the composition.
###### Proposition 1.13.
Let $X,Y,Z$ be graphs. Suppose $\textup{wir}^{k}_{X\to Y}$ and
$\textup{wir}^{l}_{Y\to Z}$ take finite values. Then
$\displaystyle\textup{wir}^{kl}_{X\to Z}(n)\leq\textup{wir}^{l}_{Y\to
Z}\left(\textup{wir}^{k}_{X\to Y}(n)\right).$
Secondly, for bounded degree graphs, the wiring profile of $X$ into $Y$ grows
linearly whenever there is a regular map from $X$ to $Y$.
###### Definition 1.14.
Let $X,Y$ be metric spaces and let $\kappa>0$. A map $r:X\to Y$ is
$\kappa$-regular if
1. 1.
$d_{Y}(r(x),r(x^{\prime}))\leq\kappa(1+d_{X}(x,x^{\prime}))$, and
2. 2.
the preimage of every ball of radius $1$ in $Y$ is contained in a union of at
most $\kappa$ balls of radius $1$ in $X$.
Quasi-isometric and coarse embeddings between bounded degree graphs are
examples of regular maps.
###### Proposition 1.15.
Let $X$ and $Y$ be graphs with maximal degree $\Delta>0$ and let $r:X\to Y$ be
a $\kappa$-regular map. Then there exists $k=k(\kappa,\Delta)$ such that
$\displaystyle\textup{wir}^{k}_{X\to
Y}(n)\leq\left(\kappa+\frac{1}{2}\right)\Delta n.$
These two propositions naturally combine to show that wiring profiles are
well-behaved with respect to regular maps.
###### Corollary 1.16.
Let $X$, $X^{\prime}$, $Y$ and $Y^{\prime}$ be graphs with maximal degree
$\Delta$ and let $r_{X}:X^{\prime}\to X$ and $r_{Y}:Y\to Y^{\prime}$ be
$\kappa$-regular maps. Then for every $k$ such that $\textup{wir}^{k}_{X\to
Y}$ takes finite values there is some $l$ such that
$\displaystyle\textup{wir}^{l}_{X\to Y^{\prime}}(n)\leq\left(\kappa+\frac{1}{2}\right)\Delta\cdot\textup{wir}^{k}_{X\to Y}(n),$ (1)
$\displaystyle\textup{wir}^{l}_{X^{\prime}\to Y^{\prime}}(n)\leq\left(\kappa+\frac{1}{2}\right)\Delta\cdot\textup{wir}^{k}_{X\to Y}\left(\left(\kappa+\frac{1}{2}\right)\Delta n\right).$ (2)
Thirdly, we can find lower bounds on the wiring profile of one bounded
degree graph into another in terms of their separation profiles: a measure
of the combinatorial connectivity of their finite
subgraphs introduced in [BST12]. We introduce the following notation from that
paper. Given two functions $f,g:\mathbb{N}\to\mathbb{R}$, we write $f\lesssim
g$ if there is a constant $C$ such that $f(n)\leq Cg(Cn)+C$ holds for all $n$.
We write $f\simeq g$ when $f\lesssim g$ and $g\lesssim f$.
###### Theorem 1.17.
Let $X$ and $Y$ be graphs of bounded degree where $\textup{sep}_{X}\gtrsim
n^{r}\ln(n)^{s}$ and $\textup{sep}_{Y}\simeq n^{p}\ln(n)^{q}$. Then, for any
$k$,
$\textup{wir}^{k}_{X\to
Y}(n)\gtrsim\left\\{\begin{array}[]{lll}n^{r/p}\ln(n)^{(s-q)/p}&\textup{if}&p>0,\\\
\exp(n^{r/(q+1)}\ln(n)^{s/(q+1)})&\textup{if}&p=0.\end{array}\right.$
The separation profiles of (graphs quasi-isometric to) symmetric spaces have
all been calculated [BST12, HMT20, HMT22] and are all of the form
$n^{p}\ln(n)^{q}$. Combining these calculations with Theorem 1.17 and Theorem
1.11 is sufficient to prove Theorems 1.5 and 1.7.
The coarse geometric approach also has great benefits when computing upper
bounds. For instance, we can deduce the upper bound on volumes of thick
embeddings in Theorem 1.4 from the following theorem.
###### Theorem 1.18.
There is a Cayley graph $Y$ of the lamplighter group
$\mathbb{Z}_{2}\wr\mathbb{Z}$ with the following property. For each
$d\in\mathbb{N}$ there is some $C=C(d)$ such that for any $N$-vertex graph
$\Gamma$ with maximal degree $d$, we have
$\textup{wir}^{2d}(\Gamma\to Y)\leq CN\ln(1+N).$
The deduction works as follows. The graph $Y$ is quasi-isometric to the
Diestel-Leader graph $\textup{DL}(2,2)$ [Woe05]. Next, $\textup{DL}(2,2)$
quasi-isometrically embeds into any symmetric space $M$ whose non-compact
factor has rank $\geq 2$ [HMT22, Proposition 2.8 and Theorem 3.1]. Choose a
graph $X$ which is quasi-isometric to $M$. By Corollary 1.16, there are
constants $l,C^{\prime}$ which depend on $Y$ and $d$ but not $N$ such that
$\textup{wir}^{l}(\Gamma\to X)\leq C^{\prime}N\ln(1+N)$. Theorem 1.4 then
follows from Theorem 1.11 and Theorem 1.18.
It is important to stress that the analogy between thick embeddings and coarse
wirings only holds when there is a bound on the degree of the graphs and the
manifold dimension of the symmetric space is at least $3$. This is evidenced
by Theorem 1.8, which holds independently of the degree of the graph; no
such result for coarse wirings is possible. On the other hand, in section 6.1,
we will consider coarse wirings into $\mathbb{R}^{2}$ and
$\mathbb{H}_{\mathbb{R}}^{2}$ where only planar graphs admit topological
embeddings.
###### Theorem 1.19.
Let $d\geq 3$ and let $X(d)$ be the disjoint union of all finite graphs with
maximal degree $\leq d$. Let $Y$ and $Z$ be graphs which are quasi-isometric
to $\mathbb{R}^{2}$ and $\mathbb{H}_{\mathbb{R}}^{2}$ respectively. For all
sufficiently large $k$, we have
$\textup{wir}^{k}_{X(d)\to Y}(n)\simeq
n^{2}\quad\textup{and}\quad\exp(n^{1/2})\lesssim\textup{wir}^{k}_{X(d)\to
Z}(n)\lesssim\exp(n).$
The lower bounds both follow from Theorem 1.17, since
$\textup{sep}_{X(d)}(n)\simeq n$ as it contains a family of expanders of at
most exponentially growing size [Hum17]. For the upper bounds we will give
direct constructions. We believe that it is possible to improve the bound in
the $p=0$ case of Theorem 1.17 to $\exp(n^{r/q}\ln(n)^{s/q})$. This would have
the consequence that $\textup{wir}^{k}_{X(d)\to Z}(n)\simeq\exp(n)$ in Theorem
1.19.
One very natural question to consider is the dependence of
$\textup{wir}^{k}_{X\to Y}(n)$ (up to $\simeq$) on the parameter $k$. It is
clear that for $k\leq l$, $\textup{wir}^{k}_{X\to
Y}(n)\geq\textup{wir}^{l}_{X\to Y}(n)$ for any pair of bounded degree graphs
$X$ and $Y$, but the converse fails spectacularly [Ra23].
### Acknowledgements
The authors would like to thank Itai Benjamini for suggesting the relationship
between wiring problems and the separation profile which provided the initial
spark for this work, Romain Tessera for suggestions which improved the
exposition, and an anonymous referee for many suggestions and observations
which greatly improved the readability of the paper.
## 2 Thick topological embeddings into products of real hyperbolic and
Euclidean spaces
Our goal in this section is to prove Theorems 1.6 and 1.8, which we do by
directly constructing thick topological embeddings. We start with the proof of
Theorem 1.6 in the case $q+r\geq 4$. We will use the upper halfspace model of
real hyperbolic space
$\mathbb{H}^{q}_{\mathbb{R}}=\left\\{(x_{1},\ldots,x_{q-1};x_{q})\ \left|\
x_{i}\in\mathbb{R},\ x_{q}>0\right.\right\\}$ equipped with the metric
$d_{\mathbb{H}^{q}_{\mathbb{R}}}((x_{1},\ldots,x_{q-1};x_{q}),(y_{1},\ldots,y_{q-1};y_{q}))=\cosh^{-1}\left(1+\frac{\sum_{i=1}^{q}(x_{i}-y_{i})^{2}}{2x_{q}y_{q}}\right).$
###### Proof.
Define $h_{0}=(2(\cosh(1)-1))^{-1/2}$. Consider the map
$\phi_{q,r}:\mathbb{R}^{r}\times\mathbb{R}^{q-1}\to\mathbb{R}^{r}\times\mathbb{H}^{q}_{\mathbb{R}}\quad\textup{given
by}\quad\phi_{q,r}(\underline{x},\underline{y})=(\underline{x},(\underline{y};h_{0})).$
Claim:
$d(\phi_{q,r}(\underline{x},\underline{y}),\phi_{q,r}(\underline{x^{\prime}},\underline{y^{\prime}}))\geq
1$ whenever $\|\underline{x}-\underline{x^{\prime}}\|_{2}\geq 1$ or
$\|\underline{y}-\underline{y^{\prime}}\|_{2}\geq 1$.
###### Proof of Claim.
If $\|\underline{x}-\underline{x^{\prime}}\|_{2}\geq 1$ then this is obvious.
If $\|\underline{y}-\underline{y^{\prime}}\|_{2}\geq 1$, then
$\displaystyle
d(\phi_{q,r}(\underline{x},\underline{y}),\phi_{q,r}(\underline{x^{\prime}},\underline{y^{\prime}}))$
$\displaystyle\geq$ $\displaystyle
d_{\mathbb{H}^{q}_{\mathbb{R}}}((\underline{y};h_{0}),(\underline{y^{\prime}};h_{0}))$
$\displaystyle=$
$\displaystyle\cosh^{-1}\left(1+\frac{\|\underline{y}-\underline{y^{\prime}}\|_{2}^{2}}{2h_{0}^{2}}\right)$
$\displaystyle\geq$
$\displaystyle\cosh^{-1}\left(1+\frac{1}{2h_{0}^{2}}\right)$ $\displaystyle=$
$\displaystyle\cosh^{-1}(1+(\cosh(1)-1))=1.$
∎
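The constant $h_{0}$ is calibrated exactly so that a horizontal displacement of $1$ at height $h_{0}$ costs hyperbolic distance exactly $1$, which is what the claim exploits. A one-line numerical sanity check (our own illustration, not part of the argument):

```python
import math

# h0 is chosen so that cosh^{-1}(1 + 1/(2*h0^2)) = 1
h0 = (2 * (math.cosh(1) - 1)) ** -0.5

# two points at the same height h0, at Euclidean distance 1 horizontally
print(math.acosh(1 + 1 / (2 * h0 ** 2)))  # 1.0 up to floating point error
```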
Let $\Gamma$ be a finite graph with maximal degree $d$ and let
$\psi=\sqrt{2}\cdot f_{q+r-1}$ where $f_{q+r-1}$ is the $1$-thick topological
embedding of $\Gamma$ into $\mathbb{R}^{q+r-1}$ defined in Theorem 1.3. Let us
first show that $\phi_{q,r}\circ\psi$ is a $1$-thick embedding of $\Gamma$ into
$\mathbb{R}^{r}\times\mathbb{H}^{q}_{\mathbb{R}}$.
The topological embedding $\psi$ is $\sqrt{2}$-thick. If
$\|(\underline{x},\underline{y})-(\underline{x^{\prime}},\underline{y^{\prime}})\|_{2}\geq\sqrt{2}$,
then either $\|\underline{x}-\underline{x^{\prime}}\|_{2}\geq 1$ or
$\|\underline{y}-\underline{y^{\prime}}\|_{2}\geq 1$. Applying the claim, we
see that $\phi_{q,r}\circ\psi$ is $1$-thick.
Finally we bound $\textup{vol}(\phi_{q,r}\circ\psi)$. Firstly note that if
$\|(\underline{x},\underline{y})-(\underline{x^{\prime}},\underline{y^{\prime}})\|_{2}\leq
1$, then
$\displaystyle
d(\phi_{q,r}(\underline{x},\underline{y}),\phi_{q,r}(\underline{x^{\prime}},\underline{y^{\prime}}))$
$\displaystyle=$
$\displaystyle\left(\|\underline{x}-\underline{x^{\prime}}\|_{2}^{2}+d_{\mathbb{H}^{q}_{\mathbb{R}}}((\underline{y};h_{0}),(\underline{y^{\prime}};h_{0}))^{2}\right)^{1/2}$
$\displaystyle\leq$
$\displaystyle\left(1+\cosh^{-1}\left(1+\frac{\|\underline{y}-\underline{y^{\prime}}\|_{2}^{2}}{2h_{0}^{2}}\right)^{2}\right)^{1/2}$
$\displaystyle\leq$
$\displaystyle\left(1+\cosh^{-1}\left(1+\frac{1}{2h_{0}^{2}}\right)^{2}\right)^{1/2}=\sqrt{2}.$
Now let $Y$ be a $\frac{1}{2}$-separated $1$-net in $\textup{im}(\psi)$. It
follows from the above inequality that $\phi_{q,r}(Y)$ is a $\sqrt{2}$-net in
$\textup{im}(\phi_{q,r}\circ\psi)$. Denote by $\alpha,\beta$ the volumes of the
balls of radius $\frac{1}{4}$ and $\sqrt{2}+1$ in $\mathbb{R}^{q+r-1}$ and
$\mathbb{R}^{r}\times\mathbb{H}^{q}_{\mathbb{R}}$ respectively. We have
$\textup{vol}(\phi_{q,r}\circ\psi)\leq\beta|Y|\quad\textup{and}\quad\alpha|Y|\leq\textup{vol}(\psi).$
Hence, using the volume bounds from Theorem 1.1 as explained after Theorem
1.2, there is a constant $C^{\prime}$ which depends on $q,r,d$ but not $\Gamma$
such that
$\displaystyle\textup{vol}(\phi_{q,r}\circ\psi)$ $\displaystyle\leq$
$\displaystyle\beta|Y|$ $\displaystyle\leq$
$\displaystyle\beta\alpha^{-1}\textup{vol}(\psi)$ $\displaystyle\leq$
$\displaystyle\beta\alpha^{-1}C^{\prime}|\Gamma|^{1+1/(q+r-2)}\ln(1+|\Gamma|)^{4(q+r-1)}.$
∎
It remains to tackle the case $q+r=3$. We split the proof into two parts.
Firstly, we build a $1$-thick topological embedding of the complete graph on
$N$ vertices into $[0,N-1]^{2}\times[0,1]$. Then we use embeddings of
$\mathbb{R}^{2}$ into $\mathbb{H}_{\mathbb{R}}^{3}$ and
$\mathbb{H}^{2}_{\mathbb{R}}\times\mathbb{R}$ to construct $1$-thick
topological embeddings.
###### Lemma 2.1.
Let $K_{N}$ denote the complete graph on $N$ vertices. There is a $1$-thick
topological embedding
$f:K_{N}\to[0,N-1]^{2}\times[0,1]\subset(\mathbb{R}^{3},\|\cdot\|_{\infty})$.
###### Proof.
Enumerate the vertices of $K_{N}$ as $v_{0},\ldots,v_{N-1}$. Now we map
$v_{k}$ to $(k,k,0)$. We connect $(k,k,0)$ to $(l,l,0)$ using the following
piecewise linear path $P_{kl}$:
$(k,k,0)\to(l,k,0)\to(l,k,1)\to(l,l,1)\to(l,l,0).$ (3)
Let us verify that this embedding is $1$-thick. Any two distinct vertices
$v_{k}$ and $v_{l}$ are mapped at distance $|k-l|\geq 1$. Next, consider a
path $P_{kl}$ and the image $(i,i,0)$ of a vertex $v_{i}$ with $i\neq k,l$.
Since one of the first two coordinates of the path $P_{kl}$ is always either
$k$ or $l$, we have
$d_{\infty}(P_{kl},(i,i,0))\geq\min\\{|i-k|,|i-l|\\}\geq 1.$
Finally, consider paths $P_{ij},P_{kl}$. Let $(w,x,a)\in P_{ij}$ and
$(y,z,b)\in P_{kl}$ and suppose $d((w,x,a),(y,z,b))<1$.
If $a=1$, then $b>0$, so $w=j$ and $y=l$. Since
$d_{\infty}((w,x,a),(y,z,b))\geq|w-y|$, we have $|j-l|<1$. Thus $j=l$ and the
two paths come from edges which share a vertex.
If $a\in(0,1)$ then $w=j$ and $x\in\\{i,j\\}$. Since
$d_{\infty}((w,x,a),(y,z,b))\geq\max\\{|w-y|,|x-z|\\}$ and at least one of
$y,z$ is equal to either $k$ or $l$, one of $i,j$ must be equal to one of
$k,l$. Thus the two paths come from edges which share a vertex.
If $a=0$ then either $x=i$ or $w=x=j$. Also $b<1$ so either $z=k$ or $y=z=l$.
If $x=i$ and $z=k$ then the argument from the $a=1$ case holds. Next, suppose
$w=x=j$. Since $z\in\\{k,l\\}$ and $d_{\infty}((w,x,a),(y,z,b))\geq|x-z|$, we
have $j=k$ or $j=l$. If $x=i$ and $y=z=l$, then $i=l$ following the same
reasoning. ∎
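The construction of Lemma 2.1 is concrete enough to check by machine. The following Python sketch (our own illustration, not part of the original argument) samples points along the piecewise linear paths (3) and computes the smallest $L^{\infty}$ distance between paths coming from disjoint edges of $K_{N}$; since sampling can only over-estimate the true minimum, any printed value $\geq 1$ is consistent with the lemma.

```python
import itertools

def path(k, l):
    """Piecewise-linear path (3) joining (k, k, 0) to (l, l, 0)."""
    return [(k, k, 0), (l, k, 0), (l, k, 1), (l, l, 1), (l, l, 0)]

def sample(p, steps=10):
    """Points along each straight segment of a piecewise-linear path."""
    pts = []
    for a, b in zip(p, p[1:]):
        pts += [tuple(a[i] + t / steps * (b[i] - a[i]) for i in range(3))
                for t in range(steps + 1)]
    return pts

def d_inf(u, v):
    return max(abs(u[i] - v[i]) for i in range(3))

N = 5
samples = {e: sample(path(*e)) for e in itertools.combinations(range(N), 2)}
worst = min(d_inf(u, v)
            for (e1, p1), (e2, p2) in itertools.combinations(samples.items(), 2)
            if not set(e1) & set(e2)
            for u in p1 for v in p2)
print(worst)  # 1.0: paths from disjoint edges are 1-separated
```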
Next, we embed $[0,N-1]^{2}\times[0,1]$ into $\mathbb{H}_{\mathbb{R}}^{3}$. We
work in the upper-half space model of
$\mathbb{H}_{\mathbb{R}}^{3}=\left\\{(x,y;z)\ \left|\ z>0\right.\right\\}$.
Consider the map
$\phi:\mathbb{R}^{2}\times[0,1]\to\mathbb{H}_{\mathbb{R}}^{3}$ defined by
$(x,y,a)\mapsto(x,y;h_{0}e^{-a}).$
###### Lemma 2.2.
Let $f:K_{N}\to[0,N-1]^{2}\times[0,1]$ be the $1$-thick topological embedding
from Lemma 2.1. The map $g=\phi\circ f$ is a $1$-thick embedding of $K_{N}$
into $\mathbb{H}_{\mathbb{R}}^{3}$ with diameter $\leq 2\ln N+9$ and volume
$\leq 2039N^{2}$.
###### Proof.
We first prove that $g$ is $1$-thick. Since $f$ is $1$-thick with respect to
the $L^{\infty}$ metric, it suffices to prove that
$d_{\mathbb{H}_{\mathbb{R}}^{3}}(\phi(a_{1},b_{1},c_{1}),\phi(a_{2},b_{2},c_{2}))\geq
1$ whenever $(a_{1},b_{1},c_{1}),(a_{2},b_{2},c_{2})\in[0,N-1]^{2}\times[0,1]$
are at $L^{\infty}$ distance $\geq 1$.
Suppose $\max\\{|a_{2}-a_{1}|,|b_{2}-b_{1}|,|c_{2}-c_{1}|\\}\geq 1$. If
$\max\\{|a_{2}-a_{1}|,|b_{2}-b_{1}|\\}\geq 1$, then
$d_{\mathbb{H}_{\mathbb{R}}^{3}}(\phi(a_{1},b_{1},c_{1}),\phi(a_{2},b_{2},c_{2}))\geq\cosh^{-1}\left(1+\frac{1}{2h_{0}^{2}}\right)=1.$
If $|c_{2}-c_{1}|\geq 1$, then
$\displaystyle
d_{\mathbb{H}_{\mathbb{R}}^{3}}(\phi(a_{1},b_{1},c_{1}),\phi(a_{2},b_{2},c_{2}))$
$\displaystyle\geq$
$\displaystyle\cosh^{-1}\left(1+\frac{h_{0}^{2}(1-e^{-1})^{2}}{2h_{0}^{2}e^{-1}}\right)$
$\displaystyle=$ $\displaystyle\cosh^{-1}(\cosh(1))=1.$
Next we bound the diameter and the volume. For every point $(x,y;z)$ in the
image of $g$, we have $|x|,|y|\leq N-1$ and $h_{1}=h_{0}e^{-1}\leq z\leq
h_{0}$. Thus
$\displaystyle d_{\mathbb{H}_{\mathbb{R}}^{3}}((0,0;h_{0}),(x,y;z))$
$\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(1+\frac{2(N-1)^{2}+h_{0}^{2}(1-e^{-1})^{2}}{2h_{0}^{2}e^{-2}}\right)$
$\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(1+\frac{2e^{2}N^{2}+e^{2}h_{0}^{2}}{2h_{0}^{2}}\right)$
$\displaystyle=$
$\displaystyle\cosh^{-1}\left(1+\frac{e^{2}}{2}+2e^{2}(\cosh(1)-1)N^{2}\right)$
$\displaystyle\leq$ $\displaystyle\cosh^{-1}\left(2e^{2}\cosh(1)N^{2}\right)$
$\displaystyle\leq$ $\displaystyle\ln\left(4e^{2}\cosh(1)N^{2}\right)$
$\displaystyle=$ $\displaystyle 2\ln(N)+\ln(4e^{2}\cosh(1))\leq 2\ln(N)+9.$
Next, we bound the volume. For each point $(x,y;z)$ in the image of $g$ there
is a point $(a,b;h_{0})$ with $a,b\in\\{0,\ldots,N-1\\}$ such that
$|x-a|\leq\frac{1}{2}$, $|y-b|\leq\frac{1}{2}$ and $z\in[h_{0}e^{-1},h_{0}]$.
We have
$\displaystyle d_{\mathbb{H}_{\mathbb{R}}^{3}}((a,b;h_{0}),(x,y;z))$
$\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(1+\frac{\left(\frac{1}{2}\right)^{2}+\left(\frac{1}{2}\right)^{2}+h_{0}^{2}(1-e^{-1})^{2}}{2h_{0}^{2}e^{-2}}\right)$
$\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(1+\frac{1}{4h_{0}^{2}e^{-2}}+\frac{1}{2e^{-2}}\right)$
$\displaystyle=$
$\displaystyle\cosh^{-1}\left(1+\frac{e^{2}\cosh(1)}{2}\right)=:\lambda.$
Thus, the volume of the $1$-neighbourhood of the image of $g$ is at most
$CN^{2}$ where $C$ is the volume of the ball of radius $\lambda+1$ in
$\mathbb{H}_{\mathbb{R}}^{3}$. We have
$C=\pi(\sinh(2(\lambda+1))-2(\lambda+1))\leq 2039$
as required. ∎
Using the same strategy, we can also prove the following.
###### Theorem.
There is a constant $C$ such that for every $N\in\mathbb{N}$, there is a
$1$-thick topological embedding
$g:K_{N}\to\mathbb{R}\times\mathbb{H}^{2}_{\mathbb{R}}$ with
$\textup{diam}(g)\leq CN$ and $\textup{vol}(g)\leq CN^{2}$.
###### Proof.
Repeat the proof of Theorem 1.8 but replace the map $\phi$ by
$\phi:\mathbb{R}^{2}\times[0,1]\to\mathbb{R}\times\mathbb{H}^{2}_{\mathbb{R}}\quad\textup{given
by}\quad\phi(x,y,z)=(x;y,h_{0}e^{-z}).\qed$
## 3 Coarse wiring
In this section, we present some elementary properties of coarse wirings and
construct coarse wirings of finite graphs into a Cayley graph of the
lamplighter group $\mathbb{Z}_{2}\wr\mathbb{Z}$.
Recall that a map $r:X\to Y$ between metric spaces is $\kappa$-regular if
$d_{Y}(r(x),r(y))\leq\kappa(1+d_{X}(x,y))$ for all $x,y\in X$ and the preimage
of every ball of radius $1$ in $Y$ is contained in a union of at most $\kappa$
balls of radius $1$ in $X$. We will first prove Proposition 1.15; we recall
the statement for convenience.
###### Proposition.
Let $X$ and $Y$ be graphs with maximal degree $\Delta$ and let $r:VX\to VY$ be
a $\kappa$-regular map. Then for all sufficiently large $k$ we have
$\displaystyle\textup{wir}^{k}_{X\to
Y}(n)\leq\left(\kappa+\frac{1}{2}\right)\Delta n.$
###### Proof.
Let $\Gamma\subset X$ be a subgraph with $\left\lvert V\Gamma\right\rvert\leq
n$. For $xx^{\prime}\in E\Gamma$ let $P_{xx^{\prime}}$ be any minimal length
path from $r(x)$ to $r(x^{\prime})$ and let
$\Gamma^{\prime}=\bigcup_{xx^{\prime}\in E\Gamma}P_{xx^{\prime}}$. We construct a wiring
$f:\Gamma\to\Gamma^{\prime}$ as follows. For each vertex $v\in V\Gamma$ we
define $f(v)=r(v)$. We then map each edge $xx^{\prime}$ continuously to the
path $P_{xx^{\prime}}$.
Since each path $P_{xx^{\prime}}$ contains at most $2\kappa+1$ vertices and
$|E\Gamma|\leq\frac{1}{2}\Delta n$, we have $\left\lvert
V\Gamma^{\prime}\right\rvert\leq n\Delta(\kappa+\frac{1}{2})$.
If $P_{xx^{\prime}}$ contains an edge $e$ then the distance from $r(x)$ to the
initial vertex of $e$ is at most $2\kappa$, so there are at most
$1+\Delta^{2\kappa+1}$ possibilities for $r(x)$; $r$ is at most
$\kappa(1+\Delta)$-to-one so there are at most
$k:=\kappa(1+\Delta)(1+\Delta^{2\kappa+1})$ possibilities for $x$. Therefore
there are at most $k$ edges $xx^{\prime}\in E\Gamma$ such that
$f(xx^{\prime})=P_{xx^{\prime}}$ contains a given edge $e$ of
$E\Gamma^{\prime}$. It follows that $\textup{wir}^{k}(\Gamma\to
Y)\leq(\kappa+\frac{1}{2})\Delta n$. ∎
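To make the construction in this proof concrete, here is a small Python sketch (our own illustration; the adjacency-dictionary graph format and the helper `bfs_path` are conventions of the sketch, not notation from the paper) which turns a vertex map $r$ into a wiring by routing each edge along a shortest path.

```python
from collections import deque

def bfs_path(adj, s, t):
    """Shortest path from s to t in a connected graph given as an adjacency dict."""
    prev, queue = {s: None}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            break
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                queue.append(w)
    path, u = [], t
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def wiring_from_map(gamma_adj, y_adj, r):
    """Wiring induced by a vertex map r: each edge xx' of Gamma is routed along
    a minimal length path in Y from r(x) to r(x'), as in the proof above."""
    paths = {}
    for x in gamma_adj:
        for xp in gamma_adj[x]:
            if (xp, x) not in paths:
                paths[(x, xp)] = bfs_path(y_adj, r[x], r[xp])
    return {x: r[x] for x in gamma_adj}, paths
```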
To deduce Corollary 1.16 from the above proposition, we prove a bound on
compositions of coarse wirings (Proposition 1.13).
###### Proposition.
Suppose $\textup{wir}^{k}_{X\to Y}(N)<\infty$. Then
$\displaystyle\textup{wir}^{kl}_{X\to Z}(N)\leq\textup{wir}^{l}_{Y\to
Z}\left(\textup{wir}^{k}_{X\to Y}(N)\right).$
###### Proof.
If $\textup{wir}^{l}_{Y\to Z}\left(\textup{wir}^{k}_{X\to
Y}(N)\right)=+\infty$ then there is nothing to prove, so assume it is finite.
Let $\Gamma\subset X$ with $\left\lvert V\Gamma\right\rvert\leq N$. Then there
exists a coarse $k$-wiring $\psi$ of $\Gamma$ into $Y$ with
$\textup{vol}(\psi)\leq\textup{wir}^{k}_{X\to Y}(N)$ and a coarse $l$-wiring
$\psi^{\prime}$ of $\textup{im}(\psi)$ into $Z$ with
$\textup{vol}(\psi^{\prime})\leq\textup{wir}^{l}_{Y\to
Z}\left(\textup{wir}^{k}_{X\to Y}(N)\right)$.
We now construct a coarse $kl$-wiring $\psi^{\prime\prime}$ of $\Gamma$ into
$Z$. For each $v\in V\Gamma$, define
$\psi^{\prime\prime}(v)=\psi^{\prime}(\psi(v))$. For each $e\in E\Gamma$, let
$e_{1},\ldots,e_{m}$ be the edges of the path $P_{e}$. We define
$P^{\prime\prime}_{e}$ to be the concatenation of paths
$P^{\prime}_{e_{1}}P^{\prime}_{e_{2}}\ldots P^{\prime}_{e_{m}}$. We extend
$\psi^{\prime\prime}$ continuously so that the image of $e$ is
$P^{\prime\prime}_{e}$. It is clear that $\psi^{\prime\prime}|_{V\Gamma}$ is
$\leq kl$-to-$1$ and
$\textup{im}(\psi^{\prime\prime})\subseteq\textup{im}(\psi^{\prime})$, so
$\textup{vol}(\psi^{\prime\prime})\leq\textup{vol}(\psi^{\prime})$. Since each
edge in $\textup{im}(\psi^{\prime\prime})$ is contained in at most $l$ of the
paths $P^{\prime}_{e^{\prime}}$ and each $P^{\prime}_{e^{\prime}}$ is used in
at most $k$ of the paths $P_{e}$, we have that each edge in
$\textup{im}(\psi^{\prime\prime})$ is contained in the image of at most $kl$
of the edges in $E\Gamma$, as required. ∎
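Continuing the representation used in the sketch after Proposition 1.15 (a wiring as a pair of a vertex map and a dictionary of edge paths), the composed wiring $\psi^{\prime\prime}$ can be sketched as follows; this is our own illustration under those conventions, not code from the paper.

```python
def compose_wirings(w1, w2):
    """Compose wirings given as (vertex map, dict of edge paths).

    The path of an edge e under the composition concatenates the w2-paths of
    the consecutive edges of its w1-path, exactly as in the proof above.
    Assumes w2 stores a path for each edge of im(w1) in some orientation.
    """
    v1, p1 = w1
    v2, p2 = w2
    v = {x: v2[v1[x]] for x in v1}
    p = {}
    for e, path in p1.items():
        out = [v2[path[0]]]
        for a, b in zip(path, path[1:]):
            seg = p2[(a, b)] if (a, b) in p2 else list(reversed(p2[(b, a)]))
            out.extend(seg[1:])
        p[e] = out
    return v, p
```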
###### Proof of Corollary 1.16.
This follows immediately from Propositions 1.15 and 1.13. ∎
Finally in this section we prove Theorem 1.18 by constructing coarse wirings
into a Cayley graph of the lamplighter group. This construction is crucial for
the proof of Theorem 1.4. We identify $\mathbb{Z}_{2}\wr\mathbb{Z}$ with the semidirect
product $\bigoplus_{\mathbb{Z}}\mathbb{Z}_{2}\rtimes\mathbb{Z}$ and define $Y$
to be the Cayley graph of $\mathbb{Z}_{2}\wr\mathbb{Z}$ using the generating
set $\\{(\delta_{0},0),(0,1),(0,-1)\\}$ where $\delta_{0}(i)=1$ if $i=0$ and
$0$ otherwise. Let us recall the statement of Theorem 1.18.
###### Theorem.
Let $\Gamma$ be an $n$-vertex graph with maximal degree $d$. There is a coarse
$2d$-wiring of $\Gamma$ into $Y$ with diameter at most
$6\lceil\log_{2}(n)\rceil$ and volume at most
$dn\left(3\lceil\log_{2}(n)\rceil+\frac{1}{2}\right)$.
###### Proof.
Set $k=\lceil\log_{2}(n)\rceil$. For each $0\leq i\leq n-1$ and $0\leq j\leq
k-1$ fix $i_{j}\in\\{0,1\\}$ such that $\sum_{j=0}^{k-1}2^{j}i_{j}=i$.
Enumerate the vertices of $\Gamma$ as $v_{0},\ldots,v_{n-1}$. All the points
in the image of the wiring will have their lamplighter position and lamp
functions supported on the set $\\{0,\ldots,2k-1\\}$, so we represent elements
of $\mathbb{Z}_{2}\wr\mathbb{Z}$ by a binary string of length exactly $2k$
(for the element of $\bigoplus_{\mathbb{Z}}\mathbb{Z}_{2}$) with one entry
marked by a hat ($\hat{\ }$) to indicate the position of the lamplighter (for
the element of $\mathbb{Z}$). Note that this set has diameter at most
$6k=6\lceil\log_{2}(n)\rceil$.
Now we map each $v_{i}$ to $\hat{i_{0}}i_{1}\ldots i_{k-1}i_{0}i_{1}\ldots
i_{k-1}$ and for each edge $v_{i}v_{j}$ we assign the path $P_{ij}$ which
travels from left to right correcting the binary string as it goes, then
returns to the leftmost position:
$\displaystyle\hat{i_{0}}i_{1}\ldots i_{k-1}i_{0}i_{1}\ldots i_{k-1}$
$\displaystyle\to$ $\displaystyle\widehat{j_{0}}i_{1}\ldots
i_{k-1}i_{0}i_{1}\ldots i_{k-1}$ (4) $\displaystyle\to$ $\displaystyle
j_{0}\widehat{i_{1}}\ldots i_{k-1}i_{0}i_{1}\ldots i_{k-1}$ (5)
$\displaystyle\ldots$ $\displaystyle\to$ $\displaystyle
j_{0}j_{1}\ldots\widehat{j_{k-1}}i_{0}i_{1}\ldots i_{k-1}$ (6)
$\displaystyle\ldots$ $\displaystyle\to$ $\displaystyle j_{0}j_{1}\ldots
j_{k-1}j_{0}j_{1}\ldots\widehat{j_{k-1}}$ (7) $\displaystyle\to$
$\displaystyle j_{0}j_{1}\ldots
j_{k-1}j_{0}j_{1}\ldots\widehat{j_{k-2}}j_{k-1}$ (8) $\displaystyle\ldots$
$\displaystyle\to$ $\displaystyle\widehat{j_{0}}j_{1}\ldots
j_{k-1}j_{0}j_{1}\ldots j_{k-1}.$ (9)
Now suppose an edge $e$ lies on one of the paths $P_{ij}$. Choose one of the
end vertices and denote the binary string associated to this vertex by
$a_{0}\ldots a_{2k-1}$. We claim that at least one of the following holds:
$i=\sum_{l=0}^{k-1}2^{l}a_{k+l}\ (\dagger)\quad\quad
j=\sum_{l=0}^{k-1}2^{l}a_{l}\ (\ddagger)$
In particular, as the graph $\Gamma$ has maximal degree at most $d$, this
means that there are at most $2d$ paths containing the edge $e$.
If $e$ appears on $P_{ij}$ during stages (4), (5) or (6), then $a_{k+l}=i_{l}$
for $0\leq l\leq k-1$. Thus $(\dagger)$ holds. Otherwise $e$ appears on
$P_{ij}$ during stages (7), (8) or (9), in which case $a_{l}=j_{l}$ for $0\leq l\leq
k-1$. Thus $(\ddagger)$ holds.
For the volume estimate, each path $P_{ij}$ meets at most $6k+1$ vertices and
there are $|E\Gamma|\leq\frac{1}{2}nd$ paths. ∎
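The path (4)-(9) is straightforward to implement. The sketch below (our own illustration; group elements are represented as a pair of a lamp configuration and a lamplighter position) generates the states visited along $P_{ij}$ and checks the invariant $(\dagger)$ on the first half of the sweep.

```python
def wiring_path(i, j, k):
    """States (lamp configuration, lamplighter position) along the path (4)-(9).

    Consecutive states differ by at most one lamp toggle plus one unit move of
    the lamplighter, i.e. at most two generator steps in the Cayley graph, so
    the path meets O(k) vertices, matching the volume count in the proof.
    """
    bits = lambda m: [(m >> l) & 1 for l in range(k)]
    lamps, jb = bits(i) + bits(i), bits(j)
    states = [(tuple(lamps), 0)]
    for pos in range(2 * k):              # sweep right, rewriting to j's bits
        lamps[pos] = jb[pos % k]
        states.append((tuple(lamps), pos))
    for pos in range(2 * k - 2, -1, -1):  # walk back to the leftmost position
        states.append((tuple(lamps), pos))
    return states

k = 4
p = wiring_path(5, 11, k)
# On the first half of the sweep the right half still encodes i: (dagger).
assert all(sum(b << l for l, b in enumerate(s[k:])) == 5 for s, _ in p[:k + 1])
```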
## 4 From fine wirings to coarse wirings and back
In this section we prove Proposition 1.10 and Theorem 1.11, which describe
circumstances in which one can translate between thick embeddings of a graph
into a metric space and coarse wirings of that graph into a graph quasi-
isometric to the metric space.
### 4.1 Fine to coarse
In this subsection we will prove Proposition 1.10.
###### Definition 4.1.
Let $\mu$ be a measure on a metric space $X$. We say $(X,\mu)$ has controlled
growth if for every $r>0$
$c_{r}:=\inf_{x\in X}\mu(B_{r}(x))>0\quad\textup{and}\quad C_{r}:=\sup_{x\in
X}\mu(B_{r}(x))<+\infty.$
Let us recall the statement.
###### Proposition.
Let $M$ be a Riemannian manifold with controlled growth and let $Y$ be a graph
quasi-isometric to $M$. For any $d\in\mathbb{N}$ and $T>0$, there exists a
constant $k$ depending only on $d$, $M$, $T$ and $Y$ such that if $\Gamma$ is
a finite graph with maximal degree $d$ and there is a $T$-thick embedding
$\phi:\Gamma\to M$ with diameter $D$ and volume $V$ then there is a coarse
$k$-wiring of $\Gamma$ into $Y$ with diameter at most $kD$ and volume at most
$kV$.
###### Proof.
Let $f\colon M\to Y$ be a (possibly discontinuous) quasi-isometry. Let
$\lambda\geq 1$ be such that
1. 1.
$\frac{1}{\lambda}d_{Y}(f(x_{1}),f(x_{2}))-\lambda\leq
d_{M}(x_{1},x_{2})\leq\lambda d_{Y}(f(x_{1}),f(x_{2}))+\lambda$ for $x_{1}$
and $x_{2}$ in $M$, and
2. 2.
for any $y\in Y$, there exists $x\in M$ with $d_{Y}(y,f(x))\leq\lambda$.
We show that $f\phi$ can be perturbed to obtain a coarse wiring $\psi$.
For $v\in V\Gamma$, let $\psi(v)$ be any vertex of $Y$ within distance
$\frac{1}{2}$ of $f\phi(v)$. If $w$ is another vertex of $\Gamma$ with
$\psi(w)=\psi(v)$ then $d_{M}(\phi(v),\phi(w))\leq 3\lambda$. But, for any
distinct pair of vertices $v,w$, $d_{M}(\phi(v),\phi(w))\geq T$, so it follows
that at most $C_{3\lambda+T/2}/c_{T/2}$ vertices of $\Gamma$ map under $\psi$
to $\psi(v)$.
We now describe a collection of paths $P_{vv^{\prime}}$ in $Y$ as $v$ and
$v^{\prime}$ range over pairs of adjacent vertices in $\Gamma$. The
restriction of $\phi$ to the edge $vv^{\prime}$ is a continuous path in $M$;
choose a sequence
$\phi(v)=w_{0}^{\prime},\dotsc,w_{n}^{\prime}=\phi(v^{\prime})$ of points on
this path with $n$ minimal such that $d(w_{i}^{\prime},w_{i+1}^{\prime})\leq
2T$ for each $i$. Denote this minimal $n$ by $n_{vv^{\prime}}$. Choose
$w_{0}=\psi(v)$, $w_{n}=\psi(v^{\prime})$ and for each $1\leq i\leq n-1$ let
$w_{i}$ be a vertex of $Y$ within distance $\frac{1}{2}$ of
$f(w_{i}^{\prime})$. For each $i$ we have
$\displaystyle d_{Y}(w_{i},w_{i+1})$ $\displaystyle\leq$ $\displaystyle
1+d_{Y}(f(w^{\prime}_{i}),f(w^{\prime}_{i+1}))$ $\displaystyle\leq$
$\displaystyle 1+\lambda d_{M}(w^{\prime}_{i},w^{\prime}_{i+1})+\lambda^{2}$
$\displaystyle\leq$ $\displaystyle 1+2\lambda T+\lambda^{2}=:L,$
so they can be joined by an edge path comprising at most $L$ edges. We define the
path $P_{vv^{\prime}}$ to be the concatenation of these $n_{vv^{\prime}}$
paths of length at most $L$.
We extend $\psi$ to a continuous map which sends each edge $vv^{\prime}$ to
the path $P_{vv^{\prime}}$. We claim that $\psi$ is a coarse wiring with the
appropriate bounds on diameter and volume.
Firstly, we bound the diameter. Note that every point in $\textup{im}(\psi)$
is within distance $(L+1)/2$ of some $f(w^{\prime})$ with
$w^{\prime}\in\textup{im}(\phi)$. Let $x,y\in\textup{im}(\psi)$ and let
$v,w\in\Gamma$ satisfy $d_{Y}(x,f\phi(v)),d_{Y}(y,f\phi(w))\leq(L+1)/2$. We
have
$\displaystyle d_{Y}(x,y)$ $\displaystyle\leq$ $\displaystyle
d_{Y}(x,f\phi(v))+\lambda\left(d_{M}(\phi(v),\phi(w))+\lambda\right)+d_{Y}(f\phi(w),y)$
$\displaystyle\leq$ $\displaystyle
L+1+\lambda\cdot\textup{diam}(\phi)+\lambda^{2}$ $\displaystyle\leq$
$\displaystyle C(T,\lambda)\cdot\textup{diam}(\phi).$
The final inequality fails if $\Gamma$ is a single vertex, but the proposition
obviously holds in this situation. Otherwise $\textup{diam}(\phi)\geq T$ and
the inequality holds for a suitable $C$.
Next we bound the volume of the wiring. The bound follows from the two
inequalities
$\textup{vol}(\phi)\geq\frac{c_{T/2}}{2d+1}\left(|V\Gamma|+\sum_{vv^{\prime}\in
E\Gamma}n_{vv^{\prime}}\right)\quad\textup{and}\quad\textup{vol}(\psi)\leq|V\Gamma|+L\sum_{vv^{\prime}\in
E\Gamma}n_{vv^{\prime}}.$
For the second bound, each vertex in $V\Gamma$ contributes at most $1$ vertex
to $\textup{vol}(\psi)$ and each path $P_{vv^{\prime}}$ contributes at most
$Ln_{vv^{\prime}}$ vertices to $\textup{vol}(\psi)$. For the first bound,
notice that the (open) balls of radius $T/2$ around the image of each vertex
are necessarily disjoint. Similarly, the balls of radius $T/2$ centred at any
two points in one of the sequences
$\phi(v)=w_{0}^{\prime},\dotsc,w_{n}^{\prime}=\phi(v^{\prime})$ defined above
are necessarily disjoint: if this were not the case for $w^{\prime}_{j}$ and
$w^{\prime}_{j^{\prime}}$, we must have $|j-j^{\prime}|\geq 2$ since
$d(w^{\prime}_{i},w^{\prime}_{i+1})\geq T$ for all $i$, but then we can remove
$w^{\prime}_{j+1},\ldots,w^{\prime}_{j^{\prime}-1}$ from the above sequence,
contradicting the minimality assumption. Moreover, if two balls of radius
$T/2$ centred at points on sequences corresponding to different edges have
non-trivial intersection, then these edges must have a common vertex since
$\phi$ is a $T$-thick embedding. Thus, the $T$-neighbourhood of the image of
$\phi$ contains a family of $\left(|V\Gamma|+\sum_{vv^{\prime}\in
E\Gamma}n_{vv^{\prime}}\right)$ balls of radius $T/2$, such that no point is
contained in more than $2d+1$ of these balls ($d$ for each end vertex, and an
extra $1$ if the point is within distance $T/2$ of the image of a vertex). As
a result
$\textup{vol}(\phi)\geq\frac{c_{T/2}}{2d+1}\left(|V\Gamma|+\sum_{vv^{\prime}\in
E\Gamma}n_{vv^{\prime}}\right).$
It remains to prove that we have defined a coarse $k$-wiring. It is sufficient
to show that there is a constant $k$ depending only on $\lambda$ and the
growth rates $c$ and $C$ of volumes in $M$ such that any edge of $Y$ is
contained in $P_{vv^{\prime}}$ for at most $k$ edges $vv^{\prime}\in E\Gamma$.
Let $uu^{\prime}$ be an edge of $Y$ contained in at least one path in the
collection $P$. Let $A$ be the subset of $E\Gamma$ comprising edges $e$ such
that $P_{e}$ contains $uu^{\prime}$. As noted during the proof of the diameter
bound every point in $P_{e}$ is contained in the $(L+1)/2$-neighbourhood of
$f(\phi(e))$ so there is a point $x_{e}\in\phi(e)$ such that
$d_{Y}(u,f(x_{e}))\leq(L+1)/2$, and so for any other edge $e^{\prime}\in A$,
$\displaystyle
d_{M}(x_{e},x_{e^{\prime}})\leq\lambda\left(d_{Y}(f(x_{e}),u)+d_{Y}(u,f(x_{e^{\prime}}))\right)+\lambda\leq\lambda(L+2).$
For any edge $e^{\prime}\in A$, $x_{e^{\prime}}$ is within distance $T$ of at
most $2d$ of the points $x_{e^{\prime\prime}}$ for $e^{\prime\prime}\in A$. It
follows that the size of $A$ is at most $2dc_{T/2}^{-1}C_{\lambda(L+2)+T/2}$.
∎
### 4.2 Coarse to fine
The return direction is more delicate and we are not able to obtain $1$-thick
embeddings in all cases. When the target space is Euclidean this is easily
resolved by rescaling, but in other spaces changing thickness potentially has
a more drastic effect on the volume. Let us recall Theorem 1.11.
###### Theorem.
Let $M$ be a compact Riemannian manifold of dimension $n\geq 3$, let $Y$ be a
graph quasi-isometric to the universal cover $\widetilde{M}$ of $M$ and let
$k,d\in\mathbb{N}$. There exist constants $C$ and $\varepsilon>0$ such that
the following holds:
If there is a coarse $k$-wiring of a finite graph $\Gamma$ with maximal degree
$d$ into $Y$ with diameter $D$ and volume $V$, then there is an
$\varepsilon$-thick embedding of $\Gamma$ into $\widetilde{M}$ with diameter
at most $CD$ and volume at most $CV$.
The proof of this result is completed in several steps. As we are aiming to
construct a topological embedding, the first step is to replace the coarse
$k$-wiring $\Gamma\to Y$ with an injective continuous function $\Gamma\to
Y^{\prime}$ where $Y^{\prime}$ is a “thickening” of $Y$. Exploiting the
symmetries in the universal cover, we choose $Y$ (and its thickening) to be
cocompact with respect to the action of $\pi_{1}(M)$; this reduces the problem
of embedding the thickening of $Y$ to defining finitely many paths in $M$. We
then use the fact that $M$ is compact to obtain a positive lower bound on the
thickness of the topological embedding.
Using Proposition 1.15 and the fact that quasi-isometries of bounded degree
graphs are regular, it suffices to prove Theorem 1.11 for a specific bounded
degree graph quasi-isometric to $\widetilde{M}$.
We require a standard Švarc-Milnor lemma.
###### Lemma 4.2.
Let $x\in M$. Then, for sufficiently large $L$, the graph
$\mathcal{G}_{x}^{L}$ with vertex set equal to the preimage of $x$ in
$\widetilde{M}$, with vertices connected by an edge if and only if they are
separated by a distance of at most $L$ in $\widetilde{M}$, is quasi-isometric
to $\widetilde{M}$.
Now we assume that $Y=\mathcal{G}_{x}^{L}$ for a suitably chosen $L$. The next
step is to “thicken” $Y$ to a graph $Y^{\prime}$ to obtain injective wirings.
###### Definition 4.3.
A wiring $f:\Gamma\to Y$ of a finite graph $\Gamma$ into a graph $Y$ is called
an injective wiring if $f$ is injective.
###### Definition 4.4.
Given a graph $Y$ and $T\in\mathbb{N}$ we define the $T$-thickening of $Y$ to
be the graph $K_{T}(Y)$ with vertex set $VY\times\left\\{1,\ldots,T\right\\}$
and edges $\\{(v,i),(w,j)\\}$ whenever either $v=w$ and $1\leq i<j\leq T$, or
$\\{v,w\\}\in EY$ and $1\leq i\leq j\leq T$.
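In code, the $T$-thickening is a short transformation of adjacency dictionaries. The sketch below is our own illustration of Definition 4.4; note that, since edges are unordered, the condition $1\leq i\leq j\leq T$ makes every pair of levels over adjacent vertices adjacent.

```python
def thickening(adj, T):
    """T-thickening K_T(Y) of a graph Y given as an adjacency dictionary."""
    thick = {}
    for v in adj:
        for i in range(1, T + 1):
            nbrs = [(v, j) for j in range(1, T + 1) if j != i]         # v = w
            nbrs += [(w, j) for w in adj[v] for j in range(1, T + 1)]  # vw in EY
            thick[(v, i)] = nbrs
    return thick
```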
###### Lemma 4.5.
For all $d,k\in\mathbb{N}$ there exists some $T$ with the following property.
If there is a coarse $k$-wiring $\psi:\Gamma\to Y$ then there is an injective
wiring $\psi^{\prime}:\Gamma\to K_{T}(Y)$, such that
$\textup{diam}(\psi^{\prime})\leq\textup{diam}(\psi)+2$ and
$\textup{vol}(\psi^{\prime})\leq T\textup{vol}(\psi)$.
###### Proof.
Set $T=k(d+1)$. For each vertex $v\in Y$ enumerate
$\psi^{-1}(v)=\left\\{v_{1},\ldots,v_{m}\right\\}$ for some $m\leq k$. Define
$\psi^{\prime}(v_{l})=(\psi(v),l)$. We now define $\psi^{\prime}$ on the edges
of $\Gamma$. Whenever $vw$ is an edge in $\Gamma$ with $\psi(v)=\psi(w)$ then
there exist $l\neq l^{\prime}$ such that $v=v_{l}$ and $w=v_{l^{\prime}}$ and
we map the corresponding interval in $\Gamma$ with endpoints labelled $v$ and
$w$ isometrically onto the interval with endpoints labelled $(\psi(v),l)$ and
$(\psi(v),l^{\prime})$ in $K_{T}(Y)$.
Define $E^{\prime}$ to be the set of edges in $\Gamma$ whose end vertices are
distinct after applying $\psi$. Enumerate the edges of $E^{\prime}$ as
$e_{1}=v_{1}w_{1},\ldots,e_{n}=v_{n}w_{n}$ and, for each $j$, enumerate (in
order, including non-consecutive repetitions) the vertices contained in the
image of the interval corresponding to $v_{j}w_{j}$ under $\psi$ as
$\psi(v_{j})=v_{j}^{0},\ldots,v_{j}^{n_{j}}=\psi(w_{j})$. Each
$v_{j}^{i}v_{j}^{i+1}$ is an edge in $Y$.
Totally order the $v^{i}_{j}$ so that
$v^{i}_{j}<v^{i^{\prime}}_{j^{\prime}}$ whenever $j<j^{\prime}$ or
$j=j^{\prime}$ and $i<i^{\prime}$. Denote by $n(v^{i}_{j})$ the number of
$v^{i^{\prime}}_{j^{\prime}}<v^{i}_{j}$ such that
$v^{i^{\prime}}_{j^{\prime}}$ and $v^{i}_{j}$ are the same vertex of $Y$. We
extend $\psi^{\prime}$ by mapping the edge $e_{j}$ continuously and
injectively onto the path
$(v_{j}^{0},n(v_{j}^{0})+k+1),(v_{j}^{1},n(v_{j}^{1})+k+1),\ldots,(v_{j}^{n_{j}},n(v_{j}^{n_{j}})+k+1)$.
It is immediate from the construction that $\psi^{\prime}$ is an injective
wiring. It remains to find a uniform bound on $n(v_{j}^{i})$.
Since $\psi$ is a coarse $k$-wiring, each vertex in $Y$ lies in the interior
of at most $kd$ of the $\psi(e)$, so $n(v_{j}^{i})\leq kd-1$ for all $i,j$.
Thus $\psi^{\prime}$ as above is well-defined since $T=k(d+1)$.
Note that if $(x,j),(y,j^{\prime})$ are contained in
$\textup{im}(\psi^{\prime})$ then $(x,1),(y,1)\in\textup{im}(\psi^{\prime})$
and there is a path of length at most $\textup{diam}(\psi)$ connecting $(x,1)$
to $(y,1)$ in $K_{T}(Y)$. Hence
$\textup{diam}(\psi^{\prime})\leq\textup{diam}(\psi)+2$. If
$(x,j)\in\textup{im}(\psi^{\prime})$ for some $j$ then
$x\in\textup{im}(\psi)$. Therefore $\textup{vol}(\psi^{\prime})\leq
T\textup{vol}(\psi)$. ∎
The next step is to find a thick embedding of $K_{T}(Y)$ into $\widetilde{M}$.
###### Lemma 4.6.
Suppose that $M$ is a compact manifold of dimension $n\geq 3$ with fundamental
group $G$ and let $\widetilde{M}$ be the universal cover of $M$. Let $x\in M$
and denote by $Gx$ the orbit of $x$ in $\widetilde{M}$ under $G$. Then for any
$L,T$ there is an embedding of $Y^{\prime}=K_{T}(\mathcal{G}^{L}_{x})$ (whose
vertex set is $Gx$) into $\widetilde{M}$ that is equivariant with respect to
the action of $G$ on $\widetilde{M}$ by deck transformations.
This embedding is $\varepsilon$-thick for some $\varepsilon>0$, and there is a
uniform upper bound on the length of the paths obtained as the images of edges
of $Y^{\prime}$ under the embedding.
###### Proof.
Let $B$ be a ball in $M$ centred at $x$ which is homeomorphic to
$\mathbb{R}^{n}$. Fix a topological embedding $f$ of the complete graph on $T$
vertices into $B$. Enumerate the vertices $\\{w_{1},\ldots,w_{T}\\}$. For each
pair $w_{a},w_{b}\in VK_{T}$, and each homotopy class $[\ell]$ in
$\pi_{1}(M,x)$ which has a representative of length at most $L$, choose an arc
$\gamma_{a,b,[\ell]}$ connecting $f(w_{a})$ to $f(w_{b})$ such that the loop
obtained from concatenating $f(w_{a}w_{b})$ and $\gamma_{a,b,[\ell]}$ is in
$[\ell]$ and such that $\gamma_{a,b,[\ell]}$ intersects the union of
$f(K_{T})$ and all arcs previously added only at the points $f(w_{a})$ and $f(w_{b})$.
This is always possible using a general position argument.
Lifting this embedding to $\widetilde{M}$, we obtain a $G$-equivariant
embedding of $K_{T}(\mathcal{G}^{L}_{x})$ into $\widetilde{M}$.
Specifically, the interval with end points $(gx,a)$ and $(gx,b)$ is mapped to
$gf(w_{a}w_{b})$ and if $(gx,a)(g^{\prime}x,b)$ is an edge in $Y^{\prime}$
then by definition the homotopy class $[\ell]$ corresponding to $g^{-1}g^{\prime}$ has a
representative of length at most $L$. Thus, the image of this edge in
$\widetilde{M}$ is the lift of $\gamma_{a,b,[\ell]}$ starting at $(gx,a)$ and
ending at $(g^{\prime}x,b)$. As the natural covering map $\widetilde{M}\to M$
is $1$-Lipschitz, this topological embedding is $\varepsilon$-thick, where
$\varepsilon=\min\left\\{d_{M}(X,Y)\right\\}$ as $X,Y$ range over the
following:
* •
$X=\\{f(v)\\}$, $Y=\\{f(w)\\}$ for distinct vertices $v,w\in VK_{T}$; or
* •
$X=\\{f(v)\\}$ and $Y$ is either $f(yz)$ or $\gamma_{y,z,[\ell]}$ with $v,y,z$
all distinct; or
* •
$X$ is either $f(vw)$ or $\gamma_{v,w,[\ell]}$ and $Y$ is either $f(yz)$ or
$\gamma_{y,z,[\ell^{\prime}]}$ with $v,w,y,z$ all distinct.
Similarly, since there are only finitely many $G$-orbits of images of edges,
there is a uniform upper bound on the lengths of images of edges. ∎
We are now ready to prove Theorem 1.11.
###### Proof of Theorem 1.11.
Let $M$ be a compact manifold of dimension $n\geq 3$, let $\widetilde{M}$ be
the universal cover of $M$ and let $Y$ be any graph quasi-isometric to
$\widetilde{M}$. Fix $d,k\in\mathbb{N}$ and assume that there is a coarse
$k$-wiring of $\Gamma$ into $Y$ with diameter $D$ and volume $V$. We may
assume $D\geq 1$ as the $D=0$ case is obvious.
By Lemma 4.2 there is some $L$ such that $\mathcal{G}_{x}^{L}$ is quasi-
isometric to $\widetilde{M}$, so by Corollary 1.16(1), there exists some
$l=l(k,d)$ so that there is a coarse $l$-wiring of $\Gamma$ into
$\mathcal{G}_{x}^{L}$ with diameter $\leq lD+l\leq 2lD$ and volume $\leq
lV+l\leq 2lV$.
Now we apply Lemma 4.5: for some $T=T(l,d)$ there is an injective wiring
$\psi$ of $\Gamma$ into $K_{T}(\mathcal{G}_{x}^{L})$ with diameter $\leq
2lD+2\leq 4lD$ and volume $\leq 2TlV$. Composing this injective wiring with
the $\varepsilon$-thick topological embedding $\phi$ of
$K_{T}(\mathcal{G}_{x}^{L})$ into $\widetilde{M}$ gives an $\varepsilon$-thick
embedding $f:\Gamma\to\widetilde{M}$. The diameter of the image of $f$ is
bounded from above by a constant multiple of $\textup{diam}(\psi)$. For the
volume, note that the sum of the lengths of all paths in the wiring is at most
a constant times $\textup{vol}(\psi)$, and the volume of the thick embedding
is at most this sum of lengths multiplied by the maximal volume of a ball of
radius $\varepsilon$ in $M$. Hence the volume of this thick embedding is at
most a constant multiple of $V$. ∎
## 5 Lower bounds on coarse wiring
The goal of this section is to prove Theorem 1.17. We begin by recalling the
definition of the separation profile and its key properties.
### 5.1 Background on the separation profile
Recall that $f\lesssim g$ if there is a constant $C$ such that
$f(x)\leq Cg(Cx)+C\quad\textup{for all}\ x.$
We write $f\simeq g$ if $f\lesssim g$ and $g\lesssim f$.
###### Definition 5.1.
[BST12] Let $\Gamma$ be a finite graph. We denote by $\textup{cut}(\Gamma)$
the minimal cardinality of a set $S\subset V\Gamma$ such that no component of
$\Gamma-S$ contains more than $\frac{1}{2}|V\Gamma|$ vertices. A set $S$
satisfying this property is called a cut set of $\Gamma$.
Let $X$ be a (possibly infinite) graph. We define the separation profile of
$X$ to be the function $\textup{sep}_{X}:[0,\infty)\to[0,\infty)$ given by
$\textup{sep}_{X}(n)=\max\\{\textup{cut}(\Gamma)\mid\Gamma\leq X,\
|V\Gamma|\leq n\\}.$
For convenience, we will define $\textup{sep}_{X}(r)=0$ whenever $r<1$.
###### Definition 5.2.
The Cheeger constant of a finite graph $\Gamma$ is
$h(\Gamma)=\min\\{\frac{|\partial A|}{|A|}\mid A\subseteq V\Gamma,\
|A|\leq\frac{1}{2}|V\Gamma|\\}$
where $\partial A=\\{v\in V\Gamma\mid d_{\Gamma}(v,A)=1\\}$.
### 5.2 Lower bounds on wiring profiles
The key part of the proof of Theorem 1.17 is the following intimidating bound.
###### Proposition 5.3.
Let $X,Y$ be graphs with maximal degrees $\Delta_{X},\Delta_{Y}$ respectively.
If $\textup{wir}^{k}_{X\to Y}(n)<\infty$, then, for all $n\geq 3$,
$\sum_{s\geq 0}\textup{sep}_{Y}(2^{-s}\textup{wir}^{k}_{X\to
Y}(n))\geq\frac{\textup{sep}_{X}(n)}{k\Delta_{Y}}.$
Roughly, the idea is that given a subgraph $\Gamma\leq X$ and an “efficient”
coarse $k$-wiring $\Gamma\to\Gamma^{\prime}$, $\textup{cut}(\Gamma)$
can be bounded from above by $\textup{cut}(\Gamma^{\prime})$ up to a multiplicative
error depending only on $k$ and the degree of $Y$. However, we do not know that
any cut of this size equally divides the images of the vertices of $\Gamma$ in
$\Gamma^{\prime}$, so we may need to repeat the procedure on a subgraph of
$\Gamma^{\prime}$ with at most $|\Gamma^{\prime}|/2$ vertices, and then again on
a subgraph of $\Gamma^{\prime}$ with at most $|\Gamma^{\prime}|/4$ vertices, and
so on. This divide-and-conquer strategy is the reason for the summation on the
left hand side of the inequality above.
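In pseudocode, the recursion reads as follows. This is our own sketch of the procedure used in the proof below; `min_cut_set` and `components` are assumed helper functions (returning a minimal cut set, and the vertex sets of the components left after removing it), and $\psi$ is given as a dictionary on vertices.

```python
def iterated_cut(gamma_vertices, gamma_prime, psi, min_cut_set, components):
    """Repeatedly cut the component of Gamma' which still carries more than
    half of the psi-images of the vertices of Gamma."""
    half = len(gamma_vertices) / 2
    cut, current = set(), gamma_prime
    while True:
        c = min_cut_set(current)
        cut |= c
        heavy = [A for A in components(current, c)
                 if sum(1 for v in gamma_vertices if psi[v] in A) > half]
        if not heavy:       # every component carries at most half of the images
            return cut
        current = heavy[0]  # at most one component can be heavy
```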
###### Proof.
Let $n\geq 3$ and choose $\Gamma\leq X$ which satisfies
$\left\lvert\Gamma\right\rvert\leq n$ and
$\textup{cut}(\Gamma)=\textup{sep}_{X}(n)=l$. Since $n\geq 3$, it is always
the case that $l\leq 2\left\lvert\Gamma\right\rvert/3$. By [Hum17, Proposition
2.4] there is some $\Gamma^{\prime\prime}\leq\Gamma$ which satisfies
$\left\lvert\Gamma^{\prime\prime}\right\rvert\geq\frac{1}{2}\left\lvert\Gamma\right\rvert$
and $h(\Gamma^{\prime\prime})\geq\frac{l}{2\left\lvert\Gamma\right\rvert}$.
Let $\psi:\Gamma^{\prime\prime}\to\Gamma^{\prime}$ be a coarse $k$-wiring
where $\Gamma^{\prime}\leq Y$ satisfies
$|\Gamma^{\prime}|=\textup{wir}^{k}(\Gamma^{\prime\prime}\to Y)\leq\textup{wir}^{k}_{X\to Y}(n)$.
Let us recursively define a collection of subsets
$C^{\prime}_{0},C^{\prime}_{1},\ldots\subseteq V\Gamma^{\prime}$ as follows.
Define $\Gamma^{\prime}_{0}=\Gamma^{\prime}$. Let $C^{\prime}_{s}$ be a cut set of
$\Gamma^{\prime}_{s}$ of minimal size. If for every component $A$ of
$\Gamma^{\prime}_{s}-C^{\prime}_{s}$, we have $|\left\\{v\in V\Gamma\ \left|\
\psi(v)\in A\right.\right\\}|\leq\frac{1}{2}|\Gamma|$ then define
$C^{\prime}_{t}=\emptyset$ for all $t>s$ and end the process. Otherwise, set
$\Gamma^{\prime}_{s+1}$ to be the unique connected component $A$ of
$\Gamma^{\prime}_{s}-C^{\prime}_{s}$ satisfying $|\left\\{v\in V\Gamma\
\left|\ \psi(v)\in A\right.\right\\}|>\frac{1}{2}|\Gamma|$. As
$|\Gamma|<\infty$ this process will always terminate. Define
$C^{\prime}=\bigcup_{s\geq 0}C^{\prime}_{s}$. By construction, for every
connected component $A$ of $\Gamma^{\prime}\setminus C^{\prime}$, we have
$|\left\\{v\in V\Gamma\ \left|\ \psi(v)\in
A\right.\right\\}|\leq\frac{1}{2}|\Gamma|$. By definition of cut sets,
$|\Gamma^{\prime}_{s}|\leq 2^{-s}|\Gamma^{\prime}|$, so
$|C^{\prime}_{s}|\leq\textup{sep}_{Y}(2^{-s}|\Gamma^{\prime}|)\leq\textup{sep}_{Y}(2^{-s}\textup{wir}^{k}_{X\to
Y}(n))$.
Let $C$ be the set of all vertices in $\Gamma$ which are the end vertices of
an edge whose image under $\psi$ contains a vertex in $C^{\prime}$. By
construction $C$ is a cut set for $\Gamma$. Since $\psi$ is a coarse $k$-wiring,
each edge in $\Gamma^{\prime}$ lies in the image of at most $k$ edges
in $\Gamma$, so each vertex in $\Gamma^{\prime}$ lies in the image of at most
$k\Delta_{Y}$ edges in $\Gamma$, where $\Delta_{Y}$ is the maximal degree of
the graph $Y$. Thus $|C|\leq k\Delta_{Y}|C^{\prime}|$.
Combining these observations we see that
$\textup{cut}(\Gamma)\leq|C|\leq k\Delta_{Y}|C^{\prime}|\leq
k\Delta_{Y}\sum_{s\geq 0}\textup{sep}_{Y}(2^{-s}\textup{wir}^{k}_{X\to Y}(n)).$
As this holds for all $\Gamma\leq X$ with $\left\lvert\Gamma\right\rvert\leq
n$, we deduce that
$\textup{sep}_{X}(n)\leq k\Delta_{Y}\sum_{s\geq
0}\textup{sep}_{Y}(2^{-s}\textup{wir}^{k}_{X\to Y}(n)).\qed$
In practice, the separation profiles of graphs we are interested in here are
of the form $n^{r}\ln(n)^{s}$ with $r\geq 0$ and $s\in\mathbb{R}$. Restricted
to these functions, Proposition 5.3 says the following.
###### Corollary 5.4.
Suppose $\textup{sep}_{X}(n)\gtrsim n^{r}\ln(n)^{s}$ and
$\textup{sep}_{Y}(n)\simeq n^{p}\ln(n)^{q}$. Then, for all $k$,
$\textup{wir}^{k}_{X\to
Y}(n)\gtrsim\left\\{\begin{array}[]{lll}n^{r/p}\ln(n)^{(s-q)/p}&\textup{if}&p>0,\\\
\exp(n^{r/(q+1)}\ln(n)^{s/(q+1)})&\textup{if}&p=0.\end{array}\right.$
###### Proof.
If $\textup{wir}^{k}_{X\to Y}(n)=+\infty$ there is nothing to prove, so assume this is
not the case. Applying our hypotheses to Proposition 5.3, we have
$n^{r}\ln(n)^{s}\lesssim\sum_{i\geq 0}\left(2^{-i}\textup{wir}^{k}_{X\to
Y}(n)\right)^{p}\ln\left(2^{-i}\textup{wir}^{k}_{X\to Y}(n)\right)^{q}.$ (10)
If $p>0$, then the sequence $\left(2^{-i}\textup{wir}^{k}_{X\to
Y}(n)\right)^{p}$ decays exponentially as a function of $i$, so
$\displaystyle n^{r}\ln(n)^{s}$ $\displaystyle\lesssim$
$\displaystyle\ln\left(\textup{wir}^{k}_{X\to Y}(n)\right)^{q}\sum_{i\geq
0}\left(2^{-i}\textup{wir}^{k}_{X\to Y}(n)\right)^{p}$ $\displaystyle\lesssim$
$\displaystyle\textup{wir}^{k}_{X\to Y}(n)^{p}\ln\left(\textup{wir}^{k}_{X\to
Y}(n)\right)^{q}.$
Hence, there is some constant $C>0$ such that
$w:=\textup{wir}^{k}_{X\to Y}(n)^{p}\ln\left(\textup{wir}^{k}_{X\to
Y}(n)\right)^{q}\geq C^{-1}(C^{-1}n)^{r}\ln(C^{-1}n)^{s}-C.$ (11)
Now suppose $\textup{wir}^{k}_{X\to Y}(n)\leq dn^{r/p}\ln(n)^{(s-q)/p}$. Then
$\displaystyle w$ $\displaystyle\leq$ $\displaystyle
d^{p}n^{r}\ln(n)^{s-q}\left(\ln(d)+\frac{r}{p}\ln(n)+\frac{s-q}{p}\ln\ln(n)\right)^{q}$
$\displaystyle\leq$
$\displaystyle\frac{(2r)^{q}d^{p}}{p^{q}}n^{r}\ln(n)^{s-q}\ln(n)^{q}=\frac{(2r)^{q}d^{p}}{p^{q}}n^{r}\ln(n)^{s}$
for sufficiently large $n$. This contradicts (11) if $d$ is small
enough and $n$ is large enough. Hence,
$\textup{wir}^{k}_{X\to Y}(n)\gtrsim n^{r/p}\ln(n)^{(s-q)/p}.$
If $p=0$, then by (10) there is some $C>0$ such that
$\displaystyle C^{-1-r}n^{r}\ln(C^{-1}n)^{s}-C$ $\displaystyle\leq$
$\displaystyle\sum_{i=0}^{\ln(\textup{wir}^{k}_{X\to
Y}(n))}\ln\left(2^{-i}\textup{wir}^{k}_{X\to Y}(n)\right)^{q}$
$\displaystyle\leq$ $\displaystyle\ln\left(\textup{wir}^{k}_{X\to
Y}(n)\right)^{q+1}.$
Hence $\textup{wir}^{k}_{X\to Y}(n)\gtrsim\exp(n^{r/(q+1)}\ln(n)^{s/(q+1)})$.
∎
## 6 Completing the proofs of Theorems 1.4, 1.5 and 1.7
In this section we give complete proofs of the main results of the paper.
###### Proof of Theorem 1.7.
Let $M=\mathbb{H}^{q}_{F}\times\mathbb{R}^{r}$ and let $Y$ be a bounded degree
graph which is quasi-isometric to $M$. Fix $d\in\mathbb{N}$ and
$\delta,\varepsilon>0$. Enumerate the set of finite graphs with maximal degree
$d$ and Cheeger constant $\geq\delta$ by $\Gamma_{1},\Gamma_{2},\ldots$, with
$|\Gamma_{i}|\leq|\Gamma_{j}|$ whenever $i\leq j$. Let $X$ be the disjoint
union of all the $\Gamma_{i}$. If $X$ is finite, there is nothing to prove. By
[Hum17, Proposition 2.4],
$\textup{sep}_{X}(|\Gamma_{i}|)\geq\frac{\delta}{2}|\Gamma_{i}|$ for all $i$.
Set $Q=(q+1)\dim_{\mathbb{R}}(F)-2$, the conformal dimension of the boundary
of $\mathbb{H}^{q}_{F}$. We have $\textup{sep}_{Y}(n)\simeq
n^{1-1/(r+1)}\ln(n)^{1/(r+1)}$ if $Q=1$ and $\textup{sep}_{Y}(n)\simeq
n^{1-1/(Q+r-1)}$ if $Q\geq 2$, so by Corollary 5.4, for each $k$ there exists
a constant $C$ such that for all $i$,
$\textup{wir}^{k}_{X\to
Y}(|\Gamma_{i}|)\geq\left\\{\begin{array}[]{rcl}C^{-1}|\Gamma_{i}|^{1+1/r}\ln(1+|\Gamma_{i}|)^{-1/r}-C&\textup{if}&Q=1,\\\
C^{-1}|\Gamma_{i}|^{1+1/(Q+r-1)}-C&\textup{if}&Q\geq 2.\end{array}\right.$
(12)
We continue with the $Q=1$ case, the argument for $Q\geq 2$ is very similar.
Now suppose for a contradiction that for every $n$ there is some
$\varepsilon$-thick embedding $\Gamma_{i_{n}}\to M$ with volume at most
$\frac{1}{n}|\Gamma_{i_{n}}|^{1+1/r}\ln(1+|\Gamma_{i_{n}}|)^{-1/r}$. By Proposition
1.10, there is a coarse $k$-wiring of $\Gamma_{i_{n}}$ into $Y$ with volume at
most
$\frac{k}{n}|\Gamma_{i_{n}}|^{1+1/r}\ln(1+|\Gamma_{i_{n}}|)^{-1/r}$
for some $k=k(d,M,\varepsilon,Y)$. This contradicts (12) for
large enough $n$. ∎
###### Proof of Theorem 1.4.
Let $\Gamma$ be a finite graph with maximal degree $d$. By Theorem 1.18 there
is a $2d$-coarse wiring of $\Gamma$ into a Cayley graph of
$\mathbb{Z}_{2}\wr\mathbb{Z}$ with volume $\leq
4d|\Gamma|\lceil\log_{2}(1+|\Gamma|)\rceil$. This Cayley graph is quasi-
isometric to $\textup{DL}(2,2)$ [Woe05], and $\textup{DL}(2,2)$ quasi-
isometrically embeds into any graph $X$ quasi-isometric to a symmetric space
whose non-compact factor has rank $\geq 2$ [HMT22, Proposition 2.8 and Theorem
3.1]. Thus, for some $l$ we have
$\textup{wir}^{l}(\Gamma\to X)\leq C^{\prime}N\ln(1+N),\ \textup{where}\ N=|\Gamma|.\qed$
###### Proof of Theorem 1.5.
The proof is the same as for Theorem 1.7 replacing (12) with
$\textup{wir}^{k}_{X\to Y}(|\Gamma_{i}|)\geq
C^{-1}|\Gamma_{i}|\ln(1+|\Gamma_{i}|)-C.\qed$
### 6.1 Coarse wirings into two dimensional symmetric spaces
In this section we collect results about coarse wirings into $\mathbb{R}^{2}$
and $\mathbb{H}^{2}_{\mathbb{R}}$. The first is a direct construction of a
coarse wiring, the second a thick embedding into
$\mathbb{H}_{\mathbb{R}}^{2}\times[0,1]$ which is quasi-isometric to
$\mathbb{H}^{2}_{\mathbb{R}}$.
###### Proposition 6.1.
Every $N$-vertex graph with maximal degree $d$ admits a coarse $2d$-wiring
into the standard $2$-dimensional integer lattice $\mathbb{Z}^{2}$ with volume
at most $N^{2}$.
Let $X$ be the disjoint union of all finite graphs with maximal degree $3$.
For any $k$ there is some $C$ such that
$\textup{wir}^{k}_{X\to\mathbb{Z}^{2}}(n)\geq C^{-1}n^{2}-C$.
###### Proof.
The second claim follows immediately from Corollary 5.4 and the fact that
$\textup{sep}_{X}(n)\simeq n$ [Hum17] and
$\textup{sep}_{\mathbb{Z}^{2}}(n)\simeq n^{1/2}$ [BST12].
Let $\Gamma$ be an $n$-vertex graph with maximal degree $d$. Enumerate the
vertices of $\Gamma$ by $v_{0},\ldots,v_{n-1}$. We construct a coarse $2d$-wiring of
$\Gamma$ into $\\{0,\ldots,n-1\\}^{2}$ as follows:
Map the vertex $v_{k}$ to the point $(k,k)$. For each edge $v_{i}v_{j}$ (with
$i<j$) we define a path $P_{ij}$ which travels horizontally from $(i,i)$ to
$(j,i)$, then vertically from $(j,i)$ to $(j,j)$.
To verify the multiplicity bound, note that if a horizontal edge
$(a,b)(a+1,b)$ is in $P_{ij}$ then $b=i$. Similarly, if a vertical edge
$(a,b)(a,b+1)$ appears in $P_{ij}$, then $a=j$. Hence, any two paths
containing a common edge have a common end vertex. Since, by assumption, there
are at most $d$ edges with a given end vertex, we have defined a coarse
$2d$-wiring. The volume estimate is obvious. ∎
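A short Python sketch of this construction (our own illustration): it builds the L-shaped paths and computes, for the complete graph $K_{5}$, the largest number of paths running through a single lattice edge, which indeed stays within the bound coming from vertex degrees.

```python
import itertools

def l_path(i, j):
    """L-shaped path for the edge v_i v_j (i < j): horizontal then vertical."""
    horiz = [(x, i) for x in range(i, j + 1)]
    vert = [(j, y) for y in range(i, j + 1)]
    return horiz + vert[1:]

def max_edge_multiplicity(edges):
    """Largest number of paths running through a single lattice edge."""
    count = {}
    for i, j in edges:
        p = l_path(min(i, j), max(i, j))
        for a, b in zip(p, p[1:]):
            count[frozenset((a, b))] = count.get(frozenset((a, b)), 0) + 1
    return max(count.values())

edges = list(itertools.combinations(range(5), 2))  # K_5, so d = 4
print(max_edge_multiplicity(edges))  # 4, within the coarse 2d-wiring bound
```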
###### Proposition 6.2.
Let $Y$ be a graph which is quasi-isometric to $\mathbb{H}_{\mathbb{R}}^{2}$
and let $d\in\mathbb{N}$. There are constants $k=k(Y,d)$ and $C=C(Y,d)$ such
that any $N$-vertex graph $\Gamma$ with maximal degree $d$ admits a coarse
$k$-wiring into $Y$ with volume $\leq CN^{2}\exp(N)$.
Let $X$ be the disjoint union of all finite graphs with maximal degree $3$.
For any $k$ there is some $C$ such that $\textup{wir}^{k}_{X\to Y}(n)\geq
C^{-1}\exp(C^{-1}n^{1/2})-C$.
###### Proof.
The second claim follows immediately from Corollary 5.4 and the fact that
$\textup{sep}_{X}(n)\simeq n$ [Hum17] and
$\textup{sep}_{\mathbb{H}_{\mathbb{R}}^{2}}(n)\simeq\ln(n)$ [BST12].
We will construct $1$-thick embeddings
$K_{N}\to\mathbb{H}_{\mathbb{R}}^{2}\times[0,1]$ with volume $\leq
C^{\prime}N^{2}\exp(N)$. Since $Y$ is quasi-isometric to
$\mathbb{H}_{\mathbb{R}}^{2}\times[0,1]$, the result will then follow from
Proposition 1.10.
Firstly, recall the definition of the metric in the upper halfspace model of
$\mathbb{H}_{\mathbb{R}}^{2}$:
$d_{\mathbb{H}_{\mathbb{R}}^{2}}((x_{1},y_{1}),(x_{2},y_{2}))=\cosh^{-1}\left(1+\frac{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}{2y_{1}y_{2}}\right).$
We equip $\mathbb{H}_{\mathbb{R}}^{2}\times[0,1]$ with the metric
$d((w,x;a),(y,z;b))=\max\\{d_{\mathbb{H}_{\mathbb{R}}^{2}}((w,x),(y,z)),|a-b|\\}.$
Let us define $h_{0}:=(2(\cosh(1)-1))^{-1/2}$.
Claim: If $d((w,x;a),(y,z;b))<1$ and $x,z\leq h_{0}$, then $|a-b|<1$,
$|w-y|<1$ and $|\ln(x/h_{0})-\ln(z/h_{0})|<1$.
###### Proof of Claim.
It is immediate from the definition that $|a-b|<1$. Since $x,z\leq h_{0}$,
$\displaystyle 1$ $\displaystyle>$ $\displaystyle d((w,x;a),(y,z;b))$
$\displaystyle\geq$ $\displaystyle
d_{\mathbb{H}_{\mathbb{R}}^{2}}((w,x),(y,z))$ $\displaystyle\geq$
$\displaystyle\cosh^{-1}\left(1+\frac{(w-y)^{2}}{2h_{0}^{2}}\right)$
$\displaystyle\geq$ $\displaystyle\cosh^{-1}(1+(\cosh(1)-1)(w-y)^{2}).$
Hence $(w-y)^{2}<1$, so $|w-y|<1$. Finally, write $x=h_{0}e^{p}$ and
$z=h_{0}e^{q}$ with $p,q\in\mathbb{R}$. We have
$\displaystyle 1$ $\displaystyle>$ $\displaystyle d((w,x;a),(y,z;b))$
$\displaystyle\geq$ $\displaystyle
d_{\mathbb{H}_{\mathbb{R}}^{2}}((w,x),(y,z))$ $\displaystyle\geq$
$\displaystyle\cosh^{-1}\left(1+\frac{h_{0}^{2}(e^{p}-e^{q})^{2}}{2h_{0}^{2}e^{p+q}}\right)$
$\displaystyle=$
$\displaystyle\cosh^{-1}\left(\frac{1}{2}(e^{p-q}+e^{q-p})\right)=|p-q|.$
Hence $|\ln(x/h_{0})-\ln(z/h_{0})|=|p-q|<1$. ∎
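The last step is the elementary identity $\cosh^{-1}\big(\frac{1}{2}(e^{p-q}+e^{q-p})\big)=|p-q|$: points on a common vertical geodesic at heights $h_{0}e^{p}$ and $h_{0}e^{q}$ are at hyperbolic distance $|p-q|$. A one-line numerical check (our own addition):

```python
import math

p, q = 0.3, -1.2
print(math.acosh((math.exp(p - q) + math.exp(q - p)) / 2), abs(p - q))  # both 1.5
```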
Enumerate the vertices of $K_{N}$ by $v_{0},\ldots,v_{N-1}$. We map $v_{i}$ to
$(i,h_{0}e^{-i};0)$, with $h_{0}$ as above. For $i<j$, we connect
$(i,h_{0}e^{-i};0)$ to $(j,h_{0}e^{-j};0)$ using the path $P_{ij}$ defined as
follows:
$\displaystyle(i,h_{0}e^{-i};0)$ $\displaystyle\to$
$\displaystyle(j,h_{0}e^{-i};0)$ (13) $\displaystyle\to$
$\displaystyle(j,h_{0}e^{-i};1)$ (14) $\displaystyle\to$
$\displaystyle(j,h_{0}e^{-j};1)$ (15) $\displaystyle\to$
$\displaystyle(j,h_{0}e^{-j};0)$ (16)
where the first segment lies in the horocircle $y=h_{0}e^{-i}$ and the others
are geodesics.
We first prove that this embedding is $1$-thick. Let $(w,x;a)\in P_{ij}$ and
$(y,z;b)\in P_{kl}$ with $d((w,x;a),(y,z;b))<1$. Set $p=\ln(x/h_{0})$ and
$q=\ln(z/h_{0})$. From the claim we have $\max\\{|w-y|,|p-q|,|a-b|\\}<1$.
If $a=1$, then $b>0$, so $w=j$ and $y=l$. Since $w,y$ are both integers they
must be equal. Thus $j=l$ and the two paths come from edges which share a
vertex.
If $a\in(0,1)$ then $w=j$ and $p\in\\{-i,-j\\}$. Note that one of the four
equalities $y=k$, $y=l$, $q=-k$, $q=-l$ holds at every point on $P_{kl}$. If
it is one of the first two, then $\min\\{|j-k|,|j-l|\\}<1$ and $j\in\\{k,l\\}$;
if it is one of the last two, then one of $-i,-j$ is equal to one of
$-k,-l$. In any case the two paths share an end vertex.
If $a=0$ then either $p=-i$ or $w=j$ and $p=-j$. Also $b<1$ so either $q=-k$
or $y=l$ and $q=-l$. If $p=-i$, then either $q=-k$ in which case $|-i-(-k)|<1$
by the claim, thus $i=k$; or $q=-l$ in which case $i=l$ by the same reasoning.
Next, suppose $w=j$ and $p=-j$. Since $q\in\\{-k,-l\\}$ we have $j=k$ or
$j=l$. If $p=-i$, $y=l$ and $q=-l$, then $i=l$ following the same reasoning.
Every point in the image of the embedding is of the form $(x,h_{0}e^{-y};z)$
where $|x|,|y|\leq N-1$ and $z\in[0,1]$. Set
$p=\left(\frac{N-1}{2},h_{0};\frac{1}{2}\right)$. We have
$\displaystyle d\left((x,h_{0}e^{-y};z),p\right)$ $\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(1+\frac{\left(\frac{N-1}{2}\right)^{2}+h_{0}^{2}(1-e^{-(N-1)})^{2}}{2h_{0}^{2}e^{-(N-1)}}\right)+\frac{1}{2}$
$\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(1+\frac{\frac{N^{2}}{4}+h_{0}^{2}}{2h_{0}^{2}e^{-(N-1)}}\right)+\frac{1}{2}$
$\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(1+\left(\frac{N^{2}}{7}+1\right)e^{N-1}\right)+\frac{1}{2}$
$\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(\frac{\frac{17N^{2}}{8}e^{N-1}}{2}\right)+\frac{1}{2}$
$\displaystyle\leq$
$\displaystyle\cosh^{-1}\left(\cosh(N-1+2\ln(N)+\ln(17)-\ln(8))\right)+\frac{1}{2}$
$\displaystyle=$ $\displaystyle N-1+2\ln(N)+\ln(17)-\ln(8)+\frac{1}{2}\leq
N+2\ln(N)+1.$
Thus, the volume of the embedding is at most
$4\pi\sinh^{2}((N+2\ln(N)+2)/2)$: the volume of the ball of radius
$N+2\ln(N)+2$ in $\mathbb{H}_{\mathbb{R}}^{2}$. We have
$\displaystyle 4\pi\sinh^{2}((N+2\ln(N)+2)/2)$ $\displaystyle\leq$
$\displaystyle 4\pi\left(\frac{\exp((N+2\ln(N)+2)/2)}{2}\right)^{2}$
$\displaystyle=$ $\displaystyle e^{2}\pi N^{2}e^{N}\simeq e^{N}$
as required. ∎
## 7 Questions
One possible way to improve our bounds on thick embeddings of graphs into
other symmetric spaces whose non-compact factor has rank one is via
constructions of thick embeddings into nilpotent Lie groups. A positive
resolution to the following question would show that the lower bounds from
Theorem 1.7 are sharp whenever $Q\geq 2$.
###### Question 7.1.
Let $P$ be a connected nilpotent Lie group with polynomial growth of degree
$p\geq 3$ and let $d\in\mathbb{N}$. Do there exist constants $C,\varepsilon>0$
which depend on $p,d$ such that for any $N$-vertex graph $\Gamma$ with maximal
degree $d$ there is an $\varepsilon$-thick embedding of $\Gamma$ into $P$ with
diameter $\leq CN^{1/(p-1)}$?
Another important example worthy of consideration is a semidirect product of
the Heisenberg group with $\mathbb{R}$, $H\rtimes_{\psi}\mathbb{R}$ where the
action is given by
$\left(\begin{array}[]{ccc}1&x&z\\\ 0&1&y\\\
0&0&1\end{array}\right)\cdot\psi(t)=\left(\begin{array}[]{ccc}1&e^{t}x&z\\\
0&1&e^{-t}y\\\ 0&0&1\end{array}\right).$
###### Conjecture 7.2.
For every $d$ there exist constants $C=C(d)$ and $\varepsilon=\varepsilon(d)$
such that every finite graph $\Gamma$ with maximal degree $d$ admits an
$\varepsilon$-thick embedding into $H\rtimes_{\psi}\mathbb{R}$ with volume
$\leq C|\Gamma|\ln(1+|\Gamma|)$.
An immediate consequence of this conjecture is that the dichotomy at the heart
of [HMT22] is also detected by wiring profiles. Specifically, let $G$ be a
connected unimodular Lie group, let $Y$ be a graph quasi-isometric to $G$ and
let $X$ be the disjoint union of all finite graphs with degree $\leq 3$.
Either $G$ is quasi-isometric to a product of a hyperbolic group and a
nilpotent Lie group, in which case there is some $p>1$ such that for all $k$
sufficiently large $\textup{wir}^{k}_{X\to Y}(N)\gtrsim N^{p}$; or $G$
contains a quasi-isometrically embedded copy of either $\textup{DL}(2,2)$ or
$H\rtimes_{\psi}\mathbb{R}$, in which case for all $k$ sufficiently large
$\textup{wir}^{k}_{X\to Y}(N)\simeq N\ln(N)$.
The lower bound from separation profiles is incredibly useful, and our best
results are all in situations where we can prove that the lower bound in
Theorem 1.17 is optimal. As a result it is natural to record the following:
###### Question 7.3.
For which bounded degree graphs $Y$ does the following hold:
Let $X$ be the disjoint union of all finite graphs with maximal degree $\leq
3$. For all $k$ sufficiently large
$\textup{sep}_{Y}(\textup{wir}^{k}_{X\to Y}(N))\simeq N.$
A starting point would be to determine when the following strengthening of
Proposition 5.3 holds:
###### Question 7.4.
Let $X,Y$ be graphs of bounded degree where $\textup{wir}^{k}_{X\to
Y}(n)<\infty$. Does there exist a constant $C>0$ such that for all $n$
$\textup{sep}_{Y}(\textup{wir}^{k}_{X\to Y}(n))\geq
C^{-1}\textup{sep}_{X}(C^{-1}n)-C?$
We certainly should not expect Theorem 1.17 to give the correct lower bound in
all cases. A natural example to consider would be a coarse wiring of an
infinite binary tree $B$ into $\mathbb{Z}^{2}$. The depth $k$ binary tree
$B_{k}$ (with vertices considered as binary strings $v=(v_{1},v_{2},\ldots
v_{m})$ of length $\leq k$) admits a $1$-wiring into $\mathbb{Z}^{2}$ with
volume $\lesssim|B_{k}|\ln|B_{k}|$ as follows
$\psi(v_{1},v_{2},\ldots v_{l})=\left(\sum_{\{i\,\mid\,v_{i}=0\}}2^{k-i},\;\sum_{\{j\,\mid\,v_{j}=1\}}2^{k-j}\right)$
where the path connecting $\psi(v_{1},v_{2},\ldots v_{l})$ to
$\psi(v_{1},v_{2},\ldots v_{l},0)$ (respectively $\psi(v_{1},v_{2},\ldots
v_{l},1)$) is a horizontal (resp. vertical) line.
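The following Python sketch (illustrative only; Python plays no role in our arguments) checks that this embedding is injective and that the total wire length is $k\cdot 2^{k}$, which is of the same order as $|B_{k}|\ln|B_{k}|$:

```python
import itertools, math

def psi(v, k):
    """Image of the vertex v (a binary string, root = '') under the embedding:
    bit v_i contributes 2^(k-i) to the x-coordinate if v_i = '0' and to the
    y-coordinate if v_i = '1' (positions i starting from 1)."""
    x = sum(2 ** (k - i) for i, bit in enumerate(v, start=1) if bit == '0')
    y = sum(2 ** (k - i) for i, bit in enumerate(v, start=1) if bit == '1')
    return (x, y)

k = 10
vertices = [''.join(bits) for l in range(k + 1)
            for bits in itertools.product('01', repeat=l)]
size = len(vertices)                                  # |B_k| = 2^(k+1) - 1

# Injectivity: the x- and y-coordinates recover the positions of the 0s and
# the 1s respectively, hence the whole string.
assert len({psi(v, k) for v in vertices}) == size

# Each vertex v != root is joined to its parent by a single horizontal or
# vertical segment of length 2^(k - len(v)), so the total wire length is k*2^k.
volume = sum(2 ** (k - len(v)) for v in vertices if v)
print(volume, round(size * math.log(size)))           # same order of growth
```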
###### Question 7.5.
Is it true that for all sufficiently large $k$,
$\textup{wir}^{k}_{B\to\mathbb{Z}^{2}}(N)\simeq N\ln(N)$? Does the lower bound
hold for all coarse wirings $X\to Y$ where $X$ has exponential growth and $Y$
has (at most) polynomial growth?
Since the first version of this paper appeared, this question has been
resolved by Kelly [Kel23], who proved the slightly surprising result that
$\textup{wir}^{1}_{T_{3}\to\mathbb{Z}^{2}}(N)\simeq N$.
It is also natural to ask whether other invariants which behave monotonically
with respect to coarse embedding (and regular maps) provide lower bounds on
wiring profiles.
## Appendix A Appendix: The Kolmogorov-Barzdin Embedding Theorem in Higher
Dimensions
The goal of this appendix is to prove Theorem 1.3. The main theorem roughly
says that if we have a graph of bounded degree with $V$ vertices, then we can
embed it into an $n$-dimensional Euclidean ball of radius $V^{1/(n-1)}$
without too many edges or vertices intersecting any unit ball. Kolmogorov and
Barzdin proved the theorem in dimension 3 and Guth sketched a proof that
showed how their method generalized to higher dimensions in the language of
thick embeddings. In this appendix we present a full proof using the language
of coarse wirings introduced in the present paper.
###### Theorem A.1.
[KB93, Gu16] Let $Q^{1}_{r}$ be the path graph with $r$ vertices, and let
$Q^{n}_{r}=Q^{1}_{r}\times Q^{1}_{r}\times\ldots\times Q^{1}_{r}$
where the graph product is taken $n$ times. If $\Gamma$ is a graph where every
vertex has degree at most $k$, then for some integer $C>0$ that only depends
on $n$ and $k$, and $R=\lceil|V\Gamma|^{\frac{1}{n-1}}\rceil$ there is a
coarse $(k+n)$-wiring,
$f:\Gamma\to Q^{n}_{2CR}.$
Here, the graph product $\Gamma_{1}\times\ldots\times\Gamma_{n}$ is the graph
with vertex set $V\Gamma_{1}\times\ldots\times V\Gamma_{n}$ and edges
$(v_{1},\ldots,v_{n}),(w_{1},\ldots,w_{n})$ whenever there is some $j$ such
that $v_{j}w_{j}\in E\Gamma_{j}$ and $v_{i}=w_{i}$ for all $i\neq j$.
The proof of Theorem 1.3 follows immediately from Theorem A.1 by first
applying 1.11, then rescaling by a factor of $\varepsilon^{-1}$.
We say a few words about our strategy for constructing $f$. If we think of
$Q^{n}_{2CR}$ as a graph embedded in an $n$-cube, then our $f$ maps the
vertices of $\Gamma$ into some face of this cube. The edges of $\Gamma$ are
mapped to paths which each consist of $O(n)$ straight segments of randomly
chosen lengths. It turns out that this $O(n)$ freedom is enough to guarantee
that, with non-zero probability, there is no edge of $Q^{n}_{2CR}$ where too
many of these paths overlap.
In the next section we provide a proof of Theorem A.1 following [Gu16].
## Appendix B Proof of Theorem A.1
###### Proof.
Let $C$ be some large constant, only depending on $n$, which we will choose
later. We can think of $Q^{n}_{2CR}$ as a graph embedded in the cube
$[0,2CR]^{n}$, where each edge has length $1$. We let $Q^{n-1}_{R}$ be a graph
embedded in the bottom face of this cube. Namely, $Q^{n-1}_{R}$ sits inside
$[0,CR]^{n-1}\times 0\subset[0,2CR]^{n}$, with each edge having length $C$.
Begin by defining $f$ on $V\Gamma$ by embedding all the vertices of $\Gamma$
into $Q^{n-1}_{R}$ in any way we like. Such an embedding is possible since
$R^{n-1}$ is larger than $|V\Gamma|$.
Next we have to extend $f$ to the edges of $\Gamma$. Give the edges of
$\Gamma$ some order, say $\\{e_{i}\\}_{i=1}^{|E\Gamma|}$, and let the
endpoints of $e_{i}$ be $x_{i,-}\in V\Gamma$ and $x_{i,+}\in V\Gamma$. For
many values of $j$, we will construct paths $\gamma(i,j)$ from $f(x_{i,+})$ to
$f(x_{i,-})$. Each $\gamma(i,j)$ will consist of $(2n-1)$ straight segments.
We select $f(e_{1})$ from among the paths $\gamma(1,j)$. Next we select
$f(e_{2})$ from among the paths $\gamma(2,j)$, making sure it does not pass
too close to $f(e_{1})$. And so on. At step $i$, we have to check that one of
the paths $\gamma(i,j)$ stays far enough away from the previous paths
$f(e_{1}),\ldots f(e_{i-1})$. In fact, we will show that at step $i$, if we
choose $j$ randomly, the probability that $\gamma(i,j)$ comes too close to one
of the previous paths is less than one half.
To define these paths we will use $x_{i}$ to refer to the $i$th coordinate
direction in $[0,2CR]^{n}$. For a tuple of $n-1$ integers
$j=(j_{0},j_{1},\ldots,j_{n-2})\in([0,CR]\cap\mathbb{Z})^{n-1}$ the path $\gamma(i,j)$ has the
following form. Starting at $f(x_{i,+})$, we first draw a segment in the
$x_{n}$ direction with length $j_{0}$. Next we draw a segment in the $x_{1}$
direction with length $j_{1}$. Then we draw a segment in the $x_{2}$ direction
with length $j_{2}$. We continue in this way up to a segment in the $x_{n-2}$
direction of length $j_{n-2}$. We have $(CR+1)^{n-1}$ choices for $j$. Then we
draw a segment in the $x_{n-1}$ direction which ends at the $x_{n-1}$
coordinate of $f(x_{i,-})$. Then we draw a segment in the $x_{n-2}$ direction
which ends at the $x_{n-2}$ coordinate of our target $f(x_{i,-})$, etc.
Finally, we draw a segment in the $x_{n}$ direction which ends at
$f(x_{i,-})$.
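For concreteness, the corner points of $\gamma(i,j)$ can be generated by the following Python sketch, a direct transcription of the construction above (coordinates $x_{1},\ldots,x_{n}$ correspond to Python indices $0,\ldots,n-1$; this is only an illustration, not an optimised router):

```python
def gamma_path(start, end, j):
    """Corner points of the path gamma(i, j) in [0, 2CR]^n.

    start, end: integer points with last coordinate 0 (images of the two
    endpoints of the edge e_i on the bottom face); j = (j_0, ..., j_{n-2})
    holds the randomly chosen segment lengths.
    """
    n = len(start)
    p = list(start)
    corners = [tuple(p)]
    p[n - 1] = j[0]                     # segment in the x_n direction, length j_0
    corners.append(tuple(p))
    for axis in range(n - 2):           # segments in x_1, ..., x_{n-2}
        p[axis] += j[axis + 1]
        corners.append(tuple(p))
    p[n - 2] = end[n - 2]               # match the x_{n-1} coordinate of the target
    corners.append(tuple(p))
    for axis in range(n - 3, -1, -1):   # match x_{n-2}, ..., x_1 in turn
        p[axis] = end[axis]
        corners.append(tuple(p))
    p[n - 1] = end[n - 1]               # final segment in x_n back to the target
    corners.append(tuple(p))
    return corners                      # 2n corners, i.e. 2n - 1 segments

# Example with n = 3: a path between two vertices of the bottom face
print(gamma_path((0, 0, 0), (4, 6, 0), j=(3, 2)))
```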
We claim that we can choose $j$ so that $\gamma(i,j)$ only intersects the
previously selected paths $f(e_{i^{\prime}})$, $i^{\prime}<i$, in the $x_{n}$ direction or intersects them
perpendicularly. Since each path is made of segments that point in the
coordinate directions, we just have to check that none of the segments
intersects a segment of a previous path going in the same direction. Call a
segment bad if it intersects a segment from a previous path going in the same
direction.
The initial segment of $\gamma(i,j)$, in the $x_{n}$ direction, can intersect
at most $k$ segments in the same direction of $f(e_{1})\ldots f(e_{i-1})$
because $f$ is an embedding on $V\Gamma$ and $\Gamma$ has degree at most $k$
at each vertex. Consider the first segment in the $x_{1}$ direction. On this
segment, the $2\ldots(n-1)$ coordinates are equal to those of $f(x_{i,+})$.
This segment can intersect an $x_{1}$ segment of a previous path
$f(e_{i^{\prime}})$ only if $f(x_{i,+})$ has the same $2\ldots(n-1)$
coordinates as $f(x_{i^{\prime},+})$ or $f(x_{i^{\prime},-})$. This leaves at
most $2Rk$ worrisome values of $i^{\prime}$. But on this segment of
$\gamma(i,j)$, the $x_{n}$ coordinate is fixed, equal to $j_{0}$. This segment is
bad only if it has the same $x_{n}$ coordinate as a segment in the $x_{1}$
direction of one of the $2Rk$ worrisome paths $f(e_{i^{\prime}})$. But there are
$CR$ choices for $j_{0}$. So, choosing $(j_{0},\ldots,j_{n-2})$ uniformly at
random, the probability that this first segment is bad is at most
$\frac{2k}{C}$.
A similar argument holds for the second segment. This $x_{2}$ segment can
intersect a previous path $f(e_{i^{\prime}})$ only if $f(x_{i,+})$ has the
same $3\ldots(n-1)$ coordinates as $f(x_{i^{\prime},+})$ or
$f(x_{i^{\prime},-})$. This leaves at most $(2Rk)^{2}$ worrisome values of
$i^{\prime}$. But there are $(CR)^{2}$ choices for $(j_{0},j_{1})$. So the
probability that the second segment is bad is at most $(\frac{2k}{C})^{2}$.
The same reasoning applies for the first $n$ segments, and in fact for the
remaining $n-1$ segments as well. For instance,
consider the second (and last) segment in the $x_{1}$ direction. Over the
course of this segment, the $2\ldots(n-1)$ coordinates are equal to those of
$f(x_{i,-})$, and so this segment can intersect a segment in the $x_{1}$
direction of a previous path $f(e_{i^{\prime}})$ only if $f(x_{i,-})$ has the
same $2\ldots(n-1)$ coordinates as $f(x_{i^{\prime},+})$ or
$f(x_{i^{\prime},-})$. This leaves at most $2Rk$ worrisome values of
$i^{\prime}$. But there are more than $CR$ choices of $j_{0}$, and so the
probability that this segment is bad is at most $\frac{2k}{C}$. In summary,
there are $2n-1$ segments, and each has probability at most $\frac{2k}{C}$ of
being bad, if $C$ is large. So, by a union bound, for some large $C$ only
depending on $n$ and $k$, more than half the time, all segments in directions
$x_{1}\ldots x_{n-1}$ are good. This gives us a path $f(e_{i})$ which
intersects at most $k$ paths in segments in the $x_{n}$ direction, and
intersects all other previous paths perpendicularly. From the construction, we
see that at most $(k+n)$ of the paths we choose intersect any vertex or edge
of $Q^{n}_{2CR}$. In other words, $f$ is a $(k+n)$-coarse wiring. ∎
## References
* [BST12] I. Benjamini, O. Schramm, and Á. Timár. On the separation profile of infinite graphs. Groups Geom. Dyn., 6(4):639–658, 2012.
* [GG12] M. Gromov and L. Guth. Generalizations of the Kolmogorov-Barzdin embedding estimates. _Duke Math. J._ , 161(13):2549–2603, 2012.
* [Gu16] L. Guth. Recent progress in quantitative topology. _Surveys in Differential Geometry_ , 22(1):191–216, 2017.
* [Hum17] D. Hume. A continuum of expanders. Fund. Math., 238:143–152, 2017.
* [HMT20] D. Hume, J. M. Mackay, and R. Tessera. Poincaré profiles of groups and spaces. Rev. Math. Iberoam., 36(6):1835–1886, 2020.
* [HMT22] D. Hume, J. M. Mackay, and R. Tessera. Poincaré profiles of Lie groups and a coarse geometric dichotomy. GAFA, 32:1063–1133, 2022.
* [Kel23] S. Kelly. A Topological Embedding of the Binary Tree into the Square Lattice. Preprint available from arXiv:2311.13195.
* [KB93] A. N. Kolmogorov and Y. M. Barzdin. On the realization of networks in three-dimensional space. In _Selected Works of Kolmogorov_ , Kluwer, Dordrecht, 3:194–202, 1993.
* [Lo21] F. López, B. Pozzetti, S. Trettel, M. Strube and A. Wienhard. Symmetric Spaces for Graph Embeddings: A Finsler-Riemannian Approach. Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021.
* [Pan89] P. Pansu. Dimension conforme et sphère à l’infini des variétés à courbure négative. Ann. Acad. Sci. Fenn. Ser. A I Math., 14(2):177–212, 1989.
* [Ra23] R. Raistrick. Dependence of the coarse wiring profile on the choice of parameter. Preprint available from arXiv:2311.08436.
* [Woe05] W. Woess. Lamplighters, Diestel-Leader graphs, random walks, and harmonic functions. Combin. Probab. Comput., 14(3):415–433, 2005.
$h_{ij}=\int d\vec{r}_{1}\,\chi_{i}^{*}\left(\vec{r}_{1}\right)\left(-\frac{1}{2}\nabla_{1}^{2}-\sum_{\sigma}\frac{Z_{\sigma}}{\left|\vec{r}_{1}-\vec{R}_{\sigma}\right|}\right)\chi_{j}\left(\vec{r}_{1}\right)$ (49)

$h_{ijkl}=\int d\vec{r}_{1}d\vec{r}_{2}\,\chi_{i}^{*}\left(\vec{r}_{1}\right)\chi_{j}^{*}\left(\vec{r}_{2}\right)\frac{1}{r_{12}}\chi_{k}\left(\vec{r}_{2}\right)\chi_{l}\left(\vec{r}_{1}\right)$ (50)
This fermionic Hamiltonian can be mapped to a qubit Hamiltonian, written as a
sum of products of Pauli matrices, using the Bravyi-Kitaev transformation
already discussed in 3.3. One can also use the parity basis and particle- and
spin-conservation methods to further reduce the number of qubits needed.
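As a concrete illustration, this mapping can be performed with the OpenFermion package (see Table 3); the coefficients below are hypothetical toy values, not integrals of a real molecule:

```python
from openfermion import FermionOperator, bravyi_kitaev

# Toy one-body coefficients h_ij and a single two-body term (assumed values)
h = {(0, 0): -1.25, (1, 1): -0.47}
h2 = 0.67  # stands in for an h_ijkl two-electron integral

ham = FermionOperator()
for (i, j), hij in h.items():
    ham += FermionOperator(f'{i}^ {j}', hij)      # h_ij a_i^dagger a_j
ham += FermionOperator('0^ 1^ 1 0', h2)           # two-body term

qubit_ham = bravyi_kitaev(ham)                    # weighted sum of Pauli strings
print(qubit_ham)
```

The resulting qubit operator is then ready for the simulation methods discussed next.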
In quantum computation, there are several simulation methods which can be used
to calculate the energy states of a molecule from its Hamiltonian. One method
is phase estimation using Trotterisation, in which the eigen-energy values are
encoded into the phase of the propagator. Two further approaches are the
direct implementation of the Hamiltonian using first-order [282] and
second-order Trotterisation [283], and yet another is the direct-measurement
method. These algorithms are all of the Phase Estimation Algorithm (PEA) type.
The most useful method in the NISQ era of quantum computers is the Variational
Quantum Eigensolver (VQE) [284]. The paper [285] shows that the VQE method
requires the fewest qubits as systems scale up, whereas PEA-type methods
achieve higher accuracy with just one measurement but require a large number
of qubits. This suggests that the VQE algorithm is best suited for NISQ-era
quantum computers, while PEA-type methods are best suited for long-term
quantum computers.
Number Operator

Figure 8: Here $T(\theta)$ is the phase gate such that $T(\theta)\ket{0}=\ket{0}$ and $T(\theta)\ket{1}=\exp(-\iota\theta)\ket{1}$. (Circuit diagram omitted.)

Number-excitation operator

Figure 9: The M gate is the combined set of $\{H,Y\}$ gates taken in order ($Y=R_{x}(-\frac{\pi}{2})$). (Circuit diagram omitted.)

Double excitation operator

Figure 10: In the circuit, $(M_{1},M_{2},M_{3},M_{4})\in\{(H,H,H,H),(Y,Y,Y,Y),(H,Y,H,Y),(Y,H,Y,H),(Y,Y,H,H),(H,H,Y,Y),(Y,H,H,Y),(H,Y,Y,H)\}$. (Circuit diagram omitted.)

Excitation Operator

Figure 11: The Y gate is nothing but $Y=R_{x}(-\frac{\pi}{2})$. (Circuit diagram omitted.)

Coulomb and exchange operators

Figure 12: $G(\phi)$ is the global phase gate, expressed as $\exp(-i\phi)\mathbb{1}$. (Circuit diagram omitted.)
#### 6.4.2 Molecular designing simulation
Being able to study the dynamics of molecules and their time evolution allows
scientists to design new molecules which can be used as products in the market
or as treatments for certain diseases. The last two years of the Covid-19
pandemic highlighted the importance of speeding up these molecular design
processes. Such problems can be approached in two ways. The first approach
uses the Born-Oppenheimer approximation. Alternatively, the dynamics of
quantum molecular systems can be expressed as a simple product of
time-dependent electronic and nuclear wave functions [286]. Simulation using
these methods carries a high computational cost; although the cost can be
lowered by making certain approximations, this often increases the errors
[287, 288].
Quantum dynamics is relevant to the study of non-equilibrium processes on
potential energy surfaces, the dynamics of molecular and solid-state systems
with coupled electron and nuclear dynamics, and optimal quantum control
theory. Quantum optimal control theory, the theory of controlling the dynamics
of quantum systems using external lasers, is of particular interest; its
applications are immense and growing rapidly [289]. The theory has been
experimentally verified with bond dissociation experiments [290],
isomerisations [291] and molecular fragmentation [292].
This field has shown vast growth in recent years, although current QC
algorithms are often used for demonstration purposes only, for which only
simple molecules are considered. This is because current state-of-the-art
quantum computers are limited in terms of qubits. This confines the
algorithms to BO approximations and does not allow the inclusion of
non-adiabatic effects [293].
In this field of work, the most widely used class of quantum algorithms is
Variational Quantum Algorithms (VQAs). These algorithms take a hybrid
approach: the system is simulated on a parameterised quantum circuit whose
parameters are optimised classically using some cost function. The authors of
[294] use the Variational Quantum Eigensolver to simulate their molecular
system, specifically studying the rearrangement dynamics of the molecule. One
reason VQE is often used is that the current era of quantum computers is
noisy, and VQAs adapt to this noise if one uses a hardware-efficient ansatz
[295]. In [296] a hybrid method for calculating and designing deuterated
high-efficiency organic light-emitting diodes was proposed. The authors use
machine learning methods to construct the Ising model systems, followed by the
VQE algorithm, to calculate the quantum efficiency of the molecular system and
obtain the optimal deuterated molecule. Apart from VQAs, one can also use the
Digital Quantum Simulation method for molecular dynamics; it has been used for
laser-driven systems. The work [297] describes the use of quantum optimal
control theory to simulate the dynamics of molecules. Although their approach
is also hybrid, it is not variational. The steps involve mapping to qubits and
Hamiltonian simulation, followed by qubit readout. To find the optimal control
field, the readout states of the qubits are passed to a classical computer for
optimisation; the optimisation function can be chosen based on quantum optimal
control theory. This approach can be used for controlled bond stretching,
orienting dipole-dipole coupled (OCS) rotors and preparing the state of a
light-harvesting complex in a controlled manner.
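To make the hybrid loop concrete, here is a minimal self-contained sketch of a VQE iteration using plain numpy and scipy; the two-qubit Hamiltonian and the ansatz are toy choices made for illustration, not the models used in the works cited above:

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-qubit Hamiltonian (hypothetical coefficients)
I = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])
H = 0.5 * np.kron(Z, Z) + 0.3 * np.kron(X, I) + 0.3 * np.kron(I, X)

def ansatz(theta):
    """Hardware-efficient-style ansatz: Ry layer, CNOT, Ry layer."""
    def ry(t):
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]])
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
    state = np.zeros(4); state[0] = 1.0                   # start in |00>
    state = np.kron(ry(theta[0]), ry(theta[1])) @ state
    state = cnot @ state
    state = np.kron(ry(theta[2]), ry(theta[3])) @ state
    return state

def energy(theta):
    psi = ansatz(theta)
    return float(np.real(psi.conj() @ H @ psi))           # cost <psi|H|psi>

x0 = np.random.default_rng(1).uniform(0, np.pi, 4)
res = minimize(energy, x0, method='COBYLA')
print("VQE energy:  ", res.fun)
print("exact ground:", np.linalg.eigvalsh(H).min())
```

Since COBYLA is a local optimiser, several random restarts may be needed in practice.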
#### 6.4.3 Spectral analysis
Spectral analyses refer to the study of spectral properties. These properties
include the spectrum of frequencies of vibrations and related quantities like
eigenvalues and energies. It is well known that matter can never be at rest.
At the quantum level, even the tightly bonded molecules execute oscillations.
More commonly these oscillations are called vibrations. One can study
vibrations in the time-independent and the time-dependent picture. The former
allows us to perform spectral calculations like infrared and Raman
spectroscopy [298] and fluorescence [299], which are important in determining
solar cell performance [300] and industrial dyes [301]. The dynamics of
vibrations have many more applications, including the dynamics of reactions
[302] and electronic transport [303]. They also affect high-frequency,
time-resolved laser experiments [58].
This is a field of great importance. There have been methods to accurately
simulate such systems, but they are limited to few-particle systems. One such
method is the real-space, grid-based method. When the simulation of molecular
systems is done on a classical computer, we are restricted to using a finite
basis for spanning the infinite-dimensional Hilbert space. The full
configuration interaction (FCI) method can provide accurate solutions for
electronic structures but scales up exponentially with the increase in system
size [304]. This field uses the configurational methods discussed in 4.
Table 2: Quantum circuits for second quantisation operators (see Figures 8–12). Operator Name | Symbol | Circuit
---|---|---
Number Operator | $h_{pp}a^{\dagger}_{p}a_{p}$ | circuit 1
Excitation Operator | $h_{pq}(a^{\dagger}_{p}a_{q}+a^{\dagger}_{q}a_{p})$ | circuit 2
Coulomb and exchange operators | $h_{pqqp}a^{\dagger}_{p}a^{\dagger}_{q}a_{q}a_{p}$ | circuit 3
Number-excitation operator | $h_{pqqr}(a^{\dagger}_{p}a^{\dagger}_{q}a_{q}a_{r}+a^{\dagger}_{r}a^{\dagger}_{q}a_{q}a_{p})$ | circuit 4
Operator for double excitation | $h_{pqrs}(a^{\dagger}_{p}a^{\dagger}_{q}a_{r}a_{s}+a^{\dagger}_{s}a^{\dagger}_{r}a_{q}a_{p})$ | circuit 5
#### 6.4.4 Chemical Reaction simulation
The analytical results in quantum chemistry are very important when it comes
to understanding a chemical reaction, as they reveal the steps and mechanisms
involved [305]. Again, classical computers pose the problem of limited
computational resources: solving the Schrödinger equation and simulating its
time evolution requires resources that grow exponentially with the size of the
system, and every additional degree of freedom enlarges the system further.
Quantum computers, as already noted, can efficiently propagate the Schrödinger
equation; they promise to complete the same task in polynomial time [306]
compared to classical computers. The limited number of qubits and noisy
hardware make it difficult to obtain accurate results, but noise mitigation
techniques can be used to reduce the noise. Most quantum-computation
algorithms for chemical reactions are based on the approach of Digital Quantum
Simulation (DQS) [307]. In particular, there have been examples of reactions
which are controlled and driven by external laser fields.
Figure 13: Isomerisation of substituted malonaldehyde, which is non-symmetric.
The authors of [308] performed a digital quantum simulation of this molecule,
considering a double-well potential during isomerisation.
A further DQS-based approach can be found in [309]. Both works study
isomerisation reactions in a double-well potential and show the time evolution
of the reactant and product states. The former implemented the algorithm on an
NMR-based quantum computer, while the latter ran it on IBM's quantum
simulator, ibmq_qasm_simulator. The theoretical approach is the same in both
cases: express the Hamiltonian in second quantisation and then map it to the
qubit system. The former uses the GRAPE technique [310] to implement the
pulses realising the unitary operations, which provides an efficient
implementation on NMR quantum computers.
### 6.5 Bioinformatics
There has been a lot of development of algorithms and mathematical tools to
solve biological problems. Bioinformatics is one of the fields of utmost
importance, as it enables a better lifestyle, helps fight widespread diseases
and much more. It explores complex areas like the human genome, modelling the
behaviour of biomolecules in different environments, calculating the binding
affinity of a ligand, etc. These problems can broadly be categorised into
three subsections: drug discovery, genome sequencing and proteomics.
The research into better computational methods has been tremendous. Some of
these methods are used in sequence alignment [311, 312], computational
genetics [313] and data processing for X-ray diffraction [314]. These problems
can be solved using computational methodologies, but current computational
resources are not enough to simulate large biomolecules. Therefore, attention
has shifted from classical computers to quantum computers as a computational
platform. The following subsections explain three major problems in
bioinformatics and describe the emerging role of quantum computation in each.
#### 6.5.1 Drug Discovery
The discovery of drugs used to happen accidentally, as with penicillin. With
advances in technology, drug discovery has become not a random process but one
involving well-defined steps and procedures. Various chemical compounds are
selected from a database and extensively screened for their potential as
drugs, and their synthesisability is studied. The compound is then optimised
to maximise binding affinity before going through trials, first in animals and
then in humans.
The development of a drug is a very long process consisting of the following
stages: target discovery, then molecular design, followed by pre-clinical
studies and, lastly, clinical trials. This makes creating a marketable drug
expensive and time-consuming. Pharmaceutical research has been using
high-throughput screening technology: discovering drugs involves searching
through target-binding compounds, which is a long and expensive process.
Computational methods for molecular docking can help to identify the molecules
which bind with the target.
The computational accuracy of the docking search depends on the description of
the compound in the library of the software used for the process, often called
the docking engine. The docking methods used by these engines vary from
software to software; common examples are AutoDock Vina [315], MedusaDock
[316] and Glide [317]. These approaches mostly try to match properties of
compounds and receptors that can bind together. The most accurate results are
given by density functional theory. As usual, classical computational methods
are limited to small-sized molecules and receptors [318].
Quantum machine learning algorithms offer a potentially faster and cheaper
alternative to these classical computations. The most promising family of
quantum algorithms for this field is the Quantum Generative Adversarial
Network (QGAN) [319]. The QGAN with Hybrid Generator [320] is one such
algorithm: it consists of a parameterised quantum circuit for finding the
feature vector, followed by a classical DNN that predicts the bond matrix of
the drug in terms of graphs. This method has various variations, one being
Patched-QGAN-HG [321]. Apart from QGAN-based approaches, there are
image-search-based methods involving quantum convolutional neural networks
[322]. These are based on convolutional neural networks in which the filters
are replaced with quantum circuits. Quantum variational autoencoders have also
been tried for drug simulations but have not yet shown better performance than
the classical VAE, a method based on probabilistic search [318].
#### 6.5.2 Genome sequencing
Genome sequencing is the process of determining the order of nucleotides in
DNA. A DNA molecule consists of a sequence of the genetic letters A, C, G and
T; the human genome is made up of over 3 billion such letters. Sequencing a
genome is very important for understanding it: it allows scientists to find
genes much more quickly, since the sequence contains information on where the
genes are. Genome sequences are of great significance to scientists seeking to
understand how genes direct the growth and maintenance of a full organism.
Traditional computational methods use de novo assembly [323] to reconstruct
the original DNA from an unstructured set of reads, without any prerequisite
knowledge such as DNA sequence length or composition. The complexity depends
on the size of the genome; as an example, the human genome takes nearly 50
hours on a supercomputer. This time might be acceptable for research tasks but
is not fit for emergencies. These assembly tools are based on the
Overlap-Layout-Consensus (OLC) algorithm [324], which uses an OLC graph in
which each vertex represents a read, while an overlap between two reads
corresponds to an edge of the graph. One then finds a Hamiltonian path, i.e.,
a path that visits each vertex exactly once, which yields the original genome.
Quantum computers, specifically quantum annealers, can solve this problem.
Since it is a graph problem, it can be cast in a QUBO formulation: the OLC
graph is converted to a QUBO/Ising model [325], which is then embedded in the
quantum annealing system, and the output is a Hamiltonian path. Apart from the
QUBO formulation, the QAOA method is used for DNA sequencing to accelerate de
novo sequencing.
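As a rough illustration of how an OLC graph becomes a QUBO, the sketch below builds the standard position-based Hamiltonian-path encoding for a toy four-read overlap graph, with brute force standing in for the annealer (the graph, penalty weights and sizes are all hypothetical):

```python
import numpy as np
from itertools import product

# Toy directed overlap (OLC) graph on 4 reads: (u, v) means read u overlaps v
N = 4
edges = {(0, 1), (1, 2), (2, 3), (0, 2)}
A, B = 2.0, 1.0     # penalty weights: one-hot constraints / non-edge steps

def idx(v, t):      # variable x[v, t]: read v sits at position t of the path
    return v * N + t

Q = np.zeros((N * N, N * N))
for v in range(N):
    for t in range(N):
        Q[idx(v, t), idx(v, t)] -= 2 * A            # linear parts of (sum-1)^2
        for t2 in range(t + 1, N):                  # read v in two positions
            Q[idx(v, t), idx(v, t2)] += 2 * A
        for v2 in range(v + 1, N):                  # two reads in one position
            Q[idx(v, t), idx(v2, t)] += 2 * A
for v, w in product(range(N), repeat=2):            # penalise consecutive reads
    if v != w and (v, w) not in edges:              # that do not overlap
        for t in range(N - 1):
            Q[idx(v, t), idx(w, t + 1)] += B

# Brute force stands in for the annealer (feasible only at toy sizes)
best = min(product([0, 1], repeat=N * N),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print([v for t in range(N) for v in range(N) if best[idx(v, t)]])  # [0, 1, 2, 3]
```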
#### 6.5.3 Proteomics
Proteomics is a field that has started merging with quantum technologies. It
studies the electronic structure of the proteins in a given cellular system of
an organism; the proteome is defined as the group of proteins present in the
organism. This allows scientists to study properties of proteins such as
energy levels, dipole moments, amino acid charges, electric potentials and a
plethora of other quantities. The field emerged after the Human Genome Project
(HGP) was completed in 2001 [326]. HGP involved the identification of more
than 30,000 genes in humans, which gave way to the study of the proteins
expressed by these genes. The problems in this field include the
characterisation of protein structures, inter-protein interaction (the
interactome) and phosphorylation (phospho-proteomics). Protein folding is one
problem which comes up after the identification of proteins: one needs to
study the proteins to uncover how they are encoded in genes. Classical protein
folding algorithms can sample only a small conformation space. Many quantum
algorithms have been proposed to solve this problem. The paper [327] proposes
a Hamiltonian and a variational quantum algorithm for folding a polymer chain
with N monomers on a lattice, demonstrated for the 10-amino-acid peptide
angiotensin on 22 qubits. Gate-based algorithms have also been proposed, as in
[328]. This field has great potential for growth, with quantum computers
offering the exploration of a large conformational space for protein folding.
## 7 Error Mitigation
Near term quantum computers are not fault-tolerant. The two most significant
hurdles to scalable universal quantum computers are error sensitivity and
noise. Errors can arise in each quantum computation step, making it difficult
for efficient digital quantum simulations. The errors can be broadly
classified as (i) State preparation errors, (ii) Gate errors and (iii)
Measurement or Readout errors. The gate errors are further classified into
Coherent and Incoherent errors. The coherent errors preserve the purity of the
state. It typically occurs due to miscalibration in the control parameters.
Now, one can understand incoherent errors in two ways: either as coherent
errors with randomly varying control parameters, or as operations that
entangle the qubit with the environment. State Preparation and
to gate errors, SPAM errors occur only at the beginning or end of the circuit
and do not accumulate with increasing circuit depth.
There are two proposals for achieving Fault-tolerant quantum computation. The
first method uses non-abelian quasiparticles called anyons in the topological
matter to perform error-free quantum computation (Topological Quantum
Computation) [329]. Another approach for fault-tolerant quantum computation is
using Quantum Error-Correcting (QEC) codes [330]. One can use these codes to
detect and remove gate errors during computation. While the first proposal is
still in its infancy, the second approach requires computational resources
unattainable in near-term devices. For instance, using Surface code, a
ubiquitous QEC code, one needs millions of physical qubits to perform the
fault-tolerant computation of Shor’s algorithm [331].
Readout errors account for approximately $15\%$ of the error in
(superconducting-qubit-based) quantum computation, so mitigating them is
important. A straightforward approach is the operator rescaling method, which
uses the documented readout errors to correct the results via post-processing;
however, it cannot mitigate correlated errors in the computation. Another
approach to minimising readout errors is the calibration matrix method.
_Calibration Matrix Method_ : In this method, before each time evolution
experiment, we perform a calibration experiment to characterize the device for
each of the $2^{N}$ basis states. We organize the results in a $2^{N}\times
2^{N}$ matrix: each entry $P_{ij}$ represents the probability of the system
being prepared in state $i$ and measured in state $j$. By applying the inverse
of this matrix to the noisy results, we obtain results with mitigated
measurement errors. While applying this method, we need to make two crucial
assumptions: 1) the readout error is caused by classical noise, and 2) the
noise is independent of the quantum gates applied to the system. A recent work
[332] shows that readout errors in quantum systems based on superconducting
qubits can be effectively explained using simple classical noise models.
Further, to prevent the exponential scaling of the calibration matrix with
system size, we assume the error due to noise is local and correlated only
within a subset of qubits [30, 333]. The error model is then called a tensored
error model, and the calibration matrix becomes a tensor product of $2\times
2$ noise matrices. Sometimes, due to strong noise, the inverse of the
calibration matrix is not well defined; in such a scenario, we use the
Moore-Penrose pseudo-inverse of the calibration matrix. Tensored error models
do not address errors due to cross-talk between qubits during readout;
therefore, a measurement error mitigation scheme addressing cross-talk errors
was recently proposed [334].
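A minimal numpy illustration of the tensored calibration-matrix correction (the readout error rates and the column convention below are assumed values chosen for the sketch):

```python
import numpy as np

# Single-qubit calibration matrix: A1[i, j] = P(measure i | prepared j)
eps0, eps1 = 0.02, 0.05        # assumed readout error rates
A1 = np.array([[1 - eps0, eps1],
               [eps0, 1 - eps1]])

# Tensored error model for two qubits: the calibration matrix factorises
A = np.kron(A1, A1)

p_true = np.array([0.5, 0.0, 0.0, 0.5])   # ideal Bell-state populations
p_noisy = A @ p_true                       # what the device would report

# Mitigation: apply the Moore-Penrose pseudo-inverse of the calibration matrix
p_mitigated = np.linalg.pinv(A) @ p_noisy
print(np.round(p_mitigated, 6))            # recovers [0.5, 0, 0, 0.5]
```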
Next, let’s move on to the mitigation of gate errors. As mentioned earlier,
there are Coherent and Incoherent gate errors. Incoherent errors are usually
modelled as depolarizing noise. There exist methods to mitigate depolarizing
noise [335, 336]. Coherent errors are more damaging than incoherent ones. But
in [337] it was shown that one could convert coherent errors to incoherent
errors through randomized compiling. Thus in principle, the coherent errors
also can be mitigated. But there are other approaches to mitigating gate
errors, including the popular one called Zero Noise Extrapolation (ZNE) [338,
339]. We will discuss some of the schemes to reduce the gate errors in the
following section.
_Zero Noise Extrapolation_ : ZNE consists of two steps: noise scaling and
extrapolation. In noise scaling, we intentionally scale up the noise level of
the quantum circuit. This can be done in two ways. The first approach is
called time scaling or pulse stretching: we stretch the control pulses and
thereby increase the noise in the circuit. The second approach is called
unitary folding: here we map the unitary operation $U$ to
$U(U^{\dagger}U)^{n}$, where $n$ is an integer, which increases the circuit
depth and thereby scales the noise. Unitary folding can be applied globally
(circuit folding) or locally (gate folding) [340]. Using noise scaling, we
calculate the expectation value of the observable of interest at different
noise levels. Once we have the regression between the expectation value and
the noise, we evaluate the expectation value in the zero-noise limit through
extrapolation. The model used for the extrapolation depends on the assumed
noise model. Polynomial extrapolation is generally used in the weak-noise
limit if the number of data points $m$ is at least $d+1$, where $d$ is the
degree of the polynomial. Of the two common variants of polynomial
extrapolation, linear extrapolation is used when $d=1$ and Richardson
extrapolation when $d=m-1$ [338]. Polynomial extrapolation is inefficient when
there is a low error rate and a large number of noisy gates; in such cases, we
resort to other extrapolation methods such as poly-exponential extrapolation
[340] and exponential extrapolation [331].
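A toy numpy sketch of unitary-folding ZNE with polynomial extrapolation; the exponential decay of the expectation value with the scale factor is an assumed noise model used only for illustration:

```python
import numpy as np

# Assumed noise model: the measured expectation of a +/-1 observable decays
# exponentially in the noise scale factor lam (ideal value at lam = 0).
E_ideal, decay = 1.0, 0.04
rng = np.random.default_rng(2)

def measured_expectation(lam, shots=100_000):
    """Shot-noise-limited estimate of the expectation at noise scale lam."""
    e = E_ideal * np.exp(-decay * lam)
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[(1 + e) / 2, (1 - e) / 2])
    return outcomes.mean()

# Unitary folding U -> U (U^dag U)^n gives odd scale factors lam = 2n + 1
lams = np.array([1, 3, 5, 7])
Es = np.array([measured_expectation(l) for l in lams])

# Weak-noise polynomial extrapolation (degree 2) to the zero-noise limit
coeffs = np.polyfit(lams, Es, deg=2)
print("zero-noise estimate:", np.polyval(coeffs, 0.0))
print("unmitigated (lam=1):", Es[0])
```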
_Probabilistic Error Cancellation_ : Probabilistic Error Cancellation (PEC)
[338] works based on two ideas. The first is the quasi-probability
representation of the ideal gates: we represent an ideal gate as a linear
combination of noisy quantum gates [341]. The real coefficients in such a
representation form a quasi-probability distribution, i.e., the coefficients
sum to one but, unlike standard probabilities, may take negative values. Using
the quasi-probability representation, any observable expressed in terms of an
ideal gate set can be translated into a noisy gate set representation. One can
directly measure the expectation values of noisy gates on the hardware and,
using them, derive the ideal expectation value of any observable.
Unfortunately, this strategy demands the execution of a number of circuits
that rises exponentially with circuit depth and is often impractical. Thus we
use the second idea: probabilistically sampling from the quasi-probability
representation. The Monte Carlo average that follows gives an approximate
expectation value of the observable.
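A minimal single-qubit sketch of PEC, assuming a depolarizing noise model whose inverse-channel quasi-probabilities can be written in closed form (the error rate and observable are toy choices):

```python
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
paulis = [I, X, Y, Z]
p = 0.05                                    # assumed depolarizing error rate

def depolarize(rho):
    """Single-qubit depolarizing channel with rate p."""
    return (1 - 3 * p / 4) * rho + (p / 4) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

# Quasi-probability representation of the inverse channel:
# D^{-1} = a*Id + b*(X.X + Y.Y + Z.Z), fixed by a - b = 1/(1-p), a + 3b = 1.
b = -p / (4 * (1 - p))
a = 1 - 3 * b
quasi = np.array([a, b, b, b])              # sums to 1, but b < 0
gamma = np.abs(quasi).sum()                 # sampling overhead
probs = np.abs(quasi) / gamma

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
obs = Z

# Tr(obs * P_k D(rho0) P_k) for each candidate correction Pauli P_k
vals = np.array([np.trace(obs @ P @ depolarize(rho0) @ P).real for P in paulis])

rng = np.random.default_rng(0)
ks = rng.choice(4, size=200_000, p=probs)   # Monte Carlo sampling of corrections
estimate = np.mean(np.sign(quasi[ks]) * gamma * vals[ks])
print("noisy <Z>:", vals[0])                # (1 - p) = 0.95
print("PEC   <Z>:", estimate)               # ~ 1.0
```

Note the sampling overhead $\gamma>1$: the estimator variance grows with $\gamma$, which is the exponential cost mentioned above.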
_Other methods for error mitigation_ : Methods like PEC and ZNE discussed
above require complete knowledge of the underlying noise model to be
efficient. In most cases, experimentalists only have imperfect knowledge of
the noise model. Therefore, people are working on learning-based QEM
techniques using ZNE and PEC to suppress errors via an automatic learning
process.
Examples of such methods include _Clifford Data Regression_ [342], and
_Learning-based quasi-probability_ method [343]. In addition, there is also
another approach based on Gate Set Tomography for error mitigation without
being noise aware [331]. Apart from the popular ones, there are other methods
for error mitigation. Examples of such methods include _Dynamic Decoupling_
[344], _Quantum Subspace Expansion_ [36, 37], _Stochastic error mitigation_
[48] and so on. Most of the methods we discussed do not use ancilla qubits,
but another class of error mitigation methods does. One ubiquitous example is
_Stabilizer-based (Symmetry verification) error mitigation_, which uses
ancillas to measure conserved quantities of the problem Hamiltonian, such as
spin or parity. Any error changes the conserved quantities, which is flagged
by the ancilla measurement
state preparation methods [345, 346]. Another error mitigation method that
utilizes ancilla qubit is the recently proposed _Virtual Distillation_ [347,
348]. It suppresses noise by simultaneously preparing numerous copies of a
noisy state, entangling and measuring them to virtually prepare a more pure
state than the original noisy one. Virtual distillation is quite promising
with its exponential suppression of error rate.
Another class of errors that we haven’t discussed yet is the Algorithmic
errors. Unlike the others, these errors are not of physical origin. A
ubiquitous example is the trotterization error arising in Hamiltonian
simulation. The prevalent method for mitigating trotterization error is the
ZNE. We perform the noise scaling using small trotter steps and then apply
extrapolation. Recently, another approach that exploits symmetry of the system
to mitigate trotterization error was proposed [349]. It is a symmetry
protection technique that causes destructive interference of the errors
arising in each trotter step. An extensive introduction to algorithmic and
other error mitigation techniques is provided in [350].
## 8 Software tool sets
Quantum computing is a field which merges many disciplines. At its current
age, it has evolved to the point of entering industry. Many quantum computing
start-ups have appeared in recent years, including Xanadu, IonQ, Zapata and
PASQAL, just to name a few. These start-ups are now collaborating with bigger
organisations, providing them with quantum solutions to existing problems.
Over the years, cloud solutions have become very popular in this field: many
companies have been building software which allows one to run problems on real
quantum hardware without the need to learn quantum computing. This section
lists the software and online platforms for quantum simulation encountered
during the survey; their details are given in Table 3.
Table 3: Resources for Numerical and Quantum Simulations. Software Package | Domain | Tasks | Noteworthy attributes
---|---|---|---
QuASeR [351] | Bioinformatics | DNA Sequencing | One can perform DNA sequencing using the de-novo assembly on gate based and quantum annealers. Uses TSP, QUBO, Hamiltonian methods and QAOA algorithms
InQuanto [352, 353] | Chemistry | Ground and Excited states,Spectroscopy, Molecular Geometry,Transition Pathways, Reaction Pathways, Ligand Protein Binding, Molecular Forces | Mostly uses VQE methods and its variations like ADAPT-VQE, Penalty VQE, VQ Deflation, Quantum Subspace Expansion and Iterative Qubit-excitation Based VQE for computation of the tasks. The packages also comes with Error Mitigation techniques like PMSV and SPAM
MQS [353] | Chemistry | Computation of Solubility, Viscosity, Partition coefficient values, Phase equilibria calculations for vapour-liquid-, liquid-liquid and solid-liquid-equilibria. | Maps the models of quantum chemistry models (DFT, PMx,COSMO-RS/SAC, GNFx-xTB) to Quantum computer hardware through cloud based methods. Allows submitting calculations which are accessed and pipelined to further steps. Has applications for process design, product design and Material Design
OpenFermion [354] | Chemistry & Condensed Matter | Computation of Trotter error operators, symbolic Fourier transformation, preparing fermionic Gaussian states, routines for generating Hamiltonians of the Hubbard model, the homogeneous electron gas (jellium), general plane wave discretizations, and d-wave models of superconductivity, and a wide range of data structures important in quantum chemistry | Everything from efficient data structures for encoding fermionic operators to fermionic circuit primitives for use on quantum devices is included in the package.
Fermionic.jl | Chemistry & Condensed Matter | Fermionic operators can be constructed either in the full Fock space or in the fixed-particle-number subspace, and can be used to perform fermionic quantum computation. Compute average particle number, one-body matrix entropies, partially traced systems, majorization relations, m-body density matrices and more. | Julia toolkit for fermionic simulations and fermionic quantum computation
MISTIQS [141] | Condensed Matter | Translation of circuits into executable circuit objects for IBM, Rigetti, and Google quantum devices, domain-specific IBM and Rigetti compilers developed for TFIM simulations, support for user-defined time dependence functions for external fields, full XYZ model support in Hamiltonian constructions. | A full-stack, cross-platform software for creating, constructing, and running quantum circuits for simulating quantum many-body dynamics of Heisenberg Hamiltonians-governed systems.
QuSpin [355, 356] | Condensed Matter | Can Implement Exact diagonalisation, Lanczos Algorithm, Floquet Hamiltonian simulation of a wide range of many-body system. Also have parallel computing capabilities | An open-source Python package that supports the use of various (user-defined) symmetries in one and higher dimensions, as well as (imaginary) time evolution following a user-specified driving protocol, for exact diagonalization and quantum dynamics of arbitrary boson, fermion, and spin many-body systems.
Kwant [357] | Condensed Matter | Calculation of transport parameters (conductance, noise, scattering matrix), dispersion relations, modes, wave functions, different Green’s functions, and out-of-equilibrium local values, other computations involving tight-binding Hamiltonians | Kwant is a Python package for computing numerical quantum transport. It provide a user-friendly, ubiquitous, and high-performance toolkit for simulating physical systems of any dimensionality and geometry that can be characterised by a tight-binding model.
ArQTIC [358] | Condensed Matter | Dynamic Simulation, QITE Simulation, can simulate materials that can be modeled by any Hamiltonian derived from a generic, one-dimensional, time-dependent Heisenberg Hamiltonian | An open-source, full-stack software package built for the simulations of materials on quantum computers
Quantavo [359] | Quantum Optics | A framework which can declare, manipulate and characterize quantum states of light (finite number of modes, and finite dimensional), and implement linear optics circuits such as Beam Splitters (BS), Phase Shifters (PS), arbitrary unitary transformations of the modes etc. | A Maple toolbox for linear optics and quantum information in Fock space
QuantumOptics.jl [360] | Open quantum systems | numerical simulation of the dynamics of OQS, finding the steady-state of OQS & time correlation functions, Optimizes processor usage and memory consumption | A Julia framework for simulating open quantum systems
HOQST: Hamiltonian Open Quantum System Toolkit [361] | Open quantum systems | simulating the dynamics of OQS, supports various master equations, as well as stochastic Hamiltonians | A Julia toolkit for simulating the open quantum system dynamics
Mitiq [362] | Error Mitigation | Zero-noise extrapolation, Probabilistic error cancellation, and Clifford data regression | Python package for error mitigation on noisy quantum computers
QuaEC [363] | Error Correction | Support for manipulating Pauli and Clifford operators, as well as binary symplectic representations and automated analysis of error-correcting protocols based on stabilizer codes | Python library for working with quantum error correction and fault-tolerance
CHP: CNOT-Hadamard-Phase [364] | Error Correction | Construct quantum error-correction designs and debug them. Numerically study massive, highly entangled quantum systems. Generate random stabilizer quantum circuit, Shor 9-qubit quantum error-correcting code | High-performance simulator of stabilizer circuits (Quantum Error Correction)
QuaSiMo [365] | Hybrid quantum-classical algorithms | Dynamical simulation, VQE, Symmetry reduction, Fermion qubit mapping, QITE, QAOA | A composable design scheme for the development of hybrid quantum/classical algorithms and workflows for applications of quantum simulation
QuEST [366] | - | Many functions for simulating decoherence, Calculating density inner product, Hilbert Schmidt distance, Purity, Fidelity, many quantities from Density matrix | simulator of quantum circuits, state-vectors and density matrices.
qsims | - | qsims represents the spatial wavefunction of a particle as a discretized wavefunction on a grid; internal states of the particle can be represented using multiple grids; potentials and couplings between the internal states can be specified, and potentials can be position- and state-dependent. | A tool for studying quantum computing using optical lattices; a general-purpose quantum simulation software package capable of simulating the dynamics of systems with a wide range of Hamiltonians; qsims is not limited to optical lattices and could be adapted for use in many other physical systems.
## 9 Concluding Remarks
In this paper, we have covered a handful of areas, out of many potential
domains, which could show quantum advantage in quantum simulation in the near
future. Today, real hardware implementations are limited to elementary quantum
systems and processes, owing to the difficulty of realising decoherence-free
long circuit run times and to inevitable gate errors. But every day brings new
algorithms and techniques that provide the scientific community with optimised
methods for realising quantum simulation, which suggests that realising the
advantage of quantum simulation on quantum computers is not far off. The
realisation of Hamiltonian simulation has also extended to fundamental
physics, such as simulating gauge theories [367, 368], problems in high-energy
physics [369, 370, 371] and quantum sensing. It is only a matter of time
before quantum computers solve real-world problems through quantum simulation.
## 10 Acknowledgement
We are thankful to our colleagues and collaborators who contributed their
expertise, feedback, and valuable insights during the preparation of this
paper.
This work was made possible by the collective effort and unwavering commitment
of our team at Qulabs, and we are immensely grateful for their collaboration
and support.
## References
* [1] Francesco Tacchino, Alessandro Chiesa, Stefano Carretta, and Dario Gerace. Quantum computers as universal quantum simulators: state-of-the-art and perspectives. Advanced Quantum Technologies, 3(3):1900052, 2020.
* [2] Richard P Feynman. Simulating physics with computers. In Feynman and computation, pages 133–153. CRC Press, 2018.
* [3] Iulia M Georgescu, Sahel Ashhab, and Franco Nori. Quantum simulation. Reviews of Modern Physics, 86(1):153, 2014.
* [4] Francesco Tacchino. Digital quantum simulations and machine learning on near-term quantum processors. PhD thesis, Università di Pavia, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia, Italy.
* [5] Michael A Nielsen and Isaac Chuang. Quantum computation and quantum information, 2002.
* [6] Laurent Sanchez-Palencia. Quantum simulation: From basic principles to applications. arXiv preprint arXiv:1812.01110, 2018.
* [7] Mohan Sarovar, Jun Zhang, and Lishan Zeng. Reliability of analog quantum simulation. EPJ quantum technology, 4(1):1–29, 2017.
* [8] Lucas Lamata, Adrian Parra-Rodriguez, Mikel Sanz, and Enrique Solano. Digital-analog quantum simulations with superconducting circuits. Advances in Physics: X, 3(1):1457981, 2018.
* [9] Lucas Lamata, Antonio Mezzacapo, Jorge Casanova, and Enrique Solano. Efficient quantum simulation of fermionic and bosonic models in trapped ions. EPJ Quantum Technology, 1(1):9, December 2014.
* [10] Sergey B Bravyi and Alexei Yu Kitaev. Fermionic quantum computation. Annals of Physics, 298(1):210–226, 2002.
* [11] Pascual Jordan and Eugene Paul Wigner. Über das paulische äquivalenzverbot. In The Collected Works of Eugene Paul Wigner, pages 109–129. Springer, 1993.
* [12] Andrew Tranter, Sarah Sofia, Jake Seeley, Michael Kaicher, Jarrod McClean, Ryan Babbush, Peter V Coveney, Florian Mintert, Frank Wilhelm, and Peter J Love. The Bravyi–Kitaev transformation: Properties and applications. International Journal of Quantum Chemistry, 115(19):1431–1441, 2015.
* [13] Jacob T Seeley, Martin J Richard, and Peter J Love. The bravyi-kitaev transformation for quantum computation of electronic structure. The Journal of chemical physics, 137(22):224109, 2012.
* [14] A. Yu Kitaev. Quantum measurements and the Abelian Stabilizer Problem. arXiv:quant-ph/9511026, November 1995. arXiv: quant-ph/9511026.
* [15] Quantum Phase Estimation. IBM Quantum.
* [16] P M Q Cruz, G Catarina, R Gautier, and J Fernández-Rossier. Optimizing quantum phase estimation for the simulation of Hamiltonian eigenstates. Quantum Science and Technology, 5(4):044005, August 2020.
* [17] Zhaokai Li, Man-Hong Yung, Hongwei Chen, Dawei Lu, James D. Whitfield, Xinhua Peng, Alán Aspuru-Guzik, and Jiangfeng Du. Solving Quantum Ground-State Problems with Nuclear Magnetic Resonance. Scientific Reports, 1(1):88, December 2011.
* [18] Alán Aspuru-Guzik, Anthony D. Dutoi, Peter J. Love, and Martin Head-Gordon. Simulated Quantum Computation of Molecular Energies. Science, 309(5741):1704–1707, September 2005. arXiv: quant-ph/0604193.
* [19] Kianna Wan and Isaac H. Kim. Fast digital methods for adiabatic state preparation. arXiv:2004.04164 [quant-ph], March 2022. arXiv: 2004.04164.
* [20] Dave Wecker, Matthew B. Hastings, Nathan Wiebe, Bryan K. Clark, Chetan Nayak, and Matthias Troyer. Solving strongly correlated electron models on a quantum computer. Physical Review A, 92(6):062318, December 2015.
* [21] Akhil Francis, Ephrata Zelleke, Ziyue Zhang, Alexander F. Kemper, and James K. Freericks. Determining Ground-State Phase Diagrams on Quantum Computers via a Generalized Application of Adiabatic State Preparation. Symmetry, 14(4):809, April 2022.
* [22] P Brouwer. Theory of Many-Particle Systems. Cornell University, 2005.
* [23] Mario Motta, Chong Sun, Adrian T. K. Tan, Matthew J. O’Rourke, Erika Ye, Austin J. Minnich, Fernando G. S. L. Brandão, and Garnet Kin-Lic Chan. Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution. Nature Physics, 16(2):205–210, February 2020.
* [24] Kübra Yeter-Aydeniz, George Siopsis, and Raphael C. Pooser. Scattering in the Ising Model Using Quantum Lanczos Algorithm. New Journal of Physics, 23(4):043033, April 2021. arXiv: 2008.08763.
* [25] Niladri Gomes, Feng Zhang, Noah F. Berthusen, Cai-Zhuang Wang, Kai-Ming Ho, Peter P. Orth, and Yongxin Yao. Efficient Step-Merged Quantum Imaginary Time Evolution Algorithm for Quantum Chemistry. Journal of Chemical Theory and Computation, 16(10):6256–6266, October 2020.
* [26] M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles. Variational Quantum Algorithms. Nature Reviews Physics, 3(9):625–644, September 2021. arXiv: 2012.09265.
* [27] Sam McArdle, Tyson Jones, Suguru Endo, Ying Li, Simon Benjamin, and Xiao Yuan. Variational ansatz-based quantum simulation of imaginary time evolution. npj Quantum Information, 5(1):75, December 2019. arXiv: 1804.03023.
* [28] Baptiste Anselme Martin, Pascal Simon, and Marko J. Rančić. Simulating strongly interacting Hubbard chains with the Variational Hamiltonian Ansatz on a quantum computer. arXiv:2111.11996 [cond-mat, physics:quant-ph], February 2022. arXiv: 2111.11996.
* [29] D. Wecker, M. B. Hastings, and M. Troyer. Towards Practical Quantum Variational Algorithms. Physical Review A, 92(4):042303, October 2015. arXiv: 1507.08969.
* [30] Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M. Chow, and Jay M. Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, 549(7671):242–246, September 2017.
* [31] Lucas Slattery and Bryan K. Clark. Quantum Circuits For Two-Dimensional Isometric Tensor Networks. arXiv:2108.02792 [cond-mat, physics:quant-ph], August 2021. arXiv: 2108.02792.
* [32] Jin-Guo Liu, Yi-Hong Zhang, Yuan Wan, and Lei Wang. Variational quantum eigensolver with fewer qubits. Physical Review Research, 1(2):023025, September 2019.
* [33] Oscar Higgott, Daochen Wang, and Stephen Brierley. Variational Quantum Computation of Excited States. Quantum, 3:156, July 2019. arXiv: 1805.08138.
* [34] Tyson Jones, Suguru Endo, Sam McArdle, Xiao Yuan, and Simon C. Benjamin. Variational quantum algorithms for discovering Hamiltonian spectra. Physical Review A, 99(6):062304, June 2019.
* [35] Nahum Sá, Ivan S. Oliveira, and Itzhak Roditi. Towards solving the BCS Hamiltonian gap in Near-Term Quantum Computers. arXiv:2105.14936 [quant-ph], February 2022. arXiv: 2105.14936.
* [36] Jarrod R. McClean, Mollie E. Kimchi-Schwartz, Jonathan Carter, and Wibe A. de Jong. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states. Physical Review A, 95(4):042308, April 2017.
* [37] J. I. Colless, V.V. Ramasesh, D. Dahlen, M.S. Blok, M.E. Kimchi-Schwartz, J.R. McClean, J. Carter, W.A. de Jong, and I. Siddiqi. Computation of molecular spectra on a quantum processor with an error-resilient algorithm. Phys. Rev. X, 8:011021, Feb 2018.
* [38] Pauline J. Ollitrault, Abhinav Kandala, Chun-Fu Chen, Panagiotis Kl. Barkoutsos, Antonio Mezzacapo, Marco Pistoia, Sarah Sheldon, Stefan Woerner, Jay M. Gambetta, and Ivano Tavernelli. Quantum equation of motion for computing molecular excitation energies on a noisy quantum processor. Physical Review Research, 2(4):043140, October 2020.
* [39] Barbara M. Terhal and David P. DiVincenzo. Problem of equilibration and the computation of correlation functions on a quantum computer. Physical Review A, 61(2):022301, January 2000.
* [40] Oles Shtanko and Ramis Movassagh. Algorithms for Gibbs state preparation on noiseless and noisy random quantum circuits. arXiv:2112.14688 [cond-mat, physics:math-ph, physics:quant-ph], December 2021. arXiv: 2112.14688.
* [41] David Poulin and Pawel Wocjan. Sampling from the Thermal Quantum Gibbs State and Evaluating Partition Functions with a Quantum Computer. Physical Review Letters, 103(22):220502, November 2009.
* [42] Arnau Riera, Christian Gogolin, and Jens Eisert. Thermalization in Nature and on a Quantum Computer. Physical Review Letters, 108(8):080402, February 2012.
* [43] K. Temme, T. J. Osborne, K. G. Vollbrecht, D. Poulin, and F. Verstraete. Quantum Metropolis sampling. Nature, 471(7336):87–90, March 2011.
* [44] Man-Hong Yung and Alán Aspuru-Guzik. A quantum–quantum Metropolis algorithm. Proceedings of the National Academy of Sciences, 109(3):754–759, January 2012.
* [45] Pierre-Luc Dallaire-Demers and Frank K. Wilhelm. Method to efficiently simulate the thermodynamic properties of the Fermi-Hubbard model on a quantum computer. Physical Review A, 93(3):032303, March 2016.
* [46] J. Cohn, F. Yang, K. Najafi, B. Jones, and J. K. Freericks. Minimal effective Gibbs ansatz: A simple protocol for extracting an accurate thermal representation for quantum simulation. Physical Review A, 102(2):022622, August 2020.
* [47] Anirban Narayan Chowdhury and Rolando D. Somma. Quantum algorithms for Gibbs sampling and hitting-time estimation. arXiv:1603.02940 [quant-ph], March 2016. arXiv: 1603.02940.
* [48] Shi-Ning Sun, Mario Motta, Ruslan N. Tazhigulov, Adrian T. K. Tan, Garnet Kin-Lic Chan, and Austin J. Minnich. Quantum Computation of Finite-Temperature Static and Dynamical Properties of Spin Systems Using Quantum Imaginary Time Evolution. PRX Quantum, 2(1):010317, February 2021. arXiv: 2009.03542.
* [49] Guillaume Verdon, Michael Broughton, and Jacob Biamonte. A quantum algorithm to train neural networks using low-depth circuits. arXiv:1712.05304 [cond-mat, physics:quant-ph], August 2019. arXiv: 1712.05304.
* [50] Jingxiang Wu and Timothy H. Hsieh. Variational Thermal Quantum Simulation via Thermofield Double States. Physical Review Letters, 123(22):220502, November 2019.
* [51] William Cottrell, Ben Freivogel, Diego M. Hofman, and Sagar F. Lokhande. How to Build the Thermofield Double State. Journal of High Energy Physics, 2019(2):58, February 2019. arXiv: 1811.11528.
* [52] John Martyn and Brian Swingle. Product spectrum ansatz and the simplicity of thermal states. Physical Review A, 100(3):032107, September 2019.
* [53] R. Sagastizabal, S. P. Premaratne, B. A. Klaver, M. A. Rol, V. Negîrneac, M. S. Moreira, X. Zou, S. Johri, N. Muthusubramanian, M. Beekman, C. Zachariadis, V. P. Ostroukh, N. Haider, A. Bruno, A. Y. Matsuura, and L. DiCarlo. Variational preparation of finite-temperature states on a quantum computer. npj Quantum Information, 7(1):130, December 2021.
* [54] Xiao Yuan, Suguru Endo, Qi Zhao, Ying Li, and Simon Benjamin. Theory of variational quantum simulation. Quantum, 3:191, October 2019. arXiv: 1812.08767.
* [55] Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1):4812, December 2018.
* [56] Alán Aspuru-Guzik, Anthony D Dutoi, Peter J Love, and Martin Head-Gordon. Simulated quantum computation of molecular energies. Science, 309(5741):1704–1707, 2005.
* [57] Hefeng Wang, Sabre Kais, Alán Aspuru-Guzik, and Mark R Hoffmann. Quantum algorithm for obtaining the energy spectrum of molecular systems. Physical Chemistry Chemical Physics, 10(35):5388–5393, 2008.
* [58] Sam McArdle, Alexander Mayorov, Xiao Shan, Simon Benjamin, and Xiao Yuan. Digital quantum simulation of molecular vibrations. Chemical science, 10(22):5725–5735, 2019.
* [59] Wenda Zhou. Review on Quantum Walk Algorithm. Journal of Physics: Conference Series, 1748(3):032022, January 2021.
* [60] Earl Campbell. A random compiler for fast Hamiltonian simulation. Physical Review Letters, 123(7):070503, August 2019. arXiv: 1811.08017.
* [61] Andrew M. Childs and Nathan Wiebe. Hamiltonian Simulation Using Linear Combinations of Unitary Operations. Quantum Information and Computation, 12(11&12):901–924, 2012. arXiv: 1202.5822.
* [62] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor series. Physical Review Letters, 114(9):090502, March 2015. arXiv: 1412.4687.
* [63] Guang Hao Low and Isaac L. Chuang. Hamiltonian Simulation by Qubitization. Quantum, 3:163, July 2019. arXiv: 1610.06546.
* [64] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian Simulation by Quantum Signal Processing. arXiv:1606.02685 [quant-ph], December 2016. arXiv: 1606.02685.
* [65] J. Hubbard. Electron correlations in narrow energy bands. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 276(1365):238–257, November 1963.
* [66] Junjiro Kanamori. Electron Correlation and Ferromagnetism of Transition Metals. Progress of Theoretical Physics, 30(3):275–289, September 1963.
* [67] Martin C. Gutzwiller. Effect of Correlation on the Ferromagnetism of Transition Metals. Physical Review Letters, 10(5):159–162, March 1963.
* [68] Matthew P. A. Fisher, Peter B. Weichman, G. Grinstein, and Daniel S. Fisher. Boson localization and the superfluid-insulator transition. Physical Review B, 40(1):546–570, July 1989.
* [69] J. Spałek. t-J Model Then and Now: a Personal Perspective from the Pioneering Times. Acta Physica Polonica A, 111(4):409–424, April 2007.
* [70] Tilman Esslinger. Fermi-Hubbard physics with atoms in an optical lattice. Annual Review of Condensed Matter Physics, 1(1):129–152, August 2010. arXiv: 1007.0012.
* [71] R. Micnas, J. Ranninger, and S. Robaszkiewicz. Superconductivity in narrow-band systems with local nonretarded attractive interactions. Reviews of Modern Physics, 62(1):113–171, January 1990.
* [72] P. Nozières and S. Schmitt-Rink. Bose condensation in an attractive fermion gas: From weak to strong coupling superconductivity. Journal of Low Temperature Physics, 59(3-4):195–211, May 1985.
* [73] Markus Greiner, Olaf Mandel, Tilman Esslinger, Theodor W. Hänsch, and Immanuel Bloch. Quantum phase transition from a superfluid to a Mott insulator in a gas of ultracold atoms. Nature, 415(6867):39–44, January 2002.
* [74] Elliott H. Lieb and F. Y. Wu. Absence of Mott Transition in an Exact Solution of the Short-Range, One-Band Model in One Dimension. Physical Review Letters, 20(25):1445–1448, June 1968.
* [75] Elliott H. Lieb and F.Y. Wu. The one-dimensional Hubbard model: a reminiscence. Physica A: Statistical Mechanics and its Applications, 321(1-2):1–27, April 2003.
* [76] M. Boninsegni, N. V. Prokof’ev, and B. V. Svistunov. Worm algorithm and diagrammatic Monte Carlo: A new approach to continuous-space path integral Monte Carlo simulations. Physical Review E, 74(3):036701, September 2006.
* [77] Kris Van Houcke, Evgeny Kozik, N. Prokof’ev, and B. Svistunov. Diagrammatic Monte Carlo. Physics Procedia, 6:95–105, 2010.
* [78] Robert Jördens, Niels Strohmaier, Kenneth Günter, Henning Moritz, and Tilman Esslinger. A Mott insulator of fermionic atoms in an optical lattice. Nature, 455(7210):204–207, September 2008.
* [79] Eugenio Cocchi, Luke A. Miller, Jan H. Drewes, Marco Koschorreck, Daniel Pertot, Ferdinand Brennecke, and Michael Köhl. Equation of State of the Two-Dimensional Hubbard Model. Physical Review Letters, 116(17):175301, April 2016.
* [80] Daniel Greif, Thomas Uehlinger, Gregor Jotzu, Leticia Tarruell, and Tilman Esslinger. Short-range quantum magnetism of ultracold fermions in an optical lattice. Science, 340(6138):1307–1310, June 2013. arXiv: 1212.2634.
* [81] Daniel Greif, Gregor Jotzu, Michael Messer, Rémi Desbuquois, and Tilman Esslinger. Formation and Dynamics of Antiferromagnetic Correlations in Tunable Optical Lattices. Physical Review Letters, 115(26):260401, December 2015.
* [82] Russell A. Hart, Pedro M. Duarte, Tsung-Lin Yang, Xinxing Liu, Thereza Paiva, Ehsan Khatami, Richard T. Scalettar, Nandini Trivedi, David A. Huse, and Randall G. Hulet. Observation of antiferromagnetic correlations in the Hubbard model with ultracold atoms. Nature, 519(7542):211–214, March 2015.
* [83] Timon A. Hilker, Guillaume Salomon, Fabian Grusdt, Ahmed Omran, Martin Boll, Eugene Demler, Immanuel Bloch, and Christian Gross. Revealing hidden antiferromagnetic correlations in doped Hubbard chains via string correlators. Science, 357(6350):484–487, August 2017.
* [84] Ulrich Schneider, Lucia Hackermüller, Jens Philipp Ronzheimer, Sebastian Will, Simon Braun, Thorsten Best, Immanuel Bloch, Eugene Demler, Stephan Mandt, David Rasch, and Achim Rosch. Fermionic transport in a homogeneous Hubbard model: Out-of-equilibrium dynamics with ultracold atoms. Nature Physics, 8(3):213–218, March 2012. arXiv: 1005.3545.
* [85] Markus Greiner, Olaf Mandel, Theodor W. Hänsch, and Immanuel Bloch. Collapse and revival of the matter wave field of a Bose–Einstein condensate. Nature, 419(6902):51–54, September 2002.
* [86] David Chen, Matthew White, Cecilia Borries, and Brian DeMarco. Quantum Quench of an Atomic Mott Insulator. Physical Review Letters, 106(23):235304, June 2011.
* [87] Florian Schäfer, Takeshi Fukuhara, Seiji Sugawa, Yosuke Takasu, and Yoshiro Takahashi. Tools for quantum simulation with ultracold atoms in optical lattices. Nature Reviews Physics, 2(8):411–425, August 2020.
* [88] Daniel S. Abrams and Seth Lloyd. Simulation of Many-Body Fermi Systems on a Universal Quantum Computer. Physical Review Letters, 79(13):2586–2589, September 1997.
* [89] R. Barends, L. Lamata, J. Kelly, L. García-Álvarez, A. G. Fowler, A Megrant, E Jeffrey, T. C. White, D. Sank, J. Y. Mutus, B. Campbell, Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, I.-C. Hoi, C. Neill, P. J. J. O’Malley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, E. Solano, and John M. Martinis. Digital quantum simulation of fermionic models with a superconducting circuit. Nature Communications, 6(1):7654, November 2015.
* [90] Urtzi Las Heras, Laura García-Álvarez, Antonio Mezzacapo, Enrique Solano, and Lucas Lamata. Fermionic models with superconducting circuits. EPJ Quantum Technology, 2(1):8, December 2015.
* [91] R. Somma, G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme. Simulating Physical Phenomena by Quantum Networks. Physical Review A, 65(4):042323, April 2002. arXiv: quant-ph/0108146.
* [92] N. M. Linke, S. Johri, C. Figgatt, K. A. Landsman, A. Y. Matsuura, and C. Monroe. Measuring the Rényi entropy of a two-site Fermi-Hubbard model on a trapped ion quantum computer. Physical Review A, 98(5):052334, November 2018.
* [93] Benedikt Fauseweh and Jian-Xin Zhu. Digital quantum simulation of non-equilibrium quantum many-body systems. Quantum Information Processing, 20(4):138, April 2021.
* [94] Frank Arute, Kunal Arya, Ryan Babbush, et al. Observation of separated dynamics of charge and spin in the Fermi-Hubbard model. arXiv:2010.07965 [quant-ph], October 2020. arXiv: 2010.07965.
* [95] Sabine Tornow, Wolfgang Gehrke, and Udo Helmbrecht. Non-Equilibrium Dynamics of a Dissipative Two-Site Hubbard Model Simulated on IBM Quantum Computers. arXiv:2011.11059 [cond-mat, physics:quant-ph], September 2021. arXiv: 2011.11059.
* [96] A. S. Alexandrov, V. V. Kabanov, and D. K. Ray. From electron to small polaron: An exact cluster solution. Physical Review B, 49(14):9915–9923, April 1994.
* [97] F. Marsiglio. Pairing in the Holstein model in the dilute limit. Physica C: Superconductivity, 244(1-2):21–34, March 1995.
* [98] O. S. Barišić. Holstein light quantum polarons on the one-dimensional lattice. Physical Review B, 73(21):214304, June 2006.
* [99] Aldo H. Romero, David W. Brown, and Katja Lindenberg. Polaron Effective Mass, Band Distortion, and Self-Trapping in the Holstein Molecular Crystal Model. Physical Review B, 59(21):13728–13740, June 1999. arXiv: cond-mat/9809025.
* [100] O. S. Barišić. Calculation of excited polaron states in the Holstein model. Physical Review B, 69(6):064302, February 2004.
* [101] J. Bonča, S. A. Trugman, and I. Batistić. The Holstein Polaron. Physical Review B, 60(3):1633–1642, July 1999. arXiv: cond-mat/9812252.
* [102] Benjamin Cohen-Stead, Owen Bradley, Cole Miles, George Batrouni, Richard Scalettar, and Kipton Barros. Fast and scalable quantum Monte Carlo simulations of electron-phonon models. arXiv:2203.01291 [cond-mat, physics:physics], March 2022. arXiv: 2203.01291.
* [103] P. E. Kornilovitch. Continuous-Time Quantum Monte Carlo Algorithm for the Lattice Polaron. Physical Review Letters, 81(24):5382–5385, December 1998. arXiv: cond-mat/9808155.
* [104] Martin Hohenadler, Hans Gerd Evertz, and Wolfgang von der Linden. Quantum Monte Carlo and variational approaches to the Holstein model. Physical Review B, 69(2):024301, January 2004. arXiv: cond-mat/0305387.
* [105] Alexandru Macridin. Phonons, charge and spin in correlated systems. PhD thesis, University of Groningen, Groningen, 2003. OCLC: 66661894.
* [106] Eric Jeckelmann and Steven R. White. Density matrix renormalization group study of the polaron problem in the Holstein model. Physical Review B, 57(11):6376–6385, March 1998. arXiv: cond-mat/9710058.
* [107] S. Ciuchi, F. de Pasquale, S. Fratini, and D. Feinberg. Dynamical mean-field theory of the small polaron. Physical Review B, 56(8):4494–4512, August 1997.
* [108] J. Casanova, A. Mezzacapo, L. Lamata, and E. Solano. Quantum Simulation of Interacting Fermion Lattice Models in Trapped Ions. Physical Review Letters, 108(19):190502, May 2012. arXiv: 1110.3730.
* [109] J. Casanova, L. Lamata, I. L. Egusquiza, R. Gerritsma, C. F. Roos, J. J. Garcia-Ripoll, and E. Solano. Quantum Simulation of Quantum Field Theories in Trapped Ions. Physical Review Letters, 107(26):260501, December 2011. arXiv: 1107.5233.
* [110] Alexandru Macridin, Panagiotis Spentzouris, James Amundson, and Roni Harnik. Digital quantum computation of fermion-boson interacting systems. Physical Review A, 98(4):042312, October 2018. arXiv: 1805.09928.
* [111] A. Mezzacapo, J. Casanova, L. Lamata, and E. Solano. Digital Quantum Simulation of the Holstein Model in Trapped Ions. Physical Review Letters, 109(20):200501, November 2012. arXiv: 1207.2664.
* [112] Maria Hermanns, Itamar Kimchi, and Johannes Knolle. Physics of the Kitaev model: fractionalization, dynamical correlations, and material connections. Annual Review of Condensed Matter Physics, 9(1):17–33, March 2018. arXiv: 1705.01740.
* [113] Michael Karbach and Gerhard Müller. Introduction to the Bethe ansatz I. arXiv:cond-mat/9809162, September 1998. arXiv: cond-mat/9809162.
* [114] Michael Karbach, Kun Hu, and Gerhard Müller. Introduction to the Bethe ansatz II. Computers in Physics, 12(6):565–573, 1998.
* [115] Saptarshi Mandal and Arun M. Jayannavar. An introduction to Kitaev model-I. arXiv:2006.11549 [cond-mat], June 2020. arXiv: 2006.11549.
* [116] Daniel C Mattis. The Many-Body Problem: An Encyclopedia of Exactly Solved Models in One Dimension (3rd Printing with Revisions and Corrections). World Scientific, March 1993.
* [117] B. Sriram Shastry and Bill Sutherland. Exact ground state of a quantum mechanical antiferromagnet. Physica B+C, 108(1-3):1069–1070, August 1981.
* [118] Ian Affleck, Tom Kennedy, Elliott H. Lieb, and Hal Tasaki. Valence bond ground states in isotropic quantum antiferromagnets. Communications in Mathematical Physics, 115(3):477–528, September 1988.
* [119] Anders W. Sandvik. Computational Studies of Quantum Spin Systems. arXiv:1101.3281 [cond-mat, physics:hep-lat], pages 135–338, 2010. arXiv: 1101.3281.
* [120] Synge Todo and Kiyoshi Kato. Cluster Algorithms for General- S Quantum Spin Systems. Physical Review Letters, 87(4):047203, July 2001.
* [121] Steven R. White and David A. Huse. Numerical renormalization-group study of low-lying eigenstates of the antiferromagnetic S =1 Heisenberg chain. Physical Review B, 48(6):3844–3852, August 1993.
* [122] DMRG of the Heisenberg model.
* [123] Simeng Yan, David A. Huse, and Steven R. White. Spin-Liquid Ground State of the S = 1/2 Kagome Heisenberg Antiferromagnet. Science, 332(6034):1173–1176, June 2011.
* [124] Hong-Chen Jiang, Hong Yao, and Leon Balents. Spin liquid ground state of the spin-$\frac{1}{2}$ square ${J}_{1}$-${J}_{2}$ Heisenberg model. Physical Review B, 86(2):024424, July 2012.
* [125] D. Porras and J. I. Cirac. Effective Quantum Spin Systems with Trapped Ions. Physical Review Letters, 92(20):207901, May 2004.
* [126] E. E. Edwards, S. Korenblit, K. Kim, R. Islam, M.-S. Chang, J. K. Freericks, G.-D. Lin, L.-M. Duan, and C. Monroe. Quantum simulation and phase diagram of the transverse-field Ising model with three atomic spins. Physical Review B, 82(6):060412, August 2010.
* [127] K Kim, S Korenblit, R Islam, E E Edwards, M-S Chang, C Noh, H Carmichael, G-D Lin, L-M Duan, C C Joseph Wang, J K Freericks, and C Monroe. Quantum simulation of the transverse Ising model with trapped ions. New Journal of Physics, 13(10):105003, October 2011.
* [128] A. Friedenauer, H. Schmitz, J. T. Glueckert, D. Porras, and T. Schaetz. Simulating a quantum magnet with trapped ions. Nature Physics, 4(10):757–761, October 2008.
* [129] K. Kim, M.-S. Chang, S. Korenblit, R. Islam, E. E. Edwards, J. K. Freericks, G.-D. Lin, L.-M. Duan, and C. Monroe. Quantum simulation of frustrated Ising spins with trapped ions. Nature, 465(7298):590–593, June 2010.
* [130] Joseph W. Britton, Brian C. Sawyer, Adam C. Keith, C.-C. Joseph Wang, James K. Freericks, Hermann Uys, Michael J. Biercuk, and John J. Bollinger. Engineered two-dimensional Ising interactions in a trapped-ion quantum simulator with hundreds of spins. Nature, 484(7395):489–492, April 2012.
* [131] G. J. Milburn. Simulating nonlinear spin models in an ion trap. arXiv:quant-ph/9908037, August 1999. arXiv: quant-ph/9908037.
* [132] J. J. García-Ripoll, M. A. Martin-Delgado, and J. I. Cirac. Implementation of Spin Hamiltonians in Optical Lattices. Physical Review Letters, 93(25):250405, December 2004.
* [133] Jonathan Simon, Waseem S. Bakr, Ruichao Ma, M. Eric Tai, Philipp M. Preiss, and Markus Greiner. Quantum simulation of antiferromagnetic spin chains in an optical lattice. Nature, 472(7343):307–312, April 2011.
* [134] Gyu-Boong Jo, Ye-Ryoung Lee, Jae-Hoon Choi, Caleb A. Christensen, Tony H. Kim, Joseph H. Thywissen, David E. Pritchard, and Wolfgang Ketterle. Itinerant Ferromagnetism in a Fermi Gas of Ultracold Atoms. Science, 325(5947):1521–1524, September 2009. arXiv: 0907.2888.
* [135] A. Micheli, G. K. Brennen, and P. Zoller. A toolbox for lattice-spin models with polar molecules. Nature Physics, 2(5):341–347, May 2006.
* [136] Jaeyoon Cho, Dimitris G. Angelakis, and Sougato Bose. Simulation of high-spin Heisenberg models in coupled cavities. Physical Review A, 78(6):062338, December 2008.
* [137] Dimitris I. Tsomokos, Sahel Ashhab, and Franco Nori. Using Superconducting Qubit Circuits to Engineer Exotic Lattice Systems. Physical Review A, 82(5):052311, November 2010. arXiv: 1009.2888.
* [138] Matthew Neeley, Markus Ansmann, Radoslaw C. Bialczak, Max Hofheinz, Erik Lucero, Aaron D. O’Connell, Daniel Sank, Haohua Wang, James Wenner, Andrew N. Cleland, Michael R. Geller, and John M. Martinis. Emulation of a Quantum Spin with a Superconducting Phase Qudit. Science, 325(5941):722–725, August 2009.
* [139] Franco Nori. Atomic physics with a circuit. Nature Physics, 4(8):589–590, August 2008.
* [140] Oleksandr Kyriienko and Anders S. Sørensen. Floquet quantum simulation with superconducting qubits. Physical Review Applied, 9(6):064029, June 2018. arXiv: 1703.04827.
* [141] Connor Powers, Lindsay Bassman, Thomas Linker, Ken-ichi Nomura, Sahil Gulania, Rajiv K. Kalia, Aiichiro Nakano, and Priya Vashishta. MISTIQS: An open-source software for performing quantum dynamics simulations on quantum computers. SoftwareX, 14:100696, June 2021. arXiv: 2101.01817.
* [142] Alba Cervera-Lierta. Exact Ising model simulation on a quantum computer. Quantum, 2:114, December 2018. arXiv: 1807.07112.
* [143] U. Las Heras, A. Mezzacapo, L. Lamata, S. Filipp, A. Wallraff, and E. Solano. Digital Quantum Simulation of Spin Systems in Superconducting Circuits. Physical Review Letters, 112(20):200501, May 2014. arXiv: 1311.7626.
* [144] Y. Salathé, M. Mondal, M. Oppliger, J. Heinsoo, P. Kurpiers, A. Potočnik, A. Mezzacapo, U. Las Heras, L. Lamata, E. Solano, S. Filipp, and A. Wallraff. Digital quantum simulation of spin models with circuit quantum electrodynamics. Physical Review X, 5(2):021027, June 2015. arXiv: 1502.06778.
* [145] Tatiana A. Bespalova and Oleksandr Kyriienko. Quantum simulation and ground state preparation for the honeycomb Kitaev model. arXiv:2109.13883 [cond-mat, physics:quant-ph], September 2021. arXiv: 2109.13883.
* [146] Adam Smith, M. S. Kim, Frank Pollmann, and Johannes Knolle. Simulating quantum many-body dynamics on a current digital quantum computer. npj Quantum Information, 5(1):106, December 2019.
* [147] E. Jané, G. Vidal, W. Dür, P. Zoller, and J. I. Cirac. Simulation of quantum dynamics with quantum optical systems. arXiv:quant-ph/0207011, July 2002. arXiv: quant-ph/0207011.
* [148] B. P. Lanyon, C. Hempel, D. Nigg, M. Müller, R. Gerritsma, F. Zähringer, P. Schindler, J. T. Barreiro, M. Rambach, G. Kirchmair, M. Hennrich, P. Zoller, R. Blatt, and C. F. Roos. Universal Digital Quantum Simulation with Trapped Ions. Science, 334(6052):57–61, October 2011.
* [149] Tasio Gonzalez-Raya, Rodrigo Asensio-Perea, Ana Martin, Lucas C. Céleri, Mikel Sanz, Pavel Lougovski, and Eugene F. Dumitrescu. Digital-Analog Quantum Simulations Using the Cross-Resonance Effect. PRX Quantum, 2(2):020328, May 2021.
* [150] Iñigo Arrazola, Julen S. Pedernales, Lucas Lamata, and Enrique Solano. Digital-Analog Quantum Simulation of Spin Models in Trapped Ions. Scientific Reports, 6(1):30534, September 2016.
* [151] G. H. Wannier. Antiferromagnetism. The Triangular Ising Net. Physical Review, 79(2):357–364, July 1950.
* [152] R. Moessner and S. L. Sondhi. Ising models of quantum frustration. Physical Review B, 63(22):224401, May 2001.
* [153] Jingfu Zhang, Man-Hong Yung, Raymond Laflamme, Alán Aspuru-Guzik, and Jonathan Baugh. Digital quantum simulation of the statistical mechanics of a frustrated magnet. Nature Communications, 3(1):880, January 2012.
* [154] Subir Sachdev. Quantum phase transitions. Cambridge University Press, Cambridge ; New York, 1999.
* [155] C. Lanczos. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. Journal of Research of the National Bureau of Standards, 45(4):255, October 1950.
* [156] Ernest R. Davidson. The Iterative Calculation of a Few of the Lowest Eigenvalues and Corresponding Eigenvectors of Large Real-Symmetric Matrices. Journal of Computational Physics, 17:87–94, January 1975.
* [157] F. Heidrich-Meisner, A. Honecker, and T. Vekua. Frustrated ferromagnetic spin-$\frac{1}{2}$ chain in a magnetic field: The phase diagram and thermodynamic properties. Physical Review B, 74(2):020403, July 2006.
* [158] Cédric Weber, Luca Capriotti, Grégoire Misguich, Federico Becca, Maged Elhajal, and Frédéric Mila. Ising Transition Driven by Frustration in a 2D Classical Model with Continuous Symmetry. Physical Review Letters, 91(17):177202, October 2003.
* [159] Fabien Alet, Jesper Lykke Jacobsen, Grégoire Misguich, Vincent Pasquier, Frédéric Mila, and Matthias Troyer. Interacting Classical Dimers on the Square Lattice. Physical Review Letters, 94(23):235702, June 2005.
* [160] Fabien Alet, Grégoire Misguich, Vincent Pasquier, Roderich Moessner, and Jesper Lykke Jacobsen. Unconventional Continuous Phase Transition in a Three-Dimensional Dimer Model. Physical Review Letters, 97(3):030403, July 2006.
* [161] David A. Huse, Werner Krauth, R. Moessner, and S. L. Sondhi. Coulomb and Liquid Dimer Models in Three Dimensions. Physical Review Letters, 91(16):167004, October 2003.
* [162] Stefan Wessel and Matthias Troyer. Supersolid Hard-Core Bosons on the Triangular Lattice. Physical Review Letters, 95(12):127205, September 2005.
* [163] R. G. Melko, A. Paramekanti, A. A. Burkov, A. Vishwanath, D. N. Sheng, and L. Balents. Supersolid Order from Disorder: Hard-Core Bosons on the Triangular Lattice. Physical Review Letters, 95(12):127207, September 2005.
* [164] R. Chitra, Swapan Pati, H. R. Krishnamurthy, Diptiman Sen, and S. Ramasesha. Density-matrix renormalization-group studies of the spin-1/2 Heisenberg system with dimerization and frustration. Physical Review B, 52(9):6581–6587, September 1995.
* [165] Steven R. White and A. L. Chernyshev. Néel order in square and triangular lattice Heisenberg models. Physical Review Letters, 99(12):127004, September 2007. arXiv: 0705.2746.
* [166] H. C. Jiang, Z. Y. Weng, and D. N. Sheng. DMRG Numerical Study of the Kagomé Antiferromagnet. arXiv:0804.1616 [cond-mat], April 2008. arXiv: 0804.1616.
* [167] Rajiv R. P. Singh and David A. Huse. Ground State of the Kagomé Lattice Heisenberg Antiferromagnet. Physical Review B, 76(18):180407, November 2007. arXiv: 0707.0892.
* [168] Marcos Rigol, Tyler Bryant, and Rajiv R. P. Singh. Numerical Linked-Cluster Approach to Quantum Lattice Models. Physical Review Letters, 97(18):187202, November 2006.
* [169] S. Trebst, H. Monien, C. J. Hamer, Z. Weihong, and R. R. P. Singh. Strong-Coupling Expansions for Multiparticle Excitations: Continuum and Bound States. Physical Review Letters, 85(20):4373–4376, November 2000. arXiv: cond-mat/0007192.
* [170] Andrew D. King, Jack Raymond, Trevor Lanting, Sergei V. Isakov, Masoud Mohseni, Gabriel Poulin-Lamarre, Sara Ejtemaee, William Bernoudy, Isil Ozfidan, Anatoly Yu Smirnov, Mauricio Reis, Fabio Altomare, Michael Babcock, Catia Baron, Andrew J. Berkley, Kelly Boothby, Paul I. Bunyk, Holly Christiani, Colin Enderud, Bram Evert, Richard Harris, Emile Hoskinson, Shuiyuan Huang, Kais Jooya, Ali Khodabandelou, Nicolas Ladizinsky, Ryan Li, P. Aaron Lott, Allison J. R. MacDonald, Danica Marsden, Gaelen Marsden, Teresa Medina, Reza Molavi, Richard Neufeld, Mana Norouzpour, Travis Oh, Igor Pavlov, Ilya Perminov, Thomas Prescott, Chris Rich, Yuki Sato, Benjamin Sheldan, George Sterling, Loren J. Swenson, Nicholas Tsai, Mark H. Volkmann, Jed D. Whittaker, Warren Wilkinson, Jason Yao, Hartmut Neven, Jeremy P. Hilton, Eric Ladizinsky, Mark W. Johnson, and Mohammad H. Amin. Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets. Nature Communications, 12(1):1113, February 2021.
* [171] K. Kim, M.-S. Chang, S. Korenblit, R. Islam, E. E. Edwards, J. K. Freericks, G.-D. Lin, L.-M. Duan, and C. Monroe. Quantum simulation of frustrated Ising spins with trapped ions. Nature, 465(7298):590–593, June 2010.
* [172] Alán Aspuru-Guzik and Philip Walther. Photonic quantum simulators. Nature Physics, 8(4):285–291, April 2012.
* [173] Xiao-song Ma, Borivoje Dakic, William Naylor, Anton Zeilinger, and Philip Walther. Quantum simulation of the wavefunction to probe frustrated Heisenberg spin systems. Nature Physics, 7(5):399–405, May 2011.
* [174] Xiao-song Ma, Borivoje Dakić, Sebastian Kropatschek, William Naylor, Yang-hao Chan, Zhe-xuan Gong, Lu-ming Duan, Anton Zeilinger, and Philip Walther. Towards photonic quantum simulation of ground states of frustrated Heisenberg spin systems. Scientific Reports, 4(1):3583, May 2015.
* [175] J. Struck, C. Ölschläger, R. Le Targat, P. Soltan-Panahi, A. Eckardt, M. Lewenstein, P. Windpassinger, and K. Sengstock. Quantum Simulation of Frustrated Classical Magnetism in Triangular Optical Lattices. Science, 333(6045):996–999, August 2011.
* [176] Jianming Cai, Alex Retzker, Fedor Jelezko, and Martin B. Plenio. Towards a large-scale quantum simulator on diamond surface at room temperature. Nature Physics, 9(3):168–173, March 2013. arXiv: 1208.2874.
* [177] P. Möller and J.R. Nix. Nuclear pairing models. Nuclear Physics A, 536(1):20–60, January 1992.
* [178] Fabian Braun and Jan von Delft. Superconductivity in ultrasmall metallic grains. Physical Review B, 59(14):9527–9544, April 1999.
* [179] Rafael M Fernandes. Lecture Notes: BCS theory of superconductivity, 2020.
* [180] Philip W Anderson. Twenty-five Years of High-Temperature Superconductivity – A Personal Review. Journal of Physics: Conference Series, 449:012001, July 2013.
* [181] Elbio Dagotto. Correlated electrons in high-temperature superconductors. Reviews of Modern Physics, 66(3):763–840, July 1994.
* [182] R.W. Richardson and N. Sherman. Exact eigenstates of the pairing-force Hamiltonian. Nuclear Physics, 52:221–238, March 1964.
* [183] S Sarkar. Bethe-ansatz solution of the t-J model. Journal of Physics A: Mathematical and General, 23(9):L409–L414, May 1990.
* [184] Patrick A. Lee, Naoto Nagaosa, and Xiao-Gang Wen. Doping a Mott Insulator: Physics of High Temperature Superconductivity. arXiv:cond-mat/0410445, October 2004. arXiv: cond-mat/0410445.
* [185] Y. Hasegawa and D. Poilblanc. Hole dynamics in the t-J model: An exact diagonalization study. Physical Review B, 40(13):9035–9043, November 1989.
* [186] Masao Ogata, M. U. Luchini, S. Sorella, and F. F. Assaad. Phase diagram of the one-dimensional t-J model. Physical Review Letters, 66(18):2388–2391, May 1991.
* [187] Philippe Corboz, T.M. Rice, and Matthias Troyer. Competing States in the t-J Model: Uniform d-Wave State versus Stripe State. Physical Review Letters, 113(4):046402, July 2014.
* [188] C. Stephen Hellberg and E. Manousakis. Stripes and the t-J Model. Physical Review Letters, 83(1):132–135, July 1999.
* [189] Satoshi Morita, Ryui Kaneko, and Masatoshi Imada. Quantum Spin Liquid in Spin 1/2 J1-J2 Heisenberg Model on Square Lattice: Many-Variable Variational Monte Carlo Study Combined with Quantum-Number Projections. Journal of the Physical Society of Japan, 84(2):024720, February 2015. arXiv: 1410.7557.
* [190] Fumiko Yamaguchi and Yoshihisa Yamamoto. Quantum simulation of the t–J model. Superlattices and Microstructures, 32(4-6):343–345, October 2002.
* [191] Efstratios Manousakis. A Quantum-Dot Array as Model for Copper-Oxide Superconductors: A Dedicated Quantum Simulator for the Many-Fermion Problem. arXiv:cond-mat/0201142, January 2002. arXiv: cond-mat/0201142.
* [192] L.-A. Wu, M. S. Byrd, and D. A. Lidar. Polynomial-Time Simulation of Pairing Models on a Quantum Computer. Physical Review Letters, 89(5):057904, July 2002. arXiv: quant-ph/0108110.
* [193] Kenneth R. Brown, Robert J. Clark, and Isaac L. Chuang. Limitations of Quantum Simulation Examined by Simulating a Pairing Hamiltonian using Nuclear Magnetic Resonance. Physical Review Letters, 97(5):050504, August 2006. arXiv: quant-ph/0601021.
* [194] Katherine L Brown, Suvabrata De, Vivien M Kendon, and William J Munro. Ancilla-based quantum simulation. New Journal of Physics, 13(9):095007, September 2011.
* [195] Lindsay Bassman, Miroslav Urbanek, Mekena Metcalf, Jonathan Carter, Alexander F. Kemper, and Wibe de Jong. Simulating Quantum Materials with Digital Quantum Computers. Quantum Science and Technology, 6(4):043002, October 2021. arXiv: 2101.08836.
* [196] Matthias Vojta. Quantum phase transitions. Reports on Progress in Physics, 66(12):2069–2110, December 2003. arXiv: cond-mat/0309604.
* [197] Thomas Vojta. Quantum phase transitions in electronic systems. Annalen der Physik, 9(6):403–440, June 2000. arXiv: cond-mat/9910514.
* [198] P. C. Hohenberg and A. P. Krekhov. An introduction to the Ginzburg-Landau theory of phase transitions and nonequilibrium patterns. Physics Reports, 572:1–42, April 2015. arXiv: 1410.7285.
* [199] Catalin Pascu Moca and Adrian Roman. Quantum phase transition in a gapped Anderson model: A numerical renormalization group study. Physical Review B, 81(23):235106, June 2010. arXiv: 1006.4022.
* [200] Hyun-Jung Lee, Ralf Bulla, and Matthias Vojta. Numerical Renormalization Group for Impurity Quantum Phase Transitions: Structure of Critical Fixed Points. 2005.
* [201] G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet, and C. A. Marianetti. Electronic Structure Calculations with Dynamical Mean-Field Theory: A Spectral Density Functional Approach. Reviews of Modern Physics, 78(3):865–951, August 2006. arXiv: cond-mat/0511085.
* [202] D. Vollhardt. Dynamical mean-field theory for correlated electrons. Annalen der Physik, 524(1):1–19, January 2012.
* [203] Nick Fläschner, Dominik Vogel, Matthias Tarnowski, Benno S. Rem, Dirk-Sören Lühmann, Markus Heyl, Jan Carl Budich, Ludwig Mathey, Klaus Sengstock, and Christof Weitenberg. Observation of a dynamical topological phase transition. 2016.
* [204] A. J. Daley, H. Pichler, J. Schachenmayer, and P. Zoller. Measuring Entanglement Growth in Quench Dynamics of Bosons in an Optical Lattice. Physical Review Letters, 109(2):020505, July 2012.
* [205] Hannes Pichler, Lars Bonnes, Andrew J Daley, Andreas M Läuchli, and Peter Zoller. Thermal versus entanglement entropy: a measurement protocol for fermionic atoms with a quantum gas microscope. New Journal of Physics, 15(6):063003, June 2013.
* [206] Chenyong Ju, Chao Lei, Xiangkun Xu, Dimitrie Culcer, Zhenyu Zhang, and Jiangfeng Du. NV-Center Based Digital Quantum Simulation of a Quantum Phase Transition in Topological Insulators. Physical Review B, 89(4):045432, January 2014. arXiv: 1310.1451.
* [207] Xinhua Peng, Jiangfeng Du, and Dieter Suter. Quantum Phase Transition of Ground-state Entanglement in a Heisenberg Spin Chain Simulated in an NMR Quantum Computer. Physical Review A, 71(1):012307, January 2005. arXiv: quant-ph/0411049.
* [208] Rodney Loudon. The quantum theory of light. OUP Oxford, 2000.
* [209] P Forn-Díaz, L Lamata, E Rico, J Kono, and E Solano. Ultrastrong coupling regimes of light-matter interaction. Reviews of Modern Physics, 91(2):025005, 2019.
* [210] Anton Frisk Kockum, Adam Miranowicz, Simone De Liberato, Salvatore Savasta, and Franco Nori. Ultrastrong coupling between light and matter. Nature Reviews Physics, 1(1):19–40, 2019.
* [211] Agustin Di Paolo, Panagiotis Kl Barkoutsos, Ivano Tavernelli, and Alexandre Blais. Variational quantum simulation of ultrastrong light-matter coupling. Physical Review Research, 2(3):033364, 2020.
* [212] Zbigniew Ficek and Mohamed Ridza Wahiddin. Quantum optics for beginners. CRC Press, 2014.
* [213] Ranojoy Bose, Tao Cai, Kaushik Roy Choudhury, Glenn S Solomon, and Edo Waks. All-optical coherent control of vacuum Rabi oscillations. Nature Photonics, 8(11):858–864, 2014.
* [214] JM Fink, M Göppl, M Baur, R Bianchetti, Peter J Leek, Alexandre Blais, and Andreas Wallraff. Climbing the Jaynes–Cummings ladder and observing its nonlinearity in a cavity QED system. Nature, 454(7202):315–318, 2008.
* [215] FW Cummings. Stimulated emission of radiation in a single mode. Physical Review, 140(4A):A1051, 1965.
* [216] Mausam Bindhani, Bikash Behera, and Prasanta Panigrahi. Quantum simulation of Jaynes-Cummings model on IBM Q system. February 2020.
* [217] T Holstein and H Primakoff. Field dependence of the intrinsic domain magnetization of a ferromagnet. Physical Review, 58(12):1098, 1940.
* [218] II Rabi. On the process of space quantization. Physical Review, 49(4):324, 1936.
* [219] Robert H Dicke. Coherence in spontaneous radiation processes. Physical review, 93(1):99, 1954.
* [220] Jochen Braumüller, Michael Marthaler, Andre Schneider, Alexander Stehli, Hannes Rotzinger, Martin Weides, and Alexey V Ustinov. Analog quantum simulation of the Rabi model in the ultra-strong coupling regime. Nature Communications, 8(1):1–8, 2017.
* [221] Juha Leppäkangas, Jochen Braumüller, Melanie Hauck, Jan-Michael Reiner, Iris Schwenk, Sebastian Zanker, Lukas Fritz, Alexey V Ustinov, Martin Weides, and Michael Marthaler. Quantum simulation of the spin-boson model with a microwave circuit. Physical Review A, 97(5):052321, 2018.
* [222] Antonio Mezzacapo, U Las Heras, JS Pedernales, L DiCarlo, E Solano, and L Lamata. Digital quantum Rabi and Dicke models in superconducting circuits. Scientific Reports, 4(1):1–4, 2014.
* [223] JS Pedernales, I Lizuain, S Felicetti, G Romero, L Lamata, and E Solano. Quantum Rabi model with trapped ions. Scientific Reports, 5(1):1–7, 2015.
* [224] Dingshun Lv, Shuoming An, Zhenyu Liu, Jing-Ning Zhang, Julen S Pedernales, Lucas Lamata, Enrique Solano, and Kihwan Kim. Quantum simulation of the quantum Rabi model in a trapped ion. Physical Review X, 8(2):021027, 2018.
* [225] Ibai Aedo and Lucas Lamata. Analog quantum simulation of generalized Dicke models in trapped ions. Physical Review A, 97(4):042317, 2018.
* [226] Lucas Lamata. Digital-analog quantum simulation of generalized Dicke models with superconducting circuits. Scientific Reports, 7(1):1–12, 2017.
* [227] Sergei Valer’evich Remizov, Andrei Andreevich Zhukov, Walter Valentinovich Pogosov, and Yu E Lozovik. Analog–digital quantum simulation of the Dicke model with superconducting circuits. JETP Letters, 108(11):748–753, 2018.
* [228] Dominic W Berry, Graeme Ahokas, Richard Cleve, and Barry C Sanders. Efficient quantum algorithms for simulating sparse Hamiltonians. Communications in Mathematical Physics, 270(2):359–371, 2007.
* [229] Nathan Wiebe, Dominic W Berry, Peter Høyer, and Barry C Sanders. Simulating quantum dynamics on a quantum computer. Journal of Physics A: Mathematical and Theoretical, 44(44):445308, 2011.
* [230] Carlos Sabín. Digital quantum simulation of linear and nonlinear optical elements. Quantum Reports, 2(1):208–220, 2020.
* [231] Paula Cordero Encinar, Andrés Agustí, and Carlos Sabín. Digital quantum simulation of beam splitters and squeezing with IBM quantum computers. Physical Review A, 104(5):052609, 2021.
* [232] Seth Lloyd. Universal quantum simulators. Science, 273(5278):1073–1078, 1996.
* [233] Rolando Somma, Gerardo Ortiz, James E Gubernatis, Emanuel Knill, and Raymond Laflamme. Simulating physical phenomena by quantum networks. Physical Review A, 65(4):042323, 2002.
* [234] Rolando D Somma. Quantum computation, complexity, and many-body physics. arXiv preprint quant-ph/0512209, 2005.
* [235] Yvonne Y Gao, Brian J Lester, Yaxing Zhang, Chen Wang, Serge Rosenblum, Luigi Frunzio, Liang Jiang, SM Girvin, and Robert J Schoelkopf. Programmable interference between two microwave quantum memories. Physical Review X, 8(2):021073, 2018.
* [236] Yaxing Zhang, Brian J Lester, Yvonne Y Gao, Liang Jiang, RJ Schoelkopf, and SM Girvin. Engineering bilinear mode coupling in circuit QED: Theory and experiment. Physical Review A, 99(1):012314, 2019.
* [237] JR Johansson, Göran Johansson, CM Wilson, Per Delsing, and Franco Nori. Nonclassical microwave radiation from the dynamical Casimir effect. Physical Review A, 87(4):043804, 2013.
* [238] Christopher M Wilson, Göran Johansson, Arsalan Pourkabirian, Michael Simoen, J Robert Johansson, Tim Duty, Franco Nori, and Per Delsing. Observation of the dynamical Casimir effect in a superconducting circuit. Nature, 479(7373):376–379, 2011.
* [239] Diego González Olivares, Borja Peropadre, Joonsuk Huh, and Juan José García-Ripoll. Quantum emulation of molecular force fields: A blueprint for a superconducting architecture. Physical Review Applied, 8(6):064008, 2017.
* [240] Yudong Cao, Jonathan Romero, Jonathan P Olson, Matthias Degroote, Peter D Johnson, Mária Kieferová, Ian D Kivlichan, Tim Menke, Borja Peropadre, Nicolas PD Sawaya, et al. Quantum chemistry in the age of quantum computing. Chemical reviews, 119(19):10856–10915, 2019.
* [241] Heinz-Peter Breuer, Francesco Petruccione, et al. The theory of open quantum systems. Oxford University Press on Demand, 2002.
* [242] Crispin W Gardiner. Quantum noise. Springer series in synergetics, 1991.
* [243] J Stolze and D Suter. Quantum Computing: A Short Course from Theory to Experiment. Wiley-VCH, revised and enlarged edition, 2008.
* [244] Julio T Barreiro, Markus Müller, Philipp Schindler, Daniel Nigg, Thomas Monz, Michael Chwalla, Markus Hennrich, Christian F Roos, Peter Zoller, and Rainer Blatt. An open-system quantum simulator with trapped ions. Nature, 470(7335):486–491, 2011.
* [245] Angel Rivas and Susana F Huelga. Open quantum systems, volume 10. Springer, 2012.
* [246] Bassano Vacchini. Quantum noise from reduced dynamics. Fluctuation and Noise Letters, 15(03):1640003, 2016.
* [247] Jonas Lammers, Hendrik Weimer, and Klemens Hammerer. Open-system many-body dynamics through interferometric measurements and feedback. Phys. Rev. A, 94:052120, Nov 2016.
* [248] Rob Clifton and Hans Halvorson. Entanglement and open systems in algebraic quantum field theory. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 32(1):1–31, 2001.
* [249] Daniel A Lidar. Lecture notes on the theory of open quantum systems. arXiv preprint arXiv:1902.00967, 2019.
* [250] Pragati Gupta and CM Chandrashekar. Optimal quantum simulation of open quantum systems. arXiv preprint arXiv:2012.07540, 2020.
* [251] Dave Bacon, Andrew M Childs, Isaac L Chuang, Julia Kempe, Debbie W Leung, and Xinlan Zhou. Universal simulation of markovian quantum dynamics. Physical Review A, 64(6):062302, 2001.
* [252] Ryan Sweke, Mikel Sanz, Ilya Sinayskiy, Francesco Petruccione, and Enrique Solano. Digital quantum simulation of many-body non-Markovian dynamics. Physical Review A, 94(2):022317, 2016.
* [253] Guillermo García-Pérez, Matteo AC Rossi, and Sabrina Maniscalco. IBM Q Experience as a versatile experimental testbed for simulating open quantum systems. npj Quantum Information, 6(1):1–10, 2020.
* [254] Zixuan Hu, Rongxin Xia, and Sabre Kais. A quantum algorithm for evolving open quantum dynamics on quantum computing devices. Scientific reports, 10(1):1–9, 2020.
* [255] Hefeng Wang, Sahel Ashhab, and Franco Nori. Quantum algorithm for simulating the dynamics of an open quantum system. Physical Review A, 83(6):062317, 2011.
* [256] Ryan Sweke, Ilya Sinayskiy, and Francesco Petruccione. Simulation of single-qubit open quantum systems. Physical Review A, 90(2):022331, 2014.
* [257] Masuo Suzuki. Fractal decomposition of exponential operators with applications to many-body theories and Monte Carlo simulations. Physics Letters A, 146(6):319–323, 1990.
* [258] W Forrest Stinespring. Positive functions on C*-algebras. Proceedings of the American Mathematical Society, 6(2):211–216, 1955.
* [259] B. Sz.-Nagy and C. Foias. Harmonic Analysis of Operators on Hilbert Space. Akadémiai Kiadó / North-Holland Publishing Company, Budapest/Amsterdam/London, 1970.
* [260] Anthony W Schlimgen, Kade Head-Marsden, LeeAnn M Sager, Prineha Narang, and David A Mazziotti. Quantum simulation of open quantum systems using a unitary decomposition of operators. Physical Review Letters, 127(27):270503, 2021.
* [261] Hirsh Kamakari, Shi-Ning Sun, Mario Motta, and Austin J Minnich. Digital quantum simulation of open quantum systems using quantum imaginary–time evolution. PRX Quantum, 3(1):010320, 2022.
* [262] Minjae Jo and Myungshik Kim. Simulating open quantum many-body systems using optimised circuits in digital quantum simulation. arXiv preprint arXiv:2203.14295, 2022.
* [263] José D Guimarães, Mikhail I Vasilevskiy, and Luís S Barbosa. Efficient method to simulate non-perturbative dynamics of an open quantum system using a quantum computer. arXiv preprint arXiv:2203.14653, 2022.
* [264] Wibe A de Jong, Mekena Metcalf, James Mulligan, Mateusz Płoskoń, Felix Ringer, and Xiaojun Yao. Quantum simulation of open quantum systems in heavy-ion collisions. Physical Review D, 104(5):L051501, 2021.
* [265] Pragati Gupta and CM Chandrashekar. Digital quantum simulation framework for energy transport in an open quantum system. New Journal of Physics, 22(12):123027, 2020.
* [266] Huan-Yu Liu, Tai-Ping Sun, Yu-Chun Wu, and Guo-Ping Guo. Variational quantum algorithms for the steady states of open quantum systems. Chinese Physics Letters, 38(8):080301, 2021.
* [267] J Robert Johansson, Paul D Nation, and Franco Nori. QuTiP: An open-source Python framework for the dynamics of open quantum systems. Computer Physics Communications, 183(8):1760–1772, 2012.
* [268] Alexandra Nagy and Vincenzo Savona. Variational quantum Monte Carlo method with a neural-network ansatz for open quantum systems. Physical Review Letters, 122(25):250501, 2019.
* [269] Niklas Christensson, Harald F Kauffmann, Tõnu Pullerits, and Tomáš Mančal. Origin of long-lived coherences in light-harvesting complexes. The Journal of Physical Chemistry B, 116(25):7449–7454, 2012.
* [270] Masoud Mohseni, Yasser Omar, Gregory S Engel, and Martin B Plenio. Quantum effects in biology. Cambridge University Press, 2014.
* [271] Mao Wang, Manuel Hertzog, and Karl Börjesson. Polariton-assisted excitation energy channeling in organic heterojunctions. Nature communications, 12(1):1–10, 2021.
* [272] MI Vasilevskiy, EV Anda, and SS Makler. Electron-phonon interaction effects in semiconductor quantum dots: A nonperturbative approach. Physical Review B, 70(3):035318, 2004.
* [273] Mario Motta and Julia E Rice. Emerging quantum computing algorithms for quantum chemistry. Wiley Interdisciplinary Reviews: Computational Molecular Science, page e1580, 2021.
* [274] C David Sherrill. Frontiers in electronic structure theory. The Journal of chemical physics, 132(11):110902, 2010.
* [275] Mark Webber, Vincent Elfving, Sebastian Weidt, and Winfried K Hensinger. The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime. AVS Quantum Science, 4(1):013801, 2022.
* [276] Dawei Lu, Nanyang Xu, Ruixue Xu, Hongwei Chen, Jiangbin Gong, Xinhua Peng, and Jiangfeng Du. Simulation of chemical reaction dynamics on an NMR quantum computer. arXiv preprint arXiv:1105.4228, 2011.
* [277] Sam McArdle, Alexander Mayorov, Xiao Shan, Simon Benjamin, and Xiao Yuan. Digital quantum simulation of molecular vibrations. Chemical science, 10(22):5725–5735, 2019.
* [278] Hefeng Wang, Sabre Kais, Alán Aspuru-Guzik, and Mark R Hoffmann. Quantum algorithm for obtaining the energy spectrum of molecular systems. Physical Chemistry Chemical Physics, 10(35):5388–5393, 2008.
* [279] Vincent E Elfving, Benno W Broer, Mark Webber, Jacob Gavartin, Mathew D Halls, K Patrick Lorton, and A Bochevarov. How will quantum computers provide an industrially relevant computational advantage in quantum chemistry? arXiv preprint arXiv:2009.12472, 2020.
* [280] Martin P Andersson, Mark N Jones, Kurt V Mikkelsen, Fengqi You, and Seyed Soheil Mansouri. Quantum computing for chemical and biomolecular product design. Current Opinion in Chemical Engineering, 36:100754, 2022.
* [281] Benjamin P Lanyon, James D Whitfield, Geoff G Gillett, Michael E Goggin, Marcelo P Almeida, Ivan Kassal, Jacob D Biamonte, Masoud Mohseni, Ben J Powell, Marco Barbieri, et al. Towards quantum chemistry on a quantum computer. Nature chemistry, 2(2):106–111, 2010.
* [282] Ammar Daskin and Sabre Kais. Direct application of the phase estimation algorithm to find the eigenvalues of the Hamiltonians. Chemical Physics, 514:87–94, 2018.
* [283] Ammar Daskin and Sabre Kais. A generalized circuit for the Hamiltonian dynamics through the truncated series. Quantum Information Processing, 17(12):1–19, 2018.
* [284] Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. New Journal of Physics, 18(2):023023, 2016.
* [285] Teng Bian, Daniel Murphy, Rongxin Xia, Ammar Daskin, and Sabre Kais. Quantum computing methods for electronic states of the water molecule. Molecular Physics, 117(15-16):2069–2082, 2019.
* [286] Ali Abedi, Neepa T Maitra, and Eberhard KU Gross. Exact factorization of the time-dependent electron-nuclear wave function. Physical review letters, 105(12):123002, 2010.
* [287] Basile FE Curchod and Todd J Martínez. Ab initio nonadiabatic quantum molecular dynamics. Chemical reviews, 118(7):3305–3336, 2018.
* [288] Fabien Gatti. Molecular quantum dynamics: from theory to applications. Springer, 2014.
* [289] Christiane P Koch, Mikhail Lemeshko, and Dominique Sugny. Quantum control of molecular rotation. Reviews of Modern Physics, 91(3):035005, 2019.
* [290] NH Damrauer, C Dietl, G Krampert, S-H Lee, K-H Jung, and G Gerber. Control of bond-selective photochemistry in CH$_2$BrCl using adaptive femtosecond pulse shaping. The European Physical Journal D-Atomic, Molecular, Optical and Plasma Physics, 20(1):71–76, 2002.
* [291] G Vogt, G Krampert, P Niklaus, P Nuernberger, and G Gerber. Optimal control of photoisomerization. Physical review letters, 94(6):068305, 2005.
* [292] C Daniel, J Full, L González, C Lupulescu, J Manz, A Merli, S Vajda, and L Wöste. Deciphering the reaction dynamics underlying optimal control laser fields. Science, 299:536–539, 2003.
* [293] Pauline J Ollitrault, Alexander Miessen, and Ivano Tavernelli. Molecular quantum dynamics: A quantum computing perspective. Accounts of chemical research, 54(23):4229–4238, 2021.
* [294] Qi Gao, Hajime Nakamura, Tanvi P Gujarati, Gavin O Jones, Julia E Rice, Stephen P Wood, Marco Pistoia, Jeannette M Garcia, and Naoki Yamamoto. Computational investigations of the lithium superoxide dimer rearrangement on noisy quantum devices. The Journal of Physical Chemistry A, 125(9):1827–1836, 2021.
* [295] Mario Motta and Julia E Rice. Emerging quantum computing algorithms for quantum chemistry. Wiley Interdisciplinary Reviews: Computational Molecular Science, page e1580, 2021.
* [296] Qi Gao, Gavin O Jones, Michihiko Sugawara, Takao Kobayashi, Hiroki Yamashita, Hideaki Kawaguchi, Shu Tanaka, and Naoki Yamamoto. Quantum-classical computational molecular design of deuterated high-efficiency oled emitters. arXiv preprint arXiv:2110.14836, 2021.
* [297] Alicia B Magann, Matthew D Grace, Herschel A Rabitz, and Mohan Sarovar. Digital quantum simulation of molecular dynamics and control. Physical Review Research, 3(2):023165, 2021.
* [298] Chaoyuan Zhu, Kuo Kan Liang, Michitoshi Hayashi, and Sheng Hsien Lin. Theoretical treatment of anharmonic effect on molecular absorption, fluorescence spectra, and electron transfer. Chemical Physics, 358(1-2):137–146, 2009.
* [299] Chen-Wen Wang, Ling Yang, Chaoyuan Zhu, Jian-Guo Yu, and Sheng-Hsien Lin. Franck–Condon factors perturbed by damped harmonic oscillators: Solvent enhanced X$^1A_g$–A$^1B_{1u}$ absorption and fluorescence spectra of perylene. The Journal of Chemical Physics, 141(8):084106, 2014.
* [300] L Debbichi, MC Marco de Lucas, JF Pierson, and P Kruger. Vibrational properties of CuO and Cu$_4$O$_3$ from first-principles calculations, and Raman and infrared spectroscopy. The Journal of Physical Chemistry C, 116(18):10232–10237, 2012.
* [301] Solairaj Dhananasekaran, Rameshthangam Palanivel, and Srinivasan Pappu. Adsorption of methylene blue, bromophenol blue, and coomassie brilliant blue by $\alpha$-chitin nanoparticles. Journal of advanced research, 7(1):113–124, 2016.
* [302] Dimitri Antoniou and Steven D Schwartz. Internal enzyme motions as a source of catalytic activity: rate-promoting vibrations and hydrogen tunneling. The Journal of Physical Chemistry B, 105(23):5553–5558, 2001.
* [303] Hyonseok Hwang and Peter J Rossky. Harmonic model description of the Franck–Condon density for a betaine dye molecule. The Journal of Physical Chemistry A, 108(14):2607–2616, 2004.
* [304] Trygve Helgaker, Poul Jorgensen, and Jeppe Olsen. Molecular electronic-structure theory. John Wiley & Sons, 2014.
* [305] Ola Engkvist, Per-Ola Norrby, Nidhal Selmi, Yu-hong Lam, Zhengwei Peng, Edward C Sherer, Willi Amberg, Thomas Erhard, and Lynette A Smyth. Computational prediction of chemical reactions: current status and outlook. Drug discovery today, 23(6):1203–1218, 2018.
* [306] Scott Aaronson. Quantum computing, postselection, and probabilistic polynomial-time. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 461(2063):3473–3482, 2005.
* [307] Yves Salathé, Mintu Mondal, Markus Oppliger, Johannes Heinsoo, Philipp Kurpiers, Anton Potočnik, Antonio Mezzacapo, Urtzi Las Heras, Lucas Lamata, Enrique Solano, et al. Digital quantum simulation of spin models with circuit quantum electrodynamics. Physical Review X, 5(2):021027, 2015.
* [308] Dawei Lu, Nanyang Xu, Ruixue Xu, Hongwei Chen, Jiangbin Gong, Xinhua Peng, and Jiangfeng Du. Simulation of chemical reaction dynamics on an NMR quantum computer. arXiv preprint arXiv:1105.4228, 2011.
* [309] Kuntal Halder, Narendra N Hegade, Bikash K Behera, and Prasanta K Panigrahi. Digital quantum simulation of laser-pulse induced tunneling mechanism in chemical isomerization reaction. arXiv preprint arXiv:1808.00021, 2018.
* [310] Herschel Rabitz, Regina de Vivie-Riedle, Marcus Motzkus, and Karl Kompa. Whither the future of controlling quantum phenomena? Science, 288(5467):824–828, 2000.
* [311] Alberto Apostolico and Raffaele Giancarlo. Sequence alignment in molecular biology. Journal of Computational Biology, 5(2):173–196, 1998.
* [312] Michael Zuker. Suboptimal sequence alignment in molecular biology: Alignment with error analysis. Journal of molecular biology, 221(2):403–420, 1991.
* [313] Edward M Marcotte. Computational genetics: finding protein function by nonhomology methods. Current opinion in structural biology, 10(3):359–365, 2000.
* [314] Clemens Prescher and Vitali B Prakapenka. DIOPTAS: a program for reduction of two-dimensional X-ray diffraction data and data exploration. High Pressure Research, 35(3):223–230, 2015.
* [315] Sandro Cosconati, Stefano Forli, Alex L Perryman, Rodney Harris, David S Goodsell, and Arthur J Olson. Virtual screening with AutoDock: theory and practice. Expert Opinion on Drug Discovery, 5(6):597–607, 2010.
* [316] Jian Wang and Nikolay V Dokholyan. MedusaDock 2.0: Efficient and accurate protein–ligand docking with constraints. Journal of Chemical Information and Modeling, 59(6):2509–2515, 2019.
* [317] Thomas A Halgren, Robert B Murphy, Richard A Friesner, Hege S Beard, Leah L Frye, W Thomas Pollard, and Jay L Banks. Glide: a new approach for rapid, accurate docking and scoring. 2. enrichment factors in database screening. Journal of medicinal chemistry, 47(7):1750–1759, 2004.
* [318] Junde Li, Mahabubul Alam, M Sha Congzhou, Jian Wang, Nikolay V Dokholyan, and Swaroop Ghosh. Drug discovery approaches using quantum machine learning. In 2021 58th ACM/IEEE Design Automation Conference (DAC), pages 1356–1359. IEEE, 2021.
* [319] Junde Li, Rasit O Topaloglu, and Swaroop Ghosh. Quantum generative models for small molecule drug discovery. IEEE Transactions on Quantum Engineering, 2:1–8, 2021.
* [320] Junde Li, Rasit O Topaloglu, and Swaroop Ghosh. Quantum generative models for small molecule drug discovery. IEEE Transactions on Quantum Engineering, 2:1–8, 2021.
* [321] He-Liang Huang, Yuxuan Du, Ming Gong, Youwei Zhao, Yulin Wu, Chaoyue Wang, Shaowei Li, Futian Liang, Jin Lin, Yu Xu, et al. Experimental quantum generative adversarial networks for image generation. Physical Review Applied, 16(2):024051, 2021.
* [322] Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. Quanvolutional neural networks: powering image recognition with quantum circuits. Quantum Machine Intelligence, 2(1):1–9, 2020.
* [323] Jonathan Laserson, Vladimir Jojic, and Daphne Koller. Genovo: de novo assembly for metagenomes. Journal of Computational Biology, 18(3):429–443, 2011.
* [324] Eugene W Myers. The fragment assembly string graph. Bioinformatics, 21(suppl_2):ii79–ii85, 2005.
* [325] Fred Glover, Gary Kochenberger, and Yu Du. A tutorial on formulating and using QUBO models. arXiv preprint arXiv:1811.11538, 2018.
* [326] Michael J Hubbard. Functional proteomics: The goalposts are moving. PROTEOMICS: International Edition, 2(9):1069–1078, 2002.
* [327] Anton Robert, Panagiotis Kl Barkoutsos, Stefan Woerner, and Ivano Tavernelli. Resource-efficient quantum algorithm for protein folding. npj Quantum Information, 7(1):1–5, 2021.
* [328] Mohammad Hassan Khatami, Udson C Mendes, Nathan Wiebe, and Philip M Kim. Gate-based quantum computing for protein design. arXiv preprint arXiv:2201.12459, 2022.
* [329] A. Yu Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2–30, January 2003. arXiv: quant-ph/9707021.
* [330] Simon J Devitt, William J Munro, and Kae Nemoto. Quantum error correction for beginners. Reports on Progress in Physics, 76(7):076001, July 2013.
* [331] Suguru Endo, Simon C. Benjamin, and Ying Li. Practical Quantum Error Mitigation for Near-Future Applications. Physical Review X, 8(3):031027, July 2018.
* [332] Filip B. Maciejewski, Zoltán Zimborás, and Michał Oszmaniec. Mitigation of readout noise in near-term quantum devices by classical post-processing based on detector tomography. arXiv:1907.08518 [quant-ph], March 2020. arXiv: 1907.08518.
* [333] Mingyu Sun and Michael R. Geller. Efficient characterization of correlated SPAM errors. arXiv:1810.10523 [quant-ph], July 2020. arXiv: 1810.10523.
* [334] Sergey Bravyi, Sarah Sheldon, Abhinav Kandala, David C. Mckay, and Jay M. Gambetta. Mitigating measurement errors in multi-qubit experiments. Physical Review A, 103(4):042605, April 2021. arXiv: 2006.14044.
* [335] Miroslav Urbanek, Benjamin Nachman, Vincent R. Pascuzzi, Andre He, Christian W. Bauer, and Wibe A. de Jong. Mitigating depolarizing noise on quantum computers with noise-estimation circuits. Physical Review Letters, 127(27):270502, December 2021. arXiv: 2103.08591.
* [336] Joseph Vovrosh, Kiran E. Khosla, Sean Greenaway, Christopher Self, Myungshik Kim, and Johannes Knolle. Simple Mitigation of Global Depolarizing Errors in Quantum Simulations. Physical Review E, 104(3):035309, September 2021. arXiv: 2101.01690.
* [337] Joel J. Wallman and Joseph Emerson. Noise tailoring for scalable quantum computation via randomized compiling. Physical Review A, 94(5):052325, November 2016.
* [338] Kristan Temme, Sergey Bravyi, and Jay M. Gambetta. Error mitigation for short-depth quantum circuits. Physical Review Letters, 119(18):180509, November 2017. arXiv: 1612.02058.
* [339] Ying Li and Simon C. Benjamin. Efficient Variational Quantum Simulator Incorporating Active Error Minimization. Physical Review X, 7(2):021050, June 2017.
* [340] Tudor Giurgica-Tiron, Yousef Hindy, Ryan LaRose, Andrea Mari, and William J. Zeng. Digital zero noise extrapolation for quantum error mitigation. 2020 IEEE International Conference on Quantum Computing and Engineering (QCE), pages 306–316, October 2020. arXiv: 2005.10921.
* [341] Hakop Pashayan, Joel J. Wallman, and Stephen D. Bartlett. Estimating Outcome Probabilities of Quantum Circuits Using Quasiprobabilities. Physical Review Letters, 115(7):070501, August 2015.
* [342] Piotr Czarnik, Andrew Arrasmith, Patrick J. Coles, and Lukasz Cincio. Error mitigation with Clifford quantum-circuit data. Quantum, 5:592, November 2021. arXiv: 2005.10189.
* [343] Armands Strikis, Dayue Qin, Yanzhu Chen, Simon C. Benjamin, and Ying Li. Learning-based quantum error mitigation. PRX Quantum, 2(4):040330, November 2021. arXiv: 2005.07601.
* [344] Jingfu Zhang, Alexandre M. Souza, Frederico Dias Brandao, and Dieter Suter. Protected Quantum Computing: Interleaving Gate Operations with Dynamical Decoupling Sequences. Physical Review Letters, 112(5):050502, February 2014.
* [345] Sam McArdle, Xiao Yuan, and Simon Benjamin. Error-Mitigated Digital Quantum Simulation. Physical Review Letters, 122(18):180501, May 2019.
* [346] R. Sagastizabal, X. Bonet-Monroig, M. Singh, M. A. Rol, C. C. Bultink, X. Fu, C. H. Price, V. P. Ostroukh, N. Muthusubramanian, A. Bruno, M. Beekman, N. Haider, T. E. O’Brien, and L. DiCarlo. Experimental error mitigation via symmetry verification in a variational quantum eigensolver. Physical Review A, 100(1):010302, July 2019.
* [347] William J. Huggins, Sam McArdle, Thomas E. O’Brien, Joonho Lee, Nicholas C. Rubin, Sergio Boixo, K. Birgitta Whaley, Ryan Babbush, and Jarrod R. McClean. Virtual Distillation for Quantum Error Mitigation. Physical Review X, 11(4):041036, November 2021.
* [348] Piotr Czarnik, Andrew Arrasmith, Lukasz Cincio, and Patrick J. Coles. Qubit-efficient exponential suppression of errors. arXiv:2102.06056 [quant-ph], March 2021. arXiv: 2102.06056.
* [349] Minh C. Tran, Yuan Su, Daniel Carney, and Jacob M. Taylor. Faster Digital Quantum Simulation by Symmetry Protection. PRX Quantum, 2(1):010323, February 2021. arXiv: 2006.16248.
* [350] Suguru Endo and Simon Benjamin. Hybrid quantum-classical algorithms and error mitigation. DPhil, University of Oxford, 2019.
* [351] Aritra Sarkar, Zaid Al-Ars, and Koen Bertels. Quaser: Quantum accelerated de novo dna sequence reconstruction. Plos one, 16(4):e0249850, 2021.
* [352] Gabriel Greene-Diniz, David Zsolt Manrique, Wassil Sennane, Yann Magnin, Elvira Shishenina, Philippe Cordier, Philip Llewellyn, Michal Krompiec, Marko J Rančić, and David Muñoz Ramo. Modelling carbon capture on metal-organic frameworks with quantum computing. arXiv preprint arXiv:2203.15546, 2022.
* [353] Josh John Mellor Kirsopp, Cono Di Paola, David Zsolt Manrique, Michal Krompiec, Gabriel Greene-Diniz, Wolfgang Guba, Agnes Meyder, Detlef Wolf, Martin Strahm, and David Muñoz Ramo. Quantum computational quantification of protein-ligand interactions. arXiv preprint arXiv:2110.08163, 2021.
* [354] Jarrod R. McClean, Kevin J. Sung, Ian D. Kivlichan, Yudong Cao, Chengyu Dai, E. Schuyler Fried, Craig Gidney, Brendan Gimby, Pranav Gokhale, Thomas Häner, Tarini Hardikar, Vojtěch Havlíček, Oscar Higgott, Cupjin Huang, Josh Izaac, Zhang Jiang, Xinle Liu, Sam McArdle, Matthew Neeley, Thomas O’Brien, Bryan O’Gorman, Isil Ozfidan, Maxwell D. Radin, Jhonathan Romero, Nicholas Rubin, Nicolas P. D. Sawaya, Kanav Setia, Sukin Sim, Damian S. Steiger, Mark Steudtner, Qiming Sun, Wei Sun, Daochen Wang, Fang Zhang, and Ryan Babbush. OpenFermion: The Electronic Structure Package for Quantum Computers. arXiv:1710.07629 [physics, physics:quant-ph], February 2019. arXiv: 1710.07629.
* [355] Phillip Weinberg and Marin Bukov. QuSpin: a Python package for dynamics and exact diagonalisation of quantum many body systems part I: spin chains. SciPost Physics, 2(1):003, February 2017.
* [356] Phillip Weinberg and Marin Bukov. QuSpin: a Python package for dynamics and exact diagonalisation of quantum many body systems. Part II: bosons, fermions and higher spins. SciPost Physics, 7(2):020, August 2019.
* [357] Christoph W Groth, Michael Wimmer, Anton R Akhmerov, and Xavier Waintal. Kwant: a software package for quantum transport. New Journal of Physics, 16(6):063065, June 2014.
* [358] Lindsay Bassman, Connor Powers, and Wibe A. de Jong. ArQTiC: A full-stack software package for simulating materials on quantum computers. arXiv:2106.04749 [physics, physics:quant-ph], June 2021. arXiv: 2106.04749.
* [359] Alvaro Feito. Quantavo: a maple toolbox for linear quantum optics. arXiv preprint arXiv:0806.2171, 2008.
* [360] Sebastian Krämer, David Plankensteiner, Laurin Ostermann, and Helmut Ritsch. Quantumoptics. jl: A julia framework for simulating open quantum systems. Computer Physics Communications, 227:109–116, 2018.
* [361] Huo Chen and Daniel A. Lidar. Hamiltonian open quantum system toolkit. Communications Physics, 5(1):112, May 2022.
* [362] Ryan LaRose, Andrea Mari, Sarah Kaiser, Peter J. Karalekas, Andre A. Alves, Piotr Czarnik, Mohamed El Mandouh, Max H. Gordon, Yousef Hindy, Aaron Robertson, Purva Thakre, Nathan Shammah, and William J. Zeng. Mitiq: A software package for error mitigation on noisy quantum computers. arXiv:2009.04417 [quant-ph], August 2021. arXiv: 2009.04417.
* [363] Daniel Benjamin Criger. Practical Advances in Quantum Error Correction & Communication. D.Phil, University of Waterloo, Canada, 2013.
* [364] Scott Aaronson and Daniel Gottesman. Improved Simulation of Stabilizer Circuits. Physical Review A, 70(5):052328, November 2004. arXiv: quant-ph/0406196.
* [365] Thien Nguyen, Lindsay Bassman, Phillip C. Lotshaw, Dmitry Lyakh, Alexander McCaskey, Vicente Leyton-Ortega, Raphael Pooser, Wael Elwasif, Travis S. Humble, and Wibe A. de Jong. QuaSiMo: A Composable Library to Program Hybrid Workflows for Quantum Simulation. IET Quantum Communication, 2(4):160–170, December 2021. arXiv: 2105.07993.
* [366] Tyson Jones, Anna Brown, Ian Bush, and Simon Benjamin. QuEST and High Performance Simulation of Quantum Computers. Scientific Reports, 9(1):10736, December 2019. arXiv: 1802.08032.
* [367] U-J Wiese. Ultracold quantum gases and lattice systems: quantum simulation of lattice gauge theories. Annalen der Physik, 525(10-11):777–796, 2013.
* [368] Henry Lamm, Scott Lawrence, Yukari Yamauchi, NuQS Collaboration, et al. General methods for digital quantum simulation of gauge theories. Physical Review D, 100(3):034518, 2019.
* [369] John Preskill. Simulating quantum field theory with a quantum computer. arXiv preprint arXiv:1811.10085, 2018.
* [370] Natalie Klco, Alessandro Roggero, and Martin J Savage. Standard model physics and the digital quantum revolution: thoughts about the interface. Reports on Progress in Physics, 2022.
* [371] Erik Gustafson, Yannick Meurice, and Judah Unmuth-Yockey. Quantum simulation of scattering in the quantum ising model. Physical Review D, 99(9):094503, 2019. |
# Compositional Models for Estimating Causal Effects
Purva Pruthi David Jensen
College of Information and Computer Sciences,
University of Massachusetts Amherst
<EMAIL_ADDRESS>
###### Abstract
Many real-world systems can be represented as sets of interacting components.
Examples of such systems include computational systems such as query
processors, natural systems such as cells, and social systems such as
families. Many approaches have been proposed in traditional (associational)
machine learning to model such structured systems, including statistical
relational models and graph neural networks. Despite this prior work, existing
approaches to estimating causal effects typically treat such systems as single
units, represent them with a fixed set of variables, and assume a homogeneous
data-generating process. We study a compositional approach for estimating
individual treatment effects (ITE) in structured systems, where each unit is
represented by the composition of multiple heterogeneous components. This
approach uses a modular architecture to model potential outcomes for each
component and aggregates component-level potential outcomes to obtain the
unit-level potential outcomes. We discover novel benefits of the compositional
approach in causal inference — systematic generalization to estimate
counterfactual outcomes of unseen combinations of components and improved
overlap guarantees between treatment and control groups compared to the
classical methods for causal effect estimation. We also introduce a set of
novel environments for empirically evaluating the compositional approach and
demonstrate the effectiveness of our approach using simulated and real-world
data.
## 1 Introduction
Causal inference is central to empirical research and scientific discovery.
Inferring causal effects from observational data is an important problem in
many fields of science, such as medicine, economics, and education [Morgan and
Winship, 2015]. Many scientific and engineering challenges require
understanding treatment effect heterogeneity, including personalized medicine
[Curth et al., 2024] and custom online advertising [Bottou et al., 2013].
Existing approaches for causal effect estimation usually assume that each unit
of study is represented by a fixed set of features sampled from a data-
generating process that is homogeneous across all the units in the population,
known as the unit homogeneity assumption [Holland, 1986]. However, many real-world
systems are modular, i.e., they decompose into heterogeneous functional
components that interact to produce system behavior [Callebaut and Rasskin-
Gutman, 2005, Johnson and Ahn, 2017]. Input data to such systems is often
structured and of variable size, reflecting the underlying modular structure
of the system. Examples of structured inputs processed by modular systems
include DNA sequences processed by cells, computer programs processed by
compilers, and natural language queries processed by language models.
Estimating heterogeneous treatment effects on complex real-world variable-size
structured inputs is an important problem, especially as the complexity of
modern technological systems increases.
To provide a simple and concrete example of the type of causal inference
problem that we focus on in this paper, consider the following example of an
arithmetic computation system consisting of addition, subtraction,
multiplication, and division modules (see Figure 1(a)). The system takes
arithmetic expressions as input — e.g., $((1+2)*(5-3))+(10/5)$ — and returns
the value of the expression as output (e.g., $8$). In this example, input
expressions are structured units of a “compositional” nature, i.e., they
comprise multiple component operations that can combine to generate new units
in multiple ways. These kinds of inputs can be represented as hierarchical
graphs, e.g., parse-trees, where each node is an operation and edges represent
the information flow between the components. Given such a system, consider the
task of modeling the causal effect of different memory sizes on the processing
time of different arithmetic expressions. This problem can be formulated as
estimating the individual-level effect.111Individual-level effect estimation
closely related to conditional average treatment effect estimation and
heterogeneous treatment effect estimation in the causal inference literature.
In the terminology of causal inference, each arithmetic expression is a unit
of analysis, the features of the arithmetic expression are pre-treatment
covariates, memory size is the intervention, and processing time is the
potential outcome [Rubin, 1974, 2005].
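To make this representation concrete, the sketch below shows one way such a structured unit could be encoded in Python; the class and field names (ComponentNode, Unit, the per-node outcome field) are illustrative assumptions, not the paper’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentNode:
    op: str                                # component type, e.g., "add", "div"
    features: list                         # component-level covariates x_ij
    children: list = field(default_factory=list)
    outcome: float = 0.0                   # fine-grained outcome y_ij, if logged (an assumption)

@dataclass
class Unit:
    root: ComponentNode                    # interaction structure G_i (a parse tree)
    treatment: int                         # unit-level treatment T_i, e.g., memory size
    outcome: float                         # observed unit-level outcome Y_i(T_i), e.g., runtime

# ((1+2)*(5-3))+(10/5) as a tree of components; operand values serve as features
expr = ComponentNode("add", [6.0, 2.0], [
    ComponentNode("mul", [3.0, 2.0], [
        ComponentNode("add", [1.0, 2.0]),
        ComponentNode("sub", [5.0, 3.0]),
    ]),
    ComponentNode("div", [10.0, 5.0]),
])
unit = Unit(root=expr, treatment=1, outcome=8.3)  # runtime in ms (made-up value)
```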
The standard approaches to heterogeneous treatment effect estimation [Hill,
2011, Athey and Imbens, 2016, Wager and Athey, 2018, Chernozhukov et al.,
2018] usually represent each unit using a fixed-size feature vector. For
example, in the case of arithmetic expressions, we can use the number of
operations in each unit and operand values as covariates and estimate the
individual-level treatment effect by conditioning on these features. However,
using a fixed-size representation for compositional units such as the arithmetic
expressions above poses several estimation challenges: (1) as the structure
and complexity of each unit varies, estimating effects at the unit level
requires reasoning about the similarity among heterogeneous units in a high-
dimensional space; (2) because each unit has an instance-specific composition of
the basic operations, representing all units with the same features leads
to sparse feature representations and aggregates the features of multiple
occurrences of each operation; and (3) this representation does not exploit the
compositionality of the units, so each new unit with an unseen combination of
the component operations requires reasoning from scratch.
We propose a compositional approach to causal effect estimation for structured
units represented as hierarchical graphs. This approach constructs an
instance-specific causal model with a modular architecture representing the
components for each unit and estimates the unit-level intervention’s effects
at the component level. By exploiting fine-grained information about the
structure of modular systems, such as execution traces in software programs,
query plans in databases, and log data in monitoring systems, the
compositional approach takes advantage of detailed information about the
system’s structure and behavior, which often remain unused. The compositional
approach decomposes causal queries into more fine-grained queries, focusing on
how unit-level interventions affect component-level outcomes to produce the
overall unit’s outcome. This framing offers benefits such as improved sample
efficiency, better overlap between treatment and control groups, enhanced out-
of-distribution effect estimation on units with unseen combinations of
components, causal effect estimation for realistic interventions that involve
adding, removing, or replacing modules in the system, and scalable causal
effect estimation for variable-length units without facing the curse of
dimensionality. These potential benefits make the compositional approach
promising for causal effect estimation in complex, modular systems.
Despite these potential benefits, learning compositional models for effect
estimation has pitfalls, including a larger number of parameters to estimate,
sensitivity to errors in individual components, and errors in modeling
component interactions. In this paper, we investigate the conditions under
which the compositional approach provides benefits over standard approaches.
Our findings indicate that compositional models provide better estimates of
individual treatment effects as overlap issues increase and offer systematic
generalization benefits on out-of-distribution units, particularly when the
underlying system comprises multiple heterogeneous components. Specifically,
we:
Formalize the compositional approach to causal effect estimation: We formalize
causal effect estimation for structured units, outline possible types of
compositions of potential outcomes in real-world examples, provide algorithms
to learn compositional models for different composition types, and discuss the
assumptions required to identify individual treatment effects from
observational data using the compositional approach.
Analyze the theoretical benefits of compositional models: We use the
generalization bounds for individual-level treatment effect estimation [Shalit
et al., 2017] to decompose the compositional model’s generalization error into
factual and counterfactual errors of the component models. We discuss the
assumptions of better component-level overlap and the existence of
heterogeneous components with independent mechanisms, under which
compositionality leads to better estimation of factual and counterfactual
errors, resulting in improved generalization performance.
Propose a set of real-world evaluation environments: We propose a set of novel
real-world evaluation environments to evaluate the compositional approach,
including query execution in relational databases for different memory sizes
and matrix processing on different types of computer hardware. We evaluate the
performance of the compositional approach compared to existing approaches on
both synthetic and real-world data sets.
Figure 1: Overview of key ideas: (a) System: An example arithmetic system that
takes structured expressions (units) as input, returns values as output.
Runtime (potential outcome) is observed for each expression for a given memory
level (treatment). (b) Data: Fixed-size data includes high-dimensional
covariates and treatment for each unit. In contrast, compositional data
consists of lower-dimensional component-specific covariates and treatment,
possibly with multiple samples per unit. (c) Training: The “unitary approach”
uses fixed-size data to estimate unit-level potential outcomes. The
compositional model uses compositional data to estimate component-level
potential outcomes, aggregating them to estimate unit-level outcomes. (d)
Inference: For a novel unit (possibly with unseen component combinations), the
compositional approach instantiates an instance-specific model with modular
architecture similar to the interaction structure of the components.
Other real-world use cases for the compositional approach to reason about
interventions’ effects and make informed, personalized decisions are detailed
in the supplementary material (Section B).
## 2 Related Work
We briefly discuss the connections of the compositional approach with the
existing work in causal inference and associational machine learning.
Causal inference in structured domains: In causal inference, a relatively
sparse body of work has focused on treatment effect estimation on structured
data in modular domains [Gelman and Hill, 2006, Salimi et al., 2020, Kaddour
et al., 2021]. For example, existing work in multi-level modeling and
hierarchical causal models [Gelman and Hill, 2006, Witty and Jensen, 2018,
Weinstein and Blei, 2024] leverages hierarchical data structure to improve
effect estimation under unobserved confounders. There is also growing interest
in heterogeneous effect estimation for complex data, such as images [Jerzak et
al., 2022], structured treatments (e.g., graphs, images, text, drugs) [Harada
and Kashima, 2021, Kaddour et al., 2021], and relational data [Salimi et al.,
2020, Khatami et al., 2024]. The compositional approach complements this line
of research by providing fine-grained analysis of individual effect estimation
on structured units and using modular architectures for variable-size
compositional data, offering systematic generalization benefits for effect
estimation tasks. Also, our focus lies in the structured and compositional
representation of entire units rather than only treatments, which helps better
estimate causal effects in the case of high-dimensional observational data.
Other related work is in the fine-grained analysis of the potential outcomes
to study the validity of synthetic control methods with panel data [Shi et
al., 2022].
Compositional models in associational machine learning: Our work is inspired
by research on compositional models in machine learning that exploit the
structure of underlying domains and explicitly represent it in the model
structure [Heckerman and Wellman, 1995, Koller and Pfeffer, 1997, Friedman et
al., 1999, Getoor and Taskar, 2007, Taskar et al., 2005, Laskey, 2008]. The
closest work to our proposed compositional models is the use of recursive
neural networks [Socher et al., 2011] and modular neural networks [Andreas et
al., 2016, Marcus and Papaemmanouil, 2019] in vision and language domains.
However, most of the work in machine learning focuses on understanding the
systematic generalization and sample efficiency benefits of compositional
models for prediction tasks, while their role in reasoning about intervention
effects is unexplored [Lake and Baroni, 2018, Hupkes et al., 2020, Wiedemer et
al., 2024]. Our work addresses this gap.
## 3 Compositional Approach for Causal Effect Estimation
In this section, we introduce a compositional representation of structured
units and potential outcomes and provide an algorithm to estimate individual
treatment effects for the structured units from the observational data using
compositional models.
Preliminaries: Let us assume that for a unit $i$ with pre-treatment covariates
$X_{i}=x\in\mathcal{X}\subset\mathbb{R}^{d}$ and a binary treatment
$T_{i}\in\\{0,1\\}$, there are two potential outcomes
$\\{Y_{i}(0),Y_{i}(1)\\}\in\mathcal{Y}\subset\mathbb{R}$ [Rubin, 1974, 2005].
In the observational data, we only observe one of the potential outcomes for
each unit, depending on the treatment assignment. We refer to
$Y_{i}=Y_{i}(T_{i})$ as the observed/factual outcome and
${Y_{i}}_{CF}=Y_{i}(1-T_{i})$ as the unobserved/counterfactual outcome.
Individual treatment effect (ITE) is defined as
$\tau(x):=\mathbb{E}[Y_{i}(1)-Y_{i}(0)|X_{i}=x]$. Estimating ITE requires
assumptions of unconfoundedness, overlap, and consistency [Rosenbaum and
Rubin, 1983]. Under these assumptions, $\tau(x)$ is identifiable by
$\tau(x)=\mathbb{E}[{Y_{i}|X_{i}=x,t=1}]-\mathbb{E}[{Y_{i}|X_{i}=x,t=0}]$
[Pearl, 2009]. The general strategy to estimate ITE is to directly estimate
the conditional expectations of the outcomes using a single model with
treatment as a feature or by fitting two separate regression models [Künzel et
al., 2019]. Other approaches include propensity score-based adjustments and
doubly robust methods [Kennedy, 2023]. We illustrate the compositional
approach by directly estimating the potential outcomes. We use the term
“unitary models” to denote non-compositional approaches that do not consider
the underlying structure and instead use a fixed-size representation.
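As a point of reference, here is a minimal sketch of the two-regression strategy on fixed-size covariates (essentially a T-learner); the data-generating process is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # fixed-size covariates
T = rng.integers(0, 2, size=1000)                # binary treatment
Y = X[:, 0] + T * (1.0 + X[:, 1]) + rng.normal(scale=0.1, size=1000)

# Fit mu_t(x) = E[Y | X=x, T=t] separately on each treatment arm
mu0 = RandomForestRegressor(random_state=0).fit(X[T == 0], Y[T == 0])
mu1 = RandomForestRegressor(random_state=0).fit(X[T == 1], Y[T == 1])

tau_hat = mu1.predict(X) - mu0.predict(X)        # ITE estimate per unit
# Ground-truth ITE in this synthetic setup is 1 + X[:, 1]
```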
Compositional representation of the units and potential outcomes: We adopt a
system view to describe how the units of analysis can be decomposed and
represented using a small set of basic components. Consider a modular system
with $k$ heterogeneous components $\\{C_{1},C_{2},\dots C_{k}\\}$. The units
share this set of reusable components (See Figure 1 for the system summary).
Each structured input $Q_{i}$ to the system can be represented as a tuple
$(G_{i},\\{\mathbf{X}_{ij}\\}_{j=1:m_{i}})$ where $G_{i}$ is a tree-like
hierarchical graph representing the instance-specific interaction among
components, $\mathbf{X}_{ij}\in\mathbb{R}^{d_{j}}$ are input features to the
$j^{th}$ component and $m_{i}$ is the number of components involved. More
specifically, the graph $G_{i}$ represents the order in which the $m_{i}$
components process the structured unit, which variables $\mathbf{X}_{ij}$ are
passed as an input to each component and how variables are shared among the
components. Note that $m_{i}$ can be greater than the number of distinct
components $k$ in the system, indicating the presence of multiple instances of
each component type to represent each data instance. The number and kind of
components required to process each input are specific to each unit. As an
alternative to the compositional representation, a structured unit can also be
represented using a fixed-size representation in the form of a single high-
dimensional feature vector, $\mathbf{X}_{i}\in\mathbb{R}^{d}$ that represents
the aggregation of the component level input features
$\\{\mathbf{X}_{ij}\\}_{j=1}^{m}$. For example, see Figure 1(b) for the
example fixed-size and compositional data representations. Examples of
aggregation functions include concatenating the input features of distinct
components and summing the input features across multiple instances of the same
component. We assume that a treatment $T_{i}$ is selected for each unit, which
can affect the potential outcomes of some or all components using different
mechanisms. For instance, in an arithmetic system, memory size can affect the
execution time of some or all operations using separate mechanisms. Although
component-level treatments that only affect one type of component can also be
selected, we restrict our focus to unit-level treatments in this work to
compare the compositional approach with non-compositional (unitary)
approaches. Let $Y_{i}(t)\in\mathbb{R}$ denote the unit-level potential
outcome under treatment $t$ for a unit $Q_{i}$, and let
$\\{Y_{ij}(0),Y_{ij}(1)\\}_{j=1:m_{i}}$ denote the fine-grained component-
level potential outcomes.
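A small sketch of how compositional data could be extracted from such units, reusing the hypothetical ComponentNode/Unit classes sketched earlier; the per-node outcome field is an assumption about what the system logs.

```python
from collections import defaultdict

def collect_component_rows(node, treatment, per_component):
    """Flatten a unit's tree into component-level rows (x_ij, t_i, y_ij),
    grouped by component type, yielding one data set per component model."""
    per_component[node.op].append((node.features, treatment, node.outcome))
    for child in node.children:
        collect_component_rows(child, treatment, per_component)

datasets = defaultdict(list)
# for u in units:
#     collect_component_rows(u.root, u.treatment, datasets)
```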
Difference between system output and potential outcome: Note that the output
of the system itself and the outcome we wish to estimate can be different. For
example, in the arithmetic example, the result of the arithmetic expression is
the system’s output, but the execution time of the expression is the potential
outcome of interest. In practical applications of causal reasoning, it is
often useful to understand the effects of interventions on system behavior,
and such behavior is often represented by key performance indicators (e.g.,
latency, efficiency, cost, and accuracy [Li et al., 2010, Bottou et al.,
2013]).
We aim to estimate ITE for structured units from observational data. Due to
each unit’s varying structure and complexity, satisfying the overlap
assumption at the unit level becomes challenging when using a high-dimensional
$\mathbf{X}_{i}$ non-compositional representation of the units [D’Amour et
al., 2021]. Instead, we exploit the underlying compositionality of the system
by reasoning about the component-level potential outcomes $Y_{ij}(t)$ for
comparatively lower-dimensional component-level features
$\mathbf{X}_{ij}\in\mathbb{R}^{d_{j}}(d_{j}<d)$ as covariates and given unit-
level intervention $T_{i}=t$. The lower-dimensional representation of the
component-level features compared to the unit-level features is generally true
for most systems, as not all the unit-level features are relevant to compute
the outcome of each component.
Types of composition: Parallel, sequential, and hierarchical: The composition
of component-level potential outcomes to generate the unit-level potential
outcome depends on the specific outcome, intervention type, system
characteristics, and interaction structure $G_{i}$ of the components. We
categorize kinds of composition into parallel, sequential, and hierarchical,
based on the dependence among component-level potential outcomes. Parallel
composition assumes that the potential outcomes of each component can be
computed independently of the potential outcomes of the other components
because there is no direct interaction among the potential outcomes for the
components. In the arithmetic example, this assumes that the processing time
of one arithmetic operation under a memory level can be assumed to be
conditionally independent of the processing times of the other operations,
given the input features of that component and shared treatment. This
composition is similar to spatial composition in vision and reinforcement
learning [Higgins et al., 2017, Van Niekerk et al., 2019]. A special case is
additive parallel composition, where the composition function is addition.
Sequential composition assumes that the potential outcomes of components have
chain-like causal dependencies, where a component’s potential outcome depends
on the values of other components’ potential outcomes, similar to the chained
composition of policies in reinforcement learning [Sutton et al., 1999].
Hierarchical composition assumes that some potential outcomes can be computed
independently while others have sequential dependencies. We assume that the
instance-specific interaction structure $G_{i}$ among the components defines
the structure of the hierarchical composition and is known.
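The three composition types can be stated as simple aggregation schemes; the toy functions below are a sketch of the distinction, not the system-specific composition functions themselves.

```python
def parallel_additive(component_outcomes):
    # parallel: each y_ij is computed independently; the unit outcome is their sum
    return sum(component_outcomes)

def sequential(initial_value, component_fns):
    # sequential: chain-like dependence; each component consumes the previous outcome
    y = initial_value
    for f in component_fns:
        y = f(y)
    return y

def hierarchical(node, component_fns):
    # hierarchical: a node's outcome depends on its children's outcomes (post-order)
    child_outcomes = [hierarchical(c, component_fns) for c in node.children]
    return component_fns[node.op](node.features, child_outcomes)
```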
Composition models for individual treatment effect estimation: We briefly
describe the model training and inference for two kinds of composition models
— (1) parallel composition model and (2) hierarchical composition model.
Detailed model descriptions and algorithms for training and inference are
provided in the supplementary material (Algorithms 1–4).
See Figure 1(c) and (d) for the general description of model training and
inference for compositional models. The additive parallel composition model
estimates ITE using fine-grained potential outcomes with independently trained
component models $\hat{f}_{\theta_{j}}$. During inference, component-level
potential outcomes are aggregated, assuming additive composition to estimate
unit-level outcomes, encoding conditional independence of component-level
outcomes given their causes. The hierarchical composition model accounts for
direct effects among component potential outcomes, with component models
trained jointly end-to-end based on the interaction structure $G_{i}$.
Potential outcomes are computed in post-order traversal, and ITE is estimated
using the last component’s outcome (see Figure 1 (d) for an example). When
only unit-level outcomes are observed, a version of the hierarchical model can
be trained, assuming access to only component-level features and the
interaction graph. We demonstrate in our experiments that hierarchical models
with unit-level outcomes achieve comparable performance to models with access
to fine-grained outcomes.
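For concreteness, here is a minimal sketch of the additive parallel model under these assumptions: one regressor per component type trained on component-level rows, with predictions summed over a unit’s tree at inference. It reuses the hypothetical `datasets` structure sketched above and is not the paper’s exact implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_component_models(datasets):
    # One model f_theta_j per component type, with treatment appended as a feature
    models = {}
    for op, rows in datasets.items():
        X = np.array([features + [t] for features, t, _ in rows])
        y = np.array([y_ij for _, _, y_ij in rows])
        models[op] = RandomForestRegressor(random_state=0).fit(X, y)
    return models

def predict_unit_outcome(root, t, models):
    # Sum predicted component-level potential outcomes over the unit's tree
    total, stack = 0.0, [root]
    while stack:
        node = stack.pop()
        total += models[node.op].predict([node.features + [t]])[0]
        stack.extend(node.children)
    return total

# ITE estimate for one unit: difference of the predicted potential outcomes
# tau_hat = predict_unit_outcome(unit.root, 1, models) - predict_unit_outcome(unit.root, 0, models)
```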
## 4 Theoretical Analysis
###### Theorem 4.1.
The CATE estimand for a structured unit $Q_{i}=q$ in case of additive parallel
composition is equal to the additive composition of the component-level CATE
estimands and is identified by the following:
$\tau(q)=\sum_{j=1}^{m_{i}}\left(\mathbb{E}[y_{ij}|\mathbf{x}_{ij},t=1]-\mathbb{E}[y_{ij}|\mathbf{x}_{ij},t=0]\right)$,
if we assume that unconfoundedness (G), overlap (H), and consistency (I) hold
at the component level.
The proof is provided in the supplementary material (D.1). The theorem implies
that if effects are identified at the component level and can be computed
independently, then unit-level effects can be estimated using the sum of
component-level effects. This result allows us to decompose the compositional
model’s error into the component model’s errors, as we demonstrate in the next
section.
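For intuition, the core of the argument can be written in a short chain of equalities (a sketch under the stated component-level assumptions, not the full proof in D.1):

```latex
\tau(q) \;=\; \mathbb{E}\Big[\sum_{j=1}^{m_i} Y_{ij}(1)-\sum_{j=1}^{m_i} Y_{ij}(0)\,\Big|\,Q_i=q\Big]
        \;=\; \sum_{j=1}^{m_i}\mathbb{E}\big[Y_{ij}(1)-Y_{ij}(0)\,\big|\,\mathbf{x}_{ij}\big]
        \;=\; \sum_{j=1}^{m_i}\Big(\mathbb{E}[y_{ij}\,|\,\mathbf{x}_{ij},t=1]-\mathbb{E}[y_{ij}\,|\,\mathbf{x}_{ij},t=0]\Big),
```

where the first equality uses additive parallel composition, the second uses linearity of expectation, and the third uses component-level unconfoundedness and consistency.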
Decomposition of the generalization error of the additive parallel
compositional model:
The treatment effect estimate of the additive model $\hat{f}_{add}$ for unit $q$
is $\hat{\tau}_{\hat{f}_{add}}(q)=\hat{f}_{add}(q,1)-\hat{f}_{add}(q,0)$. We
use precision in the estimation of heterogeneous effect (PEHE) loss [Hill,
2011], which is defined by the mean squared error difference in the estimated
effect and the ground truth effect:
$\epsilon_{PEHE}(\hat{f})=\mathbb{E}[(\hat{\tau}_{\hat{f}}(q)-\tau(q))^{2}]$.
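In code, the reported metric is straightforward (a sketch; `tau_true` is available in our setup because both potential outcomes are observed in the experimental data):

```python
import numpy as np

def root_pehe(tau_true, tau_hat):
    """Empirical square root of PEHE on a held-out set."""
    tau_true, tau_hat = np.asarray(tau_true), np.asarray(tau_hat)
    return float(np.sqrt(np.mean((tau_hat - tau_true) ** 2)))
```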
Using the results of Theorem 4.1, it can be shown that the error of
the additive parallel compositional model can be decomposed into the sum of
the errors of the individual components plus pairwise covariances between the
errors of the component models, similar to the generalization error analysis
of ensemble models [Ueda and Nakano, 1996]. Bounding each covariance term via
the Cauchy–Schwarz inequality gives the upper bound in Equation 1 below. This
decomposition implies that if the data-generating processes of the component
potential-outcome functions are very similar, then the errors of the component
models will be highly correlated and will aggregate; the more heterogeneous the
components are, the greater the benefits of the compositional approach.
$\epsilon_{PEHE}(f_{add})\leq\sum_{j=1}^{m_{i}}{\epsilon_{PEHE}(\hat{f}_{\theta_{j}})}+\sum_{j}\sum_{k,k\neq
j}\sqrt{{\epsilon_{PEHE}(\hat{f}_{\theta_{j}})}}\sqrt{{\epsilon_{PEHE}(\hat{f}_{\theta_{k}})}}$
(1)
Decomposition of error into factual and counterfactual errors: The factual
$(\epsilon_{F})$ and counterfactual $(\epsilon_{CF})$ errors are defined as
$\epsilon_{F}(\hat{f})=\mathbb{E}[(\hat{f}(q,t)-y)^{2}]$ and
$\epsilon_{CF}(\hat{f})=\mathbb{E}[(\hat{f}(q,1-t)-y_{CF})^{2}]$. Similarly,
factual and counterfactual errors for the treatment and control population are
denoted as $\epsilon^{t=0}_{F}$, $\epsilon^{t=1}_{F}$, $\epsilon^{t=0}_{CF}$,
and $\epsilon^{t=1}_{CF}$.
Previous work [Shalit et al., 2017] provides generalization error bounds
for ITE estimators that decompose PEHE into the sum of factual
and counterfactual errors. This work also shows that the counterfactual error
can be upper bounded by the sum of factual error and distribution mismatch
between treatment $P(X=x|T=0)$ and control populations $P(X=x|T=1)$. Let us
assume that $D$ denotes the metric to measure the distribution mismatch
between the control and treatment populations, e.g., the integral probability
metric distance, and $\alpha$ is a normalization constant for a metric to be
well-defined. If we assume that the ground-truth potential-outcome functions
for the components are independent [Peters et al., 2017], then the PEHE error
of the additive model reduces to the sum of the PEHE errors of individual
components in equation 4. In that case, we get the following upper bound for
the error of the additive parallel model.
$\epsilon_{PEHE}(f_{add})\leq\sum_{j=1}^{m_{i}}\underbrace{{\epsilon_{j}}_{F}^{t=1}+{\epsilon_{j}}_{F}^{t=0}}_{factual\\_error\\_j}+\underbrace{\alpha
D(p^{t=1}_{\mathbf{x_{j}}},p^{t=0}_{\mathbf{x_{j}}})}_{distribution\\_mismatch\\_j}$
(2)
This decomposition allows us to systematically understand the reasons behind
the advantages of additive parallel composition models, as discussed below.
Better estimation of the factual outcomes: Various factors are responsible for
the improved estimation of the factual outcomes in the compositional model
(first term in the decomposition) — (1) Reduced dimensionality of the
component-level features as compared to the dimensionality of the high-level
representation of the input, which holds for most of the modular systems; (2)
Greater availability of samples at the component level due to the multiple
occurrences of the components; (3) More homogeneous data distribution of
covariates at the component level; and (4) Simpler outcome functions at the
component level as compared to the unit-level. Better sample efficiency
benefits of the modular model architectures for prediction tasks are also
discussed in the prior work [Boopathy et al., 2024].
Better estimation of the counterfactual outcomes in experimental and
observational data: In the case of experimental data or randomized controlled
data, counterfactual error mostly reduces to the factual error as there is a
zero or low distribution mismatch between treatment and control populations.
In that case, all the benefits of the compositional model in estimating
factual outcomes apply to counterfactual outcome estimation. In the case of
observational data, if we assume the reduced dimensionality of the component-
level covariates, then the distribution mismatch between the control and
treated population is lower at the component level than the high-dimensional
covariate distribution for the unit. This allows better satisfaction of the
positivity assumption [D’Amour et al., 2021]. The compositional approach also
allows for the estimation of causal effects on different distributions of
units with the unseen combination of the components. This benefit expands the
possible interventions for adding, removing, or replacing components.
Figure 2: Results on synthetic data (5000 samples) with variable structure of
the units, with multiple instances of each module in each structured unit, in
the case of additive parallel composition of the potential outcomes. We report
$\sqrt{\epsilon_{PEHE}}$ as the strength of confounding bias increases.
## 5 Experiments
Modeling effects for compositional data is a novel task lacking real-world
benchmark data sets. We evaluate models on synthetic data (Section 5.1) and
introduce query execution and matrix operation benchmarks (Section 5.2). Data
and code will be provided upon publication.
Compositional Models: We implement three compositional models based on the
composition type of potential outcomes, independent or joint training of
components, and access to fine-grained potential outcomes. The additive
parallel (all outcomes) model is only applied to compositional data with
additive parallel potential outcomes, assuming access to fine-grained
potential outcomes (denoted as AO, abbreviated for all outcomes), and is
implemented using a random forest and a fully connected neural network. The
hierarchical (all outcomes) model’s structure is similar to the interaction
graph of structured unit implemented as TreeLSTM [Tai et al., 2015], assumes
separate parameters for each component, and jointly trains the models end-to-
end, assuming access to fine-grained potential outcomes for individual
component loss computation. The hierarchical (single outcome) model assumes
access to only unit-level potential outcomes.
Baselines: We compare the performance of the compositional models with four
types of baselines, selecting one or two representative estimators from each:
(1) TNet, a neural network-based ITE estimator [Curth and Van der Schaar,
2021]; (2) X-learner, a meta learner that uses plug-in estimators to compute
ITE, with random forest as the base model class [Künzel et al., 2019]; (3)
Non-parametric Double ML [Chernozhukov et al., 2018]; and (4) Vanilla neural
network and random forest-based outcome regression models. Additional details
about the baselines are provided in the supplementary material.
Creation of observational data sets: All real-world data sets are experimental
data collected from real-world computational systems (databases and computer
programs) where we observe potential outcomes for both treatments.
Observational data sets are created from experimental data of real-world
computational systems by introducing confounding bias [Gentzel et al., 2021].
High-dimensional covariates are selected as biasing covariates for non-random
treatment sampling. Unconfoundedness is satisfied as biasing covariates are
observed. Treatment assignment dependence on biasing covariates varies between
0.1 and 0.9, creating overlap issues. Higher “bias strength” indicates higher
treatment probability for certain biasing covariate values.
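A minimal sketch of this biasing procedure, in the spirit of Gentzel et al. [2021]; the function name and the logistic form of the assignment probability are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_treatment(z, bias_strength):
    """Sample treatment as a function of a biasing covariate z.

    Higher bias_strength makes assignment depend more strongly on z;
    probabilities are clipped to the stated [0.1, 0.9] range, which
    creates overlap issues without violating positivity outright.
    """
    z_std = (z - z.mean()) / (z.std() + 1e-8)
    p = 1.0 / (1.0 + np.exp(-bias_strength * z_std))
    p = np.clip(p, 0.1, 0.9)
    return rng.binomial(1, p)

# The observed outcome is Y(1) where T=1 and Y(0) where T=0; the unused
# potential outcome is held out as the ground-truth counterfactual.
```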
Table 1: Synthetic data results: We report $\sqrt{\epsilon_{PEHE}}$ across various settings: unit structure (fixed/variable), composition types (parallel/hierarchical PO), bias strength (experimental/observational), and test data distribution (WID/OOD). Difficulty in estimating ITE increases from left to right. All outcomes (AO) models assume access to fine-grained potential outcomes, and single outcomes (SO) models use only unit-level outcomes. Additive parallel models can’t estimate ITE in hierarchical PO settings. The performance advantage of compositional models becomes more evident in variable structure settings, while TNet and vanilla NN are competitive in fixed structure and parallel PO settings. Scores are normalized by average effect size, where lower is better.

| Model | Fixed, Parallel PO, bias=0 | Fixed, Parallel PO, bias=10 | Fixed, Hierarchical PO, bias=0 | Fixed, Hierarchical PO, bias=10 | Variable, Parallel PO, WID | Variable, Parallel PO, OOD | Variable, Hierarchical PO, WID | Variable, Hierarchical PO, OOD |
|---|---|---|---|---|---|---|---|---|
| Additive Parallel (AO) | 0.09 | 0.09 | $-$ | $-$ | 0.12 | 0.13 | $-$ | $-$ |
| Hierarchical (AO) | 0.37 | 0.40 | 0.21 | 0.19 | 0.44 | 0.64 | 0.66 | 1.94 |
| Hierarchical (SO) | 0.90 | 1.22 | 0.38 | 0.44 | 1.12 | 1.44 | 0.75 | 1.98 |
| TNet (SO) | 0.16 | 0.76 | 0.78 | 0.87 | 1.25 | 1.52 | 1.13 | 1.79 |
| X-Learner (SO) | 0.62 | 1.97 | 0.66 | 0.75 | 1.82 | 1.81 | 1.66 | 2.24 |
| Double ML (SO) | 0.73 | 9.6 | 1.94 | 3.64 | 12.88 | 16.41 | 6.64 | 3.35 |
| Random Forest (SO) | 0.89 | 3.71 | 0.72 | 0.84 | 3.82 | 3.72 | 1.4 | 2.10 |
| Neural network (SO) | 0.27 | 0.63 | 0.71 | 0.72 | 0.79 | 0.97 | 1.69 | 2.32 |
### 5.1 Synthetic Data
We generate data sets with varying characteristics to test model performance
for units with different structures and composition functions. Structured
units are generated by sampling binary trees (max depth=10) with $k$=$10$
heterogeneous modules, each having $d_{j}$=$6$ features ($d$=$60$ total). The
total sum of features of all components is used as a biasing covariate. Data
sets vary in unit’s structure: fixed structure (each unit has exactly $k$
modules appearing once) vs. variable structure (multiple occurrences of
modules per unit, variable number of distinct modules per unit). Composition
types include additive parallel composition and hierarchical composition. Bias
strength is varied from 0 (experimental) to 10 (observational). Results for
the synthetic data experiments can be seen in Table 1 and Figure 2. Key
findings include:
(1) Fixed structure vs. variable structure of units: In Table 1, we observe
that the difference between the performance of the composition models (both
parallel and hierarchical) and the competitive baselines (e.g., TNet, Neural
Network) increases as we move from fixed structure to variable structure
setting. For example, baselines TNet and Neural network are competitive to the
compositional approaches in the case of fixed structure and parallel
composition setting (first column in the table). This is because, in a
variable structure setting, as the heterogeneity of the units increases, the
fine-grained modeling of potential outcomes leads to better performance.
(2) Composition type: Encoding composition structure in model architecture
improves effect estimation, especially when model architecture
(parallel/hierarchical) matches the underlying composition type (parallel
PO/hierarchical PO). The single-outcome hierarchical model, with only
interaction structure access, is competitive with the hierarchical all-
outcomes model. We observe that the error of non-compositional baselines
increases as we move from parallel to hierarchical composition type (e.g.,
TNet’s error increases from 0.16 (column 1) to 0.78 (column 3) as we move from
parallel composition to hierarchical composition, keeping everything else the
same (structure and bias strength)).
(3) Bias strength: In Figure 2 (a) and (b), we show the performance of the
models as bias strength increases, in the case of variable structure and
parallel composition type. Compositional models outperform baselines (left
figure) and are more sample-efficient as bias strength increases (right
figure). Neural network-based models (Hierarchical, parallel, TNet, Neural
Network) are less affected by increasing confounding bias than other baselines
(XLearner, Random Forest, Double ML), possibly due to their ability to
estimate counterfactual outcomes even with limited overlap between treatment
and control populations in high-dimensional settings ($d=60$).
(4) Out-of-distribution (OOD) units: Compositional models perform better than
baselines on OOD units (train: tree-depth $<$ 8, test: tree-depth $\geq$ 8),
showing systematic generalization benefits in counterfactual outcome
estimation.
### 5.2 Real-world data
Query execution in relational databases: We collect real-world query execution
plans data by running 1500 SQL queries against the publicly-available Stack
Overflow database under different configurations (memory size, indexing, page
cost), treating configuration parameters as interventions and execution time
as the potential outcome. The query plans include SQL operations like scans,
joins, aggregates, and sorts as component operations. Additive parallel
composition is ensured for the execution time by disabling parallelization.
Results for ITE estimation on the query execution data set are shown in Figure 3(a).
Our findings include: (1) the additive parallel model estimates the effects
more accurately than the vanilla random forest, NN, and TNet
baselines as overlap issues increase; (2) random forest models outperform
neural network-based models due to the smaller sample size and the
stochasticity of execution times. For some queries, the query execution system returns query
plans with modified structures for treatment and control. In such cases, the
effect is calculated assuming the corresponding structure for each treatment
value. Due to this reason, we could not test baselines that do not provide
counterfactual outcomes and only provide the effect estimates (e.g.,
X-learner, Double ML). More details about handling modified query plans are
included in the supplementary material.
Figure 3: Results for real-world data sets: (a) Query execution data set: We
observe that the parallel additive model estimates the effect more accurately
as overlap issues increase. (b) Matrix operations: All baselines perform
similarly for this data set.
Matrix operations data set: We generate a matrix operations data set by
evaluating complex matrix expressions (units) on two different computer
hardware (treatment) and store the execution time for each hardware (potential
outcome). The size of the matrices is varied from 2 to 1000, resulting in
25,000 samples. The expressions contain multiple operations, e.g., inverse,
singular value decomposition, etc. We ensure that each operation is executed
individually, ensuring parallel additive composition. Matrix size is used as a
biasing covariate to create overlap issues. Figure 3(b) shows the results for
this data set. We find that all baselines perform similarly, and compositional
models show no additional benefit, potentially because (1) the matrix
multiplication operation dominates the run time; (2) many
operations (e.g., matrix multiplication, SVD, and inverse) are similar to each
other, making the components homogeneous and coupling their mechanisms; and (3)
the matrix size (the confounder) affects both unit-level and component-level
outcomes, creating similar overlap issues at both levels. In contrast,
synthetic and query execution data have high-dimensional covariates for unit-
level outcomes, allowing better estimation with lower-dimensional component-
level covariates.
## 6 Conclusion
The compositional approach to causal effect estimation shows promise in
complex, modular systems by exploiting fine-grained information about the
systems’ structure and decomposing causal queries into more fine-grained
queries. The approach offers benefits such as improved sample efficiency,
better overlap between treatment and control groups, enhanced out-of-
distribution effect estimation, and scalable causal effect estimation for
variable-size units. Future directions of this work include expanding the
modular architectures to more general structured data with arbitrary graph
structures and understanding the theoretical benefits of modeling complex
compositions of potential outcomes.
## References
* Andreas et al. [2016] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural module networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 39–48, 2016.
* Athey and Imbens [2016] S. Athey and G. Imbens. Recursive partitioning for heterogeneous causal effects. _Proceedings of the National Academy of Sciences_ , 113(27):7353–7360, 2016.
* Boopathy et al. [2024] A. Boopathy, S. Jiang, W. Yue, J. Hwang, A. Iyer, and I. R. Fiete. Breaking neural network scaling laws with modularity, 2024. URL https://openreview.net/forum?id=unE3TZSAVZ.
* Bottou et al. [2013] L. Bottou, J. Peters, J. Quiñonero-Candela, D. X. Charles, D. M. Chickering, E. Portugaly, D. Ray, P. Simard, and E. Snelson. Counterfactual reasoning and learning systems: The example of computational advertising. _Journal of Machine Learning Research_ , 14(11), 2013.
* Callebaut and Rasskin-Gutman [2005] W. Callebaut and D. Rasskin-Gutman. _Modularity: Understanding the Development and Evolution of Natural Complex Systems_. MIT press, 2005.
* Chernozhukov et al. [2018] V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins. Double/debiased machine learning for treatment and structural parameters, 2018.
* Curth and Van der Schaar [2021] A. Curth and M. Van der Schaar. Nonparametric estimation of heterogeneous treatment effects: From theory to learning algorithms. In _International Conference on Artificial Intelligence and Statistics_ , pages 1810–1818. PMLR, 2021.
* Curth et al. [2024] A. Curth, R. W. Peck, E. McKinney, J. Weatherall, and M. van Der Schaar. Using machine learning to individualize treatment effect estimation: Challenges and opportunities. _Clinical Pharmacology & Therapeutics_, 2024.
* D’Amour et al. [2021] A. D’Amour, P. Ding, A. Feller, L. Lei, and J. Sekhon. Overlap in observational studies with high-dimensional covariates. _Journal of Econometrics_ , 221(2):644–654, 2021.
* Friedman et al. [1999] N. Friedman, L. Getoor, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In _IJCAI_ , volume 99, pages 1300–1309, 1999.
* Gelman and Hill [2006] A. Gelman and J. Hill. _Data analysis using regression and multilevel/hierarchical models_. Cambridge University Press, 2006.
* Geman et al. [1992] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. _Neural computation_ , 4(1):1–58, 1992.
* Gentzel et al. [2021] A. M. Gentzel, P. Pruthi, and D. Jensen. How and why to use experimental data to evaluate methods for observational causal inference. In _International Conference on Machine Learning_ , pages 3660–3671. PMLR, 2021.
* Getoor and Taskar [2007] L. Getoor and B. Taskar. _Introduction to statistical relational learning_. MIT press, 2007.
* Harada and Kashima [2021] S. Harada and H. Kashima. Graphite: Estimating individual effects of graph-structured treatments. In _Proceedings of the 30th ACM International Conference on Information & Knowledge Management_, pages 659–668, 2021.
* Heckerman and Wellman [1995] D. Heckerman and M. P. Wellman. Bayesian networks. _Communications of the ACM_ , 38(3):27–31, 1995.
* Higgins et al. [2017] I. Higgins, N. Sonnerat, L. Matthey, A. Pal, C. P. Burgess, M. Bosnjak, M. Shanahan, M. Botvinick, D. Hassabis, and A. Lerchner. Scan: Learning hierarchical compositional visual concepts. _arXiv preprint arXiv:1707.03389_ , 2017.
* Hill [2011] J. L. Hill. Bayesian nonparametric modeling for causal inference. _Journal of Computational and Graphical Statistics_ , 20(1):217–240, 2011.
* Holland [1986] P. W. Holland. Statistics and causal inference. _Journal of the American statistical Association_ , 81(396):945–960, 1986.
* Hupkes et al. [2020] D. Hupkes, V. Dankers, M. Mul, and E. Bruni. Compositionality decomposed: How do neural networks generalise? _Journal of Artificial Intelligence Research_ , 67:757–795, 2020.
* Jerzak et al. [2022] C. T. Jerzak, F. Johansson, and A. Daoud. Image-based treatment effect heterogeneity. _arXiv preprint arXiv:2206.06417_ , 2022.
* Johnson and Ahn [2017] S. G. Johnson and W.-k. Ahn. Causal mechanisms. _The Oxford handbook of causal reasoning_ , pages 127–146, 2017.
* Kaddour et al. [2021] J. Kaddour, Y. Zhu, Q. Liu, M. J. Kusner, and R. Silva. Causal effect inference for structured treatments. _Advances in Neural Information Processing Systems_ , 34:24841–24854, 2021.
* Kennedy [2023] E. H. Kennedy. Towards optimal doubly robust estimation of heterogeneous causal effects. _Electronic Journal of Statistics_ , 17(2):3008–3049, 2023.
* Khatami et al. [2024] S. B. Khatami, H. Parikh, H. Chen, S. Roy, and B. Salimi. Graph neural network based double machine learning estimator of network causal effects. _arXiv preprint arXiv:2403.11332_ , 2024.
* Koller and Pfeffer [1997] D. Koller and A. Pfeffer. Object-oriented bayesian networks. In _Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence_ , pages 302–313, 1997.
* Künzel et al. [2019] S. R. Künzel, J. S. Sekhon, P. J. Bickel, and B. Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. _Proceedings of the national academy of sciences_ , 116(10):4156–4165, 2019.
* Lake and Baroni [2018] B. Lake and M. Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In _International conference on machine learning_ , pages 2873–2882. PMLR, 2018.
* Laskey [2008] K. B. Laskey. Mebn: A language for first-order bayesian knowledge bases. _Artificial Intelligence_ , 172(2-3):140–178, 2008.
* Li et al. [2010] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In _Proceedings of the 19th international conference on World wide web_ , pages 661–670, 2010.
* Marcus and Papaemmanouil [2019] R. Marcus and O. Papaemmanouil. Plan-structured deep neural network models for query performance prediction. _arXiv preprint arXiv:1902.00132_ , 2019.
* Morgan and Winship [2015] S. L. Morgan and C. Winship. _Counterfactuals and causal inference_. Cambridge University Press, 2015.
* Pearl [2009] J. Pearl. _Causality_. Cambridge university press, 2009.
* Peters et al. [2017] J. Peters, D. Janzing, and B. Schölkopf. _Elements of causal inference: foundations and learning algorithms_. The MIT Press, 2017.
* Rosenbaum and Rubin [1983] P. R. Rosenbaum and D. B. Rubin. The central role of the propensity score in observational studies for causal effects. _Biometrika_ , 70(1):41–55, 1983.
* Rubin [1974] D. B. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. _Journal of educational Psychology_ , 66(5):688, 1974.
* Rubin [2005] D. B. Rubin. Causal inference using potential outcomes: Design, modeling, decisions. _Journal of the American Statistical Association_ , 100(469):322–331, 2005.
* Salimi et al. [2020] B. Salimi, H. Parikh, M. Kayali, L. Getoor, S. Roy, and D. Suciu. Causal relational learning. In _Proceedings of the 2020 ACM SIGMOD international conference on management of data_ , pages 241–256, 2020.
* Shalit et al. [2017] U. Shalit, F. D. Johansson, and D. Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In _International conference on machine learning_ , pages 3076–3085. PMLR, 2017.
* Shi et al. [2022] C. Shi, D. Sridhar, V. Misra, and D. Blei. On the assumptions of synthetic control methods. In _International Conference on Artificial Intelligence and Statistics_ , pages 7163–7175. PMLR, 2022.
* Socher et al. [2011] R. Socher, C. C. Lin, C. Manning, and A. Y. Ng. Parsing natural scenes and natural language with recursive neural networks. In _Proceedings of the 28th international conference on machine learning (ICML-11)_ , pages 129–136, 2011.
* Sutton et al. [1999] R. S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. _Artificial intelligence_ , 112(1-2):181–211, 1999.
* Tai et al. [2015] K. S. Tai, R. Socher, and C. D. Manning. Improved semantic representations from tree-structured long short-term memory networks. _arXiv preprint arXiv:1503.00075_ , 2015.
* Taskar et al. [2005] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: A large margin approach. In _Proceedings of the 22nd international conference on Machine learning_ , pages 896–903, 2005.
* Ueda and Nakano [1996] N. Ueda and R. Nakano. Generalization error of ensemble estimators. In _Proceedings of International Conference on Neural Networks (ICNN’96)_ , volume 1, pages 90–95. IEEE, 1996.
* Van Niekerk et al. [2019] B. Van Niekerk, S. James, A. Earle, and B. Rosman. Composing value functions in reinforcement learning. In _International conference on machine learning_ , pages 6401–6409. PMLR, 2019.
* Wager and Athey [2018] S. Wager and S. Athey. Estimation and inference of heterogeneous treatment effects using random forests. _Journal of the American Statistical Association_ , 113(523):1228–1242, 2018.
* Weinstein and Blei [2024] E. N. Weinstein and D. M. Blei. Hierarchical causal models. _arXiv preprint arXiv:2401.05330_ , 2024.
* Wiedemer et al. [2024] T. Wiedemer, P. Mayilvahanan, M. Bethge, and W. Brendel. Compositional generalization from first principles. _Advances in Neural Information Processing Systems_ , 36, 2024.
* Witty and Jensen [2018] S. Witty and D. Jensen. Causal graphs vs. causal programs: The case of conditional branching. In _First Conference on Probabilistic Programming (ProbProg)_ , 2018.
## Appendix A Broader Impacts
This paper presents work that aims to advance the fields of machine learning
and causal inference. There are many potential societal consequences of our
work, none of which we feel must be specifically highlighted here.
## Appendix B Other examples of structured systems with compositional data
The causal questions of interest in the compositional domain are: How do the
unit-level interventions impact the component-level outcomes to produce the
overall unit’s outcome? Many real-world phenomena require answering such
causal questions about the effect of shared interventions on different
components. We provide several real-world use cases where the compositional
approach can help reason about the effects of interventions and support
informed, personalized decisions.
* •
Compiler optimization: How do different hardware architectures affect the
compile time of different source codes? In this case, source code is the unit
of analysis consisting of multiple program modules; hardware architecture is
the unit-level intervention that can affect the compiling of different source
codes differently, and compile time is the outcome of interest.
* •
Energy efficiency optimization: How does a state-wide mandate of shifting to
more efficient electric appliances affect the monthly bill of each building in
the state? Each building can be assumed to consist of various electric
appliances, such that the intervention affects each kind of appliance
differently, which in turn changes the overall utility bill.
* •
Supply chain optimization: How is the processing time of an order affected
when a supply chain company shifts to a different supplier for various parts?
In this case, each order execution plan is the unit of analysis, consisting of
the routing of information among the parties (suppliers, manufacturers, and
distributors) specific to each order; the intervention can impact the
processing time of different parties depending on the affected parts and the
order details.
## Appendix C Composition models for individual treatment effect estimation
We first discuss the additive parallel composition model for ITE estimation
using fine-grained potential-level outcomes. See Figure 1(c) for the model
structure of the additive parallel compositional model.
### C.1 Additive Parallel Composition Model
We first discuss the simple case of additive parallel composition to provide
an intuition of model training and inference to compute ITE using fine-grained
potential-level outcomes. The main idea is that the component-level models for
effect estimation are instantiated specifically for each unit and trained
independently, since we assume conditional independence among the potential
outcomes given the component-level features and the shared treatment.
Model Training: We assume that the component models for estimating component-
level potential outcomes are denoted by
$\\{\hat{f}_{\theta_{1}},\hat{f}_{\theta_{2}},\hat{f}_{\theta_{3}}\dots\hat{f}_{\theta_{k}}\\}$,
$\hat{f}_{\theta_{j}}:\mathbb{R}^{d_{j}}\times\\{0,1\\}\rightarrow\mathbb{R},$
each parameterized by its own independent parameters $\theta_{j}$.
For a given observational data set with $N$ samples,
$\mathcal{D}_{F}=\\{q_{i},t_{i},y_{i}\\}_{i=1:N}$, we assume that we observe the
component-level features $\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}}$, the assigned
treatment $t_{i}$, and the fine-grained component-level outcomes
$\\{y_{ij}\\}_{j=1:m_{i}}$ along with the unit-level outcome $y_{i}$.
For each component model $m$, training involves independently learning the
parameters by minimizing the squared loss
$\theta_{m}:=\arg\min_{\theta_{m}}\frac{1}{N_{m}}\sum_{i=1}^{N_{m}}(f_{m}(\mathbf{x}_{im},t_{i};\theta_{m})-y_{im})^{2}$.
Here, $N_{m}$ denotes the total number of instances of component $m$ across
all the $N$ samples. Repeated instances of the components in each unit might
provide more samples to estimate the component-level potential outcomes
efficiently.
Model Inference: During inference, for each unit
$q_{i}=\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}}$, the index $l$ of the distinct
component type corresponding to each component instance
$j\in\\{1,2,\dots,m_{i}\\}$ in $G_{i}$ is obtained. Then both potential
outcomes are computed, $\hat{y}_{ij}(1)=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},1)$
and $\hat{y}_{ij}(0)=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},0)$. Assuming additive
composition,
$\hat{y}_{i}(1)=\sum_{j=1}^{m_{i}}\hat{y}_{ij}(1)$ and
$\hat{y}_{i}(0)=\sum_{j=1}^{m_{i}}\hat{y}_{ij}(0)$.
The ITE estimate for each unit $i$ under the additive parallel composition
model is given by $\hat{\tau}(q_{i})=\hat{y}_{i}(1)-\hat{y}_{i}(0)$. The
additive parallel composition model explicitly encodes the conditional
independence of the distribution of component-level potential outcomes given
their causes (component-level features and treatments). This is similar to
the causal Markov assumption in graphical models [Pearl, 2009], and the
independent training of the parameters of the component models is inspired by
the assumption of independent mechanisms among the underlying components
[Peters et al., 2017]. In general, the aggregation function can be non-additive
and a complex non-linear function of the potential outcomes. Assuming that the
aggregation function is the same across all data instances and is parameterized
by $\phi$, its parameters can be learned from the training data by minimizing
the following objective:
$\phi:=\arg\min_{\phi}\frac{1}{N}\sum_{i=1}^{N}\big(g(y_{i1}(t),y_{i2}(t),\dots,y_{im_{i}}(t);\phi)-y_{i}(t)\big)^{2}$.
Algorithms 1 and 2 provide more details.
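To make the procedure concrete, the following is a minimal Python sketch of the additive parallel composition model, assuming scikit-learn-style regressors; the class and field names are illustrative, not the exact experimental implementation.

```python
# Minimal sketch of the additive parallel composition model (cf. Algorithms
# 1-2). Assumes scikit-learn regressors; names are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class AdditiveParallelModel:
    def __init__(self, k):
        self.models = [RandomForestRegressor() for _ in range(k)]  # one per type

    def fit(self, units):
        # units: dicts with 'x' (per-instance features), 'comp' (type index
        # per instance), 't' (shared treatment), 'y_comp' (per-instance outcome)
        buckets = {l: ([], []) for l in range(len(self.models))}
        for u in units:
            for x_ij, l, y_ij in zip(u['x'], u['comp'], u['y_comp']):
                buckets[l][0].append(np.append(x_ij, u['t']))
                buckets[l][1].append(y_ij)
        for l, (X, y) in buckets.items():
            if X:                                   # independent training
                self.models[l].fit(np.array(X), np.array(y))
        return self

    def ite(self, unit):
        # Additive composition: sum the component-level potential outcomes.
        y1 = sum(self.models[l].predict(np.append(x, 1)[None])[0]
                 for x, l in zip(unit['x'], unit['comp']))
        y0 = sum(self.models[l].predict(np.append(x, 0)[None])[0]
                 for x, l in zip(unit['x'], unit['comp']))
        return y1 - y0
```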
### C.2 Hierarchical Composition Model
In hierarchical composition, we assume that the same information about the
components as in parallel composition is available. The main difference is
that the potential outcomes of components can directly affect each other, and
a tree-like interaction structure $G_{i}$ denotes the composition structure of
the potential outcomes. More specifically, the potential outcome of each
component is computed using the input features of that component, the shared
unit-level treatment, and the potential outcomes of its child components.
Potential outcomes of the child components are passed as input to their parent
components in a hierarchical fashion, and the potential outcome of the root
node is treated as the unit-level outcome. In the hierarchical composition
model, component models are trained jointly end-to-end to estimate the unit-
level potential outcomes. Compared to parallel composition, hierarchical
composition does not make any explicit independence assumption about the
potential outcomes and captures the complex interactions among them. Such
modular and recursive architectures are commonly used in associational machine
learning to model natural language parse trees and structured images for
structured prediction tasks [Socher et al., 2011, Andreas et al., 2016].
Model Training: For a unit $i$, a modular architecture consisting of $m_{i}$
component models is instantiated with the same input and output structure as
$G_{i}$. The potential outcomes are computed using the post-order traversal of
the tree $G_{i}$. The potential outcome for a component instance $j$ handled
by component model $l$ is computed as
$\hat{y}_{ij}=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},t_{i},\hat{y}_{i{j-1}},\hat{y}_{i{j-2}};\theta_{l})$,
where $\hat{y}_{i{j-1}}$ and $\hat{y}_{i{j-2}}$ are the outcomes of the child
nodes of that component (assuming a binary tree). If a component is a leaf
node, then its potential outcome is computed simply as a function of the input
features and the intervention, i.e.,
$\hat{y}_{ij}=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},t_{i};\theta_{l})$. The
total loss for each unit $i$ is computed as the sum of the losses of the
components, $\sum_{j}^{m_{i}}(\hat{y}_{ij}-y_{ij})^{2}$, and gradients are
computed for the parameters of each component.
Model Inference: To compute the ITE for a unit $i$, a modular architecture
consisting of $m_{i}$ component models is instantiated with the same input and
output structure as $G_{i}$, and the potential outcome of the root module is
taken as the unit-level outcome, i.e., $\hat{y}_{i}(t)=\hat{y}_{im_{i}}(t)$.
The ITE estimate for each unit $i$ under the hierarchical composition model is
given by $\hat{\tau}(q_{i})=\hat{y}_{i}(1)-\hat{y}_{i}(0)$.
Algorithms 3 and 4 provide more details about hierarchical composition model
training and inference.
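As a concrete illustration of the post-order computation described above, here is a minimal Python sketch of the hierarchical forward pass; the node fields and function names are our own illustrative choices, and the component models are assumed to be differentiable (e.g., torch modules) so the per-component losses can be backpropagated jointly.

```python
# Sketch of the hierarchical forward pass over a unit's binary tree G_i
# (post-order traversal, as in Algorithms 3-4). Node fields (x, comp, left,
# right) and the callable component models f_theta are illustrative.
def forward_unit(root, f_theta, t):
    """Return the root (unit-level) outcome and all component outcomes."""
    outs = []
    def visit(node):
        if node is None:
            return None
        y_left, y_right = visit(node.left), visit(node.right)
        if y_left is None and y_right is None:      # leaf component
            y = f_theta[node.comp](node.x, t)
        else:                                       # internal component
            y = f_theta[node.comp](node.x, t, y_left, y_right)
        outs.append((node, y))
        return y
    return visit(root), outs
```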
Unobserved component-level potential outcomes: There might be cases where we
only observe the unit-level outcome. It is then possible to use another
version of the hierarchical composition model that does not require access to
the fine-grained potential outcomes and only uses the component-level features
and the interaction graph representing the computation structure of the unit.
In that case, all the components are trained jointly, with gradients computed
from the unit-level outcome prediction loss. We demonstrate the performance of
both versions of the hierarchical composition model in our experiments.
### C.3 Algorithms to estimate individual treatment effects
#### C.3.1 Parallel Composition Model:
Algorithm 1 Parallel Composition: Training
1: Input: Factual data set:
$\mathcal{D}_{F}=\\{q_{i}:\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}},t_{i},y_{i},\\{y_{ij}\\}_{j=1:m_{i}}\\}_{i=1:n}$,
number of distinct components $k$.
2: Result: Learned aggregation function $\hat{g}_{\phi}$ and potential outcome
models for each component:
$\\{\hat{f}_{\theta_{1}},\hat{f}_{\theta_{2}},\hat{f}_{\theta_{3}}\dots\hat{f}_{\theta_{k}}\\}$
3: Procedure:
4:
$\mathcal{D}_{1}\leftarrow\\{\\},\mathcal{D}_{2}\leftarrow\\{\\},\mathcal{D}_{3}\leftarrow\\{\\}\dots\mathcal{D}_{k}\leftarrow\\{\\}$
5: for $i=1$ to ${n}$ do
6: for $j=1$ to ${m_{i}}$ do
7: $l\leftarrow component\\_index(j)$ // index of the distinct component type
for the $j^{th}$ component instance
8:
$\mathcal{D}_{l}\leftarrow\mathcal{D}_{l}\cup\\{\mathbf{x}_{ij},t_{i},y_{ij}\\}$
9: end for
10: end for
11: for $l=1$ to ${k}$ do
12: $N_{l}\leftarrow len(\mathcal{D}_{l})$
13:
$\theta_{l}:=\arg\min_{\theta_{l}}\frac{1}{N_{l}}\sum_{i=1}^{N_{l}}(f_{l}(\mathbf{x}_{i},t_{i};\theta_{l})-y_{i})^{2}$
// independent training of all the component models
14: end for
15:
$\phi:=\arg\min_{\phi}\frac{1}{N}\sum_{i=1}^{N}\big(g(y_{i1},y_{i2},\dots,y_{im_{i}};\phi)-y_{i}\big)^{2}$
Algorithm 2 Parallel Composition: Inference
1: Input: Test data set:
$\mathcal{D_{T}}=\\{q_{i}:\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}}\\}_{i=1:n}$,
learned aggregation function model $\hat{g}_{\phi}$ and potential outcome
models for each component:
$\\{\hat{f}_{\theta_{1}},\hat{f}_{\theta_{2}},\hat{f}_{\theta_{3}}\dots\hat{f}_{\theta_{k}}\\}$,
2: Result: ITESamples
3: Procedure:
4: $ITESamples\leftarrow\\{\\}$
5: for $i=1$ to ${n}$ do
6: for $j=1$ to ${m_{i}}$ do
7: $l\leftarrow component\\_index(j)$
8: $\hat{y}_{ij}(1)=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},1)$
9: $\hat{y}_{ij}(0)=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},0)$
10: end for
11:
$\hat{y}_{i}(1)=\hat{g}_{\phi}(\hat{y}_{i1}(1),\hat{y}_{i2}(1),\dots,\hat{y}_{im_{i}}(1))$
12:
$\hat{y}_{i}(0)=\hat{g}_{\phi}(\hat{y}_{i1}(0),\hat{y}_{i2}(0),\dots,\hat{y}_{im_{i}}(0))$
13: $\hat{\tau}(q_{i})=\hat{y}_{i}(1)-\hat{y}_{i}(0)$
14: $ITESamples\leftarrow ITESamples\cup\\{(q_{i},\hat{\tau}(q_{i}))\\}$
15: end for
#### C.3.2 Hierarchical Composition Model
Algorithm 3 Hierarchical Composition: Training
1: Input: Factual data set:
$\mathcal{D}_{F}=\\{q_{i}:\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}},t_{i},y_{i},\\{y_{ij}\\}_{j=1:m_{i}}\\}_{i=1:n}$,
number of distinct components $k$.
2: Result: Learned potential outcome models for each component:
$\\{\hat{f}_{\theta_{1}},\hat{f}_{\theta_{2}},\hat{f}_{\theta_{3}}\dots\hat{f}_{\theta_{k}}\\}$
3: while not converged do
4: $loss\\_1,loss\\_2,\dots,loss\\_k\leftarrow 0$
5: for $i=1$ to ${n}$ do
6: Get the order of the components in which input is processed by using post-
order traversal of the tree $G_{i}$.
7: $orderedList\leftarrow post\\_order\\_traversal(G_{i})$
8: for component $j$ in $orderedList$ do
9: $l\leftarrow component\\_index(j)$
10: // Potential outcome of a component depends on the potential outcome of
the children components according to graph $G_{i}$ (assuming binary tree)
11: if component $j$ has children in $G_{i}$ then
12:
$\hat{y}_{ij}=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},t_{i},\hat{y}_{i{j-1}},\hat{y}_{i{j-2}};\theta_{l})$
13: else if component $j$ is a leaf node then
14: $\hat{y}_{ij}=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},t_{i};\theta_{l})$
15: end if
16: $loss\\_l=loss\\_l+(\hat{y}_{ij}-y_{ij})^{2}$
17: end for
18: end for
19: Calculate gradients for the parameters for each module
20: for $l=1$ to ${k}$ do
21: $\delta_{l}\leftarrow\nabla_{\theta_{l}}\frac{1}{N_{l}}loss\\_l$
22: $\theta_{l}\leftarrow\theta_{l}-\alpha\delta_{l}$ // joint training of all
the component models
23: end for
24: Check convergence criterion
25: end while
Algorithm 4 Hierarchical Composition: Inference
1: Input: Test data set:
$\mathcal{D_{T}}=\\{q_{i}:\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}}\\}_{i=1:n}$,
learned potential outcome models for each component:
$\\{\hat{f}_{\theta_{1}},\hat{f}_{\theta_{2}},\hat{f}_{\theta_{3}}\dots\hat{f}_{\theta_{k}}\\}$,
2: Result: ITESamples
3: Procedure:
4: $ITESamples\leftarrow\\{\\}$
5: for $i=1$ to ${n}$ do
6: Get the order of the components in which input is processed by post-order
traversal of the tree $G_{i}$
7: $orderedList\leftarrow post\\_order\\_traversal(G_{i})$
8: for component $j$ in $orderedList$ do
9: $l\leftarrow component\\_index(j)$
10:
$\hat{y}_{ij}=\hat{f}_{\theta_{l}}(\mathbf{x}_{ij},t_{i},\hat{y}_{i{j-1}},\hat{y}_{i{j-2}};\theta_{l})$
(leaf components use only $\mathbf{x}_{ij}$ and $t_{i}$, as in Algorithm 3)
11: end for
12: $\hat{y}_{i}(1)=\hat{y}_{im_{i}}(1)$, the potential outcome of the root
component in $G_{i}$ under $t_{i}=1$
13: $\hat{y}_{i}(0)=\hat{y}_{im_{i}}(0)$, the potential outcome of the root
component in $G_{i}$ under $t_{i}=0$
14: $\hat{\tau}(q_{i})=\hat{y}_{i}(1)-\hat{y}_{i}(0)$
15: $ITESamples\leftarrow ITESamples\cup\\{(q_{i},\hat{\tau}(q_{i}))\\}$
16: end for
## Appendix D Theoretical Proofs
### D.1 Identifiability of individual treatment effects in case of additive
parallel composition
###### Theorem D.1.
The CATE estimand for structured units in the case of additive parallel
composition is equal to the additive composition of the component-level CATE
estimands and is identified by the following estimand:
$\tau(q)=\sum_{j=1}^{m_{i}}\big(\mathbb{E}[y_{j}|\mathbf{x_{j}},t=1]-\mathbb{E}[y_{j}|\mathbf{x_{j}},t=0]\big)$
(3)
This holds under the following assumptions:
###### Assumption E.
Parallel composition assumes that the ground-truth component-level potential
outcomes are conditionally independent of potential outcomes of other
components given component-level covariates and treatment:
$P(Y_{a}(t)|X_{a},T)\perp P(Y_{b}(t)|X_{b},T)\ \forall a,b\in\\{1,2,\dots
k\\},a\neq b$.
###### Assumption F.
Additivity assumes that ground-truth component-level potential outcomes add to
generate the ground-truth unit-level potential outcome, i.e.,
$Y_{i}(1)=\sum_{j}^{m_{i}}Y_{ij}(1)$, $Y_{i}(0)=\sum_{j}^{m_{i}}Y_{ij}(0)$.
###### Assumption G.
Component-level unconfoundedness assumes that unconfoundedness holds for the
component level potential outcomes, i.e., $Y_{ij}(1),Y_{ij}(0)\perp
T_{i}|\mathbf{X}_{ij}$.
###### Assumption H.
Component-level overlap assumes that overlap holds for the component level
covariates, i.e., $0<p(t=1|\mathbf{x}_{j})<1$.
###### Assumption I.
Component-level consistency assumes that consistency holds for the component
level covariates, i.e., $y_{ij}=Y_{ij}(0)|t=0$ and $y_{ij}=Y_{ij}(1)|t=1$.
###### Proof.
The individual-level treatment effect (ITE) estimand for structured units is
defined as
$\tau(q)=\mathbb{E}[Y_{i}(1)-Y_{i}(0)|Q_{i}=q]=\mathbb{E}[Y_{i}(1)-Y_{i}(0)|Q_{i}=(G_{i},\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}})]$
Assuming additivity F, we get
$\tau(q)=\mathbb{E}[\sum_{j}^{m_{i}}Y_{ij}(1)-\sum_{j}^{m_{i}}Y_{ij}(0)|Q_{i}=(G_{i},\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}})]$
By the linearity of expectation, we get the following:
$\tau(q)=\mathbb{E}[\sum_{j}^{m_{i}}Y_{ij}(1)|Q_{i}=\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}}]-\mathbb{E}[\sum_{j}^{m_{i}}Y_{ij}(0)|Q_{i}=\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}}]$
$\tau(q)=\sum_{j}^{m_{i}}\mathbb{E}[Y_{ij}(1)|Q_{i}=\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}}]-\sum_{j}^{m_{i}}\mathbb{E}[Y_{ij}(0)|Q_{i}=\\{\mathbf{x}_{ij}\\}_{j=1:m_{i}}]$
Assuming parallel composition E, the potential outcomes do not depend on the
interaction graph $G_{i}$ and only depend on the component-level features:
$\tau(q)=\sum_{j}^{m_{i}}\big(\mathbb{E}[Y_{ij}(1)|\mathbf{x}_{ij}]-\mathbb{E}[Y_{ij}(0)|\mathbf{x}_{ij}]\big)$
Assuming component-level unconfoundedness G,
$\tau(q)=\sum_{j}^{m_{i}}\big(\mathbb{E}[Y_{ij}(1)|\mathbf{x}_{ij},t=1]-\mathbb{E}[Y_{ij}(0)|\mathbf{x}_{ij},t=0]\big)$
Assuming component-level consistency I,
$\tau(q)=\sum_{j}^{m_{i}}\big(\mathbb{E}[Y_{ij}|\mathbf{x}_{ij},t=1]-\mathbb{E}[Y_{ij}|\mathbf{x}_{ij},t=0]\big)$
Component-level overlap H ensures that the estimand is identified from
observational data. ∎
### I.1 Decomposition of the generalization error of the additive parallel
compositional model
The treatment effect estimate of a model $\hat{f}$ for unit $q$ is
$\hat{\tau}_{\hat{f}}(q)=\hat{f}(q,1)-\hat{f}(q,0)$. We measure the
performance using the precision in the estimation of heterogeneous effect
(PEHE) loss [Hill, 2011], defined as the mean squared difference between the
estimated effect and the ground-truth effect for a population of units sampled
from density $p(q)$:
$\epsilon_{PEHE}(\hat{f})=\mathbb{E}[(\hat{\tau}_{\hat{f}}(q)-\tau(q))^{2}]$.
Using the result of Theorem D.1, it can be shown that the error of the
additive parallel compositional model decomposes into the sum of the errors of
the individual component models ($\hat{f}_{\theta_{j}}$) plus pairwise cross
terms between the errors of the component models, which are bounded via the
Cauchy–Schwarz inequality, similar to the generalization error analysis of
ensemble models [Ueda and Nakano, 1996]. We provide the derivation below.
Intuitively, if all the component potential functions are the same, then the
errors of the component models are highly correlated and the errors aggregate.
The more heterogeneous the components are, the more benefits there are from
the compositional approach.
$\epsilon_{PEHE}(f_{add})\leq\sum_{j=1}^{m_{i}}{\epsilon_{PEHE}(\hat{f}_{\theta_{j}})}+\sum_{j}\sum_{k,k\neq
j}\sqrt{{\epsilon_{PEHE}(\hat{f}_{\theta_{j}})}}\sqrt{{\epsilon_{PEHE}(\hat{f}_{\theta_{k}})}}$
(4)
Derivation:
$\hat{\tau}_{\hat{f}}(q)=\hat{f}(q,1)-\hat{f}(q,0)$
For the parallel, additive model, using Theorem D.1, we get
$\hat{\tau}_{\hat{f}}(q)=\sum_{j=1}^{m_{i}}\hat{\tau}_{\hat{f}}(\mathbf{x_{j}})=\sum_{j=1}^{m_{i}}\big(\hat{f}(\mathbf{x}_{j},1)-\hat{f}(\mathbf{x}_{j},0)\big)$
The PEHE of the additive model for a distribution of units $p(q)$ follows by
expanding the square:
$\epsilon_{PEHE,p}(f_{add})=\mathbb{E}_{p(q)}\big[\big[\sum_{j}^{m_{i}}\big(\hat{\tau}(x_{j})-\tau(x_{j})\big)\big]^{2}\big]$
$=\mathbb{E}_{p(q)}[\sum_{j=1}^{m_{i}}\big[\hat{\tau}(x_{j})-\tau(x_{j})\big]^{2}]+\mathbb{E}_{p(q)}[\sum_{j}\sum_{k,k\neq
j}(\hat{\tau}_{j}-\tau_{j})(\hat{\tau}_{k}-\tau_{k})]$
$\leq\sum_{j=1}^{m_{i}}{\epsilon_{PEHE}(\hat{f}_{\theta_{j}})}+\sum_{j}\sum_{k,k\neq
j}\sqrt{{\epsilon_{PEHE}(\hat{f}_{\theta_{j}})}}\sqrt{{\epsilon_{PEHE}(\hat{f}_{\theta_{k}})}}$
where the last step bounds each cross term by the Cauchy–Schwarz inequality
$\mathbb{E}[UV]\leq\sqrt{\mathbb{E}[U^{2}]}\sqrt{\mathbb{E}[V^{2}]}$.
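The decomposition and the Cauchy–Schwarz bound can be sanity-checked numerically; the sketch below uses toy Gaussian errors rather than experimental data.

```python
# Sanity check: the PEHE of the additive model equals the sum of component
# PEHEs plus pairwise cross terms, each bounded by Cauchy-Schwarz.
import numpy as np

rng = np.random.default_rng(1)
err = rng.normal(size=(100000, 3))             # per-component effect errors
total = err.sum(axis=1)

pehe_add = np.mean(total ** 2)
pehe_j = (err ** 2).mean(axis=0)
cross = sum(np.mean(err[:, j] * err[:, k])
            for j in range(3) for k in range(3) if j != k)
assert np.isclose(pehe_add, pehe_j.sum() + cross)
assert all(abs(np.mean(err[:, j] * err[:, k]))
           <= np.sqrt(pehe_j[j] * pehe_j[k]) + 1e-12
           for j in range(3) for k in range(3) if j != k)
```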
#### I.1.1 Decomposition of PEHE error into factual and counterfactual
errors:
For a unit $q$, with observed treatment $t$, observed potential outcome $y$,
and unobserved counterfactual outcome $y_{CF}$, the factual $(\epsilon_{F})$
and counterfactual $(\epsilon_{CF})$ errors are defined as [Shalit et al.,
2017]:
$\epsilon_{F,p}(\hat{f})=\mathbb{E}_{p(q,t)}[(\hat{f}(q,t)-y)^{2}]$
$\epsilon_{CF,p}(\hat{f})=\mathbb{E}_{p(q,1-t)}[(\hat{f}(q,1-t)-y_{CF})^{2}]$
The existing generalization error upper bound for $\epsilon_{PEHE}$ is given
by [Shalit et al., 2017]:
$\epsilon_{PEHE}\leq 2(\epsilon_{F}+\epsilon_{CF})$ (5)
It was further shown by [Shalit et al., 2017] that the counterfactual error
can be upper bounded by the sum of the factual error and a distribution
mismatch term between the treatment $P(X=x|T=1)$ and control $P(X=x|T=0)$
populations. Note that the distribution mismatch was defined in Shalit et al.
[2017] with respect to well-defined representation functions of the
covariates; for simplicity, we define it on the original density of the
covariates. Let $u=p(t=1)$ be the probability of treatment in the
observational data, let $D$ denote a metric measuring the distribution
mismatch between the control and treatment populations, e.g., an integral
probability metric distance, and let $\alpha$ be a normalization constant for
the metric to be well defined.
$\epsilon_{CF}\leq u\epsilon^{t=0}_{F}+(1-u)\epsilon_{F}^{t=1}+\alpha
D(p^{t=1}_{x},p^{t=0}_{x})$ (6)
Similarly, the factual and counterfactual errors of the additive model can be
bounded in terms of the component-model errors:
$\epsilon_{F}(f_{add})\leq\sum_{j=1}^{m_{i}}{\epsilon_{j}}_{F}+\sum_{j}\sum_{k,k\neq
j}\sqrt{{\epsilon_{j}}_{F}}\sqrt{{\epsilon_{k}}_{F}}$
$\epsilon_{CF}(f_{add})\leq\sum_{j=1}^{m_{i}}{\epsilon_{j}}_{CF}+\sum_{j}\sum_{k,k\neq
j}\sqrt{{\epsilon_{j}}_{CF}}\sqrt{{\epsilon_{k}}_{CF}}$
If we further assume that the ground-truth potential outcome functions of the
components are independent of each other (independence of mechanisms, i.e.,
the components are heterogeneous), the PEHE of the additive model reduces to
the sum of the PEHE errors of the individual components in Equation 4.
Applying the PEHE bound (5) and the counterfactual-error bound (6) to the
errors of the component models, we obtain the following upper bound for the
error of the additive parallel model with independent component potential
functions.
$\epsilon_{PEHE,p}(f_{add})\leq\sum_{j}^{m_{i}}\underbrace{{\epsilon_{j}}_{F}^{t=1}+{\epsilon_{j}}_{F}^{t=0}}_{factual\\_error\\_j}+\underbrace{\alpha
D(p^{t=1}_{\mathbf{x_{j}}},p^{t=0}_{\mathbf{x_{j}}})}_{distribution\\_mismatch\\_j}$
(7)
### I.2 Generalization error of additive parallel compositional model for
prediction task
The generalization error of the estimator for each component $f_{C_{j}}$ on
the test set $X_{0j}$ can be written as below. Assume that
$D_{j}^{N}=\\{X_{j}^{1:N},Y_{j}^{1:N}\\}$ denotes the training set of size $N$
for the component $C_{j}$; for simplicity, each component is trained on the
same data size $N$. We assume that each component has irreducible additive
noise with standard deviation $\sigma_{j}$.
We assume that the overall estimate of the modular model is the additive
composition of the estimates from the individual estimators.
$f^{k}_{M}(X_{1},X_{2},\dots X_{k})=\sum_{j=1}^{k}f_{j}(X_{j};D^{N}_{j})$ (8)
Assume that the output of each component model is generated as
$Y_{j}=g_{j}(X_{j})+\epsilon_{j}$, where $\epsilon_{j}$ has zero mean and
standard deviation $\sigma_{j}$.
Using bias-variance decomposition of the generalization error, we get:
$R_{f_{j}}=\mathbb{E}_{X_{0j}}\bigl\{Var(f_{j}|X_{0j})+Bias(f_{j}|X_{0j})^{2}\bigr\}+\sigma_{j}^{2}$
(9)
Similar to the analysis of ensemble models, the generalization error of the
compositional model on the test set $X_{0}=\\{X_{01},X_{02}\dots
X_{0k}\\}$ can be decomposed into the bias, variance, and covariance of the
individual component estimators Ueda and Nakano [1996]. The difference between
ensemble models and the additive parallel compositional model is that in
ensemble models each estimator is trained on the same training data and the
estimate is the weighted average of the individual estimates, whereas in the
compositional model each estimator is trained on data from a different
component and the overall estimate is the sum rather than the average of the
individual estimates. This leads to variance addition across the component
models rather than the variance reduction seen in ensemble models.
###### Theorem I.1.
The generalization error $(R_{f^{k}_{M}})$ of the additive parallel model
$f^{k}_{M}$ consisting of $k$ components on the test set
$X_{0}=\\{X_{01},X_{02}\dots X_{0k}\\}$ can be decomposed into the sum of
variances ($\overline{Var}(X_{0})$), the sum of biases
($\overline{Bias}(X_{0})$), and the sum of pairwise covariances
($\overline{Cov}(X_{0})$) of the individual component estimators $f_{j}$,
where $\sigma_{j}$ denotes the standard deviation of the irreducible additive
noise for the outcome of each component:
$R_{f^{k}_{M}}=\mathbb{E}_{X_{0}}[\overline{Var}(X_{0})+\overline{Cov}(X_{0})+\overline{Bias}(X_{0})^{2}]+\sum_{j}\sigma_{j}^{2}$
where
$\overline{Var}(X_{0})=\sum_{j=1}^{k}Var(f_{j}|X_{0j})$
$\overline{Bias}(X_{0})=\sum_{j=1}^{k}Bias(f_{j}|X_{0j})$
$\overline{Cov}(X_{0})=\sum_{j}\sum_{j^{\prime}\neq
j}Cov(f_{j},f_{j^{\prime}}|X_{0j},X_{0j^{\prime}})$
###### Proof.
Let $R_{f^{k}_{M}}$ denote the generalization error of the additive modular
model whose estimate is given by (8). Using the bias-variance decomposition
Geman et al. [1992] of the modular model’s estimator, we have
$R_{f^{k}_{M}}=\mathbb{E}_{X_{0}}\bigl\{Var(f^{k}_{M}|X_{0})+Bias(f^{k}_{M}|X_{0})^{2}\bigr\}+\sum_{j}\sigma_{j}^{2}$
$Var(f^{k}_{M}|X_{0})=\mathbb{E}_{D_{1}^{N},\dots
D_{k}^{N}}\big{[}\sum_{j=1}^{k}f_{j}(X_{0j};D^{N}_{j})-\mathbb{E}_{D_{1}^{N},\dots
D_{k}^{N}}[\sum_{j=1}^{k}f_{j}(X_{0j};D^{N}_{j})]\big{]}^{2}$
$=\sum_{j=1}^{k}\mathbb{E}_{D_{j}^{N}}[f_{j}-\mathbb{E}_{D_{j}^{N}}[f_{j}]]^{2}+\sum_{j}\sum_{j^{\prime}\neq
j}\mathbb{E}_{D_{j}^{N},D_{j^{\prime}}^{N}}[f_{j}-\mathbb{E}_{D_{j}^{N}}[f_{j}]][f_{j^{\prime}}-\mathbb{E}_{D_{j^{\prime}}^{N}}[f_{j^{\prime}}]]$
$=\overline{Var}(X_{0})+\overline{Cov}(X_{0})$
$Bias(f^{k}_{M}|X_{0})=\mathbb{E}_{D_{1}^{N},\dots
D_{k}^{N}}[\sum_{j=1}^{k}f_{j}(X_{0j};D^{N}_{j})-g_{j}(X_{0j})]$
$=\sum_{j=1}^{k}\mathbb{E}_{D_{j}^{N}}[f_{j}-g_{j}]=\sum_{j=1}^{k}Bias(f_{j}|X_{0j})=\overline{Bias}(X_{0})$
Therefore,
$R_{f^{k}_{M}}=\mathbb{E}_{X_{0}}[\overline{Var}(X_{0})+\overline{Cov}(X_{0})+\overline{Bias}(X_{0})^{2}]+\sum_{j}\sigma_{j}^{2}$
∎
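The contrast between variance addition and variance reduction can be illustrated with a short simulation; the estimator noise below is a toy choice, not a fitted model.

```python
# Illustration of Theorem I.1's variance behaviour: summing k independent
# component estimates adds their variances, whereas averaging (an ensemble)
# reduces variance.
import numpy as np

rng = np.random.default_rng(2)
k, trials = 5, 200000
preds = rng.normal(loc=1.0, scale=0.3, size=(trials, k))  # f_j over datasets D_j^N

print(preds.sum(axis=1).var())    # ~ k * 0.3**2  : additive modular model
print(preds.mean(axis=1).var())   # ~ 0.3**2 / k  : averaging ensemble
```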
## Appendix J Experiments
Implementation of the Compositional Models
1. 1.
Additive Parallel Models: We implement an additive parallel model using two
model classes: random_forest and neural_network. A three-layer, fully
connected MLP architecture was used for neural network models with hidden
layer dimension = 64 and ReLU activations. Models were trained using the Adam
optimizer with a learning rate of $0.01$.
2. 2.
Hierarchical Composition Models: TreeLSTM architecture was used with a hidden
dimension size = 64 and batch size = 32 for each component. Models were
trained using the Adam optimizer with a learning rate of $0.01$. For the
all-outcomes version of the hierarchical model, the total loss over all
components was optimized, while for the single-outcome version, only the
unit-level potential outcome loss was optimized.
Baselines: The X-learner and non-parametric double machine learning
implementations are from the EconML library, with random forests as the base
models. The TNet [Curth and Van der Schaar, 2021] implementation is taken from
the GitHub repository catenets.
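For concreteness, here is a minimal PyTorch sketch of the component-level MLP described above (three fully connected layers, hidden width 64, ReLU, Adam with learning rate $0.01$); the class name and the (features, treatment) input convention are ours.

```python
# Sketch of the component-level MLP: three fully connected layers, hidden
# width 64, ReLU activations, Adam with lr = 0.01. Names are illustrative.
import torch
import torch.nn as nn

class ComponentMLP(nn.Module):
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in + 1, 64), nn.ReLU(),   # features + binary treatment
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x, t):
        # x: (batch, d_in) component features, t: (batch, 1) treatment
        return self.net(torch.cat([x, t], dim=-1)).squeeze(-1)

model = ComponentMLP(d_in=6)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
```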
### J.1 Synthetic Data Generation
We generate data sets with varying characteristics to test model performance
for units with different structures and composition functions. Structured
units are generated by sampling binary trees (max depth=10) with $k$=$10$
heterogeneous modules, each having $d_{j}$=$6$ features ($d$=$60$ total). The
total sum of features of all components is used as a biasing covariate to
create overlap issues. The covariate distribution for each component is
sampled from a multivariate Gaussian with means ranging between 0 and 3 and
covariances ranging between 0 and 3. The potential outcomes for each treatment
are generated by a quadratic function with different parameters per treatment,
creating heterogeneous treatment effects. For fixed-structure data generation,
the depth of the tree is fixed to $10$ so that every unit has exactly the same
number and kinds of components. For the variable-structure setting, the depth
of the tree varies randomly between $4$ and $10$, and components are sampled
with replacement. Every non-leaf node has other components as children, along
with its own component-specific features. The potential outcome of each
component under each treatment is simulated as a function of the input
features and treatment for parallel composition, and as a function of the
input features, treatment, and the potential outcomes of the child components
for hierarchical composition.
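A minimal sketch of the component-level sampler implied by this description follows; the coefficient draws are illustrative placeholders rather than the exact values used in the experiments.

```python
# Sketch of the synthetic component sampler: Gaussian covariates with means
# in [0, 3] and quadratic, treatment-specific potential outcomes.
import numpy as np

rng = np.random.default_rng(3)
k, d = 10, 6                                   # distinct components, features each
means = rng.uniform(0, 3, size=(k, d))
coef = rng.normal(size=(k, 2, d))              # per-component, per-treatment coeffs

def sample_component(l):
    x = rng.multivariate_normal(means[l], np.eye(d))
    y0 = coef[l, 0] @ (x ** 2)                 # quadratic outcome under t = 0
    y1 = coef[l, 1] @ (x ** 2)                 # quadratic outcome under t = 1
    return x, y0, y1
```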
### J.2 Real-world data
We first collect the 10000 most popular user-defined Math Stack Overflow queries.
We install a PostgreSQL 14 database server and load a 50 GB version of the
publicly available Stack Overflow Database. We then run these queries with
different combinations of the configuration parameters listed in Table 2. In
all our experiments, our queries were executed with PostgreSQL 14 database on
a single node with an Intel 2.3 GHz 8-Core Intel Core i9 processor, 32GB of
RAM, and a solid-state drive. PostgreSQL was configured to use a maximum of 0
parallel workers to ensure non-parallelized executions so that additive
assumption about operations is satisfied (max_parallel_workers_per_gather =
0). Before each run of a query, we start from a cold cache by restarting the
server to reduce caching effects across queries. Many database management
systems provide information about the query plans as well as actual execution
information through convenient APIs, such as EXPLAIN ANALYZE queries. Usually,
the total run-time of each operation, inclusive of its child operations, is
reported by Postgres. To model the behavior of each component operation, we
require the individual run-time of each component operation, which is
calculated using publicly available query plan explainer websites. We mainly
model the query plans with the following operations —
Sequential Scan, Index Scan, Sort, Aggregate, Hash, and Hash Join — as these
operations occur frequently in the collected query plans, providing a large
number of samples to learn the models from data. For the ITE estimation
experiments, we select $1500$ query plans in which effect sizes were
significant and actually resulted from the intervention rather than from
random variation in run-time due to the stochastic nature of the database
execution system. Each SQL query is run 5 times, and the median execution time
is taken as the outcome. We use the data for the memory-size increase
intervention.
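A sketch of how such per-plan execution information can be collected via psycopg2 follows; the connection string and query text are placeholders.

```python
# Sketch of collecting per-plan execution information with EXPLAIN ANALYZE.
# The DSN and the query are placeholders; keys such as "Node Type" and
# "Actual Total Time" are part of PostgreSQL's JSON plan format.
import json
import psycopg2

conn = psycopg2.connect("dbname=stackoverflow")              # placeholder DSN
with conn.cursor() as cur:
    cur.execute("SET max_parallel_workers_per_gather = 0;")  # non-parallel plans
    cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) SELECT 1;")  # placeholder query
    row = cur.fetchone()[0]
    plan = (row if isinstance(row, list) else json.loads(row))[0]["Plan"]
    print(plan["Node Type"], plan["Actual Total Time"])
```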
### J.3 Matrix Operation data generation
We generate a matrix operations data set by evaluating complex matrix
expressions (units) on two different computer hardware configurations
(treatment) and store the execution time for each (potential outcome). The
size of the matrices is varied from 2 to 1000, resulting in 25000 samples. The
expressions contain multiple operations, e.g., inverse and singular value
decomposition. We ensure that each operation is executed individually, which
guarantees a parallel additive composition. Matrix size is used as a biasing
covariate to create overlap issues.
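A minimal sketch of such a timing harness follows; the set of operations and the matrix sampling are illustrative, and the "treatment" (hardware) is whichever machine runs the script.

```python
# Sketch of the timing harness: run each matrix operation individually and
# record per-operation execution times (the additive components).
import time
import numpy as np

rng = np.random.default_rng(4)

def timed(op, *args):
    t0 = time.perf_counter()
    op(*args)
    return time.perf_counter() - t0

n = int(rng.integers(2, 1001))            # matrix size between 2 and 1000
A = rng.normal(size=(n, n))
component_times = {
    "inverse": timed(np.linalg.inv, A),
    "svd": timed(np.linalg.svd, A),
}
total_time = sum(component_times.values())  # additive unit-level outcome
```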
Working Memory | Temp Buffers | Indices | Page Cost
---|---|---|---
64 KB | 800 KB | No indexing | High random page cost
2 MB | 8 MB | Primary key indexing | Equal random and sequential page cost
50 MB | 100 MB | Secondary key indexing | High sequential page cost
Table 2: Realistic interventions for causal effect estimation
### J.4 Covariates used for query execution data for model training
See Table 3 for information about the high-dimensional features and component-
specific features used for training on query execution plans.
Model | Component | Training features | Outcome
---|---|---|---
Random Forest, Neural Network, TNet | | num_Sort, num_Hash_Join, num_Seq_Scan, num_Hash, num_Index_Scan, num_Aggregate, num_complex_ops, Sort_input_rows, Hash Join_input_rows, Hash Join_left_plan_rows, Hash Join_right_plan_rows, Seq Scan_input_rows, Hash_input_rows, Index Scan_input_rows, Aggregate_input_rows | total_execution_time
Compositional | Sequential Scan | Seq_Scan_input_rows, Seq_Scan_plan_rows | seq_scan_execution_time
Compositional | Index Scan | Index_Scan_input_rows, Index_Scan_plan_rows | index_scan_execution_time
Compositional | Hash | Hash_input_rows, Hash_plan_rows | hash_execution_time
Compositional | Hash Join | Hash_Join_left_input_rows, Hash_Join_right_input_rows, Hash_Join_plan_rows | hash_join_execution_time
Compositional | Sort | Sort_input_rows, Sort_plan_rows | sort_execution_time
Compositional | Aggregate | Aggregate_input_rows, Aggregate_plan_rows | aggregate_execution_time
Table 3: Training input and output features used by associational, SCM, and
modular models for both simulated and real-world query plans
### J.5 Experiment 5: Causal effect estimation of realistic interventions on
observational data
We apply the following kinds of interventions to the query plans — (1)
Increasing memory: we increase the size of working memory from 64 KB to 50 MB
before running the query. Based on prior knowledge, this can cause query plans
to use more efficient sorting methods, such as quicksort instead of external
(on-disk) sort, and can allow the hash operation to use bigger hash tables;
(2) Adding indices: we add indexing data structures on the foreign keys of the
database tables, encouraging query planners to propose more plans with index
scans rather than sequential scans; (3) Adding indices and increasing memory:
we apply both interventions together, allowing for complex interactions due to
multiple treatments. Ground-truth causal effects and the effects after
introducing observational bias for all the interventions are shown in Figure 4
below. We use sort output rows to bias the treatment in the case of the
increasing-memory intervention. For indices, we use scan rows as the biasing
covariate, and for the combined indices-and-memory intervention, we use total
output rows as the biasing covariate.
Figure 4: Ground-truth causal effect estimate of increasing memory for
experimental data (random) and observational data created with bias strength
1. 0: low memory, 1: high memory. Increasing memory has the largest effect on
the Sort and Aggregate operations and the least effect on the Sequential Scan.
#### J.5.1 Change in query plan as a result of interventions on configuration
parameters:
For some interventions on the configuration parameters and for some queries,
the query planner does not return the same query plan; it returns a query plan
with a changed structure as well as modified features of the components. This
makes sense, as the goal of a query optimizer is to compare different plans as
resources change and find the most efficient one. For example, increasing the
working memory often causes query planners to change the ordering of the Sort
and Aggregate operations, changing the structure as well as the inputs to each
component.
interventions in causal inference in which we assume that the covariates of
the unit remain the same (as they are assumed to be pre-treatment) and
treatment only modifies the outcome. In this case, a few features of the query
plan are modified as a result of the intervention (and thus are post-
treatment), while other features remain the same. Prediction of which features
would change is part of learning the behavior of the query planner under
interventions. In this work, we have mostly focused on learning the behavior
of the query execution engine and assumed that the query planner is accessible
to us. For simplicity, we assume that we know of the change in structure as a
result of the intervention for both models. We leave the learning of the
behavior of query optimizers under interventions for future work. This case
provides another challenge for the task of causal effect estimation, even in
the case of randomized treatments (bias strength = 0); due to the modified
features of the query plans, the distribution of features in control and
treatment populations might differ, providing an inherent observational bias
in the dataset coming from the query optimizer. As long as we provide the
information about modified query plans for both models, we believe that our
comparisons are fair. For changed query structure, CATE estimand can be
thought of as conditional on the same query but two different query plans.
$\tau(Q_{i})=\mathbb{E}[Y_{i}(1)-Y_{i}(0)|Q_{i}]$
$\tau(Q_{i})=\mathbb{E}[\mathbb{E}[Y_{i}(1)|Q_{p_{i}}(1)]-\mathbb{E}[Y_{i}(0)|Q_{p_{i}}(0)]]$
# Exploring dark sector parameters in light of neutron star temperatures
Guey-Lin Lin, Institute of Physics, National Yang Ming Chiao Tung University,
Hsinchu 30010, Taiwan
Yen-Hsun Lin, Institute of Physics, National Yang Ming Chiao Tung University,
Hsinchu 30010, Taiwan; Institute of Physics, Academia Sinica, Taipei 11529,
Taiwan
###### Abstract
The neutron star (NS) as a dark matter (DM) probe has gained broad attention
recently, either through its heating due to DM annihilation or through its
stability in the presence of DM. In this work, we investigate spin-$1/2$
fermionic DM $\chi$ charged under a $U(1)_{X}$ in the dark sector. The massive
gauge boson $V$ of the $U(1)_{X}$ gauge group can be produced in the NS via DM
annihilation. The produced gauge boson can decay into Standard Model (SM)
particles before it exits the NS, despite its tiny couplings to SM particles.
Thus, we perform a systematic study of $\chi\bar{\chi}\to 2V\to 4{\rm SM}$ as
a new heating mechanism for the NS, in addition to $\chi\bar{\chi}\to 2{\rm
SM}$ and kinetic heating from DM-baryon scattering. The self-trapping due to
$\chi V$ scattering is also considered. We assume the general framework in
which both kinetic and mass mixing terms between $V$ and the SM gauge bosons
are present. This allows both vector and axial-vector couplings between $V$
and SM fermions even for $m_{V}\ll m_{Z}$. Notably, the contribution from the
axial-vector coupling is not negligible when particles scatter
relativistically. We point out that the above approaches to DM-induced NS
heating have not yet been adopted in recent analyses. The detectability of the
aforementioned effects on the NS surface temperature by future telescopes is
discussed as well.
## I Introduction
It has been widely accepted that one-fifth of the total energy of the Universe
consists of dark matter (DM). Though multidisciplinary strategies are employed
to identify its essence, either from direct Aad:2015zva ; Abdallah:2015ter ;
Aalbers:2016jon ; Akerib:2016vxi ; Amole:2017dex ; Akerib:2017kat ;
Aprile:2017iyp ; Aprile:2018dbl ; Aprile:2019xxb ; Aprile:2019jmx or indirect
detections Aartsen:2014oha ; Choi:2015ara ; Aartsen:2016zhm ; Aguilar:2015ctt
; TheFermi-LAT:2017vmf ; Ambrosi:2017wek , the nature of DM remains a puzzle.
The approach of using a neutron star (NS) as a DM probe has been proposed
based on the heating effect due to DM Kouvaris:2007ay ; deLavallaz:2010wp ;
Kouvaris:2010vv ; Baryakhtar:2017dbj ; Raj:2017wrv ; Chen:2018ohx ;
Bell:2018pkk ; Acevedo:2019agu ; Joglekar:2019vzy ; Keung:2020teb , the NS
instability caused by DM gravitational collapse Kouvaris:2010jy ; Leung:2011zz
; Kouvaris:2011gb ; McDermott:2011jp ; Guver:2012ba ; Bramante:2013hn ;
Bramante:2013nma ; Kouvaris:2013kra ; Gresham:2018rqo ; Grinstein:2018ptl ;
Garani:2018kkd ; Lin:2020zmm , and the gravitational waves emitted from the
merger of binary NSs admixed with DM Nelson:2018xtr ; Ellis:2018bkr ;
Bauswein:2020kor . A novel way of constraining long-lived particles through
the NSs in the Milky Way has also been investigated recently Leane:2021ihh .
In addition, DM self-interaction naturally arises in various phenomenological
models and has been proposed to resolve many issues in small-scale structure,
e.g., the core-cusp, missing-satellite, too-big-to-fail, and diverse galactic
rotation curve problems; see Ref. Tulin:2017ara for a review.
self-interaction cross section $\sigma_{\chi\chi}$ in the range Randall:2007ph
; Walker:2011zu ; BoylanKolchin:2011de ; BoylanKolchin:2011dk ; Elbert:2014bma
$0.1\,{\rm cm}^{2}\,{\rm g}^{-1}\leq\sigma_{\chi\chi}/m_{\chi}\leq 10\,{\rm
cm}^{2}\,{\rm g}^{-1}$ (1)
where $m_{\chi}$ is the DM mass.
Without delving into the details of model construction, DM self-interaction
can be understood phenomenologically as the exchange of a vector boson $V$ or
a scalar boson $\phi$ in the dark sector (DS). Here $\phi$ is the dark Higgs
responsible for the spontaneous symmetry breaking of the Abelian $U(1)_{X}$ in
the DS, and therefore for the generation of the $V$ boson mass. Assuming the
DM $\chi$ is a spin-$1/2$ fermion charged under $U(1)_{X}$ with gauge coupling
$g_{d}$, DM self-interaction is induced by $\mathcal{L}_{{\rm DM-
DM}}=g_{d}\bar{\chi}\gamma_{\mu}\chi V^{\mu}$ and constrained by Eq. (1).
Furthermore, $V$ can mix with SM photon and $Z$ boson through kinetic
Holdom:1985ag ; Galison:1983pa ; Foot:2004pa ; Feldman:2006wd ;
ArkaniHamed:2008qn ; Pospelov:2008jd and mass mixing terms Babu:1997st ;
Davoudiasl:2012ag ; Davoudiasl:2013aya . These mixing terms appear in the
following Lagrangians:
$\displaystyle\mathcal{L}_{\rm gauge}$ $\displaystyle=$
$\displaystyle-\frac{1}{4}B_{\mu\nu}B^{\mu\nu}+\frac{1}{2}\frac{\varepsilon_{\gamma}}{\cos\theta_{W}}B_{\mu\nu}V^{\mu\nu}-\frac{1}{4}V_{\mu\nu}V^{\mu\nu},$
(2) $\displaystyle\mathcal{L}_{\rm mass}$ $\displaystyle=$
$\displaystyle\frac{1}{2}m_{Z}^{2}Z_{\mu}Z^{\mu}-\varepsilon_{Z}m_{Z}^{2}Z_{\mu}V^{\mu}+\frac{1}{2}m_{V}^{2}V_{\mu}V^{\mu},$
(3)
where $B^{\mu\nu}\equiv\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}$ is the
$U(1)_{Y}$ field strength in SM while $\varepsilon_{\gamma}$ and
$\varepsilon_{Z}$ are the kinetic and $V-Z$ mass mixing parameters
respectively. The electromagnetic (EM) and neutral-current (NC) interactions
between $V$ and SM fermions $f$ resulting from mixing terms in Eqs. (2) and
(3) are given by
$\mathcal{L}_{{\rm DS-SM}}=\left(\varepsilon_{\gamma}eJ^{\rm
EM}_{\mu}+\tilde{\varepsilon}_{Z}\frac{g_{2}}{\cos\theta_{W}}J^{\rm
NC}_{\mu}\right)V^{\mu}$ (4)
where $g_{2}$ is the $SU(2)_{L}$ coupling and $J_{\mu}^{{\rm EM}}$ and
$J_{\mu}^{{\rm NC}}$ are SM electromagnetic and neutral currents,
respectively. The coefficient $\tilde{\varepsilon}_{Z}$ is a linear
combination of two mixing parameters and it reduces to $\varepsilon_{Z}$ for
$m_{V}\ll m_{Z}$. Its general expression is given in Appendix A.
In this paper, we examine the effect of DM heating due to the above
phenomenological setup for a nearby 3 giga-year-old (Gyr-old) and isolated NS.
The associated temperature is around $100\,{\rm K}$ according to the standard
cooling mechanism if there is no other heating source. Therefore, any
temperature deviating from this benchmark value can be potentially due to DM
annihilation in the star. Besides, the dark boson $V$ can be produced via
$\chi\bar{\chi}\to 2V$ for $m_{V}<m_{\chi}$. Thus, $V$ decaying into a fermion
pair inside the star is possible. This implies that the heating from
$\chi\bar{\chi}\to 2V$ cannot be ignored if the decay length of $V$ is smaller
than the star’s radius. Searching the nearby old and cold NS can improve our
understanding on DM. The new dynamics emerging from the above phenomenological
setup will be discussed in the following sections. For completeness, we also
analyze the signal-to-noise ratio (SNR) in the James Webb Space Telescope
(JWST) Gardner:2006ky . Future telescopes such as European Extremely Large
Telescope (E-ELT) and Thirty-Meter Telescope (TMT) Skidmore:2015lga will
constrain DM properties with unprecedented sensitivities. In the following
sections, we employ the NS mass $M_{0}=1.4M_{\odot}$ and and the radius
$R_{0}=12\,{\rm km}$. We also replace $g_{d}$ with
$\alpha_{\chi}=g_{d}^{2}/4\pi$ and all equations are expressed in terms of
natural units $\hbar=c=k_{B}=1$.
## II DM capture and NS temperature
When a NS sweeps through space, the DM particles in the halo can scatter with
the baryons and leptons inside the star. Once the DM particles lose an
appreciable fraction of their kinetic energy, they are gravitationally
captured by the NS. This capture process has been investigated extensively,
with contributions from neutrons, protons, and leptons as well as relativistic
corrections included in Refs. Bell:2020jou ; Bell:2020lmm . In this paper,
only the neutron contribution to the capture rate $C_{c}$ is considered;
contributions from other particle species are ignored due to their small
yields. The total DM number $N_{\chi}$
in the star satisfies the differential equation
$\frac{dN_{\chi}}{dt}=C_{c}-C_{a}N_{\chi}^{2},$ (5)
where $C_{a}$ is the DM annihilation rate. Both coefficients $C_{c}$ and
$C_{a}$ are well studied, and their expressions can be found in Refs.
Bell:2020jou ; Bell:2020lmm ; Chen:2018ohx and references therein; we do not
reproduce them here. The exact solution to Eq. (5) is
$N_{\chi}=C_{c}\tau_{{\rm eq}}\tanh(t/\tau_{{\rm eq}})$ (6)
where $\tau_{{\rm eq}}=1/\sqrt{C_{c}C_{a}}$ is the equilibrium timescale. Once
$t\gg\tau_{{\rm eq}}$, $dN_{\chi}/dt\to 0$ and
$N_{\chi}\to\sqrt{C_{c}/C_{a}}$ according to Eq. (5). The total annihilation rate at
this stage only depends on the capture rate since
$\Gamma_{a}=C_{a}N_{\chi}^{2}=C_{c}$. Note that $C_{c}$ depends on
$\sigma_{\chi n}$ and $\sigma_{\chi n}\leq\sigma_{\chi n}^{{\rm geom}}\approx
10^{-44}\,{\rm cm}^{2}$ where $\sigma_{\chi n}^{{\rm geom}}$ is the geometric
cross section. In principle, the maximum capture rate is determined by
$C_{c}(\sigma_{\chi n}^{{\rm geom}})=C_{c}^{{\rm geom}}$. Besides, when DM
falls onto the NS surface, it is accelerated to $0.3c-0.5c$, so the non-
relativistic (NR) limit for calculating $\sigma_{\chi n}$ is not applicable.
Furthermore, one has to consider contributions from the axial-vector coupling
due to the $V-Z$ mass mixing given by $\mathcal{L}_{\rm mass}$. We have
thoroughly included these effects. A brief discussion on how to compute
$\sigma_{\chi n}$ with relativistic kinematics is given in Appendix A.
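As a quick numerical check of Eq. (6), one can integrate Eq. (5) directly and compare with the closed form; the sketch below works in rescaled units with $C_{c}=C_{a}=1$ (illustration only; the physical rates are vastly different).

```python
# Check of Eq. (6): integrate dN/dt = C_c - C_a N^2 and compare with
# N(t) = C_c * tau_eq * tanh(t / tau_eq), here in units where C_c = C_a = 1.
import numpy as np
from scipy.integrate import solve_ivp

C_c, C_a = 1.0, 1.0
tau_eq = 1.0 / np.sqrt(C_c * C_a)

sol = solve_ivp(lambda t, N: [C_c - C_a * N[0] ** 2], (0.0, 5.0), [0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)
t = np.linspace(0.0, 5.0, 6)
assert np.allclose(sol.sol(t)[0], C_c * tau_eq * np.tanh(t / tau_eq), atol=1e-5)
# For t >> tau_eq, N approaches sqrt(C_c / C_a), so Gamma_a = C_a N^2 = C_c.
```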
A NS cools continually due to neutrino and photon emission. Without extra
energy injection, the NS temperature drops as the star radiates away its heat.
However, if SM particles are produced by DM annihilation in the star, they can
act as a heat source and potentially prevent the star from cooling
indefinitely. Therefore, the evolution
of the NS interior temperature $T_{b}$ is governed by the equation
$\frac{dT_{b}}{dt}=\frac{-\epsilon_{\nu}-\epsilon_{\gamma}+\epsilon_{\chi}}{c_{V}},$
(7)
where $\epsilon_{\nu}\approx 2.1\times 10^{4}\,{\rm erg}\,{\rm cm}^{-3}\,{\rm
s}^{-1}\,(T_{b}/10^{7}\,{\rm K})^{8}$ is the neutrino emissivity,
$\epsilon_{\gamma}\approx 1.8\times 10^{14}\,{\rm erg}\,{\rm cm}^{-3}\,{\rm
s}^{-1}\,(T_{b}/10^{8}\,{\rm K})^{2.2}$ the photon emissivity,
$\epsilon_{\chi}$ the DM emissivity that is responsible for the heating from
DM annihilation and $c_{V}$ the NS heat capacity Kouvaris:2007ay .
Additionally, the surface temperature $T_{s}$ observed by a distant observer
is related to $T_{b}$ by $T_{s}\approx 8.7\times 10^{5}\,{\rm
K}\,(g_{s}/10^{14}\,{\rm cm}\,{\rm s}^{-2})^{1/4}(T_{b}/10^{8}\,{\rm
K})^{0.55}$ where $g_{s}=GM/R^{2}\approx 1.85\times 10^{14}\,{\rm cm\,s}^{-2}$
accounts for the redshift correction from the star’s surface gravity. It is
also pointed out that when $T_{b}<3700\,{\rm K}$, there is no distinction
between $T_{b}$ and $T_{s}$ Chen:2018ohx .
During each annihilation, a pair of DM particles releases $2m_{\chi}$ of
energy in the form of SM particles or dark bosons, depending on which channels
are kinematically allowed. The total energy release rate from DM is
$\mathcal{E}_{\chi}=2m_{\chi}\Gamma_{a}\sum_{i}b_{i}$, where $b_{i}$ is the
branching ratio of a specific channel, e.g., $e^{\pm},\mu^{\pm},\tau^{\pm}$ or
$q\bar{q}$, and $\sum_{i}b_{i}\leq 1$. The neutrino pair $\nu\bar{\nu}$ is
also a possible annihilation channel in the presence of $V-Z$ mass mixing, but
it does not contribute to the heating. In addition to annihilation, DM also
deposits its kinetic energy $E_{k}$ into the star through the capture process. This
has been realized as the kinetic heating Baryakhtar:2017dbj with the rate
$\mathcal{K}_{\chi}=C_{c}E_{k}=C_{c}m_{\chi}(\gamma-1)$ where
$\gamma=1/\sqrt{1-v^{2}}$ is the Lorentz factor.111Even if DM is not captured,
energy deposition still occurs as long as $\chi n$ scattering can happen. On
the other hand, the kinetic heating effect from such uncaptured DM is
relatively small and negligible in our calculation. Thus, DM emissivity
$\epsilon_{\chi}$ is given by
$\epsilon_{\chi}=\frac{\mathcal{E}_{\chi}+\mathcal{K}_{\chi}}{V},$ (8)
where $V$ is the NS volume.
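A minimal sketch of integrating Eq. (7) with the emissivities quoted above is given below; the heat capacity $c_{V}$ and the DM emissivity $\epsilon_{\chi}$ are placeholder values (the paper takes $c_{V}$ from Kouvaris:2007ay), so the output only illustrates how $T_{b}$ settles where $\epsilon_{\chi}$ balances the emission terms.

```python
# Sketch of the thermal evolution, Eq. (7), with the quoted emissivities.
# c_V and eps_chi below are placeholders, not fitted NS parameters.
import numpy as np
from scipy.integrate import solve_ivp

def eps_nu(T):      # neutrino emissivity, erg cm^-3 s^-1
    return 2.1e4 * (T / 1e7) ** 8

def eps_gamma(T):   # photon emissivity, erg cm^-3 s^-1
    return 1.8e14 * (T / 1e8) ** 2.2

eps_chi = 1e8       # placeholder DM emissivity, erg cm^-3 s^-1
c_V = 1e20          # placeholder heat capacity, erg cm^-3 K^-1

sec_per_gyr = 3.156e16
sol = solve_ivp(lambda t, T: [(eps_chi - eps_nu(T[0]) - eps_gamma(T[0])) / c_V],
                (0.0, 3 * sec_per_gyr), [1e9], method="LSODA")
print(f"T_b after 3 Gyr: {sol.y[0, -1]:.3e} K")
```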
## III Decays of dark boson
(a)
(b)
Figure 1: DM heating from dark boson production $\chi\bar{\chi}\to 2V$. Left:
$V$ decays into SM particles before it exits the star. Right: $V$ is self-
trapped due to multiple $\chi V$ scatterings and then decays.
Here we discuss the case of $V$ produced by DM annihilation. $V$ is usually
produced in a DM-rich environment. If $V$ subsequently scatters off the
surrounding DM multiple times, it could lose energy and become self-trapped;
it then decays promptly, as shown in Fig. 1b. However, such a self-trapping
effect is in general inefficient, since the $\chi V$ scattering length
$\ell_{\chi V}$ is much larger than the thermal radius $r_{\rm th}$. Hence the
scattering rate is strongly suppressed and irrelevant to the heating. We leave
the detailed discussion to Appendix C. Another trapping mechanism is the
scattering between $V$ and neutrons. However, the relevant cross section is
further suppressed by the factor $\tilde{\varepsilon}_{Z}^{4}$, and the
scattering length is expected to be much larger than the NS radius, so it is
safe to omit this effect in our calculation as well.
However, $V$ can decay into other SM particles before it propagates to the
surface as long as the decay length $\ell_{{\rm dec}}$ is shorter than
$R_{0}$. See Fig. 1a. The decay length is given by $\ell_{{\rm
dec}}=v\gamma\tau_{{\rm dec}}$ with $v\equiv\sqrt{1-m_{V}^{2}/m_{\chi}^{2}}$
the velocity of $V$ and $\tau_{{\rm dec}}\equiv\Gamma_{V}^{-1}$ the lifetime
of $V$ at rest where $\Gamma_{V}$ is the total decay width. Since $V$ is
produced on shell, we do not consider $V$ decaying back to $\chi$ due to
$m_{V}<m_{\chi}$. The probability for $V$ to convert into SM particles after a
propagation distance $r$ is
$F=1-e^{-r/\ell_{{\rm dec}}}.$ (9)
We took $r=R_{0}$ in the calculation. However, if neutrinos are the decay
products, they cannot be considered a heating source and must be subtracted.
By examining the numerical results for $F$, we found that $V$ decays before it
exits the star in most of the parameter space of interest. This implies that
$\chi\bar{\chi}\to 2V$ also plays an important role in NS heating; see
Appendix C for details. Generally speaking, a NS contains muons and electrons
that are degenerate. To enable the decay $V\to e^{+}e^{-}$, not only must
$m_{V}$ be heavier than $2m_{e}$, but the final kinetic energy carried by the
$e^{-}$ must also exceed the electron chemical potential to avoid Pauli
blocking. This condition has been implemented in our study.
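For illustration, the conversion probability of Eq. (9) can be evaluated as follows for a dark boson produced at rest via $\chi\bar{\chi}\to 2V$ (so $E_{V}=m_{\chi}$ and $\gamma=m_{\chi}/m_{V}$); the width $\Gamma_{V}$ below is a placeholder rather than a computed model value.

```python
# Toy evaluation of F = 1 - exp(-R0 / l_dec) with l_dec = v * gamma / Gamma_V
# in natural units, converted to cm via hbar*c. Gamma_V is a placeholder.
import numpy as np

hbar_c = 1.973e-14          # GeV * cm
m_chi, m_V = 1.0, 0.1       # GeV (so gamma = m_chi / m_V = 10)
Gamma_V = 1e-18             # GeV, placeholder total decay width
R0 = 1.2e6                  # NS radius, 12 km in cm

v = np.sqrt(1.0 - (m_V / m_chi) ** 2)
gamma = m_chi / m_V
l_dec = v * gamma * hbar_c / Gamma_V     # decay length in cm
F = 1.0 - np.exp(-R0 / l_dec)
print(f"l_dec = {l_dec:.2e} cm, F = {F:.3f}")   # l_dec < R0 -> F close to 1
```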
Given the information in this section, we conclude that even when
$\chi\bar{\chi}\to 2V$ dominates the annihilation channel for
$m_{V}<m_{\chi}$, the heating effect remains efficient due to $V$ decays. The
self-trapping, however, is generally unimportant in this paper since
$\ell_{\chi V}\gg r_{{\rm th}}$.
## IV Implication of DM on NS temperature
In this section, we describe how the NS surface temperature $T_{s}$ is
affected by DM annihilation. If $\epsilon_{\chi}$ is negligible, the standard
cooling mechanism gives $T_{s}\approx 100\,{\rm K}$ for a 3-Gyr-old NS. But
when $\epsilon_{\chi}$ is large enough to counterbalance
$\epsilon_{\gamma,\nu}$, $T_{s}$ can remain at a relatively higher value. We
present the numerical results of $T_{s}$ for both $\alpha_{\chi}=1$ and $0.01$
in Figs. 2 and 3, respectively. The DM density around the NS is assumed to be
the same as that of the solar system, $\rho_{\chi}=0.3\,{\rm GeV/cm^{3}}$,
since we aim at nearby isolated NSs. The DM mass scale is shown from
$100\,{\rm MeV}$ to $10^{6}\,{\rm MeV}$. Once $m_{\chi}\lesssim 100\,{\rm
MeV}$, all of the annihilation channels to fermions are Pauli blocked except
neutrinos. Nonetheless, there is no upper limit for the DM mass in a NS, but a
heavier $m_{\chi}$ results in a lower DM number density, which worsens the NS
sensitivity. In addition, Refs. Bramante:2017xlb ; Dasgupta:2019juq pointed
out that when $m_{\chi}\gtrsim\mathcal{O}(10-100)\,{\rm TeV}$, multiple
scatterings are required to capture the DM, implying that single-scattering
capture is inefficient. Thus, we restrict our discussion to sub-TeV DM, where
the NS has better sensitivity and can be complementary to current DM direct
searches.
In the following, we discuss the general trends of the numerical results for
$\alpha_{\chi}=1$, Fig. 2, unless specified otherwise; the conclusions apply
directly to $\alpha_{\chi}=0.01$. A simple way to understand the role of
$\alpha_{\chi}$ is that the dark-sector interactions are proportional to
$\alpha_{\chi}^{2}$ while DM-SM interactions are proportional to
$\alpha_{\chi}$. The derivations of these features of the scattering cross
sections for all interactions are given in the appendices.
The values for the parameter $\eta\equiv\varepsilon_{\gamma}/\varepsilon_{Z}$
from top to bottom are $1$ (combined,
$\varepsilon_{Z}=\varepsilon_{\gamma}\neq 0$), 0 (pure $V-Z$ mixing,
$\varepsilon_{\gamma}=0$) and $\infty$ (pure kinetic mixing,
$\varepsilon_{Z}=0$), respectively. From left to right, we have
$m_{V}/m_{\chi}=10$ (heavy mediator), $1$ (equal mass) and $0.1$ (light
mediator). $T_{s}$ is indicated by the color bar placed on the right and the
lowest temperature is $100\,{\rm K}$. Without annihilation, e.g. no anti-DM
exists, solely kinetic heating can raise $T_{s}$ up to $1750\,{\rm K}$. If DM
annihilation is included, $T_{s}$ can maximally reach to $3100\,{\rm K}$.
Various constraints are also plotted, including XENON1T Aprile:2018dbl , XENON
LDM (low mass DM) based on the ionization Aprile:2019xxb and of Migdal
Aprile:2019jmx effects, SIDM Randall:2007ph ; Walker:2011zu ;
BoylanKolchin:2011de ; BoylanKolchin:2011dk ; Elbert:2014bma , SN1987A
Sung:2019xie and beam dump experiments Riordan:1987aw ; Bross:1989mp ;
Abdullah:2018ykz . The parameter curve rendering DM annihilation cross section
at the thermal relic value $6\times 10^{-26}\,{\rm cm}^{3}\,{\rm s}^{-1}$ in
the early Universe is plotted in green on each figure. We have adopted the
method given in Ref. Cirelli:2016rnw for computing the Sommerfeld enhancement
factor. The DM relative velocity in the early Universe is taken to be $c/3$.
See Appendix B for details. Here we present the thermal relic cross section as
a reference point and refer the readers to Refs. ArkaniHamed:2008qn ;
Cassel:2009wt ; Lin:2011gj for detailed discussions. In addition, although
the captured DMs can have relatively large Sommerfeld enhancement due to low
velocities,222Assuming DMs are thermalized with the NS core where
$T_{\chi}=T_{b}$. Thus the mean velocity is about $\sqrt{T_{\chi}/m_{\chi}}$.
the enhanced $\sigma v$ only shortens the equilibrium timescale $\tau_{\rm
eq}$. When $t\gg\tau_{\rm eq}$, the total annihilation rate only depends on
the capture rate with $\Gamma_{A}=C_{c}$. The NS is generally insensitive to
the Sommerfeld enhancement as long as DM is in equilibrium.
Figure 2: NS surface temperature $T_{s}$ in the $m_{\chi}-\varepsilon$ plane.
We take the NS age to be 3 Gyr; the lowest $T_{s}$ is $100\,{\rm K}$ without
DM heating. All figures have $\alpha_{\chi}=1$ and
$\eta=\varepsilon_{\gamma}/\varepsilon_{Z}$. From top to bottom, $\eta=1,0$,
and $\infty$. From left to right, $m_{V}/m_{\chi}=10,1,0.1$. Various
constraints from XENON1T Aprile:2018dbl , XENON LDM Aprile:2019xxb ;
Aprile:2019jmx , SIDM Randall:2007ph ; Walker:2011zu ; BoylanKolchin:2011de ;
BoylanKolchin:2011dk ; Elbert:2014bma , SN1987A Sung:2019xie ; Sung:2021swd ,
beam dump experiments Riordan:1987aw ; Bross:1989mp ; Abdullah:2018ykz and
the parameter curve rendering thermal relic cross section are shown as well.
Figure 3: The same as Fig. 2 except $\alpha_{\chi}=0.01$.
### IV.1 Case for $m_{V}\geq m_{\chi}$
When $m_{V}\geq m_{\chi}$, only $\chi\bar{\chi}\to f\bar{f}$ is allowed. A dip
occurs in each plot of Fig. 2 with this mass ordering. The resonance is
caused by the pole of $\tilde{\varepsilon}_{Z}$, given by Eq. (16), at
$m_{V}=m_{Z}$, with $m_{Z}$ the SM $Z$ boson mass. In fact, the value of
$\tilde{\varepsilon}_{Z}$ at this point is
$-i(\varepsilon_{Z}+\varepsilon_{\gamma}\tan\theta_{W})m_{Z}/\Gamma_{Z}$,
which is enhanced by the factor $m_{Z}/\Gamma_{Z}$.
Thus, the DM-neutron scattering cross section $\sigma_{\chi n}$ depends on
$\tilde{\varepsilon}_{Z}$ and scales as
$\sigma_{\chi
n}\propto\frac{\alpha_{\chi}\tilde{\varepsilon}_{Z}^{2}}{m_{V}^{4}}\frac{m_{\chi}^{2}m_{n}^{2}}{(m_{\chi}+m_{n})^{2}}\min(\xi,1)$
(10)
in the NR limit; see Eq. (23) for reference. (In the numerical calculation,
we used the general expression for $\sigma_{\chi n}$, Eq. (18), whose
derivation is given in the same appendix. Nonetheless, Eq. (10), or Eq. (23),
is simpler and suitable for the discussion in the main text.) The last term
is the suppression factor due to Pauli blocking, where $\xi\sim q/\mu_{F}$,
with $q$ the momentum transfer during the scattering and $\mu_{F}$ the
neutron chemical potential.
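To make the scaling in Eq. (10) concrete, the following sketch evaluates it numerically, up to the overall proportionality constant; the neutron mass and the $\mu_{F}$ value are representative numbers rather than outputs of our NS profile.

```python
# A minimal sketch of the scaling in Eq. (10): the NR DM-neutron cross
# section with the Pauli-blocking factor min(xi, 1), xi ~ q / mu_F.
# Numerical prefactors are dropped (Eq. (10) is a proportionality), so
# the output is only meaningful as a ratio between parameter points.
import numpy as np

M_N = 939.0       # neutron mass [MeV]
MU_F = 200.0      # neutron chemical potential, O(200) MeV in the star

def sigma_chi_n_scaling(m_chi, eps_z_tilde, alpha_chi=1.0, mV_over_mchi=10.0):
    """Relative sigma_{chi n} following Eq. (10), up to an overall constant."""
    m_v = mV_over_mchi * m_chi
    # momentum transfer q ~ m_chi for m_chi << m_n (reduced-mass kinematics)
    q = m_chi * M_N / (m_chi + M_N)
    pauli = min(q / MU_F, 1.0)          # suppression factor min(xi, 1)
    return (alpha_chi * eps_z_tilde**2 / m_v**4
            * m_chi**2 * M_N**2 / (m_chi + M_N)**2 * pauli)

# Halving m_chi (at fixed m_V/m_chi) raises the scaling by roughly a
# factor of 2, approaching sigma ∝ 1/m_chi exactly for m_chi << m_n.
print(sigma_chi_n_scaling(100.0, 1e-4) / sigma_chi_n_scaling(200.0, 1e-4))
```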
In the equilibrium epoch, $t\gg\tau_{\rm eq}$, the total annihilation rate is
$\Gamma_{A}=C_{c}\propto\sigma_{\chi n}$. (We found that the equilibrium
condition holds in most of the parameter space in this work. However, in the
calculation we adopted $\Gamma_{A}=C_{a}N_{\chi}^{2}$, with $N_{\chi}$ given by
Eq. (6), instead of simply assuming $\Gamma_{A}=C_{c}$.) When
$\tilde{\varepsilon}_{Z}$ is at the resonance, $\sigma_{\chi n}$ is
drastically enhanced by the factor $m_{Z}^{2}/\Gamma_{Z}^{2}$, and so is the DM
heating resulting from the DM emissivity $\epsilon_{\chi}$. This accounts for the
dip at $m_{V}=m_{Z}$ in each figure.
On the other hand, DM heating for $m_{\chi}$ in the sub-GeV region is much
stronger. This can be understood as follows: for $m_{\chi}\ll m_{n}$ we have
$q\propto m_{\chi}$, so with $m_{V}/m_{\chi}$ held fixed, Eq. (10) gives
$\sigma_{\chi n}\propto\tilde{\varepsilon}_{Z}^{2}/m_{\chi}$. Hence a
smaller $m_{\chi}$ leads to a larger $\sigma_{\chi n}$ and thus more
effective DM heating. However, the effect of DM heating does not grow
indefinitely with $\tilde{\varepsilon}_{Z}$, since $\sigma_{\chi
n}\leq\sigma_{\chi n}^{\rm geom}\approx 10^{-44}\,{\rm cm}^{2}$. The maximum
$T_{s}$ caused by DM heating saturates when $\sigma_{\chi n}=\sigma_{\chi
n}^{\rm geom}$ and is around $3100\,{\rm K}$. This explains why, in our numerical
results in Fig. 2, $T_{s}$ does not increase further once
$\tilde{\varepsilon}_{Z}$ is sufficiently large for a given $m_{\chi}$.
In all plots of Fig. 2, DM heating becomes weaker, rather than scaling as
$1/m_{\chi}$, for $m_{\chi}\lesssim\mathcal{O}(170)\,{\rm MeV}$. Although DM is
kinematically capable of producing $e^{\pm}$ and $\mu^{\pm}$ in this mass range, the
chemical potentials of these particles are
$\mu_{F}^{e}\sim\mathcal{O}(170)\,{\rm MeV}$ and
$\mu_{F}^{\mu}\sim\mathcal{O}(70)\,{\rm MeV}$. All these channels are Pauli blocked,
and only pions formed by $q\bar{q}$ are allowed, down to $m_{\chi}<m_{\pi}$.
Moreover, in the presence of $V-Z$ mass mixing, neutrinos are also part of
the annihilation products and take a significant branching ratio in the DM
annihilation in this mass region, but neutrinos cannot contribute to the
heating. This explains why $T_{s}$ is much colder for
$m_{\chi}\lesssim\mathcal{O}(170)\,{\rm MeV}$. The DM heating in this region
is mainly due to kinetic heating: with $\sigma_{\chi n}=\sigma_{\chi n}^{\rm
geom}$, the resulting $T_{s}$ is around $1750\,{\rm K}$ from pure kinetic
heating.
The various $\eta$ values in Fig. 2 characterize the contributions from
$\varepsilon_{\gamma,Z}$ to $\tilde{\varepsilon}_{Z}$. (Since the neutron is
charge neutral, $Q=0$, the kinetic mixing term $Q\varepsilon_{\gamma}$ in
Eq. (15a) contributes nothing to $\sigma_{\chi n}$, hence $\sigma_{\chi
n}\propto\tilde{\varepsilon}_{Z}^{2}$. If protons in the NS are
considered, however, $\varepsilon_{\gamma}$ does contribute to the DM-proton
cross section $\sigma_{\chi p}$ through the non-vanishing
$Q\varepsilon_{\gamma}$.) The cases $\eta=1$ and $0$ are similar because, even for
$\varepsilon_{\gamma}\neq 0$, its effect on $\tilde{\varepsilon}_{Z}$ is
suppressed by $m_{V}^{2}/m_{Z}^{2}$, as seen from Eq. (16). For $\eta=1$, the
kinetic mixing can contribute comparably to the $V-Z$ mass mixing only when
$m_{V}>m_{Z}$. This is clearly seen in Fig. 2: the difference between
$\eta=1$ and $0$ is apparent only in the $m_{V}>m_{Z}$ region, i.e. to the right
of the dip. To the left of the dip, the contribution from
$\varepsilon_{\gamma}$ to $\tilde{\varepsilon}_{Z}$ for $\eta=1$ is
negligible.
For $\eta=\infty$, $\varepsilon_{Z}$ vanishes, so the only contribution to
$\tilde{\varepsilon}_{Z}$ comes from $\varepsilon_{\gamma}$. As discussed
earlier, the effect of the kinetic mixing term is suppressed by
$m_{V}^{2}/m_{Z}^{2}$, and thus $\sigma_{\chi
n}\propto\tilde{\varepsilon}_{Z}^{2}\propto\varepsilon_{\gamma}^{2}m_{V}^{4}/m_{Z}^{4}$.
The associated DM heating is in general much weaker than in the cases
$\eta=1$ and $0$. However, the advantage of $\eta=\infty$ is that no neutrinos
can be produced in the DM annihilation, owing to the absence of $V-Z$ mass
mixing, so the energy released from DM annihilation can be fully deposited into the
NS. This accounts for the higher $T_{s}$ compared with $\eta=1$ and $0$ at the
same $\sigma_{\chi n}$, although the difference is not large: numerical
calculation shows it is around tens to $\mathcal{O}(100)\,{\rm K}$.
### IV.2 Case for $m_{V}<m_{\chi}$
In the light-mediator case, the channel $\chi\bar{\chi}\to 2V$ dominates over
$\chi\bar{\chi}\to f\bar{f}$, since in general $\alpha_{\chi}\gg\varepsilon_{\gamma,Z}$.
As long as $F\sim 1$, $V$ fully decays into SM particles before
it exits the NS. The resulting heating from $\chi\bar{\chi}\to 2V$ with $V\to
f\bar{f}$ can be appreciable, as shown in the rightmost panels of Fig. 2. The
heated region in the $m_{V}<m_{\chi}$ case is much larger than in the
$m_{V}\gg m_{\chi}$ case, since a lighter $m_{V}$ induces a larger $\sigma_{\chi
n}$, as shown in Eq. (10). The effects of the different $\eta$'s are
similar to those in the previous subsection. When the DM mainly annihilates to
$2V$, the thermal relic cross section is controlled by $\alpha_{\chi}$ and
$m_{\chi}$ and is independent of $\varepsilon_{\gamma,Z}$. Hence the
thermal relic cross section only constrains $m_{\chi}$ once $\alpha_{\chi}$
and $m_{V}$ are fixed. For $\alpha_{\chi}=1$ and $0.01$, the $m_{\chi}$ values
yielding the thermal relic cross section are around $2\times 10^{8}\,{\rm
MeV}$ and $5\times 10^{5}\,{\rm MeV}$, respectively.
## V Detectability with future telescopes
Figure 4: The exposure time $t_{{\rm exp}}$ required for ${\rm SNR}=2$ with JWST. The
region enclosed by the red line indicates $t_{{\rm exp}}\leq 10^{5}\,{\rm s}$.
Since DM annihilation can significantly affect the NS surface temperature
$T_{s}$, we discuss the detectability of $T_{s}$ with the JWST and similar
future telescopes. The blackbody spectral flux density for temperature $T_{s}$ at
a given frequency $\nu$ is given by Baryakhtar:2017dbj
$f_{\nu}(\nu,T_{s},d)=\frac{4\pi^{2}\nu^{3}}{e^{2\pi\nu/k_{B}T_{s}}-1}\left(\frac{R_{0}\gamma}{d}\right)^{2}$
(11)
where $d$ is the distance between the NS and the Earth. Taking
$T_{s}=2000\,{\rm K}$ and $d=10\,{\rm pc}$ as an example, we have
$f_{\nu}\approx 0.84\,{\rm nJy}$ at $\nu^{-1}=2\,{\rm\mu m}$. The signal-to-
noise ratio (SNR) for a JWST-like telescope is proportional to
$f_{\nu}\sqrt{t_{{\rm exp}}}$, where $t_{{\rm exp}}$ is the exposure time. From
Ref. Gardner:2006ky , JWST provides imaging sensitivity from $0.9\,{\rm\mu m}$ to
$2.77\,{\rm\mu m}$ with its near-infrared camera (NIRCam). The F200W filter
centered at $\nu^{-1}=2\,{\rm\mu m}$ reaches ${\rm SNR}=10$ with
$f_{\nu}=10\,{\rm nJy}$ and $t_{{\rm exp}}=10^{4}\,{\rm s}$.
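As a rough cross-check of Eq. (11) and the SNR scaling, the sketch below evaluates the flux in SI units; the NS radius $R_{0}=11\,{\rm km}$ and the factor $\gamma=1.2$ are assumed representative values, not the exact inputs behind Fig. 4.

```python
# A back-of-the-envelope check of Eq. (11), written with explicit SI
# constants (the text uses natural units). R0 = 11 km and gamma = 1.2
# are assumed representative NS values, not taken from the paper.
import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants
PC = 3.086e16                              # parsec in meters

def f_nu_nJy(T_s, d_pc, wavelength_m=2e-6, R0=11e3, gamma=1.2):
    """Blackbody spectral flux density in nJy from a NS of radius R0*gamma."""
    nu = C / wavelength_m
    B_nu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T_s))
    F = np.pi * B_nu * (R0 * gamma / (d_pc * PC))**2   # W m^-2 Hz^-1
    return F / 1e-26 * 1e9                             # Jy -> nJy

def t_exp_snr(f_nu, snr=2.0, f_ref=10.0, snr_ref=10.0, t_ref=1e4):
    """Exposure time from SNR ∝ f_nu * sqrt(t), anchored to the F200W point."""
    return t_ref * (snr / snr_ref)**2 * (f_ref / f_nu)**2

f = f_nu_nJy(2000.0, 10.0)   # ~0.8 nJy, close to the quoted 0.84 nJy
print(f, t_exp_snr(f))       # t_exp ~ 7e4 s, below the 1e5 s threshold
```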
In Fig. 4, we plot the $t_{{\rm exp}}$ required to obtain ${\rm SNR}=2$ over the
$d-T_{s}$ plane. The region enclosed by the red line represents $t_{{\rm
exp}}<10^{5}\,{\rm s}$. Multiple filters are available, with $\nu^{-1}$ centered
at various values Gardner:2006ky ; we select the filter whose $\nu^{-1}$ best
matches the blackbody peak wavelength at $T_{s}$, which explains the zigzag
behavior in Fig. 4. In principle, for $\sigma_{\chi n}\sim\sigma_{\chi n}^{{\rm
geom}}$, kinetic heating alone can warm the NS up to $1750\,{\rm K}$ without DM
annihilation. For a NS located within 10 pc, JWST can achieve ${\rm SNR}=2$
with $t_{{\rm exp}}\leq 10^{5}\,{\rm s}$ for $T_{s}\geq 1750\,{\rm K}$.
## VI Summary and outlook
In this work we have investigated the new dynamics arising from the kinetic
mixing and $V-Z$ mass mixing between the dark gauge boson $V$ of the broken
$U(1)_{X}$ symmetry and the neutral gauge bosons of the SM. In particular, the $V-Z$ mass
mixing induces a resonance at $m_{V}\approx m_{Z}$, visible as the
pole of $\tilde{\varepsilon}_{Z}$ at $m_{V}=m_{Z}$. The axial-vector part of
the coupling between $V$ and SM fermions has been included in our
calculations. Since $\chi\bar{\chi}\to 2V$ dominates the annihilation for
$m_{V}<m_{\chi}$, $V$ can decay into a pair of SM fermions before it exits the NS
and induce NS heating in addition to $\chi\bar{\chi}\to f\bar{f}$. Although
this contribution appears naturally in the dark boson model considered here,
it is usually not included in model-independent analyses, such as the one
performed in Ref. Chen:2018ohx . We also demonstrated numerically that the NS can
constrain sub-GeV DM with feeble couplings to SM particles,
complementary to current direct searches. The detectability with reasonable
$t_{{\rm exp}}$ at JWST was discussed, and similar conclusions can be
drawn for future JWST-like telescopes.
We note that this work only considers $\chi n$ scattering in the capture rate.
This explains why the NS is not sensitive to the dark sector when
$\varepsilon_{Z}=0$ ($\eta=\infty$): the neutron interacts with DM only through the NC
interaction governed by $\tilde{\varepsilon}_{Z}$, and once $\varepsilon_{Z}=0$,
the NC interaction is strongly suppressed because the $\varepsilon_{\gamma}$ contribution to
$\tilde{\varepsilon}_{Z}$ is suppressed by $m_{V}^{2}/m_{Z}^{2}$. However, the NS
also contains protons, albeit with a rather small fraction. When
protons are included, the EM interaction participates in the capture of DM,
and the NS remains sensitive to the dark sector even for $\varepsilon_{Z}=0$. In
general, the NS sensitivity will be improved by including proton contributions. We
leave this for future studies.
## Appendix A DM-neutron interaction
Figure 5: Feynman diagram for DM-neutron scattering. The blob is an effective
vertex that includes both vector and axial-vector contributions from kinetic
mixing and $V-Z$ mass mixing.
When DM falls into the NS, it can scatter with neutrons via the exchange of the
dark boson $V$, as shown in Fig. 5. The kinetic mixing and $V-Z$ mass mixing
generate vector and axial-vector interactions between $V$ and the SM fermions. The
usual derivation of these interactions proceeds through the diagonalization of
both $\mathcal{L}_{\rm gauge}$ and $\mathcal{L}_{\rm mass}$ in Eqs. (2) and
(3), which gives rise to relations between the fields in the gauge basis and those
in the mass-eigenstate basis. However, since we are only interested in
interactions up to $\mathcal{O}(\varepsilon_{\gamma})$ or
$\mathcal{O}(\varepsilon_{Z})$, we do not need to perform the diagonalization;
instead we treat the mixing terms
$\varepsilon_{\gamma}B_{\mu\nu}V^{\mu\nu}/(2\cos\theta_{W})$ and
$\varepsilon_{Z}m_{Z}^{2}Z_{\mu}V^{\mu}$ as perturbations. These two mixing
terms generate the following two-point functions at tree level
$\displaystyle i\Pi^{\mu\nu}_{V\gamma}=i\varepsilon_{\gamma}k^{2}g^{\mu\nu},\qquad i\Pi^{\mu\nu}_{VZ}=-i(\varepsilon_{\gamma}\tan\theta_{W}k^{2}+\varepsilon_{Z}m_{Z}^{2})g^{\mu\nu},$
(12)
where $k$ is the four-momentum of $V$ entering the kinetic mixing or $V-Z$
mixing vertex. The EM coupling of $V$ to SM fermions then results from
multiplying the two-point function $i\Pi^{\mu\nu}_{V\gamma}$, the photon
propagator $iD^{\gamma}_{\alpha\mu}(k)$, and the electromagnetic coupling
$ieA_{\alpha}J^{\alpha}_{\rm EM}$, as shown in Fig. 6. This multiplication
leads to
$\displaystyle ieJ^{\alpha}_{\rm
EM}\frac{-ig_{\alpha\mu}}{k^{2}}i\varepsilon_{\gamma}k^{2}g^{\mu\nu}V_{\nu}=ie\varepsilon_{\gamma}J^{\nu}_{\rm
EM}V_{\nu}.$ (13)
Similarly, NC coupling of $V$ to SM fermions is given by multiplying the two-
point function $i\Pi^{\mu\nu}_{VZ}$, the $Z$ boson propagator
$iD^{Z}_{\alpha\mu}(k)$, and the NC coupling $igZ_{\alpha}J^{\alpha}_{\rm
NC}/\cos\theta_{W}$. This gives rise to
$\displaystyle\frac{ig}{\cos\theta_{W}}J^{\alpha}_{\rm
NC}\frac{-i}{k^{2}-m_{Z}^{2}+im_{Z}\Gamma_{Z}}\left(g_{\alpha\mu}-\frac{k_{\alpha}k_{\mu}}{m_{Z}^{2}}\right)(-i)(\varepsilon_{\gamma}\tan\theta_{W}k^{2}+\varepsilon_{Z}m_{Z}^{2})g^{\mu\nu}V_{\nu}$
(14) $\displaystyle=$ $\displaystyle\frac{-ig}{\cos\theta_{W}}J^{\nu}_{\rm
NC}V_{\nu}\frac{(\varepsilon_{\gamma}\tan\theta_{W}m_{V}^{2}+\varepsilon_{Z}m_{Z}^{2})}{(m_{V}^{2}-m_{Z}^{2}+im_{Z}\Gamma_{Z})}.$
Here we have used the physical conditions $k^{2}=m_{V}^{2}$ and
$k_{\mu}\epsilon^{\mu}_{V}=0$, and we have chosen the unitary gauge for the $Z$
boson propagator. Therefore, the interaction vertex between the dark boson and the
neutron in Fig. 5 has the following Lorentz structure
$ie\bar{\psi}_{n}\gamma^{\mu}(a_{f}+b_{f}\gamma^{5})\psi_{n}$ with
$\displaystyle a_{f}$ $\displaystyle=Q\varepsilon_{\gamma}+\frac{1}{\sin
2\theta_{W}}(I_{3}-2Q\sin^{2}\theta_{W})\tilde{\varepsilon}_{Z},$ (15a)
$\displaystyle b_{f}$ $\displaystyle=-\frac{I_{3}}{\sin
2\theta_{W}}\tilde{\varepsilon}_{Z},$ (15b)
where
$\tilde{\varepsilon}_{Z}=\frac{\varepsilon_{Z}+\varepsilon_{\gamma}\tan\theta_{W}(m_{V}^{2}/m_{Z}^{2})}{(1-m_{V}^{2}/m_{Z}^{2})^{2}+\Gamma^{2}_{Z}/m_{Z}^{2}}\left(1-\frac{m_{V}^{2}}{m_{Z}^{2}}-i\frac{\Gamma_{Z}}{m_{Z}}\right)$
(16)
and $\Gamma_{Z}$ is the $Z$ boson decay width, while $Q$ and $I_{3}$ are the
electric charge and the weak isospin, respectively. In Tab. 1, we list $Q$ and
$I_{3}$ for various particles. The values for the neutron are obtained by
summing the corresponding quantum numbers of its three valence quarks $udd$ in the
low-energy limit.
Figure 6: Feynman diagrams contributing to the coupling of dark boson $V$ to
SM fermions.
The mixing parameters $\varepsilon_{\gamma}$ and $\tilde{\varepsilon}_{Z}$ are
responsible for the EM and NC interactions, respectively. The EM interaction does not
contribute to $\sigma_{\chi n}$ since $Q=0$ for the neutron. On the other hand,
$\tilde{\varepsilon}_{Z}$ retains a feeble dependence on $\varepsilon_{\gamma}$,
with a suppression factor $m_{V}^{2}/m_{Z}^{2}$ when $m_{V}\ll m_{Z}$. This
explains why $\sigma_{\chi n}$ is still nonzero when $\varepsilon_{Z}=0$
($\eta=\infty$).
| $u$ | $d$ | $c$ | $s$ | $t$ | $b$ | $\ell$ | $\nu$
---|---|---|---|---|---|---|---|---
$Q$ | $\frac{2}{3}$ | $-\frac{1}{3}$ | $\frac{2}{3}$ | $-\frac{1}{3}$ | $\frac{2}{3}$ | $-\frac{1}{3}$ | $-1$ | $0$
$I_{3}$ | $\frac{1}{2}$ | $-\frac{1}{2}$ | $\frac{1}{2}$ | $-\frac{1}{2}$ | $\frac{1}{2}$ | $-\frac{1}{2}$ | $-\frac{1}{2}$ | $\frac{1}{2}$
Table 1: Values of $Q$ and $I_{3}$ for quarks, leptons and neutrinos.
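A short sketch of how Eqs. (15)-(16) and Table 1 combine into the effective neutron couplings; the electroweak inputs ($\sin^{2}\theta_{W}$, $m_{Z}$, $\Gamma_{Z}$) are standard values inserted here for illustration.

```python
# A sketch of Eqs. (15)-(16): the effective V couplings a_f, b_f built
# from the quantum numbers in Table 1. The neutron values are obtained
# by summing its valence quarks udd, as stated in the text.
import numpy as np

SIN2_TW = 0.231                      # sin^2(theta_W), PDG-like value (assumed)
M_Z, GAMMA_Z = 91187.6, 2495.2       # Z mass and width [MeV]

def eps_z_tilde(eps_z, eps_gamma, m_v):
    """Effective NC mixing parameter of Eq. (16) (complex)."""
    r = m_v**2 / M_Z**2
    tan_tw = np.sqrt(SIN2_TW / (1 - SIN2_TW))
    num = eps_z + eps_gamma * tan_tw * r
    den = (1 - r)**2 + GAMMA_Z**2 / M_Z**2
    return num / den * (1 - r - 1j * GAMMA_Z / M_Z)

def couplings(Q, I3, eps_z, eps_gamma, m_v):
    """a_f and b_f of Eqs. (15a)-(15b)."""
    sin_2tw = 2 * np.sqrt(SIN2_TW * (1 - SIN2_TW))
    ez = eps_z_tilde(eps_z, eps_gamma, m_v)
    a_f = Q * eps_gamma + (I3 - 2 * Q * SIN2_TW) / sin_2tw * ez
    b_f = -I3 / sin_2tw * ez
    return a_f, b_f

# Neutron = udd: Q = 2/3 - 1/3 - 1/3 = 0, I3 = 1/2 - 1/2 - 1/2 = -1/2.
a_n, b_n = couplings(Q=0.0, I3=-0.5, eps_z=1e-4, eps_gamma=0.0, m_v=1000.0)
print(abs(a_n), abs(b_n))   # Q = 0, so a_n carries no kinetic-mixing piece
```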
The spin-averaged $\chi n$ scattering amplitude is given by
$\displaystyle\overline{|\mathcal{M}_{\chi n}|^{2}}$
$\displaystyle=\frac{8\pi\alpha_{\chi}}{(t-m_{V}^{2})^{2}}\\{-4m_{n}^{2}[(b_{f}^{2}-a_{f}^{2})m_{\chi}^{2}+a_{f}^{2}u+b_{f}^{2}(s+u)]+2(a_{f}^{2}+3b_{f}^{2})m_{\chi}^{4}$
$\displaystyle\quad-4a_{f}^{2}um_{n}^{2}+a_{f}^{2}(t^{2}+2tu+2u^{2})+2(a_{f}^{2}-b_{f}^{2})m_{n}^{4}+b_{f}^{2}(s^{2}+u^{2})\\},$
(17)
where $s$, $t$ and $u$ are the Mandelstam variables. The DM scatters with neutrons
with its velocity boosted to $0.3c-0.6c$ by the NS gravity, so it must be treated
relativistically. The neutrons, however, can be treated as at rest, since their
chemical potential is $\mathcal{O}(200)\,{\rm MeV}$ in the star.
Following the method in Ref. Ilisie:2016jta , we can write the DM-neutron
scattering cross section as
$\sigma_{\chi
n}=\frac{1}{16\pi\lambda^{1/2}(s,m_{1}^{2},m_{2}^{2})\lambda^{1/2}(s,m_{3}^{2},m_{4}^{2})}\int_{t_{-}}^{t_{+}}\overline{|\mathcal{M}_{\chi
n}|^{2}}dt$ (18)
where
$\lambda(x,y,z)=x^{2}+y^{2}+z^{2}-2xy-2yz-2zx$ (19)
is the Källén function,
$t_{\pm}=\frac{1}{2}\sum_{i=1}^{4}m_{i}^{2}-\frac{s}{2}-\frac{1}{2s}(m_{1}^{2}-m_{2}^{2})(m_{3}^{2}-m_{4}^{2})\pm\frac{\lambda^{1/2}(s,m_{1}^{2},m_{2}^{2})\lambda^{1/2}(s,m_{3}^{2},m_{4}^{2})}{2s},$
(20)
and
$s=m_{1}^{2}+m_{2}^{2}+2E_{1}m_{2},$ (21)
where $m_{1}=m_{3}=m_{\chi}$ and $m_{2}=m_{4}=m_{n}$ for the $\chi n$
scattering. In Eq. (21), the energy $E_{1}=\gamma m_{1}$ is the total energy
carried by particle 1, which is DM.
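The kinematics of Eqs. (18)-(21) can be packaged as in the sketch below; a constant amplitude is used as a stand-in for Eq. (17), so the snippet illustrates the integration machinery rather than the full model.

```python
# A minimal numerical sketch of Eqs. (18)-(21): integrate a generic
# spin-averaged amplitude over t between the kinematic bounds t_+-.
# `msq` stands in for Eq. (17); a constant amplitude is used here as a
# placeholder so the machinery can be checked in isolation.
import numpy as np
from scipy.integrate import quad

def kallen(x, y, z):
    """Kallen function, Eq. (19)."""
    return x**2 + y**2 + z**2 - 2*x*y - 2*y*z - 2*z*x

def sigma(E1, m1, m2, m3, m4, msq):
    """Eq. (18) with s from Eq. (21) and t_+- from Eq. (20)."""
    s = m1**2 + m2**2 + 2 * E1 * m2          # target (particle 2) at rest
    lam12 = np.sqrt(kallen(s, m1**2, m2**2))
    lam34 = np.sqrt(kallen(s, m3**2, m4**2))
    mid = 0.5 * sum(m**2 for m in (m1, m2, m3, m4)) - s / 2 \
          - (m1**2 - m2**2) * (m3**2 - m4**2) / (2 * s)
    t_lo, t_hi = mid - lam12 * lam34 / (2 * s), mid + lam12 * lam34 / (2 * s)
    integral, _ = quad(msq, t_lo, t_hi)
    return integral / (16 * np.pi * lam12 * lam34)

# chi-n elastic scattering: m1 = m3 = m_chi, m2 = m4 = m_n; the DM is
# boosted by NS gravity, E1 = gamma * m_chi with gamma ~ 1.3 (assumed).
m_chi, m_n, gamma = 1000.0, 939.0, 1.3       # MeV
print(sigma(gamma * m_chi, m_chi, m_n, m_chi, m_n, msq=lambda t: 1.0))
```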
### A.1 Pauli blocking in the $\chi n$ scattering
Note that if the momentum transfer $\sqrt{-t}$ in Eq. (18) is smaller than the
Fermi momentum, the suppression from Pauli blocking takes effect. We include
this in the numerical calculation by incorporating the method of Ref.
Bell:2020jou . Our results agree with Ref. Bell:2020jou in the three
benchmark scenarios in which $\overline{|\mathcal{M}_{\chi n}|^{2}}$ is constant,
$t$-dependent, and $t^{2}$-dependent.
### A.2 Axial-vector contribution in the NR limit
If $\chi$ can be treated non-relativistically as well, we have
$s=m_{\chi}^{2}+m_{n}^{2}+2m_{\chi}m_{n}$,
$u=m_{\chi}^{2}+m_{n}^{2}-2m_{\chi}m_{n}$ and $t=0$. Therefore the amplitude
and the cross section become,
$\overline{|\mathcal{M}_{\chi n}^{{\rm NR}}|^{2}}=\frac{64\pi
a_{f}^{2}\alpha_{\chi}m_{\chi}^{2}m_{n}^{2}}{m_{V}^{4}}$ (22)
and
$\sigma_{\chi n}^{{\rm
NR}}=\frac{4a_{f}^{2}\alpha_{\chi}}{m_{V}^{4}}\frac{m_{\chi}^{2}m_{n}^{2}}{(m_{\chi}+m_{n})^{2}}$
(23)
which are independent of $b_{f}$, the coefficient that sets the strength of the
axial-vector coupling.
## Appendix B DM annihilation
(a) To SM particles
(b) To dark bosons
Figure 7: Various channels for DM annihilation
We can divide the DM annihilation into two categories, $m_{V}\geq m_{\chi}$ and
$m_{V}<m_{\chi}$. In the former case, DM can only annihilate into SM particles,
as shown in Fig. 7a. In the latter, as long as $g_{d}\gg e\varepsilon_{\gamma,Z}$,
the dominant annihilation products are two dark bosons $V$, as shown in Fig. 7b.
The amplitude for $\chi\bar{\chi}\to f\bar{f}$ is given by
$\displaystyle\overline{|\mathcal{M}_{\chi\bar{\chi}\to f\bar{f}}|^{2}}$
$\displaystyle=\frac{8\pi\alpha_{\chi}}{(s-m_{V}^{2})^{2}+m_{V}^{2}\Gamma_{V}^{2}}\\{a_{f}^{2}[-2(m_{f}^{2}+m_{\chi}^{2})(-m_{f}^{2}-m_{\chi}^{2}+2u)+s^{2}+2su+2u^{2}]$
$\displaystyle\quad+b_{f}^{2}[-4m_{\chi}^{2}(m_{f}^{2}+t+u)-2m_{f}^{4}+6m_{\chi}^{4}+t^{2}+u^{2}]\\}$
(24)
where $\Gamma_{V}$ is the $V$ decay width. Assuming the DM is at rest in the star,
the amplitude simplifies to
$\overline{|\mathcal{M}_{\chi\bar{\chi}\to
f\bar{f}}|^{2}}=\frac{128\pi\alpha_{\chi}}{(4m_{\chi}^{2}-m_{V}^{2})^{2}+m_{V}^{2}\Gamma_{V}^{2}}m_{\chi}^{4}\left[a_{f}^{2}\left(1+\frac{1}{2}\frac{m_{f}^{2}}{m_{\chi}^{2}}\right)+b_{f}^{2}\left(1-\frac{m_{f}^{2}}{m_{\chi}^{2}}\right)\right].$
(25)
The partial decay widths of $V$ are given by
$\Gamma_{V}=\frac{m_{V}}{12\pi}\sqrt{1-4\frac{m_{f}^{2}}{m_{V}^{2}}}\left[a_{f}^{2}\left(1+2\frac{m_{f}^{2}}{m_{V}^{2}}\right)+b_{f}^{2}\left(1-4\frac{m_{f}^{2}}{m_{V}^{2}}\right)\right]$
(26)
for $V\to f\bar{f}$ and
$\Gamma_{V}=\frac{\alpha_{\chi}}{3}m_{V}\sqrt{1-4\frac{m_{\chi}^{2}}{m_{V}^{2}}}\left(1+2\frac{m_{\chi}^{2}}{m_{V}^{2}}\right)$
(27)
for $V\to\chi\bar{\chi}$. Note that we have omitted the Heaviside theta
function $\theta(m_{V}-2m_{\chi,f})$ in the above expressions, but it is always
implemented in the calculation to ensure energy conservation.
Besides, when $m_{\chi}>m_{V}$, the channel $\chi\bar{\chi}\to 2V$ is allowed,
and its amplitude is
$\displaystyle\overline{|\mathcal{M}_{\chi\bar{\chi}\to 2V}|^{2}}$
$\displaystyle=-\frac{32\pi^{2}\alpha_{\chi}^{2}}{(t-m_{\chi}^{2})^{2}(u-m_{\chi}^{2})^{2}}\\{m_{V}^{4}[6m_{\chi}^{2}(t+u)-6m_{\chi}^{4}+t^{2}-8tu+u^{2}]$
$\displaystyle\quad+4m_{V}^{2}[m_{\chi}^{4}(t+u)-4m_{\chi}^{2}tu+tu(t+u)]-m_{\chi}^{4}(3t^{2}+14tu+3u^{2})$
$\displaystyle\quad+m_{\chi}^{2}(t^{3}+7t^{2}u+7tu^{2}+u^{3})+6m_{\chi}^{8}-tu(t^{2}+u^{2})\\}.$
(28)
In the NR limit,
$\overline{|\mathcal{M}_{\chi\bar{\chi}\to
2V}|^{2}}=256\pi^{2}\alpha_{\chi}^{2}\frac{m_{\chi}^{2}(m_{\chi}^{2}-m_{V}^{2})}{(m_{V}^{2}-2m_{\chi}^{2})^{2}}.$
(29)
Thus, the general expression for the annihilation cross section is obtained
using Fermi's golden rule,
$\sigma v=\frac{\overline{|\mathcal{M}|^{2}}}{32\pi
m_{\chi}^{2}}\sqrt{1-\frac{m_{f}^{2}}{m_{\chi}^{2}}}\theta(m_{\chi}-\mu_{F}^{f}),$
(30)
where $m_{f}$ is the final-state particle mass and $\mu_{F}^{f}$ the chemical
potential of fermion $f$ in the star. There is no chemical potential for the dark
boson $V$. Therefore, we arrive at
$(\sigma v)^{f\bar{f}}=4\alpha_{\chi}\kappa
m_{\chi}^{2}\sqrt{1-\frac{m_{f}^{2}}{m_{\chi}^{2}}}\theta(m_{\chi}-\mu_{F}^{f}),$
(31)
where
$\kappa=\frac{1}{(4m_{\chi}^{2}-m_{V}^{2})^{2}+m_{V}^{2}\Gamma_{V}^{2}}\left[a_{f}^{2}\left(1+\frac{1}{2}\frac{m_{f}^{2}}{m_{\chi}^{2}}\right)+b_{f}^{2}\left(1-\frac{m_{f}^{2}}{m_{\chi}^{2}}\right)\right]$
(32)
for $\chi\bar{\chi}\to f\bar{f}$ and
$(\sigma
v)^{2V}=8\pi\alpha_{\chi}^{2}\sqrt{1-\frac{m_{V}^{2}}{m_{\chi}^{2}}}\frac{(m_{\chi}^{2}-m_{V}^{2})}{(m_{V}^{2}-2m_{\chi}^{2})^{2}}$
(33)
for $\chi\bar{\chi}\to 2V$. The total annihilation cross section is the sum of
both,
$\sigma v=(\sigma v)^{f\bar{f}}+(\sigma v)^{2V}.$ (34)
We note that the second term contributes only when $m_{V}<m_{\chi}$.
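A compact sketch of Eqs. (31)-(34) follows; the couplings $a_{f}$, $b_{f}$, the width $\Gamma_{V}$ and the chemical potential $\mu_{F}^{f}$ are treated as inputs, with placeholder values in the usage lines (in the paper they come from Eqs. (15), (26)-(27) and the NS profile).

```python
# A sketch of the NR annihilation cross sections, Eqs. (31)-(34),
# in natural units (MeV^-2).
import numpy as np

def sigma_v_ff(m_chi, m_v, m_f, a_f, b_f, gamma_v, mu_f, alpha_chi=1.0):
    """(sigma v) for chi chibar -> f fbar, Eqs. (31)-(32)."""
    if m_chi <= mu_f or m_chi <= m_f:       # Pauli blocking / threshold
        return 0.0
    r = m_f**2 / m_chi**2
    kappa = (a_f**2 * (1 + r / 2) + b_f**2 * (1 - r)) \
            / ((4 * m_chi**2 - m_v**2)**2 + m_v**2 * gamma_v**2)
    return 4 * alpha_chi * kappa * m_chi**2 * np.sqrt(1 - r)

def sigma_v_2v(m_chi, m_v, alpha_chi=1.0):
    """(sigma v) for chi chibar -> 2V, Eq. (33); open only for m_V < m_chi."""
    if m_v >= m_chi:
        return 0.0
    return 8 * np.pi * alpha_chi**2 * np.sqrt(1 - m_v**2 / m_chi**2) \
           * (m_chi**2 - m_v**2) / (m_v**2 - 2 * m_chi**2)**2

# Light mediator: the 2V channel dominates the Eq. (34) total.
m_chi, m_v = 1000.0, 100.0                   # MeV (illustrative values)
total = sigma_v_ff(m_chi, m_v, 0.511, 1e-4, 1e-4, 1.0, 170.0) \
        + sigma_v_2v(m_chi, m_v)
print(total)
```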
## Appendix C Dark boson in the star
Dark bosons can be produced from DM annihilation once $m_{V}<m_{\chi}$. This
channel might be thought to have a feeble effect on the heating, since $V$ interacts
weakly with the NS medium and could escape without a trace. However, we find
that, depending on the strength of $\varepsilon_{\gamma,Z}$, $V$ can decay
into SM particles before it reaches the surface of the star. When
the decay length $\ell_{{\rm dec}}$ is much smaller than the star's radius,
the total energy released from the annihilation is fully deposited into the
star; see Fig. 1a. We also examine the case in which $V$ is produced in the DM-rich
region at the star's center: $V$ can undergo multiple scatterings with
the surrounding DM and remain self-trapped until it decays, see Fig. 1b, which is
another way to extract energy from $V$. We discuss both effects in the
following.
### C.1 Decay length
Figure 8: Fraction $F$ of dark boson decays into SM particles that contribute
to the heating effect.
The dark boson decay length with time dilation effect is given by
$\ell_{{\rm dec}}=v\gamma\tau_{{\rm dec}},$ (35)
where $v=\sqrt{1-m_{V}^{2}/m_{\chi}^{2}}$ is the $V$ velocity and $\tau_{{\rm
dec}}=\Gamma_{V}^{-1}$ the $V$ lifetime at rest. Let us assume that $V$ is
produced at the center of the star, so that its propagation distance is $R_{0}$.
Fig. 8 presents $F$ defined in Eq. (9), i.e., the fraction of $V$ converting
into SM particles after traveling a distance $r=R_{0}$, as a function of
$\varepsilon_{Z,\gamma}$ and $m_{\chi}$ for $m_{V}=0.1m_{\chi}$. We have
subtracted the neutrino contributions from $F$ since they cannot generate heat.
Since the branching ratio of $V$ decays to neutrinos is nonzero in the presence of
$V-Z$ mass mixing, $F$ is generally smaller than $1$ for $\eta=1$. For
$\eta=\infty$, no neutrinos can be produced, so $F$ can reach unity.
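Since Eq. (9) is not reproduced in this appendix, the sketch below assumes the exponential conversion form $F=1-e^{-R_{0}/\ell_{\rm dec}}$, consistent with the description above; the NS radius in natural units and the width $\Gamma_{V}$ are illustrative numbers.

```python
# A sketch of the decay-length logic, Eq. (35). The exponential form
# F = 1 - exp(-R0 / l_dec) for the converted fraction is an assumption
# here (Eq. (9) is not shown), with the neutrino branching subtracted
# as described in the text.
import numpy as np

# NS radius ~ 11 km in natural units (1 MeV^-1 ~ 1.973e-13 m), assumed.
R0 = 5.6e16

def decay_length(m_chi, m_v, gamma_v):
    """l_dec = v * gamma * tau, Eq. (35), with tau = 1 / Gamma_V."""
    v = np.sqrt(1 - m_v**2 / m_chi**2)   # V velocity from 2-body kinematics
    gamma = m_chi / m_v                  # E_V = m_chi for chi chibar -> 2V
    return v * gamma / gamma_v

def heating_fraction(m_chi, m_v, gamma_v, br_nu=0.0):
    """Fraction of V decays depositing heat within r = R0 (assumed form)."""
    l_dec = decay_length(m_chi, m_v, gamma_v)
    return (1.0 - np.exp(-R0 / l_dec)) * (1.0 - br_nu)

# Pure kinetic mixing (eta = infinity): no neutrino channel, F can reach 1.
print(heating_fraction(m_chi=1000.0, m_v=100.0, gamma_v=1e-14, br_nu=0.0))
```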
In these figures, the chemical potential of the electron is
$\mu_{F}^{e}\sim\mathcal{O}(170)\,{\rm MeV}$. For a dark boson at rest with
$m_{V}\leq\mu_{F}^{e}$, $V\to e^{+}e^{-}$ can be Pauli blocked even for
$m_{V}\geq 2m_{e}$. On the other hand, if $V$ is highly boosted as a result of
heavy DM annihilation, $V\to e^{+}e^{-}$ is not Pauli blocked as long as
$m_{\chi}\geq\mu_{F}^{e}$. Therefore, two conditions are required to enable
$V\to f\bar{f}$ decays: first $m_{\chi}\geq\mu_{F}^{f}$, and second
$m_{V}\geq 2m_{f}$.
### C.2 Dark boson-DM interaction length
Figure 9: $\chi V$ scattering via $s$ and $t$ channels.
Feynman diagrams contributing to $\chi V$ scattering are shown in Fig. 9 and
the amplitude is given by
$\displaystyle\overline{|\mathcal{M}_{\chi V}|^{2}}$
$\displaystyle=\frac{64\pi^{2}}{3}\frac{\alpha_{\chi}^{2}}{(s-m_{\chi}^{2})^{2}(t-m_{\chi}^{2})^{2}}\\{m_{V}^{4}[6m_{\chi}^{2}(s+t)-6m_{\chi}^{4}+s^{2}-8st+t^{2}]$
$\displaystyle\quad-
m_{\chi}^{4}(3s^{2}+14st+3t^{2})+m_{\chi}^{2}(s^{3}+7s^{2}t+7st^{2}+t^{3})$
$\displaystyle\quad+4m_{V}^{2}[m_{\chi}^{4}(s+t)-4stm_{\chi}^{2}+st(s+t)]+6m_{\chi}^{8}-st(s^{2}+t^{2})\\}.$
(36)
To compute the scattering cross section $\sigma_{\chi V}$, it is reasonable to
assume the DM is at rest, whereas $V$ is produced with relativistic velocity since
$m_{\chi}>m_{V}$. We follow the procedure of Eqs. (18)-(21) and set
$m_{1}=m_{3}=m_{V}$ and $m_{2}=m_{4}=m_{\chi}$. Thus,
$\sigma_{\chi
V}=\frac{1}{16\pi\lambda(s,m_{V}^{2},m_{\chi}^{2})}\int_{t_{-}}^{t_{+}}\overline{|\mathcal{M}_{\chi
V}|^{2}}dt.$ (37)
Note that $\chi V$ scattering is not subject to Pauli blocking, since the DM does
not become degenerate in the presence of annihilation.
Figure 10: The ratio $\ell_{\chi V}/r_{{\rm th}}$ for $\eta=1$ and $\infty$.
We take $\alpha_{\chi}=1$ and $T_{\chi}=1000\,{\rm K}$ in the calculation.
The $\chi V$ scattering length $\ell_{\chi V}$ is given by
$\ell_{\chi V}=(n_{\chi}\sigma_{\chi V})^{-1},$ (38)
with $n_{\chi}\equiv N_{\chi}/V_{\chi}$ the average DM number density. The
volume characterizing DMs in NS is $V_{\chi}=4\pi r_{{\rm th}}^{3}/3$ where
$r_{{\rm th}}\approx 2.4\times 10^{3}\,{\rm
cm}\,\left(\frac{T_{\chi}}{10^{5}\,{\rm K}}\frac{10\,{\rm
MeV}}{m_{\chi}}\right)^{1/2}$ (39)
is the thermal radius. If $\ell_{\chi V}\ll r_{{\rm th}}$, $V$ can scatter
with the surrounding DM multiple times and gradually lose its kinetic energy.
However, our numerical results show that $\ell_{\chi V}\gg r_{\rm th}$ in
essentially all of the parameter space of interest. In Fig. 10, we take
$\alpha_{\chi}=1$ and $T_{\chi}=1000\,{\rm K}$; the choice $\alpha_{\chi}<1$ makes
$\ell_{\chi V}$ even longer owing to the weaker $\chi V$ interaction. For $\eta=1$,
the region with $\ell_{\chi V}/r_{{\rm th}}<1$ occurs for $m_{\chi}\lesssim 300\,{\rm MeV}$.
However, even if $V$ can be self-trapped there, it hardly decays into particles other
than neutrinos, because the allowed channels, e.g. $e^{\pm}$ and $\mu^{\pm}$,
are Pauli blocked. For $\eta=\infty$, only a very small parameter space leads
to $\ell_{\chi V}/r_{{\rm th}}<1$. Therefore, we conclude that the self-
trapping of $V$ is insignificant, and hence only $V$ decays contribute to the
energy injection.
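A sketch of the comparison between $\ell_{\chi V}$ and $r_{\rm th}$, Eqs. (38)-(39), is given below; $N_{\chi}$ and $\sigma_{\chi V}$ are placeholders here, since in the paper they follow from Eq. (6) and Eq. (37).

```python
# A sketch of Eqs. (38)-(39): the thermal radius and the chi-V
# scattering length. N_chi and sigma_chi_V are illustrative inputs.
import numpy as np

def r_th_cm(T_chi_K, m_chi_MeV):
    """Thermal radius, Eq. (39), in cm."""
    return 2.4e3 * np.sqrt((T_chi_K / 1e5) * (10.0 / m_chi_MeV))

def ell_over_rth(N_chi, sigma_chi_V_cm2, T_chi_K, m_chi_MeV):
    """l_chiV / r_th with n_chi = N_chi / (4 pi r_th^3 / 3), Eq. (38)."""
    r = r_th_cm(T_chi_K, m_chi_MeV)
    n_chi = N_chi / (4 * np.pi * r**3 / 3)
    return 1.0 / (n_chi * sigma_chi_V_cm2) / r

# With T_chi = 1000 K as in Fig. 10; N_chi and sigma are assumed numbers.
# The ratio comes out >> 1, i.e. self-trapping is insignificant.
print(ell_over_rth(N_chi=1e38, sigma_chi_V_cm2=1e-40,
                   T_chi_K=1e3, m_chi_MeV=1e3))
```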
## References
* (1) G. Aad et al. [ATLAS Collaboration], Eur. Phys. J. C 75, 299 (2015) [Erratum ibid 75, 408 (2015)] [arXiv:1502.01518 [hep-ex]].
* (2) J. Abdallah et al., Phys. Dark Univ. 9-10, 8 (2015) [arXiv:1506.03116 [hep-ph]].
* (3) J. Aalbers et al. [DARWIN Collaboration], JCAP 1611, 017 (2016) [arXiv:1606.07001 [astro-ph.IM]].
* (4) D. S. Akerib et al. [LUX Collaboration], Phys. Rev. Lett. 118, 021303 (2017) [arXiv:1608.07648 [astro-ph.CO]].
* (5) C. Amole et al. [PICO Collaboration], Phys. Rev. Lett. 118, 251301 (2017) [arXiv:1702.07666 [astro-ph.CO]].
* (6) D. S. Akerib et al. [LUX Collaboration], Phys. Rev. Lett. 118, 251302 (2017) [arXiv:1705.03380 [astro-ph.CO]].
* (7) E. Aprile et al. [XENON Collaboration], Phys. Rev. Lett. 119, 181301 (2017) [arXiv:1705.06655 [astro-ph.CO]].
* (8) E. Aprile et al. [XENON Collaboration], Phys. Rev. Lett. 121, 111302 (2018) [arXiv:1805.12562 [astro-ph.CO]].
* (9) E. Aprile et al. [XENON], Phys. Rev. Lett. 123, 241803 (2019) [arXiv:1907.12771 [hep-ex]].
* (10) E. Aprile et al. [XENON], Phys. Rev. Lett. 123, 251801 (2019) [arXiv:1907.11485 [hep-ex]].
* (11) M. G. Aartsen et al. [IceCube PINGU Collaboration], arXiv:1401.2046 [physics.ins-det].
* (12) K. Choi et al. [Super-Kamiokande Collaboration], Phys. Rev. Lett. 114, 141301 (2015) [arXiv:1503.04858 [hep-ex]].
* (13) M. G. Aartsen et al. [IceCube Collaboration], Eur. Phys. J. C 77, 146 (2017) [arXiv:1612.05949 [astro-ph.HE]].
* (14) M. Aguilar et al. [AMS Collaboration], Phys. Rev. Lett. 115, 211101 (2015).
* (15) M. Ackermann et al. [Fermi-LAT Collaboration], Astrophys. J. 840, 43 (2017) [arXiv:1704.03910 [astro-ph.HE]].
* (16) G. Ambrosi et al. [DAMPE Collaboration], Nature 552, 63 (2017) [arXiv:1711.10981 [astro-ph.HE]].
* (17) C. Kouvaris, Phys. Rev. D 77, 023006 (2008) [arXiv:0708.2362 [astro-ph]].
* (18) A. de Lavallaz and M. Fairbairn, Phys. Rev. D 81, 123521 (2010) [arXiv:1004.0629 [astro-ph.GA]].
* (19) C. Kouvaris and P. Tinyakov, Phys. Rev. D 82, 063531 (2010) [arXiv:1004.0586 [astro-ph.GA]].
* (20) M. Baryakhtar, J. Bramante, S. W. Li, T. Linden and N. Raj, Phys. Rev. Lett. 119, 131801 (2017) [arXiv:1704.01577 [hep-ph]].
* (21) N. Raj, P. Tanedo and H. B. Yu, Phys. Rev. D 97, 043006 (2018) [arXiv:1707.09442 [hep-ph]].
* (22) C. S. Chen and Y. H. Lin, JHEP 1808, 069 (2018) [arXiv:1804.03409 [hep-ph]].
* (23) N. F. Bell, G. Busoni and S. Robles, JCAP 1809, 018 (2018) [arXiv:1807.02840 [hep-ph]].
* (24) J. F. Acevedo, J. Bramante, R. K. Leane and N. Raj, arXiv:1911.06334 [hep-ph].
* (25) A. Joglekar, N. Raj, P. Tanedo and H. B. Yu, arXiv:1911.13293 [hep-ph].
* (26) W. Y. Keung, D. Marfatia and P. Y. Tseng, JHEP 07, 181 (2020) [arXiv:2001.09140 [hep-ph]].
* (27) C. Kouvaris and P. Tinyakov, Phys. Rev. D 83, 083512 (2011) [arXiv:1012.2039 [astro-ph.HE]].
* (28) S. C. Leung, M. C. Chu and L. M. Lin, Phys. Rev. D 84, 107301 (2011) [arXiv:1111.1787 [astro-ph.CO]].
* (29) C. Kouvaris, Phys. Rev. Lett. 108, 191301 (2012) [arXiv:1111.4364 [astro-ph.CO]].
* (30) S. D. McDermott, H. B. Yu and K. M. Zurek, Phys. Rev. D 85, 023519 (2012) [arXiv:1103.5472 [hep-ph]].
* (31) T. Güver, A. E. Erkoca, M. Hall Reno and I. Sarcevic, JCAP 1405, 013 (2014) [arXiv:1201.2400 [hep-ph]].
* (32) J. Bramante, K. Fukushima and J. Kumar, Phys. Rev. D 87, 055012 (2013) [arXiv:1301.0036 [hep-ph]].
* (33) J. Bramante, K. Fukushima, J. Kumar and E. Stopnitzky, Phys. Rev. D 89, 015010 (2014) [arXiv:1310.3509 [hep-ph]].
* (34) C. Kouvaris and P. Tinyakov, Phys. Rev. D 90, 043512 (2014) [arXiv:1312.3764 [astro-ph.SR]].
* (35) M. I. Gresham and K. M. Zurek, Phys. Rev. D 99, 083008 (2019) [arXiv:1809.08254 [astro-ph.CO]].
* (36) B. Grinstein, C. Kouvaris and N. G. Nielsen, Phys. Rev. Lett. 123, 091601 (2019) [arXiv:1811.06546 [hep-ph]].
* (37) R. Garani, Y. Genolini and T. Hambye, JCAP 1905, 035 (2019) [arXiv:1812.08773 [hep-ph]].
* (38) G. L. Lin and Y. H. Lin, JCAP 08, 022 (2020) [arXiv:2004.05312 [hep-ph]].
* (39) A. Nelson, S. Reddy and D. Zhou, JCAP 07 (2019), 012 [arXiv:1803.03266 [hep-ph]].
* (40) J. Ellis, G. Hütsi, K. Kannike, L. Marzola, M. Raidal and V. Vaskonen, Phys. Rev. D 97 (2018), 123007 [arXiv:1804.01418 [astro-ph.CO]].
* (41) A. Bauswein, G. Guo, J. H. Lien, Y. H. Lin and M. R. Wu, [arXiv:2012.11908 [astro-ph.HE]].
* (42) R. K. Leane, T. Linden, P. Mukhopadhyay and N. Toro, [arXiv:2101.12213 [astro-ph.HE]].
* (43) S. Tulin and H. B. Yu, Phys. Rept. 730, 1 (2018) [arXiv:1705.02358 [hep-ph]].
* (44) S. W. Randall, M. Markevitch, D. Clowe, A. H. Gonzalez and M. Bradac, Astrophys. J. 679, 1173 (2008) [arXiv:0704.0261 [astro-ph]].
* (45) M. G. Walker and J. Penarrubia, Astrophys. J. 742, 20 (2011) [arXiv:1108.2404 [astro-ph.CO]].
* (46) M. Boylan-Kolchin, J. S. Bullock and M. Kaplinghat, Mon. Not. Roy. Astron. Soc. 415, L40 (2011) [arXiv:1103.0007 [astro-ph.CO]].
* (47) M. Boylan-Kolchin, J. S. Bullock and M. Kaplinghat, Mon. Not. Roy. Astron. Soc. 422, 1203 (2012) [arXiv:1111.2048 [astro-ph.CO]].
* (48) O. D. Elbert, J. S. Bullock, S. Garrison-Kimmel, M. Rocha, J. Oñorbe and A. H. Peter, Mon. Not. Roy. Astron. Soc. 453, 29 (2015) [arXiv:1412.1477 [astro-ph.GA]].
* (49) B. Holdom, Phys. Lett. 166B, 196 (1986)
* (50) P. Galison and A. Manohar, Phys. Lett. 136B, 279 (1984)
* (51) R. Foot, Int. J. Mod. Phys. D 13, 2161 (2004) [astro-ph/0407623].
* (52) D. Feldman, B. Kors and P. Nath, Phys. Rev. D 75, 023503 (2007) [hep-ph/0610133].
* (53) N. Arkani-Hamed, D. P. Finkbeiner, T. R. Slatyer and N. Weiner, Phys. Rev. D 79, 015014 (2009) [arXiv:0810.0713 [hep-ph]].
* (54) M. Pospelov and A. Ritz, Phys. Lett. B 671, 391 (2009) [arXiv:0810.1502 [hep-ph]].
* (55) K. S. Babu, C. F. Kolda and J. March-Russell, Phys. Rev. D 57, 6788 (1998) [hep-ph/9710441].
* (56) H. Davoudiasl, H. S. Lee and W. J. Marciano, Phys. Rev. D 85, 115019 (2012) [arXiv:1203.2947 [hep-ph]].
* (57) H. Davoudiasl, H. S. Lee, I. Lewis and W. J. Marciano, Phys. Rev. D 88, no. 1, 015022 (2013) [arXiv:1304.4935 [hep-ph]].
* (58) J. P. Gardner, J. C. Mather, M. Clampin, R. Doyon, M. A. Greenhouse, H. B. Hammel, J. B. Hutchings, P. Jakobsen, S. J. Lilly and K. S. Long, et al. Space Sci. Rev. 123, 485 (2006) [arXiv:astro-ph/0606175 [astro-ph]] and JWST pocket guide
* (59) W. Skidmore et al. [TMT International Science Development Teams & TMT Science Advisory Committee], Res. Astron. Astrophys. 15, 1945-2140 (2015) [arXiv:1505.01195 [astro-ph.IM]].
* (60) N. F. Bell, G. Busoni, S. Robles and M. Virgato, JCAP 09, 028 (2020) [arXiv:2004.14888 [hep-ph]].
* (61) N. F. Bell, G. Busoni, S. Robles and M. Virgato, [arXiv:2010.13257 [hep-ph]].
* (62) J. Bramante, A. Delgado and A. Martin, Phys. Rev. D 96 (2017), 063002 [arXiv:1703.04043 [hep-ph]].
* (63) B. Dasgupta, A. Gupta and A. Ray, JCAP 08 (2019), 018 [arXiv:1906.04204 [hep-ph]].
* (64) E. M. Riordan, M. W. Krasny, K. Lang, P. De Barbaro, A. Bodek, S. Dasu, N. Varelas, X. Wang, R. G. Arnold and D. Benton, et al. Phys. Rev. Lett. 59, 755 (1987)
* (65) A. Bross, M. Crisler, S. H. Pordes, J. Volk, S. Errede and J. Wrbanek, Phys. Rev. Lett. 67, 2942 (1991)
* (66) M. Abdullah, J. B. Dent, B. Dutta, G. L. Kane, S. Liao and L. E. Strigari, Phys. Rev. D 98, 015005 (2018) [arXiv:1803.01224 [hep-ph]].
* (67) A. Sung, H. Tu and M. R. Wu, Phys. Rev. D 99, 121305 (2019) [arXiv:1903.07923 [hep-ph]].
* (68) A. Sung, G. Guo and M. R. Wu, [arXiv:2102.04601 [hep-ph]].
* (69) M. Cirelli, P. Panci, K. Petraki, F. Sala and M. Taoso, JCAP 05 (2017), 036 [arXiv:1612.07295 [hep-ph]].
* (70) S. Cassel, J. Phys. G 37 (2010), 105009 [arXiv:0903.5307 [hep-ph]].
* (71) T. Lin, H. B. Yu and K. M. Zurek, Phys. Rev. D 85 (2012), 063503 [arXiv:1111.0293 [hep-ph]].
* (72) V. Ilisie, Concepts in Quantum Field Theory, Springer (2016)
# Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual
Active Speaker Detection
###### Abstract
Audio-visual active speaker detection (AVASD) is well-developed, and is now an
indispensable front-end for several multi-modal applications. However, to the
best of our knowledge, the adversarial robustness of AVASD models has not been
investigated, let alone effective defenses against such attacks. In
this paper, we are the first to reveal the vulnerability of AVASD models under
audio-only, visual-only, and audio-visual adversarial attacks through
extensive experiments. Moreover, we propose a novel audio-visual
interaction loss (AVIL) that makes it difficult for attackers to find feasible
adversarial examples under an allocated attack budget. The loss pushes the
inter-class embeddings, namely the non-speech and speech clusters, to be
sufficiently dispersed and disentangled, while pulling the intra-class
embeddings as close together as possible to keep them compact. Experimental results
show that the AVIL outperforms adversarial training by 33.14% mAP under
multi-modal attacks.
Index Terms— Audio-visual active speaker detection, multi-modal adversarial
attack, adversarial robustness
## 1 Introduction
Active speaker detection (ASD) seeks to detect who is speaking in a visual
scene containing one or more speakers [1, 2]. Recently, audio-visual ASD
(AVASD), which integrates audio-visual information by learning the
relationship between speech and facial motion, has effectively improved the
performance of ASD and has become indispensable as a front-end for
multi-modal applications. However, to the best of our knowledge, whether the
AVASD models are robust against adversarial attacks has not been investigated
previously, let alone effective defense methods against such multi-modal
attacks.
Crafting imperceptible adversarial noise, adding it to clean
samples to generate adversarial samples, and then manipulating AI models
with such samples is called an _adversarial attack_ [3]. Previous adversarial
attacks usually focus on single-modal applications. For visual-modal attacks,
Szegedy et al. first proposed to attack state-of-the-art image classification
models [3] in 2013. For the speech modality, models including automatic
speaker verification (ASV) systems [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18], anti-spoofing models for ASV [19, 20, 21, 22, 23], and automatic
speech recognition models [24, 25, 26, 27, 28, 29, 30] have also been shown
vulnerable to adversarial attacks. For audio-visual learning, Li et al. [31]
studied the audio-visual adversarial robustness of a general sound event detection
model, but only considered single- or multi-modal attacks under a single attack method.
Given that AVASD is now ubiquitously implemented as a front-end for a variety
of multi-modal downstream models, adversarial noise may manipulate the AVASD
front-end into committing errors, which then accumulate and propagate to the
downstream applications. Hence it is of high priority to mitigate the
adversarial vulnerability of AVASD and ensure robustness against such attacks.
This paper investigates the susceptibility of AVASD models to adversarial
attacks and then proposes a novel defense method to improve their robustness.
Our contributions are twofold: 1) To the best of our knowledge, this is the
first work to reveal, through extensive experiments, the vulnerability of AVASD
models under three kinds of attacks: audio-only, visual-only, and audio-visual
adversarial attacks. 2) We also propose a novel audio-visual interaction loss
(AVIL), which pushes the inter-class embeddings, namely the non-speech and
speech clusters, to be sufficiently disentangled, and pulls the intra-class
embeddings as close together as possible. Expanding the inter-class dispersion
and enhancing the intra-class compactness make it difficult for attackers to
find feasible adversarial samples that cross the decision boundary within the
allocated attack budget. The experimental results illustrate that the proposed
audio-visual interaction loss effectively strengthens the robustness of AVASD
models.
Fig. 1: (a) The TalkNet framework. $x_{a}$ and $x_{v}$ are the audio and
visual inputs, respectively. $\otimes$ denotes the concatenation procedure.
$\mathcal{L}_{CE_{a}}$, $\mathcal{L}_{CE_{v}}$ and $\mathcal{L}_{CE_{av}}$ are
the cross entropy losses for audio-only prediction head, visual-only
prediction head, and audio-visual prediction head, respectively. (b) The
audio-visual attack framework for AVASD. $x_{a}$ and $x_{v}$ are the audio and
visual samples respectively, $y$ is the ground-truth for the multi-sensory
input $\\{x_{a},x_{v}\\}$. $\delta_{a}$ and $\delta_{v}$ are the adversarial
perturbations for $x_{a}$ and $x_{v}$, respectively. $\tilde{y}$ is the
prediction for the adversarial samples $\\{\tilde{x}_{a},\tilde{x}_{v}\\}$.
The adversarial attack aims at maximizing the difference between $y$ and
$\tilde{y}$.
## 2 Background
### 2.1 Audio-Visual Active Speaker Detection
The ASD task has been studied using audio, video, or the fusion of both. For
audio, the voice activity detector [32, 33] is often used to detect the
presence of speech. However, in real-world scenarios, the speech signal from
the microphones is easily mixed with overlapping speech and background noise,
which will hinder the effectiveness of voice activity detection. The visual
part [34, 35] mainly analyzes the face and upper body of a person to determine
whether the person is speaking, but the performance is limited due to some
non-speech activities, e.g. licking lips, eating, and grinning. Audio-visual
processing combines the audio and visual parts [36, 37] and allows learning
across modalities about the relationship between speech and facial motion.
With valuable support from sizeable datasets, e.g. AVA-Active Speaker, and the
AVA Challenge series launched in 2018, a variety of high-performance models for
AVASD have emerged recently [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]. In
real-world user authentication systems, AVASD can be used as a front-end task
to support security verification for speaker verification [49]. For an AVASD
system, there are four typical cases: speech without the target speaker, no
audible speaker, the target speaker without speech, and speech with the target
speaker. Only the last case is labeled as speaking. Attackers may use
single-modal attack methods, or combine them, to make AVASD produce wrong
predictions in the other three cases, which is dangerous. However, the
adversarial robustness of AVASD models has not yet been investigated.
### 2.2 Adversarial Attacks
An adversarial attack manipulates a well-trained model into giving wrong
predictions using an adversarial sample that is imperceptible to humans
compared with the original (unmodified) counterpart. Mathematically, given a
clean sample $x$ and the ground-truth label $y$, attack algorithms seek a
sufficiently small perturbation $\delta$ such that $\tilde{x}=x+\delta$, where
$\tilde{x}$ is the adversarial sample that fools the model into producing the
wrong prediction $\tilde{y}$. A suitable $\delta$ is found by solving the
following objective:
$\displaystyle\mathop{\arg\max}_{\delta}\mathcal{L}(\tilde{x},y,\theta),$ (1)
$\displaystyle s.t.||\delta||_{p}\leq\epsilon,$
where $\mathcal{L}(\cdot)$ denotes the objective function pre-defined by the
attackers, which is usually set to maximize the difference between $y$ and the
model’s final prediction given $\tilde{x}$, $\epsilon$ is the allowed
perturbation budget, and $||\cdot||_{p}$ denotes the $p$-norm, which is
usually considered to be a $l_{\infty}$-ball or $l_{2}$-ball centered at $x$.
We evaluate the AVASD models’ vulnerability with $l_{\infty}$-boundary
adversarial noise, as it is widely used as a standard evaluation boundary for
adversarial robustness [50]. To solve the optimization problem as shown in
Equation 1, we choose three widely used attack methods to evaluate the
robustness of AVASD models, and the details are summarized below.
Basic Iterative Method (BIM) BIM [51] is a method with iterative updates as
follows:
$\displaystyle
x_{m}=clip_{\epsilon}(x_{m-1}+\alpha\cdot\text{sign}({\nabla_{x_{m-1}}}\mathcal{L}(x_{m-1},y,\theta))),$
(2) for $m=1,\ldots,M$,
where $x_{0}$ starts from the original sample $x$, $\alpha$ is the step size,
$M$ is the number of iterations and the $clip_{\epsilon}(\cdot)$ function
applies element-wise clipping to make
$||x_{m-1}-x||_{\infty}\leq\epsilon,\epsilon\geq 0\in\mathbb{R}$. The
perturbed example $x_{M}$ is the final adversarial example.
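A minimal PyTorch sketch of the BIM update in Equation 2 follows; `model` and `loss_fn` stand for the attacked network and the attacker's objective, and the element-wise clip enforces the $l_{\infty}$ constraint. MIM and PGD below reuse this loop, adding gradient momentum and random restarts, respectively.

```python
# A sketch of the BIM iteration of Equation 2 (not tied to any specific
# model); the clip keeps each iterate in the l_inf ball of radius eps.
import torch

def bim_attack(model, loss_fn, x, y, eps, alpha, num_iters):
    x_adv = x.clone().detach()
    for _ in range(num_iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()      # ascend the loss
            # element-wise clip: ||x_adv - x||_inf <= eps
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
    return x_adv.detach()
```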
Momentum-based Iterative Method (MIM) MIM [52] is an improved version of BIM.
MIM introduces a momentum term into the iterative process to avoid BIM falling
into a local minimum, thus improving the attack performance over BIM.
Projected Gradient Descent (PGD) PGD [50] is also a variant of BIM. PGD
randomly initializes the adversarial noise $\delta$ $\gamma$ times and
conducts BIM-style attacks to generate $\gamma$ candidate perturbations; the
candidate with the best attack performance is then chosen to form the final
adversarial sample.
Fig. 2: The audio-visual interaction loss. Circles and crosses denote
speech and non-speech embeddings, respectively. Blue and red represent the
audio and visual embeddings, respectively. The centers are drawn with bold
borders.
## 3 Methodology
### 3.1 AVASD Model – TalkNet
We adopt TalkNet [48] for our case study to characterize the adversarial
robustness of AVASD. TalkNet is a fully end-to-end, state-of-the-art model for
AVASD. It takes a sequence of video frames $x_{v}$, consisting of cropped face
sequences, and the corresponding audio sequence
$x_{a}$ as inputs. The output probability denotes how likely the person is
speaking in the given video frame. TalkNet comprises a feature representation
front-end and a speaker detection back-end classifier, as shown in Fig.
1.(a). The front-end consists of an audio temporal encoder and a video
temporal encoder to extract audio embeddings $e_{a,i}$ and visual embeddings
$e_{v,i}$ for the $i^{th}$ frame. In the back-end, the audio and visual
embeddings are aligned via inter-modality cross-attention and then
concatenated to obtain the joint audio-visual embeddings $z_{a,i}$ and
$z_{v,i}$ for the $i^{th}$ frame. Then, a self-attention network is applied
after the cross-attention network to model the audio-visual temporal
information. Finally, a fully-connected layer with a softmax is implemented to
project the output of the self-attention network to a sequence of ASD labels.
The predicted label sequence is compared with the ground-truth label sequence
by cross-entropy loss ($\mathcal{L}_{CE_{av}}$):
$\displaystyle\mathcal{L}_{CE_{av}}=-\frac{1}{T}\sum_{t=1}^{T}(y_{t}\cdot\log s_{t}+(1-y_{t})\cdot\log(1-s_{t})),$ (3)
where $y_{t}$ and $s_{t}$ are the ground-truth and predicted score for the
$t^{th}$ frame, and $T$ is the total frames for one sample of video data.
During training, TalkNet utilizes two additional prediction heads, for the audio
embeddings and visual embeddings after the cross-attention module, to predict
the ASD label sequences, as shown in the speaker detection back-end of
Fig. 1.(a). The additional outputs are used to calculate a
weighted loss, and the final training loss is:
$\displaystyle\mathcal{L}_{CE_{all}}=\mathcal{L}_{CE_{av}}+0.4\times\mathcal{L}_{CE_{a}}+0.4\times\mathcal{L}_{CE_{v}},$
(4)
where $\mathcal{L}_{CE_{a}}$ and $\mathcal{L}_{CE_{v}}$ denote the losses of the
audio-only and visual-only prediction heads, respectively. The coefficient 0.4
follows TalkNet [48]. During inference, only the prediction head
after the self-attention is utilized. For further details of this
setting, please refer to the TalkNet paper [48].
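For concreteness, the weighted objective of Equation 4 can be written in a few lines of PyTorch, assuming the three heads output frame-level scores $s_{av}$, $s_{a}$, $s_{v}$ in $[0,1]$ and binary labels $y$.

```python
# A small sketch of Equation 4; the per-head scores are assumed to be
# sigmoid outputs aligned with the frame-level labels.
import torch.nn.functional as F

def talknet_loss(s_av, s_a, s_v, y):
    """L_CE_all = L_CE_av + 0.4 * L_CE_a + 0.4 * L_CE_v (Eqs. 3-4)."""
    bce = F.binary_cross_entropy   # averages over frames, matching Eq. 3
    return bce(s_av, y) + 0.4 * bce(s_a, y) + 0.4 * bce(s_v, y)
```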
### 3.2 Multi-Modal Attacks
Let $x_{a}$ be an audio input, $x_{v}$ be the visual input and $y$ be the
corresponding ground-truth label for the multisensory input: {$x_{a}$,
$x_{v}$}. We divide the audio-visual adversarial attack into three categories:
the audio-only attack generates the audio adversarial example $\tilde{x}_{a}$,
the visual-only attack generates the visual adversarial example
$\tilde{x}_{v}$ and the audio-visual attack generates multi-modal adversarial
examples: $\tilde{x}_{a}$, $\tilde{x}_{v}$. To force a well-trained audio-
visual model to make wrong predictions with corresponding perturbations being
as imperceptible as possible, the objective function for multi-modal attacks
is as follows:
$\displaystyle\mathop{\arg\max}_{\delta_{a},\delta_{v}}\mathcal{L}(\tilde{x}_{a},\tilde{x}_{v},y),$
(5) $\displaystyle s.t.\ ||\delta_{a}||_{p}\leq\epsilon_{a},\ \
||\delta_{v}||_{p}\leq\epsilon_{v},$
where $\tilde{x}_{a}=x_{a}+\delta_{a}$, $\tilde{x}_{v}=x_{v}+\delta_{v}$,
$\mathcal{L}(\cdot)$ is the objective function to make the outputs of the
audio-visual model as different as possible to $y$, $||\cdot||_{p}$ is the
$p$-norm, and $\epsilon_{a}$ and $\epsilon_{v}$ are audio and visual
perturbation budgets. In the case of an audio-only attack, the perturbation
budget $\epsilon_{v}$ is equal to 0, and in the case of a visual-only attack,
the perturbation budget $\epsilon_{a}$ is equal to 0. In the case of audio-
visual attacks, both audio and visual inputs will be perturbed. Fig. 1(b)
illustrates the audio-visual adversarial attack framework. Different
strategies to search for $\delta$, which consists of $\delta_{a}$ and
$\delta_{v}$, result in different adversarial attack methods. Note that our
multi-modal attack is jointly optimized over the audio-visual modalities rather
than optimizing each independently; a sketch is given after this paragraph.
This paper adopts three widely used attack methods for their attack
effectiveness and affordable execution time given our resources: BIM, MIM, and PGD.
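The sketch below illustrates the joint optimization of Equation 5 (BIM-style for concreteness): both perturbations are updated from the gradient of a single shared loss, with the per-modality budgets following $\epsilon_{a}=\epsilon_{av}\times 10^{-4}$ and $\epsilon_{v}=\epsilon_{av}\times 10^{-1}$; the step sizes are illustrative.

```python
# A sketch of the joint audio-visual attack of Equation 5: delta_a and
# delta_v are updated together from one shared loss, not independently.
import torch

def multimodal_bim(model, loss_fn, x_a, x_v, y, eps_av,
                   alpha_scale=0.2, iters=10):
    eps_a, eps_v = eps_av * 1e-4, eps_av * 1e-1   # paper's budget convention
    xa_adv, xv_adv = x_a.clone().detach(), x_v.clone().detach()
    for _ in range(iters):
        xa_adv.requires_grad_(True)
        xv_adv.requires_grad_(True)
        loss = loss_fn(model(xa_adv, xv_adv), y)   # one joint objective
        g_a, g_v = torch.autograd.grad(loss, (xa_adv, xv_adv))
        with torch.no_grad():
            xa_adv = xa_adv + alpha_scale * eps_a * g_a.sign()
            xv_adv = xv_adv + alpha_scale * eps_v * g_v.sign()
            # per-modality l_inf clips
            xa_adv = torch.max(torch.min(xa_adv, x_a + eps_a), x_a - eps_a)
            xv_adv = torch.max(torch.min(xv_adv, x_v + eps_v), x_v - eps_v)
    return xa_adv.detach(), xv_adv.detach()
```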
We also set up two attack scenarios: the training-aware and inference-aware
attack scenarios. In both, the attackers have full access to the model
internals, including model architectures, parameters, and gradients. In
addition, inference-aware attackers know exactly the inference procedure of the
AVASD model; in other words, they know the prediction head adopted for
inference is the audio-visual head after the self-attention, as shown in
Fig. 1(a), and they conduct adversarial attacks based on the loss in
Equation 3. The inference-aware attack scenario is more practical, as it relies
on the real inference procedure. Training-aware attackers additionally know the
training loss of the AVASD model and conduct adversarial attacks using
Equation 4. Training-aware attacks can craft even more dangerous attacks, as
they adopt all three prediction heads to find the adversarial perturbation.
Unless specified otherwise, all the experiments are conducted under the
training-aware attack scenario since it is more dangerous. We also performed the
experiments for the inference-aware scenario and observed the same trend;
comparison results between training-aware and inference-aware attacks are shown
in Section 4.4.
### 3.3 Audio-Visual Interaction Loss
In this section, we first introduce the proposed audio-visual interaction loss
(AVIL) and the implementation details. Then we will present the rationale of
the proposed method.
Implementation Procedure of AVIL. Suppose we have $K$ frames for one batch,
and let $K_{s}$ and $K_{n}$ be the speech and non-speech frame numbers,
respectively. Let $\mathbb{S}$ and $\mathbb{N}$ denote the index sets for
speech and non-speech. We can get the four centers as below:
$\displaystyle
c_{a\text{-}s}=\frac{1}{K_{s}}\sum_{i\in\mathbb{S}}e_{a,i}\qquad$
$\displaystyle c_{a\text{-}ns}=\frac{1}{K_{n}}\sum_{i\in\mathbb{N}}e_{a,i}$
(6) $\displaystyle
c_{v\text{-}s}=\frac{1}{K_{s}}\sum_{i\in\mathbb{S}}e_{v,i}\qquad$
$\displaystyle c_{v\text{-}ns}=\frac{1}{K_{n}}\sum_{i\in\mathbb{N}}e_{v,i},$
where $c_{a\text{-}s}$, $c_{a\text{-}ns}$, $c_{v\text{-}s}$, $c_{v\text{-}ns}$
denote the centers for audio speech embeddings, audio non-speech embeddings,
visual speech embeddings, visual non-speech embeddings, respectively. The
centers are denoted with bold borders as shown in Fig. 2.
Then we can define the four audio-visual interaction losses:
* •
Intra-modality inter-class dispersion (Fig. 2.(a)):
$\displaystyle\mathcal{L}_{1}=cos(c_{a\text{-}s},c_{a\text{-}ns})+cos(c_{v\text{-}s},c_{v\text{-}ns}),$
(7)
where $cos$ denotes the cosine similarity.
* •
Intra-modality intra-class dissimilarity (Fig. 2.(b)):
$\displaystyle\mathcal{L}_{2}=$
$\displaystyle-(\frac{1}{K_{s}}\sum_{i\in\mathbb{S}}(cos(c_{a\text{-}s},e_{a,i})+cos(c_{v\text{-}s},e_{v,i}))$
(8)
$\displaystyle+\frac{1}{K_{n}}\sum_{i\in\mathbb{N}}(cos(c_{a\text{-}ns},e_{a,i})+cos(c_{v\text{-}ns},e_{v,i})))$
* •
Inter-modality intra-class dissimilarity (center-based), as shown in Fig. 2.(c):
$\displaystyle\mathcal{L}_{3}=-(cos(c_{a\text{-}s},c_{v\text{-}s})+cos(c_{a\text{-}ns},c_{v\text{-}ns}))$
(9)
* •
Inter-modality intra-class distance (sample-based) as shown in Fig. 2.(d):
$\displaystyle\mathcal{L}_{4}=\frac{1}{K_{s}}\sum_{i\in\mathbb{S}}||e_{v,i}-e_{a,i}||_{2}+\frac{1}{K_{n}}\sum_{i\in\mathbb{N}}||e_{v,i}-e_{a,i}||_{2},$
(10)
where $e_{a,i}$ and $e_{v,i}$ denote the audio and visual embeddings of the
$i^{th}$ frame; frames with $i\in\mathbb{S}$ are speech embeddings and those
with $i\in\mathbb{N}$ are non-speech embeddings.
To alleviate the adversarial noise, the four losses above are added to the
training objective of the AVASD model, and the final objective function is
formulated as:
$\displaystyle\mathcal{L}_{avil}=\mathcal{L}_{CE_{all}}+\sum_{j=1}^{4}\lambda_{j}\cdot\mathcal{L}_{j},$
(11)
where the $\lambda_{j}$ are hyperparameters. Note that if
$\lambda_{j}=0$ for $j=1,2,3,4$, $\mathcal{L}_{avil}$ reduces to
$\mathcal{L}_{CE_{all}}$, the training loss of the original TalkNet. For
training the model with AVIL, we simply set
$\lambda_{1}=\lambda_{2}=\lambda_{3}=\lambda_{4}=0.1$. Our aim is to show the
effectiveness of the four audio-visual interaction losses, rather than to
exhaustively tune the hyperparameters to squeeze out further defense
performance.
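A PyTorch sketch of the four terms in Equations 7-10 is given below; variable names mirror the text, and this is an illustration rather than the authors' released implementation.

```python
# A sketch of the AVIL terms for one batch of frame-level embeddings
# e_a, e_v of shape (K, D) with a boolean speech mask.
import torch
import torch.nn.functional as F

def avil_terms(e_a, e_v, is_speech):
    """Compute L1..L4 of Eqs. (7)-(10)."""
    cos = lambda u, v: F.cosine_similarity(u, v, dim=-1)
    s, n = is_speech, ~is_speech
    # Class centers, Eq. (6); keepdim so they broadcast against (K_s, D).
    c_as, c_ans = e_a[s].mean(0, keepdim=True), e_a[n].mean(0, keepdim=True)
    c_vs, c_vns = e_v[s].mean(0, keepdim=True), e_v[n].mean(0, keepdim=True)
    # L1: intra-modality inter-class dispersion, Eq. (7)
    l1 = (cos(c_as, c_ans) + cos(c_vs, c_vns)).sum()
    # L2: intra-modality intra-class dissimilarity, Eq. (8)
    l2 = -((cos(c_as, e_a[s]) + cos(c_vs, e_v[s])).mean()
           + (cos(c_ans, e_a[n]) + cos(c_vns, e_v[n])).mean())
    # L3: inter-modality intra-class dissimilarity (center-based), Eq. (9)
    l3 = -(cos(c_as, c_vs) + cos(c_ans, c_vns)).sum()
    # L4: inter-modality intra-class distance (sample-based), Eq. (10)
    l4 = ((e_v[s] - e_a[s]).norm(dim=-1).mean()
          + (e_v[n] - e_a[n]).norm(dim=-1).mean())
    return l1, l2, l3, l4

# Final objective, Eq. (11): L_CE_all + sum_j 0.1 * L_j over these terms.
```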
Rationale of AVIL. Adversarial attacks threaten the AVASD model by
maximizing the loss functions, e.g. Equation 3 and Equation 4, pushing the
output far away from its original decision region [3, 53]. For
example, after an adversarial attack, the output for a speech frame moves
away from the correct region, namely "speech", and becomes non-speech. It is
therefore reasonable that high inter-class dispersion and intra-class
compactness boost a model's robustness, as they make it hard for
attackers to find feasible adversarial perturbations, within a given budget,
that push genuine samples across the decision boundary.
Minimizing $\mathcal{L}_{1}$ equips the model with better discrimination
between speech and non-speech embeddings, resulting in a higher inter-class
difference from the model's perspective. Minimizing
$\mathcal{L}_{2}$ and $\mathcal{L}_{3}$ (which, given their negative signs,
maximizes the corresponding intra-class similarities) and minimizing
$\mathcal{L}_{4}$ force the model to render more compact intra-class features.
Incorporating $\mathcal{L}_{1},\mathcal{L}_{2},\mathcal{L}_{3},\mathcal{L}_{4}$
into the training process thus urges the model to learn both discriminative
inter-class features and compact intra-class features, making the model less
susceptible to adversarial perturbations. As shown in Table 1, the four losses
significantly improve the robustness of the models.
Fig. 3: Adversarial attack performance on AVASD models. (a) White-box and
black-box attackers under the multi-modal attack with the PGD method. (b)
Single-modal and multi-modal attacks under a white-box attacker with the PGD
method. (c) Different attack algorithms under a white-box attacker with the
multi-modal attack. The attack budgets of the audio and visual modalities are
$\epsilon_{a}=\epsilon_{av}\times 10^{-4}$ and $\epsilon_{v}=\epsilon_{av}\times
10^{-1}$, respectively.
## 4 Experiment
### 4.1 Experimental setup
We use TalkNet [48] to investigate the adversarial robustness of AVASD and
verify the effectiveness of our proposed method to alleviate adversarial
attacks. We reproduce and modify the TalkNet based on the official TalkNet
GitHub repository to attack and defend. To conduct gradient-based adversarial
attacks, including BIM, MIM, and PGD, we revise the feature extraction and
data augmentation steps using the PyTorch library to make the entire model
pipeline differentiable. For the dataset, we use the AVA Active Speaker
dataset [1], which contains 29,723 video samples for training. Since the
ground-truth labels for the testing set are not available to the public, we
use the validation set with 8,015 samples as the evaluation set. The lengths
of videos range from 1 to 10 seconds and their facial tracks are also
provided. We follow the official evaluation plan and evaluate performance with
mean average precision (mAP) [1]. The revised TalkNet achieves 92.58% mAP on
the evaluation set, which is slightly higher than the original paper [48]. As
the adversarial attack is time- and resource-consuming, we randomly selected
450 genuine samples (225 speaking and 225 non-speaking) with the correct
predictions to conduct adversarial attacks. Since the imperceptible attack budgets of the audio and visual modalities differ by several orders of magnitude, we introduce a single parameter $\epsilon_{av}$ for ease of exposition. The per-modality budgets are related to $\epsilon_{av}$ by $\epsilon_{a}=\epsilon_{av}\times 10^{-4}$ and $\epsilon_{v}=\epsilon_{av}\times 10^{-1}$, respectively.
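As an illustration, a minimal PyTorch sketch of one possible multi-modal PGD attacker with these per-modality budgets is given below; the interface model(audio, video) and the step-size fraction are hypothetical placeholders rather than our exact implementation.

```python
import torch

def pgd_multimodal(model, audio, video, label, loss_fn,
                   eps_av=5.0, steps=10, step_frac=0.25):
    # Per-modality L_inf budgets: eps_a = eps_av * 1e-4, eps_v = eps_av * 1e-1.
    eps_a, eps_v = eps_av * 1e-4, eps_av * 1e-1
    delta_a = torch.zeros_like(audio, requires_grad=True)
    delta_v = torch.zeros_like(video, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(audio + delta_a, video + delta_v), label)
        ga, gv = torch.autograd.grad(loss, [delta_a, delta_v])
        with torch.no_grad():
            delta_a += step_frac * eps_a * ga.sign()  # ascend the loss
            delta_v += step_frac * eps_v * gv.sign()
            delta_a.clamp_(-eps_a, eps_a)             # project into budget
            delta_v.clamp_(-eps_v, eps_v)
    return (audio + delta_a).detach(), (video + delta_v).detach()
```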
### 4.2 The Model Vulnerability under Multi-modality Attacks
Fig. 3 illustrates the attack performance on AVASD under both single-modality
and audio-visual attacks, with three attack algorithms in both white-box and
black-box attack scenarios. The blue line is the baseline, where the genuine
samples are fed directly into the TalkNet model without any attacks. To
investigate the vulnerability of AVASD under both black-box and white-box
scenarios, we also trained two models, ncTalkNet and specTalkNet. ncTalkNet
represents the TalkNet model without the cross attention module, as shown in
Fig. 1 (a). specTalkNet denotes the TalkNet by replacing the audio features
with linear spectrograms rather than MFCCs adopted by the original TalkNet.
White-box and Black-box Attackers. In Fig. 3 (a), there are three settings.
The TalkNet-TalkNet denotes the white-box scenario, that is, both the model
for generating adversarial samples and the target model are TalkNet. The
ncTalkNet-TalkNet and specTalkNet-TalkNet are the black-box scenarios, in
which the substitute models for generating adversarial samples are ncTalkNet and specTalkNet, respectively, and the target model is TalkNet. White-box attackers achieve effective attack performance, substantially degrading the mAP of TalkNet, while black-box attackers can barely manipulate it. TalkNet-TalkNet degrades the mAP from 100% to 65.2% when $\epsilon_{av}=5$ under the multi-modal attack with the PGD method, but in the same setting, black-box attackers are almost ineffective.
Single-modal and Multi-modal Attacks. We take TalkNet-TalkNet to further evaluate the attack performance under single-modal and multi-modal attacks with the PGD method in Fig. 3 (b). When $\epsilon_{av}=5$, the audio-only, visual-only, and multi-modal attacks degrade the model’s mAP to 99%, 77%, and 65.2%, respectively. We observe the same phenomenon in other settings. As a result, we can draw the following three conclusions: (1) Audio-only attacks can barely influence the mAP of AVASD. (2) Visual-only attacks achieve more effective attack performance than audio-only attacks; one possible reason is that, for one video sample, there are far fewer audio samples than pixels [3]. (3) Multi-modal attacks are always more dangerous than single-modal attacks.
Different Attack Algorithms with Different Attack Budgets. We take TalkNet-TalkNet to further evaluate the multi-modal attack performance under BIM, MIM, and PGD attacks. From Fig. 3 (c), we have the following observations: (1) As the attack budget increases, the mAP of TalkNet decreases significantly. (2) All three attack methods can effectively degrade the AVASD model. In the following experiments evaluating defense performance, we only consider multi-modal attacks in the white-box scenario, since it is the most dangerous one, and the attack budget is set to $\epsilon_{av}=5$.
The Imperceptibility of Multi-Modal Attacks. We conduct an XAB test to verify that the adversarial noise generated by multi-modal attacks is both visually and acoustically imperceptible. The XAB test is a standard test for evaluating the discriminability of two sensory stimuli. We randomly select 4 adversarial-genuine pairs (2 speech and 2 non-speech pairs) for each of the three attack algorithms with $\epsilon_{av}=5$, resulting in 12 randomly selected adversarial-genuine pairs (i.e., A and B). One reference sample (i.e., X) is chosen from A and B. A, B, and X are presented to the volunteers, who must select the sample more similar to X from A and B. We hired five volunteers for the XAB test. The classification accuracy of the XAB test is 53.33%, close to random guessing, leading to the conclusion that the adversarial samples are difficult to distinguish from genuine samples. The XAB test samples are available at https://xjchengit.github.io/Push-Pull/index.html.
ID | Model | Adversarial training [54] | Clean mAP (%) | BIM A (ECR) | BIM V (ECR) | BIM mAP (%) | MIM A (ECR) | MIM V (ECR) | MIM mAP (%) | PGD A (ECR) | PGD V (ECR) | PGD mAP (%)
---|---|---|---|---|---|---|---|---|---|---|---|---
(A) | $\mathcal{L}_{{CE}_{all}}$ | ✗ | 92.58 | 0.1658 | 0.3140 | 49.53 | 0.1683 | 0.3170 | 49.30 | 0.1715 | 0.3236 | 47.79
(B1) | $\mathcal{L}_{{CE}_{all}}$ | BIM | 92.15 | 0.2759 | 0.2644 | 62.7 | 0.2778 | 0.2663 | 59.26 | 0.2937 | 0.2772 | 60.01
(B2) | $\mathcal{L}_{{CE}_{all}}$ | MIM | 91.34 | 0.3052 | 0.2108 | 54.66 | 0.3073 | 0.2133 | 52.18 | 0.3030 | 0.2118 | 54.23
(B3) | $\mathcal{L}_{{CE}_{all}}$ | PGD | 91.68 | 0.2728 | 0.1846 | 58.29 | 0.2783 | 0.1893 | 58.3 | 0.2840 | 0.1938 | 56.06
(C1) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{1}$ | ✗ | 92.09 | 0.1407 | 0.2603 | 82.96 | 0.1382 | 0.2618 | 81.32 | 0.1379 | 0.2677 | 80.98
(C2) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{2}$ | ✗ | 92.05 | 0.1451 | 0.1444 | 92.65 | 0.1496 | 0.1481 | 90.69 | 0.1501 | 0.1509 | 88.93
(C3) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{3}$ | ✗ | 92.16 | 0.1575 | 0.3264 | 74.98 | 0.1602 | 0.3289 | 76.25 | 0.1590 | 0.3343 | 73.97
(C4) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{4}$ | ✗ | 91.28 | 0.1065 | 0.2262 | 83.82 | 0.1154 | 0.2322 | 79.82 | 0.1158 | 0.2351 | 78.72
(D1) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{1}+\mathcal{L}_{2}$ | ✗ | 92.46 | 0.2182 | 0.4672 | 66.91 | 0.2149 | 0.4656 | 67.89 | 0.2317 | 0.4910 | 64.11
(D2) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{1}+\mathcal{L}_{3}$ | ✗ | 92.20 | 0.2218 | 0.4102 | 48.16 | 0.2190 | 0.4134 | 47.92 | 0.2239 | 0.4228 | 49.27
(D3) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{1}+\mathcal{L}_{4}$ | ✗ | 91.81 | 0.0820 | 0.2194 | 93.86 | 0.0834 | 0.2337 | 93.34 | 0.0811 | 0.2313 | 93.15
(D4) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{2}+\mathcal{L}_{3}$ | ✗ | 92.27 | 0.1525 | 0.3094 | 57.02 | 0.1500 | 0.3029 | 63.36 | 0.1549 | 0.3135 | 61.54
(D5) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{2}+\mathcal{L}_{4}$ | ✗ | 91.93 | 0.0936 | 0.1583 | 68.12 | 0.0962 | 0.1612 | 66.28 | 0.0992 | 0.1667 | 67.75
(D6) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{3}+\mathcal{L}_{4}$ | ✗ | 91.70 | 0.0782 | 0.2128 | 91.79 | 0.0771 | 0.2135 | 92.48 | 0.0785 | 0.2172 | 91.01
(E1) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{1}+\mathcal{L}_{4}$ | BIM | 90.63 | 0.0989 | 0.1007 | 97.85 | 0.1006 | 0.1011 | 97.6 | 0.0955 | 0.1040 | 97.47
(E2) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{1}+\mathcal{L}_{4}$ | MIM | 91.70 | 0.0344 | 0.0676 | 99.99 | 0.0341 | 0.0669 | 99.98 | 0.0355 | 0.0696 | 99.97
(E3) | $\mathcal{L}_{{CE}_{all}}+\mathcal{L}_{1}+\mathcal{L}_{4}$ | PGD | 91.88 | 0.0470 | 0.1001 | 97.68 | 0.0446 | 0.0966 | 97.47 | 0.0423 | 0.0953 | 98.67
Table 1: AVASD performance of different models under the three attack algorithms (BIM, MIM, and PGD), with adversarial samples crafted using $\mathcal{L}_{{CE}_{all}}$.
### 4.3 Audio-Visual Interaction Loss
Table 1 shows the defense performance of different methods under the three attack algorithms. (A) denotes the original TalkNet model. (B1)-(B3) are the baselines, i.e., models trained with adversarial training [54], where the adversarial examples are generated by BIM, MIM, and PGD, respectively. Adversarial training is conducted by injecting adversarial data into the training set, thus alleviating the adversarial vulnerability. (C1)-(C4) denote models trained by incorporating only one of the four losses in Section 3.3. We exhaust the pairwise combinations of the four losses to train AVASD models, shown in (D1)-(D6). We select the model with the best defense performance among (D1)-(D6), namely (D3), and combine it with adversarial training to see whether AVIL can complement adversarial training; the resulting models are shown in (E1)-(E3). The clean mAP (%) column reports performance on the entire evaluation set without adversarial attacks. For a fair comparison, we collect the samples correctly predicted by each of the models (A)-(E3) and use their intersection as the test data.
To show the effectiveness of the defense methods against adversarial noise, we also introduce another evaluation metric, the embedding change ratio (ECR), for the audio embedding $z_{a}$ and visual embedding $z_{v}$ after the cross-attention shown in Fig. 1 (a). Taking the audio ECR as an example, it is calculated as
$\frac{1}{K}\sum^{K}_{i=1}\frac{\|z_{a,i}-\tilde{z}_{a,i}\|_{2}}{\|z_{a,i}\|_{2}},$
where $\tilde{z}_{a,i}$ and $z_{a,i}$ are the adversarial and genuine embeddings, respectively, and $K$ is the total number of frames. ECR measures how much the embeddings change before and after adversarial attacks. A (ECR) and V (ECR) denote the ECR of the audio and visual parts, respectively. The lower the ECR, the less effect the attack algorithms introduce and the better the defense performance.
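A minimal sketch of this metric, assuming the embeddings are stacked into $(K, D)$-shaped tensors:

```python
import torch

def embedding_change_ratio(z, z_adv):
    # Mean over K frames of ||z_i - z~_i||_2 / ||z_i||_2.
    return (torch.norm(z - z_adv, dim=1) / torch.norm(z, dim=1)).mean().item()
```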
Baselines. As shown in Table 1, the original TalkNet model (A) performs well on the AVASD task with 92.58% mAP on clean samples. However, multi-modality attacks seriously degrade its mAP to only 49.53%, 49.30%, and 47.79% under the BIM, MIM, and PGD attack algorithms, respectively. According to rows (B1)-(B3) of Table 1, adversarial training does improve the robustness of the AVASD model. Using the BIM attack algorithm to generate adversarial examples for adversarial training achieves the best defense performance among the three attack algorithms, resulting in absolute mAP improvements of 13.17%, 9.96%, and 12.22% under BIM, MIM, and PGD attacks, respectively. In terms of ECR, adversarial training effectively reduces V (ECR), yet significantly increases A (ECR).
Using One AVIL. As shown in (C1)-(C4), using any one of the four AVILs improves the mAP substantially, resulting in better defense performance than (B1)-(B3), the adversarial training baselines using BIM, MIM, and PGD. Among them, (C2), which uses $\mathcal{L}_{2}$, leads to the best performance. It appears that when introducing only one loss, maximizing the intra-modality, intra-class similarity is the best choice against adversarial noise. Regarding the ECR, (C1)-(C4) help reduce A (ECR) and V (ECR) in most settings.
Pairwise Combinations of AVILs. From (C1)-(D6) in Table 1, we have the following observations: (1) Adopting pairwise combinations of AVILs performs better than adversarial training in most settings, from both the mAP and ECR perspectives. (2) Combining $\mathcal{L}_{1}$ and $\mathcal{L}_{4}$ even improves the mAP to 93.86%, 93.34%, and 93.15% for BIM, MIM, and PGD, respectively; it is the most robust combination. A possible reason is that enlarging inter-class dispersion (minimizing $\mathcal{L}_{1}$) and intra-class compactness (minimizing $\mathcal{L}_{4}$) at the same time yields the most robust model.
Integrating AVIL with Adversarial Training. (E1)-(E3) show that combining AVIL with adversarial training leverages their complementarity to improve adversarial robustness. Furthermore, integrating AVIL with MIM-based adversarial training improves the mAP to over 99% under BIM, MIM, and PGD attacks.
Fig. 4: Training-aware ($\mathcal{L}_{{CE}_{all}}$) attack and inference-aware
($\mathcal{L}_{{CE}_{av}}$) attack scenarios.
### 4.4 Training-aware and Inference-aware Attacks
Fig. 4 compares the performance of 8 AVASD models from Table 1 under the training-aware and inference-aware scenarios with the BIM, MIM, and PGD attack methods, using the same evaluation set as Table 1. The legend shows the two scenarios with the three attack methods; for instance, “$\mathcal{L}_{CE_{all}}$ (BIM)” denotes using $\mathcal{L}_{CE_{all}}$ to craft the adversarial samples with the BIM attack method. $\mathcal{L}_{CE_{all}}$ and $\mathcal{L}_{CE_{av}}$ are defined in Equations 4 and 3, respectively. We have the following observations: (1) For group (A), the performance of the AVASD model also drops significantly in the inference-aware scenario, but inference-aware attacks are less dangerous than training-aware attacks. (2) From (B1)-(B3), adversarial training alleviates the adversarial noise with the same trend as in the training-aware attack scenario. (3) Comparing (D3) with (B1)-(B3), AVIL improves adversarial robustness more than adversarial training. (4) From (E1)-(E3), AVIL can complement adversarial training. In summary, the inference-aware attack scenario exhibits the same trends as the training-aware attack scenario.
## 5 Conclusion
In this work, we first expose, through comprehensive experiments, that audio-visual active speaker detection models are highly susceptible to adversarial attacks, investigating white-box and black-box adversaries, single- and multi-modal attacks, training-aware and inference-aware attack scenarios, and three attack algorithms with several attack budgets. We then propose the audio-visual interaction loss to enlarge the inter-class difference and intra-class similarity, resulting in more robust AVASD models for which budget-limited attackers cannot find feasible adversarial samples. The experimental results illustrate that the proposed method is far more effective than adversarial training, and that AVIL can complement adversarial training to further alleviate the adversarial vulnerability of AVASD models. In the future, we will investigate hyperparameter search strategies to further improve the effectiveness of the proposed AVIL.
## 6 ACKNOWLEDGMENTS
This work was partially supported by the Ministry of Science and Technology,
Taiwan (Grant no. MOST109-2221- E-002-163-MY3), and the Centre for Perceptual
and Interactive Intelligence, an InnoCentre of The Chinese University of Hong
Kong. This work was done while Haibin Wu was a visiting student at CUHK.
Haibin Wu is supported by the Google PhD Fellowship.
## References
* [1] Joseph Roth et al., “Ava-activespeaker: An audio-visual dataset for active speaker detection,” in ICASSP, 2020.
* [2] Joseph Roth et al., “Ava active speaker: An audio-visual dataset for active speaker detection,” in ICASSP 2020. IEEE, 2020, pp. 4492–4496.
* [3] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
* [4] Felix Kreuk, Yossi Adi, Moustapha Cisse, and Joseph Keshet, “Fooling end-to-end speaker verification with adversarial examples,” in ICASSP 2018. IEEE, 2018, pp. 1962–1966.
* [5] Haibin Wu, Xu Li, Andy T Liu, Zhiyong Wu, Helen Meng, and Hung-yi Lee, “Adversarial defense for automatic speaker verification by cascaded self-supervised learning models,” in ICASSP 2021. IEEE, 2021, pp. 6718–6722.
* [6] Arindam Jati, Chin-Cheng Hsu, Monisankha Pal, Raghuveer Peri, Wael AbdAlmageed, and Shrikanth Narayanan, “Adversarial attack and defense strategies for deep speaker recognition systems,” Computer Speech & Language, vol. 68, pp. 101199, 2021.
* [7] Haibin Wu, Yang Zhang, Zhiyong Wu, Dong Wang, and Hung-yi Lee, “Voting for the right answer: Adversarial defense for speaker verification,” arXiv preprint arXiv:2106.07868, 2021.
* [8] Yi Xie, Zhuohang Li, Cong Shi, Jian Liu, Yingying Chen, and Bo Yuan, “Real-time, robust and adaptive universal adversarial attacks against speaker recognition systems,” Journal of Signal Processing Systems, vol. 93, no. 10, pp. 1187–1200, 2021.
* [9] Yi Xie, Cong Shi, Zhuohang Li, Jian Liu, Yingying Chen, and Bo Yuan, “Real-time, universal, and robust adversarial attacks against speaker recognition systems,” in ICASSP 2020. IEEE, 2020, pp. 1738–1742.
* [10] Hadi Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, and Patrick Traynor, “Sok: The faults in our asrs: An overview of attacks against automatic speech recognition and speaker identification systems,” in 2021 IEEE symposium on security and privacy (SP). IEEE, 2021, pp. 730–747.
* [11] Mirko Marras, Pawel Korus, Nasir D Memon, and Gianni Fenu, “Adversarial optimization for dictionary attacks on speaker verification.,” in Interspeech, 2019, pp. 2913–2917.
* [12] Haibin Wu, Xu Li, Andy T Liu, Zhiyong Wu, Helen Meng, and Hung-yi Lee, “Improving the adversarial robustness for speaker verification by self-supervised learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 202–217, 2021.
* [13] Rohan Kumar Das, Xiaohai Tian, Tomi Kinnunen, and Haizhou Li, “The attacker’s perspective on automatic speaker verification: An overview,” arXiv preprint arXiv:2004.08849, 2020.
* [14] Haibin Wu, Po-Chun Hsu, Ji Gao, Shanshan Zhang, Shen Huang, Jian Kang, Zhiyong Wu, Helen Meng, and Hung-Yi Lee, “Adversarial sample detection for speaker verification by neural vocoders,” in ICASSP 2022. IEEE, 2022, pp. 236–240.
* [15] Zhuohang Li, Cong Shi, Yi Xie, Jian Liu, Bo Yuan, and Yingying Chen, “Practical adversarial attacks against speaker recognition systems,” in Proceedings of the 21st international workshop on mobile computing systems and applications, 2020, pp. 9–14.
* [16] Zhiyuan Peng, Xu Li, and Tan Lee, “Pairing weak with strong: Twin models for defending against adversarial attack on speaker verification.,” in Interspeech, 2021, pp. 4284–4288.
* [17] Weiyi Zhang, Shuning Zhao, Le Liu, Jianmin Li, Xingliang Cheng, Thomas Fang Zheng, and Xiaolin Hu, “Attack on practical speaker verification system using universal adversarial perturbations,” in ICASSP 2021. IEEE, 2021, pp. 2575–2579.
* [18] Hao Tan, Le Wang, Huan Zhang, Junjian Zhang, Muhammad Shafiq, and Zhaoquan Gu, “Adversarial attack and defense strategies of speaker recognition systems: A survey,” Electronics, vol. 11, no. 14, pp. 2183, 2022.
* [19] Songxiang Liu et al., “Adversarial attacks on spoofing countermeasures of automatic speaker verification,” in ASRU 2019. IEEE, 2019, pp. 312–319.
* [20] Haibin Wu, Andy T Liu, and Hung-yi Lee, “Defense for black-box attacks on anti-spoofing models by self-supervised learning,” arXiv preprint arXiv:2006.03214, 2020.
* [21] Bowen Zhang, Benedetta Tondi, and Mauro Barni, “Adversarial examples for replay attacks against cnn-based face recognition with anti-spoofing capability,” Computer Vision and Image Understanding, vol. 197, pp. 102988, 2020.
* [22] Andre Kassis and Urs Hengartner, “Practical attacks on voice spoofing countermeasures,” arXiv preprint arXiv:2107.14642, 2021.
* [23] Haibin Wu, Songxiang Liu, Helen Meng, and Hung-yi Lee, “Defense against adversarial attacks on spoofing countermeasures of asv,” in ICASSP 2020. IEEE, 2020, pp. 6564–6568.
* [24] Nicholas Carlini and David Wagner, “Audio adversarial examples: Targeted attacks on speech-to-text,” in 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018, pp. 1–7.
* [25] Hiromu Yakura and Jun Sakuma, “Robust audio adversarial example for a physical attack,” arXiv preprint arXiv:1810.11793, 2018.
* [26] Rohan Taori, Amog Kamsetty, Brenton Chu, and Nikita Vemuri, “Targeted adversarial examples for black box audio systems,” in 2019 IEEE security and privacy workshops (SPW). IEEE, 2019, pp. 15–20.
* [27] Yao Qin, Nicholas Carlini, Garrison Cottrell, Ian Goodfellow, and Colin Raffel, “Imperceptible, robust, and targeted adversarial examples for automatic speech recognition,” in International conference on machine learning. PMLR, 2019, pp. 5231–5240.
* [28] Moustafa Alzantot, Bharathan Balaji, and Mani Srivastava, “Did you hear that? adversarial examples against automatic speech recognition,” arXiv preprint arXiv:1801.00554, 2018.
* [29] Lea Schönherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, and Dorothea Kolossa, “Adversarial attacks against automatic speech recognition systems via psychoacoustic hiding,” arXiv preprint arXiv:1808.05665, 2018.
* [30] Chao-Han Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, and Chin-Hui Lee, “Characterizing speech adversarial examples using self-attention u-net enhancement,” in ICASSP 2020. IEEE, 2020, pp. 3107–3111.
* [31] Juncheng B Li, Shuhui Qu, Xinjian Li, Po-Yao Bernie Huang, and Florian Metze, “On adversarial robustness of large-scale audio visual learning,” in ICASSP 2022. IEEE, 2022, pp. 231–235.
* [32] Shaojin Ding et al., “Personal vad: Speaker-conditioned voice activity detection,” arXiv preprint arXiv:1908.04284, 2019.
* [33] Abhishek Sehgal and Nasser Kehtarnavaz, “A convolutional neural network smartphone app for real-time voice activity detection,” IEEE Access, vol. 6, pp. 9017–9026, 2018.
* [34] Punarjay Chakravarty, Sayeh Mirzaei, Tinne Tuytelaars, and Hugo Van hamme, “Who’s speaking? audio-supervised classification of active speakers in video,” in Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, 2015, pp. 87–90.
* [35] Muhammad Shahid, Cigdem Beyan, and Vittorio Murino, “S-vvad: visual voice activity detection by motion segmentation,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 2332–2341.
* [36] Ariel Ephrat, Inbar Mosseri, Oran Lang, Tali Dekel, Kevin Wilson, Avinatan Hassidim, William T Freeman, and Michael Rubinstein, “Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation,” arXiv preprint arXiv:1804.03619, 2018.
* [37] Vicente Peruffo Minotto, Claudio Rosito Jung, and Bowon Lee, “Multimodal multi-channel on-line speaker diarization using sensor fusion through svm,” IEEE Transactions on Multimedia, vol. 17, no. 10, pp. 1694–1705, 2015.
* [38] Okan Köpüklü, Maja Taseska, and Gerhard Rigoll, “How to design a three-stage architecture for audio-visual active speaker detection in the wild,” in Proceedings of IEEE/CVF ICCV, 2021, pp. 1193–1203.
* [39] Sourya Roy, Kyle Min, Subarna Tripathi, Tanaya Guha, and Somdeb Majumdar, “Learning spatial-temporal graphs for active speaker detection,” arXiv preprint arXiv:2112.01479, 2021.
* [40] Yuanhang Zhang, Susan Liang, Shuang Yang, Xiao Liu, Zhongqin Wu, Shiguang Shan, and Xilin Chen, “Unicon: Unified context network for robust active speaker detection,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 3964–3972.
* [41] Baptiste Pouthier, Laurent Pilati, Leela K Gudupudi, Charles Bouveyron, and Frederic Precioso, “Active speaker detection as a multi-objective optimization with uncertainty-based multimodal fusion,” arXiv preprint arXiv:2106.03821, 2021.
* [42] Triantafyllos Afouras, Andrew Zisserman, et al., “Sub-word level lip reading with visual attention,” arXiv preprint arXiv:2110.07603, 2021.
* [43] Juan Léon Alcázar, Fabian Caba, Ali K Thabet, and Bernard Ghanem, “Maas: Multi-modal assignation for active speaker detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 265–274.
* [44] Joon Son Chung, “Naver at activitynet challenge 2019–task b active speaker detection (ava),” arXiv preprint arXiv:1906.10555, 2019.
* [45] Juan León Alcázar, Fabian Caba, Long Mai, Federico Perazzi, Joon-Young Lee, Pablo Arbeláez, and Bernard Ghanem, “Active speakers in context,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12465–12474.
* [46] Juan León-Alcázar, Fabian Caba Heilbron, Ali Thabet, and Bernard Ghanem, “Maas: Multi-modal assignation for active speaker detection,” arXiv preprint arXiv:2101.03682, 2021.
* [47] Yuan-Hang Zhang, Jingyun Xiao, Shuang Yang, and Shiguang Shan, “Multi-task learning for audio-visual active speaker detection,” The ActivityNet Large-Scale Activity Recognition Challenge, pp. 1–4, 2019.
* [48] Ruijie Tao, Zexu Pan, Rohan Kumar Das, Xinyuan Qian, Mike Zheng Shou, and Haizhou Li, “Is someone speaking? exploring long-term temporal features for audio-visual active speaker detection,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 3927–3935.
* [49] Man-Wai Mak and Jen-Tzung Chien, Machine learning for speaker recognition, Cambridge University Press, 2020.
* [50] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.
* [51] Alexey Kurakin, Ian Goodfellow, and Samy Bengio, “Adversarial examples in the physical world,” arXiv preprint arXiv:1607.02533, 2016.
* [52] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li, “Boosting adversarial attacks with momentum,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 9185–9193.
* [53] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard, “Deepfool: a simple and accurate method to fool deep neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2574–2582.
* [54] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
$(\alpha,\beta)$ of the form $\alpha\leq u_{0}$, $\beta=0$ is a global
minimizer of $\bar{C}_{N}$.
###### Proof.
If $(\alpha_{N},\beta_{N})$ minimizes $\bar{C}_{N}$, then we know from the
previous lemma that $\alpha_{N}+\beta_{N}^{\top}x_{i}\leq u_{0}$ a.s., for all
$i=1,\dots,n$, for all sufficiently large $N$. It follows from (74) that
$\bar{C}_{N}(\alpha_{N},\beta_{N})\geq n$. At any $\alpha\leq u_{0}$, (74)
also shows that $\bar{C}_{N}(\alpha,0)=n$. Thus, every $(\alpha,0)$,
$\alpha\leq u_{0}$, is a global minimizer for sufficiently large $N$. ∎
We conclude from this result that the objective (74) degenerates under
sufficient imbalance, in the sense that it returns $\beta_{N}=0$ a.s., for all
sufficiently large $N$. The linear discriminant function
$\alpha_{N}+\beta_{N}^{\top}x$ assigns the same value $\alpha_{N}$ to every
observation, and $\alpha_{N}$ could be any value less than or equal to
$u_{0}$.
# The first common fixed point theorem for commutative set-valued mappings
Issa Mohamadi, Department of Mathematics, Islamic Azad University - Sanandaj Branch, Sanandaj, Iran. E-mail addresses: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract.
We establish the first common fixed point theorem for commutative set-valued convex mappings. This may help to generalize common fixed point theorems from the single-valued setting to the set-valued setting. We also prove the existence of a fixed point in a continuously expanding set under a non-convex upper semicontinuous set-valued mapping; as a result, we answer a question of Lau and Yao in the affirmative.
Keywords: Locally convex vector space; Fixed point; Upper semicontinuous;
Convex set-valued mapping;
Mathematics Subject Classifications 2010: 57N17; 37C25; 40C15; 54C60
## 1\. Introduction
Let $X$ and $Y$ be two topological vector spaces. We recall that a set-valued mapping $T:X\rightarrow 2^{Y}$ is said to be upper semicontinuous if for each open subset $V$ of $Y$ and each $x\in X$ with $T(x)\subseteq V$, there exists an open neighborhood $U$ of $x$ in $X$ such that $T(y)\subseteq V$ for all $y\in U$. For two set-valued mappings $T,S$ from $X$ into $2^{X}$, their compositions are defined, in the literature, as $T\circ S(x)=\bigcup_{y\in S(x)}T(y)$ and $S\circ T(x)=\bigcup_{y\in T(x)}S(y)$. $T$ and $S$ are said to be commutative on $X$ if $T\circ S(x)=S\circ T(x)$ for all $x\in X$. We say that $T$ commutes with $S$ on the right if $S\circ T(x)\subseteq T\circ S(x)$ for all $x\in X$.
We say that a mapping $T$ from $X$ into $2^{X}$ is convex if $\lambda
t+(1-\lambda)z\in T(\lambda x+(1-\lambda)y)$, for all $t\in T(x)$, $z\in T(y)$
and $\lambda\in(0,1)$. We also recall that for a set-valued mapping $T$ from
$X$ into $2^{X}$, $x\in X$ is a fixed point of $T$ if $x\in T(x)$.
Let $(X,d)$ be a metric space and $CB(X)$ denote the set of nonempty closed
bounded subset of $X$. For $A,B\in CB(X)$, define
$H(A,B)=\max\\{\delta(A,B),\delta(B,A)\\}$
where, $\delta(A,B)=\sup\\{d(a,B):a\in A\\}$ and
$\delta(B,A)=\sup\\{d(A,b):b\in B\\}$. It is known that $(CB(X),H)$ is a
metric space. The metric $H$ on $CB(X)$ is called the Hausdorff metric.
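For intuition, here is a small numerical sketch (not part of the paper's formal development) of $H$ for closed bounded intervals on the real line, where each supremum in $\delta$ is attained at an endpoint because the distance-to-set function is convex:

```python
def hausdorff_intervals(A, B):
    # H(A, B) = max(delta(A, B), delta(B, A)) for closed intervals A, B.
    def dist(x, I):   # d(x, I) for a point x and an interval I = (lo, hi)
        lo, hi = I
        return max(lo - x, x - hi, 0.0)
    def delta(U, V):  # sup_{u in U} d(u, V), attained at an endpoint of U
        return max(dist(U[0], V), dist(U[1], V))
    return max(delta(A, B), delta(B, A))

print(hausdorff_intervals((0.0, 1.0), (0.4, 0.6)))  # prints 0.4
```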
A mapping $T$ from a metric space $(X,d)$ into the metric space $(CB(X),H)$ is
said to be nonexpansive if $H(T(x),T(y))\leq d(x,y)$, for all $x,y\in X$.
Suppose that $C$ is a nonempty subset of a topological space $X$ and $D$ is a
nonempty subset of $C$. The mapping $R:C\longrightarrow D$ is said to be a
retraction if $R(x)=x$ for all $x\in D$; that is, $R^{2}=R$. In this case, $D$
is called a retract of $C$. When $(X,d)$ is a metric space then $D$ is called
a nonexpansive retract of $C$ if $R$ is a nonexpansive mapping.
For more details on these and related concepts, refer to [1].
There are a number of landmark fixed point theorems for set-valued mappings.
In $1941$, Kakutani [9] showed that if $C$ is a nonempty convex compact subset of an $n$-dimensional Euclidean space $\mathbb{R}^{n}$ and $T$ from $C$ into $2^{C}$ is an upper semicontinuous mapping such that $T(x)$ is a nonempty convex closed subset of $C$ for all $x\in C$, then $T$ possesses a fixed point in $C$. In $1951$, Glicksberg [5] and, in $1952$, Fan [4] independently generalized Kakutani’s fixed point theorem from Euclidean spaces to locally convex vector spaces. In [7], we showed that for a continuously expanding compact convex subset of a locally convex vector space, under an upper semicontinuous set-valued convex mapping, there exists at least one point that remains fixed under the expansion. In this work we generalize this result to an arbitrary upper semicontinuous set-valued mapping in the one-dimensional Euclidean space $\mathbb{R}$.
Many common fixed point theorems for single-valued mappings have also been developed; among them, the Markov-Kakutani fixed point theorem is of great interest for its wide variety of applications in the literature. In $1936$, Markov [10] and, in $1938$, Kakutani [8] proved independently that each family of commutative continuous affine mappings from a nonempty compact convex subset of a Hausdorff topological vector space into itself has a common fixed point. Part of our work is devoted to generalizing their theorem, applying our fixed point theorem together with the Fan-Glicksberg fixed point theorem, to a family of two convex set-valued mappings. The last part of our work provides an answer to a question of Lau and Yao [6]. In fact, we generalize our common fixed point theorem to non-convex set-valued mappings in the one-dimensional Euclidean space.
## 2\. Our results
In the following theorem, we prove the existence of a common fixed point for
two set-valued convex mappings.
###### Theorem 2.1.
Let $X$ be a locally convex Hausdorff vector space, and $C$ be a nonempty
convex compact subset of $X$. Suppose that $\\{T_{1},T_{2}\\}$ are two
commutative upper semicontinuous convex set-valued mappings from $C$ into
$2^{C}$ such that $T_{i}(x)$, for $i=1,2$ and $x\in C$, is a nonempty closed
subset of $X$. Then, there exists $x\in C$ such that $x\in T_{1}(x)\cap
T_{2}(x)$.
###### Proof.
Let $Fix(T_{i})$ denote the fixed point set of $T_{i}$, for $i=1,2$. Then, by the Fan-Glicksberg fixed point theorem, $Fix(T_{1})$ is a nonempty compact convex subset of $X$. Define $G:Fix(T_{1})\rightarrow 2^{Fix(T_{1})}$ by $G(x)=T_{2}(x)\cap Fix(T_{1})$, for $x\in Fix(T_{1})$. Then, $G$ is an upper semicontinuous set-valued mapping in the topology on $Fix(T_{1})$ induced from $X$. Now, we show that $G(x)$ is nonempty. Let $x\in Fix(T_{1})$; then $x\in T_{1}(x)$. Accordingly, $T_{2}(x)\subseteq T_{2}(T_{1}(x))=T_{1}(T_{2}(x))$ by the commutativity of $T_{1}$ and $T_{2}$ and the definition of composition for set-valued mappings. Since $T_{2}(x)$ is a nonempty convex compact subset of $X$, by Theorem $2.2$ in [7], $T_{1}$ has a fixed point in $T_{2}(x)$. That is, there exists $y\in T_{2}(x)$ such that $y\in T_{1}(y)$. It follows that $G(x)$ is nonempty. Therefore, by the Fan-Glicksberg fixed point theorem again, $G$ has a fixed point in $Fix(T_{1})$. Thus, there exists $x\in Fix(T_{1})$ such that $x\in G(x)$. This means $x\in T_{2}(x)\cap Fix(T_{1})$, which completes the proof. ∎
Open problem 1. We still do not know whether Theorem $2.1$ is valid for an infinite family of commutative set-valued convex mappings; that is, whether a generalization of the Markov-Kakutani fixed point theorem to commutative set-valued convex mappings holds.
Remark. In Theorem $2.1$, instead of commutativity, we can suppose that $T_{1}$ commutes with $T_{2}$ on the right.
Next, we generalize Theorem $2.2$ in [7] to an arbitrary upper semicontinuous set-valued mapping in the one-dimensional Euclidean space.
###### Theorem 2.2.
Let $C$ be a nonempty convex compact subset of $\mathbb{R}$. Assume that
$T:C\rightarrow 2^{\mathbb{R}}$ is a set-valued upper semicontinuous mapping
such that $T(x)$ is a nonempty compact convex subset of $\mathbb{R}$ for all
$x\in C$. If $C\subseteq T(C)$, then $T$ possesses a fixed point in $C$.
###### Proof.
Let
$\Delta=\\{U\subseteq C : U\ \text{is nonempty, closed, convex, and}\ U\subseteq T(U)\\}.$
Then $(\Delta,\subseteq)$, where $\subseteq$ is inclusion, is a partially ordered set. Also, by Lemma $2.1$ in [7], every descending chain in $\Delta$ has a lower bound in $\Delta$. Therefore, by Zorn’s lemma, $\Delta$ has a minimal element, say $U_{0}$. We show that $U_{0}$ is a singleton. Define $F:U_{0}\rightarrow 2^{U_{0}}$ by $F(x)=T(x)\cap U_{0}$ for all $x\in U_{0}$. Then, $F(x)$ is a convex compact subset of $X$ for all $x\in U_{0}$, since $T(x)$ and $U_{0}$ are convex and compact. Let $V=\\{y\in U_{0}:F(y)\neq\emptyset\\}$. Then, $V$ is nonempty, as $U_{0}\subseteq T(U_{0})$. It is clear that $V\subseteq T(V)$. By the convexity and upper semicontinuity of $T$, it can easily be seen that $V$ is a nonempty compact subset of $U_{0}$ such that $V\subseteq T(V)$ and $V\neq U_{0}$. Then, $U_{0}=co(V)$ by the minimality of $U_{0}$ and the fact that $V$ is a compact subset of $\mathbb{R}$. Also, since $U_{0}$ is a nonempty convex compact subset of $\mathbb{R}$, it is a closed interval, say $[a,b]$, where $a,b\in V$. We shall prove that $a=b$ by way of contradiction; that is, we suppose that $a\neq b$. Now, let
$\Omega=\\{[c,d]\subseteq[a,b] : T(c)\cap[c,d]\neq\emptyset\ \text{and}\ T(d)\cap[c,d]\neq\emptyset\\}.$
Then $\Omega\neq\emptyset$, since $[a,b]\in\Omega$. We show that $\Omega$ has a minimal element. Let $\\{[c_{i},d_{i}]\\}_{i\in I}$ be a descending chain, by inclusion, in $\Omega$. Thus, $\bigcap_{i\in I}[c_{i},d_{i}]$ is a nonempty compact convex subset of $\mathbb{R}$, and therefore a closed interval, say $[c,d]$. By defining the relation $\leq$ on $I$ by $i\leq j$ iff $[c_{j},d_{j}]\subseteq[c_{i},d_{i}]$, for all $i,j\in I$, $(I,\leq)$ is a directed set. Accordingly, $c=\lim_{i}c_{i}$ and $d=\lim_{i}d_{i}$. Next, we show that $T(c)\cap[c,d]\neq\emptyset$ and $T(d)\cap[c,d]\neq\emptyset$. Suppose, on the contrary, that $T(d)\cap[c,d]=\emptyset$. Then, by the upper semicontinuity of $T$, there exist open neighborhoods $U$ and $W$ containing $[c,d]$ and $T(d)$, respectively, such that $U\cap W=\emptyset$. Also, there exists a neighborhood $U^{\prime}$ of $d$ such that for all $x$ in $U^{\prime}$ we have $T(x)\subseteq W$. This implies that there is $i_{0}\in I$ such that $T(d_{i})\subseteq W$ for all $i\geq i_{0}$. On the other hand, there also exists $i_{1}\in I$ such that $c_{i},d_{i}\in U$ for all $i\geq i_{1}$. Now, taking $i\geq\max\\{i_{0},i_{1}\\}$, it follows that $[c_{i},d_{i}]\cap T(d_{i})=\emptyset$, a contradiction. Accordingly, $[c,d]\in\Omega$. Thus, by Zorn’s lemma, $\Omega$ has a minimal element, say $[c^{\prime},d^{\prime}]$. If $T(x)\cap[c^{\prime},d^{\prime}]\neq\emptyset$ for all $x\in[c^{\prime},d^{\prime}]$, then by defining $P:[c^{\prime},d^{\prime}]\longrightarrow 2^{[c^{\prime},d^{\prime}]}$ as $P(x)=T(x)\cap[c^{\prime},d^{\prime}]$ and applying Kakutani’s fixed point theorem to the mapping $P$, it follows that $T$ has a fixed point in $[c^{\prime},d^{\prime}]$. This contradicts the minimality of $U_{0}$.
Therefore, $T(y)\cap[c^{\prime},d^{\prime}]=\emptyset$ for some $y\in[c^{\prime},d^{\prime}]$. Now, suppose that $T(y)>d^{\prime}$ (to avoid any ambiguity, by $T(y)>d^{\prime}$ we mean $z>d^{\prime}$ for all $z\in T(y)$) and define
$\Theta=\\{U\subseteq[c^{\prime},d^{\prime}] : U\ \text{is an open interval containing}\ y\ \text{and}\ T(w)>d^{\prime}\ \text{for all}\ w\in U\\}.$
By the upper semicontinuity of $T$, $\Theta$ is nonempty. Also, by applying Zorn’s lemma, $\Theta$ has a maximal element, by inclusion, say $U=(s,t)$. Hence, the upper semicontinuity of $T$ also implies that $T(t)\cap[c^{\prime},d^{\prime}]\neq\emptyset$. We shall prove that $d^{\prime}\in T(t)$. Let $\\{x_{n}\\}$ and $\\{y_{n}\\}$ be sequences such that $x_{n}\rightarrow t^{-}$ and $y_{n}\in T(x_{n})$. Thus, $y_{n}>d^{\prime}$. Since $T$ is upper semicontinuous and compact-valued and $C$ is compact, it is known that $T(C)$ is also compact. Accordingly, by passing to a subsequence we may assume that $y_{n}\rightarrow y$, for some $y\in\mathbb{R}$. Hence, $y\in T(t)$ and $y\geq d^{\prime}$. It follows that $d^{\prime}\in T(t)$, as $T(t)$ is a nonempty compact convex subset of $\mathbb{R}$. If $T(d^{\prime})\cap[t,d^{\prime}]\neq\emptyset$, this contradicts the minimality of $[c^{\prime},d^{\prime}]$. Accordingly, we may suppose that $T(d^{\prime})<t$, as $T(d^{\prime})\cap[c^{\prime},d^{\prime}]\neq\emptyset$. Now, let
$\Sigma=\\{[m,n]\subseteq[t,d^{\prime}] : \text{either}\ n\in T(m)\ \text{and}\ T(n)<m,\ \text{or}\ m\in T(n)\ \text{and}\ T(m)>n\\}.$
Then $\Sigma$ is nonempty, since $[t,d^{\prime}]\in\Sigma$. Let $\\{[m_{j},n_{j}]\\}_{j\in J}$ be a descending chain, by inclusion, in $\Sigma$; then, by the same argument we used for $[c^{\prime},d^{\prime}]$, for $[m,n]=\bigcap_{j\in J}[m_{j},n_{j}]$ we have $n\in T(m)$ or $m\in T(n)$. We shall prove that $[m,n]\in\Sigma$; for this, having assumed that $n\in T(m)$, it is enough to show that $T(n)<m$. Suppose, on the contrary, that there exists $x\in T(n)$ such that $x\geq m$. If $T(n)\cap[m,n]\neq\emptyset$, then this contradicts the minimality of $[c^{\prime},d^{\prime}]$. Thus, we may assume that $T(n)>n$. Let $O_{1}$ and $O_{2}$ be disjoint open sets containing $[m,n]$ and $T(n)$, respectively. Then, by the upper semicontinuity of $T$, there exists $j_{0}\in J$ such that $m_{j}\in O_{1}$, $n_{j}\in O_{1}$, and $T(n_{j})\subseteq O_{2}$, for all $j\geq j_{0}$. Accordingly, $T(n_{j})>n_{j}$ for all $j\geq j_{0}$. This contradicts the fact that $m_{j}\in T(n_{j})$ for all $j\geq j_{0}$. Hence, by Zorn’s lemma, $(\Sigma,\subseteq)$ has a minimal element, say $[m^{\prime},n^{\prime}]$. Suppose that $m^{\prime}\in T(n^{\prime})$ and $T(m^{\prime})>n^{\prime}$; then, by the same discussion we already had for $[t,d^{\prime}]$, there exists $p^{\prime}$ in $(m^{\prime},n^{\prime}]$ such that $n^{\prime}\in T(p^{\prime})$. This contradicts either the minimality of $[m^{\prime},n^{\prime}]$ or the minimality of $[c^{\prime},d^{\prime}]$.
A similar argument can be repeated, with minor alterations, for the case $T(y)<c^{\prime}$. Therefore, every case yields a contradiction. Thus, $a=b$; that is, $U_{0}$ is a singleton. This completes the proof.
∎
Also, from the proof of Theorem $2.2$, the following result can be derived.
###### Corollary 2.3.
Let $[a,b]$ be a closed interval in $\mathbb{R}$, and $T:[a,b]\rightarrow 2^{\mathbb{R}}$ be a set-valued upper semicontinuous mapping such that $T(x)$ is a nonempty compact convex subset of $\mathbb{R}$ for all $x\in[a,b]$.
Suppose also that $T(a)\cap[a,b]\neq\emptyset$ and
$T(b)\cap[a,b]\neq\emptyset$. Then, $T$ possesses a fixed point in $[a,b]$.
The following example shows that Theorem $2.2$ is not valid in more general
spaces.
Example. Let $T$ be the set-valued mapping from $C=[0,2]$ into $2^{\mathbb{R}^{2}}$ defined by
$T(x)=\begin{cases}[1,2-x]\times\\{x\\};&x\in[0,1],\\\ [2-x,1]\times\\{2-x\\};&x\in(1,2],\end{cases}$
where $\times$ is the Cartesian product. It is obvious that $C\subset T(C)$, as we have $T(0)=[1,2]\times\\{0\\}$ and $T(2)=[0,1]\times\\{0\\}$. It can easily be verified that $T$ is an upper semicontinuous set-valued mapping with nonempty convex compact values that does not possess any fixed point in $C$.
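Under the natural identification of $C$ with $[0,2]\times\\{0\\}$, the absence of a fixed point can also be checked numerically; the following is only an illustrative sketch, not part of the argument.

```python
import numpy as np

def T(x):
    # Returns (interval of first coordinates, second coordinate) of T(x).
    if x <= 1.0:
        return (1.0, 2.0 - x), x
    return (2.0 - x, 1.0), 2.0 - x

# (x, 0) in T(x) requires the second coordinate to be 0 and x to lie in the interval.
fixed = [x for x in np.linspace(0.0, 2.0, 2001)
         if abs(T(x)[1]) < 1e-12 and T(x)[0][0] <= x <= T(x)[0][1]]
print(fixed)  # [] -- no fixed point on the grid
```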
This example gives rise to the following open problem:
Open problem 2. As can be seen from the above example, Codim$(\frac{\mathcal{M}(T(C))}{\mathcal{M}(C)})\neq 0$, whereas in Theorem $2.2$ it is zero. The question of whether Theorem $2.2$ holds in more general spaces where this codimension is zero is still unanswered; here, by $\mathcal{M}(T(C))$ and $\mathcal{M}(C)$ we mean the subspaces of $X$ of minimal dimension containing $T(C)$ and $C$, respectively.
In what follows, we prove the existence of a common fixed point for a family of commutative non-convex set-valued mappings. Not only does this provide an answer to Question $5.9$ in [6], but it also gives insight into the structure of the set of common fixed points of set-valued mappings.
###### Theorem 2.4.
Let $C$ be a nonempty convex compact subset of $\mathbb{R}$. Suppose that $\Psi=\\{T_{i}:i\in I\\}$ is a family of commutative nonexpansive set-valued mappings from $C$ into $2^{C}$ in which at most two mappings are not single-valued. If for each $i\in I$ and $x\in C$, $T_{i}(x)$ is a nonempty closed convex subset of $C$, then the set of common fixed points of $\Psi$ is a nonempty convex nonexpansive retract of $C$.
###### Proof.
For $i\in I$ we show that $Fix(T_{i})$ is convex. For each $x\in C$, define
$f_{i}(x)=P_{T_{i}(x)}(x)=\\{y\in T_{i}(x):d(x,y)=\inf\\{d(x,z):z\in T_{i}(x)\\}\\},$
where $P_{T_{i}(x)}$ is the metric projection onto $T_{i}(x)$ for each $x\in C$. It can easily be seen that $Fix(f_{i})=Fix(T_{i})$. To avoid any complexity in writing, by $x\leq T_{i}(y)$ we mean $x\leq z$ for all $z\in T_{i}(y)$, and by $T_{i}(x)\leq T_{i}(y)$ we mean $w\leq z$ for all $w\in T_{i}(x)$ and $z\in T_{i}(y)$. We shall prove that $f_{i}$ is a nonexpansive mapping from $C$ into $C$. Since $T_{i}$ takes nonempty closed convex values in $\mathbb{R}$, we may suppose $T_{i}(x)=[a,b]$ and $T_{i}(y)=[c,d]$ for $x,y\in C$. We consider the following cases:
Case 1. Either $x\leq y\leq T_{i}(x)\leq T_{i}(y)$ or $x\leq T_{i}(x)\leq y\leq T_{i}(y)$; then, by the definition of $f_{i}$, we have $f_{i}(x)=a$ and $f_{i}(y)=c$; therefore,
$\left\|f_{i}(x)-f_{i}(y)\right\|\leq H(T_{i}(x),T_{i}(y))\leq\left\|x-y\right\|.$
Case 2. In either case $x\leq T_{i}(y)\leq y\leq T_{i}(x)$ or $x\leq T_{i}(x)\leq T_{i}(y)\leq y$, we have $f_{i}(x)=a$ and $f_{i}(y)=d$, and it is clear that $\left\|f_{i}(x)-f_{i}(y)\right\|\leq\left\|x-y\right\|$.
Case 3. By the nonexpansiveness of $T_{i}$ with respect to $H$, the case $T_{i}(x)<x$, $y<T_{i}(y)$ is impossible.
The remaining cases are obtained by replacing $\leqslant$ with $\geqslant$ above and yield the same result. Next we show that $Fix(f_{i})$ is
convex. Let $x,y\in Fix(f_{i})$ and $0\leq\lambda\leq 1$, then for $z=\lambda
x+(1-\lambda)y$ we have
$\left\|x-f_{i}(z)\right\|=\left\|f_{i}(x)-f_{i}(z)\right\|\leq\left\|x-z\right\|=(1-\lambda)\left\|x-y\right\|,$
$\left\|y-f_{i}(z)\right\|=\left\|f_{i}(y)-f_{i}(z)\right\|\leq\left\|y-z\right\|=\lambda\left\|x-y\right\|.$
These yield
$\left\|x-y\right\|\leq\left\|x-f_{i}(z)\right\|+\left\|f_{i}(z)-y\right\|\leq\left\|x-z\right\|+\left\|y-z\right\|=\left\|x-y\right\|.$
Consequently,
$\left\|x-y\right\|=\left\|x-f_{i}(z)\right\|+\left\|f_{i}(z)-y\right\|$. We
show that $x\leq f_{i}(z)\leq y$. Suppose, on the contrary, that either
$f_{i}(z)\leq x$ or $y\leq f_{i}(z)$; each case results in either
$\left\|f_{i}(z)-y\right\|>\left\|x-y\right\|$ or
$\left\|f_{i}(z)-x\right\|>\left\|x-y\right\|$, respectively; which is a
contradiction. Hence, there is $\mu\in[0,1]$ such that $f_{i}(z)=\mu
x+(1-\mu)y$. Thus,
$\mu\left\|x-y\right\|=\left\|y-f_{i}(z)\right\|=\left\|f_{i}(y)-f_{i}(z)\right\|\leq\left\|y-z\right\|=\lambda\left\|x-y\right\|,$
$(1-\mu)\left\|x-y\right\|=\left\|x-f_{i}(z)\right\|=\left\|f_{i}(x)-f_{i}(z)\right\|\leq\left\|x-z\right\|=(1-\lambda)\left\|x-y\right\|.$
Accordingly, $\lambda=\mu$, that is, $f_{i}(z)=z$. This proves that
$Fix(T_{i})=Fix(f_{i})$ is convex.
By the finite intersection property for compact sets, we may suppose that $I=\\{1,2,\dots,n\\}$, where $n\in\mathbb{N}$. Let $F_{n}=\cap_{i=1}^{n}Fix(T_{i})$.
The proof is by induction. For $n=2$, assume that neither $T_{1}$ nor $T_{2}$ is single-valued. Following the proof of Theorem $2.1$ and applying Theorem $2.2$, it follows that $F_{2}$ is a nonempty convex subset of $C$.
We shall prove that $F_{2}$ is a nonexpansive retract of $C$. It is known, by Bruck [3], that for the nonexpansive single-valued mapping $f_{1}$ defined above, there exists a nonexpansive retraction $g_{1}$ from $C$ onto $Fix(f_{1})=Fix(T_{1})=F_{1}$. Now define $S:C\rightarrow 2^{C}$ by
$S(x)=T_{2}(g_{1}(x))\cap Fix(T_{1}).$
Then, it is easy to verify that
$H(S(x),S(y))\leq H(T_{2}(g_{1}(x)),T_{2}(g_{1}(y)))\leq\left\|g_{1}(x)-g_{1}(y)\right\|\leq\left\|x-y\right\|.$
Noticing that $g_{1}(x)=x$ for $x\in Fix(T_{1})$ and following the proof of Theorem $2.1$, it follows that $S(x)$ is a nonempty convex compact subset of $C$ for all $x\in C$. On the other hand, for $x\in Fix(S)$ we have $x\in Fix(T_{1})$; thus $g_{1}(x)=x$. Therefore, $x\in T_{2}(x)$; that is, $Fix(S)\subseteq F_{2}$. The inclusion $F_{2}\subseteq Fix(S)$ is also clear. Accordingly, the first part of the proof implies that $F_{2}$ is a nonexpansive retract of $C$.
Now, let $n\geq 3$, $F_{n-1}\neq\emptyset$, and let $r:C\rightarrow F_{n-1}$ be the corresponding retraction. We show that Fix$(T_{n}\circ r)=F_{n}$. The inclusion $F_{n}\subseteq Fix(T_{n}\circ r)$ is trivial. For the reverse inclusion, let $x\in Fix(T_{n}\circ r)$. Since $T_{n}$ commutes with $T_{i}$ for $i=1,\dots,n-1$ and $r(x)\in F_{n-1}$, the set $F_{n-1}$ is $T_{n}$-invariant, so $x\in T_{n}(r(x))\subseteq F_{n-1}$ and therefore $r(x)=x$. That is, $x\in T_{n}\circ r(x)=T_{n}(x)$. Accordingly, Fix$(T_{n}\circ r)\subseteq F_{n}$. Applying Bruck’s theorem to the nonexpansive mapping $T_{n}\circ r$, the proof is completed.
∎
Remark. In [2], Boyce gave an example of two commuting mappings that have no common fixed point. This shows that the nonexpansiveness condition on the mappings in Theorem $2.4$ cannot be dropped.
## References
* [1] J. P. Aubin and I. Ekeland, Applied Nonlinear Analysis, 1984. MR $87a:58002$
* [2] W. M. Boyce, Commuting functions with no common fixed point, Trans. Amer. Math. Soc., 137 (1969), 77-92.
* [3] R. E. Bruck, Nonexpansive retracts of Banach spaces, Bull. Amer. Math. Soc. 76 (1970), 384-386.
* [4] K. Fan, Fixed-point and minimax theorems in locally convex topological linear spaces, Proc. Nat. Acad. Sci. U.S.A., 38 (1952), 121-126.
* [5] I. L. Glicksberg, A further generalization of the Kakutani fixed point theorem with application to Nash equilibrium points, Proc. Amer. Math. Soc. 3 (1952), 170-174.
* [6] A.T-M. Lau, L. Yao, Common fixed point properties for a family of set-valued mappings, J.Math. Anal. Appl. (2018).
* [7] I. Mohamadi, A mathematical proof for the existence of a possible source for dark energy , arXiv:1704.04430, (2017).
* [8] S. Kakutani, Two fixed-point theorems concerning bicompact convex sets, Proc. Imp. Acad Tokyo 14 (1938), 242-245.
* [9] S. Kakutani, A generalization of Brouwer’s fixed point theorem, Duke Math. J. vol. 7 (1941), 457-459.
* [10] A. Markov, Quelques theoremes sur les ensembles Abeliens, Doklady Akad, Nauk SSSR(N.S.) 10 (1936), 311-314.
# Quantum crosstalk cancellation for fast entangling gates and improved multi-
qubit performance
K. X. Wei,<EMAIL_ADDRESS>E. Magesan, I. Lauer, S. Srinivasan, D. F. Bogorin, S. Carnevale, G. A. Keefe, Y. Kim, D. Klaus, W. Landers, N. Sundaresan, C. Wang, E. J. Zhang, M. Steffen, O. E. Dial, D. C. McKay,<EMAIL_ADDRESS>and A. Kandala<EMAIL_ADDRESS>IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA
###### Abstract
Quantum computers built with superconducting artificial atoms already stretch
the limits of their classical counterparts. While the lowest energy states of
these artificial atoms serve as the qubit basis, the higher levels are
responsible both for a host of attractive gate schemes and for generating
undesired interactions. In particular, when coupling these atoms to generate
entanglement, the higher levels cause shifts in the computational levels that
lead to unwanted $ZZ$ quantum crosstalk. Here, we present a novel technique
to manipulate the energy levels and mitigate this crosstalk via a simultaneous
AC Stark effect on coupled qubits. This breaks a fundamental deadlock between
qubit-qubit coupling and crosstalk, leading to a 90 ns CNOT with a gate error
of (0.19 $\pm$ 0.02) $\%$ and the demonstration of a novel CZ gate with fixed-
coupling single-junction transmon qubits. Furthermore, we show a definitive
improvement in circuit performance with crosstalk cancellation over seven
qubits, demonstrating the scalability of the technique. This work paves the
way for superconducting hardware with faster gates and greatly improved multi-
qubit circuit fidelities.
Existing quantum processors Zhang _et al._ (2020); Arute _et al._ (2019)
based on superconducting transmon qubits are pushing the limits of classical
simulability. However, the realization of quantum advantage requires these
processors to scale up in both size and operational fidelity. Reaching a
suitable threshold on both counts would further enable quantum error
correction and the realization of a fault tolerant quantum computer. These
objectives require overcoming several technical challenges, notably, two-qubit
gate fidelity, crosstalk, system stability and qubit coherence. One common
architecture, based on fixed-frequency transmon qubits with fixed couplings,
has a distinct advantage in terms of stability and coherence, but is limited in gate speed and in minimizing crosstalk by always-on interactions and their relation to the exchange coupling strength $J$. While
a larger $J$ enables a faster entangling gate, the coupling leads to state
dependent frequency shifts of neighboring coupled qubits, which is a source of
quantum crosstalk that takes the form of a $ZZ$ interaction in the system
Hamiltonian. This is formally seen from the standard cQED Hamiltonian for a
pair of coupled transmons ($i=\\{0,1\\}$), modelled as Duffing oscillators,
$\displaystyle H_{0}/h$ $\displaystyle=$
$\displaystyle\sum_{i=\\{0,1\\}}\left(\nu_{i}\hat{a}_{i}^{\dagger}\hat{a}_{i}+\frac{\alpha_{i}}{2}\hat{a}_{i}^{\dagger}\hat{a}_{i}\left(\hat{a}_{i}^{\dagger}\hat{a}_{i}-1\right)\right)$
(1)
$\displaystyle+J(\hat{a}_{0}^{\dagger}+\hat{a}_{0})(\hat{a}_{1}^{\dagger}+\hat{a}_{1}),$
with bare qubit frequencies $\nu_{i}$, bare anharmonicities $\alpha_{i}$ and
coupling strength $J$. The coupling dresses the energy levels, and the
crosstalk arising from state dependent frequency shifts is expressed as,
$\displaystyle\nu_{ZZ}$ $\displaystyle=$
$\displaystyle(\nu_{11}-\nu_{10})-(\nu_{01}-\nu_{00}).$ (2)
For fixed couplings, this is an always-on source of crosstalk, referred to as
a static $ZZ$ interaction, with the following perturbative form,
$\displaystyle\nu_{ZZ,s}$ $\displaystyle=$
$\displaystyle-\frac{2J^{2}(\alpha_{0}+\alpha_{1})}{(\alpha_{1}-\Delta_{0,1})(\alpha_{0}+\Delta_{0,1})},$
(3)
where $\Delta_{0,1}$ represents the qubit-qubit detuning. This crosstalk has
been seen to be an important limitation to multi-qubit circuit performance in
tests of quantum volume Jurcevic _et al._ (2020), randomized benchmarking
McKay _et al._ (2019), and error correction codes Takita _et al._ (2016),
and may prevent device scaling Berke _et al._ (2020). Several hardware
strategies have been employed to mitigate this crosstalk. The simplest
approach, as seen from Eq. (3), is to lower $J$, however, this comes at the
expense of gate speed and lowers the overall gate fidelity due to finite qubit
coherence. More involved strategies include the introduction of tunable
$J$ coupling Chen _et al._ (2014); Arute _et al._ (2019); Stehlik _et al._
(2021); coupling different flavors of qubits with opposite signs of
anharmonicity Zhao _et al._ (2020a); Ku _et al._ (2020); Xu and Ansari
(2020) (see Eq. (3)); and the use of engineered multi-path coupling elements
Mundada _et al._ (2019); Yan _et al._ (2018); Kandala _et al._ (2020); Zhao
_et al._ (2020b); Xu and Ansari (2020). An alternative approach is a quantum
control strategy to $ZZ$ cancellation via the AC Stark effect, using off-
resonant radiation to selectively tune the energy levels, and modulate $ZZ$,
as seen from Eq. (2). This has been demonstrated with a single near-resonant,
continuous wave (CW) drive in flux-tunable superconducting qubit architectures
Noguchi _et al._ (2020); Xiong _et al._ (2021). However, this requires being
close to a resonant transition outside the computational space, and is
susceptible to charge noise in transmon qubits.
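To make the preceding discussion concrete, the following sketch (assuming QuTiP is available, and using illustrative rather than device parameters) computes $\nu_{ZZ}$ of Eq. (2) by diagonalizing the Hamiltonian of Eq. (1) and compares it with the perturbative expression of Eq. (3):

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor

def exact_zz(nu0, nu1, a0, a1, J, nlev=4):
    # nu_ZZ of Eq. (2) from exact diagonalization of Eq. (1); inputs in GHz.
    b, I = destroy(nlev), qeye(nlev)
    n = b.dag() * b
    duff = lambda nu, a: nu * n + 0.5 * a * (n * n - n)
    H = (tensor(duff(nu0, a0), I) + tensor(I, duff(nu1, a1))
         + J * tensor(b + b.dag(), b + b.dag()))
    evals, evecs = H.eigenstates()
    def E(i, j):  # dressed level tracked by maximal overlap with bare |ij>
        bare = tensor(basis(nlev, i), basis(nlev, j))
        return evals[int(np.argmax([abs(bare.overlap(v)) for v in evecs]))]
    return (E(1, 1) - E(1, 0)) - (E(0, 1) - E(0, 0))

def static_zz_pert(J, a0, a1, d01):
    # Perturbative static ZZ of Eq. (3), with d01 = nu0 - nu1.
    return -2 * J**2 * (a0 + a1) / ((a1 - d01) * (a0 + d01))

# Illustrative parameters: 5.0 and 5.1 GHz qubits, -0.33 GHz anharmonicities, J = 3 MHz.
print(exact_zz(5.0, 5.1, -0.33, -0.33, 0.003), static_zz_pert(0.003, -0.33, -0.33, -0.1))
```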
In this work we show that the $ZZ$ interaction for a pair of coupled transmon
qubits can be tuned over several orders of magnitude by far-off resonant
driving on both qubits. We develop an analytical model of the effect for
transmons, building off previous theoretical work studying the case of coupled
spins Li _et al._ (2008). We then demonstrate that the effect, dubbed siZZle
- Stark induced $ZZ$ by level excursions - can be employed for both static
$ZZ$ cancellation as well as implementing $ZX$ and $ZZ$ entangling gates in
all-transmon processors with simple direct capacitive coupling. The ability to
cancel the static $ZZ$ interaction enables us to employ stronger qubit-qubit
coupling, leading to a state-of-the-art cross-resonance gate with over a
factor of 2 improvement in gate time from previous reports Kandala _et al._
(2020). Furthermore, we demonstrate a novel high-fidelity CZ gate based on
siZZle which adds to the toolkit of microwave-only Chow _et al._ (2011);
Poletto _et al._ (2012); Chow _et al._ (2013); Paik _et al._ (2016);
Krinner _et al._ (2020) two qubit gates. In contrast to previous approaches
Noguchi _et al._ (2020); Xiong _et al._ (2021), our approach with Stark
tones on both qubits introduces an additional control parameter, the phase
difference between the two tones, that is particularly useful for extending to
larger devices. We demonstrate $ZZ$ cancellation on a line of 7 qubits
combining siZZle with the hardware approach of multi-path couplers, and
demonstrate improvements in the performance of Quantum Volume (QV) circuits
Cross _et al._ (2019).
To describe the physics of siZZle, we consider the Hamiltonian of Eqn. (1) and
add off-resonant drives on both qubits,
$\displaystyle H_{\textrm{siZZle}}/h=H_{0}/h+$
$\displaystyle\sum_{i=\\{0,1\\}}\Omega_{i}\cos{(2\pi\nu_{d}t+\phi_{i})}(\hat{a}_{i}^{\dagger}+\hat{a}_{i}),$
(4)
with amplitudes $\Omega_{i}$, phases $\phi_{i}$, and a common frequency $\nu_{d}$. The device schematic in Fig. 1(a) depicts a simple direct
capacitive coupling between the qubits that produces the Hamiltonian model of
Eq. 4. In the limit of ${\Omega_{i}}/{|\nu_{d}-\nu_{i}|}\ll 1$, we can write
the dressed RWA Hamiltonian as,
$H_{\textrm{eff}}/h=\tilde{\nu}_{ZI}{ZI}/4+\tilde{\nu}_{IZ}{IZ}/4+\tilde{\nu}_{ZZ}{ZZ}/4,$
(5)
where the tilde notation refers to being in the doubly-dressed frame with
respect to the exchange coupling and Stark tones. To second order in
$\Omega_{i}$ and first order in $J$, the $ZZ$ coefficient is,
$\displaystyle\tilde{\nu}_{ZZ}$ $\displaystyle=$ $\displaystyle\nu_{ZZ,s}+$
(6)
$\displaystyle\frac{2J\alpha_{0}\alpha_{1}\Omega_{0}\Omega_{1}\cos{(\phi_{0}-\phi_{1})}}{\Delta_{0,d}\Delta_{1,d}(\Delta_{0,d}+\alpha_{0})(\Delta_{1,d}+\alpha_{1})},$
where the static term is given by Eqn. (3). In the above equations,
$\Delta_{i,j}=(\nu_{i}-\nu_{j})$ denotes detunings where $i,j\in\\{0,1,d\\}$.
The most significant contribution to the Stark shifts comes from the term
associated with a single, isolated drive
$\displaystyle\tilde{\nu}_{ZI,\textrm{single}}=-\frac{\Omega_{0}^{2}\alpha_{0}}{\Delta_{0,d}(\Delta_{0,d}+\alpha_{0})},$
(7)
which will be of significance in later discussions for the impact of the Stark
tones on qubit coherence. A formal derivation of these expressions is
discussed in the Supplementary Information. Eq. (6) reveals the various
control knobs to manipulate the strength of the Stark induced $ZZ$
interaction: the amplitudes of the two tones, the drive-qubit detunings, the
anharmonicities, and the phase differences between the two drive tones.
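A small numerical sketch of Eq. (6) (with the static term of Eq. (3) passed in) is given below; it can be used, for instance, to solve for the drive phase or amplitudes that null the total $ZZ$. The parameter names mirror the symbols above, with $\Delta_{i,d}$ written as d0d and d1d.

```python
import numpy as np

def sizzle_zz(J, a0, a1, Om0, Om1, d0d, d1d, phi, zz_static=0.0):
    # Perturbative Stark-induced ZZ of Eq. (6); d_id = nu_i - nu_d.
    drive = (2 * J * a0 * a1 * Om0 * Om1 * np.cos(phi)
             / (d0d * d1d * (d0d + a0) * (d1d + a1)))
    return zz_static + drive

def stark_shift_single(Om0, a0, d0d):
    # Dominant single-drive Stark shift of Eq. (7).
    return -Om0**2 * a0 / (d0d * (d0d + a0))
```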
Figure 1: Physics of siZZle. (a) Modulation of the $ZZ$ interaction strength
$\tilde{\nu}_{ZZ}$ as the Rabi amplitude of the Stark tones is swept (ratio
$\Omega_{1}/\Omega_{0}=0.5$) for fixed frequency $\nu_{d}=5.075$ GHz and phase
difference $\phi=\pi$. Experimental data (black circles) is compared to
numerical (blue line) and perturbative (red line) calculations using the
device parameters of Table 1. The inset shows a circuit representation of the
primary two-qubit device discussed in this work. (b) The corresponding
excursions of the computational levels, calculated numerically, to generate
the $\tilde{\nu}_{ZZ}$ shown in (a). (c) Modulation of the $ZZ$ interaction strength $\tilde{\nu}_{ZZ}$ as the phase difference between the Stark tones is swept, for fixed frequency $\nu_{d}=5.075$ GHz and drive amplitudes $\Omega_{1}=0.5\Omega_{0}=20$ MHz. Experimental data (black circles) is compared to numerical (blue line) and perturbative (red line) calculations using the device parameters of Table 1. (d) The corresponding excursions of the computational levels, calculated numerically, to generate the $\tilde{\nu}_{ZZ}$ shown in (c).
Figure 2: Mapping the siZZle parameter
space (a) Experimental sweep of $\tilde{\nu}_{ZZ}$ versus Stark amplitudes for
fixed Stark frequency $\nu_{d}=$ 5.065 GHz and calibrated phase $\phi=\pi$.
The red dotted line highlights the $\tilde{\nu}_{ZZ}\propto\Omega_{0}\Omega_{1}$ dependence that is expected from the perturbative expression of Eq. (6). (b) Experimental sweep of $\tilde{\nu}_{ZZ}$ versus Stark amplitude and Stark frequency for a fixed ratio of Stark amplitudes $\Omega_{1}/\Omega_{0}=0.4$ and calibrated phase $\phi=\pi$. (c) Experimental sweep of $\tilde{\nu}_{ZZ}$ versus the phase difference $\phi$ and Stark frequency for fixed Stark amplitudes $\Omega_{0}=37.5$ MHz, $\Omega_{1}=15$ MHz. The + and - symbols in the three sub-figures refer to the sign of $\tilde{\nu}_{ZZ}$.
Fig. 1 reveals the physics of siZZle, employing the parameters of the primary
two-qubit device studied in this work, device A. The parameters are given in
Table 1. We perform numerical diagonalization of Eq. (4) after moving into the
frame of the drive. Fig. 1 depicts how the excursions of the computational levels lead to a modulation of $\tilde{\nu}_{ZZ}$, as the Stark tone
amplitudes ((a), (b)) and phase difference ((c), (d)) are swept. We also see
good agreement between the numerical calculations and the derived analytical
expression of Eq. 6 in the perturbative limit. Experimentally, we measure
$\tilde{\nu}_{ZZ}$ by employing standard Ramsey sequences on Q0 while Q1 is in
$|0\rangle$ or $|1\rangle$. The experimentally measured values show very good
agreement with numerics in Fig. 1(a), (c). A wider parameter space is
experimentally mapped in the 2D sweeps of Fig. 2 and further depicts the
physics of siZZle. Fig. 2(a) maps $\tilde{\nu}_{ZZ}$ versus the Rabi
amplitudes of the Stark tones on both qubits, and the region of
$\tilde{\nu}_{ZZ}\sim 0$ kHz clearly highlights the
$\tilde{\nu}_{ZZ}\propto\Omega_{0}\Omega_{1}$ dependence expected from Eq.
(6). Fig. 2(b) maps the modulation of $\tilde{\nu}_{ZZ}$ versus siZZle frequency and the Rabi amplitudes, and shows that sizeable $ZZ$ modulation can
be obtained over a wide range of frequencies. As can be seen qualitatively
from Eq. (6), placing the Stark tone away from the qubit frequency can be
compensated by increasing the drive amplitude, for the same
$\tilde{\nu}_{ZZ}$. Fig. 2 (c) demonstrates the sinusoidal phase dependence of
$\tilde{\nu}_{ZZ}$, over a range of frequencies. The experimental data of
Figures 1 and 2 reveal two interesting regimes of operation. At fairly modest
drives, we observe that we can cancel the $ZZ$ interaction to operate at $\tilde{\nu}_{ZZ}\sim 0$. At stronger drive amplitudes, one can generate large $ZZ$ rates for two-qubit entangling gates. These regimes of operation are discussed in Figs. 3 and 4 and in the next two sections.
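The numerical procedure used for Fig. 1 can be sketched compactly: build the drive-frame RWA version of Eq. (4) for two Duffing oscillators, diagonalize, and assign eigenvalues to the computational states by overlap. In the sketch below, the four-level truncation and the chosen drive amplitudes are illustrative assumptions, not the exact settings behind the figure.

```python
import numpy as np

d = 4                                       # levels kept per transmon
a = np.diag(np.sqrt(np.arange(1, d)), 1)    # annihilation operator
I = np.eye(d)

def nu_zz(nus, alphas, J, nu_d, Oms, phis):
    """nu_ZZ (GHz) from the drive-frame RWA Hamiltonian of Eq. (4)."""
    H = np.zeros((d * d, d * d), dtype=complex)
    ops = [np.kron(a, I), np.kron(I, a)]
    for i, ai in enumerate(ops):
        ni = ai.conj().T @ ai
        # detuned Duffing ladder plus RWA drive in the frame rotating at nu_d
        H += (nus[i] - nu_d) * ni + 0.5 * alphas[i] * ni @ (ni - np.eye(d * d))
        H += 0.5 * Oms[i] * (np.exp(-1j * phis[i]) * ai.conj().T
                             + np.exp(1j * phis[i]) * ai)
    H += J * (ops[0].conj().T @ ops[1] + ops[0] @ ops[1].conj().T)  # RWA exchange
    evals, evecs = np.linalg.eigh(H)
    # label eigenstates by maximum overlap with the bare states (weak mixing)
    E = lambda i, j: evals[np.argmax(np.abs(evecs[i * d + j, :]) ** 2)]
    return E(0, 0) + E(1, 1) - E(0, 1) - E(1, 0)

zz = nu_zz(nus=[4.960, 5.016], alphas=[-0.283, -0.287], J=7.745e-3,
           nu_d=5.075, Oms=[0.040, 0.020], phis=[0.0, np.pi])
print(f"nu_ZZ = {1e6 * zz:.1f} kHz")
```

Sweeping `phis[1]` or `Oms` in this routine reproduces the qualitative behavior of the numerical (blue) curves in Fig. 1.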
| | $\tilde{\nu}_{0}$ | $\tilde{\nu}_{1}$ | $\tilde{\alpha}_{0}$ | $\tilde{\alpha}_{1}$ |
|---|---|---|---|---|
| No siZZle | 4.960 | 5.016 | -0.283 | -0.287 |
| siZZle | 4.953 | 5.014 | -0.276 | -0.286 |
Table 1: Qubit frequencies and anharmonicities for device A depicted in Fig. 1(a) before and after $ZZ$ cancellation. All numbers are in units of GHz. We note that these numbers represent the experimentally measured frequencies, dressed by the coupling $J=7.745$ MHz.
Figure 3: Fast cross-resonance with static ZZ
cancellation (a) Simultaneous randomized benchmarking (RB) of 50 ns single
qubit gates in the absence of static $ZZ$ cancellation (blue) leads to an
average error per gate (EPG) of 6.6e-3. After static $ZZ$ cancellation with a
pair of CW Stark tones at $\nu_{d}=$ 5.1 GHz, the EPG dramatically improves to
7.1e-4 (red). Bold symbols represent the mean of the individual seeds (represented
by light symbols), and dotted lines are exponential fits to the decay of the
excited state probability $P_{1}$. (b) Phase calibration of the CW Stark tones
to $\tilde{\nu}_{ZZ}\sim 0$ for $\Omega_{0}=59$ MHz and $\Omega_{1}=22$ MHz.
(c) Strength of $ZX$ interaction $\tilde{\nu}_{ZX}$ versus cross-resonance
drive amplitude $\Omega_{\textrm{CR}}$ with (red) and without (blue) static
$ZZ$ cancellation. Here, Q1 is the control qubit and Q0 is the target qubit.
Bold circles represent experimentally measured rates, using Hamiltonian
tomography. Dotted lines are perturbative estimates, using Eq. 8. (d) EPG
measured by interleaved RB, for direct CNOT gates constructed from cross-
resonance, after $ZZ$ cancellation, as a function of CNOT gate time. The blue
dotted line represents the coherence limit to gate error, estimated using
standard $T_{1}$ and $T_{2}$ measurements before every RB run. (e) Post-$ZZ$
cancellation interleaved RB of a 90 ns direct CNOT gate reveals a best EPG of
1.86e-3, with an upper bound on the EPG of 4.0e-3.
In the first regime of operation, siZZle is used to cancel $ZZ$, which can be
utilized to increase the speed of entangling gates, such as cross-resonance
(CR) Paraoanu (2006); Chow _et al._ (2011), which are set by the coupling
strength $J$. As discussed previously in Eq. 3, increasing $J$ typically leads
to large values of static $ZZ$ crosstalk. Recent work Kandala _et al._ (2020)
with multi-path couplers demonstrated a way to break the standard relationship
between $J$ and $\nu_{ZZ,\textrm{static}}$ (operating at
$J/\nu_{ZZ,\textrm{static}}\sim 130$), leading to state-of-the-art CR gate
fidelities. A drawback of the multi-path coupler approach is that
$\nu_{ZZ,\textrm{static}}$ depends strongly on the qubit frequencies, so that
attempting to achieve $\nu_{ZZ,\textrm{static}}\sim 0$ is non-trivial in fixed
frequency architectures given current precision over qubit frequency
allocation Zhang _et al._ (2020). Our quantum control approach to $ZZ$
cancellation introduced here enables tuning to $\tilde{\nu}_{ZZ}\sim 0$ over a
range of parameters since we have several degrees of freedom in our control
space. Importantly, this allows for a decoupling of $J$ and $\tilde{\nu}_{ZZ}$
so that fast, high-fidelity entangling gates are possible with minimal static
crosstalk in an architecture consisting of standard single path couplers and
nominally fixed-frequency qubits.
To test this, our device, described in Table 1, has a large coupling strength of $J\sim 7.745$ MHz, leading to a very large static $ZZ$ interaction of
$\nu_{ZZ,\textrm{static}}=875$ kHz. Without any further mitigation of $ZZ$,
this prevents high-fidelity simultaneous single qubit operation due to
strongly state-dependent qubit frequencies. This is seen in the decay and
variance of simultaneous single qubit randomized benchmarking sequences shown
in Fig. 3(a) with an estimated average error per gate (EPG) of 6.6e-3. In
order to mitigate this crosstalk, we add continuous wave (CW) Stark drives to
cancel $ZZ$ and operate in a basis dressed by these off-resonant drives. The
system Hamiltonian builds off Eq. (4) to now include additional drives for
gate operation:
$H/h=H_{\textrm{siZZle}}/h+\sum_{i\in\{0,1\}}\Omega_{i,\textrm{gate}}(t)\cos{(2\pi\nu_{i,\textrm{gate}}t+\phi_{i})}(\hat{a}_{i}^{\dagger}+\hat{a}_{i}),$
where $\Omega_{i,\textrm{gate}}(t)$ and $\nu_{i,\textrm{gate}}$ are the time-
dependent amplitude and frequency of the single/two-qubit gate drive on qubit
$i$ respectively.
The large choice of operating parameters for the $ZZ$ cancellation tones makes
identifying an optimal set of working parameters a complex task. First, we
limit leakage out of the computational subspace by placing the $ZZ$
cancellation tone above both qubits. Next, we optimize the detuning of the
cancellation tone. Smaller detuning reduces the drive amplitude required for
$ZZ$ cancellation. There is a practical limit to the drive amplitude that can be delivered to the qubits before heating of system components occurs.
However, if the detuning is too small then the cancellation tone may start to
interfere with the gate drive and time-dependent terms in the effective
Hamiltonian in the frame of the drive can no longer be ignored. For these
reasons, we select $\nu_{d}=5.1$ GHz for device A. The CW amplitudes are chosen to be just sufficient to approach $\tilde{\nu}_{ZZ}\sim 0$ after phase calibration (i.e., at $\phi=\pi$), see Fig. 3(b). We estimate the CW amplitudes
from the independent qubit Stark shifts to be $\Omega_{0}=59$ MHz and
$\Omega_{1}=22$ MHz. After tuning to $\tilde{\nu}_{ZZ}\sim 0$, the single
qubit gates are re-calibrated with the cancellation drives on. The new
operating frequencies of the qubits are $\tilde{\nu}_{0}=4.953$ GHz and $\tilde{\nu}_{1}=5.014$ GHz, so the qubits have modest Stark shifts of
-7.8 MHz and -1.7 MHz respectively. Reducing the $ZZ$ in this way results in
remarkable improvements in simultaneous single qubit operation for 50 ns
gates, with an estimated gate error of 7.1e-4 from randomized benchmarking,
see Fig. 3(a). We note that there are several operating points for achieving
$\nu_{ZZ}\sim 0$, but operating at stronger CW amplitudes with larger Stark
shifts can lead to additional dephasing.
With $ZZ$ cancelled and the single-qubit gates calibrated, we now calibrate a
two-qubit gate with cross-resonance. This entails additional drives on the
control qubit (Q1) at the dressed target qubit (Q0) frequency. In Fig. 3c, we
measure the generated $ZX$ rates versus CR drive amplitude from tomography of
the CR drive Hamiltonian, with and without $ZZ$ cancellation. The $ZX$ rate is
modified due to the presence of the cancellation tones, however, as a
consequence of the large $J$ coupling, one can access fairly large $ZX$ rates
at modest CR drive amplitudes. A perturbative model for the $ZX$ rate is
derived that includes the contribution from the cancellation tones. Assuming a
CR tone on transmon 0 (for the experiment of this paper the CR tone is on
transmon 1 so the labels will be swapped) we have,
$\tilde{\nu}_{ZX}\sim J\Omega_{0,\text{gate}}\left(A+B\Omega_{0}^{2}+C\Omega_{1}^{2}\right),$ (8)
where
$A=-\frac{\delta}{\Delta_{0,1}(\delta+\Delta_{0,1})},$ (9)
Here $\delta$ denotes the transmon anharmonicity (written $\alpha$ in the supplement), and $B$, $C$ are given in the supplement. We see that the $ZX$ rate has
contributions that are quadratic in the cancellation tone amplitudes. The
zero-point slope of the $ZX$ rate is modified by the Stark tones, and when $\Omega_{0}=\Omega_{1}=0$ the usual first-order expression for
$\tilde{\nu}_{ZX}$ is obtained. Fig. 3c contains the $ZX$ rates with the Stark
tones both off and on, and we see good agreement between the perturbative
model and experiment at low CR amplitudes.
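As a rough numerical illustration of Eqs. (8) and (9) with the Stark tones off ($\Omega_{0}=\Omega_{1}=0$), the sketch below evaluates the leading-order slope $J\Omega_{0,\text{gate}}A$, taking $\delta$ to be the anharmonicity of the driven transmon and using device-A numbers; the CR amplitudes are illustrative.

```python
# Leading-order ZX rate with the cancellation tones off; frequencies in GHz.
delta = -0.283                 # anharmonicity of the driven transmon
Delta01 = 4.960 - 5.016        # qubit-qubit detuning Delta_{0,1}
J = 7.745e-3                   # exchange coupling

A = -delta / (Delta01 * (delta + Delta01))   # Eq. (9)
for Om_gate in (0.010, 0.020, 0.030):        # illustrative CR amplitudes
    nu_zx = J * Om_gate * A                  # Eq. (8) with Omega_0 = Omega_1 = 0
    print(f"Omega_gate = {1e3 * Om_gate:.0f} MHz -> nu_ZX ~ {1e3 * nu_zx:+.2f} MHz")
```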
The large $J$ coupling is also of consequence for the reduced control qubit
Stark shift, discussed previously in Kandala _et al._ (2020), and the
resulting stability of unechoed direct CNOT gates constructed using CR. We
construct and calibrate direct CNOT gates, similar to Kandala _et al._
(2020), and study the gate error obtained from interleaved RB as a function of
CNOT gate time in Fig. 3d. The calibration sequences and pulse shapes are
detailed in the supplement. At the optimal gate time of 90 ns, we depict
results from interleaved RB sequences in Fig. 3e, which we use to estimate an
error per gate (EPG) of 1.86e-3, with an error per Clifford (EPC) of 6.0e-3
from standard RB. Our decomposition has 1.5 CNOT gates per Clifford on average
and this places an upper bound on the EPG of EPC/1.5 $\sim$ 4.0e-3. The ratio
of EPG/EPC can be compared to analysis in Epstein _et al._ (2014) for
confidence in the interleaved RB estimates. We also note that our gate errors
fluctuate with changes in coherence and the defect environment Carroll _et
al._ (2021) in the vicinity of the qubit frequencies. At the time of the
displayed benchmarking, our measured coherence times for Q0(Q1) were $T_{1}=$
66 (66) $\mu$s and $T_{2}=$ 49 (84) $\mu$s.
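The quoted error numbers can be cross-checked with a few lines; the interleaved-RB estimator below is the standard depolarizing-model relation (with $d=4$ for two qubits) and is stated here as an assumption, not the exact fitting procedure used for Fig. 3.

```python
# Upper bound on the CNOT error per gate from the error per Clifford.
epc = 6.0e-3                       # error per Clifford from standard RB
cnots_per_clifford = 1.5
print(f"EPG upper bound = {epc / cnots_per_clifford:.1e}")   # ~4.0e-3

# Standard interleaved-RB estimate from reference/interleaved decay constants.
def interleaved_epg(p_ref, p_int, d=4):
    return (d - 1) * (1.0 - p_int / p_ref) / d
```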
Figure 4: All-$ZZ$ SiZZle gate (a) Post-$ZZ$-cancellation 2D sweep of
$\nu_{ZZ}$ with pulsed Stark frequency $\nu_{\textrm{gate}}$ and amplitude,
with the ratio of the two amplitudes fixed to $\Omega_{0}=\Omega_{1}$, and
phase calibrated to maximum contrast. The CW tones to cancel $ZZ$ use the same
parameters discussed in Fig. 3, with $\nu_{d}=5.1$ GHz. (b) Interleaved RB of a calibrated CZ gate based on siZZle reveals an error per gate of 5e-3, with an upper bound of 7.6e-3.
Figure 5: Dependence of multi-qubit
circuit fidelity on $ZZ$ interaction (Top) A device schematic of the line of 7
qubits, with a combination of hardware and control approaches to $ZZ$
modulation. The device employs multi-path couplers composed of a direct
capacitive coupling and a $\lambda$/4 bus resonator. (Bottom) Average heavy
output probability (HOP) for the same set of 200 random quantum volume (QV)
circuits, at different levels of $\tilde{\nu}_{ZZ}$. Error bars represent
standard error of the mean. The maximum and minimum $\tilde{\nu}_{ZZ}$ data
points are tuned by setting the pair-wise phase difference between the siZZle
tones to $\phi\sim 0$ and $\phi\sim\pi$ respectively. The middle data point is
measured in the absence of siZZle. (Inset) Scatter of individual circuit HOPs
for the native (bare) device versus post-$ZZ$ cancellation.
In the second regime of operation, siZZle can be used as a standalone method
for performing a two-qubit gate due to the large $ZZ$ rates that can be
generated as shown in Figs. 1 and 2. In order to mitigate the static $ZZ$, we
continue to use CW tones at $\nu_{d}=5.1$ GHz, but additionally pulse a
second set of off-resonant tones at a different frequency
$\nu_{\textrm{gate}}$ to generate large $\tilde{\nu}_{ZZ}$. This is shown in
Figure 4a, where we sweep the pulsed tone frequency and amplitudes
($\Omega_{0,\textrm{gate}}=\Omega_{1,\textrm{gate}}$) to generate
$\tilde{\nu}_{ZZ}$ exceeding a few MHz. We note that $ZZ$ gate operation can
also be achieved with a single frequency, using amplitude or phase modulation
to switch between low and high $ZZ$ rates. Once again, the operating parameter
space is very large, and finding a parameter set that is optimized for gate
fidelity, speed and leakage is a challenging task that is left for future
study. Here, we provide a proof-of-concept example of siZZle gate operation at
$\nu_{\textrm{gate}}=4.9$ GHz, with maximum amplitudes
$\Omega_{0,\textrm{gate}},\Omega_{1,\textrm{gate}}\sim 26$ MHz. We calibrate the phase difference between the pulsed tones for maximum $\tilde{\nu}_{ZZ}$, and employ frame changes on the control and target qubits to construct a novel direct CZ gate of length 200 ns. Interleaved RB, shown in Fig. 4b, reveals a
gate error of 5e-3, with an error per gate upper bound of 7.6e-3.
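A quick sanity check on the gate length: under $H_{\text{eff}}/h=\tilde{\nu}_{ZZ}\,ZZ/4$, a CZ ($\pi$ conditional phase) accumulates in $t=1/(2\tilde{\nu}_{ZZ})$ if pulse rise/fall and the frame changes are ignored.

```python
# CZ time from the ZZ rate under H_eff = h * nu_ZZ * ZZ / 4 (square pulse).
for nu_zz_MHz in (1.0, 2.5, 5.0):
    t_ns = 1e3 / (2.0 * nu_zz_MHz)   # 1/(2 nu_ZZ), converted from MHz to ns
    print(f"nu_ZZ = {nu_zz_MHz:.1f} MHz -> t_CZ = {t_ns:.0f} ns")
```

A rate of 2.5 MHz, well within the range of Fig. 4a, corresponds to the 200 ns gate demonstrated here.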
Finally, we study the impact of siZZle on multi-qubit circuit fidelity, using
a line of 7 qubits from a 27 qubit device with a heavy-hex architecture
Jurcevic _et al._ (2020), that we shall refer to as Device B. In order to
reduce the impact to qubit coherence from the Stark tones, our experiment
combines the quantum control approach to static $ZZ$ cancellation introduced
here with the hardware approach of multi-path couplers Kandala _et al._
(2020). The multi-path couplers already suppress the $\tilde{\nu}_{ZZ}$
compared to an equivalent direct coupler with the same effective $J$. This
reduces the amplitude of the siZZle tones required to then tune to
$\tilde{\nu}_{ZZ}\sim 0$, and consequently, the magnitude of the individual
qubit Stark shifts (see Eq. 7), and the impact to qubit coherence, if any.
This discussion and the device properties are detailed in the Supplementary
Information. As a reminder, we have three knobs to manipulate the $ZZ$
interaction in the device: the amplitude of the off-resonant tones, their
detuning from the qubit frequencies, and the pair-wise phase difference. This flexibility makes the technique particularly attractive for device-wide $ZZ$ cancellation on even
more complex topologies. For the considered line of qubits, we choose a common
Stark frequency set to 5.1 GHz, above all the qubit frequencies, leaving the
individual amplitudes and phases as the free control parameters. Placing the
Stark frequency above all the qubits reduces the possibility of undesired
frequency collisions. We then adjust the Stark amplitude on one of the qubits
to induce a Stark shift of $\sim$ 1 MHz. The amplitudes of the CW tones on the subsequent qubits are then adjusted sequentially such that each is just sufficient to tune to $\tilde{\nu}_{ZZ}\sim 0$ for every pair (i.e., with $\phi_{i}-\phi_{j}\sim\pi$). We then re-calibrate the single- and two-qubit
gates at the new dressed frequencies. We see that we can tune to
$\tilde{\nu}_{ZZ}\sim 0$ with very modest Stark shifts ($\sim$ 1 MHz), which
is important for reducing the impact to qubit coherence, as discussed above.
We then use cross-resonance to calibrate an echo CNOT with rotary target
drives, as in Sundaresan _et al._ (2020). We emphasize that we observe no
large changes in CNOT gate fidelity for all the pairs, at the different
$\tilde{\nu}_{ZZ}$ levels, which highlights the need for circuit-level
benchmarks such as quantum volume (QV) Cross _et al._ (2019) that are
sensitive to accumulated $ZZ$ errors from qubit idle times. In order to
benchmark multi-qubit performance, we employ seven-qubit QV circuits and
observe an improvement in the heavy output probability (HOP) from 0.5810 $\pm$
0.0027 to 0.5996 $\pm$ 0.0023 as the average $\tilde{\nu}_{ZZ}$ is tuned from
the bare value $\sim$ 40 kHz to $\sim$ 0 kHz. We employ 200 random circuits,
with a mean circuit time of $\sim$ 14.1 $\mu$s, and $83$ CNOT gates on
average. The improvement in the distribution of the individual circuit HOPs
with $ZZ$ cancellation is also depicted in Fig. 5. For the purpose of this
demonstration, we do not employ the circuit optimization and improved readout
techniques discussed in Jurcevic _et al._ (2020). Our control knobs also
enable us to systematically study the impact of $\tilde{\nu}_{ZZ}$ on circuit
performance. We modulate the average $\tilde{\nu}_{ZZ}$ in the device merely
by adjusting the pair-wise phase differences, and re-calibrate all the gates
at every step. Fig. 5 depicts the systematic decrease in HOP with increasing average $\tilde{\nu}_{ZZ}$, and highlights why $ZZ$ cancellation will be
crucial for improving the performance of superconducting quantum processors.
The technique also opens up the path to more targeted studies of the impact of
the $ZZ$ interaction on spectator interactions and parallel gate operation,
all in a single device.
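The sequential tune-up described above can be summarized in schematic Python; `measure_zz` and `set_stark_tone` are hypothetical stand-ins for the experimental routines, and the amplitude bound and target shift are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

NU_D = 5.1  # common Stark-tone frequency (GHz), above all qubit transitions

def cancel_zz_along_line(qubits, measure_zz, set_stark_tone):
    """Sequentially tune each neighbouring pair on a line to nu_ZZ ~ 0.

    Phases alternate 0, pi, 0, ... so every pair sees phi_i - phi_j ~ pi;
    each amplitude is made just sufficient to cancel the residual ZZ with
    the already-calibrated neighbour.
    """
    set_stark_tone(qubits[0], freq=NU_D, amp=0.020, phase=0.0)
    for k, (q_prev, q) in enumerate(zip(qubits, qubits[1:]), start=1):
        phase = np.pi * (k % 2)
        def residual(amp, q=q, phase=phase, q_prev=q_prev):
            set_stark_tone(q, freq=NU_D, amp=amp, phase=phase)
            return abs(measure_zz(q_prev, q))
        best = minimize_scalar(residual, bounds=(0.0, 0.050), method="bounded")
        set_stark_tone(q, freq=NU_D, amp=best.x, phase=phase)
```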
In conclusion, we demonstrate an all-microwave technique, siZZle, for arbitrary control of the $ZZ$ interaction rate in coupled transmon devices. We
use siZZle to demonstrate a novel high-fidelity CZ gate that could enable
hardware-efficient implementations of near-term algorithms on existing fixed-
frequency quantum processors. Furthermore, static $ZZ$ cancellation with
siZZle enables us to take cross-resonance past the 100 ns milestone for two-
qubit gate time, with state-of-the-art fidelity. This gives us a clear path to
increasing the fixed $J$ coupling in devices and also serves as a platform to
explore the physics of well-controlled strong coupling interactions. Finally,
combining siZZle with hardware approaches to $ZZ$ cancellation is leveraged to definitively improve multi-qubit circuit fidelity, highlighting the scalability of our technique. These results reveal quantum control with multi-
color drive tones to be an attractive approach to extend the reach of fixed
frequency superconducting quantum architectures.
We note recent independent work Mitchell _et al._ (2021) reporting siZZle and
a CZ gate based on the effect.
###### Acknowledgements.
We acknowledge Malcolm Carroll, Antonio Corcoles, Pat Gumann, Michael Gordon,
Shawn Hall, Sean Hart, Muir Kumph, Jim Rozen, Maika Takita for experimental
contributions and Doug McClure, Petar Jurcevic for helpful discussions. The
device bring-up, gate calibration and characterization work was supported by
IARPA under LogiQ (contract W911NF-16-1-0114).
## References
* Zhang _et al._ (2020) Eric J Zhang, Srikanth Srinivasan, Neereja Sundaresan, Daniela F Bogorin, Yves Martin, Jared B Hertzberg, John Timmerwilke, Emily J Pritchett, Jeng-Bang Yau, Cindy Wang, _et al._ , “High-fidelity superconducting quantum processors via laser-annealing of transmon qubits,” arXiv preprint arXiv:2012.08475 (2020).
* Arute _et al._ (2019) Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando GSL Brandao, David A Buell, _et al._ , “Quantum supremacy using a programmable superconducting processor,” Nature 574, 505–510 (2019).
* Jurcevic _et al._ (2020) Petar Jurcevic, Ali Javadi-Abhari, Lev S Bishop, Isaac Lauer, Daniela F Bogorin, Markus Brink, Lauren Capelluto, Oktay Günlük, Toshinaro Itoko, Naoki Kanazawa, _et al._ , “Demonstration of quantum volume 64 on a superconducting quantum computing system,” arXiv preprint arXiv:2008.08571 (2020).
* McKay _et al._ (2019) David C. McKay, Sarah Sheldon, John A. Smolin, Jerry M. Chow, and Jay M. Gambetta, “Three-qubit randomized benchmarking,” Phys. Rev. Lett. 122, 200502 (2019).
* Takita _et al._ (2016) Maika Takita, A. D. Córcoles, Easwar Magesan, Baleegh Abdo, Markus Brink, Andrew Cross, Jerry M. Chow, and Jay M. Gambetta, “Demonstration of weight-four parity measurements in the surface code architecture,” Phys. Rev. Lett. 117, 210505 (2016).
* Berke _et al._ (2020) Christoph Berke, Evangelos Varvelis, Simon Trebst, Alexander Altland, and David P. DiVincenzo, “Transmon platform for quantum computing challenged by chaotic fluctuations,” arXiv preprint arXiv:2012.05923 (2020).
* Chen _et al._ (2014) Yu Chen, C. Neill, P. Roushan, N. Leung, M. Fang, R. Barends, J. Kelly, B. Campbell, Z. Chen, B. Chiaro, A. Dunsworth, E. Jeffrey, A. Megrant, J. Y. Mutus, P. J. J. O’Malley, C. M. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, Michael R. Geller, A. N. Cleland, and John M. Martinis, “Qubit architecture with high coherence and fast tunable coupling,” Phys. Rev. Lett. 113, 220502 (2014).
* Stehlik _et al._ (2021) J Stehlik, DM Zajac, DL Underwood, T Phung, J Blair, S Carnevale, D Klaus, GA Keefe, A Carniol, M Kumph, _et al._ , “Tunable coupling architecture for fixed-frequency transmons,” arXiv preprint arXiv:2101.07746 (2021).
* Zhao _et al._ (2020a) Peng Zhao, Peng Xu, Dong Lan, Xinsheng Tan, Haifeng Yu, and Yang Yu, “High-contrast zz interaction using multi-type superconducting qubits,” arXiv preprint arXiv:2002.07560 (2020a).
* Ku _et al._ (2020) Jaseung Ku, Xuexin Xu, Markus Brink, David C McKay, Jared B Hertzberg, Mohammad H Ansari, and BLT Plourde, “Suppression of unwanted $zz$ interactions in a hybrid two-qubit system,” arXiv preprint arXiv:2003.02775 (2020).
* Xu and Ansari (2020) Xuexin Xu and MH Ansari, “Zz freedom in two qubit gates,” arXiv preprint arXiv:2009.00485 (2020).
* Mundada _et al._ (2019) Pranav Mundada, Gengyan Zhang, Thomas Hazard, and Andrew Houck, “Suppression of qubit crosstalk in a tunable coupling superconducting circuit,” Physical Review Applied 12, 054023 (2019).
* Yan _et al._ (2018) Fei Yan, Philip Krantz, Youngkyu Sung, Morten Kjaergaard, Daniel L Campbell, Terry P Orlando, Simon Gustavsson, and William D Oliver, “Tunable coupling scheme for implementing high-fidelity two-qubit gates,” Physical Review Applied 10, 054062 (2018).
* Kandala _et al._ (2020) A Kandala, KX Wei, S Srinivasan, E Magesan, S Carnevale, GA Keefe, D Klaus, O Dial, and DC McKay, “Demonstration of a high-fidelity cnot for fixed-frequency transmons with engineered zz suppression,” arXiv preprint arXiv:2011.07050 (2020).
* Zhao _et al._ (2020b) Peng Zhao, Dong Lan, Peng Xu, Guangming Xue, Mace Blank, Xinsheng Tan, Haifeng Yu, and Yang Yu, “Suppression of static zz interaction in an all-transmon quantum processor,” (2020b), arXiv:2011.03976 [quant-ph] .
* Noguchi _et al._ (2020) Atsushi Noguchi, Alto Osada, Shumpei Masuda, Shingo Kono, Kentaro Heya, Samuel Piotr Wolski, Hiroki Takahashi, Takanori Sugiyama, Dany Lachance-Quirion, and Yasunobu Nakamura, “Fast parametric two-qubit gates with suppressed residual interaction using the second-order nonlinearity of a cubic transmon,” Phys. Rev. A 102, 062408 (2020).
* Xiong _et al._ (2021) Haonan Xiong, Quentin Ficheux, Aaron Somoroff, Long B. Nguyen, Ebru Dogan, Dario Rosenstock, Chen Wang, Konstantin N. Nesterov, Maxim G. Vavilov, and Vladimir E. Manucharyan, “Arbitrary controlled-phase gate on fluxonium qubits using differential ac-stark shifts,” (2021), arXiv:2103.04491 [quant-ph] .
* Li _et al._ (2008) Jian Li, K. Chalapat, and G. S. Paraoanu, “Entanglement of superconducting qubits via microwave fields: Classical and quantum regimes,” Phys. Rev. B 78, 064503 (2008).
* Chow _et al._ (2011) Jerry M. Chow, A. D. Córcoles, Jay M. Gambetta, Chad Rigetti, B. R. Johnson, John A. Smolin, J. R. Rozen, George A. Keefe, Mary B. Rothwell, Mark B. Ketchen, and M. Steffen, “Simple all-microwave entangling gate for fixed-frequency superconducting qubits,” Phys. Rev. Lett. 107, 080502 (2011).
* Poletto _et al._ (2012) S. Poletto, Jay M. Gambetta, Seth T. Merkel, John A. Smolin, Jerry M. Chow, A. D. Córcoles, George A. Keefe, Mary B. Rothwell, J. R. Rozen, D. W. Abraham, Chad Rigetti, and M. Steffen, “Entanglement of two superconducting qubits in a waveguide cavity via monochromatic two-photon excitation,” Phys. Rev. Lett. 109, 240505 (2012).
* Chow _et al._ (2013) Jerry M Chow, Jay M Gambetta, Andrew W Cross, Seth T Merkel, Chad Rigetti, and M Steffen, “Microwave-activated conditional-phase gate for superconducting qubits,” New. J. Phys. 15, 115012 (2013).
* Paik _et al._ (2016) Hanhee Paik, A. Mezzacapo, Martin Sandberg, D. T. McClure, B. Abdo, A. D. Córcoles, O. Dial, D. F. Bogorin, B. L. T. Plourde, M. Steffen, A. W. Cross, J. M. Gambetta, and Jerry M. Chow, “Experimental demonstration of a resonator-induced phase gate in a multiqubit circuit-qed system,” Phys. Rev. Lett. 117, 250502 (2016).
* Krinner _et al._ (2020) S Krinner, P Kurpiers, B Royer, P Magnard, I Tsitsilin, J-C Besse, A Remm, A Blais, and A Wallraff, “Demonstration of an all-microwave controlled-phase gate between far detuned qubits,” arXiv preprint arXiv:2006.10639 (2020).
* Cross _et al._ (2019) Andrew W. Cross, Lev S. Bishop, Sarah Sheldon, Paul D. Nation, and Jay M. Gambetta, “Validating quantum computers using randomized model circuits,” Phys. Rev. A 100, 032328 (2019).
* Paraoanu (2006) GS Paraoanu, “Microwave-induced coupling of superconducting qubits,” Physical Review B 74, 140504 (2006).
* Epstein _et al._ (2014) Jeffrey M. Epstein, Andrew W. Cross, Easwar Magesan, and Jay M. Gambetta, “Investigating the limits of randomized benchmarking protocols,” Phys. Rev. A 89, 062321 (2014).
* Carroll _et al._ (2021) Malcolm Carroll, Sami Rosenblatt, Petar Jurcevic, Isaac Lauer, and Abhinav Kandala, “Dynamics of superconducting qubit relaxation times,” (2021), arXiv:2105.15201 [quant-ph] .
* Sundaresan _et al._ (2020) Neereja Sundaresan, Isaac Lauer, Emily Pritchett, Easwar Magesan, Petar Jurcevic, and Jay M Gambetta, “Reducing unitary and spectator errors in cross resonance with optimized rotary echoes,” arXiv preprint arXiv:2007.02925 (2020).
* Mitchell _et al._ (2021) Bradley K. Mitchell, Ravi K. Naik, Alexis Morvan, Akel Hashim, John Mark Kreikebaum, Brian Marinelli, Wim Lavrijsen, Kasra Nowrouzi, David I. Santiago, and Irfan Siddiqi, “Hardware-efficient microwave-activated tunable coupling between superconducting qubits,” arXiv preprint arXiv:2105.05384 (2021).
* Chow _et al._ (2010) J. M. Chow, L. DiCarlo, J. M. Gambetta, F. Motzoi, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf, “Optimized driving of superconducting artificial atoms for improved single-qubit gates,” Phys. Rev. A 82, 040305(R) (2010).
* McKay _et al._ (2017) David C. McKay, Christopher J. Wood, Sarah Sheldon, Jerry M. Chow, and Jay M. Gambetta, “Efficient $z$ gates for quantum computing,” Phys. Rev. A 96, 022330 (2017).
* Sheldon _et al._ (2016) Sarah Sheldon, Easwar Magesan, Jerry M. Chow, and Jay M. Gambetta, “Procedure for systematically tuning up cross-talk in the cross-resonance gate,” Phys. Rev. A 93, 060302 (2016).
* Sheldon _et al._ (2015) Sarah Sheldon, Lev S. Bishop, Easwar Magesan, Stefan Filipp, Jerry M. Chow, and Jay M. Gambetta, “Characterizing errors on qubit operations via iterative randomized benchmarking,” (2015), arXiv:1504.06597.
Supplementary Information: Quantum crosstalk cancellation for fast entangling
gates and improved multi-qubit performance
## I SiZZle theory
We first provide some intuition for the dual-drive Stark induced $ZZ$ effect.
We consider a simple two-level model for the qubits, dressed by monochromatic
drives, with the $J$ coupling introduced as a perturbation. In the absence of
coupling, each qubit can be described independently by
$H_{0}/h=\nu_{Q0}|1\rangle\langle 1|+\Omega_{0}\cos(2\pi\nu_{d}t+\phi_{0})(|0\rangle\langle 1|+|1\rangle\langle 0|).$ (S1)
The off-resonant drive dresses the eigenstates and shifts the eigenvalues,
$|\overline{0}\rangle \approx |0\rangle-\frac{\Omega_{0}}{2\Delta_{0}}e^{-i\phi_{0}}|1\rangle,$ (S2)
$|\overline{1}\rangle \approx |1\rangle+\frac{\Omega_{0}}{2\Delta_{0}}e^{i\phi_{0}}|0\rangle,$ (S3)
$\overline{E}_{0}/h \approx -\frac{\Omega^{2}_{0}}{4\Delta_{0}},$ (S4)
$\overline{E}_{1}/h \approx \nu_{Q0}+\frac{\Omega^{2}_{0}}{4\Delta_{0}},$ (S5)
where $\Delta_{0}=\nu_{Q0}-\nu_{d}$ is the drive detuning (with $\Delta_{1}$ defined analogously for the second qubit). Therefore, in the two-qubit basis, the dressed states are,
$|\overline{00}\rangle \approx |00\rangle-\frac{\Omega_{0}}{2\Delta_{0}}e^{-i\phi_{0}}|10\rangle-\frac{\Omega_{1}}{2\Delta_{1}}e^{-i\phi_{1}}|01\rangle+\frac{\Omega_{0}\Omega_{1}}{4\Delta_{0}\Delta_{1}}e^{-i(\phi_{0}+\phi_{1})}|11\rangle,$ (S6)
$|\overline{10}\rangle \approx |10\rangle+\frac{\Omega_{0}}{2\Delta_{0}}e^{i\phi_{0}}|00\rangle-\frac{\Omega_{1}}{2\Delta_{1}}e^{-i\phi_{1}}|11\rangle-\frac{\Omega_{0}\Omega_{1}}{4\Delta_{0}\Delta_{1}}e^{i(\phi_{0}-\phi_{1})}|01\rangle,$ (S7)
$|\overline{01}\rangle \approx |01\rangle-\frac{\Omega_{0}}{2\Delta_{0}}e^{-i\phi_{0}}|11\rangle+\frac{\Omega_{1}}{2\Delta_{1}}e^{i\phi_{1}}|00\rangle-\frac{\Omega_{0}\Omega_{1}}{4\Delta_{0}\Delta_{1}}e^{-i(\phi_{0}-\phi_{1})}|10\rangle,$ (S8)
$|\overline{11}\rangle \approx |11\rangle+\frac{\Omega_{0}}{2\Delta_{0}}e^{i\phi_{0}}|01\rangle+\frac{\Omega_{1}}{2\Delta_{1}}e^{i\phi_{1}}|10\rangle+\frac{\Omega_{0}\Omega_{1}}{4\Delta_{0}\Delta_{1}}e^{i(\phi_{0}+\phi_{1})}|00\rangle.$ (S9)
The dressing of $|00\rangle$ and $|11\rangle$ with $|01\rangle$ and $|10\rangle$ allows exchange interactions to couple them directly, leading to a $ZZ$ interaction. We show this explicitly by calculating the energy shifts due to a $J$ coupling, $H_{J}/h=J(|01\rangle\langle 10|+|10\rangle\langle 01|)$,
$\langle\overline{01}|H_{J}/h|\overline{01}\rangle \approx -J\frac{\Omega_{0}\Omega_{1}}{4\Delta_{0}\Delta_{1}}e^{-i(\phi_{0}-\phi_{1})},$ (S10)
$\langle\overline{10}|H_{J}/h|\overline{10}\rangle \approx -J\frac{\Omega_{0}\Omega_{1}}{4\Delta_{0}\Delta_{1}}e^{i(\phi_{0}-\phi_{1})},$ (S11)
$\langle\overline{00}|H_{J}/h|\overline{00}\rangle \approx J\frac{\Omega_{0}\Omega_{1}}{2\Delta_{0}\Delta_{1}}\cos(\phi_{0}-\phi_{1}),$ (S12)
$\langle\overline{11}|H_{J}/h|\overline{11}\rangle \approx J\frac{\Omega_{0}\Omega_{1}}{2\Delta_{0}\Delta_{1}}\cos(\phi_{0}-\phi_{1}).$ (S13)
From Eq. 2, we thus obtain for the doubly dressed frame,
$\tilde{\nu}_{ZZ} \approx 2J\frac{\Omega_{0}\Omega_{1}}{\Delta_{0}\Delta_{1}}\cos(\phi_{0}-\phi_{1}).$ (S14)
For transmons, a similar calculation that includes the $|2\rangle$ states leads to the expression in Eq. 6 of the main text.
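This two-level picture is straightforward to check numerically: diagonalize the drive-frame RWA Hamiltonian of two driven qubits together with the $J$ coupling and sweep the phase difference. The sketch below (frequencies in GHz, illustrative parameter values) reproduces the $\cos(\phi_{0}-\phi_{1})$ dependence of Eq. (S14) in the perturbative regime.

```python
import numpy as np

def exact_zz(D0, D1, Om0, Om1, phi0, phi1, J):
    """Exact nu_ZZ of two driven two-level qubits in the drive frame (RWA)."""
    def hq(D, Om, phi):   # single driven qubit: Delta|1><1| + RWA drive
        return np.array([[0.0, 0.5 * Om * np.exp(1j * phi)],
                         [0.5 * Om * np.exp(-1j * phi), D]])
    I2 = np.eye(2)
    H = np.kron(hq(D0, Om0, phi0), I2) + np.kron(I2, hq(D1, Om1, phi1))
    H[1, 2] += J          # J(|01><10| + |10><01|)
    H[2, 1] += J
    evals, evecs = np.linalg.eigh(H)
    # label eigenstates by maximum overlap with the bare basis (weak mixing)
    E = [evals[np.argmax(np.abs(evecs[b, :]) ** 2)] for b in range(4)]
    return E[0] + E[3] - E[1] - E[2]   # E00 + E11 - E01 - E10

D0, D1, Om0, Om1, J = -0.100, -0.060, 0.010, 0.010, 0.005
for dphi in (0.0, np.pi / 2, np.pi):
    zz = exact_zz(D0, D1, Om0, Om1, dphi, 0.0, J)
    print(f"phi_0 - phi_1 = {dphi:4.2f} -> nu_ZZ = {1e6 * zz:+8.2f} kHz")
```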
More generally, we start from Eq. 4 of the main text,
$H_{\textrm{siZZle}}/h=H_{0}/h+\sum_{i\in\{0,1\}}\Omega_{i}\cos{(2\pi\nu_{d}t+\phi_{i})}(\hat{a}_{i}^{\dagger}+\hat{a}_{i}),$ (S15)
with amplitudes $\Omega_{i}$, phases $\phi_{i}$, and a common frequency
$\nu_{d}$. First, we move into the frame rotating at $\nu_{d}$ via the unitary
operator,
$R_{d}=e^{-i2\pi\nu_{d}t\left(\hat{a}_{0}^{\dagger}\hat{a}_{0}+\hat{a}_{1}^{\dagger}\hat{a}_{1}\right)}.$ (S16)
The RWA is made on the drive tones by ignoring fast-rotating terms. The result is a time-independent Hamiltonian; diagonalizing it and then restoring $\nu_{d}$ via $R_{d}^{\dagger}$ gives the effective Hamiltonian describing the dynamics under the exchange coupling and Stark tones,
$H_{\text{eff}}/h=\tilde{\nu}_{IZ}IZ/4+\tilde{\nu}_{ZI}ZI/4+\tilde{\nu}_{ZZ}ZZ/4,$ (S17)
where
$\tilde{\nu}_{ZZ}=(\tilde{E}_{00}+\tilde{E}_{11}-\tilde{E}_{01}-\tilde{E}_{10})/h$.
For the case $\alpha_{0}\approx\alpha_{1}$, $\tilde{\nu}_{ZZ}$ is given in Eq.
6 of the main text and $\tilde{\nu}_{IZ}$, $\tilde{\nu}_{ZI}$ are given by,
$\tilde{\nu}_{IZ}=(\tilde{E}_{01}-\tilde{E}_{00}+\tilde{E}_{11}-\tilde{E}_{10})/h\approx\nu_{IZ,J}+\nu_{1,s}+\frac{J(\alpha_{0}+\alpha_{1})\Omega_{0}\Omega_{1}\cos(\phi_{0}-\phi_{1})}{\Delta_{1,d}(\alpha_{0}+\Delta_{0,d})(\alpha_{1}+\Delta_{1,d})},$ (S18)
$\tilde{\nu}_{ZI}=(\tilde{E}_{10}-\tilde{E}_{00}+\tilde{E}_{11}-\tilde{E}_{01})/h\approx\nu_{ZI,J}+\nu_{0,s}+\frac{J(\alpha_{0}+\alpha_{1})\Omega_{0}\Omega_{1}\cos(\phi_{0}-\phi_{1})}{\Delta_{0,d}(\alpha_{0}+\Delta_{0,d})(\alpha_{1}+\Delta_{1,d})},$ (S19)
where
$\tilde{\nu}_{IZ,J}=2\left(-\nu_{1}+J^{2}\left(\frac{1}{\Delta_{01}}+\frac{\alpha_{0}+\alpha_{1}}{(\Delta_{01}+\alpha_{0})(\Delta_{01}-\alpha_{1})}\right)\right),$
$\tilde{\nu}_{ZI,J}=2\left(-\nu_{0}+J^{2}\left(-\frac{1}{\Delta_{01}}+\frac{\alpha_{0}+\alpha_{1}}{(\Delta_{01}+\alpha_{0})(\Delta_{01}-\alpha_{1})}\right)\right),$ (S20)
$\nu_{0,s}=-\frac{\Omega_{0}^{2}\alpha_{0}}{\Delta_{0,d}(\alpha_{0}+\Delta_{0,d})},$
$\nu_{1,s}=-\frac{\Omega_{1}^{2}\alpha_{1}}{\Delta_{1,d}(\alpha_{1}+\Delta_{1,d})}.$ (S21)
## II Cross-resonance with $ZZ$ cancellation tones
The starting Hamiltonian is given by,
$H/h=\sum_{i\in\{0,1\}}\left(\nu_{i}\hat{a}_{i}^{\dagger}\hat{a}_{i}+\frac{\alpha_{i}}{2}\hat{a}_{i}^{\dagger}\hat{a}_{i}\left(\hat{a}_{i}^{\dagger}\hat{a}_{i}-1\right)\right)+J(\hat{a}_{0}^{\dagger}+\hat{a}_{0})(\hat{a}_{1}^{\dagger}+\hat{a}_{1})+\sum_{i\in\{0,1\}}\Omega_{i}\cos{(2\pi\nu_{d}t+\phi_{i})}(\hat{a}_{i}^{\dagger}+\hat{a}_{i})+\Omega_{0,\textrm{gate}}(t)\cos{(2\pi\nu_{0,\textrm{gate}}t+\phi_{0,\textrm{gate}})}(\hat{a}_{0}^{\dagger}+\hat{a}_{0}).$ (S22)
In order to find the effective Hamiltonian describing the system including the
cross-resonance tone, we first find the effective Hamiltonian describing the
dynamics of the exchange coupling and Stark tones. The same series of transformations is also applied to the CR drive tone $\Omega_{0,\textrm{gate}}(t)\cos{(2\pi\nu_{0,\textrm{gate}}t+\phi_{0,\textrm{gate}})}(\hat{a}_{0}^{\dagger}+\hat{a}_{0})$, so we obtain,
$H \rightarrow H_{\text{eff}}+\Omega_{0,\textrm{gate}}(t)\cos{(2\pi\nu_{0,\textrm{gate}}t+\phi_{0,\textrm{gate}})}D_{\textrm{CR}}(t),$ (S23)
where $D_{\textrm{CR}}(t)$ is the transformed drive operator. We set
$\nu_{0,\textrm{gate}}=\tilde{\nu}_{1}$ and $\phi_{0,\textrm{gate}}=0$. Moving
into the frame rotating at $\tilde{\nu}_{1}$ and making the RWA gives, to first order in the cross-resonance tone amplitude, first order in $J$, and second order in the Stark tone amplitudes (assuming $\alpha_{0}=\alpha_{1}=\alpha$ for simplicity),
$\tilde{\nu}_{ZX}=\text{tr}\left(H_{\text{eff,CR}}\frac{ZX}{2}\right)=J\Omega_{0,\text{gate}}\left(A+B\Omega_{0}^{2}+C\Omega_{1}^{2}\right),$ (S24)
where
$A=-\frac{\alpha}{\Delta_{0,1}(\alpha+\Delta_{0,1})},$ (S25)
$B=-\frac{\alpha}{4\Delta_{0,1}(\alpha+\Delta_{0,1})^{2}\Delta_{0,d}}+\frac{(2\alpha+\Delta_{0,1})}{8(\alpha+\Delta_{0,1})\Delta_{0,d}\Delta_{0,1}^{2}}-\frac{\alpha}{4(\alpha+\Delta_{0,d})(\alpha+\Delta_{0,1})\Delta_{0,1}^{2}}-\frac{\alpha}{4(\alpha+\Delta_{0,1}+\Delta_{0,d})(\alpha+\Delta_{0,1})\Delta_{0,1}^{2}}+\frac{(2\alpha+\Delta_{0,1})}{8\Delta_{1,d}(\alpha+\Delta_{0,1})\Delta^{2}}+\frac{\Delta_{0,1}(\alpha+\Delta_{0,d}+\Delta_{1,d})}{8(\alpha+\Delta_{0,1})^{2}(2\alpha+\Delta_{0,1})(\alpha+\Delta_{0,d})\Delta_{1,d}}+\frac{1}{16(\alpha+\Delta_{0,1})^{2}}\Bigg(-\frac{2}{\Delta_{0,d}}-\frac{2}{\alpha+\Delta_{0,d}}-\frac{2\alpha}{(2\alpha+\Delta_{0,1})(\alpha+\Delta_{0,d})}+\frac{6\alpha}{(2\alpha+\Delta_{0,1})(2\alpha+\Delta_{0,d})}+\frac{2\alpha}{(\alpha+\Delta_{0,d})(\alpha+\Delta_{0,1}+\Delta_{0,d})}+\frac{6\alpha}{(2\alpha+\Delta_{0,1})(3\alpha+\Delta_{0,1}+\Delta_{0,d})}-\frac{10\alpha+4\Delta_{0,1}}{\Delta_{1,d}(2\alpha+\Delta_{0,1})}\Bigg),$ (S26)
and
$C=\frac{\alpha}{4\Delta_{0,1}^{2}}\Bigg(\frac{1}{(\Delta_{0,1}-\alpha)\Delta_{0,d}}-\frac{\Delta_{0,1}}{(\alpha+\Delta_{0,1})^{2}(\alpha+\Delta_{0,d})}+\frac{\alpha(\alpha+3\Delta_{0,1})}{(\Delta_{0,1}-\alpha)(\alpha+\Delta_{0,1})^{2}\Delta_{1,d}}-\frac{\alpha(\alpha+3\Delta_{0,1})}{(\Delta_{0,1}-\alpha)(\alpha+\Delta_{0,1})^{2}(\alpha+\Delta_{1,d})}+\frac{\Delta_{0,1}}{(\alpha+\Delta_{0,1})^{2}(\Delta_{1,d}-\Delta_{0,1})}+\frac{1}{(\alpha-\Delta_{0,1})(\alpha-\Delta_{0,1}+\Delta_{1,d})}\Bigg).$ (S27)
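The trace in Eq. (S24) simply exploits the orthogonality of two-qubit Pauli strings, $\text{tr}(P_{j}P_{k})=4\delta_{jk}$: for $H=\sum_{k}c_{k}P_{k}/2$, the coefficient is recovered as $c_{k}=\text{tr}(HP_{k})/2$. A minimal sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_coeff(H, P):
    """Coefficient c_k in H = sum_k c_k P_k / 2, using tr(P_j P_k) = 4 delta_jk."""
    return np.real(np.trace(H @ P)) / 2

ZX = np.kron(Z, X)
H = 1.5 * ZX / 2                 # toy Hamiltonian with nu_ZX = 1.5
print(pauli_coeff(H, ZX))        # -> 1.5
```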
## III Gate Calibration: Device A
The single-qubit gates are 4$\sigma$ derivative Gaussian quadrature corrected (DRAG) pulses Chow _et al._ (2010) of duration 50 ns. The CNOT gate consists
of two flat-topped Gaussian pulses applied simultaneously on the control and
target qubits at the target frequency, followed by Z rotations on both qubits
implemented by frame changes McKay _et al._ (2017). The target pulse envelope
is given by
$\Omega(t)=\Omega_{x}(t)\cos(2\pi\nu_{tg}t)+(\beta\dot{\Omega}_{x}(t)+\gamma|\dot{\Omega}_{x}(t)|)\sin(2\pi\nu_{tg}t),$
where $\Omega_{x}$ is the flat-topped Gaussian pulse, $\nu_{tg}$ is the target
frequency, $\beta$ and $\gamma$ are the DRAG and skew corrections
respectively. The control pulse does not have DRAG or skew correction.
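The envelope construction can be sketched directly from the expression above; the $\beta$, $\gamma$, and timing values below are illustrative, with the calibrated values set by the procedure that follows.

```python
import numpy as np

def flat_top_gaussian(t, t_rise, t_total, sigma):
    """Gaussian rise/fall of length t_rise with a flat top in between."""
    rise = np.exp(-0.5 * ((t - t_rise) / sigma) ** 2)
    fall = np.exp(-0.5 * ((t - (t_total - t_rise)) / sigma) ** 2)
    return np.where(t < t_rise, rise,
                    np.where(t > t_total - t_rise, fall, 1.0))

sigma, t_rise, t_total = 10.0, 20.0, 90.0    # ns; 2-sigma rise and fall
nu_tg, beta, gamma = 5.014, 0.05, 0.01       # GHz; illustrative corrections

t = np.arange(0.0, t_total, 0.1)
Ox = flat_top_gaussian(t, t_rise, t_total, sigma)
dOx = np.gradient(Ox, t)                     # numerical derivative of envelope
Omega = (Ox * np.cos(2 * np.pi * nu_tg * t)
         + (beta * dOx + gamma * np.abs(dOx)) * np.sin(2 * np.pi * nu_tg * t))
```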
To begin with the CNOT gate calibration, we do a rough amplitude calibration
on the control pulse such that the $ZX$ rotation on the target is $\pi/2$,
then we apply a pulsed version of Hamiltonian tomography Sheldon _et al._
(2016) on the control pulse to align the $ZX$ interaction along the $-x$ axis.
Next we turn on the target pulse and do a fine calibration by simultaneously
varying the control amplitude, target amplitude, control and target phases,
target drag, target skew, and target frame change to tune the gate unitary to
be $|0\rangle\langle 0|\otimes I+e^{-i\varphi}|1\rangle\langle 1|\otimes X$.
Finally, the control frame change is calibrated to cancel $\varphi$, which
brings the unitary to a CNOT gate. The fine calibration sequences are shown in
Fig. S1 A-F, which measure the target dynamics when the control qubit is in either the $|0\rangle$ or $|1\rangle$ state. The control amplitude and target amplitude are updated according to Fig. S1 A and B, with the goal of simultaneously satisfying $\theta_{ZX}+\theta_{IX}=0$ and $-\theta_{ZX}+\theta_{IX}=\pi$, where
$\theta_{ZX}$ and $\theta_{IX}$ are the rotations due to cross-resonance and
target pulses, respectively. The target drag ($\beta$) and the gate angle are updated according to Fig. S1 D and F; these calibrations ensure the target rotation is along the $x$-axis when the control is in $|1\rangle$. When
calibrating the gate angle, the control and target phases are updated
together. Finally, the target skew ($\gamma$) and target frame change (fc) are calibrated according to Fig. S1 E and C, which ensure the target dynamics are the identity when the control is in $|0\rangle$. The control frame change (FC) is
calibrated at the very end, using the sequence in Fig. S1 G. The final calibrated pulse envelope is shown in Fig. S1 H; the rise and fall of the flat-topped Gaussian pulses are $2\sigma$ long, where $\sigma=10$ ns.
Figure S1: Direct CNOT gate calibration sequences Sequences A-F are
implemented simultaneously. Target and control amplitudes are updated
according to the outputs of A and B. The target frame change, target drag,
target skew, and control/target phase are updated according to the outputs of
C-F respectively. Sequence G is used to calibrate the control frame change.
The calibrated pulse envelope for the CNOT gate is shown in H. The sequence in brackets is repeated $n$ times. For A-F the target qubit population is measured; for G the control qubit population is measured.
The calibration of the CZ gate is similar to that of the CNOT gate. Here, two
flat-topped Gaussian pulses are applied simultaneously to the control and
target qubits at the siZZle frequency, followed by two frame changes on the
control and target qubits. We fix the target amplitude and the relative phase
between the two siZZle pulses, then calibrate the control amplitude and target
frame change simultaneously to satisfy $\theta_{ZZ}+\theta_{IZ}=0$ and
$-\theta_{ZZ}+\theta_{IZ}=\pi$. Finally we calibrate the control frame change
to cancel the control Stark shift and bring the unitary to a CZ gate. The
calibration sequences are shown in Fig. S2 A-C, and the final CZ pulse envelope is shown in Fig. S2 D, where the rise and fall times are $3\sigma$ with $\sigma=10$ ns. Unlike for the CNOT gate, drag and skew corrections are not used in the CZ gate calibration.
Figure S2: Direct CZ gate calibration sequences Sequence A and B are
implemented simultaneously, and the outputs are used to update the control
amplitude and target frame change (fc). Sequence C is used to calibrate the
control frame change (FC). The calibrated pulse envelope for the CZ gate is
shown in D.
We show the calibration data for both the CNOT and CZ gates. The fine
calibration routine is an iterative process Sheldon _et al._ (2015), which
terminates when the absolute difference between the calibrated rotation angles
and the desired rotation angles becomes less than $0.01$. In Fig. S3 A-G we show the converged data for the sequences used in the CNOT gate calibration, and in Fig. S3 H-J the final converged data for the sequences used in the CZ gate calibration.
Figure S3: Output of the calibration sequences used for CNOT and CZ gates. A-G correspond to the outputs of the CNOT calibration sequences, whereas H-J correspond to the outputs of the CZ calibration sequences. The blue points are experimental data, and the red dashed lines are fits to the experimental data. We extract the calibrated rotation angles from the fits. The y-axis in each plot corresponds to either the control or target population, and the x-axis is the number of repetitions ($n$) shown in the calibration sequence.
Figure S4: Device A coherence Scatter plots of $T_{1}$ (left), $T_{2}$ (middle), and $T_{2}^{*}$ (right) times for Q0 (top, red) and Q1 (bottom, blue), with and without CW siZZle tones. All the measurements were interleaved and taken at 30 minute intervals. The Stark drive on Q0 for $ZZ$ cancellation is larger than that on Q1 at the chosen operating point, as is the corresponding Stark shift, resulting in a clear reduction in $T_{2}$ and $T_{2}^{*}$.
## IV SiZZle with multi-path ZZ cancellation couplers
As seen in the coherence data of Device A, discussed in Fig. S4, there can be
a degradation of coherence with $ZZ$ cancellation. Particularly, at large $J$
couplings with standard couplers, one requires large siZZle amplitudes
$\Omega$ to achieve $ZZ$ cancellation. Since the qubit Stark shifts are
proportional to $\Omega^{2}$, this makes the qubits more susceptible to
amplitude noise, and consequently can lead to additional dephasing. In this
section, we numerically show that using multi-path couplers, for the same
effective $J$, one can achieve full $ZZ$ cancellation at smaller siZZle
amplitudes due to requiring smaller qubit Stark shifts. We start with the
following form of the Hamiltonian with a direct qubit coupling and an
additional coupling path via a bus resonator.
$H/h=\sum_{i\in\{0,1\}}\left(\nu_{i}\hat{a}_{i}^{\dagger}\hat{a}_{i}+\frac{\alpha_{i}}{2}\hat{a}_{i}^{\dagger}\hat{a}_{i}\left(\hat{a}_{i}^{\dagger}\hat{a}_{i}-1\right)\right)+J(\hat{a}_{0}^{\dagger}+\hat{a}_{0})(\hat{a}_{1}^{\dagger}+\hat{a}_{1})+\sum_{j}\nu_{j}\hat{b}_{j}^{\dagger}\hat{b}_{j}+\sum_{i\in\{0,1\},j}g_{i,j}(\hat{a}_{i}^{\dagger}+\hat{a}_{i})(\hat{b}_{j}^{\dagger}+\hat{b}_{j})+\sum_{i\in\{0,1\}}\Omega_{i}\cos{(2\pi\nu_{d}t+\phi_{i})}(\hat{a}_{i}^{\dagger}+\hat{a}_{i}).$ (S28)
Most terms here have already been defined in Eq. 4 of the main text. The
additional terms arise from the bus coupling, where $g_{i,j}$ is the coupling
from qubit $i$ to the $j$’th harmonic mode of the bus at frequency $\nu_{j}$.
For simplicity, we drop the counter-rotating terms and consider a single bus
mode. Eq. S28 is then transformed into a time independent form by moving into
a frame rotating at the drive frequency $\nu_{d}$ via the rotation operator
$\hat{R}=e^{-i2\pi\nu_{d}t(\hat{a}_{0}^{\dagger}\hat{a}_{0}+\hat{a}_{1}^{\dagger}\hat{a}_{1}+\hat{b}^{\dagger}\hat{b})}$
and applying the RWA. One can then obtain the Stark shifts
$\tilde{\nu}_{ZI},\tilde{\nu}_{IZ}$ and the $ZZ$ interaction
$\tilde{\nu}_{ZZ}$ by diagonalizing the time independent Hamiltonian.
We consider the following parameters, which are similar to pairs on device B:
$\nu_{0}=4.85$ GHz, $\nu_{1}=4.95$ GHz, $\alpha_{0}=\alpha_{1}=-290$ MHz,
$g_{0}=g_{1}=135$ MHz, $J=10.6$ MHz, $\nu_{\textrm{bus}}=6.35$ GHz,
$\nu_{d}=5.1$ GHz and $\Omega_{0}=\Omega_{1}$. From the low amplitude
dependence of $\tilde{\nu}_{ZZ}$, we estimate an effective $J$ coupling for
the multi-path coupler (mpc) using the form of Eq. 6 to be
$J_{\textrm{eff}}=3.28$ MHz. We then compare the Stark tone amplitude
dependence of $\tilde{\nu}_{ZI},\tilde{\nu}_{IZ}$ and $\tilde{\nu}_{ZZ}$ for
this mpc device with a single path coupler (spc) of the same
$J_{\textrm{eff}}$. This is depicted in Fig. S5. For the mpc, while $ZZ$
cancellation only requires Stark tone amplitudes $\sim 15$ MHz, the spc
requires amplitudes $\sim 55$ MHz, seen in Fig. S5a. Consequently, the qubit
Stark shifts are much smaller at $ZZ$ cancellation for the mpc device, seen in
Fig. S5b and c, thereby reducing the sensitivity to Stark tone amplitude
noise.
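For orientation, the opposing bus contribution can be seen from the standard dispersive (virtual-photon-exchange) estimate $J_{\textrm{eff}}\approx J+\frac{g_{0}g_{1}}{2}\left(\frac{1}{\Delta_{0b}}+\frac{1}{\Delta_{1b}}\right)$ with $\Delta_{ib}=\nu_{i}-\nu_{\textrm{bus}}$; since both qubits sit below the bus, the second term is negative and partially cancels the direct coupling. This lowest-order expression is only a rough guide and does not reproduce the $J_{\textrm{eff}}=3.28$ MHz extracted from the full numerics.

```python
# Rough dispersive estimate of J_eff for the multi-path coupler (GHz units).
nu0, nu1, nu_bus = 4.85, 4.95, 6.35
g0 = g1 = 0.135
J_direct = 0.0106

D0b, D1b = nu0 - nu_bus, nu1 - nu_bus        # negative: qubits below the bus
J_bus = 0.5 * g0 * g1 * (1.0 / D0b + 1.0 / D1b)
print(f"J_bus ~ {1e3 * J_bus:+.2f} MHz, J_eff ~ {1e3 * (J_direct + J_bus):+.2f} MHz")
```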
Figure S5: SiZZle with multi-path $ZZ$ cancellation couplers Numerical
simulations of the Stark tone amplitude dependence of (a) $\tilde{\nu}_{ZZ}$,
(b) $\tilde{\nu}_{ZI}$ and (c) $\tilde{\nu}_{IZ}$ for multi-path coupler
(blue) and a single-path coupler (red) with the same $J_{\textrm{eff}}=3.2$
MHz, defined in the text. We consider the following parameters for the mpc
device: $\nu_{0}=4.85$ GHz, $\nu_{1}=4.95$ GHz, $\alpha_{0}=\alpha_{1}=-290$
MHz, $g_{0}=g_{1}=135$ MHz, $J=10.6$ MHz, $\nu_{\textrm{bus}}=6.35$ GHz,
$\nu_{d}=5.1$ GHz and $\Omega_{0}=\Omega_{1}$. The blue (red) dotted line
represents the operating point for ZZ cancellation for the mpc (spc) device.
For the mpc, the $ZZ$ cancellation is achieved at smaller Stark tone
amplitudes, and the smaller $\tilde{\nu}_{ZI}$ and $\tilde{\nu}_{IZ}$ are less
sensitive to amplitude noise on the Stark tones.
## V Device B: Seven-qubit device with multi-path couplers for $ZZ$
cancellation
In this section, we detail device B from the main text. The seven qubits
represent a subsection of a larger lattice of 27 qubits in the heavy-hex
architecture. In order to reduce the static $ZZ$ interaction compared to
standard single path couplers, the device employs coupling elements composed
of a direct capacitive coupler and a $\lambda/4$ bus resonator Kandala _et
al._ (2020). The bus resonator frequencies are in the range 6.35-6.55 GHz. For
$ZZ$ cancellation, a common frequency $\nu_{d}=5.1$ GHz was chosen for the CW
tones, above all the qubit transitions. Most of the qubit parameters and gate
fidelities are detailed in Fig. S6. The qubit anharmonicities are in the range
-288 to -295 MHz, and the average readout fidelity is 98.2 $\%$. As seen in
Fig. S6, the qubit frequencies are shifted by at most 1.2 MHz by the CW tones
for full $ZZ$ cancellation. This helps retain good coherence times for the
device, even after $ZZ$ cancellation, depicted in Fig. S7. Some of the qubits
show a modest decrease in $T_{2}$, while the $T_{1}$ times are within typical
fluctuations. From the single drive Stark shifts of the qubits, we estimate
the amplitude of the CW tones driving Q0/1/2/3/4/5/6 for $ZZ$ cancellation to
be 17.6/16.8/20.4/19.5/21.1/13.0/21.3 MHz respectively.
Figure S6: Device B sub-system metrics Qubit frequencies, single and two-qubit
gate times and their respective error rates, and the strength of the pair-wise
static $ZZ$ interaction $\tilde{\nu}_{ZZ}$ for (a) the native device without
siZZle tones (b) with siZZle tones at $\nu_{d}=5.1$ GHz, and pair-wise phases
tuned to $ZZ$ cancellation $\phi\sim\pi$. (c) with siZZle tones at
$\nu_{d}=5.1$ GHz, and pair-wise phases tuned to $ZZ$ amplification $\phi\sim
0$. All gate errors are estimated by randomized benchmarking. The arrows
represent the direction of the CNOT gates employed in the QV circuits
discussed in the main text, and the reported errors per gate (EPG) represent
the upper bound obtained from the error per Clifford (EPC/1.5). The CNOT gates
are composed of two cross-resonance pulses and two finite-time single qubit
pulses, and the gate times are optimized for operation in the absence of
siZZle. The single-qubit EPGs represent the errors for simultaneous single-qubit operation.
red) and $T_{2}$ (bottom, blue) times of the 7 qubits, with and without siZZle
tones. All the measurements were interleaved and taken at 30 minute intervals.
# Forecasting and predicting stochastic agent-based models of cell migration
with biologically-informed neural networks
John T. Nardini
###### Abstract
Collective migration, or the coordinated movement of many individuals, is an
important component of many biological processes, including wound healing,
tumorigenesis, and embryo development. Spatial agent-based models (ABMs) are
often used to model collective migration, but it is challenging to thoroughly
study these models’ behavior due to their random and computational nature.
Modelers often overcome these obstacles by coarse-graining discrete ABM rules
into continuous mean-field partial differential equation (PDE) models. These
models are advantageous because they are fast to simulate; unfortunately,
these PDE models can poorly predict ABM behavior (or even be ill-posed) at
certain parameter values. In this work, we describe how biologically-informed
neural networks (BINNs) can be used to learn BINN-guided PDE models that are
capable of accurately predicting ABM behavior. In particular, we show that
BINN-guided PDE simulations can forecast future ABM data not seen during model
training. Additionally, we demonstrate how to predict ABM data at previously-
unexplored parameter values by combining BINN-guided PDE simulations with
multivariate interpolation. We highlight these results using three separate
ABMs that consist of rules on agent pulling and/or adhesion. Surprisingly,
BINN-guided PDEs can accurately forecast and predict ABM data with a one-
compartment PDE when the mean-field PDE is ill-posed or requires two
compartments. While we focus our presentation on the biological applications,
this work is broadly applicable to studying many systems that exhibit the
collective migration of individuals.
## 1 Introduction
Cellular migration is an important biological phenomenon involved in embryo
development, wound repair, and tumorigenesis [1, 2]. While the mechanics
driving _individual_ cell migration are now well understood, we have not yet
how many cells coordinate collective _population-level_ migration. In some
contexts, collective migration results from external cell stimuli, such as
empty space, cell-cell interactions, chemical signals & gradients, etc.
Additionally, many diseases and their respective complications develop when
cells abuse these cues [3]. For example, metastatic cancers are associated
with a loss of intercellular connections within epithelial tissues [4], and a
common complication of diabetes is nonhealing wounds where high inflammatory
markers and low paracrine signaling levels prevent cells from entering and
repairing a wound [5]. There is thus a current need to establish the impacts
of each of these stimuli on collective migration, which will provide insight
into tissue homeostasis, disease progression, and effective drug therapy
development.
Mathematical modeling is a useful tool to infer how physical and chemical cues
drive collective cell migration [3, 6, 7, 8]. In particular, stochastic agent-
based models (ABMs) are a widely-used modeling framework where autonomous
agents mimic individual cells [9, 10]. ABMs are advantageous because they
capture the discrete and stochastic nature of many biological processes [7].
These models are especially influential for cell biology, where modelers can
easily implement rules governing the impacts of different stimuli on agent
actions, such as the effects of cell-cell interactions on agent migration [7,
8, 11, 12]. One can further introduce population heterogeneity into an ABM by
incorporating several agent types that perform different rules. Despite the
many benefits of ABMs, there are limitations on their use: in particular,
their model simulations are computationally intensive and time-consuming to
perform [13, 10]. These restraints prevent modelers from efficiently exploring
how model parameters alter model outputs, which, in turn, makes it challenging to thoroughly understand the effect of each stimulus on collective migration.
As such, there is a need for the development of methods to address ABMs’
computational expenses by 1. forecasting future model output from limited
simulations, and 2. predicting ABM data at previously-unexplored parameter
values [13, 14, 15].
One of the most commonly-used approaches for ABM forecasting and prediction
includes coarse-graining ABM rules into a continuous differential equation
(DE) model [9, 10]. ABMs are converted into ordinary DEs (ODEs) when tracking
a time-varying output (e.g., agent density) that is spatially homogeneous [13].
Partial DEs (PDEs) are suitable for describing spatially-varying ABMs [10].
Mean-field PDEs that are coarse-grained from rules on the effects of cell-cell
interaction on cell migration often take the form of parabolic PDEs with
nonlinear density-dependent diffusion rates [3, 6, 7, 12]. These equations can
be written as
$\dfrac{\partial T}{\partial t}=\nabla\cdot\big(\mathcal{D}(T)\nabla T\big),$ (1)
where $T=T(x,t)$ denotes the total spatiotemporal agent density and
$\mathcal{D}(T)$ is the density-dependent agent diffusion rate. Such DE models
are useful surrogates for ABMs because they are cheap and efficient to
simulate and are amenable to analytical methods, which modelers can use to
precisely infer how model parameters impact their outputs. However, these
models are unable to provide insight into individual-level behavior and can lead to poor (or even ill-posed) ABM predictions for many parameter values [9, 12]. For example, Baker and Simpson 2010 [9] demonstrated that the mean-field ODE model for a population growth ABM only accurately predicts ABM data when agents proliferate slowly. Chapelle and Yates 2019 [7] coarse-
grained rules on the effects of cell pulling on agent migration into multiple
PDE models; while these models accurately predict ABM data, the authors found
their accuracy decreases with more complex model rules. Anguige and Schmeiser
2009 [6] used a stochastic space-jump model to study how cell adhesion impacts
collective migration and found that the resulting mean-field PDE model is ill-
posed for large adhesion values. Modelers may improve DE models’ predictive
capability by implementing pair-wise interactions or moment closure
approximations, but the resulting models are often complicated and may
significantly increase computation time [9, 16].
Equation learning (EQL) is a new area of research on the development and
application of algorithms to discover the dynamical systems model that best
describes a dataset [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Brunton et
al. 2016 [17] introduced what is now a widely-used least squares EQL approach
that uses sparse regression to learn a simple DE model from a user-specified
library of several candidate terms. This method has proven very successful in
recovering informative models from simulated ODE and PDE data as well as
experimental data [28]. There is a growing understanding that EQL methods can
aid the prediction of ABM data [13, 29, 30, 31]. For example, we recently
demonstrated that the least squares EQL approach learns ODE equations that
accurately describe simulated ABM data, even when the collected data is
incomplete or sparsely sampled [13]. Supekar et al. 2023 [31] coupled this method with spectral basis representations of the data to discover PDE models that
capture the emergent behavior found in active matter ABMs. Another popular EQL approach is physics-informed neural networks (PINNs), where modelers
embed physical knowledge into the training procedure for artificial neural
networks (ANNs) [32, 33, 34, 35, 36]. Trained PINN models can predict complex,
sparse, and noisy data while also obeying known physical principles. Lagergren
et al. 2020 [26] extended the PINNs framework by replacing physics-based
mechanistic terms with function-approximating multi-layer perceptrons (MLPs)
to develop the biologically-informed neural network (BINN) methodology. As a
result, BINN models can learn PDE models with nonlinear and density-dependent
modeling terms, such as the forms of Equation (1) that result from coarse-
graining various agent rules on the impacts of cell interactions on collective
migration. BINNs thus present a promising tool for ABM forecasting and
prediction, but determining how they can be used to learn predictive DE models
remains an open area of research.
In this work, we demonstrate how to combine BINNs and PDE model simulations to
forecast and predict ABM behavior. Our approach combines BINNs' flexible data and model approximation capabilities with the computational efficiency of PDE models to develop a potent ABM surrogate modeling tool. In particular, we
demonstrate how to accurately forecast future ABM data at a fixed parameter
value by training a BINN model to precomputed data and then simulating the
BINN’s learned PDE model. We extend this approach to predict ABM behavior at
new parameter values by training BINNs to data from multiple parameter values
and performing multivariate interpolation over the learned modeling terms.
This prediction is performed by incorporating the interpolated modeling terms
into a simulated PDE model. Interpolation provides a straightforward approach
for learning PDE modeling terms that works well, though more complex
methodologies, such as ANNs or Gaussian processes, could be used [37].
The frequent use of ABMs to study economics, social sciences, engineering, and
many other areas has led to previous work on predictive surrogate ABM modeling
[38, 39, 40, 41]. This research is closely related to the design of computer experiments, where modelers implement computationally efficient
statistical methods to emulate high-fidelity computer simulations [42, 43]. In a typical study, modelers calibrate the chosen surrogate model to several high-fidelity computer simulations, and the calibrated surrogate model is then used to perform a certain task, such as identifying sensitive ABM parameters, predicting new dynamics from the high-fidelity simulation, or estimating its parameters from data. Modelers must choose a surrogate model to use: Gaussian
processes are the most popular thanks to the influential work of [37]. The
Bayesian approximation error method is another widely-used technique [44, 45,
46], and ANNs have gained traction in recent years [15, 47]. While these
‘black-box’ model choices have proven successful in practice, they typically
ignore domain expertise on the high-fidelity simulation, which limits the
interpretability of their analyses. In this work, we implement a ‘gray-box’
approach for ABM prediction by training predictive BINN models to discover
computationally efficient surrogate PDE models [48]. Visual inspection of the
PDE modeling terms enables us to interpret how model parameters impact ABM
behavior at the population level. Our work is similar to [10], who built a
statistical model to infer the discrepancy between ABM simulations and their
coarse-grained ODE approximations; parameters with high discrepancies indicate
the assumptions underlying model coarse-graining are invalid. In this previous
study, incorporating the discrepancy model into the data’s statistical model
allowed for accurate ABM parameter estimation.
We illustrate the performance of BINNs in guiding ABM forecasting and
prediction using three separate ABMs consisting of different rules on the
impacts of cell interactions on agent migration. The first model implements
rules on cell pulling and is borrowed from [7]. The second model is a discrete
version of the space-jump model from [6] on cell adhesion. We introduce the
third model, which consists of two separate subpopulations, each of which
performs either cell pulling or adhesion rules. These models highlight how agent interactions impact collective migration as well as some limitations of
mean-field PDE models. Namely, the mean-field PDE model for the Adhesion model
is ill-posed for large cell adhesion values, and the mean-field model for the
Pulling & Adhesion model consists of two separate compartments for each
subpopulation; to the best of our knowledge, it is not possible to convert
this two-compartment model into a single-compartment PDE describing the total
population. Our BINN-guided approach learns single-compartment PDE models
capable of forecasting and predicting data from all three ABMs over all
parameter values considered. We begin this work in Section 2 by presenting the
three ABMs as well as their mean-field PDE models. In Section 3, we discuss
our data analysis methods on BINNs training, multivariate interpolation, and
PDE simulation. In Section 4, we detail our results on implementing these
methods for ABM forecasting and prediction and finish this section with a
brief discussion on the computational expenses of each data analysis method.
We summarize these results and suggest areas for future work in Section 5.
## 2 Coarse-graining collectively migrating ABMs into PDE models
We present three ABMs that model how various cell-cell interactions, namely
cell pulling and adhesion, impact collective cell migration. All models are
two-dimensional cellular automata models with pulling agents that perform cell
pulling rules and/or adhesive agents that perform rules on cell adhesion. The
first model is borrowed from [7] and consists only of pulling agents; the
second model is inspired by the stochastic space jump model from [6] and
consists only of adhesive agents; to the best of our knowledge, we are the
first to study the third model, which consists of both pulling and adhesive
agents.
We detail our notation and ABM implementation in Section 2.1 and then present
the rules for each ABM and their mean-field PDE models in Section 2.2.
### 2.1 Implementation and notation details
Each model is simulated in the spatial domain $(x,y)\in[0,X]\times[0,Y]$. We
choose $X=200\text{ and }Y=40$ to represent a thin rectangle where collective
migration primarily occurs along the $x$-dimension and is not affected by the
boundary in this dimension. We represent this space with a two-dimensional lattice of square sites with side length $\Delta=1$ to imitate a typical
cell length. The $(i,j)^{\text{th}}$ lattice site is centered at
$(x_{i},y_{j})$, where $x_{i}=(i-0.5)\Delta,\ i=1,\dots,X$, and
$y_{j}=(j-0.5)\Delta,\ j=1,\dots,Y.$ Each model is an exclusion process,
meaning that each agent can only occupy one lattice site at a time, and each
lattice site is occupied by at most one agent. The variables $P_{i,j}(t)$,
$H_{i,j}(t)$, and $0_{i,j}(t)$ denote the probabilities that lattice site $(i,j)$ is occupied by a pulling agent, is occupied by an adhesive agent, or is empty at time $t$, respectively.
All model simulations are initialized by populating 75% of the lattice sites in the middle 20% of columns, i.e., 75% of the lattice sites in $\\{(x_{i},y_{j})\\}_{j=1}^{Y}$ are initially occupied for $i=80,\dots,120.$ All other columns are initially empty. Reflecting boundary conditions are used at the boundaries of the lattice to enforce a no-flux condition in the spatial
domain. Let $N^{(r)}_{P}(x_{i},t)$ and $N^{(r)}_{H}(x_{i},t)$ denote the
number of pulling and adhesive agents, respectively, in the $i^{\text{th}}$
column for $i=1,\dots,X$ from the $r^{\text{th}}$ of $R$ identically prepared
ABM simulations. To estimate the pulling and adhesive agent densities in the
$i^{\text{th}}$ column from the $r^{\text{th}}$ simulation, we compute
$P^{(r)}(x_{i},t)=\dfrac{N^{(r)}_{P}(x_{i},t)}{Y}\text{ and
}H^{(r)}(x_{i},t)=\dfrac{N^{(r)}_{H}(x_{i},t)}{Y},\ \ i=1,\dots,X,$
respectively. The total agent density is then estimated by
$T^{(r)}(x_{i},t)=P^{(r)}(x_{i},t)+H^{(r)}(x_{i},t).$
To estimate the averaged pulling, adhesive, and total agent density in the
$i^{\text{th}}$ column from $R$ identically prepared ABM simulations, we
compute:
$\langle P^{ABM}(x_{i},t)\rangle=\dfrac{1}{R}\sum_{r=1}^{R}P^{(r)}(x_{i},t);\ \ \langle H^{ABM}(x_{i},t)\rangle=\dfrac{1}{R}\sum_{r=1}^{R}H^{(r)}(x_{i},t);\ \text{ and }\ \langle T^{ABM}(x_{i},t)\rangle=\dfrac{1}{R}\sum_{r=1}^{R}T^{(r)}(x_{i},t),\ \ \text{ for }i=1,\dots,X.$
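For concreteness, these estimators amount to a few lines of NumPy. The sketch below assumes each snapshot is stored as an $X\times Y$ integer lattice with illustrative occupancy codes (0 empty, 1 pulling agent, 2 adhesive agent); this convention is ours, not the repository's.

```python
import numpy as np

def column_densities(lattice, Y=40):
    """Estimate P, H, and T along x by averaging over each lattice column.
    `lattice` is an X-by-Y integer array; the occupancy codes 0 (empty),
    1 (pulling agent), and 2 (adhesive agent) are illustrative assumptions."""
    P = (lattice == 1).sum(axis=1) / Y  # pulling-agent density per column
    H = (lattice == 2).sum(axis=1) / Y  # adhesive-agent density per column
    return P, H, P + H                  # total density T = P + H

# Average the total density over R identically prepared simulations.
R, X, Y = 25, 200, 40
rng = np.random.default_rng(0)
snapshots = [rng.integers(0, 3, size=(X, Y)) for _ in range(R)]  # stand-in ABM output
T_avg = np.mean([column_densities(s, Y)[2] for s in snapshots], axis=0)
```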
Figure 1: ABM rules on migration, pulling, and adhesion. a) When an agent
performs a migration event, it chooses one of the four cardinal directions to
move towards with equal probability. b) A migration event requires the lattice
site in the chosen migration direction to be empty; otherwise, the migration
event is aborted. c) Rules A-F govern agent migration, pulling,
and adhesion. Rule A prescribes how a pulling agent (blue circle) migrates
when there is no neighboring agent. Rule B prescribes how a pulling agent
migrates and attempts to pull a neighboring pulling agent with it. Rule C
prescribes how an adhesive agent (red hexagon) migrates when there is no
neighboring agent. Rule D prescribes how a neighboring adhesive agent attempts
to adhere to a migrating adhesive agent and abort its migration event. Rule E
prescribes how a migrating pulling agent attempts to pull its neighboring
adhesive agent, while the adhesive agent attempts to adhere to the pulling
agent. Rule F prescribes how a migrating adhesive agent and neighboring
pulling agent do not interact with each other.
### 2.2 ABM Rules and their mean-field PDE models
The rules governing agent pulling and adhesion from all models are visually
depicted in Figure 1, and the parameters for each rule are described in Table
1. During all rules, an agent chooses one of its four neighboring lattice sites to move into with equal probability. The migration event is aborted if the
chosen site is already occupied. Rules A, B, and E are initiated when a
pulling agent attempts to migrate. This occurs with rate $r_{m}^{pull}$,
meaning that pulling agents attempt to perform one of these rules over an
infinitesimal time interval of length $dt$ with probability $r_{m}^{pull}dt$.
Rules C, D, and F are initiated when an adhesive agent attempts to migrate,
which occurs with rate $r_{m}^{adh}$. The effective rates in Figure 1 document
the rate at which each lattice site configuration at time $t$ changes to the
updated lattice site configuration at time $t+\Delta t$. We simulate each ABM
using the Gillespie algorithm, which we provide for the Pulling & Adhesion ABM
in Algorithm S1 in Appendix D.
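As a rough illustration of the simulation loop, the sketch below performs a single Gillespie iteration covering only the unconditioned migration moves; the pulling and adhesion interactions (Rules B, D, E, and F) are omitted, the occupancy codes are hypothetical, and Algorithm S1 remains the authoritative description.

```python
import numpy as np

def gillespie_step(lattice, t, r_pull, r_adh, rng):
    """One simplified Gillespie iteration: draw an exponential waiting time,
    pick an agent weighted by its migration rate, and attempt a move in a
    random cardinal direction, aborting if the target site is unavailable."""
    pulls = np.argwhere(lattice == 1)  # pulling agents (code 1)
    adhs = np.argwhere(lattice == 2)   # adhesive agents (code 2)
    total_rate = r_pull * len(pulls) + r_adh * len(adhs)
    if total_rate == 0:
        return t, lattice
    t += rng.exponential(1.0 / total_rate)          # exponential waiting time
    if rng.random() < r_pull * len(pulls) / total_rate:
        i, j = pulls[rng.integers(len(pulls))]      # migrating pulling agent
    else:
        i, j = adhs[rng.integers(len(adhs))]        # migrating adhesive agent
    di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    ni, nj = i + di, j + dj
    # abort moves off the lattice (reflecting boundary) or into occupied sites
    if 0 <= ni < lattice.shape[0] and 0 <= nj < lattice.shape[1] and lattice[ni, nj] == 0:
        lattice[ni, nj], lattice[i, j] = lattice[i, j], 0
    return t, lattice
```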
Parameter | Description | Range
---|---|---
$r_{m}^{pull}$ | Pulling agent migration rate | $[0,\infty)$
$r_{m}^{adh}$ | Adhesive agent migration rate | $[0,\infty)$
$p_{pull}$ | Probability of successful pulling event | $[0,1]$
$p_{adh}$ | Probability of successful adhesion event | $[0,1]$
$\alpha$ | Proportion of adhesive agents | $[0,1]$
Table 1: Description of the ABM parameters involved in each ABM. The
proportion of pulling agents in each simulation is given by $1-\alpha$.
#### 2.2.1 The Pulling Model
The Pulling model consists of pulling agents that migrate with rate
$r_{m}^{pull}$ and perform rules A and B from Figure 1. Suppose a pulling
agent at lattice site $(i,j)$ chooses to move rightwards into site $(i+1,j)$.
If the lattice site $(i-1,j)$ is unoccupied, then the agent performs Rule A
and moves into site $(i+1,j)$. If the lattice site $(i-1,j)$ is occupied, then
the agent attempts Rule B on agent pulling. This event succeeds with probability $p_{pull}$, in which case the agent moves to site $(i+1,j)$ and pulls its neighbor into lattice site $(i,j)$. This event fails with probability $1-p_{pull}$, in which case the agent moves into site $(i+1,j)$ but the neighbor remains at lattice site $(i-1,j)$. These rules can be described by the following trimolecular reaction rates:
$0_{i-1,j}+P_{i,j}+0_{i+1,j}\xrightarrow{r_{m}^{pull}/4}0_{i-1,j}+0_{i,j}+P_{i+1,j},$ (Rule A)
$P_{i-1,j}+P_{i,j}+0_{i+1,j}\xrightarrow{p_{pull}r_{m}^{pull}/4}0_{i-1,j}+P_{i,j}+P_{i+1,j},$ (Rule B.1)
$P_{i-1,j}+P_{i,j}+0_{i+1,j}\xrightarrow{(1-p_{pull})r_{m}^{pull}/4}P_{i-1,j}+0_{i,j}+P_{i+1,j}.$ (Rule B.2)
Equivalent reactions govern agent migration and pulling in the other three
directions.
In Appendix A.1, we show that Rules A and B can be coarse-grained into the Pulling ABM's mean-field PDE model:
$\dfrac{\partial P}{\partial t}=\nabla\cdot\left(\mathcal{D}^{pull}(P)\nabla P\right),\ \ \ \mathcal{D}^{pull}(P)=\dfrac{r_{m}^{pull}}{4}\left(1+3p_{pull}P^{2}\right),$ (2)
where $P=P(x,y,t)$ denotes the spatiotemporal pulling agent density. In Figure
2(a-f), we find that a simulation of Equation (2) closely matches
$P^{(1)}(x,t)$ over time for
$\bm{p}=(r_{m}^{pull},p_{pull})^{T}=(1.0,0.5)^{T}.$
Figure 2: ABM simulation snapshots and the mean-field PDE models for the Pulling, Adhesion, and Pulling & Adhesion ABMs. Blue pixels denote pulling
agents and red pixels denote adhesive agents. All ABMs were simulated on
rectangular 200$\times$40 lattices. (a-c) Snapshots of the Pulling ABM for
$r_{m}^{pull}=1.0,p_{pull}=0.5$. (d-f) The output spatiotemporal pulling agent
density (blue ‘x’ marks) is plotted against the solution of the mean-field PDE
(solid blue line) given by Equation (2). (g-i) Snapshots of the Adhesion ABM
for $r_{m}^{adh}=1.0,p_{adh}=0.5$. (j-l) The output spatiotemporal adhesive
agent density (red dots) is plotted against the solution of the mean-field PDE
(dashed red line) given by Equation (3). (m-o) Snapshots of the Pulling &
Adhesion ABM for
$r_{m}^{pull}=1.0,r_{m}^{adh}=0.25,p_{pull}=0.33,p_{adh}=0.33,\alpha=0.5$.
(p-r) The output spatiotemporal pulling and adhesive agent densities are
plotted against the solution of the mean-field PDE given by Supplementary
Equation (28).
#### 2.2.2 The Adhesion Model
The Adhesion model consists of adhesive agents that migrate with rate
$r_{m}^{adh}$ and perform rules C and D from Figure 1. Suppose an adhesive
agent at lattice site $(i,j)$ chooses to move rightwards into site $(i+1,j)$.
If the lattice site $(i-1,j)$ is unoccupied, then the agent performs Rule C
and moves into site $(i+1,j)$. If the lattice site $(i-1,j)$ is occupied, then
the neighboring agent attempts Rule D to adhere to the migrating agent and
abort its movement. This event succeeds with probability $p_{adh}$, and neither agent changes its location. The adhesion event fails with probability $1-p_{adh}$, in which case the migratory agent moves to site $(i+1,j)$ and the neighbor remains at lattice site $(i-1,j)$. These rules can be described by the following trimolecular reaction rates:
$0_{i-1,j}+H_{i,j}+0_{i+1,j}\xrightarrow{r_{m}^{adh}/4}0_{i-1,j}+0_{i,j}+H_{i+1,j},$ (Rule C)
$H_{i-1,j}+H_{i,j}+0_{i+1,j}\xrightarrow{(1-p_{adh})r_{m}^{adh}/4}H_{i-1,j}+0_{i,j}+H_{i+1,j}.$ (Rule D)
In Appendix A.2, we show that Rules C and D can be coarse-grained into the Adhesion ABM's mean-field PDE model:
$\dfrac{\partial H}{\partial t}=\nabla\cdot\left(\mathcal{D}^{adh}(H)\nabla H\right),\ \ \ \mathcal{D}^{adh}(H)=\dfrac{3r_{m}^{adh}}{4}\left(p_{adh}\left(H-\dfrac{2}{3}\right)^{2}+1-\dfrac{4p_{adh}}{3}\right),$ (3)
where $H=H(x,y,t)$ denotes the spatiotemporal adhesive agent density. In
Figure 2(g-l), we find that a simulation of Equation (3) closely matches
$H^{(1)}(x,t)$ over time for $\bm{p}=(r_{m}^{adh},p_{adh})^{T}=(1.0,0.5)^{T}.$
It is important to note that $\mathcal{D}^{adh}(H)$ from Equation (3) becomes
negative for some density values when $p_{adh}>0.75$. This PDE fails to
provide an ABM prediction at these parameter values because negative diffusion
is ill-posed [6].
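The ill-posed regime can be checked directly from Equation (3): the minimum of $\mathcal{D}^{adh}$ over $H\in[0,1]$ is attained at $H=2/3$, where it equals $(3r_{m}^{adh}/4)(1-4p_{adh}/3)$, which is negative exactly when $p_{adh}>0.75$. A short numerical confirmation:

```python
import numpy as np

def D_adh(H, r_adh=1.0, p_adh=0.5):
    """Mean-field diffusion rate from Equation (3)."""
    return 0.75 * r_adh * (p_adh * (H - 2.0 / 3.0) ** 2 + 1.0 - 4.0 * p_adh / 3.0)

H = np.linspace(0.0, 1.0, 101)
for p in (0.5, 0.75, 0.9):
    print(p, D_adh(H, p_adh=p).min())  # the minimum is negative only for p = 0.9
```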
#### 2.2.3 The Pulling & Adhesion Model
The Pulling & Adhesion model consists of both pulling and adhesive agents. The parameter $\alpha\in(0,1)$ denotes the proportion of adhesive agents in the simulation, and $(1-\alpha)$ denotes the proportion of pulling agents. This model implements Rules A-F from Figure 1. Rules A-D are
unchanged from their descriptions in Sections 2.2.1 and 2.2.2. If a pulling
agent at lattice site $(i,j)$ chooses to move rightwards into site $(i+1,j)$
while an adhesive agent occupies site $(i-1,j)$, then Rule E dictates the
agents’ attempts to pull and adhere to each other. The migratory pulling agent
succeeds with probability $p_{pull}$ and moves to site $(i+1,j)$ while pulling
the neighboring adhesive agent into site $(i,j)$; the neighboring adhesive
agent successfully aborts the pulling agent’s migration event with probability
$p_{adh}$; both agents fail with probability $1-p_{adh}-p_{pull}$ and the
pulling agent moves to site $(i+1,j)$ while the adhesive agent remains at site
$(i-1,j)$. Based on our definition of this rule, it is not possible that both
the pulling and adhesion events succeed, so the parameters must satisfy $0\leq
p_{pull}+p_{adh}\leq 1$. Rule E can be described by the following trimolecular
reaction rate:
$H_{i-1,j}+P_{i,j}+0_{i+1,j}\xrightarrow{p_{pull}r_{m}^{pull}/4}0_{i-1,j}+H_{i,j}+P_{i+1,j},$ (Rule E.1)
$H_{i-1,j}+P_{i,j}+0_{i+1,j}\xrightarrow{(1-p_{adh}-p_{pull})r_{m}^{pull}/4}H_{i-1,j}+0_{i,j}+P_{i+1,j}.$ (Rule E.2)
If an adhesive agent at lattice site $(i,j)$ chooses to move rightwards into
site $(i+1,j)$ while a pulling agent occupies site $(i-1,j)$, then Rule F
dictates that the adhesive agent moves into site $(i+1,j)$ and the pulling
agent remains at site $(i-1,j)$. Rule F can be described by the following
trimolecular reaction rate:
$P_{i-1,j}+H_{i,j}+0_{i+1,j}\xrightarrow{r_{m}^{adh}/4}P_{i-1,j}+0_{i,j}+H_{i+1,j}.$ (Rule F)
In Appendix C, we show that Rules A-F can be coarse-grained into the Pulling &
Adhesion ABM’s mean-field PDE model given by Supplementary Equation (28). This
two-compartment PDE describes the spatiotemporal densities of pulling agents,
$P=P(x,y,t)$, and adhesive agents, $H=H(x,y,t)$. In Figure 2(m-r), we find that the $P$ and $H$ compartments from a simulation of Supplementary Equation (28) closely match $P^{(1)}(x,t)$ and $H^{(1)}(x,t)$, respectively, over time for
$\bm{p}=(r_{m}^{pull},r_{m}^{adh},p_{pull},p_{adh},\alpha)^{T}=(1.0,0.25,0.33,0.33,0.5)^{T}.$
To the best of our knowledge, it is not possible to convert Rules A-F into a
single-compartment PDE model describing the _total_ agent density,
$T=T(x,y,t)=H(x,y,t)+P(x,y,t)$.
## 3 Data analysis methods
We simulate all three models (the Pulling, Adhesion, and Pulling & Adhesion
ABMs) over a range of agent migration, pulling, and adhesion parameter values.
We represent the model parameters by the vector $\bm{p}$. Each model
simulation outputs 100 snapshots of agent configurations over time; from each
simulation, we generate the one-dimensional agent density along the
$x$-dimension over time. We average these densities over $R=25$ simulations to
obtain the final output ABM density, $\langle T^{ABM}(x,t;\bm{p})\rangle$. We
use the data from the first 75 timepoints as training data and the final 25
timepoints as testing data. BINN models consist of a data-approximating MLP,
$T^{MLP}(x,t)$, and a diffusion-rate-approximating MLP,
$\mathcal{D}^{MLP}(T)$. We train $T^{MLP}$ to closely approximate the ABM
training data while $T^{MLP}$ and $\mathcal{D}^{MLP}$ satisfy Equation (1).
After BINN training, the inferred $\mathcal{D}^{MLP}(T)$ function is used to
forecast and predict ABM data. To forecast ABM training and testing data, we
simulate the diffusion PDE framework using the inferred $\mathcal{D}^{MLP}(T)$
function. To predict ABM data at a new parameter value, $\bm{p}^{new}$, we
perform interpolation over several previously-inferred diffusion rate MLPs,
$\mathcal{D}^{MLP}(T;\bm{p}_{i})$ for $i=1,\dots,K_{1}$, and then simulate the
diffusion PDE framework using the resulting interpolant,
$\mathcal{D}^{interp}(T;\bm{p}^{new})$. Figure 3 gives a visual depiction of
our data analysis pipeline. The Python files and notebook used for all steps of our analysis are available at https://github.com/johnnardini/Forecasting_predicting_ABMs.
Figure 3: Data analysis pipeline. 1. Simulating ABM data For a given
parameter, $\bm{p}$, we simulate the Pulling, Adhesion, or Pulling & Adhesion
ABM. Each model outputs snapshots of pulling and adhesive agent locations over
time; we summarize this data by estimating the average total agent density
along the $x$-direction for each snapshot (not shown). We perform 25 total ABM
simulations for each $\bm{p}$ and average the total spatiotemporal agent
density to obtain $\langle T^{ABM}(x,t;\bm{p})\rangle$. The first 75 timepoints are placed into a training ABM dataset, and the final 25 timepoints are placed into a testing ABM dataset. 2. Training biologically-
informed neural networks (BINNs) to ABM data. Each BINN model consists of a
data-approximating MLP, $T^{MLP}(x,t)$, and a diffusion-rate-approximating
MLP, $\mathcal{D}^{MLP}(T)$. BINN models are trained so that
$T^{MLP}(x,t)\approx\langle T^{ABM}(x,t)\rangle^{train}$ while $T^{MLP}$ and
$\mathcal{D}^{MLP}$ satisfy Equation (1). After model training, the inferred
$\mathcal{D}^{MLP}(T)$ estimates the agent diffusion rate. 3a. Forecasting ABM
data. Simulating the diffusion PDE framework with $\mathcal{D}^{MLP}$ allows
us to forecast the ABM training and testing data. 3b. Predicting new ABM data.
We predict the rate of agent diffusion at a new parameter, $\bm{p}^{new}$, by
interpolating $\mathcal{D}^{MLP}(T;\bm{p})$ over several $\bm{p}$ values to
create $\mathcal{D}^{interp}$. Simulating the diffusion PDE framework with
$\mathcal{D}^{interp}$ allows us to predict the new ABM training and testing
data.
### 3.1 Simulating ABM data
We generate ABM data by simulating each model over many parameter values,
$\bm{p}$ (Part 1 from Figure 3). For each $\bm{p}$, we simulate $R=25$
identically prepared realizations of the ABM; each realization is completed
when time reaches $t=1000$. We estimate the total spatiotemporal agent density
from each simulation and average over all $R$ simulations to obtain $\langle
T^{ABM}(x,t;\bm{p})\rangle$. We interpolate the output data over time to the
equispaced time grid $t_{j}=(j-1)\Delta t$ with $\Delta t=10$ for $j=1,\dots,100$ to obtain $\langle T^{ABM}(x,t)\rangle=\\{\langle
T^{ABM}(x_{i},t_{j})\rangle\\}_{i=1,\dots,X}^{j=1,\dots,100}$. We split
$\langle T^{ABM}(x,t)\rangle$ into its training and testing datasets by
setting $\langle T^{ABM}(x,t)\rangle^{train}=\\{\langle
T^{ABM}(x_{i},t_{j})\rangle\\}_{i=1,\dots,X}^{j=1,\dots,T_{f}^{train}}$ and
$\langle T^{ABM}(x,t)\rangle^{test}=\\{\langle
T^{ABM}(x_{i},t_{j})\rangle\\}_{i=1,\dots,X}^{j=T_{f}^{train}+1,\dots,T_{f}^{test}}$.
We set $T_{f}^{train}=75$ and $T_{f}^{test}=100$ to place 75% of data into the
training dataset.
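A minimal sketch of this split, assuming the averaged density is stored as a (space, time) array:

```python
import numpy as np

T_abm = np.random.rand(200, 100)       # stand-in for <T^ABM(x, t)>, shape (X, 100)
T_f_train, T_f_test = 75, 100
T_train = T_abm[:, :T_f_train]         # first 75 timepoints (training data)
T_test = T_abm[:, T_f_train:T_f_test]  # final 25 timepoints (testing data)
```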
### 3.2 Training Biologically-informed Neural Networks (BINNs) to ABM data
We construct BINN models that consist of two sequential multilayer perceptron
(MLP) models: $T^{MLP}(x,t)$ predicts the total agent density at the point
$(x,t)$, and $\mathcal{D}^{MLP}(T)$ predicts the agent diffusion rate at the
density value $T$ (Part 2 of Figure 3). We train these two MLPs so that
$T^{MLP}(x,t)\approx\langle T^{ABM}(x,t)\rangle^{train}$ while the two MLPs
also satisfy Equation (1) in one spatial dimension:
$\dfrac{\partial}{\partial t}T^{MLP}=\dfrac{\partial}{\partial
x}\left(\mathcal{D}^{MLP}(T^{MLP})\dfrac{\partial}{\partial x}T^{MLP}\right).$
(4)
We chose this nonlinear diffusion PDE framework for BINN model training
because both mean-field models for the Pulling and Adhesion ABMs from Section
2 obey this framework with different diffusion rates.
We further detail the BINNs model architecture in Section 3.2.1, the loss
functions in Section 3.2.2, and our training procedure in Section 3.2.3.
#### 3.2.1 BINNs architecture
Following [26], we construct $T^{MLP}(x,t)$ using a fully-connected feed-
forward MLP with three hidden layers, which can be written as:
$z_{0}=[x,t],\quad z_{1}=\sigma\left(z_{0}W_{1}+b_{1}\right),\quad z_{2}=\sigma\left(z_{1}W_{2}+b_{2}\right),\quad z_{3}=\sigma\left(z_{2}W_{3}+b_{3}\right),\quad T^{MLP}(x,t)=\psi\left(z_{3}W_{4}+b_{4}\right),$ (5)
where each $z_{k}$ denotes the $k^{\text{th}}$ hidden layer; the $W_{k}$
matrices and the $b_{k}$ vectors provide the weights and biases of each hidden
layer, respectively; $\sigma$ denotes the sigmoid activation function
$\sigma(x)=1/(1+\exp{(-x)})$, and $\psi$ denotes the softplus activation
function $\psi(x)=\log(1+\exp(x))$. Each hidden layer in Equation (5) has 128
neurons, meaning that $W_{1}\in\mathbb{R}^{2\times
128};W_{2},W_{3}\in\mathbb{R}^{128\times 128};W_{4}\in\mathbb{R}^{128\times
1};b_{1},b_{2},b_{3}\in\mathbb{R}^{128};\text{ and }b_{4}\in\mathbb{R}$.
The architecture of $\mathcal{D}^{MLP}(T)$ is identical to the architecture
for $T^{MLP}$ in Equation (5), except $\mathcal{D}^{MLP}$ has a one-
dimensional input vector, $T$, instead of the two-dimensional input vector,
$[x,t]$.
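This architecture translates directly into PyTorch. The sketch below is our reading of Equation (5) and the surrounding description; the repository linked above remains the authoritative implementation.

```python
import torch.nn as nn

class MLP(nn.Module):
    """Feed-forward MLP with three sigmoid hidden layers of 128 neurons
    and a softplus output, matching Equation (5)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.Sigmoid(),
            nn.Linear(128, 128), nn.Sigmoid(),
            nn.Linear(128, 128), nn.Sigmoid(),
            nn.Linear(128, 1), nn.Softplus(),
        )

    def forward(self, z):
        return self.net(z)

T_mlp = MLP(in_dim=2)  # data-approximating MLP with input [x, t]
D_mlp = MLP(in_dim=1)  # diffusion-rate-approximating MLP with input T
```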
#### 3.2.2 Loss Function
BINNs are trained to concurrently fit the given dataset, $\langle
T^{ABM}(x,t)\rangle^{train}$, and solve Equation (4) by minimizing the
following multi-term loss function:
$\mathcal{L}_{total}=\mathcal{L}_{WLS}+\mathcal{L}_{PDE}+\mathcal{L}_{constr}.$
(6)
The $\mathcal{L}_{WLS}$ term of Equation (6) computes a weighted mean-squared
error between $T^{MLP}(x,t)$ and $\langle T^{ABM}(x,t)\rangle^{train}$:
$\mathcal{L}_{WLS}=\dfrac{1}{T_{f}^{train}X}\sum_{i=1,j=1}^{X,T_{f}^{train}}w_{i,j}\big(T^{MLP}(x_{i},t_{j})-\langle T^{ABM}(x_{i},t_{j})\rangle\big)^{2}.$ (7)
We set $w_{i,1}=10.0$ for all values of $i$ and all other $w_{i,j}$ values to
1.0 to ensure that $T^{MLP}$ closely agrees with the ABM’s initial data. By
minimizing Equation (7), we ensure $T^{MLP}(x,t)$ closely approximates
$\langle T^{ABM}(x,t)\rangle^{train}$.
The $\mathcal{L}_{PDE}$ term of Equation (6) quantifies how closely $T^{MLP}$
and $\mathcal{D}^{MLP}$ follow Equation (4). To ensure the MLPs satisfy this
PDE framework throughout the ABM’s entire spatiotemporal domain, we uniformly
sample 10,000 points, $\\{(x_{k},t_{k})\\}_{k=1}^{10,000}$, from
$[0,X]\times[0,750]$. For notational convenience, let
$\hat{T}_{k}=T^{MLP}(x_{k},t_{k})$ and
$\hat{D}_{k}=\mathcal{D}^{MLP}\big{(}T^{MLP}(x_{k},t_{k})\big{)}$. We then
compute the mean-squared error between the left- and right-hand sides of
Equation (4) at all sampled points:
$\mathcal{L}_{PDE}=\dfrac{1}{10,000}\sum_{k=1}^{10,000}\bigg[\dfrac{\partial}{\partial t}\hat{T}_{k}-\dfrac{\partial}{\partial x}\bigg(\hat{D}_{k}\dfrac{\partial}{\partial x}\hat{T}_{k}\bigg)\bigg]^{2},$ (8)
where differentiation of $T^{MLP}$ and $\mathcal{D}^{MLP}$ is performed using
automatic differentiation. Minimizing Equation (8) ensures that $T^{MLP}$ and $\mathcal{D}^{MLP}$ together approximately satisfy Equation (4).
The $\mathcal{L}_{constr}$ term of Equation (6) incorporates user knowledge
into BINNs training. We penalize $\mathcal{D}^{MLP}$ for outputting values
outside of the interval $[D_{\min},D_{\max}]$. We set $D_{\min}=0$ because
Equation (4) is ill-posed if $\mathcal{D}(u)<0$, and we set $D_{\max}=1.0$
because the mean-field rates of diffusion are below one for all ABM
simulations in this study. We compute this term by squaring any values of $\hat{D}_{k}$ that are not within $[D_{\min},D_{\max}]$ and weighting these values by $10^{10}$:
$\mathcal{L}_{constr}=\dfrac{1}{10,000}\sum_{\begin{subarray}{c}k=1\\ \hat{D}_{k}\notin[D_{\min},D_{\max}]\end{subarray}}^{10,000}10^{10}\,\hat{D}_{k}^{2}.$ (9)
This term regularizes the BINN training procedure to prevent
$\mathcal{D}^{MLP}$ from outputting unrealistic values.
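A sketch of the $\mathcal{L}_{PDE}$ and $\mathcal{L}_{constr}$ computations with PyTorch autograd, assuming the two MLPs sketched in Section 3.2.1 and 1D tensors of collocation coordinates; the pointwise derivatives below are valid because each network output depends only on its own collocation point.

```python
import torch

def pde_residual_loss(T_mlp, D_mlp, x, t):
    """L_PDE (Equation (8)): mean-squared residual of Equation (4) at the
    collocation points (x_k, t_k), with derivatives from autograd."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    T = T_mlp(torch.stack([x, t], dim=1)).squeeze(-1)
    T_t = torch.autograd.grad(T.sum(), t, create_graph=True)[0]
    T_x = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    flux = D_mlp(T.unsqueeze(-1)).squeeze(-1) * T_x  # D(T) * dT/dx
    flux_x = torch.autograd.grad(flux.sum(), x, create_graph=True)[0]
    return ((T_t - flux_x) ** 2).mean()

def constraint_loss(D_vals, D_min=0.0, D_max=1.0, weight=1e10):
    """L_constr (Equation (9)): penalize diffusion values outside [D_min, D_max]."""
    out = (D_vals < D_min) | (D_vals > D_max)
    return weight * (D_vals[out] ** 2).sum() / D_vals.numel()
```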
#### 3.2.3 BINN Training Procedure
For BINN model training, we randomly partition the training ABM dataset into 80%/20% BINN training and BINN validation datasets. We train the BINN
parameter values (i.e., the weights and biases for $T^{MLP}$ and
$\mathcal{D}^{MLP}$) to minimize a loss function, $\mathcal{L}$, using the
gradient-based ADAM optimizer with its default hyperparameter values on the
BINN training dataset. For each new set of BINN parameters, we compute
$\mathcal{L}$ on the BINN validation dataset and save the BINN parameters if
the newly computed $\mathcal{L}$ value achieves a 1% or greater relative
improvement over the previous smallest recorded value. Following [34], we
perform training in a two-step process: in the first step, we train the BINN
to match the ABM data by optimizing $\mathcal{L}=\mathcal{L}_{WLS}$ from
Equation (7); in the second step, we train the BINN on
$\mathcal{L}=\mathcal{L}_{total}$ from Equation (6). The first training step
is performed for $10^{4}$ epochs with an early stopping criterion of $10^{3}$,
meaning that training ends early if the smallest-computed $\mathcal{L}$ value
on the validation data is unchanged for $10^{3}$ epochs. The second step is
performed for $10^{6}$ epochs with an early stopping criterion of $10^{5}$.
Each epoch is computed in minibatches of size $10^{3}$. BINN model training is
performed using the PyTorch deep learning library (version 1.7.1).
Following [26], we train five separate BINNs for each ABM dataset using
different BINN training and validation datasets because the final trained
model can be sensitive to which data is included in these two datasets. We
compute the PDE forward simulation from each of the five trained models and select the BINN whose simulation achieves the smallest mean-squared error against the ABM training data as the final BINN model.
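The checkpointing and early-stopping logic can be sketched as follows; the function and argument names are illustrative rather than the repository's API.

```python
import copy
import torch

def train_binn(model, loss_fn, train_batches, val_batch, max_epochs, patience):
    """One step of the two-step procedure: ADAM with default hyperparameters,
    checkpointing on a >= 1% relative improvement of the validation loss, and
    early stopping after `patience` epochs without improvement."""
    opt = torch.optim.Adam(model.parameters())
    best_loss, best_state, stall = float("inf"), None, 0
    for epoch in range(max_epochs):
        for batch in train_batches:          # minibatches of size 10^3
            opt.zero_grad()
            loss_fn(model, batch).backward()
            opt.step()
        val_loss = loss_fn(model, val_batch).item()
        if val_loss < 0.99 * best_loss:      # 1% or greater relative improvement
            best_loss, stall = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            stall += 1
            if stall >= patience:            # early stopping criterion
                break
    return best_state
```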
### 3.3 Forecasting ABM data
We use mean-field PDEs and BINN-guided PDEs to forecast ABM data (Part 3a of
Figure 3). Most of these PDEs are one-compartment nonlinear diffusion
equations that can be written in one spatial dimension as
$\dfrac{\partial T}{\partial t}=\dfrac{\partial}{\partial x}\left(\mathcal{D}(T)\dfrac{\partial T}{\partial x}\right),$ (10)
where $T=T(x,t)$ is the total agent density and $\mathcal{D}(T)$ is a density-
dependent rate of diffusion. Recall that for the Pulling ABM, $T(x,t)=P(x,t)$;
for the Adhesion ABM, $T(x,t)=H(x,t)$; and for the Pulling & Adhesion ABM,
$T(x,t)=P(x,t)+H(x,t)$.
For the mean-field PDE models, we simulate Equation (10) with
$\mathcal{D}(P)=\mathcal{D}^{pull}(P)$ from Equation (2) for the Pulling ABM,
and $\mathcal{D}(H)=\mathcal{D}^{adh}(H)$ from Equation (3) for the Adhesion
ABM. The mean-field PDE for the Pulling & Adhesion ABM is given by the two-
compartment PDE in Supplementary Equation (28). For BINN-guided PDE models, we
train a BINN model to $\langle T^{ABM}(x,t)\rangle^{train}$ (see Section 3.2)
and then simulate Equation (10) where $\mathcal{D}(T)$ is the
$\mathcal{D}^{MLP}(T)$ that results from BINN model training. Our
implementation method to numerically integrate Equation (10) is provided in
Appendix B.
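As a stand-in for the scheme detailed in Appendix B, the following method-of-lines sketch integrates Equation (10) with a conservative central difference in space, ghost-cell (no-flux) boundaries, and a stiff SciPy time integrator; the initial condition mirrors the ABM initialization of Section 2.1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_pde(D, T0, t_eval, dx=1.0):
    """Integrate dT/dt = d/dx(D(T) dT/dx) on a uniform grid."""
    def rhs(t, T):
        Tpad = np.concatenate(([T[0]], T, [T[-1]]))   # ghost cells: no-flux
        D_face = 0.5 * (D(Tpad[:-1]) + D(Tpad[1:]))   # D(T) at cell faces
        flux = D_face * np.diff(Tpad) / dx            # D(T) dT/dx at faces
        return np.diff(flux) / dx                     # divergence of the flux
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), T0, t_eval=t_eval, method="LSODA")
    return sol.y  # shape (X, len(t_eval))

# Example: forecast with the mean-field pulling rate from Equation (2).
D_pull = lambda P: 0.25 * (1.0 + 3.0 * 0.5 * P ** 2)  # r_m^pull = 1.0, p_pull = 0.5
x = np.arange(0.5, 200.0, 1.0)
T0 = 0.75 * ((x >= 80) & (x < 120))                   # middle 20% at 75% occupancy
T = simulate_pde(D_pull, T0, t_eval=np.arange(0.0, 1000.0, 10.0))
```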
We partition each PDE simulation, $T(x,t)=\\{T(x_{i},t_{j})\\}_{i=1,\dots,X}^{j=1,\dots,100}$, into training and testing datasets to match the training and testing ABM datasets:
$T(x,t)^{train}=\\{T(x_{i},t_{j})\\}_{i=1,\dots,X}^{j=1,\dots,T_{f}^{train}},\ \ \ T(x,t)^{test}=\\{T(x_{i},t_{j})\\}_{i=1,\dots,X}^{j=T_{f}^{train}+1,\dots,T_{f}^{test}}.$
We report the training mean-squared error (MSE) from each simulation as:
$\dfrac{1}{XT_{f}^{train}}\sum_{i=1,j=1}^{X,T_{f}^{train}}\left(T(x_{i},t_{j})-\langle T^{ABM}(x_{i},t_{j})\rangle\right)^{2},$
and the testing MSE as:
$\dfrac{1}{X(T_{f}^{test}-T_{f}^{train})}\sum_{i=1,j=T_{f}^{train}+1}^{X,T_{f}^{test}}\left(T(x_{i},t_{j})-\langle T^{ABM}(x_{i},t_{j})\rangle\right)^{2}.$
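In code, both error metrics reduce to means over the corresponding (space, time) blocks; a minimal sketch with stand-in arrays:

```python
import numpy as np

def block_mse(T_pde, T_abm):
    """Mean-squared error between matching (space, time) blocks."""
    return np.mean((T_pde - T_abm) ** 2)

T_pde = np.random.rand(200, 100)  # stand-in PDE simulation on the ABM grid
T_abm = np.random.rand(200, 100)  # stand-in averaged ABM density
train_mse = block_mse(T_pde[:, :75], T_abm[:, :75])
test_mse = block_mse(T_pde[:, 75:100], T_abm[:, 75:100])
```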
### 3.4 Predicting new ABM data
We use multivariate interpolation on previously-computed BINN-guided diffusion
rates to predict density-dependent diffusion rates for new ABM data (Part 3b
of Figure 3). We define a prior parameter collection and a new parameter
collection as
$\mathcal{P}^{prior}=\\{\bm{p}_{k}\\}_{k=1}^{K_{1}}\text{ and
}\mathcal{P}^{new}=\\{\bm{p}^{new}_{k}\\}_{k=1}^{K_{2}}.$
Our pipeline for predicting new ABM data from prior ABM simulations proceeds
as follows:
1.
Generate the prior and new ABM data collections by simulating the ABM at all
parameters from the prior and new parameter collections:
$\mathcal{T}^{prior}=\bigg{\\{}\langle
T^{ABM}(x,t;\bm{p}_{k})\rangle\bigg{\\}}_{k=1}^{K_{1}}\text{ and
}\mathcal{T}^{new}=\bigg{\\{}\langle
T^{ABM}(x,t;\bm{p}^{new}_{k})\rangle\bigg{\\}}_{k=1}^{K_{2}}.$
2.
Train a BINN model to each training ABM dataset from $\mathcal{T}^{prior}$ and
extract $\mathcal{D}^{MLP}(T;\bm{p}_{k})$ from the trained BINN model.
3.
Perform multivariate interpolation on
$\\{\mathcal{D}^{MLP}(T;\bm{p}_{k})\\}_{k=1}^{K_{1}}$ to create an
interpolant, $\mathcal{D}^{interp}(T;\bm{p})$, that matches the concatenated
vector $[T,\bm{p}_{k}]$ to the diffusion rate
$\mathcal{D}^{MLP}(T;\bm{p}_{k})$ for $k=1,\dots,K_{1}$.
4.
Predict the new ABM dataset at $\bm{p}^{new}_{k}$ by simulating Equation (10)
with $\mathcal{D}=\mathcal{D}^{interp}(T;\bm{p}^{new}_{k})$ to create
$T^{interp}(x,t;\bm{p}^{new}_{k})$. Partition
$T^{interp}(x,t;\bm{p}^{new}_{k})$ into its training and testing datasets to
match the ABM data’s training and testing datasets.
5.
Compute the training and testing MSEs between
$T^{interp}(x,t;\bm{p}^{new}_{k})$ and $\langle T^{ABM}(x,t)\rangle$ to
summarize the predictive performance of $T^{interp}(x,t;\bm{p}^{new}_{k})$ for
$k=1,\dots,K_{2}$.
We implement multi-dimensional radial basis function interpolation using SciPy's RBFInterpolator routine to create $\mathcal{D}^{interp}(T;\bm{p})$.
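A sketch of the interpolation step for a scalar parameter (hypothetically $p_{adh}$ with $r_{m}^{adh}$ fixed), using random stand-ins for the trained diffusion MLP evaluations; each interpolation point is the concatenated vector $[T,\bm{p}_{k}]$ described in step 3 above.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

T_grid = np.linspace(0.0, 1.0, 51)                   # shared density grid
p_prior = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])   # prior parameter collection
D_prior = np.random.rand(len(p_prior), len(T_grid))  # stand-in D^MLP(T; p_k) values

points = np.array([[T, p] for p in p_prior for T in T_grid])  # rows [T, p_k]
D_interp = RBFInterpolator(points, D_prior.ravel())

# Predicted diffusion rate at a new parameter value, e.g. p_new = 0.75:
p_new = 0.75
D_new = D_interp(np.column_stack([T_grid, np.full_like(T_grid, p_new)]))
```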
## 4 Results
### 4.1 PDE simulations outperform neural networks in ABM forecasting
We investigate the performance of an ANN, BINN, BINN-guided PDE model, and
mean-field PDE model in forecasting ABM data. We simulated the Pulling ABM
with $\bm{p}=(r_{m}^{pull},p_{pull})^{T}=(1.0,0.5)^{T}$ to generate the ABM
data. The ANN was trained to minimize the loss function $\mathcal{L}_{WLS}$
from Equation (7) on the training ABM dataset, whereas the BINN was trained to
minimize $\mathcal{L}_{total}$ from Equation (6). Both PDE models simulate
Equation (10). For the BINN-guided PDE, we extract $\mathcal{D}^{MLP}$ from
the trained BINN model to obtain $\mathcal{D}$; for the mean-field PDE,
$\mathcal{D}$ is given by $\mathcal{D}^{pull}$ from Equation (2).
Visual inspection suggests that all four models match the ABM training data well (Figure 4(a-b); the mean-field PDE is not plotted in this figure because it is visually indistinguishable from the BINN-guided PDE), though the
computed training MSE values reveal that the mean-field and BINN-guided PDEs
outperform the neural networks in describing this data (Table 2). The BINN,
BINN-guided PDE, and mean-field PDE all accurately forecast the testing data
(Figure 4(c)), but the two PDE models achieve smaller MSE values than the BINN
model (Table 2). The ANN’s prediction for the testing data has a protrusion
that overpredicts all data for $x>125$ (Figure 4(c) inset), which causes this
model’s computed testing MSE value to be almost an order of magnitude higher
than all others.
Figure 4: Forecasting ABM data with neural networks and PDEs. ANN and BINN models were trained to fit $\langle T^{ABM}(x,t)\rangle^{train}$ from the Pulling ABM with $\bm{p}=(r_{m}^{pull},p_{pull})^{T}=(1.0,0.5)^{T}$. These two neural networks and the mean-field and BINN-guided (BG) PDE simulations were then used to forecast (a-b) $\langle T^{ABM}(x,t)\rangle^{train}$ and (c) $\langle T^{ABM}(x,t)\rangle^{test}$. The mean-field PDE simulation is not plotted because it is visually indistinguishable from the BG PDE simulation.
Model | Training MSE | Testing MSE
---|---|---
Artificial neural network | $1.17\times 10^{-4}$ | $9.36\times 10^{-4}$
Biologically-informed neural network | $9.32\times 10^{-5}$ | $1.47\times 10^{-4}$
Mean-field PDE | $7.45\times 10^{-5}$ | $1.00\times 10^{-4}$
BINN-guided PDE | $7.64\times 10^{-5}$ | $1.02\times 10^{-4}$
Table 2: Computed MSE values when forecasting $\langle T^{ABM}(x,t)\rangle^{train}$ and $\langle T^{ABM}(x,t)\rangle^{test}$ with an ANN, BINN, mean-field PDE, or BINN-guided PDE.
### 4.2 Forecasting ABM data with BINN-guided and mean-field PDE simulations
We investigate the performance of BINN-guided and mean-field PDE simulations
in forecasting the training and testing ABM datasets from the Pulling,
Adhesion, and Pulling & Adhesion ABMs. See Section 3.3 for implementation
details.
#### 4.2.1 The BINN-guided and mean-field PDEs both accurately forecast
Pulling ABM data
The parameters for the Pulling ABM are $\bm{p}=(r_{m}^{pull},p_{pull})^{T}$.
To evaluate the BINN-guided and mean-field PDE models’ performances in
forecasting Pulling ABM data over a range of agent pulling parameter values,
we computed eleven ABM datasets by varying $p_{pull}=0.0,0.1,0.2,\dots,1.0$
while fixing $r_{m}^{pull}=1.0$. The inferred rates of agent diffusion from both models suggest that agents diffuse more slowly at low densities and faster at high densities, and that larger values of $p_{pull}$ lead to increased density-dependent diffusion rates (Figure 5(a)). The two models achieve
comparable training and testing MSE values for all values of $p_{pull}$,
though the mean-field PDE usually attains slightly smaller values (Figure
5(b)). Snapshots of both simulated PDE models against data show that their ABM predictions are visually indistinguishable (Supplementary Figure 12(a-c)).
Figure 5: Forecasting Pulling ABM data with the mean-field (MF) and BINN-
guided PDEs. (a) Plots of the mean-field diffusion rate,
$\mathcal{D}^{pull}(T)$, from Equation (2) and the inferred BINN diffusion
rate, $\mathcal{D}^{MLP}(T)$, for $p_{pull}=0.1,0.3,\dots,0.9$ (results not
shown for $p_{pull}=0.0,0.2,\dots,1.0$ for visual ease) while fixing
$r_{m}^{pull}=1.0$. (b) Plots of the mean-field and BINN-guided PDEs’ computed
training and testing MSE values while varying $p_{pull}$ and fixing
$r_{m}^{pull}=1.0$. (c) Plots of $\mathcal{D}^{pull}(T)$ and
$\mathcal{D}^{MLP}(T)$ for $r_{m}^{pull}=0.2,0.4,\dots,1.0$ while fixing
$p_{pull}=0.5$. (d) Plots of the mean-field and BINN-guided PDEs’ computed
training and testing MSE values while varying $r_{m}^{pull}$ and fixing
$p_{pull}=0.5$.
To evaluate both PDE models’ performances over a range of pulling agent
migration values, we computed ten Pulling ABM datasets with
$r_{m}^{pull}=0.1,0.2,\dots,1.0$ while fixing $p_{pull}=0.5$. We find close
agreement between both models’ inferred diffusion rates for values of
$r_{m}^{pull}$ (Figure 5(c)). As a result, both models achieve similar
computed training and testing MSE values (Figure 5(d)). Snapshots of both
simulated PDE models against data reveals that their ABM predictions are
visually indistinguishable (Supplementary Figure 12(d-f)).
#### 4.2.2 BINN-guided PDEs accurately forecast Adhesion ABM data when the
mean-field PDE is ill-posed
The parameters for the Adhesion ABM are $\bm{p}=(r_{m}^{adh},p_{adh})^{T}$. To
evaluate the BINN-guided and mean-field PDE models’ performances over a range
of agent adhesion parameter values, we computed eleven ABM datasets by varying
$p_{adh}=0.0,0.1,0.2,\dots,1.0$ while fixing $r_{m}^{adh}=1.0$. The inferred
rates of agent diffusion from both models decrease with agent density for most
values of $p_{adh}$ (Figure 6(a)). When $p_{adh}=0$, the BINN-guided diffusion rate is slightly increasing while the mean-field model's diffusion rate is constant. The BINN-guided diffusion rates decline faster with agent density than the corresponding mean-field diffusion rates for densities below 0.4.
Both models agree that the density-dependent rates of diffusion fall as
$p_{adh}$ increases. We computed the training and testing MSEs for both models for all values of $p_{adh}$ (Figure 6(b)) and partition the results as follows:
* When $\bm{p_{adh}<0.5}$: both models achieve similar training MSE values near $7\times 10^{-5}$ and testing MSE values around $10^{-4}$.
* When $\bm{0.5\leq p_{adh}\leq 0.75}$: the mean-field PDE model's training and testing MSE values increase as $p_{adh}$ increases, with a maximum computed value above $3\times 10^{-4}$. The BINN-guided PDE model's training and testing MSE values remain near $7\times 10^{-5}$ and $10^{-4}$, respectively.
* When $\bm{p_{adh}>0.75}$: the mean-field PDE model is ill-posed and cannot forecast this ABM data. The BINN-guided PDE model's computed training and testing MSE values increase as $p_{adh}$ increases, with a maximum computed value of $2\times 10^{-4}$.
Close inspection of snapshots from both PDE model simulations against ABM data
from $p_{adh}=0.7$ reveals that the mean-field PDE model slightly overpredicts
the data at high densities above 0.5 and low densities below 0.1, whereas the
BINN-guided PDE closely matches the data (Supplementary Figure 13(c) inset).
Figure 6: Forecasting Adhesion ABM data with the mean-field (MF) and BINN-
guided PDEs. (a) Plots of the mean-field diffusion rate,
$\mathcal{D}^{adh}(T)$, from Equation (3) and the inferred BINN diffusion
rate, $\mathcal{D}^{MLP}(T)$, for $p_{adh}=0.1,0.3,\dots,0.9$ (results not
shown for $p_{adh}=0.0,0.2,\dots,1.0$ for visual ease) while fixing
$r_{m}^{adh}=1.0$. (b) Plots of the mean-field and BINN-guided PDEs’ computed
training and testing MSE values while varying $p_{adh}$ and fixing
$r_{m}^{adh}=1.0$. (c) Plots of $\mathcal{D}^{adh}(T)$ and
$\mathcal{D}^{MLP}(T)$ for $r_{m}^{adh}=0.2,0.4,\dots,1.0$ while fixing
$p_{adh}=0.5$. (d) Plots of the mean-field and BINN-guided PDEs’ computed
training and testing MSE values while varying $r_{m}^{adh}$ and fixing
$p_{adh}=0.5$.
To evaluate both PDE models’ performances over a range of adhesive agent
migration values, we computed ten ABM datasets with
$r_{m}^{adh}=0.1,0.2,\dots,1.0$ while fixing $p_{adh}=0.5$. Both PDE models similarly suggest that the density-dependent diffusion rates decrease for larger agent density values and that these rates increase for larger values of
$r_{m}^{adh}$ (Figure 6(c)). As a result, both PDEs achieve similar computed
training and testing MSE values for most values of $r_{m}^{adh}$ (Figure
6(d)). When $r_{m}^{adh}=0.1$, however, the BINN-guided PDE’s testing MSE
value is close to $10^{-4}$, whereas the mean-field PDE attains a much lower
testing MSE value near $6\times 10^{-5}$. Despite these differences, the two
model simulations appear similar at these parameter values (Supplementary
Figure 13(d-f)).
#### 4.2.3 BINN-guided PDEs accurately forecast Pulling & Adhesion ABM data
with a one-compartment model
The parameters for the Pulling & Adhesion ABM are
$\bm{p}=(r_{m}^{pull},r_{m}^{adh},p_{pull},p_{adh},\alpha)^{T}$. We evaluate the performance of the BINN-guided and mean-field PDE models in forecasting
data from the Pulling & Adhesion ABM. We created 48 ABM datasets by fixing the
base parameter values at $\bm{p}_{base}=(1.0,0.25,0.33,0.33,0.5)^{T}$ and then
varying one parameter at a time over several values. We vary
$r_{m}^{pull}=0.5,0.6,\dots,1.5$; $r_{m}^{adh}=0.0,0.1,\dots,1.0$;
$p_{pull}=0.1,0.2,\dots,0.6,0.67$; $p_{adh}=0.1,0.2,\dots,0.6,0.67$; and
$\alpha=0.0,0.1,\dots,1.0$. These parameter values were chosen to always
satisfy $p_{pull}+p_{adh}\leq 1.$
The BINN models’ inferred diffusion rates are often U-shaped with large
diffusion values at low and high agent densities and smaller values at
intermediate densities (Figure 7). This U-shaped profile tends to shift upward for larger values of $r_{m}^{pull},r_{m}^{adh},\text{ and }p_{pull}$ and downward for larger values of $p_{adh}$ and $\alpha$. The inferred diffusion rates appear most sensitive to changes in the $\alpha$ parameter: at $\alpha=0.0$, the rate strictly increases with agent density and attains an average value of 0.289; at $\alpha=1.0$, it strictly decreases and has an average value of 0.051.
The inferred diffusion rate is also sensitive to the $r_{m}^{adh}$ and
$r_{m}^{pull}$ parameters: varying $r_{m}^{adh}$ primarily alters the BINN
diffusion rate at intermediate agent density values, whereas varying
$r_{m}^{pull}$ changes the BINN diffusion rate at low and high agent density values.
Figure 7: The inferred BINN diffusion rates for Pulling & Adhesion ABM data.
Plots of the inferred BINN diffusion rate, $\mathcal{D}^{MLP}(T)$, when
varying (a) $r_{m}^{pull}$, (b) $r_{m}^{adh}$, (c) $p_{pull}$, (d) $p_{adh}$,
(e) $\alpha$.
Recall that the BINN-guided PDE computes a single compartment to forecast the
total agent density, $T(x,t)$, whereas the mean-field PDE computes two
compartments forecasting the pulling and adhesive agent densities, $P(x,t)$ and $H(x,t)$, respectively. We forecast the total agent density with the mean-field PDE by setting $T(x,t)=P(x,t)+H(x,t)$. The BINN-guided and mean-field
PDE models achieve similar training MSE values for most parameter values that
we considered (Figure 8). The mean-field model’s testing MSE values are often
smaller than the BINN-guided testing MSE values, though the BINN-guided PDE
also achieves small testing MSE values. For example, both PDE simulations
accurately predict ABM data when $p_{adh}$ is set to $0.4$, but visualizing
both PDE simulations shows that the mean-field PDE better matches the elbow of
the data than the BINN-guided PDE (Supplementary Figure 14(a-c)). The BINN-
guided PDE outperforms the mean-field PDE in forecasting data for small values
of $r_{m}^{adh}$: plotting both PDE simulations against data from
$r_{m}^{adh}=0.1$ shows that the mean-field PDE underpredicts the largest
agent density values, while the BINN-guided PDE accurately matches this data
(Supplementary Figure 14(d-f)).
Figure 8: Forecasting Pulling & Adhesion ABM data with the mean-field and
BINN-guided PDEs. Plots of the mean-field and BINN-guided PDEs' computed training and testing MSE values while varying (a) $r_{m}^{pull}$, (b)
$r_{m}^{adh}$, (c) $p_{pull}$, (d) $p_{adh}$, (e) $\alpha$.
### 4.3 Predicting ABM data at new parameter values
ABM simulations can be computationally expensive when the model includes
complex rules or consists of many agents. This computational bottleneck makes
it challenging to investigate ABM behavior at many parameter values. We now
examine how performing multivariate interpolation on several BINN-inferred
diffusion rates can aid the prediction of previously-unseen ABM data at new
parameter values (see Section 3.4 for implementation details).
We predict new data from the Adhesion and Pulling & Adhesion ABMs in this
section. We do not include the Pulling ABM in this section because its mean-field PDE model accurately forecasted ABM data for all parameter values that we considered in Section 4.2.1.
#### 4.3.1 Predicting Adhesion ABM data
The parameters for the Adhesion ABM are $\bm{p}=(r_{m}^{adh},p_{adh})^{T}$. We
perform ABM data prediction for $p_{adh}\geq 0.5$ in this section because we
found that the mean-field PDE model accurately forecasted ABM data for
$p_{adh}\leq 0.5$ in Section 4.2.2.
We first perform interpolation over the $p_{adh}$ parameter while fixing
$r_{m}^{adh}$. The prior data collection consists of six ABM datasets
generated by varying $p_{adh}=0.5,0.6,0.7,\dots,1.0$ while fixing
$r_{m}^{adh}=1.0$; the new data collection consists of five ABM datasets
generated by varying $p_{adh}=0.55,0.65,0.75,0.85,\text{ and }0.95$ while
fixing $r_{m}^{adh}=1.0$. We performed multivariate interpolation over the six
inferred $\mathcal{D}^{MLP}(T;\bm{p})$ terms from the prior data collection to
generate $\mathcal{D}^{interp}(T;\bm{p})$. We use this interpolant to predict
the diffusion rates for all parameters from the new data collection (Figure 9(a)). All inferred diffusion rates decrease with agent density and tend to fall with larger $p_{adh}$ values. Most of the computed training and testing MSE
values on the new data collection are comparable to their counterparts from
the prior data collection, except the testing MSE at $p_{adh}=0.95$ exceeds
$5\times 10^{-4}$ while the testing MSEs at $p_{adh}=0.9\text{ and }1.0$ do
not exceed $2.5\times 10^{-4}$ (Figure 9(b)). Visual inspection of the
simulated PDE prediction against ABM data at $p_{adh}=0.95$ reveals that it
matches the data well but slightly mispredicts the data’s heel at later time
points (Supplementary Figure 15(a-c)).
Figure 9: Predicting Adhesion ABM data with BINN-guided PDEs and multivariate
interpolation for new $p_{adh}$ values. The parameters for the Adhesion ABM
are given by $\bm{p}=(r_{m}^{adh},p_{adh})^{T}.$ Here, we vary $p_{adh}$ while
fixing $r_{m}^{adh}=1.0$. The prior data collection consists of
$p_{adh}=0.5,0.6,\dots,1.0$ and the new data collection consists of $p_{adh}=0.55,0.65,\dots,0.95$. (a) Plots of the learned
$\mathcal{D}^{MLP}(T;\bm{p})$ diffusion rates for the prior data collection.
We performed multivariate interpolation on these rates to obtain
$\mathcal{D}^{interp}(T;\bm{p})$, which we plot for the new data collection.
(b) Plots of the BINN-guided PDEs' computed training and testing MSE values on the prior data collection, and the interpolated PDE's training and testing MSE values on the new data collection.
We next perform interpolation over the $r_{m}^{adh}$ and $p_{adh}$ parameters.
The prior data collection consists of 18 ABM datasets generated by varying
$r_{m}^{adh}=0.1,0.5,1.0$ and $p_{adh}=0.5,0.6,\dots,1.0$; the new data
collection consists of ten ABM datasets generated from a Latin hypercube sampling of $(r_{m}^{adh},p_{adh})\in[0.1,1.0]\times[0.5,1.0]$ (Figure 10(a)
and Supplementary Table 4). We performed multivariate interpolation over each
$\mathcal{D}^{MLP}(T;\bm{p})$ from the prior data collection to generate
$\mathcal{D}^{interp}(T;\bm{p})$. The predicted diffusion rates for the new
data collection decrease with agent density, rise for larger $r_{m}^{adh}$
values, and decrease faster for larger $p_{adh}$ values (Figure 10(b)). We
order the parameters from the new data collection by increasing training MSE
values; four of the five lowest training MSE values result from the five
smallest $p_{adh}$ values, and four of the five highest MSE values result from
the five highest $p_{adh}$ values (Figure 10(c)). The four lowest training and testing MSE values are all below $1\times 10^{-4}$, the eight lowest are all below $2\times 10^{-4}$, and the highest testing MSE value reaches $1.6\times 10^{-3}$. Visual inspection of the simulated PDE prediction with the highest
testing MSE value reveals that this simulation mispredicts the data’s heel but
otherwise matches the ABM data well (Supplementary Figure 16(a-c)). Visual
inspection of the simulated PDE prediction with the third-highest MSE value shows that this simulation accurately matches the ABM data (Supplementary Figure 16(d-f)).
Figure 10: Predicting Adhesion ABM data with BINN-guided PDEs and multivariate
interpolation for new $r_{m}^{adh}$ and $p_{adh}$ values. The parameters for
the Adhesion ABM are given by $\bm{p}=(r_{m}^{adh},p_{adh})^{T}.$ Here, we
vary both parameters. (a) The prior data collection consists of
$r_{m}^{adh}=0.1,0.5,1.0$ and $p_{adh}=0.5,0.6,\dots,1.0$ and the new data
collection consists of a Latin hypercube (LHC) sampling of
$\bm{p}\in[0.1,1.0]\times[0.5,1.0]$ with 10 samples. (b) We performed
multivariate interpolation on the $\mathcal{D}^{MLP}(T;\bm{p})$ rates on the
prior data collection to obtain $\mathcal{D}^{interp}(T;\bm{p})$. We plot
three illustrative $\mathcal{D}^{interp}(T;\bm{p})$ values from the new data
collection. (c) Plots of the interpolated PDE's computed training and testing MSE values on the new data collection.
#### 4.3.2 Predicting Pulling & Adhesion ABM data
The parameters for the Pulling & Adhesion ABM are
$\bm{p}=(r_{m}^{pull},r_{m}^{adh},p_{pull},p_{adh},\alpha)^{T}$. We perform
ABM data prediction over a large range of parameter values to determine whether one-compartment BINN-guided PDE simulations can predict data generated by this ABM's two interacting subpopulations.
We perform multivariate interpolation over the $p_{pull},p_{adh},$ and
$\alpha$ parameters while fixing $r_{m}^{pull}=1.0$ and $r_{m}^{adh}=0.25$.
The prior and new data collections consist of 40 and 20 ABM parameter
combinations, respectively, that were generated from Latin hypercube samplings
of $(p_{pull},p_{adh},\alpha)\in[0,0.67]\times[0,0.67]\times[0,1]$ (Figure
11(a) and Supplementary Tables 5 and 6). We chose samplings where
$p_{pull}+p_{adh}\leq 1.0$ for all samples. The computed training and testing
MSE values for the new parameter collection suggest all simulated PDE
predictions accurately match the ABM data at those parameters (Figure 11(b)).
Of the 20 computed testing MSE values in the new data collection, four are
below $1.0\times 10^{-4}$, 16 are below $2.0\times 10^{-4}$, and all are below
$5.0\times 10^{-4}$. The highest and third-highest testing MSE values result from $(p_{pull},p_{adh},\alpha)=(0.218,0.553,0.675)$ and $(0.251,0.486,0.975)$, respectively. Visually inspecting the simulated PDE
predictions from these parameter values against ABM data reveals that both
match the data well, though the worst prediction overpredicts the highest ABM
density values (Supplementary Figure 17).
Figure 11: Predicting Pulling & Adhesion ABM data with BINN-guided PDEs and
multivariate interpolation for new $p_{pull},p_{adh}$, and $\alpha$ values.
The parameters for the Pulling & Adhesion ABM are given by $\bm{p}=(r_{m}^{pull},r_{m}^{adh},p_{pull},p_{adh},\alpha)^{T}.$ Here, we vary
$p_{pull},p_{adh},$ and $\alpha$ while fixing $r_{m}^{pull}=1.0$ and
$r_{m}^{adh}=0.25$. (a) The prior data consists of a Latin hypercube (LHC)
sampling of $(p_{pull},p_{adh},\alpha)\in[0,0.67]\times[0,0.67]\times[0,1]$
with 40 samples and the new data consists of a LHC sampling of the same domain
with 20 samples. (b) Plots of the interpolated PDE's computed training and testing MSE values on the new data.
### 4.4 Comparing the computational expense of each modeling approach
We finish with a discussion on the computational expense of all approaches
discussed in this work (Table 3 and Supplementary Figure 18). We recorded the
computed wall times to simulate each ABM, train each BINN model, and simulate
each PDE in Section 4.2. Averaged across all ABMs, an ABM dataset took 40.0 minutes to generate, with a standard deviation of 15.6 minutes. The average mean-field PDE simulations for the Pulling ABM and the Adhesion ABM took 0.6 and 0.5 seconds to complete, respectively, which is about 4,000 and 4,500 times faster than the average ABM simulation time. The average mean-field PDE simulation time for the Pulling & Adhesion ABM was 4.7 seconds, which is 542 times faster than the average ABM simulation time. Training a BINN model is the most time-consuming task, with an average time of 11.2 hours across all ABMs and an average standard deviation of 4.32 hours. The average BINN-guided PDE simulation takes 82.9 seconds with a standard deviation of 77.12 seconds, which is approximately 28 times faster than simulating the ABM.
ABM Name | ABM simulation | MF PDE simulation | BINN Training | BG PDE simulation
---|---|---|---|---
Adhesion | 37.5 (15.4) minutes | 0.5 (0.15) seconds | 10.6 (4.44) hours | 16.9 (23.65) seconds
Pulling | 39.9 (15.8) minutes | 0.6 (0.20) seconds | 10.0 (3.99) hours | 164.8 (156.9) seconds
Pulling & Adhesion | 42.5 (15.52) minutes | 4.7 (1.20) seconds | 13.1 (4.54) hours | 66.9 (50.81) seconds
Table 3: Computational expenses of each modeling approach. The mean wall time computations (standard deviation in parentheses) for ABM simulations, mean-field (MF) PDE simulations, BINN training, and BINN-guided (BG) PDE simulations for all three ABMs.
## 5 Discussion and Future Work
The primary aim of this work was to introduce how BINNs can be used to learn
PDE models capable of forecasting and predicting ABM data. We used three
stochastic ABMs that consist of rules on how agent pulling and adhesion impact
agent migration to illustrate this approach. These models capture some of the
key cellular interactions for tumor invasion, wound healing, and development
[3, 12]. Due to the models’ high computational expense, however, it is challenging to predict how the parameters that characterize these processes will impact model behavior. Modelers frequently address this
limitation by coarse-graining ABM rules into computationally-efficient PDE
models. Unfortunately, the resulting models can give misleading ABM
predictions and may be ill-posed for certain parameter values [6, 9]. Here, we
demonstrated that training BINNs to ABM data allows us to learn BINN-guided
PDE models that are capable of both (1) forecasting future ABM data at a fixed parameter value and (2) predicting ABM data from new parameter values. We use
BINNs to forecast ABM data by simulating Equation (10) with the trained BINN
model’s inferred diffusion rate. ABM prediction requires previously-computed
ABM data from several parameter values. We train a BINN model to each dataset,
and then perform multivariate interpolation over their inferred diffusion
rates. We predict the output ABM data at new parameters by simulating Equation
(10) with the resulting interpolant as the diffusion rate.
We highlighted the strong forecasting and prediction capabilities of this
approach using multiple simulations from the Pulling, Adhesion, and Pulling &
Adhesion ABMs. For the Pulling ABM, the trained BINNs learn diffusion rates
that are similar to the mean-field diffusion rates for all parameter values;
as such, both models perform similarly in forecasting future ABM data. Due to
the similar computed results for all parameter values, we suggest that the
mean-field PDE model can be used (in lieu of performing multivariate
interpolation over several BINN diffusion rates) to predict new Pulling ABM
data. For the Adhesion ABM, the BINN-guided PDEs accurately forecast ABM data,
even for large adhesion values where the mean-field PDE is ill-posed. We can
perform multivariate interpolation to accurately predict new Adhesion ABM data
for large adhesion values. For the Pulling & Adhesion ABM, both the BINN-guided and mean-field PDE simulations accurately forecast the total ABM data.
The BINN-guided PDE achieves this with a one-compartment model, whereas the
mean-field PDE must compute two compartments (one for each agent type). We
were able to perform multivariate interpolation to accurately predict new
Pulling & Adhesion ABM data when varying parameters that alter the proportions
of agent types in the simulation, and the rates of agent pulling and adhesion.
A limitation of our approach for ABM forecasting and prediction is the
computational expense of BINN model training. The average BINN training
procedure in this study took 11.2 hours, which is about 17 times longer than
the average ABM data generation time of 40 minutes. Once a BINN model has been
trained, however, the average BINN-guided PDE simulation took 83 seconds,
which is roughly 28 times faster than the average time to generate an ABM
dataset. One possible source of these long BINN training times is our chosen BINN model architecture, which consists of over 50,000 trainable parameters. Kaplarevic-Malisic et al. [33] proposed a genetic algorithm to identify the optimal model architecture for PINN models. In future work, we plan to
implement this algorithm to identify simpler BINN model architectures that can
be efficiently trained to learn predictive PDE models for ABMs.
Future work may include modeling more complex ABMs using the methods proposed
in this study. We focused on simple pulling rules that involve two agents, but
Chappelle and Yates [7] considered rules where multiple agents interact during
each pulling event. They found that the predictive accuracy of the coarse-
grained PDE models decreases as more agents become involved in pulling
processes. VandenHeuvel et al. [49] studied ABMs involving mechanical
relaxation to mimic epithelial tissues, but the coarse-grained PDEs are only
valid for fast mechanical relaxation rates. The coarse-grained PDE models for
the migration rules from these ABMs obey Equation (10) in one spatial
dimension; we could use BINNs to learn PDEs that better forecast and predict
ABM data for these challenging model rules. Agent proliferation is an
important component of biological invasion in wound healing and cancer growth
that we did not consider in this work. Many previous studies have shown that
coarse-grained PDE models fail to accurately describe ABM simulations when agents proliferate quickly [9]. We could easily extend our methods to predict
ABM data from models where agents migrate and proliferate by adding a
population growth term into Equation (10) during BINN training.
An additional area of future work includes further BINN model development as a
means to deepen our ABM analysis. We studied population heterogeneity in this
work using an ABM with two agent types. We used a one-compartment PDE during
BINN training to demonstrate BINNs’ ability to forecast and predict complex
data with simple PDE models. Alternatively, we could update the BINN modeling
framework to predict the density of each agent type (as well as their
diffusion and/or growth terms) with a multi-compartment PDE framework. We
could infer how model parameters affect the modeling terms for each type of
agent. Incorporating the Bayesian inference techniques introduced in [34]
would allow us to develop Bayesian BINNs capable of performing uncertainty
quantification for each modeling term. There are thus many ways to extend the
methods proposed in this work, which will allow us to study more complex ABMs
and improve the learned models’ predictive capabilities.
Data Availability statement: All code for this work is publicly available at
https://github.com/johnnardini/Forecasting_predicting_ABMs.
## References
* [1] Michalina Janiszewska, Marina C. Primi, and Tina Izard. Cell adhesion in cancer: beyond the migration of single cells. Journal of Biological Chemistry, 295(8):2495–2505, February 2020.
* [2] Katheryn E. Rothenberg, Yujun Chen, Jocelyn A. McDonald, and Rodrigo Fernandez-Gonzalez. Rap1 coordinates cell-cell adhesion and cytoskeletal reorganization to drive collective cell migration in vivo. Current Biology, 33(13):2587–2601.e5, July 2023.
* [3] John T. Nardini, Douglas A. Chapnick, Xuedong Liu, and David M. Bortz. Modeling keratinocyte wound healing: cell-cell adhesions promote sustained migration. Journal of Theoretical Biology, 400:103–117, July 2016.
* [4] Peter Friedl and Darren Gilmour. Collective cell migration in morphogenesis, regeneration and cancer. Nature Reviews Molecular Cell Biology, 10(7):445–457, July 2009.
* [5] R. A. F. Clark and P. Henson. The molecular and cellular biology of wound repair. Plenum Press, New York, second edition, 1995.
* [6] Kathleen Anguige and Christian Schmeiser. A one-dimensional model of cell diffusion and aggregation, incorporating volume filling and cell-to-cell adhesion. Journal of Mathematical Biology, 58(3):395, March 2009.
* [7] George Chappelle and Christian A. Yates. Pulling in models of cell migration. Physical Review E, 99(6):062413, June 2019.
* [8] Shahzeb Raja Noureen, Jennifer P. Owen, Richard L. Mort, and Christian A. Yates. Swapping in lattice-based cell migration models. Physical Review E, 107(4):044402, April 2023.
* [9] Ruth E. Baker and Matthew J. Simpson. Correcting mean-field approximations for birth-death-movement processes. Physical Review E, 82(4):041905, October 2010.
* [10] Matthew J. Simpson, Ruth E. Baker, Pascal R. Buenzli, Ruanui Nicholson, and Oliver J. Maclaren. Reliable and efficient parameter estimation using approximate continuum limit descriptions of stochastic models. Journal of Theoretical Biology, 549:111201, September 2022.
* [11] Robert J. H. Ross, Christian. A. Yates, and Ruth E. Baker. Inference of cell–cell interactions from population density characteristics and cell trajectories on static and growing domains. Mathematical Biosciences, 264:108–118, June 2015.
* [12] Robin N. Thompson, Christian A. Yates, and Ruth E. Baker. Modelling cell migration and adhesion during development. Bulletin of Mathematical Biology, 74(12):2793–2809, December 2012.
* [13] John T. Nardini, Ruth E. Baker, Matthew J. Simpson, and Kevin B. Flores. Learning differential equation models from stochastic agent-based model simulations. Journal of The Royal Society Interface, 18(176):20200987, March 2021.
* [14] Le-Minh Kieu, Nicolas Malleson, and Alison Heppenstall. Dealing with uncertainty in agent-based models for short-term predictions. Royal Society Open Science, 7(1):191074, January 2020.
* [15] Dale Larie, Gary An, and R. Chase Cockrell. The use of artificial neural networks to forecast the behavior of agent-based models of pathophysiology: an example utilizing an agent-based model of sepsis. Frontiers in Physiology, 12:716434, October 2021.
* [16] Matthew J. Simpson, Jesse A. Sharp, and Ruth E. Baker. Distinguishing between mean-field, moment dynamics and stochastic descriptions of birth–death–movement processes. Physica A: Statistical Mechanics and its Applications, 395:236–246, February 2014.
* [17] Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, April 2016.
* [18] E. Kaiser, J. Nathan Kutz, and Steven L. Brunton. Sparse identification of nonlinear dynamics for model predictive control in the low-data limit. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 474(2219):20180335, November 2018.
* [19] Samuel Rudy, Alessandro Alla, Steven L. Brunton, and J. Nathan Kutz. Data-driven identification of parametric partial differential equations. SIAM Journal on Applied Dynamical Systems, 18(2):643–660, January 2019.
* [20] Kathleen Champion, Bethany Lusch, J. Nathan Kutz, and Steven L. Brunton. Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences, 116(45):22445–22451, November 2019.
* [21] Niall M. Mangan, Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Inferring biological networks by sparse identification of nonlinear dynamics. IEEE Transactions on Molecular, Biological and Multi-Scale Communications, 2(1):52–63, June 2016.
* [22] Niall M. Mangan, J. Nathan Kutz, Steven L. Brunton, and Joshua L. Proctor. Model selection for dynamical systems via sparse regression and information criteria. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473(2204):20170009, August 2017.
* [23] Daniel A. Messenger and David M. Bortz. Weak SINDy: galerkin-based data-driven model selection. Multiscale Modeling & Simulation, 19(3):1474–1497, January 2021.
* [24] Daniel A. Messenger and David M. Bortz. Weak SINDy for partial differential equations. Journal of Computational Physics, 443:110525, October 2021.
* [25] John H. Lagergren, John T. Nardini, G. Michael Lavigne, Erica M. Rutter, and Kevin B. Flores. Learning partial differential equations for biological transport models from noisy spatio-temporal data. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 476(2234):20190800, February 2020.
* [26] John H. Lagergren, John T. Nardini, Ruth E. Baker, Matthew J. Simpson, and Kevin B. Flores. Biologically-informed neural networks guide mechanistic modeling from sparse experimental data. PLOS Computational Biology, 16(12):e1008462, December 2020.
* [27] John T. Nardini, John H. Lagergren, Andrea Hawkins-Daarud, Lee Curtin, Bethan Morris, Erica M. Rutter, Kristin R. Swanson, and Kevin B. Flores. Learning equations from biological data with limited time samples. Bulletin of Mathematical Biology, 82(9):119, September 2020.
* [28] Samuel H. Rudy, Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Data-driven discovery of partial differential equations. Science Advances, 3(4):e1602614, April 2017.
* [29] Daniel A. Messenger, Graycen E. Wheeler, Xuedong Liu, and David M. Bortz. Learning anisotropic interaction rules from individual trajectories in a heterogeneous cellular population. Journal of The Royal Society Interface, 19(195):20220412, October 2022.
* [30] Daniel A. Messenger and David M. Bortz. Learning mean-field equations from particle data using WSINDy. Physica D: Nonlinear Phenomena, 439:133406, November 2022.
* [31] Rohit Supekar, Boya Song, Alasdair Hastewell, Gary P. T. Choi, Alexander Mietke, and Jörn Dunkel. Learning hydrodynamic equations for active matter from particle simulations and experiments. Proceedings of the National Academy of Sciences, 120(7):e2206994120, February 2023.
* [32] Shengze Cai, Zhicheng Wang, Sifan Wang, Paris Perdikaris, and George Em Karniadakis. Physics-informed neural networks for heat transfer Problems. Journal of Heat Transfer, 143(6):060801, June 2021.
* [33] Ana Kaplarevic-Malisic, Branka Andrijevic, Filip Bojovic, Srdan Nikolic, Lazar Krstic, Boban Stojanovic, and Milos Ivanovic. Identifying optimal architectures of physics-informed neural networks by evolutionary strategy. Applied Soft Computing, page 110646, July 2023.
* [34] Kevin Linka, Amelie Schäfer, Xuhui Meng, Zongren Zou, George Em Karniadakis, and Ellen Kuhl. Bayesian physics informed neural networks for real-world nonlinear dynamical systems. Computer Methods in Applied Mechanics and Engineering, 402:115346, December 2022.
* [35] M. Raissi, P. Perdikaris, and G.E. Karniadakis. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, February 2019.
* [36] Yeonjong Shin, Jerome Darbon, and George Em Karniadakis. On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs. Communications in Computational Physics, 28(5):2042–2074, June 2020. arXiv:2004.01806.
* [37] Marc C. Kennedy and Anthony O’Hagan. Bayesian calibration of computer models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 63(3):425–464, September 2001.
* [38] David L. Banks and Mevin B. Hooten. Statistical challenges in agent-based modeling. The American Statistician, 75(3):235–242, July 2021.
* [39] Sokratis Papadopoulos and Elie Azar. Integrating building performance simulation in agent-based modeling using regression surrogate models: A novel human-in-the-loop energy modeling approach. Energy and Buildings, 128:214–223, September 2016.
* [40] Guus Ten Broeke, George Van Voorn, Arend Ligtenberg, and Jaap Molenaar. The use of surrogate models to analyse agent-based models. Journal of Artificial Societies and Social Simulation, 24(2):3, 2021.
* [41] Yi Zhang, Zhe Li, and Yongchao Zhang. Validation and calibration of an agent-based model: a surrogate approach. Discrete Dynamics in Nature and Society, 2020:1–9, January 2020.
* [42] Sushant S. Garud, Iftekhar A. Karimi, and Markus Kraft. Design of computer experiments: a review. Computers & Chemical Engineering, 106:71–95, November 2017.
* [43] Ralph C. Smith. Uncertainty quantification: theory, implementation, and applications. Society for Industrial and Applied Mathematics, Philadelphia, PA, January 2013.
* [44] Jari Kaipio and Ville Kolehmainen. Approximate marginalization over modelling errors and uncertainties in inverse problems. In Paul Damien, Petros Dellaportas, Nicholas G. Polson, and David A. Stephens, editors, Bayesian Theory and Applications, pages 644–672. Oxford University Press, January 2013.
* [45] Jari P. Kaipio and Erkki Somersalo. Statistical and computational inverse problems, volume 160 of Applied Mathematical Sciences. Springer New York, New York, NY, 2005.
* [46] Jari Kaipio and Erkki Somersalo. Statistical inverse problems: discretization, model reduction and inverse crimes. Journal of Computational and Applied Mathematics, 198(2):493–504, January 2007.
* [47] Claudio Angione, Eric Silverman, and Elisabeth Yaneske. Using machine learning as a surrogate model for agent-based simulations. PLOS ONE, 17(2):e0263150, February 2022.
* [48] Abdul Afram and Farrokh Janabi-Sharifi. Black-box modeling of residential HVAC system and comparison of gray-box and black-box modeling methods. Energy and Buildings, 94:121–149, May 2015.
* [49] Daniel J. VandenHeuvel, Pascal R. Buenzli, and Matthew J. Simpson. Pushing coarse-grained models beyond the continuum limit using equation learning, August 2023. arXiv:2308.11086 [math, q-bio].
* [50] Alexander Kurganov and Eitan Tadmor. New high-resolution central schemes for nonlinear conservation laws and convection–diffusion equations. Journal of Computational Physics, 160(1):241–282, May 2000.
* [51] Linda Petzold. Automatic selection of methods for solving stiff and nonstiff systems of ordinary differential equations. SIAM Journal on Scientific and Statistical Computing, 4(1):136–148, March 1983.
## Appendix A Coarse-graining ABMs into PDE models
We will coarse-grain the Pulling, Adhesion, and Pulling & Adhesion ABMs into
their mean-field PDE models. Each ABM consists of a combination of Rules A-F
from Figure 1. Each rule updates the occupancies of three consecutive lattice
sites, such as $\{(i,j-1),(i,j),(i,j+1)\}$. Recall from Section 2 that
$0_{i,j}(t)$, $P_{i,j}(t)$, and $H_{i,j}(t)$ denote the probabilities that the
individual lattice site $(i,j)$ is unoccupied, occupied by a pulling agent, or
occupied by an adhesive agent, respectively, at time $t$. To convert each rule
into a PDE model, we invoke the _mean-field assumption_ , which supposes that
all lattice site occupancies are independent of each other. This assumption
simplifies model coarse-graining by allowing us to replace the joint
probability of three lattice site occupancies with the product of the three
individual lattice site occupancy probabilities. For example, under the mean-
field assumption, we can write the probability that lattice sites
$(i,j-1),(i,j),\text{ and }(i,j+1)$ are all occupied by pulling agents at time
$t$ as $P_{i,j-1}(t)P_{i,j}(t)P_{i,j+1}(t)$; otherwise, we must consider the
joint occupancy probability for this triplet of lattice sites. Mean-field PDE models can poorly predict ABM behavior when the mean-field assumption is violated during ABM simulations; see [9, 10, 13] for further details.
### A.1 Coarse-graining the Pulling ABM
The Pulling ABM is composed of Rules A and B from Figure 1 and Section 2.2.1.
We begin coarse-graining this ABM into a PDE model by writing the master
equation governing how $P_{i,j}(t)$ changes according to these rules:
$\dfrac{\partial P_{i,j}(t)}{\partial t}=K^{A}+K^{B.1}+K^{B.2}.$ (11)
Rule A specifies how pulling agents migrate into an empty lattice site with
rate $r_{m}^{pull}/4$ when there is no neighboring agent in the lattice site
opposite the direction of migration. This rate is divided by four because the
agent randomly chooses to attempt to migrate into one of its four neighboring
lattice sites. We write this rule in the master equation as:
$\displaystyle K^{A}=$ $\displaystyle-\dfrac{2r_{m}^{pull}}{4}\left[0_{i,j-1}(t)P_{i,j}(t)0_{i,j+1}(t)+0_{i-1,j}(t)P_{i,j}(t)0_{i+1,j}(t)\right]$
$\displaystyle+\dfrac{r_{m}^{pull}}{4}\left[0_{i,j-2}(t)P_{i,j-1}(t)0_{i,j}(t)+0_{i,j}(t)P_{i,j+1}(t)0_{i,j+2}(t)+0_{i-2,j}(t)P_{i-1,j}(t)0_{i,j}(t)+0_{i,j}(t)P_{i+1,j}(t)0_{i+2,j}(t)\right],$
(12)
where the first line describes how a pulling agent moves out of lattice site
$(i,j)$, and the second line describes how a pulling agent moves into lattice
site $(i,j)$.
Rule B.1 specifies how a pulling agent migrates into an empty neighboring
lattice site and pulls its neighbor along with it, which occurs with
probability $p_{pull}$. We write this rule in the master equation as:
$\displaystyle K^{B.1}=-\dfrac{p_{pull}r_{m}^{pull}}{4}\bigg{[}$ $\displaystyle P_{i,j}(t)P_{i,j+1}(t)0_{i,j+2}(t)+0_{i,j-2}(t)P_{i,j-1}(t)P_{i,j}(t)+$
$\displaystyle P_{i,j}(t)P_{i+1,j}(t)0_{i+2,j}(t)+0_{i-2,j}(t)P_{i-1,j}(t)P_{i,j}(t)\bigg{]}$
$\displaystyle+\dfrac{p_{pull}r_{m}^{pull}}{4}\bigg{[}$ $\displaystyle P_{i,j-2}(t)P_{i,j-1}(t)0_{i,j}(t)+0_{i,j}(t)P_{i,j+1}(t)P_{i,j+2}(t)+$
$\displaystyle P_{i-2,j}(t)P_{i-1,j}(t)0_{i,j}(t)+0_{i,j}(t)P_{i+1,j}(t)P_{i+2,j}(t)\bigg{]}.$
(13)
Rule B.2 specifies how a pulling agent migrates into an empty neighboring
lattice site and fails to pull its neighbor along with it, which occurs with
probability $1-p_{pull}$. We write this rule in the master equation as:
$\displaystyle K^{B.2}=-\dfrac{(1-p_{pull})r_{m}^{pull}}{4}\bigg{[}$ $\displaystyle P_{i,j-1}(t)P_{i,j}(t)0_{i,j+1}(t)+0_{i,j-1}(t)P_{i,j}(t)P_{i,j+1}(t)+$
$\displaystyle P_{i-1,j}(t)P_{i,j}(t)0_{i+1,j}(t)+0_{i-1,j}(t)P_{i,j}(t)P_{i+1,j}(t)\bigg{]}$
$\displaystyle+\dfrac{(1-p_{pull})r_{m}^{pull}}{4}\bigg{[}$ $\displaystyle P_{i,j-2}(t)P_{i,j-1}(t)0_{i,j}(t)+0_{i,j}(t)P_{i,j+1}(t)P_{i,j+2}(t)+$
$\displaystyle P_{i-2,j}(t)P_{i-1,j}(t)0_{i,j}(t)+0_{i,j}(t)P_{i+1,j}(t)P_{i+2,j}(t)\bigg{]}.$
(14)
To obtain the resulting PDE model for the Pulling ABM, we substitute Equations
(12), (13), and (14) into Equation (11) and set $0_{i,j}=1-P_{i,j}.$ We
replace each term with its Taylor expansion, up to second order:
$\displaystyle P_{i\pm m,j}(t)$ $\displaystyle=P_{i,j}(t)\pm m\Delta(P_{i,j}(t))_{x}+\dfrac{m^{2}\Delta^{2}}{2}(P_{i,j}(t))_{xx}+\mathcal{O}(\Delta^{3}),$ $\displaystyle m=-2,-1,0,1,2;$
$\displaystyle P_{i,j\pm n}(t)$ $\displaystyle=P_{i,j}(t)\pm n\Delta(P_{i,j}(t))_{y}+\dfrac{n^{2}\Delta^{2}}{2}(P_{i,j}(t))_{yy}+\mathcal{O}(\Delta^{3}),$ $\displaystyle n=-2,-1,0,1,2;$ (15)
where subscripts denote differentiation with respect to the indicated variable,
and $\Delta$ is the length of each lattice site. As shown in the Mathematica
notebook Pulling_model_coarse_graining.nb, taking the limit of the resulting
expression as $\Delta\rightarrow 0$ leads to the mean-field PDE model for the
Pulling ABM:
$\dfrac{\partial P}{\partial
t}=\nabla\cdot\left(\dfrac{r_{m}^{pull}}{4}\left(1+3p_{pull}P^{2}\right)\nabla
P\right),$ (16)
where $P=P_{i,j}(t)$.
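As an independent cross-check of the Mathematica computation, the following SymPy sketch (our own, under the stated assumptions) verifies that the x-direction terms of Equations (12)-(14), Taylor-expanded as in Equation (15), reproduce the x-part of Equation (16); for a density profile that varies only in x, the y-direction terms cancel identically, so checking one direction suffices.

```python
import sympy as sp

x, d = sp.symbols("x Delta")
r, p = sp.symbols("r_pull p_pull", positive=True)
P = sp.Function("P")(x)

def occ(k):
    # Occupancy probability at a site shifted k lattice spacings in x,
    # Taylor-expanded to second order as in Eq. (15)
    return P + k * d * sp.diff(P, x) + (k * d) ** 2 / 2 * sp.diff(P, x, 2)

def emp(k):
    # Probability that the shifted site is empty: 0_{i,j} = 1 - P_{i,j}
    return 1 - occ(k)

# x-direction terms of K^A, K^{B.1}, and K^{B.2}
KA = -2 * (r / 4) * emp(-1) * occ(0) * emp(1) \
     + (r / 4) * (emp(-2) * occ(-1) * emp(0) + emp(0) * occ(1) * emp(2))
KB1 = -(p * r / 4) * (occ(0) * occ(1) * emp(2) + emp(-2) * occ(-1) * occ(0)) \
      + (p * r / 4) * (occ(-2) * occ(-1) * emp(0) + emp(0) * occ(1) * occ(2))
KB2 = -((1 - p) * r / 4) * (occ(-1) * occ(0) * emp(1) + emp(-1) * occ(0) * occ(1)) \
      + ((1 - p) * r / 4) * (occ(-2) * occ(-1) * emp(0) + emp(0) * occ(1) * occ(2))

master = sp.expand(KA + KB1 + KB2)
target = sp.expand(sp.diff((r / 4) * (1 + 3 * p * P ** 2) * sp.diff(P, x), x))

print(sp.simplify(master.coeff(d, 0)))           # 0
print(sp.simplify(master.coeff(d, 1)))           # 0
print(sp.simplify(master.coeff(d, 2) - target))  # 0, matching Eq. (16)
```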
### A.2 Coarse-graining the Adhesion ABM
The Adhesion ABM is composed of Rules C and D from Figure 1 and Section 2.2.2.
We begin coarse-graining this ABM into a PDE model by writing the master
equation governing how $H_{i,j}(t)$ changes according to these rules:
$\dfrac{\partial H_{i,j}(t)}{\partial t}=K^{C}+K^{D}.$ (17)
Rule C specifies how adhesive agents migrate into an empty lattice site with
rate $r_{m}^{adh}/4$ when there is no neighboring agent in the lattice site
opposite the direction of migration. We write this rule in the master equation
as:
$\displaystyle K^{C}=-\dfrac{2r_{m}^{adh}}{4}\bigg{[}$
$\displaystyle
0_{i,j-1}(t)H_{i,j}(t)0_{i,j+1}(t)+0_{i-1,j}(t)H_{i,j}(t)0_{i+1,j}(t)\bigg{]}$
$\displaystyle+\dfrac{r_{m}^{adh}}{4}\bigg{[}$ $\displaystyle
0_{i,j-2}(t)H_{i,j-1}(t)0_{i,j}(t)+0_{i,j}(t)H_{i,j+1}(t)0_{i,j+2}(t)+$
$\displaystyle
0_{i-2,j}(t)H_{i-1,j}(t)0_{i,j}(t)+0_{i,j}(t)H_{i+1,j}(t)0_{i+2,j}(t)\bigg{]},$
(18)
where the first line describes how an adhesive agent moves out of lattice site
$(i,j)$, and the second and third lines describe how an adhesive agent moves
into lattice site $(i,j)$.
Rule D specifies how adhesive agents migrate into an empty neighboring lattice
site when a neighboring adhesive agent is in the lattice site opposite the
direction of migration. The neighboring adhesive agent attempts to adhere to
the migratory agent and abort the migration event. The adhesion event succeeds
with probability $p_{adh}$, and neither agent changes its position. The
adhesion event fails with probability $1-p_{adh}$, and the migratory agent
shifts into the previously-empty lattice site while the neighboring agent
remains in its previous lattice site. We write this rule in the master
equation as:
$\displaystyle K^{D}=-\dfrac{(1-p_{adh})r_{m}^{adh}}{4}\bigg{[}$
$\displaystyle
H_{i,j-1}(t)H_{i,j}(t)0_{i,j+1}(t)+0_{i,j-1}(t)H_{i,j}(t)H_{i,j+1}(t)+$
$\displaystyle
H_{i-1,j}(t)H_{i,j}(t)0_{i+1,j}(t)+0_{i-1,j}(t)H_{i,j}(t)H_{i+1,j}(t)\bigg{]}$
$\displaystyle+\dfrac{(1-p_{adh})r_{m}^{adh}}{4}\bigg{[}$ $\displaystyle
H_{i,j-2}(t)H_{i,j-1}(t)0_{i,j}(t)+0_{i,j}(t)H_{i,j+1}(t)H_{i,j+2}(t)+$
$\displaystyle
H_{i-2,j}(t)H_{i-1,j}(t)0_{i,j}(t)+0_{i,j}(t)H_{i+1,j}(t)H_{i+2,j}(t)\bigg{]}.$
(19)
To obtain the resulting PDE model for the Adhesion ABM, we substitute
Equations (18) and (19) into Equation (17) and set $0_{i,j}=1-H_{i,j}$. We
replace each term with its Taylor expansion, up to second order:
$\displaystyle H_{i\pm m,j}(t)$ $\displaystyle=H_{i,j}(t)\pm m\Delta(H_{i,j}(t))_{x}+\dfrac{m^{2}\Delta^{2}}{2}(H_{i,j}(t))_{xx}+\mathcal{O}(\Delta^{3}),$ $\displaystyle m=-2,-1,0,1,2;$
$\displaystyle H_{i,j\pm n}(t)$ $\displaystyle=H_{i,j}(t)\pm n\Delta(H_{i,j}(t))_{y}+\dfrac{n^{2}\Delta^{2}}{2}(H_{i,j}(t))_{yy}+\mathcal{O}(\Delta^{3}),$ $\displaystyle n=-2,-1,0,1,2.$ (20)
As shown in the Mathematica notebook Adhesion_model_coarse_graining.nb, taking
the limit of the resulting expression as $\Delta\rightarrow 0$ leads to the
mean-field PDE model for the Adhesion ABM:
$\dfrac{\partial H}{\partial t}=\nabla\cdot\left(\dfrac{r_{m}^{adh}}{4}\left(3p_{adh}\left(H-\dfrac{2}{3}\right)^{2}+1-\dfrac{4p_{adh}}{3}\right)\nabla H\right),$ (21)
where $H=H_{i,j}(t)$.
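Equation (21) makes the ill-posedness discussed in Section 4 easy to probe numerically: the diffusion rate multiplying $\nabla H$ becomes negative for sufficiently large $p_{adh}$ at intermediate densities. A quick check of this (our own sketch, not the paper’s code) is:

```python
import numpy as np

def D_adh(H, r_adh=1.0, p_adh=0.9):
    # Mean-field diffusion rate from Eq. (21)
    return (r_adh / 4.0) * (3.0 * p_adh * (H - 2.0 / 3.0) ** 2 + 1.0 - 4.0 * p_adh / 3.0)

H = np.linspace(0.0, 1.0, 501)
for p in (0.5, 0.75, 0.9):
    neg = H[D_adh(H, p_adh=p) < 0.0]
    # For p_adh > 3/4 the rate is negative on an interval around H = 2/3,
    # so the mean-field PDE is ill-posed there
    print(p, (round(neg.min(), 3), round(neg.max(), 3)) if neg.size else "D >= 0 everywhere")
```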
### A.3 Coarse-graining the Pulling & Adhesion ABM
The Pulling & Adhesion ABM is composed of Rules A to F from Figure 1 and
Sections 2.2.1-2.2.3. We begin coarse-graining this ABM into a PDE model by
writing the master system of equations governing how both $P_{i,j}(t)$ and
$H_{i,j}(t)$ change according to these rules:
$\displaystyle\dfrac{\partial P_{i,j}(t)}{\partial t}$ $\displaystyle=K^{A}+K^{B.1}+K^{B.2}+K^{E.1}_{P}+K^{E.2}$ (22)
$\displaystyle\dfrac{\partial H_{i,j}(t)}{\partial t}$ $\displaystyle=K^{C}+K^{D}+K^{E.1}_{H}+K^{F},$ (23)
where $K^{E.1}_{P}$ denotes how $P_{i,j}(t)$ is affected by Rule E.1 and $K^{E.1}_{H}$ denotes how $H_{i,j}(t)$ is affected by Rule E.1. All other rules affect either $P_{i,j}(t)$ or $H_{i,j}(t)$, but not both.
Rules A-D are described in Sections A.1 and A.2, and we do not restate them
here.
Rule E specifies how a pulling agent migrates into an empty neighboring
lattice site when a neighboring adhesive agent is present in the lattice site
opposite the direction of migration. In Rule E.1, the pulling agent
successfully pulls the adhesive agent as it migrates, which occurs with
probability $p_{pull}$. In this scenario, the pulling agent shifts into the
previously-empty lattice site and the adhesive agent moves into the site
previously occupied by the pulling agent. We write this rule in the master
equation for $P_{i,j}(t)$ as:
$\displaystyle K^{E.1}_{P}=-\dfrac{p_{pull}r_{m}^{pull}}{4}\bigg{[}$
$\displaystyle
H_{i,j-1}(t)P_{i,j}(t)0_{i,j+1}(t)+0_{i,j-1}(t)P_{i,j}(t)H_{i,j+1}(t)+$
$\displaystyle
H_{i-1,j}(t)P_{i,j}(t)0_{i+1,j}(t)+0_{i-1,j}(t)P_{i,j}(t)H_{i+1,j}(t)\bigg{]}$
$\displaystyle+\dfrac{p_{pull}r_{m}^{pull}}{4}\bigg{[}$ $\displaystyle
H_{i,j-2}(t)P_{i,j-1}(t)0_{i,j}(t)+0_{i,j}(t)P_{i,j+1}(t)H_{i,j+2}(t)+$
$\displaystyle
H_{i-2,j}(t)P_{i-1,j}(t)0_{i,j}(t)+0_{i,j}(t)P_{i+1,j}(t)H_{i+2,j}(t)\bigg{]},$
(24)
and in the master equation for $H_{i,j}(t)$ as:
$\displaystyle K^{E.1}_{H}=-\dfrac{p_{pull}r_{m}^{pull}}{4}\bigg{[}$
$\displaystyle
0_{i,j-2}(t)P_{i,j-1}(t)H_{i,j}(t)+H_{i,j}(t)P_{i,j+1}(t)0_{i,j+2}(t)+$
$\displaystyle
0_{i-2,j}(t)P_{i-1,j}(t)H_{i,j}(t)+H_{i,j}(t)P_{i+1,j}(t)0_{i+2,j}(t)\bigg{]}$
$\displaystyle+\dfrac{p_{pull}r_{m}^{pull}}{4}\bigg{[}$ $\displaystyle
H_{i,j-1}(t)P_{i,j}(t)0_{i,j+1}(t)+0_{i,j-1}(t)P_{i,j}(t)H_{i,j+1}(t)+$
$\displaystyle
H_{i-1,j}(t)P_{i,j}(t)0_{i+1,j}(t)+0_{i-1,j}(t)P_{i,j}(t)H_{i+1,j}(t)\bigg{]}.$
(25)
The neighboring adhesive agent successfully adheres to the migrating pulling
agent and aborts its migration event with probability $p_{adh}$. Neither $P_{i,j}(t)$ nor $H_{i,j}(t)$ changes in this scenario, as no agents change
their locations in response to the adhesion event. In Rule E.2, the adhesive
agent fails to adhere to the pulling agent and the pulling agent fails to pull
the adhesive agent, which occurs with probability $1-p_{adh}-p_{pull}$. In
this scenario, the pulling agent shifts into the previously-empty lattice site
while the neighboring adhesive agent remains in its previous lattice site. We
write this rule in the master equation as:
$\displaystyle K^{E.2}=-\dfrac{(1-p_{adh}-p_{pull})r_{m}^{pull}}{4}\bigg{[}$
$\displaystyle
H_{i,j-1}(t)P_{i,j}(t)0_{i,j+1}(t)+0_{i,j-1}(t)P_{i,j}(t)H_{i,j+1}(t)+$
$\displaystyle
H_{i-1,j}(t)P_{i,j}(t)0_{i+1,j}(t)+0_{i-1,j}(t)P_{i,j}(t)H_{i+1,j}(t)\bigg{]}$
$\displaystyle+\dfrac{(1-p_{adh}-p_{pull})r_{m}^{pull}}{4}\bigg{[}$
$\displaystyle
H_{i,j-2}(t)P_{i,j-1}(t)0_{i,j}(t)+0_{i,j}(t)P_{i,j+1}(t)H_{i,j+2}(t)+$
$\displaystyle
H_{i-2,j}(t)P_{i-1,j}(t)0_{i,j}(t)+0_{i,j}(t)P_{i+1,j}(t)H_{i+2,j}(t)\bigg{]}.$
(26)
Rule F specifies how adhesive agents migrate into an empty neighboring lattice
site when a neighboring pulling agent is in the lattice site opposite the
direction of migration. The two agents do not interact with each other in this
scenario. As such, the adhesive agent migrates into the empty lattice site
with rate $r_{m}^{adh}/4$. We write this rule in the master equation as:
$\displaystyle K^{F}=-\dfrac{r_{m}^{adh}}{4}\bigg{[}$
$\displaystyle
P_{i,j-1}(t)H_{i,j}(t)0_{i,j+1}(t)+0_{i,j-1}(t)H_{i,j}(t)P_{i,j+1}(t)+$
$\displaystyle
P_{i-1,j}(t)H_{i,j}(t)0_{i+1,j}(t)+0_{i-1,j}(t)H_{i,j}(t)P_{i+1,j}(t)\bigg{]}$
$\displaystyle+\dfrac{r_{m}^{adh}}{4}\bigg{[}$ $\displaystyle
P_{i,j-2}(t)H_{i,j-1}(t)0_{i,j}(t)+0_{i,j}(t)H_{i,j+1}(t)P_{i,j+2}(t)+$
$\displaystyle
P_{i-2,j}(t)H_{i-1,j}(t)0_{i,j}(t)+0_{i,j}(t)H_{i+1,j}(t)P_{i+2,j}(t)\bigg{]}.$
(27)
To obtain the resulting system of differential equations for the Pulling &
Adhesion ABM, we substitute Equations (12), (13), (14), (18), (19), (24), (25), (26), and (27) into Equations (22) and (23) and set $0_{i,j}=1-T_{i,j}$, where
$T_{i,j}=P_{i,j}+H_{i,j}$. We replace each term with its Taylor expansion up
to second order from Equations (15) and (20). As shown in the Mathematica
notebook Pulling-Adhesion_coarse_graining.nb, taking the limit of the
resulting expression as $\Delta\rightarrow 0$ leads to the mean-field system
of PDEs for the Pulling & Adhesion ABM:
$\displaystyle\dfrac{\partial P}{\partial t}=$
$\displaystyle\dfrac{r_{m}^{pull}}{4}\nabla\cdot\bigg{(}(1-T)\nabla P+P\nabla
T\bigg{)}$
$\displaystyle+p_{adh}\dfrac{r_{m}^{pull}}{4}\nabla\cdot\bigg{(}-3P(1-T)\nabla
H-H(1-T)\nabla P-HP\nabla T\bigg{)}$
$\displaystyle+p_{pull}\dfrac{r_{m}^{pull}}{4}\nabla\cdot\bigg{(}3P^{2}\nabla
T\bigg{)}$ $\displaystyle\dfrac{\partial H}{\partial t}=$
$\displaystyle\dfrac{r_{m}^{adh}}{4}\nabla\cdot\bigg{(}(1-T)\nabla H+H\nabla
T\bigg{)}$
$\displaystyle+p_{adh}\dfrac{r_{m}^{adh}}{4}\nabla\cdot\bigg{(}-4(1-T)H\nabla
H-H^{2}\nabla T\bigg{)}$
$\displaystyle+p_{pull}\dfrac{r_{m}^{pull}}{4}\nabla\cdot\bigg{(}-(1-T)H\nabla
P+(1-T)P\nabla H+3HP\nabla T\bigg{)},$ (28)
where $P=P_{i,j}(t),H=H_{i,j}(t),\text{ and }T=T_{i,j}(t)$.
## Appendix B Numerical integration of PDEs
When simulating Equation (10), we populate the middle 20% of the spatial
dimension with 75% confluence and zero confluence everywhere else to match the
initial ABM configurations and implement no-flux boundary conditions:
$\displaystyle T(x,0)$ $\displaystyle=\begin{cases}0.75,&80\leq x\leq 120\\ 0,&\text{otherwise},\end{cases}$ $\displaystyle\dfrac{\partial T}{\partial x}(0,t)$ $\displaystyle=\dfrac{\partial T}{\partial x}(X,t)=0.$ (29)
Before integration, we discretize the spatial domain as $x_{i}=i\Delta x$ with
$i=0,...,199$ and $\Delta x=1.0$. For ease of notation, let
$T_{i}(t)=T(x_{i},t)$ and $\mathcal{D}_{i}(t)=\mathcal{D}(T_{i}(t))$. We then
use the method of lines approach to integrate Equation (10). To discretize the
right hand side of Equation (10), we let
$\dfrac{\partial}{\partial x}\left(\mathcal{D}_{i}(t)\dfrac{\partial T_{i}(t)}{\partial x}\right)\approx\dfrac{P_{i+\nicefrac{{1}}{{2}}}(t)-P_{i-\nicefrac{{1}}{{2}}}(t)}{\Delta x},$
where $P_{i\pm\nicefrac{{1}}{{2}}}(t)$ denotes the right or left flux through
location $x_{i}$, respectively. Following [50], we approximate these fluxes by
$\displaystyle P_{i+\nicefrac{{1}}{{2}}}(t)$
$\displaystyle=\dfrac{1}{2}\left(\mathcal{D}_{i}(t)\dfrac{T_{i+1}(t)-T_{i}(t)}{\Delta
x}+\mathcal{D}_{i+1}(t)\dfrac{T_{i+1}(t)-T_{i}(t)}{\Delta x}\right)$
$\displaystyle P_{i-\nicefrac{{1}}{{2}}}(t)$
$\displaystyle=\dfrac{1}{2}\left(\mathcal{D}_{i-1}(t)\dfrac{T_{i}(t)-T_{i-1}(t)}{\Delta
x}+\mathcal{D}_{i}(t)\dfrac{T_{i}(t)-T_{i-1}(t)}{\Delta x}\right).$ (30)
To implement the no-flux boundary conditions, we incorporate the ghost points $x_{-1}$ and $x_{200}$ that enforce $T_{-1}(t)=T_{1}(t)$ and $T_{200}(t)=T_{198}(t)$ into Equation (30). We integrate Equation (10) using the odeint command from SciPy’s integration package (version 1.8.0), which implements the LSODA (Livermore Solver for Ordinary Differential Equations with Automatic method switching) method [51].
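A minimal method-of-lines sketch of this scheme is below; it is our own condensation of Equations (29)-(30), with the mean-field Pulling rate of Equation (16) standing in for a BINN-inferred $\mathcal{D}(T)$, and the final time chosen arbitrarily.

```python
import numpy as np
from scipy.integrate import odeint

dx, N = 1.0, 200
x = np.arange(N) * dx

def D(T, r_pull=1.0, p_pull=0.5):
    # Stand-in diffusion rate; a trained BINN's inferred rate would be used instead
    return (r_pull / 4.0) * (1.0 + 3.0 * p_pull * T ** 2)

def rhs(T, t):
    # Ghost points enforce the no-flux conditions T_{-1} = T_1, T_{200} = T_{198}
    Tg = np.concatenate(([T[1]], T, [T[-2]]))
    Dg = D(Tg)
    # Average-diffusivity fluxes at the half-points, Eq. (30)
    flux = 0.5 * (Dg[:-1] + Dg[1:]) * np.diff(Tg) / dx
    return (flux[1:] - flux[:-1]) / dx

T0 = np.where((x >= 80) & (x <= 120), 0.75, 0.0)      # initial condition, Eq. (29)
sol = odeint(rhs, T0, np.linspace(0.0, 1000.0, 101))  # LSODA via odeint
```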
## Appendix C Supplementary figures
Figure 12: Forecasting Pulling ABM data with mean-field (MF) and BINN-guided PDE models. The mean-field and BINN-guided PDE simulations are used to forecast Pulling ABM data for (a-c) $r_{m}^{pull}=1.0,p_{pull}=0.8$ and (d-f) $r_{m}^{pull}=0.9,p_{pull}=0.5$.
Figure 13: Forecasting Adhesion ABM data with mean-field and BINN-guided PDE models. The mean-field and BINN-guided PDE simulations are used to forecast Adhesion ABM data for (a-c) $r_{m}^{adh}=1.0,p_{adh}=0.7$ and (d-f) $r_{m}^{adh}=0.1,p_{adh}=0.5$.
Figure 14: Forecasting Pulling & Adhesion ABM data with mean-field (MF) and BINN-guided PDE models. The mean-field and BINN-guided PDE simulations are used to forecast Pulling & Adhesion ABM data for the base parameter values ($r_{m}^{pull}=1.0,r_{m}^{adh}=0.25,p_{pull}=0.33,p_{adh}=0.33$, and $\alpha=0.5$), except (a-c) $p_{adh}=0.4$ and (d-f) $r_{m}^{adh}=0.1.$
Figure 15: Predicting Adhesion ABM data with the interpolated PDE model. The interpolated PDE model predicts Adhesion ABM data for (a-c) $r_{m}^{adh}=1.0$ and $p_{adh}=0.95$.
Sample | $\bm{p}=(r_{m}^{adh},\ p_{adh})^{T}$
---|---
1 | $(0.145,\ 0.825)^{T}$
2 | $(0.505,\ 0.575)^{T}$
3 | $(0.415,\ 0.725)^{T}$
4 | $(0.865,\ 0.525)^{T}$
5 | $(0.955,\ 0.625)^{T}$
6 | $(0.235,\ 0.775)^{T}$
7 | $(0.685,\ 0.675)^{T}$
8 | $(0.325,\ 0.875)^{T}$
9 | $(0.775,\ 0.925)^{T}$
10 | $(0.595,\ 0.975)^{T}$
Table 4: Latin hypercube sampling for the Adhesion ABM. The samples from the new parameter dataset for the Adhesion ABM when varying $r_{m}^{adh}$ and $p_{adh}$. The samples are ordered by increasing testing MSE values (see Figure 10(c)).
Figure 16: Predicting Adhesion ABM data with the interpolated PDE model. The interpolated PDE model predicts Adhesion ABM data for (a-c) $r_{m}^{adh}=0.595$ and $p_{adh}=0.975$ and (d-f) $r_{m}^{adh}=0.325$ and $p_{adh}=0.875$.
Sample | $\bm{p}=(r_{m}^{pull},\ r_{m}^{adh},\ p_{pull},\ p_{adh},\ \alpha)^{T}$
---|---
1 | $(1.0,\ 0.25,\ 0.394,\ 0.578,\ 0.912)^{T}$
2 | $(1.0,\ 0.25,\ 0.293,\ 0.528,\ 0.938)^{T}$
3 | $(1.0,\ 0.25,\ 0.008,\ 0.226,\ 0.988)^{T}$
4 | $(1.0,\ 0.25,\ 0.511,\ 0.477,\ 0.862)^{T}$
5 | $(1.0,\ 0.25,\ 0.41,\ 0.109,\ 0.962)^{T}$
6 | $(1.0,\ 0.25,\ 0.075,\ 0.595,\ 0.888)^{T}$
7 | $(1.0,\ 0.25,\ 0.042,\ 0.544,\ 0.838)^{T}$
8 | $(1.0,\ 0.25,\ 0.327,\ 0.059,\ 0.712)^{T}$
9 | $(1.0,\ 0.25,\ 0.444,\ 0.31,\ 0.662)^{T}$
10 | $(1.0,\ 0.25,\ 0.209,\ 0.209,\ 0.612)^{T}$
11 | $(1.0,\ 0.25,\ 0.126,\ 0.41,\ 0.762)^{T}$
12 | $(1.0,\ 0.25,\ 0.193,\ 0.042,\ 0.588)^{T}$
13 | $(1.0,\ 0.25,\ 0.059,\ 0.561,\ 0.462)^{T}$
14 | $(1.0,\ 0.25,\ 0.243,\ 0.26,\ 0.788)^{T}$
15 | $(1.0,\ 0.25,\ 0.427,\ 0.494,\ 0.512)^{T}$
16 | $(1.0,\ 0.25,\ 0.595,\ 0.327,\ 0.812)^{T}$
17 | $(1.0,\ 0.25,\ 0.025,\ 0.461,\ 0.388)^{T}$
18 | $(1.0,\ 0.25,\ 0.377,\ 0.176,\ 0.488)^{T}$
19 | $(1.0,\ 0.25,\ 0.226,\ 0.645,\ 0.538)^{T}$
20 | $(1.0,\ 0.25,\ 0.528,\ 0.126,\ 0.688)^{T}$
21 | $(1.0,\ 0.25,\ 0.561,\ 0.075,\ 0.562)^{T}$
22 | $(1.0,\ 0.25,\ 0.142,\ 0.193,\ 0.362)^{T}$
23 | $(1.0,\ 0.25,\ 0.31,\ 0.092,\ 0.738)^{T}$
24 | $(1.0,\ 0.25,\ 0.176,\ 0.662,\ 0.412)^{T}$
25 | $(1.0,\ 0.25,\ 0.645,\ 0.008,\ 0.638)^{T}$
26 | $(1.0,\ 0.25,\ 0.343,\ 0.293,\ 0.312)^{T}$
27 | $(1.0,\ 0.25,\ 0.092,\ 0.611,\ 0.238)^{T}$
28 | $(1.0,\ 0.25,\ 0.109,\ 0.628,\ 0.012)^{T}$
29 | $(1.0,\ 0.25,\ 0.159,\ 0.343,\ 0.212)^{T}$
30 | $(1.0,\ 0.25,\ 0.26,\ 0.142,\ 0.188)^{T}$
31 | $(1.0,\ 0.25,\ 0.36,\ 0.377,\ 0.262)^{T}$
32 | $(1.0,\ 0.25,\ 0.276,\ 0.36,\ 0.038)^{T}$
33 | $(1.0,\ 0.25,\ 0.578,\ 0.243,\ 0.288)^{T}$
34 | $(1.0,\ 0.25,\ 0.628,\ 0.159,\ 0.062)^{T}$
35 | $(1.0,\ 0.25,\ 0.477,\ 0.511,\ 0.138)^{T}$
36 | $(1.0,\ 0.25,\ 0.611,\ 0.276,\ 0.338)^{T}$
37 | $(1.0,\ 0.25,\ 0.461,\ 0.444,\ 0.162)^{T}$
38 | $(1.0,\ 0.25,\ 0.544,\ 0.427,\ 0.112)^{T}$
39 | $(1.0,\ 0.25,\ 0.494,\ 0.394,\ 0.088)^{T}$
40 | $(1.0,\ 0.25,\ 0.662,\ 0.025,\ 0.438)^{T}$
Table 5: Latin hypercube sampling for the Pulling & Adhesion ABM. The samples from the prior parameter dataset for the Pulling & Adhesion ABM when varying $p_{pull}$, $p_{adh}$, and $\alpha$. The samples are ordered by increasing testing MSE values.
Sample | $\bm{p}=(r_{m}^{pull},\ r_{m}^{adh},\ p_{pull},\ p_{adh},\ \alpha)^{T}$
---|---
1 | $(1.0,\ 0.25,\ 0.285,\ 0.519,\ 0.775)^{T}$
2 | $(1.0,\ 0.25,\ 0.419,\ 0.352,\ 0.875)^{T}$
3 | $(1.0,\ 0.25,\ 0.486,\ 0.117,\ 0.525)^{T}$
4 | $(1.0,\ 0.25,\ 0.553,\ 0.285,\ 0.375)^{T}$
5 | $(1.0,\ 0.25,\ 0.385,\ 0.586,\ 0.475)^{T}$
6 | $(1.0,\ 0.25,\ 0.586,\ 0.184,\ 0.175)^{T}$
7 | $(1.0,\ 0.25,\ 0.62,\ 0.151,\ 0.325)^{T}$
8 | $(1.0,\ 0.25,\ 0.184,\ 0.084,\ 0.625)^{T}$
9 | $(1.0,\ 0.25,\ 0.352,\ 0.385,\ 0.925)^{T}$
10 | $(1.0,\ 0.25,\ 0.653,\ 0.05,\ 0.275)^{T}$
11 | $(1.0,\ 0.25,\ 0.151,\ 0.653,\ 0.075)^{T}$
12 | $(1.0,\ 0.25,\ 0.452,\ 0.251,\ 0.125)^{T}$
13 | $(1.0,\ 0.25,\ 0.084,\ 0.218,\ 0.225)^{T}$
14 | $(1.0,\ 0.25,\ 0.318,\ 0.62,\ 0.725)^{T}$
15 | $(1.0,\ 0.25,\ 0.519,\ 0.017,\ 0.825)^{T}$
16 | $(1.0,\ 0.25,\ 0.117,\ 0.419,\ 0.425)^{T}$
17 | $(1.0,\ 0.25,\ 0.251,\ 0.486,\ 0.975)^{T}$
18 | $(1.0,\ 0.25,\ 0.017,\ 0.452,\ 0.025)^{T}$
19 | $(1.0,\ 0.25,\ 0.05,\ 0.318,\ 0.575)^{T}$
20 | $(1.0,\ 0.25,\ 0.218,\ 0.553,\ 0.675)^{T}$
Table 6: Latin hypercube sampling for the Pulling & Adhesion ABM. The samples from the new parameter dataset for the Pulling & Adhesion ABM when varying $p_{pull}$, $p_{adh}$, and $\alpha$. The samples are ordered by increasing testing MSE values (see Figure 11(b)).
Figure 17: Predicting Pulling & Adhesion ABM data with the interpolated PDE model. The interpolated PDE model predicts Pulling & Adhesion ABM data for $r_{m}^{pull}=1.0$, $r_{m}^{adh}=0.25$, and (a-c) $p_{pull}=0.218$, $p_{adh}=0.553$, and $\alpha=0.675$ and (d-f) $p_{pull}=0.251$, $p_{adh}=0.486$, and $\alpha=0.975$.
Figure 18: Computational expenses of each modeling approach. Violin plots represent the distribution of wall time computations for ABM simulations, BINN training, mean-field PDE simulations, and BINN-guided PDE simulations for the (a) Pulling ABM, (b) Adhesion ABM, and (c) Pulling & Adhesion ABM.
## Appendix D Gillespie algorithm
Create an $X\times Y$ lattice with user-specified placement of agents
Set $t=0$
Set maximum simulation time $t_{\text{end}}$
Set $P(t)$ and $H(t)$ equal to the number of Pulling and Adhesive agents on
the lattice, respectively
while _$t <t_{\text{end}}$_ do
Calculate the following random variables, uniformly distributed on
$[0,1]:\gamma_{1},\gamma_{2}$
Calculate the propensity function $a(t)=r_{m}^{pull}P(t)+r_{m}^{adh}H(t)$
Calculate time step $\tau=-\ln(\gamma_{1})/a(t)$
$t=t+\tau$
$R=a(t)\gamma_{2}$
if _$R < r_{m}^{pull}P(t)$_ then
Perform Pulling agent migration (Algorithm 2)
else if _$R < r_{m}^{pull}P(t)+r_{m}^{adh}H(t)$_ then
Perform Adhesive agent migration (Algorithm 3)
end while
Algorithm 1 Gillespie algorithm for the Pulling & Adhesion ABM
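For readers who prefer code to pseudocode, a compact Python rendering of this event loop is sketched below; the helpers pulling_move and adhesive_move are hypothetical stubs that would implement Algorithms 2 and 3, and the lattice encoding (0 empty, 1 pulling, 2 adhesive) is our own convention.

```python
import numpy as np

def gillespie(lattice, r_pull, r_adh, t_end, pulling_move, adhesive_move,
              rng=np.random.default_rng(0)):
    t = 0.0
    while t < t_end:
        P = np.count_nonzero(lattice == 1)  # number of pulling agents
        H = np.count_nonzero(lattice == 2)  # number of adhesive agents
        a = r_pull * P + r_adh * H          # total propensity a(t)
        if a == 0.0:
            break
        t += -np.log(rng.uniform()) / a     # exponential waiting time tau
        R = a * rng.uniform()
        if R < r_pull * P:
            pulling_move(lattice, rng)      # Algorithm 2
        else:
            adhesive_move(lattice, rng)     # Algorithm 3
    return lattice
```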
Randomly choose a pulling agent and determine its lattice site index,
$\vec{x}=(i,j)^{T}$
Choose one of the four cardinal migration directions,
$\vec{dx}=(dx,dy)^{T}\in\{(1,0)^{T},(-1,0)^{T},(0,1)^{T},(0,-1)^{T}\}$, with
equal probability, $1/4$. The neighboring direction is given by
$\hat{dx}=-\vec{dx}$
if _$\vec{x}+\vec{dx}$ is empty_ then
if _$\vec{x}+\hat{dx}$ is empty_ then
/* Rule A */
Move the chosen pulling agent to lattice site $\vec{x}+\vec{dx}$
else if _$\vec{x}+\hat{dx}$ is occupied by a Pulling agent_ then
/* Rule B */
Calculate the random variable, $\gamma_{3}$, uniformly distributed on $[0,1]$
if _$\gamma_{3}\leq p_{pull}$_ then
Move the chosen pulling agent to lattice site $\vec{x}+\vec{dx}$
Move the neighboring agent to lattice site $\vec{x}$
else if _$\gamma_{3} >p_{pull}$_ then
Move the chosen pulling agent to lattice site $\vec{x}+\vec{dx}$
else if _$\vec{x}+\hat{dx}$ is occupied by an Adhesive agent_ then
/* Rule E */
Calculate the random variable, $\gamma_{3}$, uniformly distributed on $[0,1]$
if _$\gamma_{3}\leq p_{pull}$_ then
Move the chosen pulling agent to lattice site $\vec{x}+\vec{dx}$
Move the neighboring agent to lattice site $\vec{x}$
else if _$\gamma_{3}\leq 1-p_{adh}$_ then
Move the chosen pulling agent to lattice site $\vec{x}+\vec{dx}$
Algorithm 2 Pulling Agent migration
Randomly choose an adhesive agent and determine its lattice site index,
$\vec{x}=(i,j)^{T}$
Choose one of the four cardinal migration directions,
$\vec{dx}=(dx,dy)^{T}\in\{(1,0)^{T},(-1,0)^{T},(0,1)^{T},(0,-1)^{T}\}$, with
equal probability, $1/4$. The neighboring direction is given by
$\hat{dx}=-\vec{dx}$
if _$\vec{x}+\vec{dx}$ is empty_ then
if _$\vec{x}+\hat{dx}$ is empty_ then
/* Rule C */
Move the chosen adhesive agent to lattice site $\vec{x}+\vec{dx}$
else if _$\vec{x}+\hat{dx}$ is occupied by an adhesive agent_ then
/* Rule D */
Calculate the random variable, $\gamma_{3}$, uniformly distributed on $[0,1]$
if _$\gamma_{3}\leq(1-p_{adh})$ _ then
Move the chosen adhesive agent to lattice site $\vec{x}+\vec{dx}$
else if _$\vec{x}+\hat{dx}$ is occupied by a Pulling agent_ then
/* Rule F */
Move the chosen adhesive agent to lattice site $\vec{x}+\vec{dx}$
Algorithm 3 Adhesive agent migration
# Multi-Scale Theory of Elasticity for Geomaterials
Christopher M. Szalwinski
Lassonde School of Engineering, York University, Toronto ON M3J 1P3 Canada
###### Abstract
The modern theory of elasticity and the first law of thermodynamics are
cornerstones of engineering science that share the concept of reversibility.
Engineering researchers have known for four decades that the modern theory
violates the first law of thermodynamics when applied to the more commonly
accepted empirical models of geomaterial stiffness. This paper develops a
cross-scale theory of elasticity that is compatible with the empirical models
and the first law of thermodynamics. This theory includes a material sample’s
total-volume to solid-volume ratio as an independent internal variable,
distinguishes deformation into uniform and contraction-swelling components,
introduces a uniformity surface that partitions stress space into contraction
and swelling sub-domains, couples the macroscopic properties to the volume
ratio and extrapolates the accepted empirical models to states that include
shear stress. This paper broadens the scope of the theory of elasticity to
include soft condensed matter.
constitutive relations; soft condensed matter; energy conservation;
contraction-swelling; critical state soil mechanics
## I Introduction
The states of repair of our countries’ infrastructures reflect our theoretical
understanding of earth materials. These materials include sands, gravel,
gypsum, clay, shale, rock, and composites like glass, concrete, plaster,
bricks, and asphalt [1]. They are economically essential to the global
construction industry and react to external forces in complex ways.
To meet design serviceability requirements for landfills, land surfaces, and land, sea, and sub-surface structures, geotechnical engineers predict deformations well away from failure conditions. In their analyses, they rely
on the modern theory of elasticity and expect closed loading cycles to
conserve energy. However, for nearly half a century, researchers have claimed
that implementing the more commonly accepted empirical models of soil
deformation can violate the first law of thermodynamics [2-28]. This apparent
violation of a well-established law based on experience is one example of a
problem that involves crossing length scales.
Multi-scale investigations have been growing rapidly in materials science. The
NSF Report on Simulation-Based Engineering Science [29] describes the
transformation to multi-scale modeling and simulation as a powerful paradigm
shift in engineering science, with disparities in cross-scale descriptions
appearing in virtually all areas of science and engineering. The report refers
to the ensemble of disparities as the tyranny of scales. These disparities
focus attention on exploiting mesoscopic data to bridge the gaps between the
top-down and bottom-up models of systems with neither strategy alone sufficing
to yield the observable higher scale properties [30].
The history of materials science has shown us that a theory of elasticity that
is based on a mesoscopic model alone is at best tentative. After a century-
long contest, the multi-constancy tradition prevailed over the rari-constancy
tradition [31]. The multi-constancy tradition is top-down, assumes that the
superposition of pressures is due to a variety of displacements and defines
pressures as linear functions of those displacements [32]. The rari-constancy
tradition is bottom-up and models a body as composed of molecules with actions
between them being in the line that joins the molecules [32]. The modern
theory of elasticity is entirely within the former tradition.
The modern theory of elasticity requires two coefficients to describe a
material that lacks directional preference (an isotropic material): the bulk
modulus and the shear modulus. The bulk modulus specifies its stiffness in
volumetric deformation; the shear modulus specifies its stiffness in
distortion. The more commonly accepted empirical expressions for the bulk
modulus of a soil sample are linear functions of effective pressure and
specific volume [5,33-35] or linear functions of effective pressure alone
[36,37]. Effective pressure is Cauchy pressure less interstitial fluid
pressure. Specific volume is the ratio of a sample’s volume to that of its
solid constituents. The more commonly accepted empirical expressions for the
shear modulus of a soil sample at small strains include a proper fractional
power function of effective pressure and an improper fractional power function
of specific volume [38-40].
Zytynski et al.[2] demonstrated that these empirical models, which described
bulk modulus as a linear function of effective pressure, are non-conservative.
Although a classical conservative solution hosting bulk and shear moduli with
identical exponents for their power functions of pressure has been developed
[41,42], no energetically conservative solution is available that supports the
different exponents for these two moduli evident in the empirical models.
Soils, powders, bulk solids, and other aggregates exhibit distinct material
properties at macroscopic and mesoscopic scales. Their macroscopic stiffnesses
and natural free states vary with packing. The modern theory of elasticity
assumes uniform strain across all length scales and retains memory of a unique
natural free state [43]. Each assumption is overly restrictive for these
geomaterials.
This research paper develops a multi-scale theory of elasticity that includes
specific volume as an independent internal state variable. The theory admits a
continuum of natural free states, defines separate constitutive relations at
macroscopic and mesoscopic scales, conserves energy across closed loading
cycles and supports different exponents in the power functions of pressure for
the bulk and shear moduli.
The body of this paper consists of 5 sections. Section 2 describes the
mesoscopic model. It decomposes a representative element’s deformation into
uniform and differential parts. The differential part models deformation that
involves a change in packing. Section 3 partitions stress space into
contraction and swelling sub-domains and defines a contraction-swelling
modulus that specifies the element’s stiffness to a change in packing. Section
4 presents the internal energy potentials and the formal expressions for the
macroscopic elasticity and compliance tensors and establishes their major
symmetry. Section 5 derives two solutions for isotropic materials, one that
highlights the theory’s conceptual features and a more refined solution that
is compatible with the empirical models accepted by geotechnical engineers.
Section 6 reviews the published support in data for fine Ottawa sand, tire-
derived aggregates, and select porous solids. This section concludes by
comparing the theory to Critical State Soil Mechanics [5,33,35], proposing
refinements to the latter and highlighting a fundamental difference at its
limit.
## II Mesoscopic Model
Consider a geomaterial sample that consists of a large number of solid
particles. The particles are in contact with one another and form the skeleton
that defines the sample’s boundary. The particles remain within the boundary but are open to rearrangement; that is, the skeleton that defines the sample’s
boundary can change. The sample’s pore content flows between the particles and
can cross the sample’s boundary.
The continuum element that represents this prototypical sample consists of a
solid phase and an interstitial phase. The solid phase models the skeleton in
its current state; that is, the particles in their current arrangement. The
interstitial phase models the pore content; that is, the fluid flowing through
the solid phase and seeping across the element’s boundary. The element’s
specific volume is the ratio of the sample’s volume, $V$, to that of its solid
particles, $V_{s}$:
$\nu\equiv V/V_{s}$ (1)
The element’s porosity is the ratio of the sample’s pore volume to its total volume:
$\eta\equiv(V-V_{s})/V=1-{\nu}^{-1}$ (2)
Specific volume and porosity are equivalent continuum measures of packing; both are used in soil mechanics. Porosity is more common in the mechanics of
porous solids. Specific volume and porosity are measurable but not directly
controllable.
As the element’s specific volume changes, so does each of its phases. The
solid phase for one specific volume is distinct from the solid phase for any
other specific volume. A change in specific volume involves a shift in the
element’s solid phase from that for the initial specific volume to that for
the updated specific volume. These shifts represent rearrangements of the
particles within the sample. The element models a sample with enough particles for all changes in its specific volume to appear continuous.
### II.1 Components of Volumetric Deformation
The inclusion of specific volume, or porosity, as an independent variable
enables a distinction between changes in average particle proximity and
average particle radii; that is, an identification of two different aspects of
sample deformation: intra-particle deformation and inter-particle deformation,
with the latter measured relative to the former. Figure 1 illustrates this
degree of freedom. Note that the relative change in particle radii differs
from the relative change in their proximity.
Figure 1: Centric Deformation
The representative element’s volumetric deformation consists of a uniform
component and a differential component. The uniform component models
deformation at constant specific volume. Uniform deformation is identical for
solid and interstitial phases. The differential component models the
additional deformation of the interstitial phase. This part augments the
uniform component of the interstitial phase and is directly related to the
change in specific volume. Figure 2 illustrates these two components.
Figure 2: Straining of Solid and Interstitial Phases
Differential deformation is the centric part of the element’s deformation (cf.
the classical theory’s rari-constancy model). It describes either contraction
or swelling. Contraction represents a decrease in the centroidal distances
between adjacent particles. Swelling represents an increase in the distances
between adjacent particles. Contraction and swelling are each wholly distinct
from uniform deformation.
### II.2 Packing Pressure
To account for changes in energy associated with changes in packing, let us
introduce a mesoscopic pressure within the element. This internal pressure is
independent of the macroscopic pressure applied to the element’s boundary.
Consider a sphere centered at the element’s mass center as shown in Figure 3. Its surface represents the average mass centers of particles
equidistant from the sample’s mass center. As the element’s specific volume
changes the radius of the sphere changes; that is, the distances of the
particles from the mass center change. Let us define the packing pressure
within the element as the mesoscopic pressure that maintains the sphere at its
current radius and denote this pressure by $\phi$.
Figure 3: Changes in Packing Pressure
Packing pressure can change even if the pressure applied at the boundary
remains constant. Conversely, the pressure applied at the boundary can change
even if the packing pressure remains constant. Consolidation is an example of
a process that involves changes in internal pressure but does not necessarily
involve any change in externally applied pressure.
A change in packing pressure either reduces or increases the sphere’s radius.
Contractive changes reduce its radius. Swelling changes increase its radius.
The extent of the change in the sphere’s radius due to a change in packing
pressure depends on the material’s properties. Under the principle of local
state [44], a change in packing pressure is related to the local change in
specific volume but not to its gradient. The packing pressure at which
specific volume remains unchanged is the element’s current equilibrium packing
pressure. Assuming that an equilibrium packing pressure exists for each
specific volume, let us define a contraction-swelling curve in $\phi-\nu$
space that relates equilibrium packing pressures to specific volumes
throughout the practical range of specific volumes:
$\beta\equiv\beta\left(\phi,\nu\right)=0$ (3)
At lower pressures, the element’s specific volume is highly sensitive to small
changes in packing pressure; that is, the packing of the sample’s particles
can change significantly. On the other hand, at higher pressures, the
element’s specific volume is relatively insensitive to large changes in
packing pressure. These two limiting conditions determine the general form of
the contraction-swelling curve. This curve is illustrated in Figure 4.
${\phi}_{r}$ is an arbitrarily selected reference packing pressure for the
element and ${\nu}_{r}$ is the specific volume corresponding to that pressure.
Figure 4: Contraction-Swelling Constitutive Relation
The subsets of contraction and swelling pressures for the element depend on
its current specific volume. The current specific volume or its equilibrium
packing pressure partitions the contraction-swelling curve into contraction
and swelling segments. The current contraction pressures are the packing
pressures that satisfy $\beta>0$. The current swelling pressures are those that satisfy $\beta<0$.
### II.3 Packing Energy
Any work done by the packing pressure during a change in specific volume
changes the element’s packing energy. This work excludes all work involving
uniform deformation; that is, all work done by externally applied stress in
uniform deformation.
Considering the interstitial phase of the element as a sphere of radius $r$,
the work done during a change in its radius is the work done by the packing
pressure at the sphere’s surface:
| $\delta W=\ -\ \phi\ 4\pi r^{2}\delta r$ | (4)
---|---|---
where $\delta$ denotes an increment. The minus sign associates positive work
with contraction. The change in the sphere’s radius is directly related to the
change in the element’s specific volume:
| $\delta\nu=\delta V/V=\ 3\delta r/r$ | $for\ \delta V_{s}=0$ (5)
---|---|---
The change in the element’s packing energy is the work done per unit volume:
| $\delta P=\delta W\ /\ V$ | (6)
---|---|---
where $P$ denotes packing energy. From Eqs. (4), (5) and (6):
| $\delta P=\ -\ \phi\ \delta\nu$ | (7)
---|---|---
The element’s packing energy follows from integration:
| $P=\int^{\nu}_{{\nu}_{r}}{\delta P}$ | (8)
---|---|---
Packing energy vanishes at the selected reference state
(${\phi}_{r},{\nu}_{r}$).
The appendix contains derivations of expressions for two packing energy
potentials based on separate contraction-swelling constitutive relations.
## III Macroscopic Model
The element’s specific volume and the externally applied stress define its
state completely. A change in the applied stress may cause a change in
specific volume and that change depends on the element’s properties.
To identify the form of the relation between a change in the element’s strain
and any change in its specific volume consider a linearly elastic material
with a compliance that varies with specific volume alone. The strain tensor,
$\boldsymbol{\epsilon}$, for such a material, is the inner product of its
compliance tensor, $\boldsymbol{C}(\nu)$, and the applied stress,
$\boldsymbol{\sigma}$:
| $\boldsymbol{\epsilon}=\boldsymbol{C}\left(\nu\right):\boldsymbol{\sigma}$ | (9)
---|---|---
where : denotes the inner tensor product (double contraction). The change in this
strain tensor depends on both the change in the stress tensor and the change
in specific volume. Differentiating Eq. (9) (cf. [45,46]) yields
| $\delta\boldsymbol{\epsilon}=\boldsymbol{C}(\nu):\delta\boldsymbol{\sigma}+\ \boldsymbol{\sigma}:(\partial\boldsymbol{C}(\nu)/\partial\nu)\delta\nu$ | (10)
---|---|---
The first term on the right-hand side is the uniform contribution to the
strain increment. This contribution models identical straining of the solid
and interstitial phases; that is, straining at constant specific volume. The
second term is the differential contribution. It models the change in specific
volume at constant stress.
Given this decomposition (Eq. (10)), consider a more complex material with a
compliance that also varies with externally applied stress. The strain
increment tensor consists of uniform and differential components:
| $\delta\boldsymbol{\epsilon}=\ \delta{\boldsymbol{\epsilon}}_{u}+\ \delta{\boldsymbol{\epsilon}}_{d}$ | (11)
---|---|---
where subscripts $u$ and $d$ denote uniform and differential respectively. The
uniform component is linearly related to the stress increment tensor:
| $\delta{\boldsymbol{\epsilon}}_{u}={\boldsymbol{C}}_{u}\left(\boldsymbol{\sigma},\nu\right):\delta\boldsymbol{\sigma}$ | (12)
---|---|---
where ${\boldsymbol{C}}_{u}$ denotes the uniform compliance tensor. This
component pervades all length scales. The differential component is given by
| $\delta{\boldsymbol{\epsilon}}_{d}=\boldsymbol{\omega}\left(\boldsymbol{\sigma},\nu\right)\ \delta\mu$ | (13)
---|---|---
where $\boldsymbol{\omega}$ denotes the normalized coupling tensor [45]. The
directions of this component depend on the current state. The scalar
multiplier, $\delta\mu$, is its magnitude. It is positive-valued in
contraction and negative-valued in swelling. Its relation to the change in
specific volume is established in sub-section III.1 below (Eq. (53)).
Figure 5: Uniformity Surface through the Current State
The magnitude of the differential component depends not only on the element’s
state but also on the applied stress increment. To distinguish between
contraction and swelling processes, consider a local surface in stress sub-
space that passes through the current stress state and partitions the
neighboring sub-space into contraction and swelling sub-domains as illustrated
in Figure 5. Let us represent this surface by
| $b\equiv b\left(\boldsymbol{\sigma},\nu\right)=0$ | (14)
---|---|---
Contraction states satisfy $b>0$, while swelling states satisfy $b<0$. Only stress increments along the surface produce purely
uniform straining. Let us call this surface the element’s uniformity surface.
If ${\boldsymbol{n}}$ denotes its normalized gradient:
| $\boldsymbol{n}\equiv\frac{\partial b}{\partial\boldsymbol{\sigma}}\ /\parallel\frac{\partial b}{\partial\boldsymbol{\sigma}}\parallel$ | (15)
---|---|---
then the stress increments that preserve the element’s specific volume are
normal to this gradient:
| ${\boldsymbol{n}}:\delta\boldsymbol{\sigma}=0$ | $for\ \delta\mu=0$ (16)
---|---|---
$\parallel\boldsymbol{x}\parallel$ denotes the positive-valued magnitude of
$\boldsymbol{x}$.
Since stress increments tangential to the surface do not cause any differential straining, only the component of the stress increment that is normal to the surface changes specific volume. That is, the signed magnitude of
the differential component of the strain increment tensor is linearly related
to the component of the stress increment tensor that is normal to the local
uniformity surface. This consistency relation may be written as
| $\boldsymbol{n}:\delta\boldsymbol{\sigma}-S\delta\mu=0$ | (17)
---|---|---
where $S$ denotes the contraction-swelling modulus. The expression for the
signed magnitude follows directly from this relation:
| $\delta\mu\ \mathrm{=}\ \boldsymbol{n}:\delta\boldsymbol{\sigma}/S$ | (18)
---|---|---
The contraction-swelling modulus specifies the element’s stiffness to
differential deformation independently of its stiffness to uniform
deformation. Aggregates, which exhibit noticeable changes in packing, have
low-valued moduli. Porous solids, which exhibit minor changes in packing, have
relatively high-valued moduli.
Not all processes in this model are equilibrium processes. Stress increments
directed along the local uniformity surface preserve specific volume and
maintain multi-scale equilibrium. Since the specific volume does not change,
the equilibrium packing pressure remains constant. Stress increments directed
off the surface initiate non-equilibrium processes. Internal adjustments in
specific volume may occur at different rates than the rates of change of
external macroscopic stresses. While processes to states along the surface are
unconstrained, processes to states off the surface progress from initially
constrained states to ultimately unconstrained states. At multi-scale
equilibrium, the end state satisfies macroscopic equilibrium and the packing
pressure is the equilibrium packing pressure for the end state’s specific
volume. That is, the end stress state lies on the local uniformity surface for
the end stress state and the end specific volume.
This approach has some precedents. Augmenting thermodynamic solutions with
internal state variables is well-established practice [47]. Internal state
variables support constraint modeling of viscoelasticity and relaxation near
equilibrium [48]. Constrained equilibrium modeling extends to the averaging of
an internal variable [49]. Partitioning stress sub-space into sub-domains of
differing responses is a feature of the mathematical theory of elasto-
plasticity, as is a consistency relation.
### III.1 Compliance and Elasticity
The macroscopic compliance tensor for the element models all processes,
regardless of the stress increment tensor directions and predicts the strain
increment based on the applied stress increment. Substituting Eqs. (12), (13)
and (18) into Eq. (11) yields
| $\delta\boldsymbol{\epsilon}\ \mathrm{=}\ \boldsymbol{C}\boldsymbol{:}\delta\boldsymbol{\sigma}$ | (19)
---|---|---
where $\boldsymbol{C}$ denotes the element’s macroscopic compliance tensor:
| $\boldsymbol{C}\equiv{\boldsymbol{\ }\boldsymbol{C}}_{\boldsymbol{u}}\boldsymbol{+}\boldsymbol{\omega}\boldsymbol{\otimes}\boldsymbol{n}/S$ | (20)
---|---|---
where $\otimes$ denotes outer tensor product. The uniform compliance tensor
predicts the element’s compliance to unconstrained change; that is, for stress
increments along the local uniformity surface. The rightmost term describes
the added strain increment as specific volume progresses from its initially
constrained value to its multi-scale equilibrium value.
The macroscopic elasticity tensor predicts the stress increment tensor
corresponding to an applied strain increment tensor. Substituting Eq. (12)
into Eq. (11), inverting the result and substituting Eq. (13) yields
| $\delta\boldsymbol{\sigma}\ \mathrm{=}\ {\boldsymbol{E}}_{u}\boldsymbol{:}\boldsymbol{\delta}\boldsymbol{\epsilon}-{\boldsymbol{E}}_{u}\boldsymbol{:}\boldsymbol{\omega}\ \delta\mu$ | (21)
---|---|---
where ${\boldsymbol{E}}_{u}$ denotes the uniform elasticity tensor:
| ${\boldsymbol{E}}_{u}\ ={\boldsymbol{C}}^{-1}_{u}$ | (22)
---|---|---
The expression for the signed magnitude of the strain increment’s differential
component follows from the consistency relation. Substituting Eq. (21) into
Eq. (17) yields
| $\delta\mu=(\boldsymbol{n}\ {\boldsymbol{:}\boldsymbol{E}}_{u}\boldsymbol{:}\boldsymbol{\delta}\epsilon)\ /\ (S+\boldsymbol{n}:{\boldsymbol{E}}_{u}\boldsymbol{:}\boldsymbol{\omega})$ | (23)
---|---|---
Substituting Eq. (23) into Eq. (21) yields
| $\delta\boldsymbol{\sigma}=\boldsymbol{E}\boldsymbol{:}\boldsymbol{\delta}\boldsymbol{\epsilon}$ | (24)
---|---|---
where $\boldsymbol{E}$ denotes the element’s macroscopic elasticity tensor:
| $\boldsymbol{E}\equiv\boldsymbol{\ }\boldsymbol{E}_{u}-({\boldsymbol{E}}_{u}\boldsymbol{:}\boldsymbol{\omega}\boldsymbol{\otimes}{\boldsymbol{n}\boldsymbol{:}\boldsymbol{E}}_{u})\ /\ (S+\boldsymbol{n}\boldsymbol{:}{\boldsymbol{E}}_{u}:\boldsymbol{\omega})$ | (25)
---|---|---
The rightmost term in Eq. (25) relaxes the uniform stiffness accounting for
the added freedom as specific volume changes from its initially constrained
value to its unconstrained end value, at multi-scale equilibrium.
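The algebraic relation between Eqs. (20) and (25) can be checked numerically. The sketch below uses 6x6 Voigt-notation matrices as stand-ins for the fourth-order tensors; all numeric values are illustrative, not material data.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((6, 6))
C_u = A @ A.T + 6 * np.eye(6)            # uniform compliance: symmetric positive definite
E_u = np.linalg.inv(C_u)                 # uniform elasticity, Eq. (22)

w = rng.standard_normal(6); w /= np.linalg.norm(w)   # normalized coupling tensor
n = rng.standard_normal(6); n /= np.linalg.norm(n)   # normalized uniformity-surface gradient
S = 3.0                                  # contraction-swelling modulus (illustrative)

C = C_u + np.outer(w, n) / S                               # Eq. (20)
E = E_u - np.outer(E_u @ w, n @ E_u) / (S + n @ E_u @ w)   # Eq. (25)

print(np.allclose(E @ C, np.eye(6)))     # True: Eq. (25) is the Sherman-Morrison
                                         # inverse of Eq. (20)
# With n = w (Eq. (60), established in sub-section IV.3 below), C and E
# additionally acquire major symmetry (Eqs. (62)-(63)).
```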
The relation between the normalized coupling tensor ($\boldsymbol{\omega}$)
and the normalized gradient to the uniformity surface ($\boldsymbol{n}$) in
the expressions for both macroscopic tensors is established in sub-section
IV.3 below.
### III.2 Contraction-Swelling Modulus
The contraction-swelling modulus specifies the element’s stiffness to changes
in differential deformation regardless of uniform deformation. Its value can
be estimated from measurements of volumetric and distortional stiffness at
isotropic loading states.
The pressure or mean-normal stress invariant is defined as
| $p\ \equiv\boldsymbol{\sigma}:\boldsymbol{I}/3$ | (26)
---|---|---
where $\boldsymbol{I}$ denotes the identity tensor. Let us assume that the
normalized coupling tensor and the normalized gradient to the uniformity
surface are identity transformations at all isotropic loading states:
| $\boldsymbol{\omega}=\boldsymbol{n}=\boldsymbol{I}$ | $for\ \boldsymbol{\sigma}=p_{o}\boldsymbol{I}$ (27)
---|---|---
where $p_{o}\ $denotes the applied pressure at any isotropic state. The
volumetric strain increment is defined as
| $\delta\epsilon\equiv\boldsymbol{I}:\delta\boldsymbol{\epsilon}$ | (28)
---|---|---
The macroscopic bulk modulus, $K$, relates this invariant linearly to the
pressure increment:
| $\delta\epsilon=K^{-1}\ \delta p_{o}$ | $for\ \boldsymbol{\sigma}=p_{o}\boldsymbol{I}$ (29)
---|---|---
$K$ is the tangential slope of the unloading-reloading line in $\epsilon-p_{o}$ space.
Substituting Eq. (27) into Eq. (25) and contracting twice yields:
| $K^{-1}=K^{-1}_{u}+S^{-1}$ | $for\ \boldsymbol{\sigma}=p_{o}\boldsymbol{I}$ (30)
---|---|---
where $K_{u}$ denotes the uniform bulk modulus of the element:
| $K_{u}\ \equiv\boldsymbol{I}:{\boldsymbol{E}}_{u}:\boldsymbol{I}\ /\ 9$ | (31)
---|---|---
The expression for the contraction-swelling modulus is:
| $S={\left[K^{-1}-K^{-1}_{u}\ \right]}^{-1}={\left[\left(\frac{\delta\epsilon}{\delta p_{o}}\right)-K^{-1}_{u}\ \right]}^{-1}$ | (32)
---|---|---
The value of the uniform bulk modulus can be estimated from an empirically
determined uniform shear compliance by assuming a constant Poisson’s ratio
across the family of solid phases of all specific volumes.
If the element is significantly more compliant than its solid phase, a first
approximation for the volumetric strain increment is
| $\delta\epsilon\ \approx\ -\ \delta\nu/\nu$ | $for\ K_{u}\gg K$ (33)
---|---|---
In this case, the expression for the contraction-swelling modulus reduces to
| $S\approx\delta p_{o}/\delta\epsilon$ | $for\ K_{u}\gg K$ (34)
---|---|---
## IV Internal Energy
The internal energy of the element integrates the macroscopic and mesoscopic
models. A sufficient condition for conservation of internal energy in a closed
process is the existence of a potential in the independent state variables.
In a kinematic description, strain and specific volume are the independent
state variables, the internal energy potential is the scalar measure of the
element’s state and stress is the dependent state variable. In a kinetic
description, stress and specific volume are the independent state variables,
the complementary internal energy potential is the scalar measure of state and
strain is the dependent variable. The expressions for the stress, strain,
packing pressure, elasticity, compliance and coupling tensors follow from
these two potentials.
### IV.1 Potential Functions
The internal energy potential, $E\left(\boldsymbol{\epsilon},\nu\right)$,
describes the element’s physical state in terms of its strain and specific
volume. The potential’s uniform version,
$U\left(\boldsymbol{\epsilon},\nu\right)$, which describes its physical state
for a prescribed specific volume, is the difference between the internal
energy potential, $E\boldsymbol{(}\boldsymbol{\epsilon},\nu)$, and the
element’s packing energy, $P(\nu)$:
| $U\left(\boldsymbol{\epsilon},\bar{\nu}\right)=E\left(\boldsymbol{\epsilon},\nu\right)\ -\ P\boldsymbol{(}\nu)$ | (35)
---|---|---
$U\left(\boldsymbol{0},\bar{\nu}\right)$ represents the natural free state for
the prescribed specific volume ($\nu=\bar{\nu}$). Figure 6 illustrates the
internal energy potential surface as a function of equivalent strain and
specific volume. The reference state is the state at which the specific volume
is the reference specific volume ($\nu={\nu}_{r}$) and all strain vanishes
($E\left(\boldsymbol{0},{\nu}_{r}\right)$).
Figure 6: Internal Energy
The complementary internal energy potential,
$C\left(\boldsymbol{\sigma},\nu\right)$, describes the energy that the element
can transfer to its environment. This potential’s uniform version,
$Q\left(\boldsymbol{\sigma},\bar{\nu}\right)$, which describes the
transferable energy for a prescribed specific volume, is the sum of the
complementary internal energy, $C\left(\boldsymbol{\sigma},\nu\right)$, and
the element’s packing energy, $P\left(\nu\right)$:
| $Q\left(\boldsymbol{\sigma},\bar{\nu}\right)=C\left(\boldsymbol{\sigma},\nu\right)+P\left(\nu\right)$ | (36)
---|---|---
Figure 7 illustrates the complementary internal energy potential surface as a
function of stress and specific volume. The reference state for this surface
is the state at which the specific volume is the reference specific volume
($\nu={\nu}_{r}$) and stress vanishes
($C\left(\boldsymbol{0},{\nu}_{r}\right)$).
Figure 7: Complementary Internal Energy
The uniform complementary internal energy is the partial Legendre transform of the element's uniform internal energy with respect to strain [50]:
| $U\left(\boldsymbol{\epsilon},\nu\right)+\ Q\left(\boldsymbol{\sigma},\nu\right)=\boldsymbol{\sigma}:\boldsymbol{\epsilon}$ | (37)
---|---|---
Changes in packing energy are passive, and the packing-energy terms in Eqs. (35) and (36) cancel one another in Eq. (37). The uniform potentials are the energy potentials of the modern theory.
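As a sanity check on Eq. (37), the sketch below evaluates both uniform potentials with the linear isotropic forms introduced in Section V (Eqs. (74)-(79)) and confirms that they sum to $\boldsymbol{\sigma}:\boldsymbol{\epsilon}$; the coefficient values are illustrative.

```python
# Numeric check of the Legendre relation, Eq. (37), using the linear isotropic
# potentials of Section V; k, g, p_r, nu and the strains are illustrative.
k, g, p_r, nu = 600.0, 450.0, 100.0, 1.8
eps, gam = 0.01, 0.005

p = k * p_r * eps / nu              # Eq. (78)
q = 3 * g * p_r * gam / nu          # Eq. (79)

U = p_r * (k*eps**2 + 3*g*gam**2) / (2*nu)       # uniform internal energy, Eq. (74)
Q = nu * (p**2/k + q**2/(3*g)) / (2*p_r)         # uniform complementary energy, Eq. (75)

print(abs(U + Q - (p*eps + q*gam)) < 1e-12)      # True: U + Q = sigma:epsilon
```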
### IV.2 Stress, Strain, Elasticity, Compliance and Coupling Tensors
The stress, strain and packing pressure ($\boldsymbol{\sigma},\
\boldsymbol{\epsilon},\ \phi$) are partial derivatives of the internal energy
and complementary internal energy potential functions.
Differentiating Eq. (35) and substituting Eq. (7) into the result
distinguishes an internal energy increment into macroscopic and mesoscopic
components:
| $\delta U={\left(\frac{\partial E}{\partial\boldsymbol{\epsilon}}\right)}_{\nu}:\delta\boldsymbol{\epsilon}+{\left(\frac{\partial E}{\partial\nu}\right)}_{\boldsymbol{\epsilon}}\ \delta\nu+\phi\ \delta\nu$ | (38)
---|---|---
At multi-scale equilibrium, specific volume is constant, and the work done by
the stress on the element is
| $\boldsymbol{\sigma}:\delta\boldsymbol{\epsilon}={\left(\frac{\boldsymbol{\partial}E}{\boldsymbol{\partial}\boldsymbol{\epsilon}}\right)}_{\nu}:\boldsymbol{\delta}\boldsymbol{\epsilon}$ | (39)
---|---|---
Equating Eqs. (38) and (39) gives the expressions for the stress and the
packing pressure:
| $\boldsymbol{\sigma}={\left(\frac{\partial E}{\partial\boldsymbol{\epsilon}}\right)}_{\nu}$ | (40)
---|---|---
| $\phi=-\ {\left(\frac{\partial E}{\partial\nu}\right)}_{\boldsymbol{\epsilon}}$ | (41)
Differentiating Eq. (36) and substituting Eq. (7) into the result
distinguishes the complementary internal energy increment into macroscopic and
mesoscopic components:
| $\delta Q={\left(\frac{\partial C}{\partial\boldsymbol{\sigma}}\right)}_{\nu}:\delta\boldsymbol{\sigma}+{\left(\frac{\partial C}{\partial\nu}\right)}_{\boldsymbol{\sigma}}\delta\nu-\ \phi\ \delta\nu$ | (42)
---|---|---
At multi-scale equilibrium, specific volume is constant, and the complementary
work done is given by
| $\boldsymbol{\epsilon}:\delta\boldsymbol{\sigma}={\left(\frac{\partial C}{\partial\boldsymbol{\sigma}}\right)}_{\nu}:\boldsymbol{\delta}\boldsymbol{\sigma}$ | (43)
---|---|---
Equating Eqs. (42) and (43) gives the expressions for the total strain and the
packing pressure:
| $\boldsymbol{\epsilon}={\left(\frac{\partial C}{\partial\boldsymbol{\sigma}}\right)}_{\nu}$ | (44)
---|---|---
| $\phi=\ {\left(\frac{\partial C}{\partial\nu}\right)}_{\boldsymbol{\sigma}}$ | (45)
Eqs. (41) and (45) are equivalent expressions for packing pressure.
The uniform elasticity, uniform compliance and coupling tensors are second
derivatives of the energy potentials. Differentiating Eq. (40) distinguishes
the stress increment into uniform and differential components:
| $\delta\boldsymbol{\sigma}={\boldsymbol{E}}_{u}:\delta\boldsymbol{\epsilon}+\frac{{\partial}^{2}E}{\partial\nu\partial\boldsymbol{\epsilon}}\ \delta\nu$ | (46)
---|---|---
where ${\boldsymbol{E}}_{u}$ denotes the uniform elasticity tensor:
| ${\boldsymbol{E}}_{u}\equiv\frac{{\boldsymbol{\partial}}^{\boldsymbol{2}}E}{\partial\boldsymbol{\epsilon}\partial\boldsymbol{\epsilon}}$ | (47)
---|---|---
Differentiating Eq. (44) distinguishes the strain increment into uniform and
differential components:
| $\delta\boldsymbol{\epsilon}={\boldsymbol{C}}_{u}:\delta\boldsymbol{\sigma}+\boldsymbol{\mathit{\Omega}}\ \delta\nu$ | (48)
---|---|---
where $\boldsymbol{\mathit{\Omega}}$ denotes the coupling tensor. The uniform
compliance and coupling tensors are second partial derivatives of the
complementary internal energy potential:
| ${\boldsymbol{C}}_{u}=\frac{{\boldsymbol{\partial}}^{\boldsymbol{2}}C}{\partial\boldsymbol{\sigma}\partial\boldsymbol{\sigma}}$ | (49)
---|---|---
| $\boldsymbol{\mathit{\Omega}}=\frac{{\partial}^{2}C}{\partial\nu\partial\boldsymbol{\sigma}}$ | (50)
${\boldsymbol{E}}_{u}$ and ${\boldsymbol{C}}_{u}$ are inverses of one another.
Normalizing the coupling tensor and comparing Eq. (48) to Eq. (11) with Eq.
(12) identifies the differential component of the strain increment tensor as the
product of the coupling tensor and the specific volume increment:
| $\boldsymbol{\delta}{\boldsymbol{\epsilon}}_{d}=-\ \boldsymbol{\mathit{\Omega}}\ \delta\nu$ | (51)
---|---|---
Substituting Eq. (51) into Eq. (13) relates the coupling tensor to the
normalized coupling tensor ($\boldsymbol{\mathit{\omega}}$) and the signed
magnitude of the strain increment’s differential component ($\delta\mu$) to
the specific volume increment:
| $\boldsymbol{\omega}=\boldsymbol{\mathit{\Omega}}\ /\parallel\boldsymbol{\mathit{\Omega}}\parallel$ | (52)
---|---|---
| $\delta\mu\ \mathrm{=}\ -\parallel\boldsymbol{\mathit{\Omega}}\parallel\ \delta\nu$ | (53)
The minus sign indicates that a specific volume decrement ($\delta\nu<0$) is
contractive ($\delta\mu>0$). The material properties that determine
differential deformation enter entirely through the coupling tensor.
### IV.3 Major Symmetry
The macroscopic analysis of sub-section III.1 expresses the compliance and
elasticity tensors in terms of the normalized gradient to the uniformity
surface and the normalized coupling tensor. Any stress increment directed off
the uniformity surface for the current state changes the element’s specific
volume. During the process, the element’s stress state and specific volume may
change asynchronously. The major symmetry of the compliance and elasticity
tensors follows from multi-scale equilibrium.
For the element to be in a state of multi-scale equilibrium, its stress state
must lie on the local uniformity surface for the current specific volume and
its packing pressure must be the equilibrium packing pressure for that same
specific volume:
| $b\left(\boldsymbol{\sigma},\nu\right)=\ \beta\left(\phi,\nu\right)=0$ | (54)
---|---|---
Stress increments along the uniformity surface maintain multi-scale
equilibrium. Differentiating Eq. (54) holding specific volume constant relates
the surface gradient to the slope of the contraction-swelling curve through
the packing pressure gradient:
| $\frac{\partial b}{\partial\boldsymbol{\sigma}}=\frac{\partial\beta}{\partial\phi\ }\cdot\frac{\partial\phi}{\partial\boldsymbol{\sigma}}$ | (55)
---|---|---
Differentiating Eq. (45) holding specific volume constant yields the
expression for the packing pressure gradient:
| $\frac{\partial\phi}{\partial\boldsymbol{\sigma}}=\frac{{\partial}^{2}C}{\partial\boldsymbol{\sigma}\partial\nu}$ | (56)
---|---|---
Identity of cross derivatives of the complementary strain energy function (a
Maxwell relation) relates this gradient to the coupling tensor:
| $\frac{\partial\phi}{\partial\boldsymbol{\sigma}}=\ \frac{{\partial}^{2}C}{\partial\nu\partial\boldsymbol{\sigma}}=\ \boldsymbol{\mathit{\Omega}}$ | (57)
---|---|---
Substituting Eq. (57) into Eq. (55) relates the uniformity surface gradient to
the slope of the contraction-swelling curve through the coupling tensor:
| $\frac{\partial b}{\partial\boldsymbol{\sigma}}=\ \boldsymbol{\mathit{\Omega}}\ \frac{\partial\beta}{\partial\phi\ }$ | (58)
---|---|---
Normalizing the surface gradient and substituting Eq. (52) yields
| $\boldsymbol{n}=\boldsymbol{\omega}\parallel\boldsymbol{\mathit{\Omega}}\parallel\left(\frac{\partial\beta}{\partial\phi}\right)/\parallel\frac{\partial b}{\partial\boldsymbol{\sigma}}\parallel$ | (59)
---|---|---
Since both $\boldsymbol{n}$ and $\boldsymbol{\omega}$ are normalized,
| $\boldsymbol{n}\ =\ \boldsymbol{\omega}$ | (60)
---|---|---
| $\parallel\frac{\partial b}{\partial\boldsymbol{\sigma}}\parallel\ =\ \parallel\boldsymbol{\mathit{\Omega}}\parallel\left(\frac{\partial\beta}{\partial\phi\ }\right)$ | (61)
The directions of the coupling tensors are independent of the contraction-
swelling relation.
Substituting Eq. (60) into Eqs. (20) and (25) simplifies the expressions for
the macroscopic compliance and elasticity tensors:
| $\boldsymbol{C}={\boldsymbol{C}}_{u}+{\boldsymbol{\omega}\boldsymbol{\otimes}\boldsymbol{\omega}}\ /\ {S}$ | (62)
---|---|---
| $\boldsymbol{E}=\boldsymbol{E}_{u}-({\boldsymbol{E}}_{u}\boldsymbol{:}\boldsymbol{\omega}\boldsymbol{\otimes}\boldsymbol{\omega}\boldsymbol{:}{\boldsymbol{E}}_{u})\ /\ (S+\ \boldsymbol{\omega}\boldsymbol{\ }\boldsymbol{:}{\boldsymbol{E}}_{u}\boldsymbol{:}\boldsymbol{\omega})$ | (63)
That is, a sufficient condition for major symmetry of each tensor is multi-scale equilibrium (Eq. (54)).
## V Isotropic Materials
The generally accepted empirical expressions for the bulk and shear moduli of soil samples have been established for isotropic materials at isotropic states. The theory's linear specialization demonstrates the simplest resolution of the energy conservation issue for these materials. Its non-linear specialization demonstrates resolution of the differing-exponents issue in the power functions of the bulk and shear moduli.
### V.1 General Expressions
The invariants of stress and strain and their increments suffice to express
the constitutive relations for isotropic materials. The stress invariants are
defined as
| $p\equiv\boldsymbol{\sigma}:\boldsymbol{I}/3$ | (26 bis)
---|---|---
| $q={\left({{3}\boldsymbol{q}\boldsymbol{:}\boldsymbol{q}}/{2}\right)}^{\frac{1}{2}}$ | (64)
where $\boldsymbol{q}$ denotes the deviator stress tensor:
| $\boldsymbol{q}\ \equiv\boldsymbol{\sigma}-p\boldsymbol{I}$ | (65)
---|---|---
The strain increment invariants are work-conjugate to the stress invariants:
| $\delta\epsilon\equiv\boldsymbol{I}:\delta\boldsymbol{\epsilon}$ | (28 bis)
---|---|---
| $\delta\gamma={\left({2\delta\boldsymbol{\gamma}\boldsymbol{:}\delta\boldsymbol{\gamma}}/{3}\right)}^{\frac{1}{2}}$ | (66)
where $\boldsymbol{\gamma}$ denotes the deviator-strain tensor:
| $\delta\boldsymbol{\gamma}\ \equiv\delta\boldsymbol{\epsilon}-(\boldsymbol{I}/3)\ \delta\epsilon$ | (67)
---|---|---
The macroscopic relations are
| $\delta p=K\delta\epsilon+J\delta\gamma$ | (68)
---|---|---
| $\delta q=J\delta\epsilon+3G\delta\gamma$ | (69)
where $K,\ J,$ and $G$ are respectively the macroscopic bulk, cross and shear
moduli.
These moduli consist of uniform and differential components. From Eq. (25):
| $K=K_{u}-{\left({\omega}^{2}_{p}K^{2}_{u}+2{\omega}_{p}{\omega}_{q}J_{u}K_{u}+{\omega}^{2}_{q}J^{2}_{u}\right)}\ /\ {S_{c}}$ | (70)
---|---|---
| $J=J_{u}-{\left[{\omega}^{2}_{p}J_{u}K_{u}+\ {\omega}_{p}{\omega}_{q}\left(J^{2}_{u}+3K_{u}G_{u}\right)+3{\omega}^{2}_{q}J_{u}G_{u}\right]}\ /\ {S_{c}}$ | (71)
| $3G=3G_{u}-{\left({\omega}^{2}_{p}J^{2}_{u}+6{\omega}_{p}{\omega}_{q}J_{u}G_{u}+9{\omega}^{2}_{q}G^{2}_{u}\right)}\ /\ {S_{c}}$ | (72)
where $K_{u}$, $J_{u}$ and $G_{u}$ denote respectively the uniform bulk, cross and shear moduli; ${\omega}_{p}$ and ${\omega}_{q}$ denote the invariants of the normalized coupling tensor; and $S_{c}$ denotes the composite contraction-swelling modulus:
| $S_{c}=S+\ {\omega}^{2}_{p}K_{u}+2{\omega}_{p}{\omega}_{q\ }J_{u}+3{\omega}^{2}_{q}G_{u}$ | (73)
---|---|---
This composite modulus augments the contraction-swelling modulus with the
element’s uniform stiffness.
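As a consistency check, at an isotropic state (${\omega}_{p}=1$, ${\omega}_{q}=0$, $J_{u}=0$) Eqs. (70) and (73) must collapse to Eq. (30). The sketch below verifies this with illustrative values.

```python
# Consistency check of Eqs. (70) and (73) at an isotropic state,
# against Eq. (30); all modulus values are illustrative.
K_u, G_u, S = 33000.0, 25000.0, 18000.0
w_p, w_q, J_u = 1.0, 0.0, 0.0

S_c = S + w_p**2*K_u + 2*w_p*w_q*J_u + 3*w_q**2*G_u                  # Eq. (73)
K = K_u - (w_p**2*K_u**2 + 2*w_p*w_q*J_u*K_u + w_q**2*J_u**2) / S_c  # Eq. (70)

print(abs(1/K - (1/K_u + 1/S)) < 1e-15)   # True: recovers Eq. (30)
```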
### V.2 Linear Solution
The simplest internal energy potential that conserves energy for a linear isotropic material with a variable specific volume is quadratic in the volumetric and deviatoric strain invariants and inverse in specific volume:
| $E\left(\epsilon,\gamma,\nu\right)=p_{r}(k{\epsilon}^{2}+3g{\gamma}^{2})/2\nu$ | (74)
---|---|---
where $p_{r}$ denotes an arbitrarily selected reference pressure; $k$ and $g\
$denote the non-dimensional bulk and shear indices corresponding to $p_{r}$.
The complementary internal energy potential is quadratic in the stress
invariants and proportional to specific volume:
| $C(p,q,\nu)=\nu(p^{2}/k+q^{2}/3g)/2p_{r}$ | (75)
---|---|---
The strain and stress invariants are linearly related. From Eqs. (40) and
(44):
| $\epsilon\equiv\frac{\partial C}{\partial p}=\left(\frac{1}{kp_{r}}\right)\nu p$ | (76)
---|---|---
| $\gamma\equiv\frac{\partial C}{\partial q}=\left(\frac{1}{3gp_{r}}\right)\nu q$ | (77)
| $p\ \equiv\frac{\partial E}{\partial\epsilon}=\left(\frac{kp_{r}}{\nu}\right)\epsilon$ | (78)
| $q\ \equiv\frac{\partial E}{\partial\gamma}=\left(\frac{3gp_{r}}{\nu}\right)\gamma$ | (79)
The uniform moduli are partial derivatives of the stress invariants with
respect to strain:
| $K_{u}\ \equiv\frac{{\partial}^{2}E}{\partial{\epsilon}^{2}}=kp_{r}/\nu$ | (80)
---|---|---
| $J_{u}\ \equiv\frac{{\partial}^{2}E}{\partial\epsilon\partial\gamma}=0$ | (81)
| $G_{u}\ \equiv\frac{1}{3}\frac{{\partial}^{2}E}{\partial{\gamma}^{2}}=gp_{r}/\nu$ | (82)
These moduli are the linearly scaled versions of their mesoscopic counterparts
(that is, the particle bulk and shear moduli ($kp_{r},\ gp_{r}$) [51]).
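These derivative relations can be verified by finite differences. The sketch below checks Eqs. (78)-(80) against the potential of Eq. (74); the coefficient values are illustrative.

```python
import numpy as np

# Finite-difference check of Eqs. (78)-(80) for the linear potential, Eq. (74).
k, g, p_r, nu = 600.0, 450.0, 100.0, 1.8   # illustrative values

def E(eps, gam):                            # internal energy, Eq. (74)
    return p_r * (k*eps**2 + 3*g*gam**2) / (2*nu)

eps, gam, h = 0.01, 0.005, 1e-6
p = (E(eps+h, gam) - E(eps-h, gam)) / (2*h)     # p = dE/d(eps), Eq. (78)
q = (E(eps, gam+h) - E(eps, gam-h)) / (2*h)     # q = dE/d(gam), Eq. (79)
K_u = (E(eps+h, gam) - 2*E(eps, gam) + E(eps-h, gam)) / h**2   # Eq. (80)

print(np.isclose(p, k*p_r*eps/nu))              # True
print(np.isclose(q, 3*g*p_r*gam/nu))            # True
print(np.isclose(K_u, k*p_r/nu, rtol=1e-4))     # True
```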
The coupling tensor coefficients are cross derivatives of the complementary
internal energy potential. From Eqs. (50) and (75):
| ${\mathit{\Omega}}_{p}\ \equiv\frac{{\partial}^{2}C}{\partial\nu\partial p}=\frac{p}{kp_{r}}$ | (83)
---|---|---
| ${\mathit{\Omega}}_{q}\ \equiv\frac{{\partial}^{2}C}{\partial\nu\partial q}=\frac{q}{3gp_{r}}$ | (84)
The normalized coefficients are
| ${\omega}_{p}={\left[1+{\left(\frac{k}{3g}\right)}^{2}{\left(\frac{q}{p}\right)}^{2}\right]}^{-\frac{1}{2}}$ | (85)
---|---|---
| ${\omega}_{q}=\left(\frac{k}{3g}\right)\left(\frac{q}{p}\right){\omega}_{p}$ | (86)
The normalized coupling tensor introduces shear stress effects to the uniform
moduli. Substituting into Eqs. (70) through (72) yields
| $K=\frac{kp_{r}}{\nu}\frac{T+\frac{\left(\frac{k}{3g}\right){\left(\frac{q}{p}\right)}^{2}}{\left[1+{\left(\frac{k}{3g}\right)}^{2}{\left(\frac{q}{p}\right)}^{2}\right]}}{T_{c}}$ | (87)
---|---|---
| $J=-\frac{kp_{r}}{\nu}\frac{\frac{\left(\frac{q}{p}\right)}{\left[1+{\left(\frac{k}{3g}\right)}^{2}{\left(\frac{q}{p}\right)}^{2}\right]}}{T_{c}}$ | (88)
| $G=\frac{gp_{r}}{\nu}\frac{T+\frac{1}{\left[1+{\left(\frac{k}{3g}\right)}^{2}{\left(\frac{q}{p}\right)}^{2}\right]\ }}{T_{c}}$ | (89)
where $T$ and $T_{c}$ denote the relative-contraction-swelling and composite
relative-contraction-swelling moduli respectively:
| $T\ \equiv\left(\frac{S}{K_{u|q=0\ }}\right)\ ={S}/\left({kp_{r}}/{\nu}\right)$ | (90)
---|---|---
| $T_{c}={S_{c}}/\left({kp_{r}}/{\nu}\right)=T+{\left[1+\left(\frac{k}{3g}\right){\left(\frac{q}{p}\right)}^{2}\right]}{\left[1+{\left(\frac{k}{3g}\right)}^{2}{\left(\frac{q}{p}\right)}^{2}\right]}^{-1}$ | (91)
Although the macroscopic moduli are inversely proportional to specific volume,
the linear scaling evident in the uniform moduli is absent in the macroscopic
moduli. Eqs. (87) through (89) extend the particle stress model [51] across
state space without imposing perfectly linear scaling.
The macroscopic moduli assume simpler forms at isotropic loading states. From
Eqs. (87) through (89),
| $K=\frac{\frac{kp_{r}}{\nu}}{\frac{1}{T}+1}=\frac{S}{1+T}$ | $for\ q=0$ (92)
---|---|---
| $J=0$ | $for\ q=0$ (93)
| $G=G_{u}=\frac{gp_{r}}{\nu}$ | $for\ q=0$ (94)
As the bulk index increases ($T\ \to 0$), the macroscopic bulk modulus
approaches the contraction-swelling modulus ($S$).
The packing pressure follows directly from the complementary internal energy
potential:
| $\phi=\frac{\partial C}{\partial\nu}={\left[p^{2}+\left(\frac{k}{3g}\right)q^{2}\right]}\ /\ {2kp_{r}}$ | (95)
---|---|---
Differentiating (95) yields
| ${\delta\phi}/{\phi}={2\left[p\delta p+\left(\frac{k}{3g}\right)q\delta q\right]}{\left[p^{2}+\left(\frac{k}{3g}\right)q^{2}\right]}^{-1}$ | (96)
---|---|---
At multi-scale equilibrium, specific volume and packing pressure are constant.
Integrating Eq. (96) for mesoscopic equilibrium ($\delta\phi=\delta\nu=0$)
yields
| $b\left(p,q,\nu\right)=p^{2}+\left(\frac{k}{3g}\right)q^{2}-p^{2}_{o}=\ 0$ | (97)
---|---|---
The integration constant ($p_{o}$) is a function of specific volume alone.
Expressions for the contraction-swelling modulus complete this solution. As
noted in sub-section III.2, the uniform bulk modulus can be estimated from the
shear modulus if the Poisson’s ratio ($\rho$) for the family of solid phases
is constant:
| $k=2g(1+\rho)/3(1-2\rho)\ $ | $for\ q=0$ (98)
---|---|---
The lower limit on the contraction-swelling modulus follows from Eq. (32):
| $S\geq{\left(\frac{\delta\epsilon}{\delta p_{o}}\right)}^{-1}$ | $for\ q=0$ (99)
---|---|---
For a linear relation in $\nu-\ln(p_{o}/p_{r})$ space
| $S=\nu p/\kappa$ | (100)
---|---|---
where $\kappa$ denotes the contraction-swelling index in $\nu-\ln(p_{o}/p_{r})$ space:
| $\delta\nu=\ -\ \kappa\delta p/p$ | $for\ q=0$ (101)
---|---|---
For a linear relation in $\ln\nu-\ln(p_{o}/p_{r})$ space
| $S=p/{\kappa}^{*}$ | $for\ q=0$ (102)
---|---|---
where ${\kappa}^{*}$ denotes the modified contraction-swelling index in $\ln\nu-\ln(p_{o}/p_{r})$ space:
| $\delta\nu/\nu=\ -\ {\kappa}^{*}\ \delta p/p$ | $for\ q=0$ (103)
---|---|---
The value of the contraction-swelling modulus is determined at isotropic
loading and extrapolated to non-isotropic loading states.
### V.3 Non-Linear Solution
The non-linear specialization for isotropic materials is based on the
generally accepted expression for the small-strain shear modulus of a range of
silts and clays [40]:
| $G_{u}=gp_{r}{\left(\frac{p}{p_{r}}\right)}^{n}{\nu}^{-a}$ | $for\ q=0$ (104)
---|---|---
where $n$ and $a$ are non-dimensional and denote respectively the element’s
shape and scaling coefficients. (Typical data [52,53]: $15{,}000<gp_{r}<25{,}000$, $a\approx 2.4$, $n=0.50$ for smooth spherical contacts and $n=0.33$ for angular contacts.) The corresponding uniform bulk modulus is
| $K_{u}=kp_{r}{\left(\frac{p}{p_{r}}\right)}^{n}{\nu}^{-a}$ | $for\ q=0$ (105)
---|---|---
where the bulk index can be determined from Eq. (98) assuming a constant
Poisson’s ratio for the family of solid phases.
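A short sketch evaluating Eqs. (104)-(105) in the typical coefficient ranges quoted above; the specific values of $gp_{r}$, $\rho$, $\nu$ and the pressure unit are assumptions for illustration.

```python
# Evaluating the empirical uniform moduli, Eqs. (104)-(105).
g_pr, a, n, p_r = 20000.0, 2.4, 0.5, 100.0   # g*p_r ~ 20,000; n = 0.5 (rounded contacts)
rho = 0.2                                     # assumed Poisson's ratio of the solid phase
k_pr = 2 * g_pr * (1 + rho) / (3 * (1 - 2 * rho))   # Eq. (98): k*p_r from g*p_r

def moduli(p, nu):
    G_u = g_pr * (p / p_r)**n * nu**(-a)      # Eq. (104)
    K_u = k_pr * (p / p_r)**n * nu**(-a)      # Eq. (105)
    return K_u, G_u

for p in (50.0, 100.0, 400.0):
    K_u, G_u = moduli(p, nu=1.7)
    print(f"p = {p:5.0f}:  K_u = {K_u:8.0f},  G_u = {G_u:8.0f}")
# Doubling p stiffens both moduli by 2**n; densification (lower nu) also stiffens.
```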
The internal energy potential corresponding to both expressions is
| $E\left(\epsilon,\gamma,\nu\right)={p_{r}{\left[k\left(1-n\right)\psi{\nu}^{-a}\right]}^{\frac{2-n}{1-n}}{\nu}^{a}}/{k\left(2-n\right)}$ | (106)
---|---|---
where $\psi$ denotes an equivalent volumetric strain defined by
| $\psi^{2}\ \equiv{\epsilon}^{2}+{\gamma}^{2}/h$ | (107)
---|---|---
and where $h$ denotes the effective uniform stiffness ratio defined by
| $h\ \equiv\ k(1-n)/3g$ | (108)
---|---|---
The corresponding complementary strain energy potential is
| $C\left(p,q,\nu\right)={p_{r}r^{2-n}{\nu}^{a}}\ /\ {k\left(2-n\right)\left(1-n\right)}$ | (109)
---|---|---
where $r$ denotes the equivalent pressure ratio defined by
| $r^{2}\ \equiv(p^{2}+hq^{2})/p^{2}_{r}$ | (110)
---|---|---
The strain and stress invariants are power functions of the equivalent
pressure ratio and the specific volume. From Eqs. (40) and (44):
| $\epsilon=\left(\frac{1}{k\left(1-n\right)p_{r}r^{n}}\right){\nu}^{a}p$ | (111)
---|---|---
| $\gamma=\left(\frac{1}{3gp_{r}r^{n}}\right){\nu}^{a}q$ | (112)
| $p=p_{r}{\left(\frac{k\left(1-n\right)^{n}}{{\nu}^{a}}\right)}^{\frac{1}{1-n}}\epsilon$ | (113)
| $q=p_{r}{\left(\frac{k\left(1-n\right)^{n}}{{\nu}^{a}}\right)}^{\frac{1}{1-n}}\left(\frac{1}{h}\right)\gamma$ | (114)
The uniform moduli are given by
| $K_{u}=\frac{kp_{r}r^{n}}{{\nu}^{a}}\left\\{1-{nh{\left(\frac{q}{p}\right)}^{2}}/{\left[1+h{\left(\frac{q}{p}\right)}^{2}\right]}\right\\}$ | (115)
---|---|---
| $J_{u}=\frac{kp_{r}r^{n}}{{\nu}^{a}}\left\\{{n\left(\frac{q}{p}\right)}/{\left[1+h{\left(\frac{q}{p}\right)}^{2}\right]}\right\\}$ | (116)
| $G_{u}=\frac{kp_{r}r^{n}}{3h{\nu}^{a}}\left\\{1-{n}/{\left[1+h{\left(\frac{q}{p}\right)}^{2}\right]}\right\\}$ | (117)
These moduli are scaled versions of their mesoscopic counterparts ($\nu=1$).
The coupling tensor coefficients follow directly from the complementary
internal energy potential. From Eqs. (50) and (109):
| ${\mathit{\Omega}}_{p}={a{\nu}^{a-1}p}/{kp_{r}\left(1-n\right)r^{n}}$ | (118)
---|---|---
| ${\mathit{\Omega}}_{q}={a{\nu}^{a-1}q}/{3gp_{r}r^{n}}$ | (119)
The normalized coefficients are
| ${\omega}_{p}={\left[1+h^{2}{\left(\frac{q}{p}\right)}^{2}\right]}^{-\frac{1}{2}}$ | (120)
---|---|---
| ${\omega}_{q}=h\left(\frac{q}{p}\right){\omega}_{p}$ | (121)
The macroscopic moduli are given by
| $K=\frac{kp_{r}r^{n}}{{\nu}^{a}}{\left\\{T\left[1-\frac{nh{\left(\frac{q}{p}\right)}^{2}}{1+h{\left(\frac{q}{p}\right)}^{2}}\right]+\frac{\left(1-n\right)h{\left(\frac{q}{p}\right)}^{2}}{1+h^{2}{\left(\frac{q}{p}\right)}^{2}}\right\\}}\ /\ {T_{c}}$ | (122)
---|---|---
| $J=\frac{kp_{r}r^{n}}{{\nu}^{a}}{\left\\{T\left[\frac{n\left(\frac{q}{p}\right)}{1+h{\left(\frac{q}{p}\right)}^{2}}\right]-\frac{\left(1-n\right)\left(\frac{q}{p}\right)}{1+h^{2}{\left(\frac{q}{p}\right)}^{2}}\right\\}}\ /\ {T_{c}}$ | (123)
| $G=\frac{kp_{r}r^{n}}{3h{\nu}^{a}}{\left\\{T\left[1-\frac{n}{1+h{\left(\frac{q}{p}\right)}^{2}}\right]+\frac{1-n}{1+h^{2}{\left(\frac{q}{p}\right)}^{2}}\ \right\\}}\ /\ {T_{c}}$ | (124)
where
| $T_{c}=T+\left[1+h{\left(\frac{q}{p}\right)}^{2}\right]/\left[1+h^{2}{\left(\frac{q}{p}\right)}^{2}\right]$ | (125)
---|---|---
| $T={S{\nu}^{a}}/{kp_{r}r^{n}\ }$ | (126)
These moduli take simpler forms at isotropic loading states. From Eqs. (122)
through (126):
| $K=\frac{kp_{r}{\left(\frac{p}{p_{r}}\right)}^{n}{\nu}^{-a}}{T^{-1}+1}=S/\left(1+T\right)$ | $for\ q=0$ (127)
---|---|---
| $J=0$ | $for\ q=0$ (128)
| $G=G_{u}={gp_{r}{\left(\frac{p}{p_{r}}\right)}^{n}}/{{\nu}^{a}}$ | $for\ q=0$ (129)
As the family of solid phases approaches volumetric incompressibility ($\rho\to\frac{1}{2}$), the macroscopic bulk modulus approaches a linear function of pressure:
| $K=S$ | $for\ q=0$ (130)
---|---|---
while the shear modulus remains a proper fractional power of pressure. The
expressions for the contraction-swelling modulus are the same as those for the
linear solution (given by Eqs. (100) and (102)).
The packing pressure follows directly from the complementary energy potential:
| $\phi=\frac{\partial C}{\partial\nu}={ap_{r}r^{2-n}{\nu}^{a-1}}/{k(2-n)(1-n)}$ | (131)
---|---|---
Differentiating (131) gives
| ${\delta\phi}/{\phi}=\left[{(a-1)\delta\nu}/{\nu}\right]+\left[{\left(2-n\right)\delta r}/{r}\right]$ | (132)
---|---|---
Integrating Eq. (132) for multi-scale equilibrium at constant packing pressure
($\delta\phi=\delta\nu=0$) yields
| $b\left(p,q,\nu\right)=p^{2}+hq^{2}-p^{2}_{o}=\ 0$ | (133)
---|---|---
The integration constant ($p_{o}$) changes with specific volume only.
Figure 8 illustrates the uniformity surface described by Eq. (133) in stress
space normalized with respect to isotropic loading pressure ($p_{o}$). This
surface partitions normalized stress space into contraction and swelling sub-
domains. Only stress increments directed along this surface produce purely
uniform straining.
Figure 8: Uniformity Surface for an Isotropic Material
## VI Discussion
The decomposition of deformation into uniform and differential components in
the mesoscopic model allows distinct representations at different scales. The
macroscopic compliance and elasticity tensors consist of uniform and
differential components. Their uniform properties map from macroscopic to
mesoscopic scales. Their mapping accounts for the absence of interstitial
content at mesoscopic scale by factoring the high-level observed properties by
a power function of specific volume. In other words, the uniform component of the higher-level observed properties is just an upscaled version of the multi-constancy properties defined at mesoscopic scale. This uniform
component can be measured directly by applying stress increments that do not
alter the specific volume. The differential properties of these macroscopic
compliance and elasticity tensors are stress space constraints based on
property ratios and stress ratios, but not values themselves. The gradient to
the uniformity surface can be measured by applying stress increments that
alter specific volume. In crossing scales, the uniformity surface reduces to a
point on the contraction-swelling curve. Although the surface has no scaled mesoscopic counterpart, the macroscopic contraction-swelling modulus is related to the tangential slope of the contraction-swelling curve. That is, higher-level observed changes in the constraint surface map to
mesoscopic changes at multi-scale equilibrium.
This multi-scale theory includes the modern theory of elasticity as a special
case. That is the case in which the contraction-swelling modulus is
indefinitely large. From this perspective, the modern theory’s scope is
limited to materials that exhibit negligible contraction and swelling; that
is, to materials that can be modeled by a single solid phase of constant
specific volume. In other words, the present theory extends the modern theory
from one for a prescribed specific volume to one that models continuous
variation across a family of solid phases of distinct specific volumes. Figure
9 depicts the relation between the continuous solid phase represented by this
multi-scale theory and the family of discrete solid phases each represented as
a different material by the predecessor modern theory.
Figure 9: Multi-Scale and Modern Theories of Elasticity
Table 1 lists the expressions for the bulk and shear moduli of isotropic
materials at isotropic loading states ($q=0$). The upper three rows apply to
soils and aggregates, while the lower two rows apply to porous solids. The
exponents of the power functions of pressure for the bulk and shear moduli are
identical only for ideal porous solids. The moduli diverge moving up the
Table. The ideal aggregate classes exhibit a bulk modulus linear in pressure,
and possibly specific volume, and a shear modulus that is indefinitely large.
### VI.1 Data for Soils and Aggregates
Soils and tire-derived aggregates (TDA) belong to the three aggregate classes
listed in Table 1. In all three classes, differential volumetric straining
dominates volumetric straining. As a first approximation, volumetric straining
represents changes in particle proximity alone.
Class | Bulk Modulus ($S=\frac{\nu p_{o}}{\kappa}$) | Bulk Modulus ($S=\frac{p_{o}}{{\kappa}^{*}}$) | Shear Modulus
---|---|---|---
Ideal Aggregates | ${\frac{\nu p_{o}}{\kappa}}$ | ${\frac{p_{o}}{{\kappa}^{*}}}$ | ${\mathrm{\infty}}$
Ideal Distortionally Compliant Aggregates | ${\frac{\nu p_{o}}{\kappa}}$ | ${\frac{p_{o}}{{\kappa}^{*}}}$ | ${\frac{gp_{r}}{{\nu}^{a}}{\left(\frac{p_{o}}{p_{r}}\right)}^{n}}$
Aggregates | ${\frac{\frac{\nu p_{o}}{\kappa}}{1+\nu\left(\frac{{\nu}^{a}}{\kappa k}\right){\left(\frac{p_{o}}{p_{r}}\right)}^{\left(1-n\right)}}}$ | ${\frac{\frac{p_{o}}{{\kappa}^{*}}}{1+{\left(\frac{{\nu}^{a}}{{\kappa}^{*}k}\right)\left(\frac{p_{o}}{p_{r}}\right)}^{\left(1-n\right)}}}$ | ${\frac{gp_{r}}{{\nu}^{a}}{\left(\frac{p_{o}}{p_{r}}\right)}^{n}}$
Porous Solids | ${\frac{\frac{kp_{r}}{{\nu}^{a}}{\left(\frac{p_{o}}{p_{r}}\right)}^{n}}{1+\left(\frac{1}{\nu}\right)\left(\frac{\kappa k}{{\nu}^{a}}\right){\left(\frac{p_{o}}{p_{r}}\right)}^{n-1}}}$ | ${\frac{\frac{kp_{r}}{{\nu}^{a}}{\left(\frac{p_{o}}{p_{r}}\right)}^{n}}{1+\left(\frac{{\kappa}^{*}k}{{\nu}^{a}}\right){\left(\frac{p_{o}}{p_{r}}\right)}^{n-1}}}$ | ${\frac{gp_{r}}{{\nu}^{a}}{\left(\frac{p_{o}}{p_{r}}\right)}^{n}}$
Ideal Porous Solids | ${\frac{kp_{r}}{{\nu}^{a}}{\left(\frac{p_{o}}{p_{r}}\right)}^{n}}$ | ${\frac{kp_{r}}{{\nu}^{a}}{\left(\frac{p_{o}}{p_{r}}\right)}^{n}}$ | ${\frac{gp_{r}}{{\nu}^{a}}{\left(\frac{p_{o}}{p_{r}}\right)}^{n}}$
Table 1: Bulk and Shear Moduli at Isotropic Loading States
Figures 10 and 11 compare predictions, using Eq. (29) with Eq. (102) and the non-linear solution, against published data for fine Ottawa sand under isotropic unloading [54]. The contraction-swelling index selected
to fit the data for both loose and dense samples is ${\kappa}^{*}=0.0001.$ A
shape factor of $n=0.5$ models the particles as relatively rounded. The best
fits for loose ($\nu=1.836$) and dense ($\nu=1.665$) data are
$\frac{k}{{\nu}^{a}}=700$ and $\frac{k}{{\nu}^{a}}=1000$ respectively, given a
reference pressure of 1 atmosphere ($p_{r}=1$). These values yield $a=3.648$
and $k=6423$ for both packings. The permanent volumetric strains that align
the curves based on these coefficients are listed in the Figure legends. The
upper limit on each curve is the pressure at the end of reloading and the
onset of unloading. Combining these volumetric properties with the small-
strain shear data for Ottawa sand [55] gives a shear index of $g=4783$ (for
$a=3.648$ and $p_{r}\ =1$). This shear index combined with the derived bulk
index gives a uniform Poisson’s ratio of $\rho=0.2$. The correspondence
between the theory and this independently published data for loose and dense
packing and for volumetric and distortional straining is quite encouraging.
Tire-derived aggregate is twenty times more compressible than sand. Its
compressibility is due to the presence of tire chips, which are a common
additive to municipal solid waste. TDAs exhibit visible amounts of uniform
volumetric deformation. Figure 12 compares the pore volume
changes to particle compression under isotropic reloading and unloading [56].
The solid line identifies the volumetric strain. The dashed line identifies
the differential strain of the interstitial phase accumulated during
reloading. The volumetric strain of the aggregate is still predominantly
differential.
### VI.2 Data for Porous Solids
Consolidated rock, concrete and ceramics belong to the porous solid classes
listed in Table 1. Uniform volumetric straining dominates in these two
classes. The ideal porous solid class is the theory’s regular limit: its
contraction-swelling modulus is large enough to disregard any differential
volumetric straining ($S\to\infty$). As a first approximation, volumetric
straining is straining of a solid phase with constant specific volume. The
bulk and shear moduli are functions of porosity, increasing as porosity
decreases.
Data for porous solids is typically presented in normalized form with respect
to the modulus of the mesoscopic solid constituent (its projected value at
zero porosity). Anderson [57] studied the dependence of bulk and shear moduli
on volume per ion pair for a wide variety of materials. Anderson’s relation in
normalized form [58] is
| $\frac{K}{K_{u|\nu=1}}\approx\frac{K_{u}}{K_{u|\nu=1}}=\frac{G_{u}}{G_{u|\nu=1}}={\left(1-\eta\right)}^{a}$ | $T\to\infty$ (134)
---|---|---
$K_{u|\nu=1}$ denotes the uniform bulk modulus (of the constituent material)
evaluated at zero porosity. Relation (134) is identical to Eqs. (115) and
(117) at isotropic loading. Anderson proposed $a=5$ for oxide ceramics in
general, and $K_{u|\nu=1}=252\,$GPa for alumina specifically. Munro [58] developed this relation for the bulk modulus of high-purity alumina using effective medium theory and found $K_{u|\nu=1}=252\,$GPa, $a=2.1$ to be optimal,
based on 16 references to empirical data.
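A short sketch evaluating Eq. (134) with the alumina modulus quoted above, for both Anderson's general oxide exponent and Munro's fitted one; the porosity grid is illustrative.

```python
# Anderson's normalized modulus-porosity relation, Eq. (134),
# with K_u at zero porosity = 252 GPa for alumina (quoted in the text).
K0 = 252.0   # GPa

for a in (5.0, 2.1):          # Anderson's general oxide value vs Munro's fit
    K = [K0 * (1 - eta)**a for eta in (0.0, 0.05, 0.10, 0.20)]
    print(f"a = {a}: " + ", ".join(f"{k:.0f} GPa" for k in K))
# The fitted exponent (a = 2.1) decays far more gently with porosity.
```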
Krief et al. [59] used experimental data and Pickett’s [60] empirical result
(which assumes a system Poisson’s ratio approximately equal to the mineral
ratio) to show that for dry rock,
$\left(1-\beta\right)={\left(1-\eta\right)}^{m\left(\eta\right)}$, where
$\beta$ is Biot’s first coefficient and $m\left(\eta\right)=\frac{3}{1-\eta}$.
Knackstedt et al. [61] relied on this to write
| $\frac{K}{K_{u|\nu=1}}\approx\frac{K_{u}}{K_{u|\nu=1}}=\frac{G_{u}}{G_{u|\nu=1}}={(1-\eta)}^{\left(\frac{3}{1-\eta}\right)}$ | $T\to\infty$ (135)
---|---|---
Knackstedt et al. reported finite element simulation results for the elastic
properties of dry cemented sandstone that support a non-linear relation
between these moduli and porosity ($a\not\approx 1$) and show an accurate
reproduction of the Krief et al. empirical relation between shear modulus and
porosity.
### VI.3 Critical State Soil Mechanics
Critical State Soil Mechanics (CSSM) [5,33] describes soils in the ideal
aggregate class. It models a soil sample as ‘a random aggregate of irregular
solid particles of diverse sizes which tear, rub, scratch, chip and even
bounce against each other during the process of continuous deformation’ and
applies at the length scale at which flow and deformation appear continuous.
The CSSM macroscopic bulk modulus is linearly related to effective pressure.
The model’s internal energy potential assumes a distortionally rigid solid
phase ($g\to\infty$) [62]. The volumetric rigidity of the solid phase
($k\to\infty$) necessarily follows from this assumption (Eq. (98)). In terms
of the present theory, volumetric straining is purely differential and
volumetric intra-particle straining is negligible. The particles themselves do
not store strain energy.
#### VI.3.1 Isotropic Loading States
Improvements to the CSSM model are possible based on the present theory. The
simplest linear solution that introduces distortional strain energy to the
CSSM potential, while ignoring both shape and scaling properties ($n=0,\
a=0$). The solid phase is then volumetrically compliant
($0\leq\rho<\frac{1}{2},k\leq\infty,0\leq T<\infty$). The corresponding
expressions for the bulk, cross and shear moduli are
| $K={\left[\left({1/kp_{r}}\right)+\kappa/\nu p\right]}^{-1}$ | (136)
---|---|---
| $J=0$ | (137)
| $G=gp_{r}$ | (138)
Three coefficients describe the properties of a soil sample: its contraction-
swelling index ($\kappa$ or ${\kappa}^{*}$) and the bulk and shear indices of
its solid phase ($k$ and $g$).
Zytynski et al. [2] noted that selecting a constant shear modulus may lead to
negative-valued Poisson’s ratios, which is physically unrealistic for soils.
Eqs. (136) through (138) facilitate an energetically conservative model through an independent selection of a constant shear modulus and a positive-valued Poisson's ratio for the family of solid phases. Letting this Poisson's ratio
increase to $\rho=\frac{1}{2}$ recovers the CSSM model in two coefficients and
energetically conservative form, with a constant shear modulus.
A further enhancement involves coupling the macroscopic properties to specific
volume. Adding linear scaling ($a=1$) couples the strain energy contribution
to specific volume and predicts a shear modulus that is inversely proportional
to specific volume [51]. The corresponding expressions for the macroscopic
moduli are Eqs. (92), (93), (94) and (100). Adding non-linear shape and
scaling ($n>0,\ a>1$) yields the commonly accepted expression for small-strain
shear modulus [40]. The corresponding expressions are Eqs. (127), (128), (129)
and (100). This is the class in the third row of Table 1.
Letting the solid phase’s Poisson’s ratio increase to $\rho=\frac{1}{2}$
recovers the CSSM model in two coefficients, but now with the commonly
accepted expression for small-strain shear modulus. This is the ideal distortionally compliant class in the second row of Table 1.
#### VI.3.2 Non-Isotropic Loading States
The bulk, cross and shear moduli for non-isotropic loading states extrapolate
on the CSSM model. For the simplest enhancement in sub-section VI.3.1, which
introduces distortional compliance, but ignores both shape and scaling
properties ($n=0,\ a=0$), the present theory yields
| $K=kp_{r}\ \frac{W+h{\left(\frac{q}{p}\right)}^{2}}{W+1+h{\left(\frac{q}{p}\right)}^{2}\ }$ | (139)
---|---|---
| $J=-\ kp_{r}\frac{\frac{q}{p}}{W+1+h{\left(\frac{q}{p}\right)}^{2}}$ | (140)
| $G=gp_{r}\frac{W+1}{W+1+h{\left(\frac{q}{p}\right)}^{2}}$ | (141)
where
| $W=\frac{\nu p}{\kappa kp_{r}}\left[1+h^{2}{\left(\frac{q}{p}\right)}^{2}\right]$ | (142)
---|---|---
These expressions are valid extensions of the CSSM model provided that the
solid phase is not perfectly incompressible ($\rho<\frac{1}{2}$).
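A minimal sketch evaluating Eqs. (139)-(142) follows; the coefficient values are illustrative, not calibrated to any data set.

```python
# Evaluation of Eqs. (139)-(142) for the simplest CSSM enhancement (n = 0, a = 0).
k, g, p_r, kappa = 600.0, 450.0, 100.0, 0.05   # illustrative coefficients
h = k / (3 * g)                                 # Eq. (108) with n = 0

def moduli(p, q, nu):
    eta = q / p                                 # shear stress ratio
    W = (nu * p / (kappa * k * p_r)) * (1 + (h * eta)**2)   # Eq. (142)
    D = W + 1 + h * eta**2                      # shared denominator
    K = k * p_r * (W + h * eta**2) / D          # Eq. (139)
    J = -k * p_r * eta / D                      # Eq. (140)
    G = g * p_r * (W + 1) / D                   # Eq. (141)
    return K, J, G

print(moduli(p=100.0, q=0.0, nu=1.8))    # J = 0 at isotropic states, cf. Eq. (137)
print(moduli(p=100.0, q=60.0, nu=1.8))   # shear stress softens K and introduces J
```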
The present theory does not encompass the special case of a family of
incompressible solid phases with different specific volumes at non-isotropic
loading states. That is, the CSSM model cannot be recovered from the present
theory at these states. Letting the solid phase’s bulk index increase without
limit at any state of non-zero shear stress in either the linear solution or
the non-linear solution (Eqs. (87) through (91) or Eqs. (122) through (124)
respectively) leads to a singularity.
### VI.4 Future Considerations
The singularities that arise at non-isotropic loading states for
volumetrically incompressible solid phases expose a limitation of the present
theory. Its inability to recover the original CSSM model, which assumes
volumetric incompressibility, constrains the present theory’s scope to those
materials with solid phases that exhibit some volumetric compliance; that is,
phases with a uniform Poisson’s ratio in the range $0\leq\rho<\frac{1}{2}$.
Broadening the theory’s scope to include perfect volumetric incompressibility
($\rho=\frac{1}{2}$) calls for an intertheoretic solution [64]. The present
theory and the CSSM model achieve different objectives. The CSSM model [63],
like the bulk solids’ model of Jenike and Shield [65], identifies stable loci
of states of continuous plastic flow (critical states). Both models relate the
shear stress to effective pressure at any critical state through a frictional
constant, $q=Mp^{\prime}$. On the other hand, the present theory resolves the energy conservation issue at states away from continuous plastic flow, where friction is considered negligible. The CSSM model includes pressure as a
parameter in its expression for bulk modulus, while the present theory
includes pressure, shear stress and specific volume. Shear stress enters its
bulk, cross and shear moduli at non-isotropic states solely through the shear
stress ratio, $q/p$. Throughout the elastic region this ratio lies below the
value that mobilizes friction at critical state ($q/p^{\prime}<M$). Further
research to establish the intertheoretic relations for shear stress ratios below the critical state value ($0<q/p^{\prime}<M$) is clearly needed.
The expressions for bulk modulus listed in Table 1 include a term that can be
identified as the measure of a material’s softness. The product of the
contraction-swelling index and the uniform bulk index ($\kappa k$ or
${\kappa}^{*}k$) is a material constant that relates differential to uniform
stiffness. A material with a vanishingly small value is hard; a material with a larger value is softer. That is, this product locates a material along a spectrum from hard to soft condensed matter.
## VII Concluding Remarks
The multi-scaling theory of elasticity described here is a constraint theory
that includes specific volume as an internal state variable and allows it to
change between multi-scale equilibrium states at rates independent of the rate
of applied stress. Its scope includes condensed matter in general and geomaterials in particular, ranging from porous solids to aggregates.
The theory supports the more commonly accepted empirical models for soils and
conserves energy in closed loading cycles within the elastic region well away
from failure. Its uniformity surface partitions the stress sub-space in the
vicinity of the current state into contraction and swelling sub-domains and
identifies the locus of states that are reachable without changes in packing.
The major symmetry of the macroscopic elasticity and compliance tensors
follows from equilibrium at macroscopic and mesoscopic scales. A contraction-
swelling curve describes the locus of mesoscopic equilibrium packing pressures
across the range of specific volumes.
The theory requires at least three coefficients to describe an isotropic
material: its bulk, shear and contraction-swelling indices. It extrapolates
the empirical expressions for bulk, cross and shear moduli established at
isotropic loading states across the domain of state space. The theory includes
the modern theory of elasticity as its regular limit and offers refinements to
the Critical State Soil Mechanics model.
###### Acknowledgements.
Professor Jitendrapal Sharma suggested the term contraction to describe
compressive differential deformation of the interstitial phase. Dr. Alireza
Najma checked the derivations and assisted with the data retrieval and
presentation. Nurida Fatoullaeva assisted in the proof-reading of this paper.
This research did not receive any specific grant from funding agencies in the
public, commercial, or not-for-profit sectors.
## References
* (1) N. Woodcock, Geology and Environment in Britain and Ireland, 1st edition. London: CRC Press, 1994.
* (2) M. Zytynski, M. F. Randolph, R. Nova, and C. P. Wroth “On modelling the unloading-reloading behaviour of soils,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 2, no. 1, pp. 87–93, Jan. 1978.
* (3) M. F. Randolph, J. P. Carter, and C. P. Wroth, “Driven piles in clay—the effects of installation and subsequent consolidation,” Géotechnique, vol. 29, no. 4, pp. 361–393, Dec. 1979.
* (4) G. T. Houlsby, “The use of a variable shear modulus in elastic-plastic models for clays,” Computers and Geotechnics, vol. 1, no. 1, pp. 3–13, Jan. 1985.
* (5) D. M. Wood, Soil Behaviour and Critical State Soil Mechanics, 1 edition. Cambridge England; New York: Cambridge University Press, 1991.
* (6) R. I. Borja, C.-H. Lin, and F. J. Montáns, “Cam-Clay plasticity, Part IV: Implicit integration of anisotropic bounding surface model with nonlinear hyperelasticity and ellipsoidal loading function,” Computer Methods in Applied Mechanics and Engineering, vol. 190, no. 26–27, pp. 3293–3323, 2001.
* (7) G. T. Houlsby and A. M. Puzrin, Principles of hyperplasticity: an approach to plasticity theory based on thermodynamic principles. London: Springer, 2006.
* (8) W. M. Coombs and R. S. Crouch, “Algorithmic issues for three-invariant hyperplastic Critical State models,” Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 25–28, pp. 2297–2318, Jun. 2011.
* (9) K. Krabbenhoft and A. V. Lyamin, “Computational Cam clay plasticity using second-order cone programming,” Computer Methods in Applied Mechanics and Engineering, vol. 209–212, pp. 239–249, 2012.
* (10) M. M. Stickle, P. De La Fuente, C. Oteo, M. Pastor, and P. Dutto, “A modelling framework for marine structure foundations with example application to vertical breakwater seaward tilt mechanism under breaking wave loads,” Ocean Engineering, vol. 74, pp. 155–167, 2013.
* (11) A. Golchin and A. Lashkari, “A critical state sand model with elastic–plastic coupling,” International Journal of Solids and Structures, vol. 51, no. 15–16, pp. 2807–2825, 2014.
* (12) P. Y. Hong, J. M. Pereira, Y. J. Cui, A. M. Tang, F. Collin, and X. L. Li, “An elastoplastic model with combined isotropic–kinematic hardening to predict the cyclic behavior of stiff clays,” Computers and Geotechnics, vol. 62, pp. 193–202, 2014.
* (13) S. Le Pense, “Mean stress dependent nonlinear hyperelasticity coupled with damage stiffness degradation. A thermodynamical approach,” Mechanics Research Communications, vol. 60, pp. 85–89, 2014.
* (14) K. S. Wong and D. Mašín, “Coupled hydro-mechanical model for partially saturated soils predicting small strain stiffness,” Computers and Geotechnics, vol. 61, pp. 355–369, 2014.
* (15) J. Duriez and É. Vincens, “Constitutive modelling of cohesionless soils and interfaces with various internal states: An elasto-plastic approach,” Computers and Geotechnics, vol. 63, pp. 33–45, 2015.
* (16) V. Robin, A. A. Javadi, O. Cuisinier, and F. Masrouri, “An effective constitutive model for lime treated soils,” Computers and Geotechnics, vol. 66, pp. 189–202, 2015.
* (17) M. Martinelli, A. Burghignoli, and L. Callisto, “Dynamic response of a pile embedded into a layered soil,” Soil Dynamics and Earthquake Engineering, vol. 87, pp. 16–28, 2016.
* (18) P. Y. Hong, J. M. Pereira, A. M. Tang, and Y. J. Cui, “A two-surface plasticity model for stiff clay,” Acta Geotech., vol. 11, no. 4, pp. 871–885, Aug. 2016.
* (19) M. Lloret-Cabot, S. W. Sloan, D. Sheng, and A. J. Abbo, “Error behaviour in explicit integration algorithms with automatic substepping,” International Journal for Numerical Methods in Engineering, vol. 108, no. 9, pp. 1030–1053, 2016.
* (20) K. Sternik, “Elasto-plastic Constitutive Model for Overconsolidated Clays,” Int J Civ Eng, vol. 15, no. 3, pp. 431–440, May 2017.
* (21) A. Prashant, D. Bhattacharya, and S. Gundlapalli, “Stress-state dependency of small-strain shear modulus in silty sand and sandy silt of Ganga,” Géotechnique, vol. 69, no. 1, pp. 42–56, Feb. 2018.
* (22) Z. Shi, G. Buscarnera, and R. J. Finno, “Simulation of cyclic strength degradation of natural clays via bounding surface model with hybrid flow rule,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 42, no. 14, pp. 1719–1740, 2018.
* (23) A. Vrakas, “On the computational applicability of the modified Cam-clay model on the ‘dry’ side,” Computers and Geotechnics, vol. 94, pp. 214–230, Feb. 2018.
* (24) H.-S. Yu, P.-Z. Zhuang, and P.-Q. Mo, “A unified critical state model for geomaterials with an application to tunnelling,” Journal of Rock Mechanics and Geotechnical Engineering, Dec. 2018.
* (25) Z. Zhang, Y. Chen, and Z. Huang, “A novel constitutive model for geomaterials in hyperplasticity,” Computers and Geotechnics, vol. 98, pp. 102–113, 2018.
* (26) X. Kang and H. Liao, “Bounding surface plasticity model for jointed soft rocks considering overconsolidation and structural decay,” Computers and Geotechnics, vol. 108, pp. 295–307, Apr. 2019.
* (27) H. H. Nguyen, H. Khabbaz, and B. Fatahi, “A numerical comparison of installation sequences of plain concrete rigid inclusions,” Computers and Geotechnics, vol. 105, pp. 1–26, Jan. 2019.
* (28) V. Silvestri and C. Tabib, “An Enhanced Solution for the Expansion of Cylindrical Cavities in Modified Cam Clay,” in Advances in Numerical Methods in Geotechnical Engineering, 2019, pp. 101–117.
* (29) J. T. Oden, “Simulation-Based Engineering Science: A National Science Foundation Blue Ribbon Report,” 03-May-2006. [Online]. Available: https://www.nsf.gov/pubs/reports/sbes_final_report.pdf.
* (30) R. Batterman, “The Tyranny of Scales,” The Oxford Handbook of Philosophy of Physics, Feb. 2013.
* (31) A. E. H. Love, A Treatise on the Mathematical Theory of Elasticity, 4th revised edition. New York: Dover Publications, 2011.
* (32) I. Todhunter and K. Pearson, A history of the theory of elasticity and of the strength of materials: from Galilei to Lord Kelvin, 2 vols. New York: Dover Publications, 1960.
* (33) A. N. Schofield and C. P. Wroth, Critical State Soil Mechanics. Maidenhead, UK: McGraw-Hill, 1968.
* (34) M. Bolton, Guide to Soil Mechanics. London: Macmillan, 1979.
* (35) J. H. Atkinson and P. L. Bransby, Mechanics of Soils: An Introduction to Critical State Soil Mechanics. London; New York: McGraw-Hill, 1978.
* (36) R. Butterfield, “A natural compression law for soils (an advance on e–log p′),” Géotechnique, vol. 29, no. 4, pp. 469–480, Dec. 1979.
* (37) K. Hashiguchi, “On the linear relations of V–ln p and ln v–ln p for isotropic consolidation of soils,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 19, no. 5, pp. 367–376, 1995.
* (38) B. O. Hardin and W. L. Black, “Vibration Modulus of Normally Consolidated Clay,” Journal of the Soil Mechanics and Foundations Division, vol. 94, no. 2, pp. 353–370, 1968.
* (39) S. Shibuya, S. C. Hwang, and T. Mitachi, “Elastic shear modulus of soft clays from shear wave velocity measurement,” Géotechnique, vol. 47, no. 3, pp. 593–601, Jun. 1997.
* (40) P. J. Vardanega and M. D. Bolton, “Stiffness of Clays and Silts: Normalizing Shear Modulus and Shear Strain,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 139, no. 9, pp. 1575–1589, Sep. 2013.
* (41) I. F. Collins and G. T. Houlsby, “Application of thermomechanical principles to the modelling of geotechnical materials,” Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, vol. 453, no. 1964, pp. 1975–2001, Sep. 1997.
* (42) G. T. Houlsby, A. Amorosi, and E. Rojas, “Elastic moduli of soils dependent on pressure: a hyperelastic formulation,” Géotechnique, vol. 55, no. 5, pp. 383–392, 2005.
* (43) G. A. Maugin, The Thermomechanics of Plasticity and Fracture, 1st edition. Cambridge England; New York: Cambridge University Press, 1992.
* (44) J. Kestin and J. R. Rice, “Paradoxes in the Application of Thermodynamics to Strained Solids,” in A Critical Review of Thermodynamics, E. B. Stuart, A. J. Brainard, and B. Gal-Or, Eds. Baltimore: Mono Book Company, 1970, pp. 275–298.
* (45) T. Hueckel and G. Maier, “Incremental boundary value problems in the presence of coupling of elastic and plastic deformations: A rock mechanics oriented theory,” International Journal of Solids and Structures, vol. 13, no. 1, pp. 1–15, Jan. 1977.
* (46) I. F. Collins, “Associated and Non-Associated Aspects of the Constitutive Laws for Coupled Elastic/Plastic Materials,” International Journal of Geomechanics, vol. 2, no. 2, pp. 259–267, 2002.
* (47) J. Kestin, “Local-equilibrium formalism applied to mechanics of solids,” International Journal of Solids and Structures, vol. 29, no. 14, pp. 1827–1836, Jan. 1992.
* (48) M. A. Biot, “Theory of Stress-Strain Relations in Anisotropic Viscoelasticity and Relaxation Phenomena,” Journal of Applied Physics, vol. 25, no. 11, pp. 1385–1391, Nov. 1954.
* (49) J. R. Rice, “Inelastic constitutive relations for solids: An internal-variable theory and its application to metal plasticity,” Journal of the Mechanics and Physics of Solids, vol. 19, no. 6, pp. 433–455, Nov. 1971.
* (50) R. K. P. Zia, E. F. Redish, and S. R. McKay, “Making sense of the Legendre transform,” American Journal of Physics, vol. 77, no. 7, pp. 614–622, Jun. 2009.
* (51) C. M. Szalwinski, “The particle stress tensor,” Géotechnique, vol. 33, no. 2, pp. 181–182, Jun. 1983.
* (52) J. D. Goddard and J. E. Enderby, “Nonlinear elasticity and pressure-dependent wave speeds in granular media,” Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences, vol. 430, no. 1878, pp. 105–131, Jul. 1990.
* (53) F. E. Richart, Jr., J. R. Hall, Jr., and R. D. Woods, Vibrations of Soils and Foundations. Englewood Cliffs, N.J.: Prentice Hall, 1970.
* (54) P. Dakoulas and Y. Sun, “Fine Ottawa Sand: Experimental Behavior and Theoretical Predictions,” Journal of Geotechnical Engineering, vol. 118, no. 12, pp. 1906–1923, Dec. 1992.
* (55) L. Yang and L. Salvati, “Small Strain Properties of Sands with Different Cement Types,” International Conferences on Recent Advances in Geotechnical Earthquake Engineering and Soil Dynamics, May 2010.
* (56) J. Wartman, M. F. Natale, and P. M. Strenk, “Immediate and Time-Dependent Compression of Tire Derived Aggregate,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 133, no. 3, pp. 245–256, Mar. 2007.
* (57) O. L. Anderson, “2 - Determination and Some Uses of Isotropic Elastic Constants of Polycrystalline Aggregates Using Single-Crystal Data,” in Physical Acoustics, vol. 3, W. P. Mason, Ed. Academic Press, 1965, pp. 43–95.
* (58) R. G. Munro, “Effective Medium Theory of the Porosity Dependence of Bulk Moduli,” Journal of the American Ceramic Society, vol. 84, no. 5, pp. 1190–1192, May 2001.
* (59) M. Krief, J. Garat, J. Stellingwerff, and J. Ventre, “A Petrophysical Interpretation Using the Velocities of P and S Waves (Full-Waveform Sonic),” The Log Analyst, vol. 31, no. 06, Nov. 1990.
* (60) G. R. Pickett, “Acoustic Character Logs and Their Applications in Formation Evaluation,” Journal of Petroleum Technology, vol. 15, no. 06, pp. 659–667, Jun. 1963.
* (61) M. A. Knackstedt, C. H. Arns, and W. Val Pinczewski, “Velocity–porosity relationships: Predictive velocity model for cemented sands composed of multiple mineral phases,” Geophysical Prospecting, vol. 53, no. 3, pp. 349–372, 2005.
* (62) K. H. Roscoe, A. N. Schofield, and A. Thurairajah, “Yielding of Clays in States Wetter than Critical,” Géotechnique, vol. 13, no. 3, pp. 211–240, Sep. 1963.
* (63) K. H. Roscoe, A. N. Schofield, and C. P. Wroth, “On the Yielding of Soils,” Géotechnique, vol. 8, no. 1, pp. 22–53, Mar. 1958.
* (64) R. Batterman, “Intertheory Relations in Physics,” in The Stanford Encyclopedia of Philosophy, Fall 2016., E. N. Zalta, Ed. Metaphysics Research Lab, Stanford University, 2016.
* (65) A. W. Jenike and R. T. Shield, “On the Plastic Flow of Coulomb Solids Beyond Original Failure,” Journal of Applied Mechanics, vol. 26, pp. 599–602, 1959.
## Packing Energy Examples
A contraction-swelling relation in $\nu$–$\ln\phi$ space takes the form:
$\delta\nu=-\,\xi\,\delta\phi/\phi$ (A1)
where $\xi$ denotes its tangential slope. The corresponding relation between
packing pressure and specific volume is given by
$\beta=\phi-{\phi}_{r}\,e^{({\nu}_{r}-\nu)/\xi}=0$ (A2)
where ${\phi}_{r}$ denotes the reference packing pressure at reference
specific volume (${\nu}_{r}$). The packing energy increment follows from Eqs.
(7) and (A2)
$\delta P\left(\nu\right)=-\,{\phi}_{r}\,e^{({\nu}_{r}-\nu)/\xi}\,\delta\nu=\xi\,\delta\phi$ (A3)
Integrating Eq. (A3) as a function of specific volume yields
$P\left(\nu\right)=\xi\,(\phi-{\phi}_{r})$ (A4)
Selecting a fully dispersed state as the reference state [62] yields
$P\left(\nu\right)=\xi\,\phi,\qquad{\phi}_{r}\to 0,\ {\nu}_{r}\to\infty$ (A5)
The contraction-swelling index in $\nu$–$\ln\phi$ space is related to the index in $\nu$–$\ln(p_{o}/p_{r})$ space through Eqs. (100) and (132):
$\xi=\kappa\,/\,[2-n-(a-1)\kappa/\nu]$ (A6)
Note that for a semi-logarithmic constitutive relation, the index for one
scale is a function of specific volume for the other scale.
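As a quick sanity check of Eq. (A4), the closed form can be compared against direct numerical integration of Eq. (A3). The sketch below is illustrative only: the values of $\xi$, ${\phi}_{r}$ and ${\nu}_{r}$ are hypothetical, not calibrated material constants.

```python
# Numerical check that integrating dP = -phi dnu (Eq. A3) from nu_r down to
# nu reproduces the closed form P = xi*(phi - phi_r) (Eq. A4).
import numpy as np
from scipy.integrate import quad

xi, phi_r, nu_r = 0.05, 10.0, 2.0   # hypothetical material values

def phi(nu):
    """Packing pressure from the contraction-swelling relation (A2)."""
    return phi_r * np.exp((nu_r - nu) / xi)

def P_closed(nu):
    """Closed-form packing energy (A4), taking P(nu_r) = 0."""
    return xi * (phi(nu) - phi_r)

def P_numeric(nu):
    """Packing energy by integrating dP = -phi(nu) dnu, per Eq. (A3)."""
    value, _ = quad(phi, nu, nu_r)
    return value

nu = 1.9
print(P_closed(nu), P_numeric(nu))  # both approximately 3.195
```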
The contraction-swelling relation in $\ln\nu$–$\ln\phi$ space takes the form:
${\delta\nu}/{\nu}=-\,{\xi}^{*}\,{\delta\phi}/{\phi}$ (A7)
where ${\xi}^{*}$ denotes the tangential slope. The corresponding relation
between packing pressure and specific volume is given by
$\beta=\phi-{\phi}_{r}{\left(\frac{{\nu}_{r}}{\nu}\right)}^{1/{\xi}^{*}}=0$ (A8)
The specific packing energy increment follows from Eqs. (7) and (A8)
$\delta P\left(\nu\right)=-\,{\phi}_{r}{\left(\frac{{\nu}_{r}}{\nu}\right)}^{1/{\xi}^{*}}\delta\nu=\nu\,{\xi}^{*}\,\delta\phi$ (A9)
Integrating Eq. (A9) as a function of specific volume yields
$P\left(\nu\right)=\left[\frac{{\xi}^{*}}{1-{\xi}^{*}}\right]{\phi}_{r}{\nu}_{r}\left[{\left(\frac{{\nu}_{r}}{\nu}\right)}^{(1-{\xi}^{*})/{\xi}^{*}}-1\right]=\left[\frac{{\xi}^{*}}{1-{\xi}^{*}}\right]{\phi}_{r}{\nu}_{r}\left[{\left(\frac{\phi}{{\phi}_{r}}\right)}^{1-{\xi}^{*}}-1\right]$ (A10)
The contraction-swelling index in $\ln\nu$–$\ln\phi$ space is related to the index in $\ln\nu$–$\ln(p_{o}/p_{r})$ space through Eqs. (102) and (132):
${\xi}^{*}={\kappa}^{*}\,/\,[2-n-(a-1){\kappa}^{*}]$ (A11)
Note that for a logarithmic-logarithmic constitutive relation, the index for
one scale is not a function of specific volume for the other scale.
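The closed form (A10) can likewise be checked symbolically: differentiating it with respect to $\nu$ must return $-\phi$ of Eq. (A8), in accordance with Eq. (A9). A minimal sketch, with `xs` standing in for ${\xi}^{*}$:

```python
# Symbolic check that dP/dnu = -phi, i.e. Eq. (A10) integrates Eq. (A9).
import sympy as sp

nu, nu_r, phi_r, xs = sp.symbols('nu nu_r phi_r xs', positive=True)

phi = phi_r * (nu_r / nu) ** (1 / xs)                # packing pressure (A8)
P = (xs / (1 - xs)) * phi_r * nu_r * \
    ((nu_r / nu) ** ((1 - xs) / xs) - 1)             # packing energy (A10)

print(sp.simplify(sp.diff(P, nu) + phi))             # prints 0
```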
Figure Captions
1 – Centric Deformations
2 – Straining of Solid and Interstitial Phases
3 – Changes in Packing Pressure
4 – Contraction-Swelling Constitutive Relation
5 – Uniformity Surface through the Current State
6 – Internal Energy
7 – Complementary Internal Energy
8 – Uniformity Surface for an Isotropic Material
9 – Multi-Scale and Modern Theories of Elasticity
10 – Dense Fine Ottawa Sand under Isotropic Unloading ($D_{r}=75\%,\
k{\kappa}^{*}=0.64$) (after Dakoulas et al. 1992 – reproduced with permission
from ASCE)
11 – Loose Fine Ottawa Sand under Isotropic Unloading ($D_{r}=30\%,\
k{\kappa}^{*}=0.64$) (after Dakoulas et al. 1992 – reproduced with permission
from ASCE)
12 – Volume Changes in TDA (tire chips) under saturated, drained, isotropic
compression (Wartman et al. 2007 – reproduced with permission from ASCE)
$\displaystyle\left<\pi_{i}(x)\pi_{0}(0)\right>=\left<\pi_{0}(x)\pi_{i}(0)\right>=\frac{i}{2}\left(1+\frac{\xi_{2}^{2}}{\xi_{1}}\right)\mbox{Sign}(t)\partial_{i}\delta^{(3)}(\vec{x})+\frac{i\xi_{2}}{4}t^{2}\mbox{Sign}(t)\partial_{i}\vec{\partial}^{2}\delta^{(3)}(\vec{x}),$
$\displaystyle\left<\pi_{0}(x)\pi_{0}(0)\right>=i\left(1+\frac{\xi_{2}^{2}}{\xi_{1}}\right)\delta(t)\delta^{(3)}(\vec{x})+\frac{i\xi_{2}}{2}t^{2}\absolutevalue{t}\vec{\partial}^{2}\delta^{(3)}(\vec{x}).$
It can be checked directly that for every choice of $\xi_{1},\xi_{2}$, the
Ward Identities are all satisfied, which reveals the Carrollian conformal
invariance of the theory. Though these expressions look complicated, we can
select the Landau-type gauge $\xi_{2}=0$ to simplify them and obtain the
nonvanishing correlators listed in (4.16). Here we list them again for
completeness.
$\displaystyle\left<A_{v}(x)A_{v}(0)\right>=-2i\absolutevalue{t}\delta^{(3)}(\vec{x}),\quad\left<A_{v}(x)\pi_{i}(0)\right>=-\left<\pi_{i}(x)A_{v}(0)\right>=\frac{3i}{2}\absolutevalue{t}\partial_{i}\delta^{(3)}(\vec{x}),$
(B.19)
$\displaystyle\left<A_{v}(x)\pi_{0}(0)\right>=-\left<\pi_{0}(x)A_{v}(0)\right>=i\mbox{Sign}(t)\delta^{(3)}(\vec{x}),$
$\displaystyle\left<A_{i}(x)\pi_{j}(0)\right>=-\left<\pi_{j}(x)A_{i}(0)\right>=-\frac{i}{2}\delta_{ij}\mbox{Sign}(t)\delta^{(3)}(\vec{x}),$
$\displaystyle\left<\pi_{i}(x)\pi_{j}(0)\right>=\left<\pi_{j}(x)\pi_{i}(0)\right>=\frac{i}{2}\absolutevalue{t}\left(\partial_{i}\partial_{j}\delta^{(3)}(\vec{x})+\delta_{ij}\vec{\partial}^{2}\delta^{(3)}(\vec{x})\right),$
$\displaystyle\left<\pi_{i}(x)\pi_{0}(0)\right>=\left<\pi_{0}(x)\pi_{i}(0)\right>=\frac{i}{2}\mbox{Sign}(t)\partial_{i}\delta^{(3)}(\vec{x}),\quad\left<\pi_{0}(x)\pi_{0}(0)\right>=i\delta(t)\delta^{(3)}(\vec{x}).$
## Appendix C: Ward identities and 2-point correlation functions
In this Appendix, we review the constraints on the 2-point correlation
functions of the primary operators from the Ward identities of Carrollian
conformal symmetries. There are four classes of correlators with different structures, labeled Case 1.1, Case 1.2, Case 2.1, and Case 2.2. It turns out that the correlators discussed in the main text belong to Case 2.1.
Similar to the case in CFT, the structure of 2-point correlation functions in
CCFT is very much constrained by the Ward identities of the symmetries. For
the Carrollian conformal symmetries, the corresponding Ward identities are
listed in (C.1),
$$\begin{aligned}
P_{\mu}:&\quad(\partial^{\mu}_{1}+\partial^{\mu}_{2})\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>=0,\\
D:&\quad x^{\mu}\partial^{\mu}\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>+(\Delta_{1}+\Delta_{2})\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>=0,\\
J_{ij}:&\quad(x^{i}\partial^{j}-x^{j}\partial^{i})\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>+\left<(J^{ij}\mathcal{O}_{1})\mathcal{O}_{2}\right>+\left<\mathcal{O}_{1}(J^{ij}\mathcal{O}_{2})\right>=0,\\
B_{i}:&\quad x^{i}\partial_{t}\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>+\left<[B_{i},\mathcal{O}_{1}]\mathcal{O}_{2}\right>+\left<\mathcal{O}_{1}[B_{i},\mathcal{O}_{2}]\right>=0,\\
K_{0}:&\quad\left(\left<[K_{0},\mathcal{O}_{1}]\mathcal{O}_{2}\right>+\left<\mathcal{O}_{1}[K_{0},\mathcal{O}_{2}]\right>\right)-x^{i}\left(\left<[B_{i},\mathcal{O}_{1}]\mathcal{O}_{2}\right>-\left<\mathcal{O}_{1}[B_{i},\mathcal{O}_{2}]\right>\right)=0,\\
K_{i}:&\quad\left(\left<[K_{i},\mathcal{O}_{1}]\mathcal{O}_{2}\right>+\left<\mathcal{O}_{1}[K_{i},\mathcal{O}_{2}]\right>\right)+x^{i}(\Delta_{1}-\Delta_{2})\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>\\
&\quad+x^{j}\left(\left<[J^{i}_{~{}j},\mathcal{O}_{1}]\mathcal{O}_{2}\right>-\left<\mathcal{O}_{1}[J^{i}_{~{}j},\mathcal{O}_{2}]\right>\right)+t\left(\left<[B_{i},\mathcal{O}_{1}]\mathcal{O}_{2}\right>-\left<\mathcal{O}_{1}[B_{i},\mathcal{O}_{2}]\right>\right)=0.
\end{aligned}\tag{C.1}$$
It should be mentioned that we have used the techniques explained in the appendix of [43] to simplify the expressions for the Carrollian special conformal transformation generators $K_{0},K_{i}$. These identities hold for all of the correlators appearing in this article. The one from the translation generator $P_{\mu}$ requires that
$\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>=f(x^{\mu}),$ (C.2)
where $x^{\mu}=x^{\mu}_{1}-x^{\mu}_{2}$.
As shown in [43], by solving the Ward identities, the 2-point correlators of the operators in a CCFT are generically composed of two independent types, one being of power-law form, the other being proportional to the Dirac $\delta$-function. In [43], the authors discussed the power-law type in detail. In this appendix, we mainly focus on the 2-point correlators of the primary operators in chain representations, and pay particular attention to the correlators that appear as generalized functions999A nice introduction to generalized functions can be found in [58]. in general $d$ dimensions, including the Dirac $\delta$-functions. The techniques used here are similar to those in [43], to which we refer the reader for further details.
It should also be stressed that here we only consider the correlators of the primary operators. Some operators in the staggered modules, like $\pi$ in the magnetic scalar theory, are special in the sense that they are neither primary ($K\mathcal{O}\neq 0$) nor descendant, and their correlators cannot be constrained by the discussion here. Even though these operators do obey some Ward identities following from their transformation laws, which help determine their correlators, general rules for the correlators of such operators are lacking.
For the primary operators $\mathcal{O}_{1},\mathcal{O}_{2}$, the Ward identity of $D$ implies that the 2-point correlation function $f=\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>$ is a homogeneous function:
$D:\quad(t\partial_{t}+x^{i}\partial_{i})f(t,\vec{x})+(\Delta_{1}+\Delta_{2})f(t,\vec{x})=0.$ (C.3)
The solution to this equation is a combination of two independent solutions,
the power-law functions and the generalized functions like the (derivatives
of) Dirac $\delta$-distribution. For example, the one-dimensional version of
this differential equation is
$x\,\partial_{x}f(x)+\lambda f(x)=0,$ (C.4)
with the solution being
$f(x)=c_{1}x^{-\lambda}+c_{2}\,\partial_{x}^{(\lambda-1)}\delta(x),$ (C.5)
where the $c_{i}$ are constants and the $c_{2}$ term appears only for $\lambda=1,2,\dots$. In the Carrollian case, the $t$ direction and the $x_{i}$ directions can be treated separately, and thus the solution to (C.3) is simply
$f(t,\vec{x})=g(t)g(\vec{x}),$ (C.6)
where $g(t)$ and $g(\vec{x})$ are the homogeneous generalized functions of the
form (C.5).
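For the power-law branch of (C.5) this is easy to verify directly; for the distributional branch, the identity $x\,\delta^{(n)}(x)=-n\,\delta^{(n-1)}(x)$ shows that the $\delta$-term solves (C.4) precisely when $n=\lambda-1$. A minimal symbolic sketch of the power-law check:

```python
# Verify that f(x) = x**(-lam) solves x f'(x) + lam f(x) = 0, Eq. (C.4).
# For the delta branch, x * delta^(n)(x) = -n * delta^(n-1)(x) gives
# x f' + lam f = (lam - n - 1) * delta^(n)(x), vanishing for n = lam - 1.
import sympy as sp

x = sp.Symbol('x', positive=True)
lam = sp.Symbol('lam', positive=True)

f = x ** (-lam)
print(sp.simplify(x * sp.diff(f, x) + lam * f))  # prints 0
```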
Another important constraint is from the Ward identity of $B_{i}$ on the
lowest-level correlators $f=\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>$:
$B_{i}:\quad x^{i}\partial_{t}f(t,\vec{x})=0.$ (C.7)
By the “lowest-level”, we mean
$[B_{i},\mathcal{O}_{1}]=[B_{i},\mathcal{O}_{2}]=0$. Considering the fact
$x\delta(x)=0$, we find four independent solutions,
$\partial_{t}f=0:\quad\begin{cases}f(t,\vec{x})\propto P(\vec{x}),&\textbf{(Case 1.1)}\\[2pt] f(t,\vec{x})\propto\prod_{i}\partial_{i}^{n_{i}}(\vec{\partial}^{2})^{n}\delta^{(d-1)}(\vec{x}),\quad\Delta_{1}+\Delta_{2}=d-1+\sum_{i}n_{i}+2n,&\textbf{(Case 1.2)}\end{cases}$
$x^{i}f=0:\quad\begin{cases}f(t,\vec{x})\propto P(t)\,\delta^{(d-1)}(\vec{x}),&\textbf{(Case 2.1)}\\[2pt] f(t,\vec{x})\propto\partial_{t}^{n_{t}}\delta(t)\,\delta^{(d-1)}(\vec{x}),\quad\Delta_{1}+\Delta_{2}=d+n_{t},&\textbf{(Case 2.2)}\end{cases}$
where both $P(t)$ and $P(\vec{x})$ are power-law functions; Case 1.2 appears for $\Delta_{1}+\Delta_{2}=d-1,d,d+1,\dots$ and Case 2.2 appears for $\Delta_{1}+\Delta_{2}=d,d+1,d+2,\dots$. In fact, the correlators of the primary operators in this paper all belong to Case 2.1.
Case 1.1, in which $f(t,\vec{x})\propto P(\vec{x})$ is a power-law function, has been discussed in [43]. In the rest of this appendix, we first recall the constraints in Case 1.1 and then discuss the other cases.
### C.1 Case 1.1 and Case 1.2
As shown in [43], the chain representations can take the following forms:
$(j),\qquad(j)\rightarrow(j)\ \ (j\neq 0),\qquad(0)\rightarrow(1)\rightarrow(0),$ (C.8)
$\cdots\rightarrow(j)\rightarrow(j+1)\rightarrow(j+2)\rightarrow\cdots,\qquad\cdots\rightarrow(j)\rightarrow(j-1)\rightarrow(j-2)\rightarrow\cdots.$
For Case 1.1, the correlators are power-law functions of $x^{\mu}$; the non-vanishing 2-point correlators appear only when $\mathcal{O}_{1},\mathcal{O}_{2}$ have (partially) inverse structures, and the selection rule is $\Delta_{1}=\Delta_{2}$. The correlator takes the form
$\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>=\frac{C~{}(t/|\vec{x}|)^{r}~{}I^{m_{1},m_{2}}_{j_{1},j_{2}}}{|\vec{x}|^{(\Delta_{1}+\Delta_{2})}}\delta_{\Delta_{1},\Delta_{2}},$
(C.9)
where $I$ is a rank-$0$ homogeneous function of the $x_{i}$ representing the tensor structure of the $\mathcal{O}_{i}$.
For Case 1.2, $\Delta_{1}+\Delta_{2}\geq d-1\in\mathbb{Z}$, there exists
another solution for the lowest-level 2-point correlators,
$f(t,\vec{x})\propto\prod_{i}\partial_{i}^{n_{i}}(\vec{\partial}^{2})^{n}\delta^{(d-1)}(\vec{x}),\hskip
12.91663pt\sum_{i}n_{i}=\Delta_{1}+\Delta_{2}-(d-1)-2n,\hskip
8.61108ptn_{i}\in\mathbf{N}^{+}.$ (C.10)
For the higher-level correlators, the solutions are of the form
$f^{\prime}(t,\vec{x})\propto
t^{r}\prod_{i}\partial_{i}^{n^{\prime}_{i}}(\vec{\partial}^{2})^{n}\delta^{(d-1)}(\vec{x})$
with
$\sum_{i}n^{\prime}_{i}-2n-r=\Delta_{1}+\Delta_{2}-(d-1),n^{\prime}_{i}\in\mathbf{N}^{+}$.
The full restriction on the 2-point correlators in Case 1.2 is similar to that in Case 1.1, except when one of the operators is a scalar, which will be discussed separately below. The reason the selection rule is (almost) the same is that the power laws are proportional to (derivatives of) Dirac $\delta$-functions under canonical regularization [58]:
$\displaystyle\frac{2}{\Omega_{(d-1)}}\left.\frac{r^{\lambda}}{\Gamma\left(\frac{\lambda+d-1}{2}\right)}\right|_{\lambda=-(d-1)-2k}=\frac{(-1)^{k}(d-2)!}{2^{k}k!(d-1+2k-2)!}(\vec{\partial}^{2})^{k}\delta^{(d-1)}(\vec{x})$
(C.11)
for $k=0,1,2,...$, with $r^{2}=\sum_{i}x_{i}^{2}$. As a result, most of the
constraints from the Ward identities are the same as the ones in Case 1.1.
Thus if $\Delta_{1}+\Delta_{2}\geq d-1\in\mathbb{Z}$ and
$\Delta_{1}=\Delta_{2}$, the correlators are non-vanishing for
$\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ in partially inverse representations,
and the structures of the correlators are of the form
$\left<\mathcal{O}_{1,l_{1}}^{\\{s_{1}\\}}\mathcal{O}_{2,l_{2}}^{\\{s_{2}\\}}\right>=C~{}t^{r}~{}(D_{s_{1}}D_{s_{2}}(\vec{\partial}^{2})^{n}\delta^{(d-1)}(\vec{x})-\text{traces}),\quad\text{with
}D_{s_{i}}=\partial_{s_{i,1}}\cdots\partial_{s_{i,l_{i}}}$ (C.12)
The explicit selection rules are rather tedious, and we do not repeat them here; the interested reader may refer to [43] for detailed discussions.
The exceptional situation in Case 1.2 is when one of the primary operators is in the scalar representation $(0)$. In this case, there is one additional set of selection rules, due to the special property of the Dirac $\delta$-function. In the following, we explain how this additional selection rule emerges and
show the structure of the correlators in this situation. Firstly, for the
simplest case that both $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ are scalars
with $\Delta_{1}+\Delta_{2}=d-1$, the correlator is
$f=\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>\propto\delta^{(d-1)}(\vec{x})$
in Case 1.2. It is known that
Case 1.1: $\displaystyle x_{i}f\propto\frac{x_{i}}{r^{(d-1)}}\neq 0,$ (C.13)
Case 1.2: $\displaystyle x_{i}f\propto x_{i}\delta^{(d-1)}(\vec{x})=0,$
which makes the constraints from the Ward identities of $K_{i}$ on $f$ for
Case 1.1 and 1.2 different,
Case 1.1: $\displaystyle\text{solution: }f=\frac{C_{1}}{r^{(d-1)}},$
$\displaystyle\text{constraint: }\Delta_{1}=\Delta_{2}=\frac{d-1}{2},$ (C.14)
Case 1.2: $\displaystyle\text{solution: }f=C_{2}\delta^{(d-1)}(\vec{x}),$
$\displaystyle\text{constraint: }\Delta_{1}+\Delta_{2}=d-1.$
Thus for Case 1.2, we have the selection rule
$\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>=C~{}\delta^{(d-1)}(\vec{x}),\hskip
12.91663pt\mathcal{O}_{1},\mathcal{O}_{2}\in(0),\qquad\Delta_{1}+\Delta_{2}=d-1.$
(C.15)
Next, we consider the case that $\mathcal{O}_{1}$ is in a more complicated chain representation. Suppose first that $\mathcal{O}_{1}\in(j)$ is a symmetric traceless tensor (STT) with spin $j$ and $\mathcal{O}_{2}\in(0)$ is a scalar.
Using the fact
$x_{i}\partial_{i}\delta^{(d-1)}(\vec{x})=-\delta^{(d-1)}(\vec{x})$, we find
that the restrictions from the Ward identities of $K_{i}$ are
$\left<\mathcal{O}_{1}^{\\{s_{1},...,s_{j}\\}}\mathcal{O}_{2}\right>=C~{}(\partial_{s1}\cdots\partial_{s_{j}}\delta^{(d-1)}(\vec{x})-\text{traces}),\qquad\Delta_{1}=1,\quad\Delta_{2}=d-2+j.$
(C.16)
The “traces” term is the trace of
$\partial_{s1}\cdots\partial_{s_{j}}\delta^{(d-1)}(\vec{x})$, and subtracting
this term makes the correlators respect the traceless condition of
$\mathcal{O}_{1}$. Moreover for $\mathcal{O}_{1}\in(j)_{2}\to(j)_{1}$ and
$\mathcal{O}_{2}\in(0)$, we have101010Here we use subscripts to distinguish
different sectors of $(j)_{2}\to(j)_{1}$ with the same spin $j$. Similar
notation for $(0)_{3}\to(1)_{2}\to(0)_{1}$ will appear below.
$\displaystyle\left<\mathcal{O}_{1,(j)_{2}}^{\\{s_{1},...,s_{j}\\}}\mathcal{O}_{2}\right>=C~{}(\partial_{s_{1}}\cdots\partial_{s_{j}}\delta^{(d-1)}(\vec{x})-\text{traces}),\qquad\left<\mathcal{O}_{1,\text{others}}\mathcal{O}_{2}\right>=0,$
(C.17) $\displaystyle\qquad\Delta_{1}=1,\quad\Delta_{2}=d-2+j,$
For $\mathcal{O}_{1}$ in a decreasing chain representation, $\mathcal{O}_{1}\in(j+n)\to(j+n-1)\to\cdots\to(j+1)\to(j)$, and $\mathcal{O}_{2}\in(0)$, we have
$\left<\mathcal{O}_{1,l_{1}=j+r}^{\\{s_{1},...,s_{l_{1}}\\}}\mathcal{O}_{2}\right>=\frac{C~{}t^{r}}{r!}~{}(\partial_{s_{1}}\cdots\partial_{s_{l_{1}}}\delta^{(d-1)}(\vec{x})-\text{traces}),\qquad\Delta_{1}=1,\Delta_{2}=d-2+j,$
(C.18)
where $\mathcal{O}_{1,l_{1}}$ is the spin-$l_{1}$ part of $\mathcal{O}_{1}$.
For $\mathcal{O}_{1}$ in an increasing chain representation,
$\mathcal{O}_{1}\in(j)\to(j+1)\cdots\to(j+n-1)\to(j+n)$, and
$\mathcal{O}_{2}\in(0)$, the correlators vanish except for the highest-rank
sector in $\mathcal{O}_{1}$. Namely, we have
$\left<\mathcal{O}_{1,(j)}^{\\{s_{1},...,s_{j}\\}}\mathcal{O}_{2}\right>=C~{}(\partial_{s_{1}}\cdots\partial_{s_{j}}\delta^{(d-1)}(\vec{x})-\text{traces}),\quad\left<\mathcal{O}_{1,\text{others}}\mathcal{O}_{2}\right>=0,\quad\Delta_{1}+\Delta_{2}=d-1+j.$
(C.19)
And finally, for $\mathcal{O}_{1}\in(0)_{3}\to(1)_{2}\to(0)_{1}$ and
$\mathcal{O}_{2}\in(0)$, we have
$\left<\mathcal{O}_{1,(0)_{3}}\mathcal{O}_{2}\right>=C~{}\delta^{(d-1)}(\vec{x}),\qquad\left<\mathcal{O}_{1,\text{others}}\mathcal{O}_{2}\right>=0,\qquad\Delta_{1}+\Delta_{2}=d-1.$
(C.20)
We have presented all the exceptional cases involving a scalar primary operator. Here we only discussed the case that the other operator belongs to a chain representation; we do not discuss the case that it is in a net-like representation.
### C.2 Case 2.1 and 2.2
Case 2.1 and Case 2.2 come from the fact that $x^{i}\delta^{(d-1)}(\vec{x})=0$
solves the equation of the Ward identities of $B_{i}$. The selection rules for
these two cases are very different from the ones in Case 1.1 and Case 1.2.
First we consider Case 2.1 with the operators being symmetric traceless tensors (STTs) in the singlet representations $(j)$. Since the spatial dependence in the correlators is always $\delta^{(d-1)}(\vec{x})$, the only
possible non-vanishing lowest-level correlator is from the case that
$\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ have the same spin, $l_{1}=l_{2}$. It
can be checked that the Ward identities of $K_{i}$ are manifestly satisfied
using the fact that $x^{i}\delta^{(d-1)}(\vec{x})=0$, and there is no
selection rule on $\Delta_{1}$ and $\Delta_{2}$. Therefore we have
$\displaystyle\left<\mathcal{O}_{1}\mathcal{O}_{2}\right>=C~{}t^{(d-1-\Delta_{1}-\Delta_{2})}\delta^{(d-1)}(\vec{x}),$
$\displaystyle l_{1}=l_{2}=0,$ (C.21)
$\displaystyle\left<\mathcal{O}_{1}^{i_{1}}\mathcal{O}_{2}^{j_{1}}\right>=C~{}\delta^{i_{1}}_{j_{1}}t^{(d-1-\Delta_{1}-\Delta_{2})}\delta^{(d-1)}(\vec{x}),$
$\displaystyle l_{1}=l_{2}=1,$
$\displaystyle\left<\mathcal{O}_{1}^{i_{1}i_{2}}\mathcal{O}_{2}^{j_{1}j_{2}}\right>=C\left(\delta^{i_{1}}_{j_{1}}\delta^{i_{2}}_{j_{2}}+\delta^{i_{1}}_{j_{2}}\delta^{i_{2}}_{j_{1}}-\frac{2}{d-1}\delta^{i_{1}i_{2}}\delta_{j_{1}j_{2}}\right)t^{(d-1-\Delta_{1}-\Delta_{2})}\delta^{(d-1)}(\vec{x}),$
$\displaystyle l_{1}=l_{2}=2,$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\vdots$
$\displaystyle\left<\mathcal{O}_{1}^{i_{1}\cdots
i_{s}}\mathcal{O}_{2}^{j_{1}\cdots
j_{s}}\right>=C\left(\delta^{i_{1}}_{(j_{1}}\cdots\delta^{i_{s}}_{j_{s})}-\text{trace}\right)t^{(d-1-\Delta_{1}-\Delta_{2})}\delta^{(d-1)}(\vec{x}),$
$\displaystyle l_{1}=l_{2}=s.$
The “trace” terms cancel the traces over the $\mathcal{O}_{1}$ indices and the $\mathcal{O}_{2}$ indices, as both $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ are STTs. The coefficients $C$ are undetermined constants.
For the chain representations, there are very limited restrictions on the correlators being non-vanishing. The calculations show that if two chain representations have the same sub-sector, the correlators of the operators in these sub-sectors and in the higher levels are non-vanishing. In other words, if
$\displaystyle\mathcal{O}_{1}$
$\displaystyle\in\cdots\to(j_{n+1})\to(j_{n})\to(j_{n-1})\to\cdots,$ (C.22)
$\displaystyle\mathcal{O}_{2}$
$\displaystyle\in\cdots\to(j_{m+1})\to(j_{m})\to(j_{m-1})\to\cdots,\qquad\text{with
}j_{n}=j_{m}$
then
$\left<\mathcal{O}_{1,l_{1}=j_{\geq n}}\mathcal{O}_{2,l_{2}=j_{\geq
m}}\right>\neq 0.$ (C.23)
For the chains, there are selection rules on $\Delta_{1}$ and $\Delta_{2}$, but the specific rule must be worked out case by case. For example, we have
$\displaystyle\left<\mathcal{O}_{1,l_{1}}^{\\{s_{1},...,s_{l_{1}}\\}}\mathcal{O}_{2,l_{2}}^{\\{r_{1},...,r_{l_{2}}\\}}\right>$
(C.24)
$\displaystyle=C\frac{(d-1-\Delta_{1}-\Delta_{2})!~{}t^{(d-1-\Delta_{1}-\Delta_{2}+l_{1}+l_{2})}}{(d-1-\Delta_{1}-\Delta_{2}+l_{1}+l_{2})!}(\partial_{s1}\cdots\partial_{s_{l_{1}}}\partial_{r1}\cdots\partial_{r_{l_{2}}}\delta^{(d-1)}(\vec{x})-\text{traces})$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\text{for
}\mathcal{O}_{1},\mathcal{O}_{2}\in\cdots\to(2)\to(1)\to(0),\quad\text{with
}\Delta_{1}=\Delta_{2}=1.$
$\displaystyle\left<\mathcal{O}_{1,l_{1}}^{\\{s_{1},...,s_{l_{1}}\\}}\mathcal{O}_{2,l_{2}}^{\\{r_{1},...,r_{l_{2}}\\}}\right>$
(C.25)
$\displaystyle=C\frac{(d-1-\Delta_{1}-\Delta_{2})!~{}t^{(d-1-\Delta_{1}-\Delta_{2}+l_{1}+l_{2}-2)}}{(d-1-\Delta_{1}-\Delta_{2}+l_{1}+l_{2})!}\left(\delta_{(s_{1}}^{(r_{1}}\partial_{s2}\cdots\partial_{s_{l_{1}})}\partial^{r2}\cdots\partial^{r_{l_{2}})}\delta^{(d-1)}(\vec{x})-\text{traces}\right)$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\text{for
}\mathcal{O}_{1},\mathcal{O}_{2}\in\cdots\to(3)\to(2)\to(1),\quad\text{with
}\Delta_{1}=\Delta_{2}=0.$
In particular, we have
$\displaystyle\begin{aligned}
&\left<\mathcal{O}_{1,(0)_{3}}\mathcal{O}_{2,(0)_{3}}\right>=C~{}\frac{t^{(d-1-2\Delta+2)}}{(5-2\Delta)(\Delta-3)}\partial^{2}\delta^{(d-1)}(\vec{x})\\\
\end{aligned}$ (C.26) $\displaystyle\begin{aligned}
&\left<\mathcal{O}_{1,(0)_{3}}\mathcal{O}_{2,(1)_{2}}^{r}\right>=C~{}\frac{t^{(d-1-2\Delta+1)}}{(\Delta-3)}\partial_{r}\delta^{(d-1)}(\vec{x})\\\
&\left<\mathcal{O}_{1,(1)_{2}}^{s}\mathcal{O}_{2,(0)_{3}}\right>=C~{}\frac{t^{(d-1-2\Delta+1)}}{(\Delta-3)}\partial_{s}\delta^{(d-1)}(\vec{x})\\\
\end{aligned}$ $\displaystyle\begin{aligned}
&\left<\mathcal{O}_{1,(0)_{3}}\mathcal{O}_{2,(0)_{1}}\right>=C~{}t^{(d-1-2\Delta)}\delta^{(d-1)}(\vec{x})\\\
&\left<\mathcal{O}_{1,(1)_{2}}^{s}\mathcal{O}_{2,(1)_{2}}^{r}\right>=C~{}\frac{1-\Delta}{\Delta-3}t^{(d-1-2\Delta)}\delta_{sr}\delta^{(d-1)}(\vec{x})\\\
&\left<\mathcal{O}_{1,(0)_{1}}\mathcal{O}_{2,(0)_{3}}\right>=C~{}t^{(d-1-2\Delta)}\delta^{(d-1)}(\vec{x})\qquad\qquad\qquad\qquad\left<\text{others}\right>=0\\\
\end{aligned}$ $\displaystyle\qquad\qquad\qquad\qquad\qquad\text{for
}\mathcal{O}_{1},\mathcal{O}_{2}\in(0)_{3}\to(1)_{2}\to(0)_{1},\text{ with
}\Delta_{1}=\Delta_{2}=\Delta$
The selection rules for Case 2.2 are the same as those for Case 2.1. Unlike the relation between Case 1.1 and Case 1.2, there is no exceptional situation here: the analog of the exceptional case in Case 1.2 would be $\Delta_{1}+\Delta_{2}=d$ with correlator $f\propto\delta(t)\delta^{(d-1)}(\vec{x})$, but the constraints from the Ward identities of $K_{i}$ give similar selection rules in Case 2.1 and Case 2.2.
The correlators appearing in the main text are all of Case 2.1. The primary operator in the electric scalar theory is the field $\phi$, and its correlator is
$\left<\phi(x)\phi(0)\right>=\frac{i}{2}\absolutevalue{t}\delta^{(d-1)}(\vec{x}).$
It can be checked that the correlator satisfies the Ward identities, whether the temporal part is a power of $t$ or of $\absolutevalue{t}$, and this correlator matches the form of (C.21). Similar to the electric scalar theory, the magnetic scalar theory has the primary operator $\phi$ with $\left<\phi(x)\phi(0)\right>=0$, which obviously matches the form of (C.21). The primary operators in the electric sector of the electromagnetic theory are $A_{\mu}=(A_{0},A_{i})$. They are in the $(1)\to(0)$ representation and the corresponding correlators are given in (B.10). These correlators have the same form as (C.24). Finally, the fundamental operators $A_{\alpha}=(A_{v},A_{i},A_{0})$ in the magnetic sector of the electromagnetic theory are primary operators in the $(0)\to(1)\to(0)$ representation. Their correlators are given in (B.18) and match the ones in (C.26) with $\Delta_{1}=\Delta_{2}=1$.
## References
* [1] J. Levy-Leblond, _Une nouvelle limite non-relativiste du groupe de poincaré_ , _Annales de l’IHP Physique théorique_ 3 (1965) 1.
* [2] V. Sen Gupta, _On an Analogue of the Galileo Group_ , _Nuovo Cim._ 54 (1966) 512.
* [3] H. Bacry and J. Levy-Leblond, _Possible kinematics_ , _J. Math. Phys._ 9 (1968) 1605.
* [4] C. Duval, G.W. Gibbons, P.A. Horvathy and P.M. Zhang, _Carroll versus Newton and Galilei: two dual non-Einsteinian concepts of time_ , _Class. Quant. Grav._ 31 (2014) 085016 [1402.0657].
* [5] E. Bergshoeff, J. Gomis and G. Longhi, _Dynamics of Carroll Particles_ , _Class. Quant. Grav._ 31 (2014) 205009 [1405.2264].
* [6] J. Souriau, _Ondes et radiations gravitationnelles_ , _Colloques Internationaux du CNRS_ 220 (1973) 243.
* [7] C. Duval, G.W. Gibbons, P.A. Horvathy and P.M. Zhang, _Carroll symmetry of plane gravitational waves_ , _Class. Quant. Grav._ 34 (2017) 175003 [1702.08284].
* [8] M. Henneaux, _Geometry of Zero Signature Space-times_ , _Bull. Soc. Math. Belg._ 31 (1979) 47.
* [9] V. Belinski and M. Henneaux, _The Cosmological Singularity_ , Cambridge Monogr.Math.Phys., Cambridge Univ. Pr., Cambridge (2017), 10.1017/9781107239333.
* [10] R.F. Penna, _Near-horizon Carroll symmetry and black hole Love numbers_ , 1812.05643.
* [11] L. Donnay and C. Marteau, _Carrollian Physics at the Black Hole Horizon_ , _Class. Quant. Grav._ 36 (2019) 165002 [1903.09654].
* [12] L. Freidel and P. Jai-akson, _Carrollian hydrodynamics and symplectic structure on stretched horizons_ , 2211.06415.
* [13] J. Redondo-Yuste and L. Lehner, _Non-linear black hole dynamics and Carrollian fluids_ , 2212.06175.
* [14] J. de Boer, J. Hartong, N.A. Obers, W. Sybesma and S. Vandoren, _Carroll symmetry, dark energy and inflation_ , 2110.02319.
* [15] R. Casalbuoni, J. Gomis and D. Hidalgo, _World-Line Description of Fractons_ , 2107.09010.
* [16] F. Peña Benitez, _Fractons, Symmetric Gauge Fields and Geometry_ , 2107.13884.
* [17] C. Duval, G.W. Gibbons and P.A. Horvathy, _Conformal Carroll groups and BMS symmetry_ , _Class. Quant. Grav._ 31 (2014) 092001 [1402.5894].
* [18] H. Bondi, M.G.J. van der Burg and A.W.K. Metzner, _Gravitational waves in general relativity. 7. Waves from axisymmetric isolated systems_ , _Proc. Roy. Soc. Lond. A_ 269 (1962) 21.
* [19] R.K. Sachs, _Gravitational waves in general relativity. 8. Waves in asymptotically flat space-times_ , _Proc. Roy. Soc. Lond. A_ 270 (1962) 103.
* [20] R. Sachs, _Asymptotic symmetries in gravitational theory_ , _Phys. Rev._ 128 (1962) 2851.
* [21] G. Barnich and C. Troessaert, _Symmetries of asymptotically flat 4 dimensional spacetimes at null infinity revisited_ , _Phys. Rev. Lett._ 105 (2010) 111103 [0909.2617].
* [22] A. Bagchi, S. Detournay, R. Fareghbal and J. Simón, _Holography of 3D Flat Cosmological Horizons_ , _Phys. Rev. Lett._ 110 (2013) 141302 [1208.4372].
* [23] J. Hartong, _Holographic Reconstruction of 3D Flat Space-Time_ , _JHEP_ 10 (2016) 104 [1511.01387].
* [24] L. Ciambelli, R.G. Leigh, C. Marteau and P.M. Petropoulos, _Carroll Structures, Null Geometry and Conformal Isometries_ , _Phys. Rev. D_ 100 (2019) 046010 [1905.02221].
* [25] L. Donnay, A. Fiorucci, Y. Herfray and R. Ruzziconi, _Carrollian Perspective on Celestial Holography_ , _Phys. Rev. Lett._ 129 (2022) 071602 [2202.04702].
* [26] A. Bagchi, S. Banerjee, R. Basu and S. Dutta, _Scattering Amplitudes: Celestial and Carrollian_ , _Phys. Rev. Lett._ 128 (2022) 241601 [2202.08438].
* [27] L. Donnay, A. Fiorucci, Y. Herfray and R. Ruzziconi, _Bridging Carrollian and Celestial Holography_ , 2212.12553.
* [28] M. Henneaux and P. Salgado-Rebolledo, _Carroll contractions of Lorentz-invariant theories_ , _JHEP_ 11 (2021) 180 [2109.06708].
* [29] A. Bagchi, R. Basu, A. Kakkar and A. Mehra, _Flat Holography: Aspects of the dual field theory_ , _JHEP_ 12 (2016) 147 [1609.06203].
* [30] A. Bagchi, R. Basu, A. Mehra and P. Nandi, _Field Theories on Null Manifolds_ , _JHEP_ 02 (2020) 141 [1912.09388].
* [31] A. Bagchi, A. Mehra and P. Nandi, _Field Theories with Conformal Carrollian Symmetry_ , _JHEP_ 05 (2019) 108 [1901.10147].
* [32] E.A. Bergshoeff, J. Gomis and A. Kleinschmidt, _Non-Lorentzian theories with and without constraints_ , _JHEP_ 01 (2023) 167 [2210.14848].
* [33] E. Bergshoeff, J. Figueroa-O’Farrill and J. Gomis, _A non-lorentzian primer_ , _SciPost Phys. Lect. Notes_ 69 (2023) 1 [2206.12177].
* [34] D. Rivera-Betancour and M. Vilatte, _Revisiting the Carrollian scalar field_ , _Phys. Rev. D_ 106 (2022) 085004 [2207.01647].
* [35] D. Hansen, N.A. Obers, G. Oling and B.T. Søgaard, _Carroll Expansion of General Relativity_ , 2112.12684.
* [36] B. Julia and H. Nicolai, _Null-killing vector dimensional reduction and galilean geometrodynamics_ , _Nuclear Physics B_ 439 (1995) 291.
* [37] A. Bagchi, A. Banerjee, S. Dutta, K.S. Kolekar and P. Sharma, _Carroll covariant scalar fields in two dimensions_ , 2203.13197.
* [38] A. Bagchi, A. Banerjee and H. Muraki, _Boosting to BMS_ , _JHEP_ 09 (2022) 251 [2205.05094].
* [39] S. Baiguera, G. Oling, W. Sybesma and B.T. Søgaard, _Conformal Carroll Scalars with Boosts_ , 2207.03468.
* [40] A. Saha, _Intrinsic Approach to $1+1$D Carrollian Conformal Field Theory_, 2207.11684.
* [41] W.-B. Liu and J. Long, _Symmetry group at future null infinity I: scalar theory_ , 2210.00516.
* [42] S. Dutta, _Stress tensors of 3d Carroll CFTs_ , 2212.11002.
* [43] B. Chen, R. Liu and Y.-f. Zheng, _On Higher-dimensional Carrollian and Galilean Conformal Field Theories_ , _SciPost Phys._ 14 (2023) 088 [2112.10514].
* [44] C. Duval, G. Burdet, H.P. Kunzle and M. Perrin, _Bargmann Structures and Newton-cartan Theory_ , _Phys. Rev. D_ 31 (1985) 1841.
* [45] F. Rohsiepe, _On reducible but indecomposable representations of the Virasoro algebra_ , hep-th/9611160.
* [46] M.R. Gaberdiel, _An Algebraic approach to logarithmic conformal field theory_ , _Int. J. Mod. Phys. A_ 18 (2003) 4593 [hep-th/0111260].
* [47] K. Kytola and D. Ridout, _On Staggered Indecomposable Virasoro Modules_ , _J. Math. Phys._ 50 (2009) 123503 [0905.0108].
* [48] T. Creutzig and D. Ridout, _Logarithmic Conformal Field Theory: Beyond an Introduction_ , _J. Phys. A_ 46 (2013) 4006 [1303.0847].
* [49] P.-x. Hao, W. Song, X. Xie and Y. Zhong, _A BMS-invariant free scalar model_ , 2111.04701.
* [50] Z.-f. Yu and B. Chen, _Free field realization of the BMS Ising model_ , 2211.06926.
* [51] P.-X. Hao, W. Song, Z. Xiao and X. Xie, _A BMS-invariant free fermion model_ , 2211.06927.
* [52] H.P. Jakobsen, _Indecomposable finite-dimensional representations of a Lie algebras and Lie superalgebras_ , _Lect. Notes Math._ 2027 (2011) 125.
* [53] M. Hogervorst, M. Paulos and A. Vichi, _The ABC (in any D) of Logarithmic CFT_ , _JHEP_ 10 (2017) 201 [1605.03959].
* [54] B. Chen, P.-X. Hao, R. Liu and Z.-F. Yu, _On Galilean conformal bootstrap_ , _JHEP_ 06 (2021) 112 [2011.11092].
* [55] B. Chen and R. Liu, _The Shadow Formalism of Galilean CFT 2_, 2203.10490.
* [56] B. Chen, P.-x. Hao, R. Liu and Z.-f. Yu, _On Galilean Conformal Bootstrap II: $\xi=0$ sector_, 2207.01474.
* [57] A. Banerjee, S. Dutta and S. Mondal, _Carroll fermions in two dimensions_ , 2211.11639.
* [58] I. Gel’fand and G. Shilov, _Generalized Functions: Properties and Operations_ (1964).
* [59] B. Chen, H. Sun and Y.-f. Zheng, _work in progress_.
* [60] M. Islam, _Carrollian Yang-Mills Theory_ , 2301.00953.
* [61] A. Bagchi, A. Banerjee, R. Basu, M. Islam and S. Mondal, _Magic Fermions: Carroll and Flat Bands_ , 2211.11640.
* [62] J. Hartong, _Gauging the Carroll Algebra and Ultra-Relativistic Gravity_ , _JHEP_ 08 (2015) 069 [1505.05011].
* [63] E. Bergshoeff, J. Gomis, B. Rollier, J. Rosseel and T. ter Veldhuis, _Carroll versus Galilei Gravity_ , _JHEP_ 03 (2017) 165 [1701.06156].
* [64] J. Figueroa-O’Farrill, E. Have, S. Prohazka and J. Salzer, _The gauging procedure and carrollian gravity_ , _JHEP_ 09 (2022) 243 [2206.14178].
* [65] A. Campoleoni, M. Henneaux, S. Pekar, A. Pérez and P. Salgado-Rebolledo, _Magnetic Carrollian gravity from the Carroll algebra_ , _JHEP_ 09 (2022) 127 [2207.14167].
* [66] J.E. Humphreys, _Representations of Semisimple Lie Algebras in the BGG Category $\mathcal{O}$_, vol. 94, American Mathematical Soc. (2008).
* [67] C.A. Weibel, _An introduction to homological algebra_ , no. 38, Cambridge university press (1995).
* [68] S. Golkar and D.T. Son, _Operator Product Expansion and Conservation Laws in Non-Relativistic Conformal Field Theories_ , _JHEP_ 12 (2014) 063 [1408.3629].
* [69] G. Hochschild and J.-P. Serre, _Cohomology of group extensions_ , _Transactions of the American Mathematical Society_ (1953) 110.
* [70] G. Hochschild and J.-P. Serre, _Cohomology of lie algebras_ , _Annals of Mathematics_ (1953) 591.
* [71] M.D. Schwartz, _Quantum Field Theory and the Standard Model_, Cambridge University Press (2014).
# Integrating IP Broadcasting with Audio Tags: Workflow and Challenges
###### Abstract
The broadcasting industry is increasingly adopting IP techniques,
revolutionising both live and pre-recorded content production, from news
gathering to live music events. IP broadcasting allows for the transport of
audio and video signals in an easily configurable way, aligning with modern
networking techniques. This shift towards an IP workflow allows for much
greater flexibility, not only in routing signals but with the integration of
tools using standard web development techniques. One possible tool is live audio tagging, which has a number of uses in the production of content, ranging from automated closed captioning to identifying unwanted sound events within a scene. In this paper, we describe
the process of containerising an audio tagging model into a microservice, a
small segregated code module that can be integrated into a multitude of
different network setups. The goal is to develop a modular, accessible, and
flexible tool capable of seamless deployment into broadcasting workflows of
all sizes, from small productions to large corporations. Challenges
surrounding latency of the selected audio tagging model and its effect on the
usefulness of the end product are discussed.
Index Terms— IP broadcasting, challenges, workflow, AI, audio tagging.
## 1 Introduction
Internet Protocol (IP) broadcasting describes the process of transmitting
audio and video signals from one location to another using IP networking. One
technique traditionally used for transmitting audio/video is the Serial
Digital Interface (SDI), with fixed connections between dedicated hardware
devices. In comparison, IP broadcasting allows software to replace some of
these hardware devices, enabling greater scalability and easy re-
configuration. Cloud technology and containerisation methods such as Docker
[1] can be utilised to take advantage of such scalability.
There are a few challenges in creating software for use in an IP broadcasting environment. The first, as with most modern web applications, is the scalability and containerisation of software, which allows the infrastructure to adapt to demand on the system by starting new containers when required. Containerisation also allows the same task to be conducted independently on different streams or sources by having a container per stream. A further advantage of a containerised approach is that a fault in one container does not damage the system as a whole and can be fixed independently. The second challenge is the handling of inelastic audio and video traffic without introducing delay and jitter into the transmission.
The audio track contains a wide array of descriptive information about the sound events in the scene. Detecting sound events in real time could have a number of uses, from aiding operators in programme creation within the broadcasting industry to enhancing end-user accessibility. For example, BBC Research and Development [2] employs a sound event detection framework to identify sounds that may disrupt the ambience of a programme. This work was targeted at the BBC Autumnwatch programme, which uses wildlife cameras capturing the movement of animals. To avoid undesirable noises interrupting the live stream, such as cars passing by or people talking, an icon is overlaid onto the operator’s interface, indicating the undesirable sound event so that the operator knows not to switch to that source. Another use of sound event detection is closed captioning. The majority of existing work on broadcast audio has focused on analysing speech events [3, 4], and only a few attempts at identifying general sound events in real time in a live transmission have been made [2]. To achieve full captioning (closed captions of both sound events and speech) within IP broadcasting, a general sound event detection model that detects sound events, including speech, is required.
Figure 1: Basic flow of audio, video and metadata frames travelling through
their respective data streams using the Network Device Interface (NDI) system.
To overcome the above challenges and integrate sound event data into IP broadcasting, this paper contributes the following. (1) We containerise applications to isolate each component from the other elements of the system, i.e. other processing units and the transmission and reception code. Code isolation means faults are limited to that specific container, and it allows the component to run on any machine. Containerisation also makes it easy to create multiple instances of a component if needed, allowing for scalability and the use of cloud platforms. (2) We leverage an Artificial Intelligence (AI) model to generate audio tags and transmit this meta-information alongside the audio. This added information about the contents of the audio track has a number of uses in the area of automation, from better production tools to improved accessibility via more descriptive automated captioning. The overall proposed framework is shown in Figure 1, and more details, including the challenges around our approach, can be found in Section 5. Our code is made available on GitHub111https://github.com/Rhysbv/panns_ndi.
The rest of this paper is organised as follows. Section 2 gives background on IP broadcasting technology and audio recognition in broadcasting. Section 3 presents the system design: selecting an appropriate broadcasting technology, the AI model, and its integration into the broadcasting framework. Section 4 describes an example workflow and the experimental setup. The challenges of integrating an AI model within IP broadcasting are explained in Section 5. Finally, Section 6 presents the discussion and concludes the paper.
## 2 Related work
### 2.1 A Brief Overview on IP Broadcasting Technology
There are a few technologies currently used for IP broadcasting. The first of
these are described in standards from the Society of Motion Picture and
Television Engineers (SMPTE) as the ST 2110 suite of standards [5]. SMPTE
standards are used by the industry with examples including the Serial Digital
Interface (SDI) standards for transmission between equipment over a direct
connection, i.e. coaxial or fibre optic cable. The Networked Media Open
Specifications (NMOS) from the Advanced Media Workflow Association (AMWA) uses
ST 2110 along with other standards to define APIs allowing for the connection
of multiple receivers and senders on a network in a vendor-agnostic way. NMOS is not software but a set of specifications that aid software development. Network Device Interface (NDI) by NewTek [6], on the other hand, is an open standard with fully developed software and a Software Development Kit (SDK), designed to allow easy integration of IP broadcasting into existing software via the NDI SDK.
### 2.2 Audio Recognition in Broadcasting
Some work has been conducted in broadcasting on the recognition of audio events [2], though it is mostly related to speech recognition and transcription, which is commonly used for tagging content for archiving purposes. Recognising speech allows for easy searching without having to manually tag content. For example, Raimond et al. [3] describe a system to automate the tagging of content within the BBC’s radio archive based on speech audio. Levin et al. [4] describe a system using automatic speech recognition for captioning of news programming. This system runs against a re-speaker, which in this context is a person repeating speech in a way that is more readable and understandable for the system, avoiding the issues surrounding the acoustic environment and overlapping speakers. However, this system only supports the processing of speech and does not consider sound events in general. More modern proprietary solutions do exist [7, 8], which remove the requirement for a re-speaker but are still incapable of including sound events. Additionally, BBC Research and Development [2] have designed an application program (a piece of software) to identify sound events for the purpose of audio monitoring.
In contrast to previous work, we separate the audio tagging software from any other application programs. In our work, a modular approach is adopted, and a container specifically for general audio tagging is built to allow multiple applications on the network to take advantage of the technology without duplicating work. This is helpful considering the computational overhead associated with AI models. Our system can include the monitoring system described by BBC Research and Development [2] in addition to other systems, e.g. for captioning.
## 3 System Design
### 3.1 Selecting IP Broadcasting Technology
For our work, we need an IP technology that is both well used by the industry
allowing for wide adoption of the audio tagging technology and is simple to
implement. Standards have been created to support new IP based workflows. One
example from the Society of Motion Picture and Television Engineers (SMPTE) is
the ST 2110 suite of standards, which describes the transport of compressed
and uncompressed audio, video and metadata via individual Real-time Transport
Protocol (RTP) streams. However, the complexity of understanding these
standards means that it is only practical for large corporations to implement.
Alternative standards such as the Network Device Interface (NDI), an open standard created by NewTek [6], have an easy-to-use Software Development Kit (SDK). Due to this easy integration, NDI is available in a wide variety of software and hardware applications, enabling its widespread adoption in both large and small operations. This has led to NDI being the technology selected for our work. NDI transports data in the form of frames that contain the payload as well as supporting information describing its use. There are three types of frames used by NDI: audio, video and metadata. NDI also handles the detection of sources, allowing for the routing of NDI frames.
### 3.2 AI Model used for Audio Tagging
To identify audio tags, we leverage AI models, particularly convolutional
neural networks (CNNs), which have shown remarkable performance in many audio
classification tasks [9, 10]. For example, pre-trained audio neural networks
(PANNs) [10] have been widely used to recognize a variety of audio events. A
description of the AI models used in this paper for predicting audio tags is
given below.
Pre-trained Audio Neural Networks (PANNs): CNN14 [10] is a pre-trained audio
neural network trained on the Google AudioSet dataset [11]. CNN14 is
trained on log-mel spectrograms extracted from the audio clips. CNN14 has
81M parameters and takes 21G multiply-accumulate operations (MACs) to predict
tags for an audio clip of length 10 seconds. The trained CNN14 can predict a
wide range of sound events such as a car passing by, speech, sirens and
animals. This helps identify sounds in the wide array of possible scenarios
the system could be exposed to, i.e. different types of broadcast programming
such as news gathering in various locations or a panel show within a studio.
Efficient PANNs: E-PANNs [12] is an efficient version of the original PANNs
with a reduced memory requirement (24M parameters) and reduced computational
complexity (13G MACs per 10 seconds of audio). The efficient AI models are
beneficial in an IP networking environment, especially one involving inelastic
traffic (network traffic that is sensitive to variations in delay, e.g. audio
and video streams). This will be explored in Section 5.
### 3.3 NDI Integration
We use the NDI SDK [13] to create a software module including the PANNs
algorithm. Due to the reliance on Python-based packages within the PANNs
module, such as “PANNs inference” [14], we use Python for the implementation.
Specifically, Python bindings made by the community [15] to interface between
Python and the C++ SDK are used to enable NDI support. An additional Python
package is created to simplify the process of integrating NDI into both the
PANNs module and the suggested proof-of-concept applications described in
Section 4.
Our Python package contains three classes: a receiver, a transmitter and a
finder. This allows an application to receive frames using the receiver class
from a given NDI source, which is detected using the finder class. These
frames can then be processed and transmitted by creating the application's own
NDI source using the transmitter class. An example of how this is used can be
seen in Figure 1; a minimal sketch of the flow follows this paragraph. It is
important to note here that the flow of audio, video and metadata frames is
uninterrupted between the receiver and transmitter. Each audio frame is
intercepted and a copy is taken for analysis while the original is sent
straight to the transmitter, minimising delay and jitter. One issue with the
community-supplied Python bindings was the associated bugs, especially
surrounding memory management. This led to having to convert each frame to a
Python dataclass so that it could be effectively freed and dealt with by the
Python garbage collector, an issue that would not have been encountered using
the original C++ SDK.
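As an illustration, the sketch below shows the intended receiver/finder/transmitter flow; the module and class names (NDIFinder, NDIReceiver, NDITransmitter) and their methods are ours and approximate the package's purpose rather than its exact API.

```python
# Minimal sketch of the receiver/finder/transmitter flow described above.
# ndi_wrapper, NDIFinder, NDIReceiver, NDITransmitter and analyse are
# illustrative names, not the exact API of our package or the bindings [15].
from ndi_wrapper import NDIFinder, NDIReceiver, NDITransmitter  # hypothetical module

finder = NDIFinder()
source = finder.wait_for_source("CAMERA-1")   # discover a named NDI source

receiver = NDIReceiver(source)
transmitter = NDITransmitter("Audio Tagger")  # the module's own NDI source

while True:
    frame = receiver.capture()                # audio, video or metadata frame
    if frame.kind == "audio":
        analyse(frame.copy())                 # copy handed to the tagging pipeline
    transmitter.send(frame)                   # original frame passes straight through
```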
Figure 2: Metadata generation pipeline.
### 3.4 Integrating Sound Events Metadata
In order to produce sound event predictions from PANNs model and make it
compatible with other NDI applications, we follow the pipeline as shown in
Figure 2 that takes the incoming audio frames from NDI and creates metadata
frames containing the audio tag to be sent across the network.
We use two ring buffers: the first stores incoming audio frames. From
each audio frame, we extract the individual audio samples and store them in a
second ring buffer. Once a sufficient number of samples has been collected in
the second ring buffer, its entire contents are fed into the
PANNs model. The size of the second ring buffer is crucial as it determines
the duration of the audio window that PANNs analyses. The impact of the window
size on the model's latency is discussed in Section 5. To distribute
the predicted sound event across the NDI network, we use metadata frames.
These frames transport XML data, which can include third-party metadata as
used here. The output string from PANNs is inserted into an XML template for
transmission. Other NDI applications can then receive this XML via the
metadata frames to access the sound event prediction.
A summary of the various steps is given below; a minimal code sketch follows the list.
1. 1.
Store received audio frames in ring buffer one.
2. 2.
Extract the floating point Pulse Code Modulated (PCM) audio samples from each
frame and store these in ring buffer two.
3. 3.
Wait until a given number of samples have been collected.
4. 4.
Feed the entire contents of ring buffer two into AI model.
5. 5.
Generate a metadata frame containing the prediction from AI model.
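A minimal sketch of these five steps, assuming 48 kHz audio, 1024-sample frames, and the 48128-sample window discussed in Section 5; the buffer handling, model call, and XML template are illustrative rather than the exact implementation.

```python
import collections
import numpy as np

WINDOW = 47 * 1024  # 48128 samples, roughly one second at 48 kHz (Section 5)
sample_buffer = collections.deque(maxlen=WINDOW)  # ring buffer two

# Illustrative XML template; the real schema may differ.
XML_TEMPLATE = '<audio_tags source="{source}">{tag}</audio_tags>'

def on_audio_frame(frame, model, transmitter):
    # Steps 1-2: ring buffer one is assumed to hand frames to this callback;
    # extract the float PCM samples from the frame into ring buffer two.
    sample_buffer.extend(np.asarray(frame.samples, dtype=np.float32))
    # Step 3: wait until a full analysis window has accumulated.
    if len(sample_buffer) < WINDOW:
        return
    # Step 4: feed the entire contents of ring buffer two into the AI model.
    tag = model.predict(np.array(sample_buffer))  # hypothetical PANNs wrapper
    # Step 5: wrap the prediction in XML and send it as an NDI metadata frame.
    transmitter.send_metadata(XML_TEMPLATE.format(source=frame.source, tag=tag))
```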
Figure 3: Proposed integrated pipeline: Audiowatch [2] framework with a
separated audio tagging unit.
## 4 Example Workflow
The proposed containerised component allows for the integration of audio
tagging capabilities into a multitude of different systems and use cases.
Below, we provide two examples of integrating the audio tagging system into
an existing IP broadcasting framework.
### 4.1 Audiowatch Example
Figure 3 demonstrates our system inspired by the BBC audiowatch project, where
we integrate audio tagging software that is separate from other application
programs. We use Docker [1] for containerisation, creating multiple instances
of the audio tagging software to analyse several NDI sources simultaneously.
The sound event detection front end is a dashboard user interface, as shown in
Figure 2, and the system generates metadata corresponding to the input audio.
Metadata containing sound event information is then sent to the icon selector
module for processing. Next, various icon selector containers extract the
sound events from the audio track supplied within the metadata frames. After
identifying the unwanted sound events, an appropriate icon overlay is
transmitted as an NDI video frame. Finally, video mixing software such as Open
Broadcaster Software (OBS) [16] is used to superimpose the icon onto the
original video source for display on the operator's multiview, which is used
to monitor all video sources.
### 4.2 Online Closed Captioning
Another example integration could be the use of audio tagging to enhance
closed captioning. As discussed in Section 2.2, while work has been conducted
to automate closed captioning in real time using automatic speech
recognition, these systems do not include descriptions of sound events. By
combining the two technologies, full closed captioning could be achieved. This
would involve first passing the audio through the audio tagging model using
our container. When the result is returned as human speech, the audio would
then be passed through a second speech recognition model to generate
subtitles. One major concern would be the accuracy of the audio tagging model:
if speech were not always detected, we would miss large portions of speech
text. Additionally, the difference in latency between a sound event being
inserted and speech going through two models would have to be accounted for.
## 5 AI model Integration Challenges
There are a number of integration challenges to consider while designing
AI-based software fit for broadcasting. These challenges include the accuracy
of the prediction and the latency of the model delaying the signal. Generally,
PANNs and E-PANNs give similar prediction results.
Model latency: The latency of the model describes the amount of time it
takes, given a number of audio samples, to produce a sound event prediction.
Consideration of the model latency is significant given that we are dealing
with inelastic audio and video network traffic. This means that any delay in
processing contributes to a delay in the resulting transmission, depending on
the infrastructure. Delay can be mitigated using a design similar to that
shown in Figure 1; however, there is then the issue of predictions becoming
desynchronised from the audio track. Although we have minimal control over the
IP network using the audio tagging module, and thus cannot manage the
network's latency, we can still select an optimal model that minimises latency
while maintaining accuracy.
Buffer size versus model latency: To analyse the buffer size and model
latency, we experiment using a set of audio recordings with known sound
events. The first audio recording, taken directly from the PANNs repository,
involves a telephone ringing followed by human speech and is seven seconds
long. The second audio recording is of a car driving into the distance. The
third audio recording is created by mixing car driving and running river
sound events.
Given the audio recordings, we analyse the latency of the AI model for
different numbers of audio samples. We generate audio segments of different
lengths. The number of audio samples taken is a multiple of 1024 (assuming
frames containing 1024 samples are used) and represents the size of the
buffer. Given the audio samples of different lengths, we use the PANNs or
E-PANNs model to produce predictions while measuring the time taken for the
model to produce a prediction. Figure 4 shows the latency of the PANNs and
E-PANNs models at different buffer sizes. Both PANNs and E-PANNs follow a
similar trajectory, with E-PANNs showing a considerable improvement in
latency. This suggests that choosing an appropriate model contributes to
improved latency, making the integration of audio events closer to real time
while using fewer resources.
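A sketch of this measurement, assuming the panns_inference package [14] and its AudioTagging interface; the stand-in random clip and CPU timing are ours, whereas the paper's measurements use the recordings described above.

```python
import time
import numpy as np
from panns_inference import AudioTagging  # assumes the panns_inference package [14]

at = AudioTagging(checkpoint_path=None, device='cpu')  # loads the CNN14 checkpoint
audio = np.random.randn(48000 * 7).astype(np.float32)  # stand-in 7 s clip

for n_frames in (1, 12, 24, 36, 47):             # buffers in multiples of 1024 samples
    buf = audio[: n_frames * 1024][None, :]      # (batch, samples), as the API expects
    start = time.perf_counter()
    clipwise_output, embedding = at.inference(buf)
    print(n_frames * 1024, f"{time.perf_counter() - start:.3f}s")
```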
It is found that a buffer size of 48128 samples (47 × 1024-sample frames) is a
sensible choice, offering low latency while producing an accurate result in
detecting the sound events correctly. This equates to an audio window with a
duration of approximately 1.003 s (48128/48000) when sampled at 48 kHz.
Prediction results and model latency computed on AMD Ryzen 5 2500U and Intel
Core i9-13900HX hardware can be found here.
Figure 4: PANNs/E-PANNs model latency vs buffer size when inputting audio
sampled at 48 kHz. Experiments are performed on an AMD Ryzen 5 2500U system at
2 GHz.
## 6 Discussion and Conclusion
The integration of IP broadcasting with audio tagging offers significant
potential for enhancing broadcast workflows, but it also presents several
challenges. The transition to IP broadcasting enables a more flexible,
scalable, and reconfigurable infrastructure compared to traditional methods
based on Serial Digital Interface (SDI). This flexibility is further enhanced
by containerisation technologies, making the system more resilient and
adaptable. However, implementing an audio tagging system introduces challenges
primarily related to latency and the accuracy of audio tagging models.
One of the primary challenges discussed is the latency associated with the
audio tagging model. Given the real-time nature of broadcasting, any delays
introduced by processing can impact the overall operation. This makes the
choice of buffer size crucial. A smaller buffer reduces latency but might
compromise the accuracy of sound event detection. Conversely, a larger buffer
improves accuracy but increases latency. The experiments conducted show that
an acceptable balance can be achieved with a buffer size of 48128 samples,
which provides an acceptable latency while maintaining accuracy. The use of
Efficient PANNs (E-PANNs) further helps in reducing the computational
complexity and memory requirements, making it a suitable choice for real-time
applications.
Containerisation offers a robust solution to scalability issues. By isolating
the audio tagging functionality into a microservice, it becomes possible to
scale the system by simply adding more containers as needed. This isolation
also ensures that a fault in one container does not affect the entire system,
enhancing overall reliability. The use of Docker to containerise these
services allows for easy deployment and management across different network
setups. Additionally, the integration with NDI technology, which is widely
adopted in the industry, ensures broad applicability.
Despite these advantages, real-world deployment of such a system is not
without hurdles. The reliance on Python bindings to interface with the NDI
SDK, while practical, introduces potential issues with memory management that
need careful handling.
### 6.1 Conclusion
Integrating IP broadcasting with audio tagging presents a promising
advancement for the broadcasting industry. The use of containerisation and
audio tagging for real-time sound event detection can significantly enhance
content production and accessibility. However, addressing the challenges of
latency, accuracy and real-world deployment is crucial for the successful
implementation of this technology. Future work includes re-writing the
codebase to use the NDI C++ SDK directly, avoiding the issues surrounding the
Python bindings. Additionally, we would like to analyse more complex models
such as transformers [17, 18] within our broadcasting framework. Finally, the
creation of the discussed proof of concept applications would allow for full
demonstration of the usefulness of this technology.
## 7 Acknowledgements
This work was supported by Engineering and Physical Sciences Research Council
(EPSRC) Grant EP/T019751/1 “AI for Sound (AI4S)”. For the purpose of open
access, the authors have applied a Creative Commons Attribution (CC BY)
licence to any Author Accepted Manuscript version arising.
## References
* [1] Docker: Accelerated container application development. [Online]. Available: https://www.docker.com/
* [2] S. Ward and R. Dawes. AudioWatch - live audio monitoring for autumnwatch 2021 - BBC r&d. [Online]. Available: https://www.bbc.co.uk/rd/blog/2021-11-live-audio-monitoring-autumnwatch-ai
* [3] Y. Raimond, C. Lowis, R. Hodgson, and J. Tweed, “Automated semantic tagging of speech audio,” in _Proceedings of the 21st International Conference on World Wide Web_ , 2012, pp. 405–408.
* [4] K. Levin, I. Ponomareva, A. Bulusheva, G. Chernykh, I. Medennikov, N. Merkin, A. Prudnikov, and N. Tomashenko, “Automated closed captioning for russian live broadcasting,” in _Fifteenth Annual Conference of the International Speech Communication Association_ , 2014.
* [5] “SMPTE OV 2110-0:2018 - SMPTE overview document - professional media over managed IP networks roadmap for the 2110 document suite.” [Online]. Available: https://ieeexplore.ieee.org/document/8626804
* [6] NewTek, “NDI 5.6 white paper.” [Online]. Available: https://ndi.video/wp-content/uploads/2023/09/NDI-5.6-White-Paper-2023.pdf
* [7] Closed captioning software - IBM watson media. [Online]. Available: https://www.ibm.com/products/video-streaming/closed-captioning
* [8] enCaption: Automated closed captioning system | ENCO systems. [Online]. Available: https://www.enco.com/products/encaption
* [9] H. Purwins, B. Li, T. Virtanen, J. Schlüter, S.-Y. Chang, and T. Sainath, “Deep learning for audio signal processing,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 13, no. 2, pp. 206–219, 2019.
* [10] Q. Kong, Y. Cao, T. Iqbal, Y. Wang, W. Wang, and M. D. Plumbley, “Panns: Large-scale pretrained audio neural networks for audio pattern recognition,” _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 28, pp. 2880–2894, 2020.
* [11] J. F. Gemmeke, D. P. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M. Plakal, and M. Ritter, “Audio set: An ontology and human-labeled dataset for audio events,” in _IEEE international conference on acoustics, speech and signal processing (ICASSP)_ , 2017, pp. 776–780.
* [12] A. Singh, H. Liu, and M. D. Plumbley, “E-panns: Sound recognition using efficient pre-trained audio neural networks,” in _INTER-NOISE and NOISE-CON Congress and Conference Proceedings_ , vol. 268, no. 1. Institute of Noise Control Engineering, 2023, pp. 7220–7228.
* [13] N. INC, “Ndi sdk v6.0.1.” [Online]. Available: https://ndi.video/for-developers/ndi-sdk
* [14] Q. Kong, “qiuqiangkong/panns_inference,” original-date: 2020-03-08T06:22:30Z. [Online]. Available: https://github.com/qiuqiangkong/panns_inference
* [15] N. Kondo, “buresu/ndi-python,” original-date: 2019-04-16T10:59:04Z. [Online]. Available: https://github.com/buresu/ndi-python
* [16] “obsproject/obs-studio,” original-date: 2013-10-01T02:40:31Z. [Online]. Available: https://github.com/obsproject/obs-studio
* [17] Y. Gong, Y.-A. Chung, and J. Glass, “Ast: Audio spectrogram transformer,” _Interspeech_ , 2021.
* [18] F. Schmid, K. Koutini, and G. Widmer, “Efficient large-scale audio tagging via transformer-to-cnn knowledge distillation,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2023, pp. 1–5.
|
We set $R = \sqrt{\dx} + \sqrt{2(1+\beta)\log{T}}$ where $\beta \geq 2$ is a free parameter.
Define the event $\calE$ as:
\begin{align}
\calE := \left\{ \max_{0 \leq t \leq T-1} \norm{V_t}_2 \leq R \right\}. \label{eq:GLM_good_truncation_event}
\end{align}
Note that by the setting of $R$, we have $\Pr(\calE^c) \leq 1/T^{\beta}$ using standard Gaussian concentration results plus a union bound. Furthermore on $\calE$, the original GLM process
driven by Gaussian noise (<ref>)
coincides with the truncated process (<ref>).
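As a quick numerical sanity check of this truncation bound (with $\dx$, $T$, and $\beta$ chosen arbitrarily for illustration), one can estimate $\Pr(\calE^c)$ by Monte Carlo:

```python
import numpy as np

dx, T, beta = 4, 200, 2                                # illustrative values only
R = np.sqrt(dx) + np.sqrt(2 * (1 + beta) * np.log(T))

rng = np.random.default_rng(0)
trials = 5000
V = rng.standard_normal((trials, T, dx))               # T i.i.d. N(0, I) draws per trial
max_norms = np.linalg.norm(V, axis=2).max(axis=1)      # max_t ||V_t||_2 per trial
print(np.mean(max_norms > R), 1 / T**beta)             # estimate vs. the 1/T^beta target
```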
Let $\widehat{f}$ denote the LSE on the original process
(<ref>), and
let $\bar{f}$ denote the LSE on the truncated process (<ref>).
\begin{align}
\E\norm{\widehat{f} - f_\star}_{L^2}^2 &= \E \norm{\widehat{f} - f_\star}_{L^2}^2 \ind\{\calE\} + \E \norm{\widehat{f} - f_\star}_{L^2}^2 \ind\{\calE^c\} \nonumber \\
&\leq \E\norm{\bar{f} - f_\star}_{L^2}^2 + \E \norm{\widehat{f} - f_\star}_{L^2}^2 \ind\{\calE^c\} \label{eq:glm_truncate_decomp}.
\end{align}
Let us now control the error term
$\E \norm{\widehat{f} - f_\star}_{L^2}^2 \ind\{\calE^c\}$.
Write $\widehat{f}(x) = \sigma(\widehat{A} x)$, and put $\widehat{\Delta} = \widehat{A}-A_\star$.
We have:
\begin{align}
\E \norm{\widehat{f} - f_\star}_{L^2}^2 \ind\{\calE^c\} &= \frac{1}{T} \sum_{t=0}^{T-1} \E\norm{\sigma(\widehat{A} X_t) - \sigma(A_\star X_t) }_2^2 \ind\{\calE^c\} \stackrel{(a)}{\leq} \frac{1}{T}\sum_{t=0}^{T-1} \E\norm{\widehat{\Delta} X_t}_2^2\ind\{\calE^c\} \nonumber \\
&\stackrel{(b)}{\leq} \frac{4B^2}{T} \sum_{t=0}^{T-1} \E\norm{X_t}_2^2 \ind\{\calE^c\}
\stackrel{(c)}{\leq} \frac{4B^2}{T^{1 + \beta/2}}\sum_{t=0}^{T-1} \sqrt{\E\norm{X_t}_2^4}
\stackrel{(d)}{\leq} \frac{4B^2 B_X^2}{T^{\beta/2}}. \label{eq:glm_fhat_lse_error_term}
\end{align}
Here, (a) follows since $\sigma$ is $1$-Lipschitz,
(b) uses the definition of $\scrF$ in (<ref>), (c) follows by
Cauchy-Schwarz, and (d) uses <Ref>.
The remainder of the proof consists of bounding the
LSE error $\E\norm{\bar{f} - f_\star}_{L^2}^2$.
First, we establish an almost sure bound on
$\{\bar{X}_t\}_{t \geq 0}$.
Consider the truncated GLM process (<ref>).
Under <Ref>, the process $\{\bar{X}_t\}_{t \geq 0}$ satisfies:
\begin{align}
\sup_{t \in \N} \norm{\bar{X}_t}_{P_\star} \leq \frac{2\opnorm{P_\star}^{1/2} \opnorm{H} (\sqrt{\dx} + \sqrt{2(1+\beta)\log{T}})}{1-\rho} \triangleq B_{\bar{X}}. \label{eq:glm_B_Xbar}
\end{align}
By the triangle inequality and (<ref>):
\begin{align*}
\norm{\bar{X}_{t+1}}_{P_\star} &= \norm{ \sigma(A_\star \bar{X}_t) + H \bar{V}_t }_{P_\star} \leq \norm{\sigma(A_\star \bar{X}_t)}_{P_\star} + \norm{H \bar{V}_t}_{P_\star} \\
&\leq \rho^{1/2} \norm{\bar{X}_t}_{P_\star} + \norm{H \bar{V}_t}_{P_\star} \leq \rho^{1/2} \norm{\bar{X}_t}_{P_\star} + \opnorm{P_\star^{1/2} H} R.
\end{align*}
Unrolling this recursion,
and using the fact that $\inf_{x \in [0, 1]} \frac{1-\sqrt{x}}{1-x} = 1/2$ yields the result.
We next establish uniform bounds for the covariance matrices
of the truncated process.
Suppose $T \geq 4$.
Consider the truncated GLM process (<ref>),
and let the covariance matrices for the process $\{\bar{X}_t\}_{t \geq 0}$ be denoted as $\bar{\Gamma}_t \triangleq \E[ \bar{X}_t \bar{X}_t^\T ]$.
Under <Ref>:
\begin{align*}
\frac{1}{2} HH^\T \preccurlyeq \bar{\Gamma}_t \preccurlyeq B_{\bar{X}}^2 \cdot I.
\end{align*}
The upper bound is immediate from <Ref>,
since $\E[\bar{X}_t\bar{X}_t^\T] \preccurlyeq \E[\norm{\bar{X}_t}_2^2] I \preccurlyeq B_{\bar{X}}^2 I$.
The lower bound is immediate for $t=0$ by <Ref>.
On the other hand, for $t \geq 1$, since $\bar{V}_{t-1}$ is zero-mean and independent of $\bar{X}_{t-1}$:
\begin{align*}
\E[ \bar{X}_t \bar{X}_t^\T] &= \E[(\sigma(A_\star \bar{X}_{t-1}) + H\bar{V}_{t-1})(\sigma(A_\star \bar{X}_{t-1}) + H\bar{V}_{t-1})^\T] \\
&= \E[\sigma(A_\star \bar{X}_{t-1}) \sigma(A_\star \bar{X}_{t-1})^\T] + \E[ H \bar{V}_{t-1} \bar{V}_{t-1}^\T H^\T]
\succcurlyeq \E[ H \bar{V}_{t-1} \bar{V}_{t-1}^\T H^\T] \succcurlyeq \frac{1}{2}HH^\T.
\end{align*}
The last inequality again holds from <Ref>.
#### Trajectory hypercontractivity for truncated GLM
For our purposes, the link function assumption in
<Ref> ensures the following
approximate isometry inequality which holds for all $x \in \R^{\dx}$ and all
matrices $A,A' \in \R^{\dx \times \dx}$:
\begin{align}
\zeta^2 \norm{Ax-A'x}_2^2 \leq \norm{\sigma(Ax)-\sigma(A'x)}_2^2 \leq \norm{Ax-A'x}_2^2. \label{eq:glm_approx_isometry}
\end{align}
This inequality is needed to establish trajectory hypercontractivity for $\scrF_\star$.
Suppose that $T \geq 4$.
Fix any matrix $A \in \R^{\dx \times \dx}$.
Under <Ref>, the truncated process (<ref>) satisfies:
\begin{align}
\frac{1}{T} \sum_{t=0}^{T-1} \E\norm{\sigma(A \bar{X}_t) - \sigma(A_\star \bar{X}_t)}_2^4 \leq \frac{4B_{\bar{X}}^4}{\sigma_{\min}(H)^4 \zeta^4} \left(\frac{1}{T}\sum_{t=0}^{T-1}\E\norm{\sigma(A \bar{X}_t) - \sigma(A_\star \bar{X}_t)}_2^2\right)^2.
\end{align}
Hence, the function class $\scrF_\star$ with $\scrF$ defined in
(<ref>) satisfies the $(C_{\mathsf{GLM}}, 2)$-trajectory
hypercontractivity condition
with $C_{\mathsf{GLM}} = \frac{4B_{\bar{X}}^4}{\sigma_{\min}(H)^4 \zeta^4}$.
Put $\Delta \triangleq A - A_\star$ and $M \triangleq \Delta^\T \Delta$.
We have:
\begin{align*}
\E \norm{\Delta \bar{X}_t}_2^4 &= \E[ \bar{X}_t^\T M \bar{X}_t \bar{X}_t^\T M \bar{X}_t ] \\
&\leq B_{\bar{X}}^2 \tr(M^2 \bar{\Gamma}_t) &&\text{using \Cref{prop:glm_truncated_state_bounds}} \\
&\leq B_{\bar{X}}^2 \opnorm{M} \tr(M \bar{\Gamma}_t) &&\text{H{\"{o}}lder's inequality}\\
&\leq B_{\bar{X}}^2 \tr(M) \tr(M \bar{\Gamma}_t) &&\text{since $M$ is positive semidefinite} \\
&\leq B_{\bar{X}}^4 \tr(M)^2 &&\text{using \Cref{prop:glm_trunc_cov_bounds}} \\
&\leq \frac{B_{\bar{X}}^4}{\lambda_{\min}(HH^\T)^2} \tr(M HH^\T)^2.
\end{align*}
On the other hand, by <Ref>:
\begin{align*}
\E\norm{\Delta \bar{X}_t}_2^2 = \tr(M \bar{\Gamma}_t) \geq \frac{1}{2}\tr(M HH^\T).
\end{align*}
Combining these bounds yields:
\begin{align*}
\frac{1}{T}\sum_{t=0}^{T-1}\E\norm{\Delta \bar{X}_t}_2^4 \leq \frac{B_{\bar{X}}^4}{\lambda_{\min}(HH^\T)^2} \tr(M HH^\T)^2 \leq \frac{4B_{\bar{X}}^4}{\lambda_{\min}(HH^\T)^2} \left( \frac{1}{T}\sum_{t=0}^{T-1} \E\norm{\Delta \bar{X}_t}_2^2 \right)^2.
\end{align*}
The claim now follows via the approximate isometry inequality (<ref>).
#### Bounding the dependency matrix for truncated GLM
We will use the result in <Ref>
to bound the total-variation distance by the $1$-Wasserstein distance.
This is where the non-degenerate noise assumption in <Ref>
is necessary.
The starting point is the observation that the diagonal Lyapunov function
in <Ref> actually yields
incremental stability <cit.> in addition to Lyapunov stability. In particular, let $\{a_i\}$ denote the rows of $A_\star$.
For any $x,x' \in \R^{\dx}$:
\begin{align}
\norm{\sigma(A_\star x) - \sigma(A_\star x')}^2_{P_\star} &= \sum_{i=1}^{\dx} (P_\star)_{ii} (\sigma(\ip{a_i}{x}) - \sigma(\ip{a_i}{x'}))^2 \nonumber \\
&\leq \sum_{i=1}^{\dx} (P_\star)_{ii} (\ip{a_i}{x} - \ip{a_i}{x'})^2 \nonumber \\
&= (x-x')^\T A_\star^\T P_\star A_\star (x-x') \nonumber \\
&\leq \rho \norm{x-x'}_{P_\star}^2. \label{eq:glm_incremental_stability}
\end{align}
This incremental stability property allows us to control the dependency matrix as follows.
Consider the truncated GLM process $\{\bar{X}_t\}_{t \geq 0}$
from (<ref>).
Let $\Pxbar$ denote the joint distribution of $\{\bar{X}_t\}_{t=0}^{T-1}$.
Under <Ref> and when $B \geq 1$,
we have that:
\begin{align*}
\opnorm{\Gammadep(\Pxbar)} \leq \frac{22}{1-\rho} \log\left( \frac{B \sqrt{\dx}(B_{\bar{X}} + B_X)}{2\sigma_{\min}(H)}\right).
\end{align*}
Let $\{X_t\}_{t \geq 0}$ denote the original GLM dynamics from (<ref>).
Fix indices $t \geq 0$ and $k \geq 1$.
We construct a coupling of $(\sfP_{X_{t+k}}(\cdot \mid X_t=x), \sfP_{X_{t+k}})$ as follows.
Let $\{V_t\}_{t \geq 0}$ be i.i.d. $N(0, I)$.
Let $\{Z_s\}_{s \geq t}$ be the process with
$Z_{t} = x$ that follows the GLM dynamics (<ref>) using the noise $\{V_t\}_{t \geq 0}$
(we do not bother defining $Z_{t'}$ for $t' < t$ since we do not need it).
Similarly, let $\{Z'_s\}_{s \geq 0}$ be the process following the GLM dynamics (<ref>)
using the same noise $\{V_t\}_{t \geq 0}$.
Now we have:
\begin{align*}
\E\norm{Z_{t+k} - Z'_{t+k}}_{P_\star} &= \E\norm{\sigma(A_\star Z_{t+k-1}) - \sigma(A_\star Z'_{t+k-1})}_{P_\star} \\
&\leq \rho^{1/2} \E\norm{Z_{t+k-1} - Z'_{t+k-1}}_{P_\star} &&\text{using \Cref{eq:glm_incremental_stability}}.
\end{align*}
We now unroll this recursion down to $t$:
\begin{align*}
\E\norm{Z_{t+k} - Z'_{t+k}}_{P_\star} \leq \rho^{k/2} \E\norm{Z_t - Z'_t}_{P_\star} = \rho^{k/2} \E\norm{x - Z'_t}_{P_\star}.
\end{align*}
Since $P_\star \succcurlyeq I$, this shows that:
\begin{align*}
W_1(\sfP_{X_{t+k}}(\cdot \mid X_t = x), \sfP_{X_{t+k}}) \leq \rho^{k/2} (\norm{x}_{P_\star} + \E\norm{X_{t}}_{P_\star}) \leq \rho^{k/2}(\norm{x}_{P_\star} + B_X),
\end{align*}
where the last inequality follows from <Ref> and Jensen's inequality.
Next, it is easy to see that the map $x \mapsto \sigma(A_\star x)$
is $\opnorm{A_\star}$-Lipschitz.
Furthermore, since $H$ is full rank
by <Ref>, for any $t$ and $k \geq 1$ both
$\sfP_{X_{t+k}}$ and
$\sfP_{X_{t+k}}(\cdot \mid X_t=x)$
are absolutely continuous
w.r.t. the Lebesgue measure on $\R^{\dx}$.
Using <Ref>, we have for any
$k \geq 2$:
\begin{align*}
\tvnorm{\sfP_{X_{t+k}}(\cdot \mid X_t = x) - \sfP_{X_{t+k}}} &\leq \frac{\opnorm{A_\star} \sqrt{\tr((HH^\T)^{-1})}}{2} W_1(\sfP_{X_{t+k-1}}(\cdot \mid X_t = x), \sfP_{X_{t+k-1}}) \\
&\leq \frac{\opnorm{A_\star} \sqrt{\tr((HH^\T)^{-1})}}{2} \rho^{(k-1)/2} (\norm{x}_{P_\star} + B_X).
\end{align*}
Using <Ref> to bound $\opnorm{\Gammadep(\Pxbar)}$ (which is valid because we constrained $\beta \geq 2$),
and <Ref> to bound $x \in \bar{\sfX}_t$,
for any $\ell \geq 1$:
\begin{align*}
\opnorm{\Gammadep(\sfP_{\bar{X}})} &\leq 3 + \sqrt{2} \sum_{k=1}^{T-1} \max_{t=0,\dots,T-1-k} \esssup_{x \in \bar{\sfX}_t} \sqrt{\tvnorm{\sfP_{X_{t+k}}(\cdot \mid X_t=x) - \sfP_{X_{t+k}}}} \\
&\leq 3 + \sqrt{2}\ell + \left[ \frac{\opnorm{A_\star} \sqrt{\tr((HH^\T)^{-1})} (B_{\bar{X}} + B_X)}{2} \right]^{1/2} \sum_{k=\ell+1}^{T-1} \rho^{(k-1)/4} \\
&\stackrel{(a)}{\leq} 5\ell + \left[ \frac{B \sqrt{\dx} (B_{\bar{X}} + B_X)}{2\sigma_{\min}(H)} \right]^{1/2} \frac{\rho^{\ell/4}}{1-\rho^{1/4}}.
\end{align*}
Above, (a) uses the bounds $\opnorm{A_\star} \leq B$ and
$\tr((HH^\T)^{-1}) \leq \dx/\sigma_{\min}(H)^2$.
Now put $\psi \triangleq \frac{B \sqrt{\dx} (B_{\bar{X}} + B_X)}{2\sigma_{\min}(H)}$.
We choose $\ell = \bigceil{\frac{\log(\sqrt{\psi})}{1-\rho^{1/4}}}$ so that
$\rho^{\ell/4} \leq 1/\sqrt{\psi}$.
This yields:
\begin{align*}
\opnorm{\Gammadep(\sfP_{\bar{X}})} &\leq \frac{11\log{\psi}}{2(1-\rho^{1/4})} \stackrel{(a)}{\leq} \frac{22 \log{\psi}}{1-\rho} = \frac{22}{1-\rho} \log\left( \frac{B \sqrt{\dx}(B_{\bar{X}} + B_X)}{2\sigma_{\min}(H)}\right).
\end{align*}
Above, (a) follows from
$\inf_{x \in [0,1]} \frac{1-x^{1/4}}{1-x} = 1/4$.
#### Finishing the proof of <Ref>
Below, we let $c_i$ be universal positive constants
that we do not track precisely.
For any $\e>0$ we now construct an $\e$-covering of $\scrF_\star \setminus B(r)$, with $\scrF_\star$ the offset class of $\scrF$
from (<ref>).
Note that we are not covering $\partial B(r)$ since the class
$\scrF_\star$ is not star-shaped. However, an inspection
of the proof of <Ref> shows that
one can remove the star-shaped assumption by instead
covering the set $\scrF_\star \setminus B(r)$.
To this end, we let $\{A_1,\dots,A_N\}$ be a $\delta$-cover of $\scrA \triangleq \{A \in \mathbb{R}^{\dx \times \dx} \mid \|A\|_F \leq B \}$,
for a $\delta$ to be specified.
By a volumetric argument we may choose $\{A_1,\dots,A_N\}$ such that $N \leq \left(1 +\frac{2B}{\delta} \right)^{\dx^2}$.
Now, any realization of $\{\bar{X}_t\}$ will have norm less than
$B_{\bar{X}}$ from (<ref>),
where $B_{\bar{X}}$ is bounded by:
\begin{align*}
B_{\bar{X}} \leq \frac{c_0\opnorm{P_\star}^{1/2} \opnorm{H} (\sqrt{\dx} + \sqrt{(1+\beta)\log{T}})}{1-\rho}.
\end{align*}
Now fix any $A \in \scrA$,
and let $A_i$ be an element in the $\delta$-cover satisfying $\norm{A - A_i}_F \leq \delta$.
We observe that for any $x$ satisfying $\norm{x}_2 \leq B_{\bar{X}}$:
\begin{align*}
\norm{(\sigma(A_ix) - \sigma(A_\star x)) - (\sigma(A x) - \sigma(A_\star x))}_2 &= \norm{\sigma(A_i x) - \sigma(A x)}_2 \leq \norm{(A_i - A) x}_2 \\
&\leq \norm{A_i-A}_F\norm{x}_2 \leq \delta B_{\bar{X}}.
\end{align*}
Thus, it suffices to take $\delta = \e /B_{\bar{X}}$
to construct the $\e$-cover of $\scrF_\star$,
i.e., $\calN_\infty(\scrF_\star, \e) \leq \left(1 +\frac{2BB_{\bar{X}}}{\e}\right)^{\dx^2}$. This then implies <cit.>:
\begin{align*}
\calN_\infty(\scrF_\star \setminus B(r), \e) \leq \calN_\infty(\scrF_\star, \e/2) \leq \left(1 +\frac{4BB_{\bar{X}}}{\e}\right)^{\dx^2}.
\end{align*}
Next, by <Ref>, $(\scrF_\star, \Pxbar)$ is $(C_{\mathsf{GLM}},2)$-hypercontractive for all $T \geq 4$, with:
\begin{align*}
C_{\mathsf{GLM}} \leq \frac{4B_{\bar{X}}^4}{\sigma_{\min}(H)^4 \zeta^4} \leq \frac{c_1 \opnorm{P_\star}^2 \mathrm{cond}(H)^4 (\dx^2 + ((1+\beta)\log{T})^2) }{ \zeta^4 (1-\rho)^4 }.
\end{align*}
Furthermore, by <Ref>:
\begin{align*}
\opnorm{\Gammadep(\Pxbar)}^2 \leq \frac{c_2}{(1-\rho)^2} \log^2\left( \frac{B \sqrt{\dx}(B_{\bar{X}} + B_X)}{2\sigma_{\min}(H)}\right) \triangleq \gamma^2.
\end{align*}
The class $\scrF_\star$ is $2BB_{\bar{X}}$-bounded on (<ref>).
Invoking <Ref>, for every $r>0$:
\begin{align}
\E \|\bar{f}- f_\star\|_{L^2}^2 \leq 8 \E \bar{\sfM}_T(\mathscr{F}_\star) + r^2 + 4B^2B_{\bar{X}}^2 \left(1 +\frac{4\sqrt{8} B B_{\bar{X}}}{r} \right)^{\dx^2} \exp \left( \frac{-T }{8C_{\mathsf{GLM}} \gamma^2 } \right). \label{eq:glm_truncated_LSE}
\end{align}
Here, the notation $\E \bar{\sfM}_T(\scrF_\star)$ is meant to emphasize
that the offset complexity is with respect to the truncated process
$\Pxbar$ and not the original process $\Px$.
We now set $r^2 = \opnorm{H}^2 \dx^2/T$,
and compute a $T_0$ such that the third term in (<ref>) is also bounded by $\opnorm{H}^2 \dx^2/T$.
To do this, it suffices to compute $T_0$ such that for all
$T \geq T_0$:
\begin{align*}
T \geq c_3 C_{\mathsf{GLM}} \gamma^2 \dx^2 \log\left( \frac{T B B_{\bar{X}}}{\opnorm{H} \sqrt{\dx}} \right).
\end{align*}
It suffices to take (assuming $\beta$ is at most polylogarithmic in any problem constants):
\begin{align}
T_0 \geq c_4
\frac{\opnorm{P_\star}^{2} \mathrm{cond}(H)^4 \dx^4}{\zeta^4 (1-\rho)^6} \mathrm{polylog}\left( B, \dx, \opnorm{P_\star}, \mathrm{cond}(H), \frac{1}{\zeta}, \frac{1}{1-\rho} \right).
\label{eq:glm_T0_bound}
\end{align}
Again, we do not attempt to compute the exact power of the
polylog term, but note that it can in principle be done by tracking the constants in the preceding bounds.
Next, from (<ref>) we have that
the error term
$\E \norm{\widehat{f} - f_\star}_{L^2}^2 \ind\{\calE^c\} \leq \frac{4B^2 B_X^2}{T^{\beta/2}}$.
Thus, if we further constrain $\beta > 2$ and require $T_0 \geq c_5 \left[ \frac{B^2 \opnorm{P_\star}}{(1-\rho)^2} \right]^{\frac{1}{\beta/2-1}}$, then
$\E \norm{\widehat{f} - f_\star}_{L^2}^2 \ind\{\calE^c\} \leq \frac{\opnorm{H}^2 \dx^2}{T}$.
Note that setting $\beta = \max\{3, c_6 \log{B}\}$ suffices.
To finish the proof, it remains to bound $\E \bar{\sfM}_T(\scrF_\star)$.
Unlike the linear dynamical systems case,
there is no closed-form expression for $\E \bar{\sfM}_T(\scrF_\star)$.
Hence, we will bound it via the chaining bound (<ref>). This computation is done in <cit.>.
Before we can use the result, however, we need to verify that
the truncated noise process $\{H \bar{V}_t\}_{t \geq 0}$ is
a sub-Gaussian MDS. The MDS part is clear since $\bar{V}_{t} \perp \bar{V}_{t'}$ for $t \neq t'$, and $\bar{V}_t$ is zero-mean.
Furthermore, <Ref>
yields that $H \bar{V}_t$ is a $4\opnorm{H}^2$-sub-Gaussian random vector. Hence, we have:
\begin{align*}
\E \bar{\sfM}_T(\scrF_\star) \leq c_7 \frac{\opnorm{H}^2 \dx^2}{T} \log(1 + \opnorm{H} \sqrt{\dx} B B_{\bar{X}} T^2).
\end{align*}
The claim now follows. |
# CodeMind: A Framework to Challenge Large Language Models
for Code Reasoning
Changshu Liu Shizhuo Dylan Zhang Reyhaneh Jabbarvand
###### Abstract
Solely relying on test passing to evaluate Large Language Models (LLMs) for
code synthesis may result in unfair assessment or promoting models with data
leakage. As an alternative, we introduce CodeMind, a framework designed to
gauge the code reasoning abilities of LLMs. CodeMind currently supports three
code reasoning tasks: Independent Execution Reasoning (IER), Dependent
Execution Reasoning (DER), and Specification Reasoning (SR). The first two
evaluate a model's ability to predict the execution output of arbitrary code
or of code the model itself could correctly synthesize. The third evaluates
the extent to which LLMs implement the specified expected behavior.
Our extensive evaluation of nine LLMs across five benchmarks in two different
programming languages using CodeMind shows that LLMs fairly follow control
flow constructs and, in general, explain how inputs evolve to output,
_specifically for simple programs and the ones they can correctly synthesize_.
However, their performance drops for code with higher complexity, non-trivial
logical and arithmetic operators, non-primitive types, and API calls.
Furthermore, we observe that, while correlated, specification reasoning
(essential for code synthesis) does not imply execution reasoning (essential
for broader programming tasks such as testing and debugging): ranking LLMs
based on test passing can be different compared to code reasoning111The
reasoning of LLMs and humans exhibit fundamental differences due to distinct
nature of their cognitive processes. Our conclusions on the extent of code
reasoning abilities of LLMs do not imply human-like reasoning.
Department of Computer Science
University of Illinois at Urbana-Champaign, Illinois, US
## 1 Introduction
Large Language Models (LLMs) have shown exceptional programming abilities,
specifically when instruction-tuned or prompted through Chain- or Tree-of-
Thoughts (CoT (Wei et al., 2022b) or ToT (Yao et al., 2023)) and in-context
learning (Wei et al., 2022a; Garg et al., 2022). However, several studies
suggest that LLMs struggle to generalize this exceptional ability,
specifically when the dataset becomes more complex (Du et al., 2023; Jimenez
et al., 2023), or the task requires understanding code, rather than natural
language (Pan et al., 2023; Min et al., 2023). This is mainly because LLMs are
trained to associate code synthesis with natural language specifications,
i.e., reason how to combine code constructs similar to examples they have seen
while satisfying requirements explained in the specification.
To illustrate how code reasoning tasks can evaluate LLMs, Figure 1-a shows a
code synthesized by GPT-3.5 given natural language specification. The code
constructs corresponding to the specification are highlighted with matching
colors. Due to the ambiguity in the natural language, this code returns the
smallest number in the list rather than the number at the index equal to the
value of the smallest number. As a result, for a given input $[2,5,4,3]$, the
code returns 2 instead of 4, and the assertion fails.
Figure 1: An example illustrating the importance of evaluating LLMs on code
reasoning
One way to assess the inductive code reasoning of LLMs is to include specific
expected program behavior and check whether the generated code can reproduce
that behavior. This entails a level of code reasoning, which we refer to as
Specification Reasoning (SR). Figure 1-b shows the new specification and the
corresponding generated code. Executing the code given the specified input-
output pair results in a test pass, indicating the ability of GPT-3.5 to
understand the given specification and generate a correct code.
Including test data in prompts has been a known practice to improve the
performance of models in programming tasks (Chen et al., 2022; Zhong et al.,
2022; Shi et al., 2022; Zhang et al., 2023). However, it is a weak proxy for
code reasoning as it still involves the association of code and natural
language. A deeper level of code reasoning is reasoning about execution output
given an input, which we call Execution Reasoning (ER). This task challenges
LLMs more, requiring them to reason about code without any natural language
cross reference. Figure 1-c shows the CoT reasoning of GPT-3.5 in response to
the ER task. Even though the model could generate a code that produced the
expected output (and is correct if validated through testing), it cannot
correctly reason about the code execution given the same inputs to predict the
output.
To automate code reasoning assessment, we propose CodeMind. CodeMind currently
offers three inductive code reasoning tasks: Independent Execution Reasoning
(IER) and Dependent Execution Reasoning (DER) assess whether LLMs can reason
about how given inputs evolve to outputs, for any arbitrary code (IER) or only
for code that the model correctly synthesized (DER). Specification Reasoning
(SR) evaluates the extent to which LLMs can reason about and implement the
specified behavior.
Using CodeMind, we performed a large-scale grounded theory study to assess
LLMs for code reasoning. We selected _nine_ models, including both
general-purpose and Code LLMs, and prompted them for IER, DER, and SR tasks on
_5395_ programs written in Java and Python. These programs are from _five_
programming benchmarks, namely HumanEval (Chen et al., 2021), MBPP (Odena et
al., 2021), CRUXEval (Gu et al., 2024), CodeNet (Puri et al., 2021), and
Avatar (Ahmad et al., 2021). We observe that:
(1) LLMs have a good grasp of code constructs, likely due to alignment with
concepts in the natural language specification. The instruction-tuned models
can explain the code statement by statement and, in general, follow the
execution of the programs. LLM code reasoning abilities, however, are limited
to simple programs. Furthermore, models such as GPT-3.5 and MagicCoder (Wei et
al., 2023), although they correctly explain what the code does, may fail to
keep track of data flow and correctly reason about execution output.
Open-source LLMs that have achieved effectiveness comparable to GPT models in
code synthesis (Wei et al., 2023; Roziere et al., 2023; Luo et al., 2023) lag
behind them with a _huge gap_ concerning code reasoning (section 5).
(2) LLMs can reason about test data in the specification, even if deceptive,
and bring that into the reasoning process for code synthesis (section 7).
However, their reasoning is bottlenecked by their inherent limitation. They
achieve a higher performance reasoning about the code they can correctly
synthesize (section 6).
(3) On a dataset with complex programs, there is _a negligible to no
correlation_ between the ranking of models based on code synthesis—generating
a code that passes all tests—and code reasoning performance (section 6). This
necessitates CodeMind tasks and metrics to complement the evaluation of LLMs
for code.
(4) Nested code constructs, complex conditional predicates and loop
conditions, non-trivial arithmetic and logic operators, and API invocations
can significantly challenge LLMs for code reasoning (section 8).
Our contributions are (1) CodeMind framework for code reasoning that formally
defines three inductive code reasoning tasks. CodeMind is open-source
(CodeMind, 2024) and accepts contributions from researchers to integrate more
code reasoning tasks into it; (2) a large-scale ground-theory evaluation of
LLMs for code reasoning using CodeMind; and (3) a comprehensive, in-depth
analysis of results that offers a catalog of root causes negatively impacting
the abilities of LLMs for code reasoning. This catalog would be a valuable
guideline for developing better benchmarks that truly evaluate the programming
abilities of LLMs.
## 2 Related Work
A large body of work has assessed LLMs for reasoning tasks of different
modalities (Deshpande et al., 2021; Wu et al., 2023; Miceli-Barone et al.,
2023; Bubeck et al., 2023; Wang et al., 2023; Imani et al., 2023; Luo et al.,
2023; Huang et al., 2023; Valmeekam et al., 2022; Min et al., 2023), including
natural language, visual data, math, logic, and code. CodeMind is more closely
related to the very recent studies focusing on code reasoning (La Malfa et
al., 2024; Gu et al., 2024; Zhang et al., 2024).
The CRUXEval benchmark is a concurrent work investigating the code reasoning
abilities of LLMs using a dataset of simple programs, generated by
CodeLlama (34B), with test cases (Gu et al., 2024). They evaluated a series of
LLMs on CRUXEval for input and output prediction tasks. Compared to CRUXEval,
CodeMind proposes more inductive code reasoning tasks, includes more programs
with a wider range of complexity, and controls for the relation between code
synthesis and reasoning by evaluating LLMs on the same programs. CodeMind is
also equipped with a static analysis pipeline to enable in-depth examination
and drawing informed conclusions.
La Malfa et al. (2024) evaluate LMs to predict variable values at each code
statement. Our experiments are larger compared to them: more programs with a
diverse distribution of complexity and different programming languages, and
more studied LLMs. We also offer more code reasoning tasks and present a
cross-analysis of code synthesis and reasoning abilities.
Zhang et al. (2024) investigate transformers’ ability to learn or infer the
recursive patterns from input and output pairs. They conclude that due to the
inherent limitations of transformers, they may fail to learn recursion and
instead find shortcut algorithms to reason about how outputs are related to
inputs. Compared to this work, we evaluate LLMs regardless of architecture and
training data but from the program perspective. We show LLMs can follow
recursion but usually lose track of data flow due to the inability to
correctly reason about loop conditions.
## 3 CodeMind
A program specification defines a function $S:S_{I}\rightarrow S_{O}$, where
$S_{I}$ is the set of all possible inputs to the program and $S_{O}$ is the
set of corresponding outputs. A code synthesized from the specification is a
function $C:C_{I}\rightarrow C_{O}$. We define a program to be correct with
respect to the specification if it satisfies all of the following conditions:
$C_{I}\subseteq S_{I}$, $C_{O}\subseteq S_{O}$, $\forall i\in C_{I},C(i)=S(i)$
This requires the model to reason about how inputs evolve to a given output
through the implementation (execution reasoning) and to implement code that
generates correct outputs for given inputs (specification reasoning).
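For instance, a minimal sketch of this correctness check on sampled inputs, using the Figure 1 example (the lambda encodings of $S$ and $C$ are ours):

```python
def correct_wrt_spec(C, S, inputs):
    """Check C(i) == S(i) for all sampled inputs i (a necessary condition)."""
    return all(C(i) == S(i) for i in inputs)

# Figure 1: the specification asks for the element at the index given by the
# smallest value, while the synthesized code returns the smallest element.
S = lambda xs: xs[min(xs)]
C = lambda xs: min(xs)
print(correct_wrt_spec(C, S, [[2, 5, 4, 3]]))  # False: C returns 2, S returns 4
```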
### 3.1 Execution Reasoning
Considering the aforementioned formalization, we define two execution
reasoning tasks as follows.
Definition 1: Independent Execution Reasoning (IER). Given a program
$C:C_{I}\rightarrow C_{O}$ and set of inputs $\hat{I}=\\{i|i\in C_{I}\\}$, LLM
$L$ can correctly reason about code execution if $\hat{o}=C(\hat{I})$, where
$\hat{o}=L(\hat{I})$ is the predicted output by $L$. Note that in this task,
we do not deal with the specification, so we can assess LLMs on any arbitrary
code for which we have ground-truth pairs $\langle\hat{I},\hat{o}\rangle$.
IER evaluates LLMs for any arbitrary code for general inductive code
reasoning, which requires knowing code constructs, arithmetic and logic
operations, and control flow. However, even for human developers, reasoning
about their developed code is easier than any arbitrary code. Furthermore, as
a self-consistency (Min et al., 2023) measurement, LLMs should be able to
reason about the code they can correctly synthesize. This demands to have the
following execution reasoning task.
Definition 2: Dependent Execution Reasoning (DER). Given a specification
$S:S_{I}\rightarrow S_{O}$, a program $C:C_{I}\rightarrow C_{O}$ generated by
LLM $L$, and set of inputs $\hat{I}=\\{i|i\in C_{I},C(i)=S(i)\\}$, LLM $L$ can
correctly reason about code execution if $\hat{o}=C(\hat{I})$, where
$\hat{o}=L(\hat{I})$ is the predicted output by $L$. The assumption here is
that when LLM $L$ generates code $C$ that passes the test
$\langle\hat{I},\hat{o}\rangle$, it should be able to predict $\hat{o}$ correctly.
### 3.2 Specification Reasoning
In addition to inductive execution reasoning, a model should understand
specification to synthesize a correct code. We formally define the
specification reasoning task as follows.
Definition 3: Specification Reasoning (SR). Given a specification
$S:S_{I}\rightarrow S_{O}$, an arbitrary $\langle i,o\rangle$ specified in the
prompt along with the natural language specification, where $i\in S_{I},o\in
S_{O},S(i)=o$, and program $C:C_{I}\rightarrow C_{O}$ generated by LLM $L$,
the LLM can correctly reason about specification if $C(i)=S(i)$. In other
words, LLM $L$ should be able to pass a test with $\langle i,o\rangle$, when
they are explicitly specified in the prompt.
Table 1: CRR performance of subject LLMs on IER task through CoT prompting. We highlight the top three best-performing models with red (1), green (2), and blue (3). GPT-4, GPT-3.5, Llama 2, and Mistral are general-purpose LLMs; the remaining models are Code LLMs.

| Dataset | Programming Language | # Subjects | GPT-4 | GPT-3.5 | Llama 2 | Mistral | CodeLlama | DeepSeekCoder | MagicCoder | StarCoder | WizardCoder |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MBPP | Python | 408 | 80.88% | 71.32% | 45.59% | 31.37% | 42.40% | 57.84% | 59.80% | 43.63% | 46.08% |
| HumanEval | Python | 162 | 79.01% | 64.20% | 30.86% | 32.72% | 45.06% | 41.98% | 52.47% | 38.89% | 40.12% |
| CruxEval | Python | 800 | 80.50% | 65.13% | 25.38% | 34.13% | 37.75% | 44.38% | 46.50% | 35.50% | 35.88% |
| CodeNet | Python | 1914 | 70.43% | 49.06% | 18.97% | 17.35% | 27.95% | 26.65% | 33.28% | 26.28% | 24.87% |
| CodeNet | Java | 1939 | 71.17% | 51.93% | 23.99% | 18.15% | 28.52% | 32.13% | 36.46% | 29.34% | 29.35% |
| Avatar | Python | 86 | 52.33% | 39.53% | 24.42% | 16.28% | 23.26% | 18.60% | 24.42% | 19.77% | 24.42% |
| Avatar | Java | 86 | 48.84% | 34.88% | 23.26% | 11.63% | 27.91% | 23.26% | 24.42% | 13.95% | 13.95% |
| Total | Java and Python | 5395 | 72.60% | 54.24% | 24.26% | 21.54% | 30.40% | 33.85% | 38.68% | 30.14% | 29.99% |
### 3.3 Evaluating Code Reasoning
We measure the performance of models in code reasoning for a given code with
the Correct Reasoning Score (CRS), which is $1$ if the model can correctly
reason about the code and $0$ otherwise. We also introduce the Correct
Reasoning Rate (CRR) metric, a collective metric that measures how much a
given LLM can reason about multiple programs in a benchmark. We calculate CRR
for a set of $m$ programs in benchmark $P$ as:
$CRR(P)=\dfrac{\sum\limits_{i=1}^{m}\llbracket CRS(p_{i}\in
P)=1\rrbracket}{m}$
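Read concretely, CRR is simply the fraction of programs with $CRS=1$; a minimal sketch:

```python
def crr(crs_scores):
    """Correct Reasoning Rate: fraction of programs with CRS == 1."""
    return sum(1 for s in crs_scores if s == 1) / len(crs_scores)

print(crr([1, 0, 1, 1, 0]))  # 3 of 5 programs correctly reasoned about -> 0.6
```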
## 4 Experimental Setup
Our study includes nine LLMs and $5395$ programs in Java and Python
programming languages from five programming datasets. We explain the details
of LLMs and program selection below.
Subject LLMs. We chose nine pre-trained or instruction-tuned models, covering
both general-purpose and Code LLMs. Our choice was limited by computing
resources, so we selected models with fewer than $20$B parameters that
outperform the rest for programming tasks. Our subject LLMs are GPT-4 (OpenAI,
2023b), GPT-3.5 (OpenAI, 2023a), Llama 2 (13B) (Touvron et al., 2023), Mistral
(Jiang et al., 2023), CodeLlama (13B, instruction-tuned) (Roziere et al.,
2023), StarCoder (15.5B) (Li et al., 2023), WizardCoder (15B,
instruction-tuned) (Xu et al., 2023), MagicCoder (7B, instruction-tuned) (Wei
et al., 2023), and DeepSeekCoder (6.7B) (Bi et al., 2024). We followed the
best practices and customized the prompt templates per model (all prompts are
publicly available for further investigation (CodeMind, 2024)). Except for the
GPT models, we set the temperature to zero to ensure the reproducibility of
the results. Our code is open-source, allowing users to apply CodeMind to
other models and temperatures.
Subject Programs. Our criteria for selecting subject programs were the
existence of test data (inputs and corresponding expected output) and
implementations of the same program in multiple programming languages (to
investigate its impact on code reasoning). From several existing benchmarks
(Wang et al., 2022; Athiwaratkun et al., 2022; Chen et al., 2021; Liu et al.,
2023; Gu et al., 2024; Zheng et al., 2023; Cassano et al., 2022; Jimenez et
al., 2023; Du et al., 2023; Odena et al., 2021; Puri et al., 2021; Ahmad et
al., 2021), we chose the programs in HumanEval (Chen et al., 2021), MBPP
(Odena et al., 2021), CodeNet (Puri et al., 2021), Avatar (Ahmad et al.,
2021), and CruxEval (Gu et al., 2024). We chose the Java and Python versions
of the programs as these are among the most prominently used programming
languages. HumanEval and MBPP are well-known benchmarks for code synthesis.
CodeNet and Avatar are code translation benchmarks. CRUXEval is a benchmark of
relatively simple Python programs generated by CodeLlama (34B) to evaluate the
input and output prediction abilities of LLMs.
Figure 2: Complexity distribution of the subject programs in terms of
Cyclomatic Complexity (CC) and Lines of Code (LoC)
Figure 2 shows the complexity distribution of the programs in terms of
Cyclomatic Complexity, $CC$ (Gill & Kemerer, 1991), and Lines of Code (LoC).
$CC$ measures the number of independent execution paths in the program control
flow graph (CFG). The metric is computed for a class as $CC=E-N+2P$, where $E$
and $N$ are the number of edges and nodes in the CFG, respectively, and $P$ is
the number of methods in the class. In general, a higher $CC$ indicates a more
complex program.
For code reasoning tasks, the model should reason which execution path to take
for a given input to predict the output. So, the higher number of independent
paths makes it unlikely for the model to succeed by chance. $CC$ might be
correlated with the number of lines in the program, but more lines do not
cause higher $CC$. For example, a program with $10$ lines and no conditional
or loop constructs only has one execution path, while a program with $8$ lines
and two nested conditional statements has $3$ or $4$ execution paths,
depending on the conditional predicates.
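As a concrete illustration of the metric (using the radon package, assuming its cc_visit interface; the toy function is ours), the sketch below computes CC for a function with one loop and one conditional:

```python
from radon.complexity import cc_visit  # assumes the radon package

SRC = '''
def keep_positive(xs):
    out = []
    for x in xs:          # +1 independent path
        if x > 0:         # +1 independent path
            out.append(x)
    return out
'''
for block in cc_visit(SRC):
    print(block.name, block.complexity)  # expected: keep_positive 3 (base 1 + for + if)
```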
## 5 LLM Evaluation on IER
Figure 3: Impact of CC on CRR performance of LLMs in IER
To evaluate the performance of LLMs on the IER task, we prompted them under
two settings: direct answering and CoT. For direct answering, we prompted each
model to predict the output for given inputs. Under the CoT setup, we first
instruct the models to simulate the execution step by step by predicting the
values after the execution of each statement. We then ask the model to predict
the output for the given inputs. In both settings, the prompt contains one
in-context example for two purposes: introducing the IER task and instructing
the response formatting.
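For illustration, a hedged sketch of what such a CoT-style IER prompt might look like; the exact per-model templates are in the CodeMind repository (CodeMind, 2024), and this wording is ours:

```python
# Illustrative CoT prompt skeleton for IER; not the exact CodeMind template.
IER_COT_PROMPT = """You are given a piece of code and an input.
First, simulate the execution statement by statement, writing the value of
each variable after every statement. Then predict the output.

[Example]
Code: {example_code}
Input: {example_input}
Execution: {example_trace}
Output: {example_output}

[Task]
Code: {code}
Input: {test_input}
Execution:"""
```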
Given that IER only requires an arbitrary code and corresponding ground-truth
pair of $\langle\hat{I},\hat{o}\rangle$ (section 3.1), we prompted the LLMs
using all $5395$ subject programs in this experiment. Table 1 shows the result
of this experiment through CoT prompting. From these results, we observe that:
Table 2: CRR performance of subject LLMs on DER task through CoT prompting. We highlight the top three best-performing models with red (1), green (2), and blue (3). GPT-4, GPT-3.5, and Mistral are general-purpose LLMs; the remaining models are Code LLMs.

| Dataset | # Subjects | Task | GPT-4 | GPT-3.5 | Mistral | CodeLlama | DeepSeekCoder | MagicCoder | StarCoder | WizardCoder |
|---|---|---|---|---|---|---|---|---|---|---|
| MBPP | 408 | Synthesis | 86.52% | 80.39% | 43.36% | 56.86% | 72.30% | 70.34% | 44.85% | 61.03% |
| MBPP | 408 | Reasoning | 82.62% | 79.20% | 43.50% | 43.53% | 63.39% | 69.34% | 56.83% | 48.19% |
| MBPP | 408 | CRR Improvement cf. IER | $1.74\%\uparrow$ | $7.88\%\uparrow$ | $11.89\%\uparrow$ | $1.13\%\uparrow$ | $5.15\%\uparrow$ | $9.54\%\uparrow$ | $13.20\%\uparrow$ | $2.11\%\uparrow$ |
| HumanEval | 162 | Synthesis | 87.65% | 69.75% | 52.47% | 67.90% | 81.48% | 79.62% | 48.15% | 72.46% |
| HumanEval | 162 | Reasoning | 80.28% | 74.63% | 34.12% | 35.45% | 54.55% | 53.49% | 58.97% | 59.50% |
| HumanEval | 162 | CRR Improvement cf. IER | $1.27\%\uparrow$ | $10.70\%\uparrow$ | $1.4\%\uparrow$ | $9.61\%\downarrow$ | $12.57\%\uparrow$ | $1.02\%\uparrow$ | $20.08\%\uparrow$ | $19.38\%\uparrow$ |
* •
GPT models outperform others on the IER task, with large margins of $33.92\%$
(GPT-4) and $15.56\%$ (GPT-3.5) over the best open-source model. Among the
open-source models, except for the Avatar dataset, MagicCoder outperforms the
others with an average margin of $4.83\%$.
* •
On the datasets with samples in both Java and Python, all the models
experience a performance drop (an average drop of $2.91\%$ in CodeNet and
$2.33\%$ in Avatar). This is likely because Java enforces a stricter syntax
and typing system than Python, making code execution reasoning harder.
* •
Compared to direct answering, CoT prompting, under which the models articulate
the execution process verbally before predicting the output, results in a
$5.24\%$ improvement in the IER performance of the models on average. However,
the less-than-ideal accuracy of the (open-source) models, even with CoT
prompting, demands a fundamental change.
* •
Moving down the table, the models face greater challenges in IER, i.e.,
reasoning about execution on CodeNet and Avatar programs, compared to
MBPP, HumanEval, and CRUXEval. One potential reason is the complexity of such
programs, as demonstrated in Figure 2. A detailed breakdown of the models'
performance (Figure 3) shows a strong negative Spearman's Rank Order
Correlation (ROC) (Spearman, 1961) between CC and CRR, confirming that models
struggle more in IER for more complex code. At the same time, some models,
namely Llama 2, CodeLlama, MagicCoder, StarCoder, and WizardCoder, achieve a
lower performance on CRUXEval than on HumanEval, even though CRUXEval programs
are less complex regarding both LoC and CC. This calls for a better
understanding of which factors other than CC impact the CRR performance of the
models (section 8).
## 6 LLM Evaluation on DER
We seek to address the critical question of how effectively a model can reason
about the correct programs it has generated. This evaluation requires us to
align the code synthesis and code reasoning tasks. Our pipeline for evaluating
DER consists of three steps: (1) following the best practices, we prompted
subject LLMs for code synthesis; (2) we ran the synthesized programs against
the existing tests; and (3) for the programs that pass the tests, we prompted
the model for code execution reasoning using the chosen test input in CoT
style. Note that we also removed the comments from the synthesized code for
fairness. We excluded the programs from CRUXEval, CodeNet, and Avatar, since
these datasets are not designed for code synthesis and lack proper program
specifications. Also, we could not reproduce the code synthesis results of
Llama 2 and excluded it from the subject LLMs. Similar to the IER experiments,
we set the temperature to zero to limit non-determinism and ensure the
reproducibility of the results. As a result of this design decision, our
synthesis results might differ from existing leaderboards.
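Schematically, the three-step pipeline can be written as follows (llm.synthesize, run_test, strip_comments, and llm.reason_about_execution are placeholder names for the actual harness):

```python
def evaluate_der(llm, benchmark):
    outcomes = []
    for task in benchmark:
        # Step 1: prompt the subject LLM for code synthesis.
        code = llm.synthesize(task.specification)
        # Step 2: run the synthesized program against the existing tests.
        # run_test and strip_comments are placeholder helpers, not a real API.
        if not all(run_test(code, t) for t in task.tests):
            continue  # DER is only evaluated on correctly synthesized programs
        # Step 3: strip comments, then prompt for CoT execution reasoning.
        test = task.tests[0]
        predicted = llm.reason_about_execution(strip_comments(code), test.inputs)
        outcomes.append(predicted == test.expected_output)
    return sum(outcomes) / len(outcomes)  # CRR on the DER task
```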
Table 2 shows the results of this experiment. GPT models still outperform
open-source models on the DER task, with margins of $17.97\%$ (GPT-4) and
$13.13\%$ (GPT-3.5) over the best open-source model. Compared to IER, the gap
between GPT models and open-source models is reduced. We can also observe that
the models achieve $6.84\%$ higher CRR on average in the DER task (except
CodeLlama on HumanEval), compared to IER.
Before concluding that the models are more competent in execution reasoning
when evaluated on the programs they correctly synthesize, we compared the
programs in this experiment with those in the IER experiment. If the
synthesized programs were simpler, lower complexity might be the root cause of
the higher CRR on the DER task. Figure 4 shows the CC distribution of the
programs in MBPP and HumanEval, compared to those generated by the subject
LLMs. We can observe that the synthesized code is, if not more complex, no
less complex than the ground-truth programs in these datasets. Consequently,
we confirm that models reason better about code they correctly synthesize.
However, there is still a considerable gap between the code synthesis and
reasoning abilities of the LLMs, specifically for open-source models.
Figure 4: CC distribution of the programs synthesized by LLMs compared to the
original programs in the HumanEval (top) and MBPP (bottom) datasets
Given that code synthesis and reasoning are unified in DER, we first computed
the Spearman's ROC between the ranks of the models based on the numbers in the
Synthesis row and the Reasoning row for each dataset. The results show a strong
positive correlation on MBPP ($\rho=0.85$), but a negligible correlation on
HumanEval ($\rho=0.17$). These results communicate a strong message: the
ranking of LLMs based on their code synthesis abilities (pass@k) could be
significantly different from the ranking based on their reasoning abilities on
the same code. This necessitates a framework such as CodeMind that promotes
other evaluation aspects of LLMs for code.
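The rank comparison itself is a one-liner; the snippet below is purely illustrative (the per-model scores are made up, not the paper's numbers) and shows how Spearman's ROC between synthesis and reasoning scores can be computed:

```python
from scipy.stats import spearmanr

# Made-up per-model scores on one dataset: Spearman's rank-order correlation
# compares the ranking induced by synthesis with the one induced by reasoning.
synthesis = [85.2, 71.8, 48.1, 55.0, 69.3, 66.2, 40.7, 57.5]
reasoning = [78.4, 64.0, 41.2, 47.9, 62.5, 60.1, 35.8, 51.0]
rho, pval = spearmanr(synthesis, reasoning)
print(f"rho = {rho:.2f}, p = {pval:.3f}")
```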
Table 3: Performance of LLMs on the SR task. Symbol $\downarrow$ indicates a drop from the previous setting (row above), and $\uparrow$ indicates an increase from the previous setting (row above). GPT-4, GPT-3.5, and Mistral are General LLMs; the remaining columns are Code LLMs.

Dataset | Setting | GPT-4 | GPT-3.5 | Mistral | CodeLlama | DeepSeekCoder | MagicCoder | StarCoder | WizardCoder
---|---|---|---|---|---|---|---|---|---
MBPP | With Test | 90.69% | 85.05% | 50.74% | 63.73% | 78.68% | 75.25% | 51.47% | 67.89%
| No Test | 72.13% $\downarrow$ | 78.87% $\downarrow$ | 48.28% $\downarrow$ | 53.68% $\downarrow$ | 67.65% $\downarrow$ | 69.61% $\downarrow$ | 41.67% $\downarrow$ | 52.21% $\downarrow$
| Misleading Test | 68.14% $\downarrow$ | 74.02% $\downarrow$ | 50.74% $\uparrow$ | 59.07% $\uparrow$ | 68.63% $\uparrow$ | 67.40% $\downarrow$ | 40.20% $\downarrow$ | 58.09% $\uparrow$
HumanEval | With Test | 91.98% | 74.07% | 57.41% | 70.37% | 87.04% | 81.48% | 56.17% | 76.54%
| No Test | 88.27% $\downarrow$ | 70.37% $\downarrow$ | 54.32% $\downarrow$ | 65.43% $\downarrow$ | 82.10% $\downarrow$ | 80.86% $\downarrow$ | 38.89% $\downarrow$ | 76.54%
| Misleading Test | 83.95% $\downarrow$ | 65.43% $\downarrow$ | 53.70% $\downarrow$ | 61.73% $\downarrow$ | 79.63% $\downarrow$ | 74.69% $\downarrow$ | 27.04% $\downarrow$ | 66.05% $\downarrow$
## 7 Evaluation on SR
Specification Reasoning (SR) offers a novel perspective in understanding the
code synthesis process of LLMs, particularly in how they leverage input-output
specifications. To evaluate the abilities of LLMs for SR, we prompt LLMs for
code synthesis under the following three settings:
Figure 5: CRR of the top five best-performing LLMs per code construct across
all datasets. We abbreviate the tags with B (Basic), F (For), I (If), NI
(Nested If), NL (Nested Loop), S (Switch), T (Try), and W (While).
(1) Natural language specification with one ground-truth input-output. Under
this setting, we randomly select one of the existing tests and add that to the
specification. We validate the synthesized code using only this test.
(2) Natural language specification with no input-output. We remove the test
added to the specification in the previous setting and re-prompt LLMs for code
synthesis. We validate the synthesized code using only the test from the
previous setting. Intuitively, if including test data helps LLMs in code
synthesis, we expect to observe a drop in LLMs' performance.
(3) Natural language specification with misleading input-output. We mutate the
expected output of the test from the first setting and add it to the
specification. We validate the synthesized code using the original test. The
mutation changes the expected output to a value that does not align with the
specification (a sketch of such a mutation rule is shown after this list). For
example, if the expected output is True, the mutation changes it to False.
Similarly, if the expected output is a positive integer, we mutate it to a
negative one with a large difference. Intuitively, due to the divergence from
the natural language specification, a misleading input-output pair should
further drop LLMs' performance.
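A hypothetical helper implementing the two mutation rules described above (the name, the offset, and the fallback behavior are ours, for illustration only):

```python
def mislead_expected_output(expected):
    # Mutate an expected output so it no longer matches the specification:
    # booleans are flipped; integers are pushed far in the opposite direction.
    if isinstance(expected, bool):   # check bool before int (bool is an int)
        return not expected
    if isinstance(expected, int):
        return -expected - 997       # arbitrary large offset
    raise NotImplementedError("mutation for other output types not sketched")
```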
We followed a setup similar to the one in section 6 and performed this
experiment only on MBPP and HumanEval programs. _We also pre-processed the
prompts from HumanEval, which initially contained input-output samples._ The
results in Table 3 show that the performance of LLMs in code synthesis is, on
average, $7.36\%$ higher with test data included in the specification.
Introducing deceptive tests in the specification detrimentally affects the
LLMs' performance in code synthesis compared to a legitimate test ($10\%$
performance drop on average). However, compared to the No Test cases, the
performance drop across all the models and programs is only $2.65\%$ on
average. Regardless, these results showcase the ability of LLMs to reason
about and utilize the test data in the specification.
Figure 6: Impact of loop length in Java programs (CodeNet and Avatar) on LLMs’
performances
## 8 In-Depth Analysis of Results
We further analyzed the IER results, which evaluate the general ability of
LLMs in code reasoning. In the first step, we wanted to see if LLMs know how
different code constructs work. Without knowing the logic of each code
construct, reasoning about code execution is impossible. To that end, we
tagged each of $5395$ programs based on code constructs used in their
implementation with the following labels: For, While, If, Try, Switch, Nested
Loop, Nested If, and Basic. A program tagged with a Basic label has no special
code construct. Next, we clustered the programs per tag and computed the CRR
of LLMs for each cluster. Figure 5 shows the results of this analysis for the
top five best-performing LLMs. We can observe that models handle conditional
statements better than recursion, except for Try-Catch or Try-Except
statements. Furthermore, when it comes to nested constructs, the CRR values
notably drop. Impact of Loop Properties. Given that models struggle the most
with recurring constructs, we focused the programs with For, While, and Nested
Loop tags at the next step. We hypothesize this struggle is due to the loop’s
length or determining the loop’s length. The former questions whether it is
harder for the model to track the program’s data flow as loops get longer. The
latter questions the abilities of models to reason about how many times a code
block should be repeated, regardless of how long the length would be.
Figure 6 plots the distribution of correct versus incorrect cases and CRR
values per loop length in Java programs. Sub-figure labels show the ROC
coefficients between the loop length and CRR. We can observe a moderate to
strong negative correlation between the loop length and the CRR of the models,
i.e., CRR decreases as the loop length increases. By manually investigating
the incorrect IER cases, we also noticed that LLMs mostly failed to reason
about loop conditions correctly. Without knowing the loop conditions and the
number of iterations, it is impossible to reason about the execution
correctly. Furthermore, we found cases where, although the model could reason
about the loop conditions and the number of iterations, it lost track of the
data flow in the loop, thereby predicting the output incorrectly.
Figure 7: Performances of LLMs on code with different types of outputs, for
(a) Java and (b) Python. Type Match (TM) is the ratio of predicted outputs
whose type matches that of the expected output (ground truth). Value Match
(VM) is the CRR on code with a certain type of output.
In the code snippet below (p03059 from CodeNet (Java)), the loop condition
depends on a constant (c) and a variable (time) whose value changes inside
the loop.
```java
// Input: 3 5 7, Expected Output: 10
int a = sc.nextInt(), b = sc.nextInt(), c = sc.nextInt();
int amount = 0, time = a;
for (int i = 2; time <= c + 0.5; i++) {
    amount += b;
    time = a * i;
}
System.out.println(amount);
```
All models failed in the IER of this relatively simple program. Looking at the
CoT analysis, GPT-3.5 knows the loop condition and can reason about all the
variable values in the first and second iterations. However, it stays in the
loop for one more iteration. StarCoder cannot correctly reason about the loop
condition and iterates the code block six times. MagicCoder's CoT only
explains the code (a correct explanation) without communicating how it reasons
about the variables.
```text
GPT-3.5, Predicted Output: 15
Analysis: Enters for loop and iterates from 2 onwards until the value of
time exceeds c + 0.5. In first iteration, amount becomes 5 and time
becomes 6. In second iteration, amount becomes 10 and time becomes 9. In
third iteration, amount becomes 15 and time becomes 12. 12 is greater
than 7+0.5, loop ends.

StarCoder, Predicted Output: 105
Analysis: It enters a for loop and it iterates from 2 to 7 to check each
integer i in this range. In each iteration, the value of i is passed to
variable time. The condition is met when i is 2,3,4,5,6 and 7.

MagicCoder, Predicted Output: 15
Enters for loop and iterates from 2 to infinity. Inside the loop, it adds
the value of b to variable amount. Then it calculates new value of time
by multiplying a with current iteration i. The loop continues until time
is greater than c+0.5. After the loop finishes, print the final value of
amount as output, which is 15.
```
The loop condition is simple in the code snippet below (p02994 from CodeNet
(Java)). Consequently, most models correctly reasoned about the number of
iterations.
```java
// Input: 5 2, Expected Output: 18
int n = scanner.nextInt();
int l = scanner.nextInt();
List<Integer> diff = new ArrayList<Integer>();
for (int i = 1; i < n + 1; i++) {
    int value = l + i - 1;
    if (value < 0) { value = value * -1; }
    diff.add(value);
}
int removeK = diff.indexOf(Collections.min(diff)) + 1;
System.out.println(n*l + n*(n+1)/2 - n - (l + removeK - 1));
```
Still, some LLMs struggled to correctly reason about the code execution inside
the loop. CodeLlama can correctly reason about the number of loop iterations
but fails to calculate the values in the diff array, resulting in a wrong
output. MagicCoder can also correctly reason about the loop length and,
similar to the previous example, correctly explains what the code does.
However, it cannot predict the output correctly (the predicted output is 10),
preventing us from judging whether it understands the code as well as it
explains it.
```text
CodeLlama, Predicted Output: 24
Analysis: In the given inputs n is 5 and l is 2. The sequence of numbers
from 1 to 5 is [1,2,3,4,5]. The differences between l and each number in
the sequence are [1,0,1,2,3]. The smallest difference is 0 at index 1.
So, the index 1 is removed from the sequence. The new sequence becomes
[1,3,4,5]. The sum of the original sequence [1,2,3,4,5] and the new
sequence [1,3,4,5] is calculated as (5*2+5*(5+1)/2-5-(2+1-1))=15+15-5-2=24.
```
Impact of if-else Properties. Our manual investigations reveal that LLMs
struggle with conditional constructs for two reasons: (1) confusion about
branches and (2) wrong reasoning about conditional predicates. Concerning the
former, we observed that LLMs sometimes struggle to recognize conditional
statements. For example, regardless of which branch to take, they always go
into both “if” and “else” branches to reason about execution. However, such
cases rarely happen, and in most cases, LLMs struggle to analyze the
conditional predicates correctly, take the wrong branch, and predict the
output incorrectly.
Impact of Math and Logic Operators. Knowing how math and logic operators work
is essential to reasoning about how inputs evolve into outputs through
execution. The observations from the previous analyses also underscore their
importance for reasoning about the program control flow. Our manual
investigation of failed IER cases shows the prevalence of arithmetic and logic
operators in the programs, with the former outnumbering the latter. LLMs
struggled more to reason about bitwise operators (e.g., >>>, $\wedge$, >>, <<,
|, and $\&$) compared to arithmetic (e.g., + and -) and comparison operators
(e.g., >, <).
The code snippet below is from MBPP (task 311) and contains several arithmetic
and logic operators.
```python
# Input: 10, Expected Output: 14
def set_left_most_unset_bit(n):
    if not (n & (n + 1)):
        return n
    pos, temp, count = 0, n, 0
    while temp:
        if not (temp & 1):
            pos = count
        count += 1
        temp >>= 1
    return n | (1 << pos)
```
```text
StarCoder, Predicted Output: 10
Analysis: The variable n is initialized to 10. The code checks if n is a
power of 2. If it is, the code returns n. If it is not, the code
calculates the position of the leftmost unset bit. The code then returns
the result of n | (1 << (pos)).
```
StarCoder, while failing to predict the output, correctly knows how most
operators work. For example, it can reason that the bitwise AND of n and n+1
is used as a special-case check (the check succeeds exactly when all bits of
n are set, which StarCoder paraphrases as a power-of-2 test). It can also
reason that the check inside the loop tests whether the least significant bit
of temp is 0. However, it seemingly cannot understand the meaning of the
return statement (setting the leftmost unset bit to 1), hence failing to
predict the correct output.
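To make these operator semantics concrete, the following trace (ours, added for illustration) follows the function body on the test input n = 10:

```python
n = 10                     # binary 1010
print(bool(n & (n + 1)))   # True: 1010 & 1011 = 1010, so not all bits are set
pos, temp, count = 0, n, 0
while temp:
    if not (temp & 1):     # least significant bit of temp is 0
        pos = count        # remember the position of this unset bit
    count += 1
    temp >>= 1             # shift right to examine the next bit
print(pos)                 # 2: position of the leftmost unset bit of 1010
print(n | (1 << pos))      # 1010 | 0100 = 1110 -> 14
```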
Impact of Output Types. We categorized programs based on their output types
and checked (1) if LLMs were able to correctly predict the type of the output
(Type Match) and (2) if they could correctly reason about the value of the
output (Value Match). We identified six types in the subject programs, namely
Int (e.g., $2$), Decimal (e.g., $2.34$), String (e.g., "CodeMind"), Binary
(e.g., True or False), List (e.g., [1,3,4,7]), and Tuple (Python-specific,
e.g., (2,7)). Figure 7 shows the details of these results. In summary, LLMs
achieve a high Type Match ($>80\%$), although they struggle to predict the
correct value (Value Match). Among the different types, it is harder for the
models to predict the values of outputs with Tuple/List and Decimal types.
Tuples and Lists consist of multiple items, and every single one of them may
change during program execution. As a result, it is unsurprising that models
struggle to track the flow of inputs through potentially different execution
paths and to reason about a complex output as a whole. Additionally, given
that manipulating such types involves API calls, e.g., min(), next(),
charAt(), reasoning about the changes requires LLMs to know how the APIs work,
which is an additional burden.
## 9 Concluding Remarks
In this paper, we discussed the necessity of code reasoning tasks as an
alternative means to evaluate LLMs for programming tasks. We introduced
CodeMind, a framework that supports several code reasoning tasks, and used
CodeMind in a large-scale grounded theory study to evaluate state-of-the-art
LLMs for code reasoning. Our results demonstrate that LLMs, in general, know
how code constructs work and are capable of reasoning about program
specifications and following how inputs evolve into outputs through execution.
However, their ability is limited as the code becomes more complex, i.e., has
more complex control flow or data flow, contains non-primitive types, or
invokes API calls. We also observe that specification reasoning, which is
essential to generating code from a given program specification, does not mean
models can also reason about code execution. We are considering two future
directions based on this work. First, we plan to add more code reasoning tasks
to CodeMind, e.g., variable reasoning and code optimization reasoning.
Furthermore, we want to augment CodeMind with a benchmark that can challenge
LLMs' code reasoning to a greater extent than the existing benchmarks.
## References
* Ahmad et al. (2021) Ahmad, W. U., Tushar, M. G. R., Chakraborty, S., and Chang, K.-W. Avatar: A parallel corpus for java-python program translation. _arXiv preprint arXiv:2108.11590_ , 2021.
* Athiwaratkun et al. (2022) Athiwaratkun, B., Gouda, S. K., Wang, Z., Li, X., Tian, Y., Tan, M., Ahmad, W. U., Wang, S., Sun, Q., Shang, M., et al. Multi-lingual evaluation of code generation models. _arXiv preprint arXiv:2210.14868_ , 2022.
* Bi et al. (2024) Bi, X., Chen, D., Chen, G., Chen, S., Dai, D., Deng, C., Ding, H., Dong, K., Du, Q., Fu, Z., et al. Deepseek llm: Scaling open-source language models with longtermism. _arXiv preprint arXiv:2401.02954_ , 2024.
* Bubeck et al. (2023) Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., et al. Sparks of artificial general intelligence: Early experiments with gpt-4. _arXiv preprint arXiv:2303.12712_ , 2023.
* Cassano et al. (2022) Cassano, F., Gouwar, J., Nguyen, D., Nguyen, S., Phipps-Costin, L., Pinckney, D., Yee, M.-H., Zi, Y., Anderson, C. J., Feldman, M. Q., et al. Multipl-e: A scalable and extensible approach to benchmarking neural code generation. _arXiv preprint arXiv:2208.08227_ , 2022.
* Chen et al. (2022) Chen, B., Zhang, F., Nguyen, A., Zan, D., Lin, Z., Lou, J.-G., and Chen, W. Codet: Code generation with generated tests. _arXiv preprint arXiv:2207.10397_ , 2022.
* Chen et al. (2021) Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto, H. P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., Ryder, N., Pavlov, M., Power, A., Kaiser, L., Bavarian, M., Winter, C., Tillet, P., Such, F. P., Cummings, D., Plappert, M., Chantzis, F., Barnes, E., Herbert-Voss, A., Guss, W. H., Nichol, A., Paino, A., Tezak, N., Tang, J., Babuschkin, I., Balaji, S., Jain, S., Saunders, W., Hesse, C., Carr, A. N., Leike, J., Achiam, J., Misra, V., Morikawa, E., Radford, A., Knight, M., Brundage, M., Murati, M., Mayer, K., Welinder, P., McGrew, B., Amodei, D., McCandlish, S., Sutskever, I., and Zaremba, W. Evaluating large language models trained on code, 2021.
* CodeMind (2024) CodeMind. Artifact website. https://github.com/Intelligent-CAT-Lab/CodeMind, 2024.
* Deshpande et al. (2021) Deshpande, R., Chen, J., and Lee, I. Rect: A recursive transformer architecture for generalizable mathematical reasoning. In _NeSy_ , pp. 165–175, 2021.
* Du et al. (2023) Du, X., Liu, M., Wang, K., Wang, H., Liu, J., Chen, Y., Feng, J., Sha, C., Peng, X., and Lou, Y. Classeval: A manually-crafted benchmark for evaluating llms on class-level code generation. _arXiv preprint arXiv:2308.01861_ , 2023.
* Garg et al. (2022) Garg, S., Tsipras, D., Liang, P. S., and Valiant, G. What can transformers learn in-context? a case study of simple function classes. _Advances in Neural Information Processing Systems_ , 35:30583–30598, 2022.
* Gill & Kemerer (1991) Gill, G. K. and Kemerer, C. F. Cyclomatic complexity density and software maintenance productivity. _IEEE transactions on software engineering_ , 17(12):1284–1288, 1991.
* Gu et al. (2024) Gu, A., Rozière, B., Leather, H., Solar-Lezama, A., Synnaeve, G., and Wang, S. I. Cruxeval: A benchmark for code reasoning, understanding and execution. _arXiv preprint arXiv:2401.03065_ , 2024.
* Huang et al. (2023) Huang, K.-H., Zhou, M., Chan, H. P., Fung, Y. R., Wang, Z., Zhang, L., Chang, S.-F., and Ji, H. Do lvlms understand charts? analyzing and correcting factual errors in chart captioning. _arXiv preprint arXiv:2312.10160_ , 2023.
* Imani et al. (2023) Imani, S., Du, L., and Shrivastava, H. Mathprompter: Mathematical reasoning using large language models. _arXiv preprint arXiv:2303.05398_ , 2023.
* Jiang et al. (2023) Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., Casas, D. d. l., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., et al. Mistral 7b. _arXiv preprint arXiv:2310.06825_ , 2023.
* Jimenez et al. (2023) Jimenez, C. E., Yang, J., Wettig, A., Yao, S., Pei, K., Press, O., and Narasimhan, K. Swe-bench: Can language models resolve real-world github issues? _arXiv preprint arXiv:2310.06770_ , 2023.
* La Malfa et al. (2024) La Malfa, E., Weinhuber, C., Torre, O., Lin, F., Cohn, A., Shadbolt, N., and Wooldridge, M. Code simulation challenges for large language models. _arXiv preprint arXiv:2401.09074_ , 2024.
* Li et al. (2023) Li, R., Allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D., Mou, C., Marone, M., Akiki, C., Li, J., Chim, J., Liu, Q., Zheltonozhskii, E., Zhuo, T. Y., Wang, T., Dehaene, O., Davaadorj, M., Lamy-Poirier, J., Monteiro, J., Shliazhko, O., Gontier, N., Meade, N., Zebaze, A., Yee, M.-H., Umapathi, L. K., Zhu, J., Lipkin, B., Oblokulov, M., Wang, Z., Murthy, R., Stillerman, J., Patel, S. S., Abulkhanov, D., Zocca, M., Dey, M., Zhang, Z., Fahmy, N., Bhattacharyya, U., Yu, W., Singh, S., Luccioni, S., Villegas, P., Kunakov, M., Zhdanov, F., Romero, M., Lee, T., Timor, N., Ding, J., Schlesinger, C., Schoelkopf, H., Ebert, J., Dao, T., Mishra, M., Gu, A., Robinson, J., Anderson, C. J., Dolan-Gavitt, B., Contractor, D., Reddy, S., Fried, D., Bahdanau, D., Jernite, Y., Ferrandis, C. M., Hughes, S., Wolf, T., Guha, A., von Werra, L., and de Vries, H. Starcoder: may the source be with you!, 2023.
* Liu et al. (2023) Liu, J., Xia, C. S., Wang, Y., and Zhang, L. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. _arXiv preprint arXiv:2305.01210_ , 2023.
* Luo et al. (2023) Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X., Lin, Q., Chen, S., and Zhang, D. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. _arXiv preprint arXiv:2308.09583_ , 2023.
* Miceli-Barone et al. (2023) Miceli-Barone, A. V., Barez, F., Konstas, I., and Cohen, S. B. The larger they are, the harder they fail: Language models do not recognize identifier swaps in python. _arXiv preprint arXiv:2305.15507_ , 2023.
* Min et al. (2023) Min, M. J., Ding, Y., Buratti, L., Pujar, S., Kaiser, G., Jana, S., and Ray, B. Beyond accuracy: Evaluating self-consistency of code large language models with identitychain. _arXiv preprint arXiv:2310.14053_ , 2023.
* Odena et al. (2021) Odena, A., Sutton, C., Dohan, D. M., Jiang, E., Michalewski, H., Austin, J., Bosma, M. P., Nye, M., Terry, M., and Le, Q. V. Program synthesis with large language models. _arXiv preprint arXiv:2108.07732_ , 2021.
* OpenAI (2023a) OpenAI. Chatgpt: Optimizing language models for dialogue. _https://openai.com/blog/chatgpt_ , 2023a.
* OpenAI (2023b) OpenAI. Gpt-4 technical report. _https://arxiv.org/abs/2303.08774_ , 2023b.
* Pan et al. (2023) Pan, R., Ibrahimzada, A. R., Krishna, R., Sankar, D., Wassi, L. P., Merler, M., Sobolev, B., Pavuluri, R., Sinha, S., and Jabbarvand, R. Understanding the effectiveness of large language models in code translation. _arXiv preprint arXiv:2308.03109_ , 2023.
* Puri et al. (2021) Puri, R., Kung, D., Janssen, G., Zhang, W., Domeniconi, G., Zolotov, V., Dolby, J., Chen, J., Choudhury, M., Decker, L., Thost, V., Buratti, L., Pujar, S., and Finkler, U. Project codenet: A large-scale ai for code dataset for learning a diversity of coding tasks. _arXiv preprint arXiv:2105.12655_ , 2021.
* Roziere et al. (2023) Roziere, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I., Tan, X. E., Adi, Y., Liu, J., Remez, T., Rapin, J., et al. Code llama: Open foundation models for code. _arXiv preprint arXiv:2308.12950_ , 2023.
* Shi et al. (2022) Shi, F., Fried, D., Ghazvininejad, M., Zettlemoyer, L., and Wang, S. I. Natural language to code translation with execution. _arXiv preprint arXiv:2204.11454_ , 2022.
* Spearman (1961) Spearman, C. The proof and measurement of association between two things. 1961.
* Touvron et al. (2023) Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_ , 2023.
* Valmeekam et al. (2022) Valmeekam, K., Olmo, A., Sreedharan, S., and Kambhampati, S. Large language models still can’t plan (a benchmark for llms on planning and reasoning about change). _arXiv preprint arXiv:2206.10498_ , 2022.
* Wang et al. (2023) Wang, K., Ren, H., Zhou, A., Lu, Z., Luo, S., Shi, W., Zhang, R., Song, L., Zhan, M., and Li, H. Mathcoder: Seamless code integration in llms for enhanced mathematical reasoning. _arXiv preprint arXiv:2310.03731_ , 2023.
* Wang et al. (2022) Wang, S., Li, Z., Qian, H., Yang, C., Wang, Z., Shang, M., Kumar, V., Tan, S., Ray, B., Bhatia, P., et al. Recode: Robustness evaluation of code generation models. _arXiv preprint arXiv:2212.10264_ , 2022.
* Wei et al. (2022a) Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., et al. Emergent abilities of large language models. _arXiv preprint arXiv:2206.07682_ , 2022a.
* Wei et al. (2022b) Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural Information Processing Systems_ , 35:24824–24837, 2022b.
* Wei et al. (2023) Wei, Y., Wang, Z., Liu, J., Ding, Y., and Zhang, L. Magicoder: Source code is all you need. _arXiv preprint arXiv:2312.02120_ , 2023.
* Wu et al. (2023) Wu, Z., Qiu, L., Ross, A., Akyürek, E., Chen, B., Wang, B., Kim, N., Andreas, J., and Kim, Y. Reasoning or reciting? exploring the capabilities and limitations of language models through counterfactual tasks. _arXiv preprint arXiv:2307.02477_ , 2023.
* Xu et al. (2023) Xu, C., Sun, Q., Zheng, K., Geng, X., Zhao, P., Feng, J., Tao, C., and Jiang, D. Wizardlm: Empowering large language models to follow complex instructions. _arXiv preprint arXiv:2304.12244_ , 2023.
* Yao et al. (2023) Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. _arXiv preprint arXiv:2305.10601_ , 2023.
* Zhang et al. (2024) Zhang, D., Tigges, C., Zhang, Z., Biderman, S., Raginsky, M., and Ringer, T. Transformer-based models are not yet perfect at learning to emulate structural recursion. _arXiv preprint arXiv:2401.12947_ , 2024.
* Zhang et al. (2023) Zhang, K., Wang, D., Xia, J., Wang, W. Y., and Li, L. Algo: Synthesizing algorithmic programs with generated oracle verifiers. _arXiv preprint arXiv:2305.14591_ , 2023.
* Zheng et al. (2023) Zheng, Q., Xia, X., Zou, X., Dong, Y., Wang, S., Xue, Y., Shen, L., Wang, Z., Wang, A., Li, Y., et al. Codegeex: A pre-trained model for code generation with multilingual benchmarking on humaneval-x. In _Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ , pp. 5673–5684, 2023.
* Zhong et al. (2022) Zhong, M., Liu, G., Li, H., Kuang, J., Zeng, J., and Wang, M. Codegen-test: An automatic code generation model integrating program test information. _arXiv preprint arXiv:2202.07612_ , 2022.
# Improved Lower Bounds for Property B
Karl Grill Daniel Linzmayer
Institute of Statistics and Mathematical Methods in Economics
TU Wien
###### Abstract
If an $n$-uniform hypergraph can be 2-colored, then it is said to have
property B. Erdős (1963) was the first to give lower and upper bounds for the
minimal size $\mathbf{m}(n)$ of an $n$-uniform hypergraph without property B.
His asymptotic upper bound $O(n^{2}2^{n})$ is still the best we know; his
lower bound $2^{n-1}$ has seen a number of improvements, with the current best
$\Omega(2^{n}\sqrt{n/\log(n)})$ established by Radhakrishnan and Srinivasan
(2000). Cherkashin and Kozik (2015) provided a simplified proof of this
result, using Pluhár’s (2009) idea of a random greedy coloring. In the present
paper, we use a refined version of this argument to obtain improved lower
bounds on $\mathbf{m}(n)$ for small values of $n$. We also study
$\mathbf{m}(n,v)$, the size of the smallest $n$-uniform hypergraph without
property B having $v$ vertices.
## 1 Introduction
We consider an $n$-uniform hypergraph $H=(V,E)$ with vertex set $V$ of
cardinality $|V|=v$ and edge set
$E\subseteq\\{A\subseteq V:|A|=n\\}.$
$H$ is said to have property B if it is 2-colorable, i.e., if its vertices can
be colored with two colors (traditionally called “red” and “blue”) in such a
way that no edge is monochromatic.
We let $\mathbf{m}(n)$ denote the smallest number of edges that an $n$-uniform
hypergraph without property B must have, and $\mathbf{m}(n,v)$ the smallest
number of edges in a hypergraph with $v$ vertices that does not have property
B.
The exact values of $\mathbf{m}(n)$ are only known for $n=1,2,3,4$ so far. For
$n=5$ and higher, only bounds on the true value are known [13]. Brute-force
calculation quickly reaches its limits due to the rapidly increasing
complexity of the problem as $n$ grows.
Erdős and Hajnal [9] presented the first upper bound
$\mathbf{m}(n)\leq\genfrac{(}{)}{0.0pt}{1}{2n-1}{n}.$ Erdős [10], using the
probabilistic method, obtained the bounds
$2^{n-1}\leq\mathbf{m}(n)\leq(1+o(1))e\log(2)n^{2}2^{n-2}.$ (1)
His upper bound is still the best asymptotic result available, though some
improvements have been made for small $n$ by constructive means.
In [8], for even $v=O(n)$, he proves the bounds
$\frac{\genfrac{(}{)}{0.0pt}{1}{v}{n}}{2\genfrac{(}{)}{0.0pt}{1}{v/2}{n}}\leq\mathbf{m}(n,v)\leq
2v\frac{\genfrac{(}{)}{0.0pt}{1}{v}{n}}{2\genfrac{(}{)}{0.0pt}{1}{v/2}{n}}.$
(2)
This actually holds for any even $v>2n$, but for $v>n^{2}/2$ this is worse
than (1), which is obtained for $v=n^{2}/2$. So, for any $v$, we have upper
and lower bounds for $\mathbf{m}(n,v)$ that differ only by a factor of order
$O(n^{2})$. This is still quite big, but its polynomial growth is slower than
the exponential growth of the bounds on $\mathbf{m}(n,v)$, a fact that will be
important in our later considerations.
As for the lower bound, Goldberg and Russell [11] observed that for the
smallest $v$ with $\mathbf{m}(n,v)\leq m$, a hypergraph with $v$ vertices
achieving this bound must have the property that every pair of vertices is
contained in some edge, which, by Schönheim's bound, implies
$m\geq\lceil\frac{v}{n}\lceil\frac{v-1}{n-1}\rceil\rceil.$ (3)
Together with the random coloring bound
$\mathbf{m}(n,v)\geq\frac{\genfrac{(}{)}{0.0pt}{1}{v}{n}}{\genfrac{(}{)}{0.0pt}{1}{\lfloor
v/2\rfloor}{n}+\genfrac{(}{)}{0.0pt}{1}{\lceil v/2\rceil}{n}},$ (4)
this gives
$\mathbf{m}(n)\geq\min_{v}\max\left(\lceil\frac{v}{n}\lceil\frac{v-1}{n-1}\rceil\rceil,\left\lceil\frac{\genfrac{(}{)}{0.0pt}{1}{v}{n}}{\genfrac{(}{)}{0.0pt}{1}{\lfloor
v/2\rfloor}{n}+\genfrac{(}{)}{0.0pt}{1}{\lceil
v/2\rceil}{n}}\right\rceil\right).$ (5)
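For concreteness, (5) can be evaluated directly; the following Python sketch (ours, not the authors' code; the scan range over $v$ is an illustrative choice) transcribes the two bounds using exact integer ceilings:

```python
from math import comb

def goldberg_russell_bound(n, v_max=400):
    # Evaluate (5): for each v, take the larger of the covering bound (3)
    # and the random coloring bound (4), then minimize over v. Ceilings of
    # positive fractions are computed exactly via negative floor division.
    best = None
    for v in range(2 * n + 1, v_max + 1):
        inner = -((1 - v) // (n - 1))              # ceil((v-1)/(n-1))
        covering = -((-v * inner) // n)            # ceil(v*inner/n), bound (3)
        denom = comb(v // 2, n) + comb((v + 1) // 2, n)
        coloring = -((-comb(v, n)) // denom)       # ceiling of bound (4)
        candidate = max(covering, coloring)
        best = candidate if best is None else min(best, candidate)
    return best

# e.g., goldberg_russell_bound(5) should recover the n = 5 entry of Table 1
```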
Beck [2, 3] used a recoloring of a random coloring and succeeded in proving
$2^{-n}\mathbf{m}(n)\to\infty$. Later Spencer [16] strengthened and simplified
Beck’s argument. This idea was carried further by Radhakrishnan and Srinivasan
[17], yielding the best lower bound currently known
$\mathbf{m}(n)=\Omega(2^{n}\sqrt{n/\log(n)}).$
Cherkashin and Kozik [4] gave a simpler proof of this result, using the greedy
coloring approach introduced by Pluhár [14]. We study this approach in more
detail in the next section.
Many of these results generalize to more than two colors. The survey article
by Raigorodskii and Cherkashin [15] gives an account of various results along
with their proofs.
## 2 Greedy Coloring
Pluhár [14] introduced a greedy coloring procedure: one starts with all
vertices red and arranges them in random order. Then one looks at the vertices
in sequence, changing the color of a vertex to blue if it is the last one in
an otherwise all-red edge.
By the nature of this algorithm, no monochromatic red edge can occur, and it
is obvious that for a two-colorable hypergraph there is an ordering of the
vertices for which the greedy algorithm yields a proper coloring. As the
dependence of this procedure on the number of vertices is a bit of a nuisance,
Pluhár [14] introduced the notion of assigning independent, uniformly $[0,1]$
distributed weights to the vertices and arranging them in increasing order of
their weights.
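This procedure is easy to express in code; the following Python sketch (ours, purely for illustration — the results in this paper come from exact bounds, not simulation) implements the weighted greedy coloring just described:

```python
import random

def random_greedy_coloring(vertices, edges):
    # Assign i.i.d. uniform [0,1) weights, scan vertices in increasing
    # weight order, start everything red, and recolor a vertex blue iff it
    # is the heaviest vertex of an edge whose other vertices are all red.
    weight = {u: random.random() for u in vertices}
    color = {u: 'red' for u in vertices}
    for u in sorted(vertices, key=weight.get):
        for e in edges:
            if u in e and all(weight[w] < weight[u] and color[w] == 'red'
                              for w in e if w != u):
                color[u] = 'blue'
                break
    # no edge can end up all red; the coloring is proper iff no edge is all blue
    success = all(any(color[w] == 'red' for w in e) for e in edges)
    return color, success
```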
Using this idea, he obtained a simpler proof of the fact that
$\mathbf{m}(n)2^{-n}\to\infty$, although his result was weaker than that of
Radhakrishnan and Srinivasan. The next step was performed by Cherkashin and
Kozik [4]. They utilized the random greedy coloring method to construct a
simpler proof for Radhakrishnan and Srinivasan’s asymptotic result.
The central idea is the following: greedy coloring can only fail if it
produces a blue edge. By the nature of the coloring procedure, the first
vertex in this edge must be the last vertex in some other (otherwise red)
edge. Thus, the probability that coloring fails can be estimated above by the
probability that some vertex is the last one in some edge and the first in
some other at the same time. We call such a vertex critical. Given that there
is a vertex with weight $x$, we can bound the conditional probability that it
is critical by each of the three probabilities that it is the last vertex in
some edge, that it is the first in some edge, or that both hold at the same
time. This yields the estimate
$\mathbb{P}(\mbox{Greedy coloring
fails})\leq\int_{0}^{1}\min(mnx^{n-1},mn(1-x)^{n-1},\gamma
x^{n-1}(1-x)^{n-1})dx,$ (6)
where $m$ is the number of edges, and $\gamma$ is the number of ordered pairs
of edges that have exactly one vertex in common. In some cases, one can find
good estimates for $\gamma$, but most of the time we have to make do with the
trivial bound $\gamma\leq m(m-1)$. It is easily seen that the right-hand side
of (6) equals the minimum over $x\in[0,1/2]$ of
$2mx^{n}+\gamma\int_{x}^{1-x}u^{n-1}(1-u)^{n-1}du.$ (7)
If this is less than $1$, then there is a positive probability that greedy
coloring succeeds, and so $\mathbf{m}(n)>m$. Cherkashin and Kozik [4] weaken
this by applying the inequalities $u(1-u)\leq 1/4$ and $\gamma\leq m^{2}$ to
obtain Radhakrishnan and Srinivasan’s [17] result; Aglave et al. [1] reduce
the case $n=5$, $m=28$ to $v=23$, as all other values of $v$ are ruled out by
(3) and (4). They improve the upper bound on $\gamma$ to $\gamma\leq 670$,
which is enough to give a value less than $1$ in equation (7), proving
$\mathbf{m}(5)\geq 29$.
We go back to considering $\mathbf{m}(n,v)$ with its explicit dependence on
the number of vertices $v$. As a consequence, instead of the continuous
distribution of the weights, we can work directly with the discrete
distribution of the random permutation. By the same reasoning that led us to
(6), we can get an upper bound for the probability $p$ that there is a
critical vertex:
$p\leq\sum_{k=0}^{v}p(k),$
where $p(k)$ denotes the probability that the vertex in position $k$ is
critical; this can be estimated above by
$\sum_{k=0}^{v}p(k)\leq\sum_{k=0}^{v}\frac{1}{v}\min\left(mn\frac{\genfrac{(}{)}{0.0pt}{1}{k-1}{n-1}}{\genfrac{(}{)}{0.0pt}{1}{v-1}{n-1}},mn\frac{\genfrac{(}{)}{0.0pt}{1}{v-k}{n-1}}{\genfrac{(}{)}{0.0pt}{1}{v-1}{n-1}},\gamma\frac{\genfrac{(}{)}{0.0pt}{1}{k-1}{n-1}\genfrac{(}{)}{0.0pt}{1}{v-k}{n-1}}{\genfrac{(}{)}{0.0pt}{1}{v-1}{n-1}\genfrac{(}{)}{0.0pt}{1}{v-n}{n-1}}\right).$
(8)
It may be worth noting that for $v\to\infty$, the right-hand side of (8)
converges to (6), and, of course, the three terms in the minimum are bounds
for the probabilities that the vertex in position $k$ is the last vertex in
some edge, the first in some edge, or both.
In equation (8), the first term in the minimum is smaller than the second for
$k<\lfloor(v+1)/2\rfloor$, and if the smaller of these is not greater than the
third, then the right-hand side evaluates as
$m\frac{\genfrac{(}{)}{0.0pt}{1}{\lfloor
v/2\rfloor}{n}+\genfrac{(}{)}{0.0pt}{1}{\lceil
v/2\rceil}{n}}{\genfrac{(}{)}{0.0pt}{1}{v}{n}},$
so we can only get an improvement over (4) if, for some $k<(v+1)/2$, we have
$\gamma\genfrac{(}{)}{0.0pt}{0}{v-k}{n-1}<mn\genfrac{(}{)}{0.0pt}{0}{v-n}{n-1}.$
As long as we do not have a better estimate than $\gamma\leq m(m-1)$, (2)
implies that we need
$\genfrac{(}{)}{0.0pt}{0}{v-1}{n-1}<n\genfrac{(}{)}{0.0pt}{0}{v-n}{n-1},$ (9)
which in turn implies $v>\frac{n^{2}}{\log(n)}(1+o(1))$.
We can make some slight improvements to our estimate of $\gamma$: on one hand,
if we know that a certain pair of vertices is contained in $r$ edges, then
obviously
$\gamma\leq m(m-1)-r(r-1),$
and another upper bound is obtained by counting the pairs that have a certain
vertex in common: let $l_{i}$ denote the number of occurrences of vertex $i$,
and $r_{i}$ the maximum frequency of a pair $\\{i,j\\},j\neq i$. Then
$\gamma\leq\sum_{i=1}^{v}\left(l_{i}(l_{i}-1)-r_{i}(r_{i}-1)\right).$
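For a concrete hypergraph, this bound is cheap to evaluate; the following Python sketch (ours) computes the degree-based bound from an edge list:

```python
from collections import Counter
from itertools import combinations

def gamma_degree_bound(edges):
    # Evaluate the bound above: sum over vertices i of
    # l_i(l_i - 1) - r_i(r_i - 1), where l_i is the degree of i and r_i
    # is the maximum frequency among pairs {i, j} with j != i.
    degree = Counter(u for e in edges for u in e)
    pair_freq = Counter(frozenset(p) for e in edges
                        for p in combinations(sorted(e), 2))
    bound = 0
    for u, l in degree.items():
        r = max((c for p, c in pair_freq.items() if u in p), default=0)
        bound += l * (l - 1) - r * (r - 1)
    return bound
```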
Using these ideas, we calculated lower bounds, letting $n$ vary in the range
$5$ to $9$, and $v$ from $2n+1$ to $200$. In fact, in tune with our
considerations in (9), we observe that, for small values of $v$, our bound and
the Goldberg-Russell bound (5) agree. For $n=5$, we get improved lower bounds
for $17\leq v\leq 25$.
For these calculations and those mentioned below, we use small C programs. As
all the probabilities in question, apart from the continuous Cherkashin–Kozik
bound (7), are rational, we can perform exact calculations using an
appropriate bignum library; we chose GNU libgmp. Multiplying by a common
denominator, we can work with integers, which yields a considerable gain in
efficiency over rational-number computations.
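As an illustration of what such a program computes, the right-hand side of (8) can be evaluated exactly in a few lines; this Python sketch (ours) uses the standard fractions module in place of libgmp:

```python
from fractions import Fraction
from math import comb

def critical_vertex_bound(n, m, v, gamma=None):
    # Exact evaluation of the right-hand side of (8): an upper bound on
    # the probability that the random greedy coloring yields a critical
    # vertex. gamma defaults to the trivial bound m(m - 1).
    if gamma is None:
        gamma = m * (m - 1)
    d = comb(v - 1, n - 1)
    total = Fraction(0)
    for k in range(1, v + 1):
        last = Fraction(m * n * comb(k - 1, n - 1), d)
        first = Fraction(m * n * comb(v - k, n - 1), d)
        both = Fraction(gamma * comb(k - 1, n - 1) * comb(v - k, n - 1),
                        d * comb(v - n, n - 1))
        total += Fraction(1, v) * min(last, first, both)
    return total   # a value < 1 certifies that m(n, v) > m
```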
For other values of $n$, too, this procedure gives us improved lower bounds
for $\mathbf{m}(n)$. We summarize these bounds for small values of $n$,
together with those obtained from Cherkashin and Kozik's [4] continuous
procedure, which is independent of $v$, and the basic estimate by Goldberg and
Russell [11], in Table 1. For the sake of completeness, we have also included
the best known upper bounds as reported in [1].
$n$ | $5$ | $6$ | $7$ | $8$ | $9$
---|---|---|---|---|---
Goldberg and Russell [11] | 28 | 51 | 94 | 174 | 328
Cherkashin and Kozik [4] | 27 | 57 | 119 | 248 | 516
Discrete greedy eq. (8) | 30 | 62 | 126 | 259 | 533
Current upper bound [1] | 51 | 147 | 421 | 1212 | 2401
Table 1: First lower bounds
It should be mentioned that, although we get improved lower bounds for some
finite values of $n$, this approach does not change the asymptotic result. As
a matter of fact, the sum in (8), properly normalized, converges to the
integral in (6).
## 3 Locking a vertex in place
We consider a particular pair of edges that have only one vertex in common.
The probability that this pair becomes critical with their common point in
position $k$ evaluates as
$\frac{\genfrac{(}{)}{0.0pt}{1}{k-1}{n-1}\genfrac{(}{)}{0.0pt}{1}{v-k}{n-1}}{v\genfrac{(}{)}{0.0pt}{1}{v-1}{n-1}\genfrac{(}{)}{0.0pt}{1}{v-n}{n-1}}.$
This takes its maximum for $k=\lceil v/2\rceil$. Thus, it seems plausible that
we might be able to improve our estimates by locking a vertex with a small
degree in place there. So, let $l$ denote the smallest degree of a vertex, and
let us put the associated vertex in position $v_{1}=\lceil v/2\rceil$. There
are $l$ edges that contain the selected vertex, and $m-l$ that don’t. By the
pigeon-hole principle, we know that
$\frac{v-1}{n-1}\leq l\leq\frac{mn}{v}:$
for the lower bound, remember that we assume that every pair $\\{i,j\\}$ with
$j\neq i$ is contained in some edge, so $l(n-1)\geq v-1$; on the other hand,
the sum of all $v$ vertex degrees is $mn$, so $lv\leq mn$.
This assumption changes our bounds for the various probabilities, e.g., for
$k<v_{1}$ we have
$p_{1}(k)=n(m-l)\frac{\genfrac{(}{)}{0.0pt}{1}{k-1}{n-1}}{(v-1)\genfrac{(}{)}{0.0pt}{1}{v-2}{n-1}},$
$p_{2}(k)=n(m-l)\frac{\genfrac{(}{)}{0.0pt}{1}{v-k-1}{n-1}}{(v-1)\genfrac{(}{)}{0.0pt}{1}{v-2}{n-1}}+nl\frac{\genfrac{(}{)}{0.0pt}{1}{v-k-1}{n-2}}{(v-1)\genfrac{(}{)}{0.0pt}{1}{v-2}{n-2}},$
$p_{3}(k)=(m-l)(m-l-1)\frac{\genfrac{(}{)}{0.0pt}{1}{k-1}{n-1}\genfrac{(}{)}{0.0pt}{1}{v-k-1}{n-1}}{(v-1)\genfrac{(}{)}{0.0pt}{1}{v-2}{n-1}\genfrac{(}{)}{0.0pt}{1}{v-n-1}{n-1}}+(m-l)l\frac{\genfrac{(}{)}{0.0pt}{1}{k-1}{n-1}\genfrac{(}{)}{0.0pt}{1}{v-k-1}{n-2}}{(v-1)\genfrac{(}{)}{0.0pt}{1}{v-2}{n-1}\genfrac{(}{)}{0.0pt}{1}{v-n-1}{n-2}}$
as upper bounds for the probabilities that the vertex in position $k$ is last
in some edge, first in some edge, or both.
As it turns out, this approach serves to improve our lower bounds to
$\mathbf{m}(5)\geq 31$, $\mathbf{m}(6)\geq 63$, $\mathbf{m}(7)\geq 127$,
$\mathbf{m}(8)\geq 261$ and $\mathbf{m}(9)\geq 537$.
We can go further and fix more than one vertex in the center spots.
Unfortunately, this quickly gets complicated, as we have to control the
numbers of edges that contain a particular subset of the selected vertices,
which amounts to $2^{s}-1$ numbers for $s$ selected vertices. In the case
$s=2$, we need to control the numbers $l_{1}$, $l_{2}$ and $l_{12}$ of edges
containing only the first selected vertex, only the second, or both. This is
still simple enough to carry out the calculations for all combinations of $n$
and $v$ that we considered in the previous section. In this way, we obtain the
improvements $\mathbf{m}(7)\geq 128$ and $\mathbf{m}(8)\geq 262$.
The case $s=3$ affords working with the seven numbers $l_{A}$ of edges
containing exactly the (non-empty) subset $A$ of $\\{1,2,3\\}$. We apply this
to selected combinations of $n$, $m$, and $v$. This by itself is enough to
give us $\mathbf{m}(5)\geq 32$. For $n=6$ and above, we do not get an
immediate improvement. For $n=6$, we need to check the values of $v$ from $39$
to $42$. It turns out that our upper bound for the probability that greedy
coloring fails is less than $1$ for all cases except $L_{1}=L_{2}=9$ and
$L_{12}\geq 2$ (we use $L_{A}$ to denote the number of edges that contain $A$
as a subset, so, for example $L_{1}=l_{1}+l_{12}+l_{13}+l_{123}$, which equals
the degree of vertex $1$). The sum of all vertex degrees is $6\cdot 63=378$
and every vertex degree is at least $9$, accounting for a total of at least
$39\cdot 9=351$. This only leaves room for at most $23$ vertices with a degree
greater than $9$. So, at least $16$ vertices must have a degree equal to $9$.
If we pick one of those, call it $1$, the sum of the numbers of pairs
involving it is 45. Every such pair must occur at least once, leaving at most
$6$ pairs that can occur more than once. Thus, we can find another vertex of
degree $9$, call it $2$, such that the pair $(1,2)$ has only one occurrence.
Choosing these two vertices as the marked ones, we get $L_{1}=L_{2}=9$ and
$L_{12}=1$. For these parameters, the greedy algorithm succeeds with positive
probability, and we arrive at $\mathbf{m}(6)\geq 64$. Similar arguments work
for eliminating $n=8,m=262$ and $n=9,m=537$, so we can summarize
###### Theorem 1.
We have the lower bounds
$\mathbf{m}(5)\geq 32,\mathbf{m}(6)\geq 64,\mathbf{m}(7)\geq
128,\mathbf{m}(8)\geq 263,\mathbf{m}(9)\geq 538.$
## 4 The case $v=2n+1$
The case $v=2n+1$ is interesting, because it is the smallest number of
vertices for which $\mathbf{m}(n,v)$ is not known for all values of $n$, and
because of its close relation to other covering questions. In particular, de
Vries [6] showed that the Goldberg–Russell lower bound
$\mathbf{m}(n,2n+1)\geq\frac{\genfrac{(}{)}{0.0pt}{1}{2n+1}{n}}{n+2}$
is attained if and only if the associated hypergraph is an $(n-1,n,2n+1)$
Steiner system, i.e., if every $(n-1)$-subset of $V$ is contained in exactly
one of the hyperedges. It is worth noting that this lower bound is
$C_{n+1}/2$, where $C_{n}$ denotes the $n$-th Catalan number. $C_{n}$ is known
to be odd iff $n$ is of the form $n=2^{k}-1$ [5], so we find that for
$n=2^{k}-2$ the lower bound $C_{n+1}/2$ is not an integer, so it cannot be
attained. De Vries' result, combined with the well-known fact that a
$(3,4,v)$-Steiner system exists iff $v\equiv 2,4\pmod{6}$ [12], even gives
that $\mathbf{m}(n,2n+1)>C_{n+1}/2$ if $n\not\equiv 3,5\pmod{6}$. For
particular values of $n$, we can push this a bit further:
###### Theorem 2.
For $n=2^{k}-2$, $k\geq 3$
$\mathbf{m}(n,2n+1)\geq(C_{n+1}+3)/2,$
and for $n=2^{r}(2^{2k}+1)-2$, $r,k\geq 1$, in particular for $n=2^{2k+1}$,
$k\geq 1$,
$\mathbf{m}(n,2n+1)\geq C_{n+1}/2+3.$
For the proof, we exploit our analysis for the greedy algorithm with one and
two fixed vertices. We let
$m_{0}=\frac{C_{n+1}}{2}=\frac{(2n+1)!}{n!(n+2)!}$
and
$L_{0}=\frac{m_{0}n}{2n+1}=\frac{(2n)!}{(n-1)!(n+2)!}=\frac{1}{n-1}\genfrac{(}{)}{0.0pt}{1}{2n}{n-2}=\frac{1}{n+2}\genfrac{(}{)}{0.0pt}{1}{2n}{n-1}.$
The last two terms show that the denominator in the representation of $L_{0}$
as a reduced fraction is a common divisor of $n-1$ and $n+2$. By our
assumptions, $n-1$ is not a multiple of $3$, so $L_{0}$ is an integer.
Let us now look at a hypergraph with $m=m_{0}+x$ edges. A critical vertex can
only occur in position $n$, $n+1$, or $n+2$.
If we lock a vertex with minimal degree $L_{1}$ (we will call it “1” in the
future) in position $n+1$, we can bound the probability that $n$ is critical
by the probability that it is the last in some edge, which in turn is bounded
by
$\frac{m-L_{1}}{\genfrac{(}{)}{0.0pt}{1}{2n}{n}},$
and the same bound applies to the probability that $n+2$ is critical.
Similarly, the probability that $n+1$ is critical is bounded by
$\frac{L_{1}\genfrac{(}{)}{0.0pt}{1}{n}{n-1}}{\genfrac{(}{)}{0.0pt}{1}{2n}{n-1}}.$
From these, we obtain the upper bound
$\frac{(n!)^{2}(2m+L_{1}(n-1))}{(2n)!}$
for the probability that there is a critical vertex.
The pigeonhole principle implies
$L_{1}\leq\frac{mn}{2n+1}=\frac{(m_{0}+x)n}{2n+1}=L_{0}+\frac{xn}{2n+1}.$
For $x\leq 2$ this upper bound is less than $L_{0}+1$, so we obtain $L_{1}\leq
L_{0}$. For $L_{1}<L_{0}$, our upper bound for the probability of a critical
vertex is at most
$1+\frac{(n!)^{2}(2x+1-n)}{(2n)!}<1,$
so we are left with the case $L_{1}=L_{0}$. We lock the vertex with the second
largest degree $L_{2}$ (call it “2”) in position $n+2$. We let $L_{12}$ denote
the number of edges containing both vertices 1 and 2. There are
$l_{1}=L_{1}-L_{12}$ edges containing vertex 1 but not 2, $l_{2}=L_{2}-L_{12}$
containing vertex 2 alone, and $l_{0}=m-L_{1}-L_{2}+L_{12}$ edges containing
neither 1 nor 2. We get upper bounds
$\frac{l_{0}}{\genfrac{(}{)}{0.0pt}{1}{2n-1}{n}}$
and
$\frac{l_{2}}{\genfrac{(}{)}{0.0pt}{1}{2n-1}{n-1}}$
for the probabilities that $n$ resp. $n+2$ are critical. For $n+1$, we have
the two upper bounds
$\frac{l_{1}n}{\genfrac{(}{)}{0.0pt}{1}{2n-1}{n-1}}$
and
$\frac{l_{1}}{\genfrac{(}{)}{0.0pt}{1}{2n-1}{n-1}}+\frac{L_{12}(n-1)}{\genfrac{(}{)}{0.0pt}{1}{2n-1}{n-2}}.$
Thus, the probability that there is a critical vertex is at most
$\frac{n!(n-1)!(l_{0}+l_{2}+\min(nl_{1},l_{1}+L_{12}(n+1)))}{(2n-1)!}=$
$\frac{n!(n-1)!(2m+(n-1)L_{1}-2n|L_{12}-L_{12}^{0}|)}{2(2n-1)!}=$
$1+\frac{n!(n-1)!(x-n|L_{12}-L_{12}^{0}|)}{(2n-1)!},$
where we have put
$L_{12}^{0}=\frac{L_{0}(n-1)}{2n}=\frac{m_{0}(n-1)}{2(2n+1)}=\frac{(2n-1)!}{(n-2)!(n+2)!}.$
By theorem 2.1 in [5], the multiplicity of $2$ as a factor of $C_{n}$ is one
less than the number of digits $1$ in the binary representation of $n+1$. From
this, we can conclude that $L_{12}^{0}$ is not an integer, but $4L_{12}^{0}$
is. Thus $|L_{12}-L_{12}^{0}|\geq 1/4$, and we get an upper bound less than
$1$ for the probability that a critical vertex exists.
## 5 Conclusion
We were able to obtain improved lower bounds for $\mathbf{m}(n)$ and
$\mathbf{m}(n,v)$ for selected values of $n$ and $v$, in particular we could
improve the lower bound $\mathbf{m}(5)\geq 29$ from [1] to $\mathbf{m}(5)\geq
32$.
We hope that our method can be extended to obtain an improvement of the
asymptotic lower bound. To this end, we would need to let the number of fixed
vertices go to infinity as $n$ increases. In order to still obtain
sufficiently small upper bounds for the probabilities of criticality, this
needs to be combined with a tight control of the joint occurrences of the
selected vertices.
## 6 Acknowledgements
We are very grateful to Prof. Cherkashin for his valuable remarks and
suggestions.
## References
* [1] Aglave, S., Amarnath, V.A., Shannigrahi, S. Singh, S., “Improved Bounds for Uniform Hypergraphs without Property B”, Australasian Journal of Combinatorics, Volume 76(1) (2020), 73–86.
* [2] Beck, J. “On a Combinatorial Problem of P. Erdős and L. Lovász”, Discrete Mathematics 17 (1977), 127–131.
* [3] Beck, J. “On 3-chromatic Hypergraphs”, Discrete Mathematics, 24(2) (1978), 127–137.
* [4] Cherkashin, D., Kozik, J., “A note on random greedy coloring of uniform hypergraphs”, Random Structures and Algorithms 47 (3) (2015), 407–413.
* [5] Deutsch, E., Sagan, B.E., “Congruences for Catalan and Motzkin numbers and related sequences”, J. Number Theory 117 (2006), 191–215. doi:10.1016/j.jnt.2005.06.005.
* [6] de Vries, H.L., “On property B and on Steiner systems”, Math. Z. 153(2) (1977), 153–159, doi:10.1007/BF01179788.
* [7] Erdős, P., “On a combinatorial problem”, Nordisk Mat. Tidskrift 11 (1963), 5–10.
* [8] Erdős, P., “On a combinatorial problem III”, Canad. Math. Bull. 12(4) (1969), 413–416. doi:10.4153/CMB-1969-051-5.
* [9] Erdős, P., and Hajnal, A., “A Property of a Family of Sets”, Acta Math. Hung. 12(1) (1961), 87–123. doi:10.1007/BF02066676.
* [10] Erdős, P., and Spencer, J., “Probabilistic Methods in Combinatorics”, Akadémiai Kiadó, Budapest, 1974.
* [11] Goldberg, M.K., and Russell, H.C., “Toward computing m(4)”, Ars Combin. 39(1995), 139–148.
* [12] Hanani, H., “On Quadruple Systems”, Canad. J. Math. 12 (1960), 145–157.
* [13] Östergård, P.R.J., “On the minimum size of 4-uniform hypergraphs without property B”, Discrete Appl. Math. 163(2) (2014), 199–204.
* [14] Pluhár, A., “Greedy colorings of uniform hypergraphs”, Random Structures and Algorithms 35 (2009) 216–221. doi:10.1002/rsa.20267.
* [15] Raigorodskii, A.M., and Cherkashin, D.D., “Extremal problems in hypergraph coloring”, Russian Math. Surveys 75:1 (2020), 89–146, doi:10.1070/RM9905.
* [16] Spencer, J. “Coloring $n$-Sets Red and Blue”, J. Comb. Theory Ser. A 30 (1981), 112–113, doi:10.1016/0097-3165(81)90045-5.
* [17] Radhakrishnan, J., and Srinivasan, A., “Improved bounds and algorithms for hypergraph 2-coloring”, Random Structures Algorithms 16(1) (2000), 4–32.
# Noisy Feature Mixup
Soon Hoe Lim
Nordita, KTH Royal Institute of Technology and Stockholm University
<EMAIL_ADDRESS>

N. Benjamin Erichson*
University of Pittsburgh
<EMAIL_ADDRESS>

Francisco Utrera
University of Pittsburgh and ICSI
<EMAIL_ADDRESS>

Winnie Xu
University of Toronto
<EMAIL_ADDRESS>

Michael W. Mahoney
ICSI and UC Berkeley
<EMAIL_ADDRESS>

*equal contributions
###### Abstract
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method
for data augmentation that combines the best of interpolation-based training
and noise injection schemes. Rather than training with convex combinations of
pairs of examples and their labels, we use noise-perturbed convex combinations
of pairs of data points in both input and feature space. This method includes
mixup and manifold mixup as special cases, but it has additional advantages,
including better smoothing of decision boundaries and enabling improved model
robustness. We provide theory to understand this as well as the implicit
regularization effects of NFM. Our theory is supported by empirical results,
demonstrating the advantage of NFM, as compared to mixup and manifold mixup.
We show that residual networks and vision transformers trained with NFM have
favorable trade-offs between predictive accuracy on clean data and robustness
with respect to various types of data perturbation across a range of computer
vision benchmark datasets.
## 1 Introduction
Mitigating over-fitting and improving generalization on test data are central
goals in machine learning. One approach to accomplish this is regularization,
which can be either data-agnostic or data-dependent (e.g., explicitly
requiring the use of domain knowledge). Noise injection is a typical example
of data-agnostic regularization [3], where noise can be injected into the
input data [1], or the activation functions [27], or the hidden layers of
neural networks [5, 43].
Figure 1: An illustration of how two data points, ${\bf x}_{1}$ and ${\bf
x}_{2}$, are transformed in mixup (top) and NFM with $\mathcal{S}:=\\{0\\}$
(bottom).
Data augmentation constitutes a different class of regularization methods [2,
7, 12], which can also be either data-agnostic or data-dependent. Data
augmentation involves training a model with not just the original data, but
also with additional data that is properly transformed, and it has led to
state-of-the-art results in image recognition [9, 37]. The recently-proposed
data-agnostic method, mixup [79], trains a model on linear interpolations of a
random pair of examples and their corresponding labels, thereby encouraging
the model to behave linearly in-between training examples. Both noise
injection and mixup have been shown to impose smoothness and increase model
robustness to data perturbations [80, 6, 43], which is essential in many
safety-critical and sensitive applications [24, 44].
In this paper, we propose and study a simple, inexpensive yet effective data
augmentation method, which we call Noisy Feature Mixup (NFM). This method
combines mixup and noise injection, thereby inheriting the benefits of both
methods, and can be seen as a generalization of input mixup [79] and manifold
mixup [68]. When compared to noise injection and mixup, NFM imposes
regularization on the largest natural region surrounding the dataset (see Fig.
1), which may help improve robustness and generalization when predicting on
out-of-distribution data. Conveniently, NFM can be implemented on top of
manifold mixup, introducing minimal computational overhead.
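To make the construction concrete, here is a minimal sketch (ours, assuming Gaussian additive and multiplicative perturbations; the function name and noise levels are illustrative, not the paper's exact training loop) of the input-level special case $\mathcal{S}:=\\{0\\}$ shown in Fig. 1:

```python
import numpy as np

def noisy_input_mixup(x1, y1, x2, y2, alpha=1.0, beta=1.0,
                      sigma_add=0.1, sigma_mult=0.1, rng=None):
    # Mix a random pair of examples and their one-hot labels, then perturb
    # the mixed input with multiplicative and additive noise. The levels
    # sigma_add and sigma_mult are illustrative hyperparameters.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, beta)                  # mixing level
    x = lam * x1 + (1.0 - lam) * x2              # M_lambda(x1, x2)
    y = lam * y1 + (1.0 - lam) * y2              # M_lambda(y1, y2)
    mult = 1.0 + sigma_mult * rng.standard_normal(x.shape)
    add = sigma_add * rng.standard_normal(x.shape)
    return mult * x + add, y
```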
#### Contributions.
Our main contributions in this paper are summarized as follows.
* •
We study NFM via the lens of implicit regularization, showing that NFM
amplifies the regularizing effects of manifold mixup and noise injection,
implicitly reducing the feature-output Jacobians and Hessians according to the
mixing level and noise levels (see Theorem 1).
* •
We provide mathematical analysis to show that NFM can further improve model
robustness when compared to manifold mixup and noise injection. In particular,
we show that, under appropriate assumptions, NFM training approximately
minimizes an upper bound on the sum of an adversarial loss and feature-
dependent regularizers (see Theorem 2).
* •
We provide empirical results in support of our theoretical findings, showing
that NFM improves robustness with respect to various forms of data
perturbation across a wide range of state-of-the-art architectures on computer
vision benchmark tasks. Research codes are shared via
https://github.com/erichson/noisy_mixup.
In Supplementary Materials (SM), we provide proofs for our theorems along with
additional theoretical and empirical results to gain more insights into NFM.
In particular, we show that NFM can implicitly increase classification margin
(see Proposition 1 in SM C) and the noise injection procedure in NFM can
robustify manifold mixup in a probabilistic sense (see Theorem 5 in SM D). We
also provide and discuss generalization bounds for NFM (see Theorem 6 and 7 in
SM E).
Notation. $I$ denotes the identity matrix, $[K]:=\\{1,\dots,K\\}$, the
superscript T denotes transposition, $\circ$ denotes composition, $\odot$
denotes the Hadamard product, and $\mathbb{1}$ denotes the vector with all
components equal to one. For a vector $v$, $v^{k}$ denotes its $k$th component
and $\|v\|_{p}$ denotes its $l_{p}$ norm for $p>0$. $conv(\mathcal{X})$
denotes the convex hull of $\mathcal{X}$.
$M_{\lambda}(a,b):=\lambda a+(1-\lambda)b$, for random variables
$a,b,\lambda$. $\delta_{z}$ denotes the Dirac delta function, defined as
$\delta_{z}(x)=1$ if $x=z$ and $\delta_{z}(x)=0$ otherwise.
$\mathbb{1}_{A}$ denotes indicator function of the set $A$. For
$\alpha,\beta>0$,
$\tilde{\mathcal{D}}_{\lambda}:=\frac{\alpha}{\alpha+\beta}Beta(\alpha+1,\beta)+\frac{\beta}{\alpha+\beta}Beta(\beta+1,\alpha)$
denotes a uniform mixture of two Beta distributions. For two vectors $a,b$,
$\cos(a,b):=\langle a,b\rangle/\|a\|_{2}\|b\|_{2}$ denotes their cosine
similarity. $\mathcal{N}(a,b)$ is a Gaussian distribution with mean $a$ and
covariance $b$.
## 2 Related Work
Regularization. Regularization refers to any technique that reduces
overfitting in machine learning; see [46, 45] and references therein, in
particular for a discussion of _implicit_ regularization, a topic that has
received attention recently in the context of stochastic gradient optimization
applied to neural network models. Traditional regularization techniques such
as ridge regression, weight decay and dropout do not make use of the training
data to reduce the model capacity. A powerful class of techniques is data
augmentation, which constructs additional examples from the training set,
e.g., by applying geometric transformations to the original data [60]. A
recently proposed technique is mixup [79], where the examples are created by
taking convex combinations of pairs of inputs and their labels. [68] extends
mixup to hidden representations in deep neural networks. Subsequent works by
[26, 75, 18, 34, 76, 31] introduce different variants and extensions of mixup.
Regularization is also intimately connected to robustness [32, 61, 49, 17,
48]. Adding to the list is NFM, a powerful regularization method that we
propose to improve model robustness.
Robustness. Model robustness is an increasingly important issue in modern
machine learning. Robustness with respect to adversarial examples [39] can be
achieved by adversarial training [25, 44, 66]. Several works present
theoretical justifications for observed robustness and how data augmentation
can improve it [30, 74, 10, 51, 52, 80, 81, 6, 35, 11, 71, 23, 8]. Relatedly,
[20, 21, 43] investigate how noise injection can be used to improve
robustness. Parallel to this line of work, we provide theory to understand how
NFM can improve robustness. Also related to this line of work is the study of
the trade-offs between robustness and accuracy [47, 78, 65, 58, 63, 54, 73].
There are also attempts to study generalization in terms of robustness [72,
16, 33].
## 3 Noisy Feature Mixup
Noisy Feature Mixup is a generalization of input mixup [79] and manifold mixup
[68]. The main novelty of NFM over manifold mixup lies in the injection of
noise when taking convex combinations of pairs of input and hidden layer
features. Fig. 1 illustrates, at a high level, how this simple modification
alters the region in which the resulting augmented data resides. Fig. 2 shows
that NFM can be most effective at smoothing the decision boundary of the
trained classifiers; compared to noise injection and mixup alone, it produces
the smoothest decision boundary on this dataset.
[Figure 2 panels, decision boundaries with test accuracies: Baseline (85.5%);
Dropout (87.0%); Weight decay (88.0%); Noise injections (87.0%); Mixup
(84.5%); Manifold mixup (88.5%); Noisy mixup (89.0%); NFM (90.0%).]
Figure 2: The decision boundaries and test accuracies (in parentheses) for
different training schemes on a toy binary classification dataset (see
Subsection F.2 for details).
Formally, we consider multi-class classification with $K$ labels. Denote the
input space by $\mathcal{X}\subset\mathbb{R}^{d}$ and the output space by
$\mathcal{Y}=\mathbb{R}^{K}$. The classifier, $g$, is constructed from a
learnable map $f:\mathcal{X}\to\mathbb{R}^{K}$, mapping an input $x$ to its
label, $g(x)=\arg\max_{k}f^{k}(x)\in[K]$. We are given a training set,
$\mathcal{Z}_{n}:=\\{(x_{i},y_{i})\\}_{i=1}^{n}$, consisting of $n$ pairs of
input and one-hot label, with each training pair
$z_{i}:=(x_{i},y_{i})\in\mathcal{X}\times\mathcal{Y}$ drawn i.i.d. from a
ground-truth distribution $\mathcal{D}$. We consider training a deep neural
network $f:=f_{k}\circ g_{k}$, where $g_{k}:\mathcal{X}\to g_{k}(\mathcal{X})$
maps an input to a hidden representation at layer $k$, and
$f_{k}:g_{k}(\mathcal{X})\to g_{L}(\mathcal{X}):=\mathcal{Y}$ maps the hidden
representation to a one-hot label at layer $L$. Here,
$g_{k}(\mathcal{X})\subset\mathbb{R}^{d_{k}}$ for $k\in[L]$, $d_{L}:=K$,
$g_{0}(x)=x$ and $f_{0}(x)=f(x)$.
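As an illustration of this decomposition, the following toy module (our sketch; the architecture and the layer split are hypothetical) exposes $g_{k}$ and $f_{k}$ explicitly:

```python
import torch.nn as nn

class SplitNet(nn.Module):
    # Toy network exposing the decomposition f = f_k o g_k used below.
    # Note that g_k(x, 0) is the identity and f_k(x, 0) = f(x).
    def __init__(self, d=32, K=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(d, 64), nn.ReLU()),
            nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
            nn.Linear(64, K),
        ])

    def g_k(self, x, k):
        # Hidden representation at layer k.
        for block in self.blocks[:k]:
            x = block(x)
        return x

    def f_k(self, h, k):
        # Map the layer-k representation to the output logits.
        for block in self.blocks[k:]:
            h = block(h)
        return h
```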
Training $f$ using NFM consists of the following steps:
1. 1.
Select a random layer $k$ from a set, $\mathcal{S}\subset\\{0\\}\cup[L]$, of
eligible layers in the neural network.
2. 2.
Process two random data minibatches $(x,y)$ and $(x^{\prime},y^{\prime})$ as
usual, until reaching layer $k$. This gives us two intermediate minibatches
$(g_{k}(x),y)$ and $(g_{k}(x^{\prime}),y^{\prime})$.
3. 3.
Perform mixup on these intermediate minibatches, producing the mixed
minibatch:
$(\tilde{g}_{k},\tilde{y}):=(M_{\lambda}(g_{k}(x),g_{k}(x^{\prime})),M_{\lambda}(y,y^{\prime})),$
(1)
where the mixing level $\lambda\sim Beta(\alpha,\beta)$, with the hyper-
parameters $\alpha,\beta>0$.
4. 4.
Produce noisy mixed minibatch by injecting additive and multiplicative noise:
$\displaystyle(\tilde{\tilde{g}}_{k},\tilde{y})$
$\displaystyle:=((\mathbb{1}+\sigma_{mult}\xi_{k}^{mult})\odot
M_{\lambda}(g_{k}(x),g_{k}(x^{\prime}))+\sigma_{add}\xi_{k}^{add},M_{\lambda}(y,y^{\prime})),$
(2)
where the $\xi_{k}^{add}$ and $\xi_{k}^{mult}$ are $\mathbb{R}^{d_{k}}$-valued
independent random variables modeling the additive and multiplicative noise
respectively, and $\sigma_{add},\sigma_{mult}\geq 0$ are pre-specified noise
levels.
5. 5.
Continue the forward pass from layer $k$ until the output using the noisy
mixed minibatch $(\tilde{\tilde{g}}_{k},\tilde{y})$.
6. 6.
Compute the loss and gradients that update all the parameters of the network.
At the level of implementation, following [68], we backpropagate gradients
through the entire computational graph, including those layers before the
mixup layer $k$.
In the case where $\sigma_{add}=\sigma_{mult}=0$, NFM reduces to manifold
mixup [68]. If in addition $\mathcal{S}=\\{0\\}$, it reduces to the original
mixup method [79]. The main difference between NFM and manifold mixup lies in
the noise injection of the fourth step above. Note that NFM is equivalent to
injecting noise into $g_{k}(x),g_{k}(x^{\prime})$ first, then performing mixup
on the resulting pair, i.e., the order that the third and fourth steps occur
does not change the resulting noisy mixed minibatch. For simplicity, we have
used the same mixing level, noise distribution and noise levels for all layers
in $\mathcal{S}$ in our formulation.
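To make steps 3 and 4 concrete, here is a minimal sketch of the mixing and noise injection in Eqs. (1)-(2) (our illustration, not the authors' released implementation; Gaussian distributions are assumed for $\xi_{k}^{add}$ and $\xi_{k}^{mult}$):

```python
import torch

def nfm_mix(h, h2, y, y2, alpha=1.0, beta=1.0, sigma_add=0.4, sigma_mult=0.2):
    # h, h2: layer-k features of two minibatches; y, y2: one-hot label tensors.
    # Step 3 (Eq. (1)): convex combination of features and labels.
    lam = torch.distributions.Beta(alpha, beta).sample()
    g_mix = lam * h + (1.0 - lam) * h2
    y_mix = lam * y + (1.0 - lam) * y2
    # Step 4 (Eq. (2)): multiplicative and additive noise injection.
    xi_mult = torch.randn_like(g_mix)   # assumed Gaussian xi^mult
    xi_add = torch.randn_like(g_mix)    # assumed Gaussian xi^add
    g_noisy = (1.0 + sigma_mult * xi_mult) * g_mix + sigma_add * xi_add
    return g_noisy, y_mix
```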
Within the above setting, we consider the expected NFM loss:
$L^{NFM}(f)=\mathbb{E}_{(x,y),(x^{\prime},y^{\prime})\sim\mathcal{D}}\mathbb{E}_{k\sim\mathcal{S}}\mathbb{E}_{\lambda\sim
Beta(\alpha,\beta)}\mathbb{E}_{\boldsymbol{\xi}_{k}\sim\mathcal{Q}}l(f_{k}(M_{\lambda,\boldsymbol{\xi}_{k}}(g_{k}(x),g_{k}(x^{\prime}))),M_{\lambda}(y,y^{\prime})),$
where $l:\mathbb{R}^{K}\times\mathbb{R}^{K}\to[0,\infty)$ is a loss function
(note that here we have suppressed the dependence of both $l$ and $f$ on the
learnable parameter $\theta$ in the notation),
$\boldsymbol{\xi}_{k}:=(\xi_{k}^{add},\xi_{k}^{mult})$ are drawn from some
probability distribution $\mathcal{Q}$ with finite first two moments, and
$\displaystyle M_{\lambda,\boldsymbol{\xi}_{k}}(g_{k}(x),g_{k}(x^{\prime}))$
$\displaystyle:=(\mathbb{1}+\sigma_{mult}\xi_{k}^{mult})\odot
M_{\lambda}(g_{k}(x),g_{k}(x^{\prime}))+\sigma_{add}\xi_{k}^{add}.$
NFM seeks to minimize a stochastic approximation of $L^{NFM}(f)$ by sampling a
finite number of $k,\lambda,\boldsymbol{\xi}_{k}$ values and using minibatch
gradient descent to minimize this loss approximation.
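A single training step of this stochastic approximation might then look as follows (a sketch building on the SplitNet and nfm_mix examples above; soft-target cross_entropy assumes PyTorch 1.10 or later):

```python
import random
import torch
import torch.nn.functional as F

def nfm_step(model, opt, batch1, batch2, S=(0, 1, 2)):
    # One stochastic gradient step on the NFM loss: a single draw of
    # (k, lambda, xi) per minibatch pair approximates the inner expectations.
    (x, y), (x2, y2) = batch1, batch2
    k = random.choice(S)                           # sample a mixup layer k from S
    h, h2 = model.g_k(x, k), model.g_k(x2, k)      # forward both batches to layer k
    g_noisy, y_mix = nfm_mix(h, h2, y, y2)         # steps 3-4: mix, then inject noise
    loss = F.cross_entropy(model.f_k(g_noisy, k), y_mix)  # soft-target loss
    opt.zero_grad()
    loss.backward()       # backpropagate through layers before the mixup layer too
    opt.step()
    return loss.item()
```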
## 4 Theory
In this section, we provide mathematical analysis to understand NFM. We begin
with formulating NFM in the framework of vicinal risk minimization and
interpreting NFM as a stochastic learning strategy in Subsection 4.1. Next, we
study NFM via the lens of implicit regularization in Subsection 4.2. Our key
contribution is Theorem 1, which shows that minimizing the NFM loss function
is approximately equivalent to minimizing a sum of the original loss and
feature-dependent regularizers, amplifying the regularizing effects of
manifold mixup and noise injection according to the mixing and noise levels.
In Subsection 4.3, we focus on demonstrating how NFM can enhance model
robustness via the lens of distributionally robust optimization. The key
result of Theorem 2 shows that the NFM loss is approximately an upper bound on
a regularized version of an adversarial loss, and thus training with NFM not
only improves robustness but can also mitigate robust overfitting, a dominant
phenomenon where the robust test accuracy starts to decrease during training
[57].
### 4.1 NFM: Beyond Empirical Risk Minimization
The standard approach in statistical learning theory [4] is to select a
hypothesis function $f:\mathcal{X}\to\mathcal{Y}$ from a pre-defined
hypothesis class $\mathcal{F}$ to minimize the expected risk with respect to
$\mathcal{D}$ and to solve the risk minimization problem:
$\inf_{f\in\mathcal{F}}\mathcal{R}(f):=\mathbb{E}_{(x,y)\sim\mathcal{D}}[l(f(x),y)]$,
for a suitable choice of loss function $l$. In practice, we do not have access
to the ground-truth distribution. Instead, we find an approximate solution by
solving the empirical risk minimization (ERM) problem, in which case
$\mathcal{D}$ is approximated by the empirical distribution
$\mathbb{P}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{z_{i}}$. In other words, in
ERM we solve the problem:
$\inf_{f\in\mathcal{F}}\mathcal{R}_{n}(f):=\frac{1}{n}\sum_{i=1}^{n}l(f(x_{i}),y_{i})$.
However, when the training set is small or the model capacity is large (as is
the case for deep neural networks), ERM may suffer from overfitting. Vicinal
risk minimization (VRM) is a data augmentation principle introduced in [67]
that goes beyond ERM, aiming to better estimate expected risk and reduce
overfitting. In VRM, a model is trained not simply on the training set, but on
samples drawn from a vicinal distribution that smears the training data over
their vicinity. With appropriate choices for this distribution, the VRM
approach has resulted in several effective regularization schemes [7].
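As a simple illustration, a Gaussian vicinal distribution, one classical choice of vicinity, can be sampled as follows (our sketch, not from [7]):

```python
import torch

def vicinal_sample(x, y, sigma=0.1):
    # Draw a training example from a Gaussian vicinal distribution:
    # perturb each input within its vicinity and keep the label.
    return x + sigma * torch.randn_like(x), y
```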
Input mixup [79] can be viewed as an example of VRM, and it turns out that NFM
can be constructed within a VRM framework at the feature level (see Section A
in SM). On a high level, NFM can be interpreted as a random procedure that
introduces feature-dependent noise into the layers of the deep neural network.
Since the noise injections are applied only during training and not inference,
NFM is an instance of a stochastic learning strategy. Note that the injection
strategy of NFM differs from those of [1, 5, 43]. Here, the structure of the
injected noise differs from iteration to iteration (based on the layer chosen)
and depends on the training data in a different way. We expect NFM to amplify
the benefits of training using either noise injection or mixup alone, as will
be shown next.
### 4.2 Implicit Regularization of NFM
We consider loss functions of the form $l(f(x),y):=h(f(x))-yf(x)$, which
includes standard choices such as the logistic loss and the cross-entropy
loss, and recall that $f:=f_{k}\circ g_{k}$. Denote
$L_{n}^{std}:=\frac{1}{n}\sum_{i=1}^{n}l(f(x_{i}),y_{i})$ and let
$\mathcal{D}_{x}$ be the empirical distribution of training samples
$\\{x_{i}\\}_{i\in[n]}$. We shall show that NFM exhibits a natural form of
implicit regularization, i.e., regularization imposed implicitly by the
stochastic learning strategy or approximation algorithm, without explicitly
modifying the loss.
Let $\epsilon>0$ be a small parameter. In the sequel, we rescale
$1-\lambda\mapsto\epsilon(1-\lambda)$,
$\sigma_{add}\mapsto\epsilon\sigma_{add}$,
$\sigma_{mult}\mapsto\epsilon\sigma_{mult}$, and we denote by $\nabla_{k}f$
and $\nabla_{k}^{2}f$ the first and second derivatives of $f_{k}$ with respect
to $g_{k}$, respectively, for $k\in\mathcal{S}$. By working in the
small parameter regime, we can relate the NFM empirical loss $L_{n}^{NFM}$ to
the original loss $L_{n}^{std}$ and identify the regularizing effects of NFM.
###### Theorem 1.
Let $\epsilon>0$ be a small parameter, and assume that $h$ and $f$ are twice
differentiable. Then,
$L^{NFM}_{n}=\mathbb{E}_{k\sim\mathcal{S}}L^{NFM(k)}_{n}$, where
$L^{NFM(k)}_{n}=L_{n}^{std}+\epsilon
R_{1}^{(k)}+\epsilon^{2}\tilde{R}_{2}^{(k)}+\epsilon^{2}\tilde{R}_{3}^{(k)}+\epsilon^{2}\varphi(\epsilon),$
(3)
with
$\tilde{R}_{2}^{(k)}=R_{2}^{(k)}+\sigma_{add}^{2}R_{2}^{add(k)}+\sigma_{mult}^{2}R_{2}^{mult(k)}$
and
$\tilde{R}_{3}^{(k)}=R_{3}^{(k)}+\sigma_{add}^{2}R_{3}^{add(k)}+\sigma_{mult}^{2}R_{3}^{mult(k)}$,
where
$\displaystyle R_{2}^{add(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}h^{\prime\prime}(f(x_{i}))\nabla_{k}f(g_{k}(x_{i}))^{T}\mathbb{E}_{\boldsymbol{\xi}_{k}}[\xi_{k}^{add}(\xi_{k}^{add})^{T}]\nabla_{k}f(g_{k}(x_{i})),$
(4) $\displaystyle R_{2}^{mult(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}h^{\prime\prime}(f(x_{i}))\nabla_{k}f(g_{k}(x_{i}))^{T}(\mathbb{E}_{\boldsymbol{\xi}_{k}}[\xi_{k}^{mult}(\xi_{k}^{mult})^{T}]\odot
g_{k}(x_{i})g_{k}(x_{i})^{T})\nabla_{k}f(g_{k}(x_{i})),$ (5) $\displaystyle
R_{3}^{add(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}(h^{\prime}(f(x_{i}))-y_{i})\mathbb{E}_{\boldsymbol{\xi}_{k}}[(\xi_{k}^{add})^{T}\nabla_{k}^{2}f(g_{k}(x_{i}))\xi_{k}^{add}],$
(6) $\displaystyle R_{3}^{mult(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}(h^{\prime}(f(x_{i}))-y_{i})\mathbb{E}_{\boldsymbol{\xi}_{k}}[(\xi_{k}^{mult}\odot
g_{k}(x_{i}))^{T}\nabla_{k}^{2}f(g_{k}(x_{i}))(\xi_{k}^{mult}\odot
g_{k}(x_{i}))].$ (7)
Here, $R_{1}^{(k)}$, $R_{2}^{(k)}$ and $R_{3}^{(k)}$ are the regularizers
associated with the loss of manifold mixup (see Theorem 3 in SM for their
explicit expression), and $\varphi$ is some function such that
$\lim_{\epsilon\to 0}\varphi(\epsilon)=0$.
Theorem 1 implies that, when compared to manifold mixup, NFM introduces
additional smoothness, regularizing the directional derivatives,
$\nabla_{k}f(g_{k}(x_{i}))$ and $\nabla_{k}^{2}f(g_{k}(x_{i}))$, with respect
to $g_{k}(x_{i})$, according to the noise levels $\sigma_{add}$ and
$\sigma_{mult}$, and amplifying the regularizing effects of manifold mixup and
noise injection. In particular, making $\nabla^{2}f(x_{i})$ small can lead to
smooth decision boundaries (at the input level), while reducing the confidence
of model predictions. On the other hand, making $\nabla_{k}f(g_{k}(x_{i}))$
small can improve model robustness, which we discuss next.
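To illustrate, the additive-noise penalty $R_{2}^{add(k)}$ of Eq. (4) can be monitored during training; assuming a scalar-output head and isotropic Gaussian noise, so that $\mathbb{E}[\xi_{k}^{add}(\xi_{k}^{add})^{T}]=I$, it reduces (up to the $h^{\prime\prime}$ factor) to the squared feature-output Jacobian norm, as in the following sketch:

```python
import torch

def additive_noise_penalty(f_k, g):
    # R_2^{add(k)} up to the h'' factor, for isotropic Gaussian noise:
    # E[xi xi^T] = I, so the penalty is (1/2) ||nabla_k f(g_k(x))||_2^2,
    # averaged over the batch of features g.
    g = g.detach().clone().requires_grad_(True)
    out = f_k(g).sum()                      # scalar-output head assumed
    (jac,) = torch.autograd.grad(out, g, create_graph=True)
    return 0.5 * jac.pow(2).sum(dim=-1).mean()
```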
### 4.3 Robustness of NFM
We show that NFM improves model robustness. We do this by considering the
following three lenses: (1) implicit regularization and classification margin;
(2) distributionally robust optimization; and (3) a probabilistic notion of
robustness. We focus on (2) in the main paper. See Section C-D in SM and the
last paragraph in this subsection for details on (1) and (3).
We now demonstrate how NFM helps adversarial robustness. By extending the
analysis of [79, 41], we can relate the NFM loss function to the one used for
adversarial training, which can be viewed as an instance of distributionally
robust optimization (DRO) [40, 38, 55] (see also Proposition 3.1 in [62]). DRO
provides a framework for local worst-case risk minimization, minimizing
supremum of the risk in an ambiguity set, such as in the vicinity of the
empirical data distribution.
Following [41], we consider the binary cross-entropy loss, setting
$h(z)=\log(1+e^{z})$, with the labels $y$ taking value in $\\{0,1\\}$ and the
classifier model $f:\mathbb{R}^{d}\to\mathbb{R}$. In the following, we assume
that the model parameter
$\theta\in\Theta:=\\{\theta:y_{i}f(x_{i})+(y_{i}-1)f(x_{i})\geq 0\text{ for
all }i\in[n]\\}$; equivalently, $(2y_{i}-1)f(x_{i})\geq 0$ for all $i\in[n]$.
This set contains the set of all parameters that classify every training
sample correctly (before applying NFM), since
$\\{\theta:1_{\\{f(x_{i})\geq 0\\}}=y_{i}\text{ for all
}i\in[n]\\}\subset\Theta$. Since, in practice, the training error often
becomes zero in finite time, we study the effect of NFM on model robustness in
the regime $\theta\in\Theta$.
Working in the data-dependent parameter space $\Theta$, we have the following
result.
###### Theorem 2.
Let $\theta\in\Theta:=\\{\theta:y_{i}f(x_{i})+(y_{i}-1)f(x_{i})\geq 0\text{
for all }i\in[n]\\}$ such that $\nabla_{k}f(g_{k}(x_{i}))$ and
$\nabla_{k}^{2}f(g_{k}(x_{i}))$ exist for all $i\in[n]$, $k\in\mathcal{S}$.
Assume that $f_{k}(g_{k}(x_{i}))=\nabla_{k}f(g_{k}(x_{i}))^{T}g_{k}(x_{i})$,
$\nabla_{k}^{2}f(g_{k}(x_{i}))=0$ for all $i\in[n]$, $k\in\mathcal{S}$. In
addition, suppose that $\|\nabla f(x_{i})\|_{2}>0$ for all $i\in[n]$,
$\mathbb{E}_{r\sim\mathcal{D}_{x}}[g_{k}(r)]=0$ and $\|g_{k}(x_{i})\|_{2}\geq
c_{x}^{(k)}\sqrt{d_{k}}$ for all $i\in[n]$, $k\in\mathcal{S}$. Then,
$\displaystyle
L_{n}^{NFM}\geq\frac{1}{n}\sum_{i=1}^{n}\max_{\|\delta_{i}\|_{2}\leq\epsilon_{i}^{mix}}l(f(x_{i}+\delta_{i}),y_{i})+L_{n}^{reg}+\epsilon^{2}\phi(\epsilon),$
(8)
where
$\displaystyle\epsilon_{i}^{mix}$
$\displaystyle:=\epsilon\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]\cdot\mathbb{E}_{k\sim\mathcal{S}}\left[r_{i}^{(k)}c_{x}^{(k)}\frac{\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}}{\|\nabla
f(x_{i})\|_{2}}\sqrt{d_{k}}\right]$ (9)
and
$L_{n}^{reg}:=\frac{1}{2n}\sum_{i=1}^{n}|h^{\prime\prime}(f(x_{i}))|(\epsilon_{i}^{reg})^{2},$
(10)
with $r_{i}^{(k)}:=|\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{i}))|$ and
$\displaystyle(\epsilon_{i}^{reg})^{2}$
$\displaystyle:=\epsilon^{2}\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}\bigg{(}\mathbb{E}_{\lambda}[(1-\lambda)]^{2}\mathbb{E}_{x_{r}}[\|g_{k}(x_{r})\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{r}))^{2}]$
$\displaystyle\quad+\sigma_{add}^{2}\mathbb{E}_{\boldsymbol{\xi}_{k}}[\|\xi_{k}^{add}\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),\xi_{k}^{add})^{2}]$
$\displaystyle\quad+\sigma_{mult}^{2}\mathbb{E}_{\boldsymbol{\xi}_{k}}[\|\xi_{k}^{mult}\odot
g_{k}(x_{i})\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),\xi_{k}^{mult}\odot
g_{k}(x_{i}))^{2}]\bigg{)},$ (11)
and $\phi$ is some function such that $\lim_{\epsilon\to 0}\phi(\epsilon)=0$.
The second assumption stated in Theorem 2 is similar to the one made in [41,
80], and is satisfied by linear models and deep neural networks with ReLU
activation function and max-pooling. Theorem 2 shows that the NFM loss is
approximately an upper bound of the adversarial loss with $l_{2}$ attack of
size $\epsilon^{mix}=\min_{i\in[n]}\epsilon^{mix}_{i}$, plus a feature-
dependent regularization term $L_{n}^{reg}$. Therefore, minimizing the NFM
loss not only results in a small adversarial loss, retaining the robustness
benefits of manifold mixup, but also imposes additional smoothness (due to
noise injection) on the adversarial loss. The latter can help mitigate robust
overfitting and improve test performance [57, 56]. This
offers a plausible explanation for the remarkable performance of NFM (see next
section).
NFM can also implicitly increase the classification margin (see Section C of
SM). Moreover, since the main novelty of NFM lies in the introduction of noise
injection, it would be insightful to isolate the robustness boosting benefits
of injecting noise on top of manifold mixup. We demonstrate these advantages
via the lens of probabilistic robustness in Section D of SM.
## 5 Empirical Results
In this section, we study the test performance of models trained with NFM, and
examine to what extent NFM can improve robustness to input perturbations. We
demonstrate the tradeoff between predictive accuracy on clean and perturbed
test sets. We consider input perturbations that are common in the literature:
(a) white noise; (b) salt and pepper; and (c) adversarial perturbations (see
Subsection F.1 for details).
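For reference, the first two corruptions can take the following form (a sketch under assumed standard definitions; the exact settings used in our experiments are in Subsection F.1):

```python
import torch

def white_noise(x, sigma):
    # Additive Gaussian "white noise" corruption at level sigma (assumed form).
    return x + sigma * torch.randn_like(x)

def salt_and_pepper(x, gamma):
    # Set a random fraction gamma of entries to min/max values (assumed form).
    u = torch.rand_like(x)
    x = torch.where(u < gamma / 2, x.min(), x)
    x = torch.where(u > 1 - gamma / 2, x.max(), x)
    return x
```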
We evaluate the average performance of NFM with different model architectures
on CIFAR-10 [36], CIFAR-100 [36], and ImageNet [13]. We use a pre-activated
residual network (ResNet) with depth 18 [29] and a compact vision transformer
(ViT-lite) with 7 attention layers and 4 heads [28] on small scale tasks. For
more challenging and higher dimensional tasks, we consider the performance of
wide ResNet-18 [77] and ResNet-50 architectures, respectively.
Baselines. We evaluate against related data augmentation schemes that have
shown performance improvements in recent years: mixup [79]; manifold mixup
[68]; and noisy mixup [74]. Further, we compare to both vanilla models trained
without data augmentation (baseline) and those trained on white noise
perturbed inputs.
Experimental details. All hyperparameters are consistent with those of the
baseline model across the ablation experiments. In the models trained on the
different data augmentation schemes, we vary only $\alpha$, i.e., the
parameter defining $Beta(\alpha,\alpha)$, from which the $\lambda$ parameter
controlling the convex combination between data point pairs is sampled.
Accordingly, we compare all methods, varying $\alpha$ according to previous
works [68, 74]. Across all models trained with NFM, we control the level of
noise injections by fixing the additive noise level to $\sigma_{add}=0.4$ and
multiplicative noise level to $\sigma_{mult}=0.2$. To demonstrate the
significant robustness improvements brought by these small perturbations, we
also compare against an ablation model (‘*’) trained with higher noise levels
(i.e., $\sigma_{add}=1.0$, $\sigma_{mult}=0.5$). See
SM (Section F.4) for further details and comparisons against NFM models
trained on various other levels of noise injections.
Research code is provided here: https://github.com/erichson/noisy_mixup.
[Figure 3 panels: test accuracy vs. white noise level ($\sigma$) and vs. salt
and pepper noise level ($\gamma$).]
Figure 3: Pre-activated ResNet-18 evaluated on CIFAR-10 with different
training schemes. Shaded regions indicate one standard deviation about the
mean. Averaged across 5 random seeds.
Table 1: Robustness of ResNet-18 w.r.t. white noise ($\sigma$) and salt and
pepper ($\gamma$) perturbations evaluated on CIFAR-10. The results are
averaged over 5 models trained with different seed values.
Scheme | Clean (%) | $\sigma=0.1$ | $\sigma=0.2$ | $\sigma=0.3$ | $\gamma=0.02$ | $\gamma=0.04$ | $\gamma=0.1$
---|---|---|---|---|---|---|---
Baseline | 94.6 | 90.4 | 76.7 | 56.3 | 86.3 | 76.1 | 55.2
Baseline + Noise | 94.4 | 94.0 | 87.5 | 71.2 | 89.3 | 82.5 | 64.9
Mixup ($\alpha=1.0$) [79] | 95.6 | 93.2 | 85.4 | 71.8 | 87.1 | 76.1 | 55.2
Noisy Mixup ($\alpha=1.0$) [74] | 78.9 | 78.6 | 66.6 | 46.7 | 66.6 | 53.4 | 25.9
Manifold Mixup ($\alpha=0.2$) [68] | 95.5 | 93.0 | 82.5 | 65.5 | 87.5 | 77.1 | 53.9
Manifold Mixup ($\alpha=1.0$) [68] | 95.7 | 92.7 | 82.7 | 67.6 | 88.9 | 80.2 | 57.6
Manifold Mixup ($\alpha=2.0$) [68] | 95.6 | 92.6 | 81.8 | 64.5 | 89.2 | 80.7 | 58.2
Noisy Feature Mixup ($\alpha=0.2$) | 95.4 | 94.7 | 90.2 | 78.7 | 92.2 | 88.2 | 74.4
Noisy Feature Mixup ($\alpha=1.0$) | 95.4 | 95.0 | 91.6 | 83.0 | 91.9 | 87.4 | 73.3
Noisy Feature Mixup ($\alpha=2.0$) | 95.3 | 94.9 | 91.3 | 82.4 | 91.3 | 85.9 | 69.7
### 5.1 CIFAR-10
Pre-activated ResNet-18. Table 1 summarizes the performance improvements and
indicates a consistent robustness across different $\alpha$ values. The model
trained with NFM outperforms the baseline model on the clean test set, while
being more robust to input perturbations (Fig. 3; left). This advantage is
also displayed in the models trained with mixup and manifold mixup, though in
a less pronounced way. Notably, the NFM model is also robust to salt and
pepper perturbations and could be significantly more so by further increasing
the noise levels (Fig. 3; right).
Vision Transformer (ViT-7/4). Fig. 4 (left) compares vision transformers
trained with different data augmentation strategies. Again, NFM improves the
robustness of the models while achieving state-of-the-art accuracy when
evaluated on clean data. However, mixup and manifold mixup do not boost the
robustness. Further, Fig. 4 (right) shows that the vision transformer is
less sensitive to salt and pepper perturbations as compared to the ResNet
model. These results are consistent with the high robustness properties of
transformers recently reported in [59, 50]. Table 4 provides additional
results for different $\alpha$ values.
[Figure 4 panels: test accuracy vs. white noise level ($\sigma$) and vs. salt
and pepper noise level ($\gamma$).]
Figure 4: Vision transformers evaluated on CIFAR-10 with different training
schemes.
[Figure 5 panels: test accuracy vs. white noise level ($\sigma$) and vs. salt
and pepper noise level ($\gamma$).]
Figure 5: Wide ResNets evaluated on CIFAR-100. Averaged across 5 random seeds.
### 5.2 CIFAR-100
Wide ResNet-18. Previous work indicates that data augmentation has a positive
effect on performance for this dataset [79]. Fig. 5 (left) confirms that mixup
and manifold mixup improve the generalization performance on clean data and
highlights the advantage of data augmentation. The NFM training scheme is also
capable of further improving the generalization performance. In addition, we
see that the model trained with NFM is less sensitive to both white noise and
salt and pepper perturbations. These results are surprising, as robustness is
often thought to be at odds with accuracy [65]. However, we demonstrate that
NFM can improve both accuracy and robustness. Table 2 indicates
that for the same $\alpha$, NFM can achieve an average test accuracy of
$80.9\%$ compared to only $80.3\%$ in the mixup setting.
### 5.3 ImageNet
ResNet-50. Table 3 similarly shows that NFM improves both the generalization
and robustness capacities with respect to data perturbations. Although less
pronounced in comparison to previous datasets, NFM shows a favorable trade-off
without requiring additional computational resources. Note that due to
computational costs, we do not average across multiple seeds and only compare
NFM to the baseline and manifold mixup models.
Table 2: Robustness of Wide-ResNet-18 w.r.t. white noise ($\sigma$) and salt
and pepper ($\gamma$) perturbations evaluated on CIFAR-100. The results are
averaged over 5 models trained with different seed values.
Scheme | Clean (%) | $\sigma=0.1$ | $\sigma=0.2$ | $\sigma=0.3$ | $\gamma=0.02$ | $\gamma=0.04$ | $\gamma=0.1$
---|---|---|---|---|---|---|---
Baseline | 76.9 | 64.6 | 42.0 | 23.5 | 58.1 | 39.8 | 15.1
Baseline + Noise | 76.1 | 75.2 | 60.5 | 37.6 | 64.9 | 51.3 | 23.0
Mixup ($\alpha=1.0$) [79] | 80.3 | 72.5 | 54.0 | 33.4 | 62.5 | 43.8 | 16.2
Noisy Mixup ($\alpha=1.0$) [74] | 78.9 | 78.6 | 66.6 | 46.7 | 66.6 | 53.4 | 25.9
Manifold Mixup ($\alpha=0.2$) [68] | 79.7 | 70.6 | 46.6 | 25.3 | 62.1 | 43.0 | 15.2
Manifold Mixup ($\alpha=1.0$) [68] | 79.7 | 70.5 | 45.0 | 23.8 | 62.1 | 42.8 | 14.8
Manifold Mixup ($\alpha=2.0$) [68] | 79.2 | 69.3 | 43.8 | 23.0 | 62.8 | 44.2 | 16.0
Noisy Feature Mixup ($\alpha=0.2$) | 80.6 | 79.2 | 70.2 | 51.7 | 71.5 | 60.4 | 30.3
Noisy Feature Mixup ($\alpha=1.0$) | 80.9 | 80.1 | 72.1 | 55.3 | 72.8 | 62.1 | 34.4
Noisy Feature Mixup ($\alpha=2.0$) | 80.7 | 80.0 | 71.5 | 53.9 | 72.7 | 62.7 | 36.6
Table 3: Robustness of ResNet-50 w.r.t. white noise ($\sigma$) and salt and
pepper ($\gamma$) perturbations evaluated on ImageNet. Here, the NFM training
scheme improves both the predictive accuracy on clean data and robustness with
respect to data perturbations.
Scheme | Clean (%) | $\sigma=0.1$ | $\sigma=0.25$ | $\sigma=0.5$ | $\gamma=0.06$ | $\gamma=0.1$ | $\gamma=0.15$
---|---|---|---|---|---|---|---
Baseline | 76.0 | 73.5 | 67.0 | 50.1 | 53.2 | 50.4 | 45.0
Manifold Mixup ($\alpha=0.2$) [68] | 76.7 | 74.9 | 70.3 | 57.5 | 58.1 | 54.6 | 49.5
Noisy Feature Mixup ($\alpha=0.2$) | 77.0 | 76.5 | 72.0 | 60.1 | 58.3 | 56.0 | 52.3
Noisy Feature Mixup ($\alpha=1.0$) | 76.8 | 76.2 | 71.7 | 60.0 | 60.9 | 58.8 | 54.4
### 5.4 Robustness to Adversarial Examples
So far we have only considered white noise and salt and pepper perturbations.
We further consider adversarial perturbations. Here, we use projected gradient
descent (PGD) [44] with $7$ iterations and various $\epsilon$ levels to construct the
adversarial perturbations. Fig. 6 highlights the improved resilience of
ResNets trained with NFM to adversarial input perturbations. Fig. 6 shows
results for CIFAR-10 (left) and CIFAR-100 (right). Models trained with both
mixup and manifold mixup do not show a substantially increased resilience to
adversarial perturbations.
In Section F.5, we compare NFM to adversarially trained models. There we see
that adversarially trained models are indeed more robust
to adversarial attacks, while at the same time being less accurate on clean
data. However, models trained with NFM show an advantage compared to
adversarially trained models when faced with salt and pepper perturbations.
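For reference, a basic $l_{\infty}$ PGD attack with $7$ iterations can be sketched as follows (our illustration; the exact configuration, e.g., norm, step size, and pixel-range clipping, is given in Subsection F.1 and may differ):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, step=None, iters=7):
    # l_inf projected gradient descent: ascend the loss, then project the
    # perturbation back onto the eps-ball (pixel-range clipping omitted).
    step = eps / 4 if step is None else step
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # gradient ascent on the loss
            delta.clamp_(-eps, eps)             # projection onto the eps-ball
            delta.grad.zero_()
    return (x + delta).detach()
```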
[Figure 6 panels: test accuracy vs. adversarial noise level ($\epsilon$) on
CIFAR-10 (left) and CIFAR-100 (right).]
Figure 6: Pre-activated ResNet-18 evaluated on CIFAR-10 (left) and Wide
ResNet-18 evaluated on CIFAR-100 (right) with respect to adversarially
perturbed inputs. Shaded regions indicate one standard deviation about the
mean. Averaged across 5 random seeds.
## 6 Conclusion
We introduce Noisy Feature Mixup, an effective data augmentation method that
combines mixup and noise injection. We identify the implicit regularization
effects of NFM, showing that the effects are amplifications of those of
manifold mixup and noise injection. Moreover, we demonstrate the benefits of
NFM in terms of superior model robustness, both theoretically and
experimentally. Our work inspires a range of interesting future directions,
including theoretical investigations of the trade-offs between accuracy and
robustness for NFM and applications of NFM beyond computer vision tasks.
Further, it will be interesting to study whether NFM may also lead to better
model calibration by extending the analysis of [64, 81].
## Acknowledgements
S. H. Lim would like to acknowledge the WINQ Fellowship and the Knut and Alice
Wallenberg Foundation for providing support of this work. N. B. Erichson and
M. W. Mahoney would like to acknowledge IARPA (contract W911NF20C0035), NSF,
and ONR for providing partial support of this work. Our conclusions do not
necessarily reflect the position or the policy of our sponsors, and no
official endorsement should be inferred. We are also grateful for the generous
support from Amazon AWS.
## References
* [1] Guozhong An. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, 8(3):643–674, 1996.
* [2] Henry S Baird. Document image defect models. In Structured Document Image Analysis, pages 546–556. Springer, 1992.
* [3] Chris M Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.
* [4] Olivier Bousquet, Stéphane Boucheron, and Gábor Lugosi. Introduction to statistical learning theory. In Summer School on Machine Learning, pages 169–207. Springer, 2003.
* [5] Alexander Camuto, Matthew Willetts, Umut Şimşekli, Stephen Roberts, and Chris Holmes. Explicit regularisation in Gaussian noise injections. arXiv preprint arXiv:2007.07368, 2020.
* [6] Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, and Jean-Philippe Vert. On mixup regularization. arXiv preprint arXiv:2006.06049, 2020.
* [7] Olivier Chapelle, Jason Weston, Léon Bottou, and Vladimir Vapnik. Vicinal risk minimization. Advances in Neural Information Processing Systems, pages 416–422, 2001.
* [8] Shuxiao Chen, Edgar Dobriban, and Jane H Lee. A group-theoretic framework for data augmentation. Journal of Machine Learning Research, 21(245):1–71, 2020.
* [9] Dan Claudiu Cireşan, Ueli Meier, Luca Maria Gambardella, and Jürgen Schmidhuber. Deep, big, simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207–3220, 2010.
* [10] Nicolas Couellan. Probabilistic robustness estimates for feed-forward neural networks. Neural Networks, 142:138–147, 2021.
* [11] Tri Dao, Albert Gu, Alexander Ratner, Virginia Smith, Chris De Sa, and Christopher Ré. A kernel theory of modern data augmentation. In International Conference on Machine Learning, pages 1528–1537. PMLR, 2019.
* [12] Dennis DeCoste and Bernhard Schölkopf. Training invariant support vector machines. Machine Learning, 46(1):161–190, 2002.
* [13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. Ieee, 2009.
* [14] Luc Devroye, Abbas Mehrabian, and Tommy Reddad. The total variation distance between high-dimensional Gaussians. arXiv preprint arXiv:1810.08693, 2018.
* [15] Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3-4):211–407, 2014.
* [16] Gintare Karolina Dziugaite, Alexandre Drouin, Brady Neal, Nitarshan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, and Daniel M Roy. In search of robust measures of generalization. arXiv preprint arXiv:2010.11924, 2020.
* [17] Gamaleldin F Elsayed, Dilip Krishnan, Hossein Mobahi, Kevin Regan, and Samy Bengio. Large margin deep networks for classification. arXiv preprint arXiv:1803.05598, 2018.
* [18] Logan Engstrom, Justin Gilmer, Gabriel Goh, Dan Hendrycks, Andrew Ilyas, Aleksander Madry, Reiichiro Nakano, Preetum Nakkiran, Shibani Santurkar, Brandon Tran, Dimitris Tsipras, and Eric Wallace. A discussion of ’adversarial examples are not bugs, they are features’. Distill, 2019.
* [19] Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Aleksander Madry. Adversarial robustness as a prior for learned representations. arXiv preprint arXiv:1906.00945, 2020.
* [20] Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. arXiv preprint arXiv:1608.08967, 2016.
* [21] Jean-Yves Franceschi, Alhussein Fawzi, and Omar Fawzi. Robustness of classifiers to uniform $l_{p}$ and Gaussian noise. In International Conference on Artificial Intelligence and Statistics, pages 1280–1288. PMLR, 2018.
* [22] Alison L Gibbs and Francis Edward Su. On choosing and bounding probability metrics. International Statistical Review, 70(3):419–435, 2002.
* [23] Chengyue Gong, Tongzheng Ren, Mao Ye, and Qiang Liu. Maxup: A simple way to improve generalization of neural network training. arXiv preprint arXiv:2002.09024, 2020.
* [24] Ian Goodfellow, Patrick McDaniel, and Nicolas Papernot. Making machine learning robust against adversarial inputs. Communications of the ACM, 61(7):56–66, 2018.
* [25] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
* [26] Kristjan Greenewald, Anming Gu, Mikhail Yurochkin, Justin Solomon, and Edward Chien. k-mixup regularization for deep learning via optimal transport. arXiv preprint arXiv:2106.02933, 2021.
* [27] Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. In International Conference on Machine Learning, pages 3059–3068. PMLR, 2016.
* [28] Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen Li, and Humphrey Shi. Escaping the big data paradigm with compact transformers. arXiv preprint arXiv:2104.05704, 2021.
* [29] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
* [30] Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. arXiv preprint arXiv:1705.08475, 2017.
* [31] Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. arXiv preprint arXiv:1912.02781, 2019.
* [32] Judy Hoffman, Daniel A Roberts, and Sho Yaida. Robust learning with Jacobian regularization. arXiv preprint arXiv:1908.02729, 2019.
* [33] Yiding Jiang, Dilip Krishnan, Hossein Mobahi, and Samy Bengio. Predicting the generalization gap in deep networks with margin distributions. arXiv preprint arXiv:1810.00113, 2018.
* [34] Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In International Conference on Machine Learning, pages 5275–5285. PMLR, 2020.
* [35] Masanari Kimura. Mixup training as the complexity reduction. arXiv preprint arXiv:2006.06231, 2020.
* [36] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical Report, 2009.
* [37] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097–1105, 2012.
* [38] Daniel Kuhn, Peyman Mohajerin Esfahani, Viet Anh Nguyen, and Soroosh Shafieezadeh-Abadeh. Wasserstein distributionally robust optimization: Theory and applications in machine learning. In Operations Research & Management Science in the Age of Analytics, pages 130–166. INFORMS, 2019.
* [39] Alexey Kurakin, Ian Goodfellow, Samy Bengio, et al. Adversarial examples in the physical world, 2016.
* [40] Yongchan Kwon, Wonyoung Kim, Joong-Ho Won, and Myunghee Cho Paik. Principled learning method for Wasserstein distributionally robust optimization with local perturbations. In International Conference on Machine Learning, pages 5567–5576. PMLR, 2020.
* [41] Alex Lamb, Vikas Verma, Juho Kannala, and Yoshua Bengio. Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, pages 95–103, 2019.
* [42] Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), pages 656–672. IEEE, 2019.
* [43] Soon Hoe Lim, N Benjamin Erichson, Liam Hodgkinson, and Michael W Mahoney. Noisy recurrent neural networks. arXiv preprint arXiv:2102.04877, 2021.
* [44] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
* [45] M. W. Mahoney. Approximate computation and implicit regularization for very large-scale data analysis. In Proceedings of the 31st ACM Symposium on Principles of Database Systems, pages 143–154, 2012.
* [46] M. W. Mahoney and L. Orecchia. Implementing regularization implicitly via approximate eigenvector computation. In Proceedings of the 28th International Conference on Machine Learning, pages 121–128, 2011.
* [47] Yifei Min, Lin Chen, and Amin Karbasi. The curious case of adversarially robust models: More data can help, double descend, or hurt generalization. arXiv preprint arXiv:2002.11080, 2020.
* [48] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, and Pascal Frossard. Robustness via curvature regularization, and vice versa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9078–9086, 2019.
* [49] Roman Novak, Yasaman Bahri, Daniel A Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Sensitivity and generalization in neural networks: an empirical study. arXiv preprint arXiv:1802.08760, 2018.
* [50] Sayak Paul and Pin-Yu Chen. Vision transformers are robust learners. arXiv preprint arXiv:2105.07581, 2021.
* [51] Rafael Pinot, Laurent Meunier, Alexandre Araujo, Hisashi Kashima, Florian Yger, Cédric Gouy-Pailler, and Jamal Atif. Theoretical evidence for adversarial robustness through randomization. arXiv preprint arXiv:1902.01148, 2019.
* [52] Rafael Pinot, Laurent Meunier, Florian Yger, Cédric Gouy-Pailler, Yann Chevaleyre, and Jamal Atif. On the robustness of randomized classifiers to adversarial examples. arXiv preprint arXiv:2102.10875, 2021.
* [53] Rafael Pinot, Florian Yger, Cédric Gouy-Pailler, and Jamal Atif. A unified view on differential privacy and robustness to adversarial examples. arXiv preprint arXiv:1906.07982, 2019.
* [54] Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, and Percy Liang. Understanding and mitigating the tradeoff between robustness and accuracy. arXiv preprint arXiv:2002.10716, 2020.
* [55] Hamed Rahimian and Sanjay Mehrotra. Distributionally robust optimization: A review. arXiv preprint arXiv:1908.05659, 2019.
* [56] Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A Calian, Florian Stimberg, Olivia Wiles, and Timothy Mann. Fixing data augmentation to improve adversarial robustness. arXiv preprint arXiv:2103.01946, 2021.
* [57] Leslie Rice, Eric Wong, and Zico Kolter. Overfitting in adversarially robust deep learning. In International Conference on Machine Learning, pages 8093–8104. PMLR, 2020.
* [58] Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. arXiv preprint arXiv:1804.11285, 2018.
* [59] Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of visual transformers. arXiv preprint arXiv:2103.15670, 2021.
* [60] Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1–48, 2019.
* [61] Jure Sokolić, Raja Giryes, Guillermo Sapiro, and Miguel RD Rodrigues. Robust large margin deep neural networks. IEEE Transactions on Signal Processing, 65(16):4265–4280, 2017.
* [62] Matthew Staib and Stefanie Jegelka. Distributionally robust deep learning as a generalization of adversarial training. In NIPS workshop on Machine Learning and Computer Security, volume 3, page 4, 2017.
* [63] Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models. In Proceedings of the European Conference on Computer Vision (ECCV), pages 631–648, 2018.
* [64] Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. arXiv preprint arXiv:1905.11001, 2019.
* [65] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152, 2018.
* [66] Francisco Utrera, Evan Kravitz, N Benjamin Erichson, Rajiv Khanna, and Michael W Mahoney. Adversarially-trained deep nets transfer better. arXiv preprint arXiv:2007.05869, 2020.
* [67] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer Science & Business Media, 2013.
* [68] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning, pages 6438–6447. PMLR, 2019.
* [69] Colin Wei and Tengyu Ma. Data-dependent sample complexity of deep neural networks via Lipschitz augmentation. arXiv preprint arXiv:1905.03684, 2019.
* [70] Colin Wei and Tengyu Ma. Improved sample complexities for deep networks and robust classification via an all-layer margin. arXiv preprint arXiv:1910.04284, 2019.
* [71] Sen Wu, Hongyang Zhang, Gregory Valiant, and Christopher Ré. On the generalization effects of linear transformations in data augmentation. In International Conference on Machine Learning, pages 10410–10420. PMLR, 2020.
* [72] Huan Xu, Constantine Caramanis, and Shie Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10(7), 2009.
* [73] Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Ruslan Salakhutdinov, and Kamalika Chaudhuri. A closer look at accuracy vs. robustness. arXiv preprint arXiv:2003.02460, 2020.
* [74] Yaoqing Yang, Rajiv Khanna, Yaodong Yu, Amir Gholami, Kurt Keutzer, Joseph E Gonzalez, Kannan Ramchandran, and Michael W Mahoney. Boundary thickness and robustness in learning models. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 6223–6234, 2020.
* [75] Wenpeng Yin, Huan Wang, Jin Qu, and Caiming Xiong. BatchMixup: Improving training by interpolating hidden states of the entire mini-batch. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4908–4912, 2021.
* [76] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6023–6032, 2019.
* [77] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
* [78] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pages 7472–7482. PMLR, 2019.
* [79] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
* [80] Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, and James Zou. How does mixup help with robustness and generalization? arXiv preprint arXiv:2010.04819, 2020.
* [81] Linjun Zhang, Zhun Deng, Kenji Kawaguchi, and James Zou. When and how mixup improves calibration. arXiv preprint arXiv:2102.06289, 2021.
Supplementary Material (SM) for “Noisy Feature Mixup”
Organizational Details. This SM is organized as follows.
* •
In Section A, we study the regularizing effects of NFM within the vicinal risk
minimization framework, relating the effects to those of mixup and noise
injection.
* •
In Section B, we restate the results presented in the main paper and provide
their proof.
* •
In Section C, we study robustness of NFM through the lens of implicit
regularization, showing that NFM can implicitly increase the classification
margin.
* •
In Section D, we study robustness of NFM via the lens of probabilistic
robustness, showing that noise injection can improve robustness on top of
manifold mixup, while the maximal loss in accuracy incurred under attack can
be controlled by tuning the noise levels.
* •
In Section E, we provide results on generalization bounds for NFM and their
proofs, identifying the mechanisms by which NFM can lead to improved
generalization bounds.
* •
In Section F, we provide additional experimental results and their details.
We recall the notation that we use in the main paper as well as this SM.
Notation. $I$ denotes the identity matrix, $[K]:=\\{1,\dots,K\\}$, the
superscript T denotes transposition, $\circ$ denotes composition, $\odot$
denotes the Hadamard product, and $\mathbb{1}$ denotes the vector with all
components equal to one. For a vector $v$, $v^{k}$ denotes its $k$th component
and $\|v\|_{p}$ denotes its $l_{p}$ norm for $p>0$. $conv(\mathcal{X})$
denotes the convex hull of $\mathcal{X}$. $M_{\lambda}(a,b):=\lambda
a+(1-\lambda)b$ for random variables $a,b,\lambda$. $\delta_{z}$ denotes the
Dirac delta function, defined as $\delta_{z}(x)=1$ if $x=z$ and
$\delta_{z}(x)=0$ otherwise. $\mathbb{1}_{A}$ denotes the indicator function
of the set $A$. For $\alpha,\beta>0$,
$\tilde{\mathcal{D}}_{\lambda}:=\frac{\alpha}{\alpha+\beta}Beta(\alpha+1,\beta)+\frac{\beta}{\alpha+\beta}Beta(\beta+1,\alpha)$
denotes a mixture of two Beta distributions. For two vectors $a,b$,
$\cos(a,b):=\langle a,b\rangle/\|a\|_{2}\|b\|_{2}$ denotes their cosine
similarity. $\mathcal{N}(a,b)$ denotes the Gaussian distribution with mean $a$
and covariance $b$.
## Appendix A NFM Through the Lens of Vicinal Risk Minimization
In this section, we shall show that NFM can be constructed within a vicinal
risk minimization (VRM) framework at the level of both input and hidden layer
representations.
To begin with, we define a class of vicinal distributions and then relate NFM
to such distributions.
###### Definition 1 (Randomly perturbed feature distribution).
Let $\mathcal{Z}_{n}=\\{z_{1},\dots,z_{n}\\}$ be a feature set. We say that
$\mathbb{P}_{n}^{\prime}$ is an $e_{i}$-randomly perturbed feature
distribution if there exists a set $\\{z_{1}^{\prime},\dots,z_{n}^{\prime}\\}$
such that
$\mathbb{P}_{n}^{\prime}=\frac{1}{n}\sum_{i=1}^{n}\delta_{z_{i}^{\prime}}$,
with $z_{i}^{\prime}=z_{i}+e_{i}$, for some random variable $e_{i}$ (possibly
dependent on $\mathcal{Z}_{n}$) drawn from a probability distribution.
Note that the support of an $e_{i}$-randomly perturbed feature distribution
may be larger than that of the empirical distribution $\mathbb{P}_{n}$ of
$\mathcal{Z}_{n}$.
If $\mathcal{Z}_{n}$ is an input dataset and the $e_{i}$ are bounded variables
such that $\|e_{i}\|\leq\beta$ for some $\beta\geq 0$, then
$\mathbb{P}_{n}^{\prime}$ is a $\beta$-locally perturbed data distribution
according to Definition 2 in [40]. Examples of $\beta$-locally perturbed data
distributions include those associated with denoising autoencoders, input
mixup, and adversarial training (see Examples 1-3 in [40]). Definition 1 can
be viewed as an extension of the definition in [40], relaxing the boundedness
condition on the $e_{i}$ to cover a wider family of perturbed feature
distributions. One
simple example is the Gaussian distribution, i.e., when
$e_{i}\sim\mathcal{N}(0,\sigma_{i}^{2})$, which models Gaussian noise
injection into the features. Another example is the distribution associated
with NFM, which we now discuss.
To keep the randomly perturbed distribution close to the original
distribution, the amplitude of the perturbation should be small. In the
sequel, we let $\epsilon>0$ be a small parameter and rescale
$1-\lambda\mapsto\epsilon(1-\lambda)$,
$\sigma_{add}\mapsto\epsilon\sigma_{add}$ and
$\sigma_{mult}\mapsto\epsilon\sigma_{mult}$.
Let $\mathcal{F}_{k}$ be the family of mappings from $g_{k}(\mathcal{X})$ to
$\mathcal{Y}$ and consider the VRM:
$\inf_{f_{k}\in\mathcal{F}_{k}}\mathcal{R}_{n}(f_{k}):=\mathbb{E}_{(g^{\prime}_{k}(x),y^{\prime})\sim\mathbb{P}^{(k)}_{n}}[l(f_{k}(g^{\prime}_{k}(x)),y^{\prime})],$
(12)
where
$\mathbb{P}^{(k)}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{(g_{k}^{\prime}(x_{i}),y_{i}^{\prime})}$,
with $g_{k}^{\prime}(x_{i})=g_{k}(x_{i})+\epsilon e_{i}^{NFM(k)}$ and
$y_{i}^{\prime}=y_{i}+\epsilon e_{i}^{y}$, for some random variables
$e_{i}^{NFM(k)}$ and $e_{i}^{y}$.
In NFM, we approximate the ground-truth distribution $\mathcal{D}$ using the
family of distributions $\\{\mathbb{P}^{(k)}_{n}\\}_{k\in\mathcal{S}}$, with a
particular choice of $(e_{i}^{NFM(k)},e_{i}^{y})$. In the sequel, we denote
NFM at the level of the $k$th layer by $NFM(k)$ (i.e., the particular case
when $\mathcal{S}:=\\{k\\}$).
The following lemma identifies the $(e_{i}^{NFM(k)},e_{i}^{y})$ associated
with $NFM(k)$ and relates the effects of $NFM(k)$ to those of mixup and noise
injection, for any perturbation level $\epsilon>0.$
###### Lemma 1.
Let $\epsilon>0$ and denote $z_{i}(k):=g_{k}(x_{i})$. Learning the neural
network map $f$ using $NFM(k)$ is a VRM with the $(\epsilon
e_{i}^{NFM(k)},\epsilon e_{i}^{y})$-randomly perturbed feature distribution,
$\mathbb{P}_{n}^{(k)}=\frac{1}{n}\sum_{i=1}^{n}\delta_{(z_{i}^{\prime}(k),y_{i}^{\prime})}$,
with $z_{i}^{\prime}(k):=z_{i}(k)+\epsilon e_{i}^{NFM(k)}$,
$y_{i}^{\prime}:=y_{i}+\epsilon e_{i}^{y}$, as the vicinal distribution. Here,
$e_{i}^{y}=(1-\lambda)(\tilde{y}_{i}-y_{i})$,
$e^{NFM(k)}_{i}=(\mathbb{1}+\epsilon\sigma_{mult}\xi_{mult})\odot
e_{i}^{mixup(k)}+e_{i}^{noise(k)},$ (13)
where $e_{i}^{mixup(k)}=(1-\lambda)(\tilde{z}_{i}(k)-z_{i}(k))$ and
$e_{i}^{noise(k)}=\sigma_{mult}\xi_{mult}\odot
z_{i}(k)+\sigma_{add}\xi_{add}$, with $z_{i}(k),\tilde{z}_{i}(k)\in
g_{k}(\mathcal{X})$, $\lambda\sim Beta(\alpha,\beta)$, and
$y_{i},\tilde{y}_{i}\in\mathcal{Y}$. Here, $(\tilde{z}_{i}(k),\tilde{y}_{i})$
are drawn randomly from the training set.
Therefore, the random perturbation associated with NFM is data-dependent, and it
consists of a randomly weighted sum of that from injecting noise into the
feature and that from mixing pairs of feature samples. As a simple example,
one can take $\xi_{add},\xi_{mult}$ to be independent standard Gaussian random
variables, in which case we have
$e_{i}^{noise(k)}\sim\mathcal{N}(0,\sigma_{add}^{2}I+\sigma_{mult}^{2}diag(z_{i}(k))^{2})$
and, conditionally on $\lambda$ and the mixed pair, the noise component of
$e_{i}^{NFM(k)}$ is distributed as
$\mathcal{N}(0,\sigma_{add}^{2}I+\sigma_{mult}^{2}diag(M_{\lambda}(z_{i}(k),\tilde{z}_{i}(k)))^{2})$
in Lemma 1.
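The decomposition in Eq. (13) can be checked numerically, as in the following self-contained sketch (Gaussian noise and an arbitrary feature dimension assumed):

```python
import torch

torch.manual_seed(0)
z, z_t = torch.randn(8), torch.randn(8)           # z_i(k) and tilde z_i(k)
lam = torch.distributions.Beta(1.0, 1.0).sample()
s_add, s_mult = 0.4, 0.2
xi_a, xi_m = torch.randn(8), torch.randn(8)       # Gaussian noise (assumed)

# Direct NFM output: noise injected on top of the mixed sample.
direct = (1 + s_mult * xi_m) * (lam * z + (1 - lam) * z_t) + s_add * xi_a

# Decomposition of Eq. (13): e^{NFM} = (1 + s_mult xi_m) * e^{mixup} + e^{noise}.
e_mix = (1 - lam) * (z_t - z)
e_noise = s_mult * xi_m * z + s_add * xi_a
assert torch.allclose(direct, z + (1 + s_mult * xi_m) * e_mix + e_noise, atol=1e-5)
```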
We now prove Lemma 1.
###### Proof of Lemma 1.
Let $k$ be given and set $\epsilon=1$ without loss of generality. For every
$i\in[n]$, $NFM(k)$ injects noise on top of a mixed sample $z_{i}^{\prime}(k)$
and outputs:
$\displaystyle z_{i}^{\prime\prime}(k)$
$\displaystyle=(\mathbb{1}+\sigma_{mult}\xi_{mult})\odot
z_{i}^{\prime}(k)+\sigma_{add}\xi_{add}$ (14)
$\displaystyle=(\mathbb{1}+\sigma_{mult}\xi_{mult})\odot(\lambda
z_{i}(k)+(1-\lambda)\tilde{z}_{i}(k))+\sigma_{add}\xi_{add}$ (15)
$\displaystyle=z_{i}(k)+e_{i}^{NFM(k)},$ (16)
where
$e_{i}^{NFM(k)}=(1-\lambda)(\tilde{z}_{i}(k)-z_{i}(k))+\sigma_{mult}\xi_{mult}\odot(\lambda
z_{i}(k)+(1-\lambda)\tilde{z}_{i}(k))+\sigma_{add}\xi_{add}$.
Now, note that applying mixup to the pair $(z_{i}(k),\tilde{z}_{i}(k))$
results in $z_{i}^{\prime}(k)=z_{i}(k)+e_{i}^{mixup(k)}$, with
$e_{i}^{mixup(k)}=(1-\lambda)(\tilde{z}_{i}(k)-z_{i}(k))$, where
$z_{i}(k),\tilde{z}_{i}(k)\in g_{k}(\mathcal{X})$ and $\lambda\sim
Beta(\alpha,\beta)$, whereas applying noise injection to $z_{i}(k)$ results in
$(\mathbb{1}+\sigma_{mult}\xi_{mult})\odot
z_{i}(k)+\sigma_{add}\xi_{add}=z_{i}(k)+e_{i}^{noise(k)}$, with
$e_{i}^{noise(k)}=\sigma_{mult}\xi_{mult}\odot
z_{i}(k)+\sigma_{add}\xi_{add}$. Rewriting $e^{NFM(k)}_{i}$ in terms of
$e_{i}^{mixup(k)}$ and $e_{i}^{noise(k)}$ gives
$e^{NFM(k)}_{i}=(\mathbb{1}+\sigma_{mult}\xi_{mult})\odot
e_{i}^{mixup(k)}+e_{i}^{noise(k)}.$ (17)
Similarly, we can derive the expression for $e_{i}^{y}$ using the same
argument. The results in the lemma follow upon applying the rescaling
$1-\lambda\mapsto\epsilon(1-\lambda)$,
$\sigma_{add}\mapsto\epsilon\sigma_{add}$ and
$\sigma_{mult}\mapsto\epsilon\sigma_{mult}$, for $\epsilon>0$. ∎
## Appendix B Statements and Proof of the Results in the Main Paper
### B.1 Complete Statement of Theorem 1 in the Main Paper and the Proof
We first state the complete statement of Theorem 1 in the main paper.
###### Theorem 3 (Theorem 1 in the main paper).
Let $\epsilon>0$ be a small parameter, and assume that $h$ and $f$ are twice
differentiable. Then,
$L^{NFM}_{n}=\mathbb{E}_{k\sim\mathcal{S}}L^{NFM(k)}_{n}$, where
$L^{NFM(k)}_{n}=L_{n}^{std}+\epsilon
R_{1}^{(k)}+\epsilon^{2}\tilde{R}_{2}^{(k)}+\epsilon^{2}\tilde{R}_{3}^{(k)}+\epsilon^{2}\varphi(\epsilon),$
(18)
with
$\displaystyle\tilde{R}_{2}^{(k)}$
$\displaystyle=R_{2}^{(k)}+\sigma_{add}^{2}R_{2}^{add(k)}+\sigma_{mult}^{2}R_{2}^{mult(k)},$
(19) $\displaystyle\tilde{R}_{3}^{(k)}$
$\displaystyle=R_{3}^{(k)}+\sigma_{add}^{2}R_{3}^{add(k)}+\sigma_{mult}^{2}R_{3}^{mult(k)},$
(20)
where
$\displaystyle R_{1}^{(k)}$
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]}{n}\sum_{i=1}^{n}(h^{\prime}(f(x_{i}))-y_{i})\nabla_{k}f(g_{k}(x_{i}))^{T}\mathbb{E}_{x_{r}\sim\mathcal{D}_{x}}[g_{k}(x_{r})-g_{k}(x_{i})],$
(21) $\displaystyle R_{2}^{(k)}$
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)^{2}]}{2n}\sum_{i=1}^{n}h^{\prime\prime}(f(x_{i}))\nabla_{k}f(g_{k}(x_{i}))^{T}$
$\displaystyle\quad\times\mathbb{E}_{x_{r}\sim\mathcal{D}_{x}}[(g_{k}(x_{r})-g_{k}(x_{i}))(g_{k}(x_{r})-g_{k}(x_{i}))^{T}]\nabla_{k}f(g_{k}(x_{i})),$
(22) $\displaystyle R_{3}^{(k)}$
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)^{2}]}{2n}\sum_{i=1}^{n}(h^{\prime}(f(x_{i}))-y_{i})$
$\displaystyle\quad\times\mathbb{E}_{x_{r}\sim\mathcal{D}_{x}}[(g_{k}(x_{r})-g_{k}(x_{i}))^{T}\nabla_{k}^{2}f(g_{k}(x_{i}))(g_{k}(x_{r})-g_{k}(x_{i}))],$
(23) $\displaystyle R_{2}^{add(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}h^{\prime\prime}(f(x_{i}))\nabla_{k}f(g_{k}(x_{i}))^{T}\mathbb{E}_{\boldsymbol{\xi}_{k}}[\xi_{k}^{add}(\xi_{k}^{add})^{T}]\nabla_{k}f(g_{k}(x_{i})),$
(24) $\displaystyle R_{2}^{mult(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}h^{\prime\prime}(f(x_{i}))\nabla_{k}f(g_{k}(x_{i}))^{T}(\mathbb{E}_{\boldsymbol{\xi}_{k}}[\xi_{k}^{mult}(\xi_{k}^{mult})^{T}]\odot
g_{k}(x_{i})g_{k}(x_{i})^{T})\nabla_{k}f(g_{k}(x_{i})),$ (25) $\displaystyle
R_{3}^{add(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}(h^{\prime}(f(x_{i}))-y_{i})\mathbb{E}_{\boldsymbol{\xi}_{k}}[(\xi_{k}^{add})^{T}\nabla_{k}^{2}f(g_{k}(x_{i}))\xi_{k}^{add}],$
(26) $\displaystyle R_{3}^{mult(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}(h^{\prime}(f(x_{i}))-y_{i})\mathbb{E}_{\boldsymbol{\xi}_{k}}[(\xi_{k}^{mult}\odot
g_{k}(x_{i}))^{T}\nabla_{k}^{2}f(g_{k}(x_{i}))(\xi_{k}^{mult}\odot
g_{k}(x_{i}))],$ (27)
and $\varphi$ is some function such that $\lim_{\epsilon\to
0}\varphi(\epsilon)=0$.
###### Proof of Theorem 3.
To begin with, we note that, following the argument of the proof of Lemma 3.1
in [80], the loss function minimized by NFM can be written as
$L^{NFM}_{n}=\mathbb{E}_{k\sim\mathcal{S}}L^{NFM(k)}_{n}$, where
$L^{NFM(k)}_{n}=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}\mathbb{E}_{x_{r}\sim\mathcal{D}_{x}}\mathbb{E}_{\boldsymbol{\xi}_{k}\sim\mathcal{Q}}[h(f_{k}(g_{k}(x_{i})+\epsilon
e_{i}^{NFM(k)}))-y_{i}f_{k}(g_{k}(x_{i})+\epsilon e_{i}^{NFM(k)})],$ (28)
with
$e^{NFM(k)}_{i}=(\mathbb{1}+\epsilon\sigma_{mult}\xi_{k}^{mult})\odot
e_{i}^{mixup(k)}+e_{i}^{noise(k)}.$ (29)
Here $e_{i}^{mixup(k)}=(1-\lambda)(g_{k}(x_{r})-g_{k}(x_{i}))$ and
$e_{i}^{noise(k)}=\sigma_{mult}\xi_{k}^{mult}\odot
g_{k}(x_{i})+\sigma_{add}\xi_{k}^{add}$, with $g_{k}(x_{i}),g_{k}(x_{r})\in
g_{k}(\mathcal{X})$ and $\lambda\sim Beta(\alpha,\beta)$.
Denote $\psi_{i}(\epsilon):=h(f_{k}(g_{k}(x_{i})+\epsilon
e_{i}^{NFM(k)}))-y_{i}f_{k}(g_{k}(x_{i})+\epsilon e_{i}^{NFM(k)})$. Since $h$
and $f_{k}$ are twice differentiable by assumption, $\psi_{i}$ is twice
differentiable in $\epsilon$, and
$\psi_{i}(\epsilon)=\psi_{i}(0)+\epsilon\psi_{i}^{\prime}(0)+\frac{\epsilon^{2}}{2}\psi^{\prime\prime}_{i}(0)+\epsilon^{2}\varphi_{i}(\epsilon),$
(30)
where $\varphi_{i}$ is some function such that $\lim_{\epsilon\to
0}\varphi_{i}(\epsilon)=0$.
Denoting $\tilde{g}_{k}(x_{i}):=g_{k}(x_{i})+\epsilon e_{i}^{NFM(k)}$, we
compute, using linearity and the chain rule:
$\displaystyle\psi_{i}^{\prime}(\epsilon)$
$\displaystyle=(h^{\prime}(f_{k}(\tilde{g}_{k}(x_{i})))-y_{i})\nabla_{k}f_{k}(\tilde{g}_{k}(x_{i}))^{T}e_{i}^{NFM(k)}$
(31)
$\displaystyle=(h^{\prime}(f_{k}(\tilde{g}_{k}(x_{i})))-y_{i})\nabla_{k}f_{k}(\tilde{g}_{k}(x_{i}))^{T}[(1-\lambda)(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))+\sigma_{add}\xi_{k}^{add}$
$\displaystyle\ \ \ \
+\sigma_{mult}\xi_{k}^{mult}\odot\tilde{g}_{k}(x_{i})+\epsilon(1-\lambda)\sigma_{mult}\xi_{k}^{mult}\odot(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))],$
(32) $\displaystyle\psi_{i}^{\prime\prime}(\epsilon)$
$\displaystyle=h^{\prime\prime}(f_{k}(\tilde{g}_{k}(x_{i})))\nabla_{k}f_{k}(\tilde{g}_{k}(x_{i}))^{T}e_{i}^{NFM(k)}(e_{i}^{NFM(k)})^{T}\nabla_{k}f_{k}(\tilde{g}_{k}(x_{i}))$
$\displaystyle\ \ \ \
+(h^{\prime}(f_{k}(\tilde{g}_{k}(x_{i})))-y_{i})(e_{i}^{NFM(k)})^{T}\nabla_{k}^{2}f_{k}(\tilde{g}_{k}(x_{i}))e_{i}^{NFM(k)}$
(33)
$\displaystyle=h^{\prime\prime}(f_{k}(\tilde{g}_{k}(x_{i})))\nabla_{k}f_{k}(\tilde{g}_{k}(x_{i}))^{T}[(1-\lambda)(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))+\sigma_{add}\xi_{k}^{add}$
$\displaystyle\ \ \ \
+\sigma_{mult}\xi_{k}^{mult}\odot\tilde{g}_{k}(x_{i})+\epsilon(1-\lambda)\sigma_{mult}\xi_{k}^{mult}\odot(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))]$
$\displaystyle\ \ \ \
\times[(1-\lambda)(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))+\sigma_{add}\xi_{k}^{add}+\sigma_{mult}\xi_{k}^{mult}\odot\tilde{g}_{k}(x_{i})$
$\displaystyle\ \ \ \
+\epsilon(1-\lambda)\sigma_{mult}\xi_{k}^{mult}\odot(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))]^{T}\nabla_{k}f_{k}(\tilde{g}_{k}(x_{i}))$
$\displaystyle\ \ \ \
+(h^{\prime}(f_{k}(\tilde{g}_{k}(x_{i})))-y_{i})[(1-\lambda)(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))+\sigma_{add}\xi_{k}^{add}+\sigma_{mult}\xi_{k}^{mult}\odot\tilde{g}_{k}(x_{i})$
$\displaystyle\ \ \ \
+\epsilon(1-\lambda)\sigma_{mult}\xi_{k}^{mult}\odot(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))]^{T}\nabla_{k}^{2}f_{k}(\tilde{g}_{k}(x_{i}))[(1-\lambda)(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))$
$\displaystyle\ \ \ \
+\sigma_{add}\xi_{k}^{add}+\sigma_{mult}\xi_{k}^{mult}\odot\tilde{g}_{k}(x_{i})+\epsilon(1-\lambda)\sigma_{mult}\xi_{k}^{mult}\odot(\tilde{g}_{k}(x_{r})-\tilde{g}_{k}(x_{i}))].$
(34)
The equation in the theorem follows upon setting $\epsilon=0$ in the
expressions for $\psi_{i}^{\prime}(\epsilon)$ and
$\psi_{i}^{\prime\prime}(\epsilon)$ above, and then substituting the resulting
expressions into (28), with
$\varphi(\epsilon):=\frac{1}{n}\sum_{i=1}^{n}\varphi_{i}(\epsilon)$. ∎
### B.2 Theorem 2 in the Main Paper and the Proof
We first restate Theorem 2 in the main paper and then provide the proof.
Recall that we consider the binary cross-entropy loss, setting
$h(z)=\log(1+e^{z})$, with the labels $y$ taking values in $\\{0,1\\}$ and the
classifier model $f:\mathbb{R}^{d}\to\mathbb{R}$.
###### Theorem 4 (Theorem 2 in the main paper).
Let $\theta\in\Theta:=\\{\theta:y_{i}f(x_{i})+(y_{i}-1)f(x_{i})\geq 0\text{
for all }i\in[n]\\}$ be a point such that $\nabla_{k}f(g_{k}(x_{i}))$ and
$\nabla_{k}^{2}f(g_{k}(x_{i}))$ exist for all $i\in[n]$, $k\in\mathcal{S}$.
Assume that $f_{k}(g_{k}(x_{i}))=\nabla_{k}f(g_{k}(x_{i}))^{T}g_{k}(x_{i})$,
$\nabla_{k}^{2}f(g_{k}(x_{i}))=0$ for all $i\in[n]$, $k\in\mathcal{S}$. In
addition, suppose that $\|\nabla f(x_{i})\|_{2}>0$ for all $i\in[n]$,
$\mathbb{E}_{r\sim\mathcal{D}_{x}}[g_{k}(r)]=0$ and $\|g_{k}(x_{i})\|_{2}\geq
c_{x}^{(k)}\sqrt{d_{k}}$ for all $i\in[n]$, $k\in\mathcal{S}$. Then,
$\displaystyle
L_{n}^{NFM}\geq\frac{1}{n}\sum_{i=1}^{n}\max_{\|\delta_{i}\|_{2}\leq\epsilon_{i}^{mix}}l(f(x_{i}+\delta_{i}),y_{i})+L_{n}^{reg}+\epsilon^{2}\phi(\epsilon),$
(35)
where
$\displaystyle\epsilon_{i}^{mix}$
$\displaystyle=\epsilon\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]\cdot\mathbb{E}_{k\sim\mathcal{S}}\left[r_{i}^{(k)}c_{x}^{(k)}\frac{\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}}{\|\nabla
f(x_{i})\|_{2}}\sqrt{d_{k}}\right],$ (36) $\displaystyle r_{i}^{(k)}$
$\displaystyle=|\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{i}))|,$ (37)
$\displaystyle L_{n}^{reg}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}|h^{\prime\prime}(f(x_{i}))|(\epsilon_{i}^{reg})^{2},$
(38)
with
$\displaystyle(\epsilon_{i}^{reg})^{2}$
$\displaystyle=\epsilon^{2}\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}\bigg{(}\mathbb{E}_{\lambda}[(1-\lambda)]^{2}\mathbb{E}_{x_{r}}[\|g_{k}(x_{r})\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{r}))^{2}]$
$\displaystyle\hskip
19.91684pt+\sigma_{add}^{2}\mathbb{E}_{\boldsymbol{\xi}}[\|\xi_{add}\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),\xi_{add})^{2}]$
$\displaystyle\hskip
19.91684pt+\sigma_{mult}^{2}\mathbb{E}_{\boldsymbol{\xi}}[\|\xi_{mult}\odot
g_{k}(x_{i})\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),\xi_{mult}\odot
g_{k}(x_{i}))^{2}]\bigg{)},$ (39)
and $\phi$ is some function such that $\lim_{\epsilon\to 0}\phi(\epsilon)=0$.
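To illustrate how the attack radius of (36) is assembled, the following hedged NumPy sketch computes $r_{i}^{(k)}$ and $\epsilon_{i}^{mix}$ for a single sample. The uniform mean over the listed layers standing in for $\mathbb{E}_{k\sim\mathcal{S}}$, and all numeric inputs, are hypothetical placeholders.

```python
import numpy as np

def cos_abs(u, v):
    # r_i^{(k)} of Eq. (37): |cos| of the angle between u and v
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def eps_mix(eps, mean_one_minus_lam, grads_k, feats_k, grad_x, c_x, d_k):
    # eps_i^mix of Eq. (36) for one sample; the uniform mean over the listed
    # layers stands in for E_{k ~ S} (an assumption of this sketch).
    terms = [
        cos_abs(gk, fk) * c * np.sqrt(d) * np.linalg.norm(gk) / np.linalg.norm(grad_x)
        for gk, fk, c, d in zip(grads_k, feats_k, c_x, d_k)
    ]
    return eps * mean_one_minus_lam * np.mean(terms)

rng = np.random.default_rng(1)
grads_k = [rng.standard_normal(16), rng.standard_normal(32)]  # nabla_k f(g_k(x_i))
feats_k = [rng.standard_normal(16), rng.standard_normal(32)]  # g_k(x_i)
radius = eps_mix(0.1, 0.5, grads_k, feats_k,
                 grad_x=rng.standard_normal(10), c_x=[1.0, 1.0], d_k=[16, 32])
```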
###### Proof of Theorem 4.
For $h(z)=\log(1+e^{z})$, we have
$h^{\prime}(z)=\frac{e^{z}}{1+e^{z}}=:S(z)\geq 0$ and
$h^{\prime\prime}(z)=\frac{e^{z}}{(1+e^{z})^{2}}=S(z)(1-S(z))\geq 0$.
Substituting these expressions into the equation of Theorem 3 and using the
assumptions that
$f_{k}(g_{k}(x_{i}))=\nabla_{k}f(g_{k}(x_{i}))^{T}g_{k}(x_{i})$ and
$\mathbb{E}_{r\sim\mathcal{D}_{x}}[g_{k}(r)]=0$, we have, for
$k\in\mathcal{S}$,
$R_{1}^{(k)}=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]}{n}\sum_{i=1}^{n}(y_{i}-S(f(x_{i})))f_{k}(g_{k}(x_{i})),$
(40)
and we compute:
$\displaystyle R_{2}^{(k)}$
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)^{2}]}{2n}\sum_{i=1}^{n}S(f(x_{i}))(1-S(f(x_{i})))\nabla_{k}f(g_{k}(x_{i}))^{T}$
$\displaystyle\hskip
14.22636pt\times\mathbb{E}_{x_{r}\sim\mathcal{D}_{x}}[(g_{k}(x_{r})-g_{k}(x_{i}))(g_{k}(x_{r})-g_{k}(x_{i}))^{T}]\nabla_{k}f(g_{k}(x_{i}))$
(41)
$\displaystyle\geq\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)]^{2}}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\nabla_{k}f(g_{k}(x_{i}))^{T}$
$\displaystyle\hskip
14.22636pt\times\mathbb{E}_{x_{r}\sim\mathcal{D}_{x}}[(g_{k}(x_{r})-g_{k}(x_{i}))(g_{k}(x_{r})-g_{k}(x_{i}))^{T}]\nabla_{k}f(g_{k}(x_{i}))$
(42)
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)]^{2}}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\nabla_{k}f(g_{k}(x_{i}))^{T}$
$\displaystyle\hskip
14.22636pt\times(\mathbb{E}_{x_{r}\sim\mathcal{D}_{x}}[g_{k}(x_{r})g_{k}(x_{r})^{T}]+g_{k}(x_{i})g_{k}(x_{i})^{T})\nabla_{k}f(g_{k}(x_{i}))
(43)
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)]^{2}}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|(\nabla_{k}f(g_{k}(x_{i}))^{T}g_{k}(x_{i}))^{2}$
$\displaystyle\ \ \ \ \
+\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)]^{2}}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\mathbb{E}_{x_{r}\sim\mathcal{D}_{x}}[(\nabla_{k}f(g_{k}(x_{i}))^{T}g_{k}(x_{r}))^{2}]
(44)
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)]^{2}}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}\|g_{k}(x_{i})\|_{2}^{2}$
$\displaystyle\hskip
14.22636pt\times(\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{i})))^{2}+\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}$
$\displaystyle\ \ \ \ \ \
\times\mathbb{E}_{\lambda}[(1-\lambda)]^{2}\mathbb{E}_{x_{r}}[\|g_{k}(x_{r})\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{r}))^{2}]$
(45)
$\displaystyle\geq\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)]^{2}d_{k}(r_{i}^{(k)}c_{x}^{(k)})^{2}$
$\displaystyle\ \ \ \
+\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}\cdot\mathbb{E}_{\lambda}[(1-\lambda)]^{2}\mathbb{E}_{x_{r}}[\|g_{k}(x_{r})\|_{2}^{2}$
$\displaystyle\ \ \ \ \
\times\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{r}))^{2}]$ (46)
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla
f(x_{i})\|_{2}^{2}$ $\displaystyle\ \ \ \
\times\left(\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[(1-\lambda)]^{2}\frac{\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}}{\|\nabla
f(x_{i})\|_{2}^{2}}d_{k}(r_{i}^{(k)}c_{x}^{(k)})^{2}\right)$ $\displaystyle\ \
\ \
+\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}\cdot\mathbb{E}_{\lambda}[(1-\lambda)]^{2}\mathbb{E}_{x_{r}}[\|g_{k}(x_{r})\|_{2}^{2}$
$\displaystyle\ \ \ \ \
\times\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{r}))^{2}].$ (47)
In the above, we have used the facts that
$\mathbb{E}[Z^{2}]=\mathbb{E}[Z]^{2}+\mathrm{Var}(Z)\geq\mathbb{E}[Z]^{2}$ and
$S,S(1-S)\geq 0$ to obtain (42), the assumption that
$\mathbb{E}_{r\sim\mathcal{D}_{x}}[g_{k}(r)]=0$ to arrive at (43), the
assumption that $\|g_{k}(x_{i})\|_{2}\geq c_{x}^{(k)}\sqrt{d_{k}}$ for all
$i\in[n]$, $k\in\mathcal{S}$ to arrive at (46), and the assumption that
$\|\nabla f(x_{i})\|_{2}>0$ for all $i\in[n]$ to justify the last equation
above.
Next, we bound $R_{1}^{(k)}$, using the assumption that $\theta\in\Theta$.
Note that from our assumption on $\theta$, we have
$y_{i}f(x_{i})+(y_{i}-1)f(x_{i})\geq 0$, which implies that $f(x_{i})\geq 0$
if $y_{i}=1$ and $f(x_{i})\leq 0$ if $y_{i}=0$. Thus, if $y_{i}=1$, then
$(y_{i}-S(f(x_{i})))f_{k}(g_{k}(x_{i}))=(1-S(f(x_{i})))f_{k}(g_{k}(x_{i}))\geq
0$, since $f(x_{i})\geq 0$ and $(1-S(f(x_{i})))\geq 0$ due to the fact that
$S(f(x_{i}))\in(0,1)$. A similar argument leads to
$(y_{i}-S(f(x_{i})))f_{k}(g_{k}(x_{i}))\geq 0$ if $y_{i}=0$. So, we have
$(y_{i}-S(f(x_{i})))f_{k}(g_{k}(x_{i}))\geq 0$ for all $i\in[n]$.
Therefore, noting that
$\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]\geq 0$, we
compute:
$\displaystyle R_{1}^{(k)}$
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]}{n}\sum_{i=1}^{n}|y_{i}-S(f(x_{i}))||f_{k}(g_{k}(x_{i}))|$
(48)
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]}{n}\sum_{i=1}^{n}|S(f(x_{i}))-y_{i}|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}\|g_{k}(x_{i})\|_{2}|\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{i}))|$
(49)
$\displaystyle\geq\frac{1}{n}\sum_{i=1}^{n}|S(f(x_{i}))-y_{i}|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}(\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]r_{i}^{(k)}c_{x}^{(k)}\sqrt{d_{k}})$
(50) $\displaystyle=\frac{1}{n}\sum_{i=1}^{n}|S(f(x_{i}))-y_{i}|\|\nabla
f(x_{i})\|_{2}\left(\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]\frac{\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}}{\|\nabla
f(x_{i})\|_{2}}r_{i}^{(k)}c_{x}^{(k)}\sqrt{d_{k}}\right).$ (51)
Note that $R_{3}^{(k)}=0$ as a consequence of our assumption that
$\nabla_{k}^{2}f(g_{k}(x_{i}))=0$ for all $i\in[n]$, $k\in\mathcal{S}$, and
a similar argument leads to:
$\displaystyle R_{2}^{add(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\nabla_{k}f(g_{k}(x_{i}))^{T}\mathbb{E}_{\boldsymbol{\xi}_{k}}[\xi_{k}^{add}(\xi_{k}^{add})^{T}]\nabla_{k}f(g_{k}(x_{i}))$
(52)
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}$
$\displaystyle\ \ \ \ \
\times\mathbb{E}_{\boldsymbol{\xi}_{k}}[\|\xi_{k}^{add}\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),\xi_{k}^{add})^{2}]$
(53) $\displaystyle R_{2}^{mult(k)}$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\nabla_{k}f(g_{k}(x_{i}))^{T}(\mathbb{E}_{\boldsymbol{\xi}_{k}}[\xi_{k}^{mult}(\xi_{k}^{mult})^{T}]\odot
g_{k}(x_{i})g_{k}(x_{i})^{T})$ $\displaystyle\ \ \ \
\times\nabla_{k}f(g_{k}(x_{i}))$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}$
$\displaystyle\ \ \ \ \
\times\mathbb{E}_{\boldsymbol{\xi}_{k}}[\|\xi_{k}^{mult}\odot
g_{k}(x_{i})\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),\xi_{k}^{mult}\odot
g_{k}(x_{i}))^{2}].$ (54)
Using Theorem 3 and the above results, we obtain:
$\displaystyle L^{NFM}_{n}-\frac{1}{n}\sum_{i=1}^{n}l(f(x_{i}),y_{i})$
$\displaystyle\geq\mathbb{E}_{k}[\epsilon
R_{1}^{(k)}+\epsilon^{2}R_{2}^{(k)}+\epsilon^{2}R_{2}^{add(k)}+\epsilon^{2}R_{2}^{mult(k)}+\epsilon^{2}\varphi(\epsilon)]$
(55) $\displaystyle\geq\frac{1}{n}\sum_{i=1}^{n}|S(f(x_{i}))-y_{i}|\|\nabla
f(x_{i})\|_{2}\epsilon_{i}^{mix}$ (56) $\displaystyle\ \ \ \
+\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla
f(x_{i})\|_{2}^{2}(\epsilon_{i}^{mix})^{2}$ $\displaystyle\ \ \ \
+\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}\cdot\mathbb{E}_{\lambda}[(1-\lambda)]^{2}\mathbb{E}_{x_{r}}[\|g_{k}(x_{r})\|_{2}^{2}$
$\displaystyle\ \ \ \ \ \
\times\cos(\nabla_{k}f(g_{k}(x_{i})),g_{k}(x_{r}))^{2}]$ (57) $\displaystyle\
\ \ \
+\frac{1}{2n}\sum_{i=1}^{n}|S(f(x_{i}))(1-S(f(x_{i})))|(\epsilon_{i}^{noise})^{2}+\epsilon^{2}\varphi(\epsilon),$
(58)
where
$\epsilon_{i}^{mix}:=\epsilon\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]\mathbb{E}_{k}\left[\frac{\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}}{\|\nabla
f(x_{i})\|_{2}}r_{i}^{(k)}c_{x}^{(k)}\sqrt{d_{k}}\right]$ and
$\displaystyle(\epsilon_{i}^{noise})^{2}$
$\displaystyle=\epsilon^{2}\|\nabla_{k}f(g_{k}(x_{i}))\|_{2}^{2}\bigg{(}\sigma_{add}^{2}\mathbb{E}_{\boldsymbol{\xi}_{k}}[\|\xi_{k}^{add}\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),\xi_{k}^{add})^{2}]$
$\displaystyle\hskip
19.91684pt+\sigma_{mult}^{2}\mathbb{E}_{\boldsymbol{\xi}_{k}}[\|\xi_{k}^{mult}\odot
g_{k}(x_{i})\|_{2}^{2}\cos(\nabla_{k}f(g_{k}(x_{i})),\xi_{k}^{mult}\odot
g_{k}(x_{i}))^{2}]\bigg{)}.$ (59)
On the other hand, for any small parameters $\epsilon_{i}>0$ and any inputs
$z_{1},\dots,z_{n}$, we can, using a second-order Taylor expansion and then
applying our assumptions, compute:
$\displaystyle\frac{1}{n}\sum_{i=1}^{n}\max_{\|\delta_{i}\|_{2}\leq\epsilon_{i}}l(f(z_{i}+\delta_{i}),y_{i})-\frac{1}{n}\sum_{i=1}^{n}l(f(z_{i}),y_{i})$
$\displaystyle\leq\frac{1}{n}\sum_{i=1}^{n}|S(f(z_{i}))-y_{i}|\|\nabla
f(z_{i})\|_{2}\epsilon_{i}+\frac{1}{2n}\sum_{i=1}^{n}|S(f(z_{i}))(1-S(f(z_{i})))|\|\nabla
f(z_{i})\|_{2}^{2}\epsilon_{i}^{2}$ $\displaystyle\ \ \ \
+\frac{1}{n}\sum_{i=1}^{n}\max_{\|\delta_{i}\|_{2}\leq\epsilon_{i}}\|\delta_{i}\|_{2}^{2}\varphi_{i}^{\prime}(\delta_{i})$
(60) $\displaystyle\leq\frac{1}{n}\sum_{i=1}^{n}|S(f(z_{i}))-y_{i}|\|\nabla
f(z_{i})\|_{2}\epsilon_{i}+\frac{1}{2n}\sum_{i=1}^{n}|S(f(z_{i}))(1-S(f(z_{i})))|\|\nabla
f(z_{i})\|_{2}^{2}\epsilon_{i}^{2}$ $\displaystyle\ \ \ \
+\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}^{2}\varphi_{i}^{\prime\prime}(\epsilon_{i}),$
(61)
where the $\varphi_{i}^{\prime}$ are functions such that $\lim_{z\to
0}\varphi_{i}^{\prime}(z)=0$,
$\varphi_{i}^{\prime\prime}(\epsilon_{i}):=\max_{\|\delta_{i}\|_{2}\leq\epsilon_{i}}\varphi_{i}^{\prime}(\delta_{i})$
and $\lim_{z\to 0}\varphi_{i}^{\prime\prime}(z)=0$.
Combining (58) and (61), we see that
$\displaystyle L_{n}^{NFM}$
$\displaystyle\geq\frac{1}{n}\sum_{i=1}^{n}\max_{\|\delta_{i}^{mix}\|_{2}\leq\epsilon_{i}^{mix}}l(f(x_{i}+\delta_{i}^{mix}),y_{i})+L_{n}^{reg}+\epsilon^{2}\varphi(\epsilon)-\frac{1}{n}\sum_{i=1}^{n}(\epsilon_{i}^{mix})^{2}\varphi_{i}^{\prime\prime}(\epsilon_{i}^{mix})$
(62)
$\displaystyle=:\frac{1}{n}\sum_{i=1}^{n}\max_{\|\delta_{i}^{mix}\|_{2}\leq\epsilon_{i}^{mix}}l(f(x_{i}+\delta_{i}^{mix}),y_{i})+L_{n}^{reg}+\epsilon^{2}\phi(\epsilon),$
(63)
where $L_{n}^{reg}$ is defined in the theorem. Noting that $\lim_{\epsilon\to
0}\phi(\epsilon)=0$, the proof is complete. ∎
## Appendix C NFM Through the Lens of Implicit Regularization and
Classification Margin
First, we define classification margin at the input level. We shall show that
minimizing the NFM loss can lead to an increase in the classification margin,
thereby improving model robustness in this sense.
###### Definition 2 (Classification Margin).
The classification margin of a training input-label sample
$s_{i}:=(x_{i},c_{i})$ measured by the Euclidean metric $d$ is defined as the
radius of the largest $d$-metric ball in $\mathcal{X}$ centered at $x_{i}$
that is contained in the decision region associated with the class label
$c_{i}$, i.e., it is: $\gamma^{d}(s_{i})=\sup\\{a:d(x_{i},x)\leq a\Rightarrow
g(x)=c_{i}\ \ \forall x\\}.$
Intuitively, a larger classification margin allows a classifier to associate a
larger region centered on a point $x_{i}$ in the input space with the same
class. This makes the classifier less sensitive to input perturbations, since a
perturbation of $x_{i}$ is still likely to fall within this region, leaving the
classifier's prediction unchanged. In this sense, the classifier becomes more
robust. Typically, networks are trained with a loss (cross-entropy) that
promotes separation of the different classes in the network output, which in
turn maximizes a certain notion of score for each training sample [61].
###### Definition 3 (Score).
For an input-label training sample $s_{i}=(x_{i},c_{i})$, we define its score
as $o(s_{i})=\min_{j\neq c_{i}}\sqrt{2}(e_{c_{i}}-e_{j})^{T}f(x_{i})\geq 0,$
where $e_{i}\in\mathbb{R}^{K}$ is the Kronecker delta vector (one-hot vector)
with $e_{i}^{i}=1$ and $e_{i}^{j}=0$ for $i\neq j$.
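A minimal NumPy sketch of this definition, using hypothetical logits and class index:

```python
import numpy as np

def score(f_x, c_i):
    # o(s_i) of Definition 3: sqrt(2) times the smallest gap between the
    # true-class output and any other class output, since
    # (e_{c_i} - e_j)^T f(x) = f(x)[c_i] - f(x)[j].
    gaps = np.sqrt(2.0) * (f_x[c_i] - np.delete(f_x, c_i))
    return gaps.min()

logits = np.array([2.1, 0.3, -1.0])  # hypothetical network output f(x_i), K = 3
print(score(logits, c_i=0))          # sqrt(2) * (2.1 - 0.3) > 0: classes separated
```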
A positive score implies that at the network output, classes are separated by
a margin that corresponds to the score. A large score may not imply a large
classification margin, but the score can be related to the classification margin via
the following bound.
###### Proposition 1.
Assume that the score $o(s_{i})>0$ and let $k\in\mathcal{S}$. Then, the
classification margin for the training sample $s_{i}$ can be lower bounded as:
$\gamma^{d}(s_{i})\geq\frac{C(s_{i})}{\sup_{x\in
conv(\mathcal{X})}\|\nabla_{k}f(g_{k}(x))\|_{2}},$ (64)
where $C(s_{i})=o(s_{i})/\sup_{x\in conv(\mathcal{X})}\|\nabla
g_{k}(x)\|_{2}$.
Since NFM implicitly reduces the feature-output Jacobians $\nabla_{k}f$
(including the input-output Jacobian) according to the mixup level and noise
levels (see Proposition 3), this, together with Theorem 1, suggests that
applying NFM implicitly increases the classification margin, thereby making
the model more robust to input perturbations. We note that a similar, albeit
more involved, bound can also be obtained for the all-layer margin, a more
refined version of classification margin introduced in [70], and the
conclusion that applying NFM implicitly increases the margin also holds.
We now prove the proposition.
###### Proof of Proposition 1.
Note that, for any $k\in\mathcal{S},$ $\nabla f(x)=\nabla_{k}f(g_{k}(x))\nabla
g_{k}(x)$ by the chain rule, and so
$\displaystyle\|\nabla f(x)\|_{2}$
$\displaystyle\leq\|\nabla_{k}f(g_{k}(x))\|_{2}\|\nabla g_{k}(x)\|_{2}$ (65)
$\displaystyle\leq\left(\sup_{x\in
conv(\mathcal{X})}\|\nabla_{k}f(g_{k}(x))\|_{2}\right)\left(\sup_{x\in
conv(\mathcal{X})}\|\nabla g_{k}(x)\|_{2}\right).$ (66)
The statement in the proposition follows from a straightforward application of
Theorem 4 in [61] together with the above bound. ∎
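As a quick numerical sanity check of the chain-rule bound (65)–(66), the following sketch uses random matrices standing in for the Jacobians; it is an illustration only, not the actual network Jacobians.

```python
import numpy as np

rng = np.random.default_rng(2)
J_f = rng.standard_normal((3, 16))   # stand-in for nabla_k f(g_k(x)), K = 3, d_k = 16
J_g = rng.standard_normal((16, 10))  # stand-in for nabla g_k(x), input dimension 10
# Chain rule: nabla f(x) = nabla_k f(g_k(x)) @ nabla g_k(x); the spectral norm is
# submultiplicative, which is exactly inequality (65).
lhs = np.linalg.norm(J_f @ J_g, ord=2)
rhs = np.linalg.norm(J_f, ord=2) * np.linalg.norm(J_g, ord=2)
assert lhs <= rhs + 1e-9
```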
## Appendix D NFM Through the Lens of Probabilistic Robustness
Since the main novelty of NFM lies in the introduction of noise injection, it
would be insightful to isolate the robustness-boosting benefits of injecting
noise on top of manifold mixup. We shall demonstrate the isolated benefit in
this section.
The key idea is based on the observation that manifold mixup produces
minibatch outputs that lie in the convex hull of the feature space at each
iteration. Therefore, for $k\in\mathcal{S}$, $NFM(k)$ can be viewed as
injecting noise to the layer $k$ features sampled from some distribution over
$conv(g_{k}(\mathcal{X}))$, and so the $NFM(k)$ neural network $F_{k}$ can be
viewed as a probabilistic mapping from $conv(g_{k}(\mathcal{X}))$ to
$\mathcal{P}(\mathcal{Y})$, the space of probability distributions on
$\mathcal{Y}$.
To isolate the benefit of noise injection, we adapt the approach of [51, 52]
to our setting to show that the Gaussian noise injection procedure in NFM
robustifies manifold mixup in a probabilistic sense. At its core, this
probabilistic notion of robustness amounts to making the model locally
Lipschitz with respect to some distance on the input and output space,
ensuring that a small perturbation in the input will not lead to large changes
(as measured by some probability metric) in the output. Interestingly, it is
related to a notion of differential privacy [42, 15], as formalized in [53].
We now formalize this probabilistic notion of robustness.
Let $p>0$. We say that a standard model $f:\mathcal{X}\to\mathcal{Y}$ is
$\alpha_{p}$-robust if for any $(x,y)\sim\mathcal{D}$ such that $f(x)=y$, one
has, for any data perturbation $\tau\in\mathcal{X}$,
$\|\tau\|_{p}\leq\alpha_{p}\implies f(x)=f(x+\tau).$ (67)
An analogous definition can be formulated when the output of the model is
distribution-valued.
###### Definition 4 (Probabilistic robustness).
A probabilistic model $F:\mathcal{X}\to\mathcal{P}(\mathcal{Y})$ is called
$(\alpha_{p},\epsilon)$-robust with respect to $D$ if, for any
$x,\tau\in\mathcal{X}$, one has
$\|\tau\|_{p}\leq\alpha_{p}\implies D(F(x),F(x+\tau))\leq\epsilon,$ (68)
where $D$ is a metric or divergence between two probability distributions.
We refer to the probabilistic model (built on top of a manifold mixup
classifier) that injects Gaussian noise to the layer $k$ features as
the probabilistic FM model, and we denote it by
$F^{noisy(k)}:conv(g_{k}(\mathcal{X}))\to\mathcal{P}(\mathcal{Y})$. We denote by
$G$ the classifier constructed from $F^{noisy(k)}$, i.e.,
$G:x\mapsto\arg\max_{j\in[K]}[F^{noisy(k)}]^{j}(x)$.
In the sequel, we take $D$ to be the total variation distance $D_{TV}$,
defined as:
$D_{TV}(P,Q):=\sup_{S\subset\mathcal{X}}|P(S)-Q(S)|,$ (69)
for any two distributions $P$ and $Q$ over $\mathcal{X}$. Recall that if $P$
and $Q$ have densities $\rho_{p}$ and $\rho_{q}$ respectively, then the total
variation distance is half of the $L^{1}$ distance, i.e.,
$D_{TV}(P,Q)=\frac{1}{2}\int_{\mathcal{X}}|\rho_{p}(x)-\rho_{q}(x)|dx$. The
choice of the distance depends on the problem at hand and will give rise to
different notions of robustness. One could also consider other statistical
distances such as the Wasserstein distance and the Rényi divergence, which can be
related to total variation (see [52, 22] for details).
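As a quick numerical illustration of this half-$L^{1}$ characterization, the following sketch estimates $D_{TV}$ between two one-dimensional Gaussians on a fine grid; the parameters are arbitrary.

```python
import numpy as np
from scipy.stats import norm

# D_TV(P, Q) = (1/2) * integral |rho_p - rho_q|, checked on a fine grid for two
# one-dimensional Gaussians with arbitrary (hypothetical) parameters.
x = np.linspace(-12.0, 12.0, 100001)
p = norm.pdf(x, loc=0.0, scale=1.0)
q = norm.pdf(x, loc=1.0, scale=1.5)
tv = 0.5 * np.sum(np.abs(p - q)) * (x[1] - x[0])  # Riemann sum of the L1 distance
print(f"estimated D_TV = {tv:.4f}")
```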
Before presenting our main result in this section, we need the following
notation. Let $\Sigma(x):=\sigma_{add}^{2}I+\sigma_{mult}^{2}xx^{T}$. For
$x,\tau\in\mathcal{X}$, let $\Pi_{x}$ be a $d_{k}$ by $d_{k}-1$ matrix whose
columns form a basis for the subspace orthogonal to $g_{k}(x+\tau)-g_{k}(x)$,
and $\\{\rho_{i}(g_{k}(x),\tau)\\}_{i\in[d_{k}-1]}$ be the eigenvalues of
$(\Pi_{x}^{T}\Sigma(g_{k}(x))\Pi_{x})^{-1}\Pi_{x}^{T}\Sigma(g_{k}(x+\tau))\Pi_{x}-I$.
Also, let $[F]^{topk}(x)$ denote the $k$th highest value of the entries in the
vector $F(x)$.
Viewing an $NFM(k)$ classifier as a probabilistic FM classifier, we have the
following result.
###### Theorem 5 (Gaussian noise injection robustifies FM classifiers).
Let $k\in\mathcal{S}$, $d_{k}>1$, and assume that
$g_{k}(x)g_{k}(x)^{T}\geq\beta_{k}^{2}I>0$ for all $x\in conv(\mathcal{X})$
for some constant $\beta_{k}$. Then, $F^{noisy(k)}$ is
$\left(\alpha_{p},\epsilon_{k}(p,d,\alpha_{p},\sigma_{add},\sigma_{mult})\right)$-robust
with respect to $D_{TV}$ against $l_{p}$ adversaries, with
$\epsilon_{k}(p,d,\alpha_{p},\sigma_{add},\sigma_{mult})=\frac{9}{2}\min\\{1,\max\\{A,B\\}\\},$
(70)
where
$\displaystyle A$
$\displaystyle=A_{p}(\alpha_{p})\frac{\sigma_{mult}^{2}}{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}\bigg{(}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}^{2}+2\|g_{k}(x)\|_{2}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}\bigg{)},$ (71) $\displaystyle B$
$\displaystyle=B_{k}(\tau)\frac{\alpha_{p}(\mathbb{1}_{p\in(0,2]}+d^{1/2-1/p}\mathbb{1}_{p\in(2,\infty)}+\sqrt{d}\mathbb{1}_{p=\infty})}{\sqrt{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}},$
(72)
with
$A_{p}(\alpha_{p})=\begin{cases}\alpha_{p}\mathbb{1}_{\alpha_{p}<1}+\alpha_{p}^{2}\mathbb{1}_{\alpha_{p}\geq
1},&\text{if}\ p\in(0,2],\\\
d^{1/2-1/p}(\alpha_{p}\mathbb{1}_{\alpha_{p}<1}+\alpha_{p}^{2}\mathbb{1}_{\alpha_{p}\geq
1}),&\text{if}\ p\in(2,\infty),\\\
\sqrt{d}(\alpha_{p}\mathbb{1}_{\alpha_{p}<1}+\alpha_{p}^{2}\mathbb{1}_{\alpha_{p}\geq
1}),&\text{if}\ p=\infty,\end{cases}$ (73)
and
$B_{k}(\tau)=\sup_{x\in conv(\mathcal{X})}\bigg{(}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}\cdot\sqrt{\sum_{i=1}^{d_{k}-1}\rho_{i}^{2}(g_{k}(x),\tau)}\bigg{)}.$
(74)
Moreover, if $x\in\mathcal{X}$ is such that
$[F^{noisy(k)}]^{top1}(x)\geq[F^{noisy(k)}]^{top2}(x)+2\epsilon_{k}(p,d,\alpha_{p},\sigma_{add},\sigma_{mult})$,
then for any $\tau\in\mathcal{X}$, we have
$\|\tau\|_{p}\leq\alpha_{p}\implies G(x)=G(x+\tau),$ (75)
for any $p>0$.
Theorem 5 implies that we can inject Gaussian noise into the feature mixup
representation to improve robustness of FM classifiers in the sense of
Definition 4, while keeping track of maximal loss in accuracy incurred under
attack, by tuning the noise levels $\sigma_{add}$ and $\sigma_{mult}$. To
illustrate this, suppose that $\sigma_{mult}=0$ and consider the case of
$p=2$, in which case $A=0$, $B\sim\alpha_{2}/\sigma_{add}$ and so injecting
additive Gaussian noise can help control the change in the model output,
preserving the classifier’s prediction, when the data perturbation is of size
$\alpha_{2}$.
We now prove Theorem 5. Before doing so, we need the following lemma.
###### Lemma 2.
Let $x_{1}:=z\in\mathbb{R}^{d_{k}}$ and $x_{2}:=z+\tau\in\mathbb{R}^{d_{k}}$,
with $\tau\neq 0$ and $d_{k}>1$, and
$\Sigma(x):=\sigma_{add}^{2}I+\sigma_{mult}^{2}xx^{T}\geq(\sigma_{add}^{2}+\sigma_{mult}^{2}\beta^{2})I>0$,
for some constant $\beta$, for all $x$. Let $\Pi$ be a $d_{k}$ by $d_{k}-1$
matrix whose columns form a basis for the subspace orthogonal to $\tau$, and
let $\rho_{1}(z,\tau),\dots,\rho_{d_{k}-1}(z,\tau)$ denote the eigenvalues of
$(\Pi^{T}\Sigma(x_{1})\Pi)^{-1}\Pi^{T}\Sigma(x_{2})\Pi-I$.
Define the function $C(x_{1},x_{2},\Sigma):=\max\\{A,B\\}$, where
$\displaystyle A$
$\displaystyle=\frac{\sigma_{mult}^{2}}{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta^{2}}(\|\tau\|_{2}^{2}+2\tau^{T}z),$
(76) $\displaystyle B$
$\displaystyle=\frac{\|\tau\|_{2}}{\sqrt{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta^{2}}}\sqrt{\sum_{i=1}^{d_{k}-1}\rho_{i}^{2}(z,\tau)}.$
(77)
Then, the total variation distance between $\mathcal{N}(x_{1},\Sigma(x_{1}))$
and $\mathcal{N}(x_{2},\Sigma(x_{2}))$ admits the following bounds:
$\frac{1}{200}\leq\frac{D_{TV}(\mathcal{N}(x_{1},\Sigma(x_{1})),\mathcal{N}(x_{2},\Sigma(x_{2})))}{\min\\{1,C(x_{1},x_{2},\Sigma)\\}}\leq\frac{9}{2}.$
(78)
###### Proof of Lemma 2.
The result follows from a straightforward application of Theorem 1.2 in [14],
which provides bounds on the total variation distance between Gaussians with
different means and covariances. ∎
With this lemma in hand, we now prove Theorem 5.
###### Proof of Theorem 5.
We denote the noise injection procedure by the map
$\mathcal{I}:x\mapsto\mathcal{N}(x,\Sigma(x))$, where
$\Sigma(x)=\sigma_{add}^{2}I+\sigma_{mult}^{2}xx^{T}$.
Let $x\in\mathcal{X}$ be a test datapoint and $\tau\in\mathcal{X}$ be a data
perturbation such that $\|\tau\|_{p}\leq\alpha_{p}$ for $p>0$.
Note that
$\displaystyle
D_{TV}(F_{k}(\mathcal{I}(g_{k}(x))),F_{k}(\mathcal{I}(g_{k}(x+\tau))))$
$\displaystyle\leq D_{TV}(\mathcal{I}(g_{k}(x)),\mathcal{I}(g_{k}(x+\tau)))$
(79) $\displaystyle\leq
D_{TV}(\mathcal{I}(g_{k}(x)),\mathcal{I}(g_{k}(x)+g_{k}(x+\tau)-g_{k}(x)))$
(80)
$\displaystyle=D_{TV}\left(\mathcal{I}(g_{k}(x)),\mathcal{I}\left(g_{k}(x)+\tau_{k}\right)\right)$
(81)
$\displaystyle\leq\frac{9}{2}\min\\{1,\Phi(g_{k}(x),\tau_{k},\sigma_{add},\sigma_{mult},\beta_{k})\\},$
(82)
where $\tau_{k}:=g_{k}(x+\tau)-g_{k}(x)=\left(\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right)\tau$ by the generalized fundamental theorem of
calculus, and
$\displaystyle\Phi(g_{k}(x),\tau_{k},\sigma_{add},\sigma_{mult},\beta_{k})$
$\displaystyle:=\max\left\\{\frac{\sigma_{mult}^{2}}{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}(\|\tau_{k}\|_{2}^{2}+2\langle\tau_{k},g_{k}(x)\rangle),\frac{\|\tau_{k}\|_{2}}{\sqrt{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}}\sqrt{\sum_{i=1}^{d_{k}-1}\rho_{i}^{2}(g_{k}(x),\tau)}\right\\},$
(83)
where the $\rho_{i}(g_{k}(x),\tau)$ are the eigenvalues given in the theorem.
In the first line above, we have used the data processing inequality
(Theorem 6 in [52]), and the last line follows from applying Lemma 2 together
with the assumption that $g_{k}(x)g_{k}(x)^{T}\geq\beta_{k}^{2}I>0$ for all
$x$.
Using the bounds
$\displaystyle\|\tau_{k}\|_{2}$ $\displaystyle\leq\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}\|\tau\|_{2}$ (84)
and
$|\langle\tau_{k},g_{k}(x)\rangle|\leq\|g_{k}(x)\|_{2}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}\|\tau\|_{2},$ (85)
we have
$\displaystyle\Phi(g_{k}(x),\tau_{k},\sigma_{add},\sigma_{mult},\beta_{k})$
$\displaystyle\leq\max\left\\{A,B\right\\},$ (86)
where
$A=\frac{\sigma_{mult}^{2}}{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}\bigg{(}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}^{2}\|\tau\|_{2}^{2}+2\|g_{k}(x)\|_{2}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}\|\tau\|_{2}\bigg{)}$ (87)
and
$\displaystyle B$ $\displaystyle=\frac{\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}\|\tau\|_{2}}{\sqrt{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}}\sqrt{\sum_{i=1}^{d_{k}-1}\rho_{i}^{2}(g_{k}(x),\tau)}$
(88) $\displaystyle\leq\sup_{x\in
conv(\mathcal{X})}\bigg{(}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}\cdot\sqrt{\sum_{i=1}^{d_{k}-1}\rho_{i}^{2}(g_{k}(x),\tau)}\bigg{)}\frac{\|\tau\|_{2}}{\sqrt{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}}$
(89)
$\displaystyle=:B_{k}(\tau)\frac{\|\tau\|_{2}}{\sqrt{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}}.$
(90)
The first statement of the theorem then follows from the facts that
$\|\tau\|_{2}\leq\|\tau\|_{p}\leq\alpha_{p}$ for $p\in(0,2]$,
$\|\tau\|_{2}\leq d^{1/2-1/q}\|\tau\|_{q}\leq d^{1/2-1/q}\alpha_{q}$ for
$q>2$, and
$\|\tau\|_{2}\leq\sqrt{d}\|\tau\|_{\infty}\leq\sqrt{d}\alpha_{\infty}$ for any
$\tau\in\mathbb{R}^{d}$. In particular, these imply that $A\leq CA_{p}$, where
$A_{p}=\begin{cases}\alpha_{p}\mathbb{1}_{\alpha_{p}<1}+\alpha_{p}^{2}\mathbb{1}_{\alpha_{p}\geq
1},&\text{if}\ p\in(0,2],\\\
d^{1/2-1/p}(\alpha_{p}\mathbb{1}_{\alpha_{p}<1}+\alpha_{p}^{2}\mathbb{1}_{\alpha_{p}\geq
1}),&\text{if}\ p\in(2,\infty),\\\
\sqrt{d}(\alpha_{p}\mathbb{1}_{\alpha_{p}<1}+\alpha_{p}^{2}\mathbb{1}_{\alpha_{p}\geq
1}),&\text{if}\ p=\infty,\end{cases}$ (91)
and
$C:=\frac{\sigma_{mult}^{2}}{\sigma_{add}^{2}+\sigma_{mult}^{2}\beta_{k}^{2}}\bigg{(}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}^{2}+2\|g_{k}(x)\|_{2}\left\|\int_{0}^{1}\nabla
g_{k}(x+t\tau)dt\right\|_{2}\bigg{)}.$ (92)
The last statement in the theorem essentially follows from Proposition 3 in
[52]. ∎
## Appendix E On Generalization Bounds for NFM
Let $\mathcal{F}$ be the family of mappings $x\mapsto f(x)$ and
$Z_{n}:=((x_{i},y_{i}))_{i\in[n]}$. Given a loss function $l$, the Rademacher
complexity of the set $l\circ\mathcal{F}:=\\{(x,y)\mapsto
l(f(x),y):f\in\mathcal{F}\\}$ is defined as:
$R_{n}(l\circ\mathcal{F}):=\mathbb{E}_{Z_{n},\sigma}\left[\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\sigma_{i}l(f(x_{i}),y_{i})\right],$
(93)
where $\sigma:=(\sigma_{1},\dots,\sigma_{n})$, with the $\sigma_{i}$
independent uniform random variables taking values in $\\{-1,1\\}$.
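For a finite hypothesis class standing in for $\mathcal{F}$ (an assumption made purely for illustration), the empirical counterpart of (93) can be estimated by Monte Carlo over the sign vectors, as in this sketch:

```python
import numpy as np

def empirical_rademacher(losses, n_draws=1000, seed=0):
    # losses[f, i] = l(f(x_i), y_i) for a finite set of models standing in for F;
    # estimates E_sigma[sup_f (1/n) sum_i sigma_i l(f(x_i), y_i)], conditional
    # on the fixed sample Z_n.
    rng = np.random.default_rng(seed)
    n = losses.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))  # Rademacher signs
    correlations = sigma @ losses.T / n                 # shape (n_draws, num_models)
    return correlations.max(axis=1).mean()

rng = np.random.default_rng(4)
losses = rng.random((50, 100))  # 50 hypothetical models, n = 100 samples
print(empirical_rademacher(losses))
```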
Following [41], we can derive the following generalization bound for the NFM
loss function, i.e., the upper bound on the difference between the expected
error on unseen data and the NFM loss. This bound shows that NFM can reduce
overfitting and give rise to improved generalization.
###### Theorem 6 (Generalization bound for the NFM loss).
Assume that the loss function $l$ satisfies $|l(x,y)-l(x^{\prime},y)|\leq M$
for all $x,x^{\prime}$ and $y$. Then, for every $\delta>0$, with probability
at least $1-\delta$ over a draw of $n$ i.i.d. samples
$\\{(x_{i},y_{i})\\}_{i=1}^{n}$, we have the following generalization bound:
for all maps $f\in\mathcal{F}$,
$\mathbb{E}_{x,y}[l(f(x),y)]-L_{n}^{NFM}\leq
2R_{n}(l\circ\mathcal{F})+2M\sqrt{\frac{\ln(1/\delta)}{2n}}-Q_{\epsilon}(f),$
(94)
where
$Q_{\epsilon}(f)=\mathbb{E}[\epsilon
R^{(k)}_{1}+\epsilon^{2}\tilde{R}^{(k)}_{2}+\epsilon^{2}\tilde{R}^{(k)}_{3}]+\epsilon^{2}\varphi(\epsilon),$
(95)
for some function $\varphi$ such that $\lim_{\epsilon\to 0}\varphi(\epsilon)=0$.
To compare the generalization behavior of NFM with that without using NFM, we
also need the following generalization bound for the standard loss function.
###### Theorem 7 (Generalization bound for the standard loss).
Assume that the loss function $l$ satisfies $|l(x,y)-l(x^{\prime},y)|\leq M$
for all $x,x^{\prime}$ and $y$. Then, for every $\delta>0$, with probability
at least $1-\delta$ over a draw of $n$ i.i.d. samples
$\\{(x_{i},y_{i})\\}_{i=1}^{n}$, we have the following generalization bound:
for all maps $f\in\mathcal{F}$,
$\mathbb{E}_{x,y}[l(f(x),y)]-L_{n}^{std}\leq
2R_{n}(l\circ\mathcal{F})+2M\sqrt{\frac{\ln(1/\delta)}{2n}}.$ (96)
By comparing the above two theorems and following the argument of [41], we see
that the generalization benefit of NFM comes from two mechanisms. The first
mechanism is based on the term $Q_{\epsilon}(f)$. If the Rademacher
complexity term is the same for both methods, then NFM has a better
generalization bound than the standard method whenever $Q_{\epsilon}(f)>0$. The
second mechanism is based on the Rademacher complexity term
$R_{n}(l\circ\mathcal{F})$. For certain families of neural networks, this term
can be bounded by the norms of the hidden layers of the network and the norms
of the Jacobians of each layer with respect to all previous layers [69, 70].
Therefore, this term differs for the case of training using NFM and the case
of standard training. Since NFM implicitly reduces the feature-output
Jacobians (see Theorem 3), we can argue that NFM leads to a smaller Rademacher
complexity term and hence a better generalization bound.
We now prove Theorem 6. The proof of Theorem 7 follows the same argument as
that of Theorem 6.
###### Proof of Theorem 6.
Let $Z_{n}:=\\{(x_{i},y_{i})\\}_{i\in[n]}$ and
$Z^{\prime}_{n}:=\\{(x^{\prime}_{i},y^{\prime}_{i})\\}_{i\in[n]}$ be two
datasets, where $Z_{n}^{\prime}$ differs from $Z_{n}$ in exactly one point, at
an arbitrary index $i_{0}$.
Denote
$GE(Z_{n}):=\sup_{f\in\mathcal{F}}\mathbb{E}_{x,y}[l(f(x),y)]-L_{n}^{NFM}$,
where $L_{n}^{NFM}$ is computed using the dataset $Z_{n}$, and likewise for
$GE(Z_{n}^{\prime})$. Then,
$GE(Z_{n}^{\prime})-GE(Z_{n})\leq\frac{M(2n-1)}{n^{2}}\leq\frac{2M}{n},$ (97)
where we have used the fact that $L_{n}^{NFM}$ has $n^{2}$ terms and there are
$2n-1$ different terms for $Z_{n}$ and $Z_{n}^{\prime}.$ Similarly, we have
$GE(Z_{n})-GE(Z_{n}^{\prime})\leq\frac{2M}{n}$.
Therefore, by McDiarmid’s inequality, for any $\delta>0$, with probability at
least $1-\delta$,
$GE(Z_{n})\leq\mathbb{E}_{Z_{n}}[GE(Z_{n})]+2M\sqrt{\frac{\ln(1/\delta)}{2n}}.$
(98)
Applying Theorem 3, we have
$\displaystyle GE(Z_{n})$
$\displaystyle\leq\mathbb{E}_{Z_{n}}\left[\sup_{f\in\mathcal{F}}\mathbb{E}_{Z_{n}^{\prime}}\left[\frac{1}{n}\sum_{i=1}^{n}l(f(x_{i}^{\prime}),y_{i}^{\prime})\right]-L_{n}^{NFM}\right]+2M\sqrt{\frac{\ln(1/\delta)}{2n}}$
(99)
$\displaystyle=\mathbb{E}_{Z_{n}}\left[\sup_{f\in\mathcal{F}}\mathbb{E}_{Z_{n}^{\prime}}\left[\frac{1}{n}\sum_{i=1}^{n}l(f(x_{i}^{\prime}),y_{i}^{\prime})\right]-\frac{1}{n}\sum_{i=1}^{n}l(f(x_{i}),y_{i})\right]-Q_{\epsilon}(f)$
$\displaystyle\ \ \ \ +2M\sqrt{\frac{\ln(1/\delta)}{2n}}$ (100)
$\displaystyle\leq\mathbb{E}_{Z_{n},Z_{n}^{\prime}}\left[\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}(l(f(x_{i}^{\prime}),y_{i}^{\prime})-l(f(x_{i}),y_{i}))\right]-Q_{\epsilon}(f)+2M\sqrt{\frac{\ln(1/\delta)}{2n}}$
(101)
$\displaystyle\leq\mathbb{E}_{Z_{n},Z_{n}^{\prime},\sigma}\left[\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\sigma_{i}(l(f(x_{i}^{\prime}),y_{i}^{\prime})-l(f(x_{i}),y_{i}))\right]-Q_{\epsilon}(f)+2M\sqrt{\frac{\ln(1/\delta)}{2n}}$
(102) $\displaystyle\leq
2\mathbb{E}_{Z_{n},\sigma}\left[\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\sigma_{i}l(f(x_{i}),y_{i})\right]-Q_{\epsilon}(f)+2M\sqrt{\frac{\ln(1/\delta)}{2n}}$
(103)
$\displaystyle=2R_{n}(l\circ\mathcal{F})-Q_{\epsilon}(f)+2M\sqrt{\frac{\ln(1/\delta)}{2n}},$
(104)
where (99) uses the definition of $GE(Z_{n})$, (100) uses
$\pm\frac{1}{n}\sum_{i=1}^{n}l(f(x_{i}),y_{i})$ inside the expectation and the
linearity of expectation, (101) follows from Jensen’s inequality and the
convexity of the supremum, (102) follows from the fact that
$\sigma_{i}(l(f(x_{i}^{\prime}),y_{i}^{\prime})-l(f(x_{i}),y_{i}))$ and
$l(f(x_{i}^{\prime}),y_{i}^{\prime})-l(f(x_{i}),y_{i})$ have the same
distribution for each $\sigma_{i}\in\\{-1,1\\}$ (since $Z_{n},Z_{n}^{\prime}$
are drawn i.i.d. with the same distribution), and (103) follows from the
subadditivity of the supremum.
The bound in the theorem then follows from the above bound. ∎
## Appendix F Additional Experiments and Details
### F.1 Input Perturbations
We consider the following three types of data perturbations during inference
time:
* •
_White noise perturbations_ are constructed as $\tilde{x}=x+\Delta x$, where
the additive noise is sampled from a Gaussian distribution $\Delta
x\sim\mathcal{N}(0,\sigma)$. This perturbation strategy emulates measurement
errors that can result from data acquisition with poor sensors (where $\sigma$
corresponds to the severity of these errors); this and the salt and pepper
model below are sketched in code after this list.
* •
_Salt and pepper perturbations_ emulate defective pixels that result from
converting analog signals to digital signals. The noise model takes the form
$\mathbb{P}(\tilde{X}=X)=1-\gamma$, and
$\mathbb{P}(\tilde{X}=\max)=\mathbb{P}(\tilde{X}=\min)=\gamma/2,$ where
$\tilde{X}(i,j)$ denotes the corrupted image and $\min$, $\max$ denote the
minimum and maximum pixel values, respectively. $\gamma$ parameterizes the
proportion of defective pixels.
* •
_Adversarial perturbations_ are “worst-case” non-random perturbations that
maximize the loss $\ell(g^{\delta}(X+\Delta X),y)$ subject to the constraint
$\|\Delta X\|\leq r$ on the norm of the perturbation. We consider
projected gradient descent for constructing these perturbations [44].
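The first two noise models can be implemented in a few lines; the sketch below assumes float images with minimum and maximum pixel values of 0 and 1, which is an assumption of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def white_noise(x, sigma):
    # Additive Gaussian noise; x is assumed to be a float image array.
    return x + sigma * rng.standard_normal(x.shape)

def salt_and_pepper(x, gamma, lo=0.0, hi=1.0):
    # Each pixel is independently set to the minimum (pepper) or maximum (salt)
    # value with probability gamma/2 each, and left unchanged otherwise.
    u = rng.random(x.shape)
    out = x.copy()
    out[u < gamma / 2] = lo
    out[(u >= gamma / 2) & (u < gamma)] = hi
    return out

img = rng.random((32, 32, 3))  # hypothetical image with pixel values in [0, 1]
noisy = white_noise(img, sigma=0.1)
corrupted = salt_and_pepper(img, gamma=0.12)
```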
### F.2 Illustration of the Effects of NFM on Toy Datasets
We consider a binary classification task for the noise corrupted 2D dataset
whose data points form two concentric circles. Points on the same circle
correspond to the same label class. We generate 500 samples, setting the
scale factor between inner and outer circle to be 0.05 and adding Gaussian
noise with zero mean and standard deviation of 0.3 to the samples. Fig. 7
shows the training and test data points. We train a fully connected
feedforward neural network that has four layers with the ReLU activation
functions on these data, using 300 points for training and 200 for testing.
All models are trained with Adam and learning rate $0.1$, and the seed is
fixed across all experiments. Note that the learning rate can be considered as
a temperature parameter which introduces some amount of regularization itself.
Hence, we choose a learning rate that is large for this problem to better
illustrate the regularization effects imposed by the different schemes that we
consider.
Fig. 2 illustrates how different regularization strategies affect the decision
boundaries of the neural network classifier. The decision boundaries and the
test accuracy indicate that white noise injections and dropout (we explore
dropout rates in the range $[0.0,0.9]$ and find that $0.2$ yields the best
performance) introduce a favorable amount of regularization. Most notable is
the effect of weight decay (we use $9\times 10^{-3}$): the decision boundary is
nicely smoothed and the test accuracy is improved. In contrast, the simple
mixup data augmentation scheme shows no benefits here, whereas manifold mixup
improves the predictive accuracy considerably. Combining mixup (manifold
mixup) with noise injections yields the best performance in terms of both
smoothness of the decision boundary and predictive accuracy. Indeed, NFM
outperforms all other methods here.
The performance could be further improved by combining NFM with weight decay
or dropout. This shows that there are interaction effects between different
regularization schemes. In practice, when one trains deep neural networks,
different regularization strategies are considered as knobs that are fine-
tuned. From this perspective, NFM provides additional knobs to further improve
a model.
Figure 7: The toy dataset in $\mathbb{R}^{2}$ that we use for binary
classification. (a) Data points for training. (b) Data points for testing.
### F.3 Additional Results for Vision Transformers
Table 4 shows results for vision transformers trained with different data
augmentation schemes and different values of $\alpha$. It can be seen that NFM
with $\alpha=0.1$ helps to improve the predictive accuracy on clean data while
also improving the robustness of the models. For example, the model trained
with NFM shows about a $25\%$ improvement compared to the baseline model when
faced with salt and pepper perturbations ($\gamma=0.2$). Further, our results
indicate that larger values of $\alpha$ have a negative effect on the
generalization performance of the vision transformer.
Table 4: Robustness of Wide-ResNet-18 w.r.t. white noise ($\sigma$) and salt
and pepper ($\gamma$) perturbations evaluated on CIFAR-100. The results are
averaged over 5 models trained with different seed values.
Scheme | Clean (%) | $\sigma=0.1$ | $\sigma=0.2$ | $\sigma=0.3$ | $\gamma=0.08$ | $\gamma=0.12$ | $\gamma=0.2$
---|---|---|---|---|---|---|---
Baseline | 91.3 | 89.4 | 77.0 | 56.7 | 83.2 | 74.6 | 48.6
Mixup ($\alpha=0.1$) [79] | 91.2 | 89.5 | 77.6 | 57.7 | 82.9 | 74.6 | 48.6
Mixup ($\alpha=0.2$) [79] | 91.2 | 89.2 | 77.8 | 58.9 | 82.6 | 74.5 | 47.9
Noisy Mixup ($\alpha=0.1$) [74] | 90.9 | 90.4 | 87.5 | 80.2 | 84.0 | 79.4 | 63.8
Noisy Mixup ($\alpha=0.2$) [74] | 90.9 | 90.4 | 87.4 | 79.8 | 83.8 | 79.3 | 63.4
Manifold Mixup ($\alpha=0.1$) [68] | 91.2 | 89.2 | 77.2 | 56.9 | 83.0 | 74.3 | 47.1
Manifold Mixup ($\alpha=1.0$) [68] | 90.2 | 88.4 | 76.0 | 55.1 | 81.3 | 71.4 | 42.7
Manifold Mixup ($\alpha=2.0$) [68] | 89.0 | 87.0 | 74.3 | 53.7 | 79.8 | 70.3 | 41.9
Noisy Feature Mixup ($\alpha=0.1$) | 91.4 | 90.2 | 88.2 | 84.8 | 84.4 | 81.2 | 74.4
Noisy Feature Mixup ($\alpha=1.0$) | 89.8 | 89.1 | 86.6 | 82.7 | 82.5 | 79.0 | 71.4
Noisy Feature Mixup ($\alpha=2.0$) | 88.4 | 87.6 | 84.6 | 80.1 | 80.4 | 76.5 | 68.6
### F.4 Additional Results for ResNets with Higher Levels of Noise Injections
In the experiments in Section 5, we considered models trained with NFM that
use noise injection levels $\sigma_{add}=0.4$ and $\sigma_{mult}=0.2$, whereas
the ablation model uses $\sigma_{add}=1.0$ and $\sigma_{mult}=0.5$. Here, we
want to better illustrate the trade-off between accuracy and robustness. We
saw that there exists a potential sweet-spot where we are able to improve both
the predictive accuracy and the robustness of the model. However, if the
primary aim is to push the robustness of the model, then we need to sacrifice
some amount of accuracy.
Fig. 8 illustrates this trade-off for pre-activated ResNet-18s trained on
CIFAR-10. We can see that increased levels of noise injections considerably
improve the robustness, while the accuracy on clean data points drops. In
practice, the amount of noise injection that the user chooses depends on the
situation. If robustness is critical, then higher noise levels can be used. If
adversarial examples are the main concern, then other training strategies such
as adversarial training might be favorable. However, the advantage of NFM over
adversarial training is that (a) we have a more favorable trade-off between
robustness and accuracy in the small noise regime, and (b) NFM is
computationally inexpensive compared to most adversarial training
schemes. This is further illustrated in the next section.
Figure 8: Pre-activated ResNet-18 evaluated on CIFAR-10 trained with NFM and
varying levels of additive ($\sigma_{add}$) and multiplicative
($\sigma_{mult}$) noise injections; test accuracy is plotted against white
noise ($\sigma$, left) and salt and pepper noise ($\gamma$, right). Shaded
regions indicate one standard deviation about the mean. Averaged across 5
random seeds.
### F.5 Comparison with Adversarial Trained Models
Here, we compare NFM to adversarial training in the small noise regime, i.e.,
the situation where models do not show a significant drop on the clean test
set. Specifically, we consider the projected gradient descent (PGD) method [44]
using $7$ attack iterations and varying $l_{2}$ perturbation levels $\epsilon$
to train adversarial robust models. First, we compare how resilient the
different models are with respect to adversarial input perturbations during
inference time (Fig. 9; left). Again the adversarial examples are constructed
using the PGD method with $7$ attack iterations. Not very surprisingly, the
adversarial trained model with $\epsilon=0.01$ features the best resilience
while sacrificing about $0.5\%$ accuracy as compared to the baseline model
(here not shown). In contrast, the models trained with NFM are less robust,
while being about $1-1.5\%$ more accurate on clean data.
Next, we compare (Fig. 9; right) the robustness with respect to salt and
pepper perturbations, i.e., perturbations that neither model has seen before.
Interestingly, here we see an advantage of the NFM scheme with high
noise injection levels as compared to the adversarially trained models.
Figure 9: Pre-activated ResNet-18 evaluated on CIFAR-10 (left) and Wide
ResNet-18 evaluated on CIFAR-100 (right) with respect to adversarially
perturbed inputs; test accuracy is plotted against the adversarial noise
($\epsilon$). Shaded regions indicate one standard deviation about the mean.
Averaged across 5 random seeds.
### F.6 Feature Visualization Comparison
In this subsection, we compare the features learned
by three ResNet-50 models trained on Restricted ImageNet [65]: without mixup,
with manifold mixup [68], and with NFM. We can compare features by maximizing randomly
chosen pre-logit activations of each model with respect to the input, as
described by [19]. We do so for all models with Projected Gradient Ascent over
200 iterations, a step size of 16, and an $\ell_{2}$ norm constraint of 2,000.
Both the models trained with manifold mixup and NFM use $\alpha=0.2$, and
the NFM model additionally uses $\sigma_{add}=2.4$ and $\sigma_{mult}=1.2$. The
result, as shown in Fig. 10, is that the features learned by the model trained
with NFM are slightly stronger (i.e., different from random noise) than those
of the clean model.
Figure 10: The features learned by the NFM classifier are slightly stronger
(i.e., different from random noise) than those of the clean model. See
Subsection F.6 for more details.
# Congestion Analysis for the DARPA OFFSET CCAST Swarm
Robert Brown
Collaborative Robotics and Intelligent
Systems Institute
Oregon State University
Corvallis OR 97331, USA
<EMAIL_ADDRESS>
Julie A. Adams
Collaborative Robotics and Intelligent
Systems Institute
Oregon State University
Corvallis OR 97331, USA
<EMAIL_ADDRESS>
###### Abstract
The Defense Advanced Research Projects Agency’s (DARPA) OFFensive Swarm-Enabled
Tactics program’s goal of launching 250 unmanned aerial and ground vehicles
from a limited-sized launch zone was a daunting challenge. The swarm’s aerial
vehicles were primarily multi-rotor platforms, which can efficiently be
launched en masse. Each field exercise expected the deployment of an even
larger swarm. While the launch zone’s spatial area increased with each field
exercise, the relative space for each vehicle did not necessarily increase,
considering the increasing size of the swarm and the vehicles’ associated GPS
error. However, safe mission deployment and execution were expected. At the
same time, achieving the mission goals required maximizing the efficiency of
the swarm’s performance by reducing congestion that blocked vehicles from
completing tactic assignments. Congestion analysis conducted before the final
field exercise focused on adjusting various constraints to optimize the
swarm’s deployment without reducing safety. During the field exercise, data
was collected that permitted analyzing how the number and duration of
individual vehicle blockages impacted the resulting congestion. After the field
exercise, additional analyses used the mission plan to validate the use of
simulation for analyzing congestion.
## 1 Introduction
The Defense Advanced Research Projects Agency (DARPA) OFFensive Swarm-Enabled
Tactics (OFFSET) program was designed to enable a very large heterogeneous
swarm of unmanned air and ground vehicles in complex urban environments
[DARPA, nd]. As swarm size increased, DARPA intentionally limited the launch
zone size and allotted deployment time in order to “encourage” the teams to
address swarm deployment logistics challenges. The OFFSET program’s Command
and Control of Aggregate Swarm Tactics (CCAST) team’s swarm architecture was
designed to enable a single operator to deploy and monitor a swarm of up to
250 unmanned vehicles for diverse missions [Clark et al., 2021].
Over the course of the OFFSET program, the swarm size increased as the field
exercises occurred at differing Department of Defense Combined Armed
Collective Training Facilities (CACTF). Each CACTF presented different
challenges when deploying a hardware swarm composed of heterogeneous ground
and multi-rotor aerial vehicles. The CACTF’s size and shape as well as its
structures (e.g., buildings, light poles, power lines, street signs, and
curbs), the designated launch/landing zone size, along with the swarm’s size
and composition influenced the distribution of vehicles and increased the
likelihood of launch, en-route, and landing conflicts amongst the vehicles, in
other words, congestion. The challenge was determining how to deploy the swarm
effectively while minimizing the congestion and navigation path planning
conflicts. More specifically, these conflicts occurred when a large number of
vehicles that rely on GPS localization, with a large localization error,
deployed from and returned to a small area, the launch zone. This congestion can
negatively impact the swarm’s performance, delaying or interrupting mission
plans as well as causing vehicles, particularly aerial vehicles, to deplete
their batteries and shorten their deployable mission time.
The CCAST team intended to deliver a hardware swarm of at least 250 vehicles
for the final exercise; however, due to uncontrollable circumstances, CCAST’s
entire fleet consisted of 183 vehicles: 44 ground robots (UGVs), and 139
multi-rotor aerial vehicles (UAVs). The heterogeneous swarm consisted of
relatively small, low cost (e.g., the most expensive being $3,900)
commercially available platforms, see Table 1, some of which were augmented
with necessary sensors and processing capabilities. While very capable, these
vehicles have limitations compared to larger, more expensive robots, but the
trade-off was to use inexpensive platforms in order to scale the swarm’s size.
Table 1: CCAST’s Robot Platforms, including cost, size, and number of each.
Platform | Aion Robotics R1 | 3DR Solo | Uvify Ifo-S | Modal AI VOXL M500 | Modal AI Micro-Seeker
---|---|---|---|---|---
Cost | ~$3,600 | ~$750 | ~$3,900 | ~$2,300 | ~$2,700
Size | 42.6cm x 48.6cm | 25.9cm x 18.8cm | 27.5cm x 27.5cm | 39.37cm x 39.37cm | 19.1cm x 19.1cm
Count | 44 | 40 | 21 | 69 | 9
DARPA intentionally limited the launch zone size, challenging the ability to
deploy all vehicles simultaneously. While the UGVs can detect and avoid UAVs
positioned in the launch zone, it is not desirable for the UGVs to navigate
through the UAVs. All vehicles self-localize via GPS, but the vehicles’
smaller size, relative to up to a 5 meter (m) GPS error, resulted in an
operational procedure to maintain a 5m distance between all vehicles within
the launch zone. Further, vehicles may be blocked, or unable to plan a
traversable navigation path, by other vehicles, either on the ground or in the
air. Blocked UAVs are required to hover while replanning, which consumes more
power and reduces their deployable time. Finally, the CACTF’s built
environment creates obstacles and choke points that can introduce vehicle
congestion. These blockages, or congestion, can occur either en route, near
task goals (e.g., approaching a building to surveil it), or over the launch
zone when taking off or returning to launch (RTL). Given these constraints,
the objective is to determine how to optimize deploying larger heterogeneous
swarms within the constrained launch area in order to achieve the field
exercise’s mission priorities. Achieving this objective requires investigating
launch zone vehicle configurations, safety protocol variations (e.g., reducing
the “safe” distance between platforms), and mission plan modifications.
Two CACTFs were analyzed. Joint Base Lewis-McChord’s Leschi Town, shown in
Figure 1(a), the location of Field Exercise (FX) 4 was initially considered
for FX6. Ultimately, Fort Campbell’s Cassidy CACTF (note: Field Exercise 5
was canceled), shown in Figure 1(b), was chosen as the FX6 site. This
manuscript presents analyses of actual and simulated missions, with the
simulated mission results serving as a baseline for the FX6’s actual results.
Potential congestion has a more significant impact on UAVs’ flight times and
their contributions to achieving the mission scenario objectives; thus, the
CCAST swarm’s UAVs are the primary focus of this analysis.
(a) Joint Base Lewis McChord CACTF.
(b) Fort Campbell CACTF.
Figure 1: The field exercise CACTFs, not to scale.
An overview of the CCAST swarm system is provided, followed by a review of the
relevant congestion mitigation literature. The congestion analysis results for
both CACTFs prior to FX6 are provided. A follow-up analysis, after FX6,
compares congestion that occurred during FX6 to that derived from
simulation trials using the same mission plans. CCAST’s multi-fidelity swarm
simulator was used to generate the results. The experimental methodology for
each analysis is provided, along with the corresponding results. Discussions
provide insights into the impact of congestion on mission progress and
mitigation approaches.
## 2 CCAST Swarm System Architecture
The CCAST swarm architecture has four primary components: the vehicles, a
mission planner, the swarm commander interface (I3), and the swarm dispatcher
[Clark et al., 2021]. The mission planner is used prior to mission deployment
and permits composing tactics into a mission plan. The tactics may require
vehicles with specific capabilities or payloads and can incorporate inter-
tactic ordering dependencies.
The heterogeneous CCAST swarm consists of varying numbers of commercially
available vehicles (i.e., 3DR Solos [3DR, nd], Uvify Ifo-Ss [Uvify, nd], Modal
AI VOXL m500s [ModalAI, ndb], Modal AI Micro-Seekers [ModalAI, nda], and Aion
R1/R6 UGVs [Aion, nd]) with varying sensing and computational processing
capabilities, as detailed in Table 1. The CCAST architecture knows each
vehicle’s sensor and processing capabilities (e.g., Uvify Ifo-S’s payloads
permit indoor flight, 3DR Solos’ payloads do not). The exact mission
deployment hardware vehicle distribution depends on various factors, including
each field exercise’s available vehicles (e.g., the Modal AI UAVs were not
part of the CCAST swarm at FX4), the mission plan, environmental conditions,
etc. The FX6 CCAST swarm hardware composition is provided in the table;
however, deployed swarm compositions varied during the FX6 shifts.
Communication between I3, the swarm dispatcher, and the vehicles occurs
over an LTE network, using a publish/subscribe protocol. Each vehicle’s LTE
modem allows it to communicate with the LTE basestation. The vehicles
communicate telemetry data to the dispatcher, which relays that information to
other vehicles and I3. This communication architecture relies on the vehicles’
having direct line-of-sight with the LTE basestation in order to communicate
data packets. The nature of the FXs’ built CACTF environment necessitates the
CCAST system’s ability to be resilient to vehicles being unable to communicate
with the rest of the system. This situation can occur as vehicles move
throughout the dense urban environment, and buildings or trees block a
vehicle’s line-of-sight to the LTE basestation.
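To make the relay pattern concrete, the following minimal Python sketch
illustrates a dispatcher-centered publish/subscribe exchange; in-process
callbacks stand in for the LTE transport, and all class, method, and topic
names are hypothetical illustrations rather than the CCAST API.

```python
from collections import defaultdict

class Dispatcher:
    """Relays messages published on a topic to every subscriber (e.g., I3, other vehicles)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, sender_id, message):
        # Relay the message to every callback registered on the topic.
        for callback in self._subscribers[topic]:
            callback(sender_id, message)

# Usage: I3 subscribes to telemetry; a UAV publishes its state through the dispatcher.
dispatcher = Dispatcher()
dispatcher.subscribe("telemetry", lambda vid, msg: print(f"I3 sees {vid}: {msg}"))
dispatcher.publish("telemetry", "uav_07", {"lat": 36.67, "lon": -87.48, "battery": 0.82})
```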
Prior to mission deployments, the CCAST Swarm Tactic Operations Mission
Planner is used to prepare a relevant mission plan that seeks to achieve the
mission objectives. The resulting plan can be evaluated and refined using
virtual vehicles available in CCAST’s multi-resolution swarm simulation. After
the vehicles are staged in the FX’s launch zone and powered on, the mission
plan is instantiated, binding available vehicles on the LTE network to mission
relevant roles or groups. When the mission plan tactics are instantiated, they
are assigned to the appropriate vehicles that are spatially closest to each
tactic’s goal location.
The resulting mission plan is composed of relevant tactics from the CCAST
Swarm Tactics Exchange extensible library. The mission plan may include phases
that group tactics in order to achieve important mission goals (e.g., Phase I:
information, surveillance, and reconnaissance, Phase II: Act on gathered
intelligence to locate a verified hostile, Phase III: neutralize the verified
hostile). This library incorporates tactics for surveilling structures or
areas of interest, flocking, agent following, exploring building interiors,
etc. The swarm vehicles are assigned tactics either as individuals or as a
team. The vehicles can automatically swap in order to continue tactics when
vehicle battery levels become too low [Diehl and Adams, 2022]. Once a tactic
is assigned, the vehicles conduct on-board real-time navigation planning using
extensions to the real-time, rapidly exploring random tree star (RT-RRT*)
algorithm [Naderi et al., 2015]. The RT-RRT* algorithm incorporates randomness
when searching for potential paths, resulting in vehicles identifying
different paths to achieve the swarm’s mission objectives.
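The following simplified sketch illustrates the random-tree planning idea; it
is a plain RRT-style planner, not the CCAST RT-RRT* implementation, and the
sampling area, circular obstacle model, step size, and goal tolerance are all
assumptions made for the sketch.

```python
import math
import random

def rrt_plan(start, goal, obstacles, step=5.0, max_iters=2000, goal_tol=5.0):
    """Grow a random tree from start; return a path to goal, or None if blocked."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Random sampling (with a small goal bias) drives different runs toward different paths.
        sample = goal if random.random() < 0.1 else (random.uniform(0, 400), random.uniform(0, 400))
        nearest = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[nearest]
        theta = math.atan2(sample[1] - ny, sample[0] - nx)
        new = (nx + step * math.cos(theta), ny + step * math.sin(theta))
        # Reject steps entering an obstacle radius (e.g., another vehicle's safety zone).
        if any(math.dist(new, (ox, oy)) < r for ox, oy, r in obstacles):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = nearest
        if math.dist(new, goal) < goal_tol:
            path, i = [], len(nodes) - 1
            while i is not None:  # walk back up the tree to recover the path
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None  # no navigable path found within the iteration budget
```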
The CCAST multi-resolution swarm simulator extends Microsoft Research’s AirSim
[Shah et al., 2018] and facilitates rapid system development, pre-FX (e.g.,
congestion testing), and pre-mission (e.g., mission planning) analysis. The
CCAST extensions permit both larger swarm scales and simultaneous
live/virtual vehicle deployments. This simulator was leveraged to generate the
reported congestion evaluation results.
The CCAST team is assigned designated shifts for the FX swarm deployments.
Early FX shifts are dedicated to shorter (i.e., 1.5 - 2 hours) system
integration and dry runs, while later longer shifts (i.e., 2 - 3.5 hours)
focus on “playing” the mission scenario. Once the CCAST hardware vehicles are
positioned in the launch zone, the remaining system components are activated,
the pre-mission brief is conducted, and the vehicles are powered on, a
shift’s mission deployment can begin.
The CCAST swarm is deployed and managed by a single human, the swarm
commander, via a 3-D virtual reality-based interface (I3). At shift start, the
swarm commander loads the mission plan and either executes the entire mission
plan, or portions of (i.e., signals within) a multi-phase mission plan. The
swarm commander can also generate tactic assignments that are explicitly or
implicitly assigned to vehicles. The mission plan components and the swarm
commander’s generated tactics are communicated to the swarm dispatcher, which
takes the necessary actions to communicate the tactics to the relevant
vehicles. The swarm dispatcher coordinates inter-vehicle communication and
relays vehicle telemetry to I3.
If the swarm commander has not explicitly identified the vehicles to execute a
tactic, the dispatcher automatically selects them from the available
unassigned vehicles with the necessary capabilities that are spatially located
closest to the tactic’s goal location (e.g., the building to be surveilled).
The allocated vehicles individually plan navigation paths and, when found,
execute those paths. This navigation planning can fail for multiple reasons,
such as a vehicle’s path being blocked by another vehicle or the designated
target position being unreachable. Using a CCAST generated 3D terrain
elevation model that includes known structures and obstacles, the dispatcher
is expected only to assign tactic goal execution points the vehicles can
reach; however, congestion will occur when a vehicle is unable to plan a
navigation path due to being blocked by one or more vehicles, structures, or
obstacles.
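A hedged sketch of this allocation logic follows; the vehicle record fields
are hypothetical, and the real dispatcher also consults the terrain elevation
model before assigning goal execution points.

```python
import math

def allocate(vehicles, goal, capability, count):
    """Return up to `count` unassigned vehicles with `capability`, nearest to `goal` first."""
    eligible = [v for v in vehicles
                if not v["assigned"] and capability in v["capabilities"]]
    eligible.sort(key=lambda v: math.dist(v["position"], goal))
    chosen = eligible[:count]
    for v in chosen:
        v["assigned"] = True  # each chosen vehicle then plans its own navigation path
    return chosen

# e.g., a Building surveil needs four forward-facing and one downward-facing camera UAV:
# sides = allocate(swarm, building_xy, "camera_forward", 4)
# roof  = allocate(swarm, building_xy, "camera_down", 1)
```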
Figure 2: A Building surveillance task, during which a UAV detects the
artifact. Photo courtesy of DARPA.
While the CCAST swarm is capable of a wide variety of offensive, defensive,
and surveillance tactics, the primary focus for the congestion analysis is a
prevalent mission tactic, the Building surveillance tactic. The Building
surveillance (surveil) tactic is central to gathering mission relevant
intelligence. This tactic requires four UAVs with forward-facing cameras to
investigate the sides of a building, and a fifth UAV with a downward-facing
camera to investigate the same building’s roof. The Phase I mission plan often
includes large numbers of Building surveil tactics that are “fired off”
simultaneously at shift start, which can generate substantial vehicle movement
and resulting congestion. The Building surveil tactic execution begins with
UAVs launching and ascending to a randomly assigned altitude, which was
between 25m and 50m above ground level (AGL) during FX6. Simultaneously with
these actions, each UAV begins planning a navigation path, and once a
deconflicted navigable path is identified, the UAV begins moving toward the
assigned building. While en route, the UAVs typically fly at an altitude
safely above buildings and treetops, but during the surveillance execution,
the UAVs descend to complete the task, as shown for one UAV in Figure 2. DARPA
places April Tags [Olson, 2011], representing different mission relevant
artifacts (e.g., building ingress points, improvised explosive devices,
locations of mission relevant information or high value targets) on horizontal
and vertical surfaces throughout the CACTF, including inside and outside
buildings. The Building surveil tactics’ UAVs with forward-facing cameras must
descend into the built environment, often within a few meters of the ground
(e.g., 5m AGL), in order to sense any available mission artifacts located on
the side of the building, as shown in Figure 2. Successful UAVs complete their
Building surveil tactic and automatically return to the location from which
they launched in the designated launch zone. However, many hazard (e.g.,
improvised explosive devices) and adversarial (e.g., verified hostiles)
artifacts exist that can neutralize the CCAST swarm vehicles, rendering them
unable to continue executing the assigned tactic. Neutralized UAVs
automatically return to the launch zone and land, later to be revived by a
mobile medic. Further, battery consumption shortens each UAV’s deployment
time.
Vehicles can be assigned to complete multiple consecutive tactics, such as
multiple Building surveils in a particular region of the CACTF, rather than
automatically returning to the launch zone upon completion of a single tactic.
This approach speeds intelligence gathering, can potentially reduce
congestion, and makes better use of the UAVs’ limited battery power while
working towards the mission objectives. A key
factor is managing power usage. While not an issue for UGVs, whose batteries
provided sufficient power for even the longest FX shifts, UAVs’ batteries only
support 10-20 minute flights. As a safety precaution, the UAVs are programmed
to automatically RTL when their available battery level is reduced to the
Battery RTL level. Tactics that require the UAVs to hover, such as Building
surveils, consume more power than en-route flight maneuvers, and can result in
more frequent Battery RTL tactics and increased congestion over the launch
zone. (UAV batteries are manually swapped in the launch zone by CCAST
personnel in order to support continued mission progress.)
The CCAST team manages hundreds of batteries that can vary in age and usage,
as well as by vehicle platform type; thus, the CCAST team makes no attempt to
develop individual battery specific consumption models for any vehicle type.
However, the CCAST multi-resolution simulator needs to integrate battery
consumption models. The simulator incorporates a configurable battery life
that uses a normal distribution to assign virtual vehicles a battery power
level upon deployment. These useful battery durations are specified by virtual
vehicle type (e.g., Solos: 22 minutes, M500s: 30 minutes). The virtual
vehicles’ battery consumption is based on a linear battery model. While the
simulator’s battery consumption models differ from the hardware vehicles’
usage, especially for UAVs, they provide a sufficient proxy to support the
presented congestion analysis.
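A minimal sketch of this battery handling is given below; the per-type mean
durations come from the text, while the standard deviation and the RTL check
are assumptions made for the sketch.

```python
import random

MEAN_MINUTES = {"solo": 22.0, "m500": 30.0}  # per-type useful durations from the text

def initial_battery_minutes(vehicle_type, stddev=2.0):
    """Draw a deployment battery duration (minutes) from a normal distribution."""
    return max(0.0, random.gauss(MEAN_MINUTES[vehicle_type], stddev))

def remaining_fraction(elapsed_min, duration_min):
    """Linear drain: fraction of battery left after `elapsed_min` minutes."""
    return max(0.0, 1.0 - elapsed_min / duration_min)

# e.g., trigger the Battery RTL tactic once the fraction drops below a threshold:
# if remaining_fraction(t, dur) < BATTERY_RTL_THRESHOLD: rtl(uav)
```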
## 3 Background
The possibility of using robot swarms to conduct complex missions in built
environments has become increasingly relevant [Davis et al., 2018, Chung et
al., 2018, Skorobogatov et al., 2019]. Recurring swarm related challenges
include congestion management, GPS error, detection and avoidance of other
vehicles, and difficulties in managing large-scale UAV takeoffs and landings.
Early decentralized congestion management research focused on leveraging road
networks and operating a network of automobiles without intersection signals.
Methods ranging from using rule sets [Grossman, 1988] to automobiles driving
in patterns [Ikemoto et al., 2004] were evaluated. More recent efforts used
auction based techniques to manage intersection crossings [Carlino et al.,
2013]. Each of these methods mitigated intersection congestion and collisions
successfully, but was specifically tailored to roadway environments, where the
vehicles drive in designated lanes and follow common intersection crossing
standards. While the OFFSET FX CACTFs have road networks that CCAST leverages
for UGV navigation, the UGVs are much smaller than automobiles. Two issues
arise. First, the GPS localization error, up to 5m, is large relative to the
CCAST UGVs’ size, whereas automobiles using GPS generally do not encounter
significant localization errors relative to their size. Second, optimizing
the mission execution
seeks to deploy multiple UGVs on the road network simultaneously, such as
multiple UGVs navigating as a group to a goal location. This type of UGV
deployment is not required, nor does it need to follow the roadway usage rules
applied to automobiles. The OFFSET missions are intended to present a dynamic
environment in which the vehicles are not constrained to road networks, with
target locations being potentially anywhere in the CACTF, including in fields,
open areas between buildings, and even inside buildings. As well, the CCAST
UAVs do not have to follow the roadways at all, but do have to navigate while
avoiding other UAVs, structures, and obstacles in the CACTF environment. As a
result, these road network based methods are unsuitable proxies for analyzing
OFFSET relevant congestion scenarios, particularly when focused on UAVs.
Swarms deployed with unconstrained areas of operation can also encounter
congestion [Lerman and Galstyan, 2002]. This effort demonstrated that overall
swarm performance increased with the swarm’s size, but individual vehicle
performance degraded as the total number of deployed vehicles increased. As
the size increased further, overall performance diminished and eventually
resulted in negative returns. These results suggest that as the
CCAST swarm size increases, the vehicles are expected to increasingly
interfere with each other, resulting in congestion reduction becoming an
increasingly relevant consideration.
Pareto optimal path planning was explored as a solution to the interference
problem [Inalhan et al., 2002, Ghrist et al., 2005], as were sub-optimal
centralized planning methods [Turpin et al., 2013, Saha et al., 2016], and
dynamic models that predicted and reacted to neighboring UAVs’ actions during
flight [Arul et al., 2019, Arul and Manocha, 2020]. A limitation of these
methods is the small number of vehicles (a few dozen) to which they were applied and
their inability to scale to the required OFFSET swarm size. Further, these
methods were designed for a homogeneous swarm performing a single task. The
OFFSET mission scenarios require a heterogeneous swarm simultaneously
executing a diverse set of tactics.
Centralized algorithms can provide organized UAV swarm landings [Dono, 2012,
Dono and Chung, 2013], or sequenced aerial holding pattern zones from which
UAVs’ landings are executed using a follow-the-leader approach [Nazarov et
al., 2021]. While the CCAST vehicles report their telemetry to a centralized
process (i.e., the dispatcher), and that telemetry is shared with the swarm’s
vehicles, each vehicle conducts on-board decentralized navigation path
planning [Clark et al., 2021]. While a centrally coordinated tactic is
feasible within the CCAST system, this approach is not preferable when
deploying decentralized swarm vehicles. A CCAST objective is for vehicles to
conduct their mission assignments until the minimum safe battery level is
reached. Once that battery level is reached, a vehicle RTLs. An aerial buffer
zone, such as that required for follow-the-leader landings, would require
vehicles, particularly UAVs, to maintain an additional reserve power threshold
that is higher than the current system requirements. A higher reserve power
threshold will further reduce UAVs’ time-on-task and can reduce the swarm’s
overall performance. Finally, none of these solutions incorporate simultaneous
UAVs launching and landing from the same launch zone, which occurs when
previously launched UAVs RTL at the same time other UAVs take off based on
newly issued tactics.
Probabilistic state machines were explored to serve as a congestion reduction
methodology [Marcolino and Chaimowicz, 2009]. The state machines require
vehicles to randomly wait in close proximity to a target in order to avoid
congestion. An extension created pie-shaped ingress and egress lanes to the
target [Soriano Marcolino et al., 2017]. Requiring UAVs to wait randomly while
hovering at altitude expends more battery than en-route flight and will
necessitate allocating additional reserve power with an increased threshold to
trigger the CCAST Battery RTL tactic. The use of lanes around a target works
for a singular target situation, but likely will not generalize to the OFFSET
domain. The OFFSET mission objectives often incorporate multiple targets that
can result in overlapping lanes and may cause entire regions to become
inaccessible. A potential (i.e., untested) probabilistic state machine variant
relevant to CCAST can have the dispatcher assign a random wait time to UAVs
just prior to their launch. This approach avoids UAVs hovering in the air
unnecessarily and may potentially reduce congestion without lowering the
Battery RTL threshold or sacrificing additional battery life.
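A minimal sketch of this untested variant follows; the delay bounds and
function names are assumptions made for the sketch.

```python
import random

def assign_launch_delays(uav_ids, max_delay_s=30.0, seed=None):
    """Map each UAV id to a uniform random ground wait (seconds) before takeoff."""
    rng = random.Random(seed)
    return {uav: rng.uniform(0.0, max_delay_s) for uav in uav_ids}

# e.g., delays = assign_launch_delays(["uav_01", "uav_02", "uav_03"])
# The dispatcher would issue each takeoff command delays[uav] seconds after the
# wave signal, staggering departures without any battery-draining hovering.
```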
Coordinated UAV takeoffs can reduce congestion by minimizing the time for
swarm UAVs to launch and create pre-designated aerial formations, which
require direct UAV-to-UAV communication [Fabra et al., 2020, Hernández et al.,
2021]. These formation-forming methods create a localized sub-swarm of co-
located UAVs; however, the CCAST mission plan frequently deploys multiple
smaller sub-swarms with assigned tactics that are distributed throughout the
CACTF. The OFFSET program specifies a single launch zone, but places no
requirements on vehicle recovery, meaning vehicles are not required to return
to the launch zone. While CCAST can support distributed landing zones, UAV
battery replacement necessary to support ongoing mission tempo can only be
achieved by human teammates. Therefore, CCAST’s operating procedure generally
assumes UAVs return to the same location from which they launched within the
launch zone. Coupling this procedure with the mission’s tempo can result in
UAVs launching from and returning to the launch zone at irregular intervals.
It is also not uncommon to have UAVs with a completed tactic (e.g., a Building
surveil) RTL at the same time new UAVs are launched to address a new tactic.
Entertainment swarm light shows launch thousands of UAVs. The UAVs are
typically placed in rows, and the light show choreography launches them by
alternating which rows take off as waves [HighGreat, 2021]. These highly
choreographed light shows are generally conducted at AGLs that place the UAVs
high above structures and obstacles. The UAVs’ actions are typically quite
simplistic, often relying on pre-programmed deconflicted navigation paths,
especially compared to common CCAST tactics conducted in the dense CACTF air
space. The OFFSET mission objectives generally are vastly more complex,
requiring large numbers of UAVs to takeoff from significantly smaller launch
zones to conduct tactics that require dispersed navigation path plans across
the CACTF. The CCAST swarm relies on the tactic’s specified goal location’s
proximity (e.g., the building’s location for a surveil) to assign the
spatially closest vehicles automatically. The assigned vehicles each
individually plan their navigation path to the respective locations at which
the tactic is executed.
The relationship between robot size, robot quantity, and congestion was
explored recently [Schroeder et al., 2019]. Specifically, the balance between
the total swarm cost, as a function of robot size and quantity, and
interference between vehicles was used to identify the optimal physical size
of robots that comprise a swarm. The results rely on the positive correlation
between the robots’ physical size and their performance. The CCAST UAVs’
performance is less dependent on their physical size, as improved CCAST
vehicle performance for the OFFSET domain generally comes at the cost of
higher quality sensor payloads.
An immediate swarm congestion concern is UAV battery drainage prior to tactic
completion, particularly for persistent tactics. Automatic UAV battery
recharging is feasible [Erdelj et al., 2019, Li et al., 2017], but is beyond
the current CCAST swarm system capabilities. The CCAST swarm uses a swap
algorithm in which UAVs automatically transfer their tactic to another UAV
with a full battery [Diehl and Adams, 2022]. Two types of swaps are achieved.
UAVs conducting persistent tactics request a replacement UAV and remain on
task until the new UAV arrives, as allocated by the dispatcher. UAVs
performing interruptible tasks relinquish their tactic to the dispatcher and
execute the RTL behavior. The dispatcher selects a replacement UAV that
launches to continue performing the prior UAV’s tactic. These approaches can
address the “symptoms” of congestion, but can also create additional traffic
that may increase congestion.
Swarm congestion can occur for many reasons. The complexity of the CCAST
swarm, in conjunction with the DARPA OFFSET mission deployment constraints,
introduces new factors that impact swarm congestion. Conducting the mission
effectively and preparing a feasible multi-phase mission plan requires
understanding how all the facets of the mission, the CCAST system, and
constraints, such as launch zone space limitations, impact congestion during a
mission deployment. None of the existing literature addresses all the
constraints encountered during the DARPA OFFSET program.
## 4 Pre-FX6 Launch Zone Configuration Analysis
The maximum number of vehicles that can be deployed at a CACTF is determined
by multiple constraints, such as the DARPA designated launch zone area,
environmental obstacles (e.g., trees and power lines), and the size of each
vehicle’s GPS error-associated safety zones. The CCAST architecture implements
a safety distance in order to avoid vehicles unnecessarily colliding with one
another when departing from the launch zone. The safety distance for all UAVs
is 1m, while the UGV distance is 3m. The difficulty is that, given the CCAST
vehicles’ sizes (their dimensions are provided in Table 1) and their GPS
systems, the GPS error can be up to 5m. The minimum safe operation distance
between two vehicles is the sum of the vehicles’ safety distances (e.g., 2m
between two UAVs, 4m between a UAV and a UGV).
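This pairwise rule can be expressed as a small worked example; the per-type
safety distances are taken directly from the text.

```python
SAFETY_DISTANCE_M = {"uav": 1.0, "ugv": 3.0}  # per-vehicle safety distances from the text

def min_safe_distance(kind_a, kind_b):
    """Minimum separation between two vehicles: the sum of their safety distances (m)."""
    return SAFETY_DISTANCE_M[kind_a] + SAFETY_DISTANCE_M[kind_b]

assert min_safe_distance("uav", "uav") == 2.0
assert min_safe_distance("uav", "ugv") == 4.0
assert min_safe_distance("ugv", "ugv") == 6.0
```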
The CCAST team must use the DARPA specified launch zone; therefore, a key
initial question is how many UGVs and UAVs can fit within the launch zone
while also meeting the CCAST specified safety distances. The planning for FX6
assumed that 240 vehicles (40 UGVs, 40 3DR Solos, 20 Uvify IFO-Ss, and 140
VOXL m500s UAVs) had to be safely accommodated within the FX6 launch zone.
Early in the FX planning, the Leschi Town CACTF at Joint Base Lewis McChord
was the intended FX6 destination; however, later the location changed to Fort
Campbell’s Cassidy CACTF. Analyses for both CACTFs are reported.
### 4.1 Joint Base Lewis McChord, Leschi Town CACTF
Joint Base Lewis McChord’s Leschi Town CACTF, see Figure 3, is roughly 200,000
square meters (m2), approximately 250m north-to-south x 800m east-to-west.
Fifty one- to five-story buildings are dispersed throughout the CACTF, which also
includes light posts, street signs, natural vegetation and trees, drainage
ditches, barricades, a playground, etc. The planned primary launch zone, shown
in Figure 3, was a 7.5m wide x 170m long section of roadway (1300m2) with
grass or curb borders.
A 3 x 80 vehicle configuration with a 2m spacing between UGVs and a 1.5m
spacing between UAVs permits 240 vehicles to fit within the launch zone;
however, this configuration does not account for any GPS error, or the minimum
safety distances required for safe swarm operation. Given CCAST’s defined
safety distances, the minimum safe operating distance between two vehicles is
determined to be 2m between two UAVs, 6m between two UGVs, and 4m between a
UAV and a UGV. Adhering to the minimum safety distance between vehicles, while
maximizing the number of vehicles that fit into the launch zone, results in a
maximum of 18 UGVs and 112 UAVs arranged in two rows of 65 vehicles each.
However, this configuration falls 110 vehicles short of the intended goal
swarm size.
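A simple capacity calculation illustrates the packing arithmetic behind these
counts; the border clearance handling is an assumption, and the real placement
additionally accounted for GPS error and mixed UAV/UGV spacing.

```python
import math

def grid_capacity(width_m, length_m, spacing_m):
    """Vehicles on a square grid: one per spacing interval, plus the first row/column."""
    rows = math.floor(width_m / spacing_m) + 1
    cols = math.floor(length_m / spacing_m) + 1
    return rows * cols

# Illustration for the Leschi Town launch zone (7.5m x 170m) at a 2m spacing:
# grid_capacity(7.5, 170, 2.0) -> 4 rows x 86 columns = 344 slots on paper,
# before GPS error and the larger UGV spacing shrink the usable layout.
```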
Figure 3: The Joint Base Lewis McChord Leschi Town CACTF showing the buildings
by Building set (A: white and B: black) and the launch zone (yellow).
### 4.2 Fort Campbell, Cassidy CACTF
Ultimately, FX6 was held at the Fort Campbell Cassidy CACTF, see Figure 4.
This CACTF is roughly 100,000 m2, approximately 350m north-south x 285m east-
west, with 43 one- to five-story buildings that are more densely distributed
than the Leschi Town CACTF. The pre-FX launch zone, shown as yellow in Figure
4, used for the presented pre-FX6 analysis, is an approximately 37m north-
south x 41m east-west area primarily covering a parking lot and a small
portion of the roadway (1517 m2). The actual FX launch zone, shown as the blue
areas in the figure, was roughly the same total size (1468m2), but had a
different spatial distribution. The small launch area on the left, close to
the building labeled 7, is approximately 6m wide x 16m long. The roadway
between the buildings was the largest continuous area, measuring approximately
6m wide x 142m long. The smaller area to the right of the buildings, which
overlaps with the anticipated Pre-FX launch zone, is 10m wide x 52m long. All
launch areas’ ground-based borders were generally grass, gravel, or pavement;
however, power lines, shown in their approximate position as the black lines
in the figure, hugged the roadway and impacted UAV launch zone placements.
Figure 4: The Fort Campbell Cassidy CACTF showing the tested building set, the
pre-FX expected and analyzed launch zone (yellow), the actual FX6 launch zone
(blue areas), and the approximate location of power lines close to the launch
zone (black lines).
A 15 row x 16 column vehicle configuration with a 2.5m spacing allows 240
vehicles to fit in the designated launch zone. However, as with the Leschi
Town analysis, this configuration does not account for GPS error or the
minimum safety distance requirements, which were violated for the UGVs.
Adherence to the safety distance requirements between the vehicles results in
a maximum of 14 rows: 12 rows with 16 columns are allocated to the UAVs, and
the UGVs have a 2 row x 7 column placement. This configuration accommodates 192
UAVs and 14 UGVs, which is 34 vehicles short of the 240 vehicle goal.
An analysis of the launch zones for both the Leschi Town and Cassidy CACTFs
showed that neither provided sufficient space to deploy the entire planned
swarm while maintaining the CCAST safety distances. Further, prior field
exercises demonstrated that a
launch zone spacing twice the minimum specified safety distances was often
necessary to avoid congestion. Thus, richer analyses of inter-vehicle spacing,
the potential of using deployment waves, the vehicle placement pattern within
the launch zone, and the potential impact of the mission plan composition were
identified as additional alternatives for safely deploying 240 vehicles from
the limited launch zone.
## 5 Pre-FX6 Congestion Analysis
The largest impact from congestion, given the CCAST swarm’s composition, is on
the UAVs; thus, the Pre-FX6 analysis focuses on them, and the UGVs are
excluded.
The analysis hypotheses focus on understanding how the impacts of vehicle
placement spacing, using deployment waves, and vehicle placement patterns can
decrease congestion. Hypothesis I states that increasing launch zone spacing
will decrease swarm congestion. Hypothesis II states that increasing the
number of waves will decrease swarm congestion. Hypothesis III states that
using a hexagonal layout in the UAV launch zone, as opposed to a square
layout, will use less space without increasing congestion. While it may appear
to the casual reader that the first two hypotheses are obvious, given the FX6
and CCAST system constraints, that is not necessarily true. Hypotheses I and
II are evaluated using the experiments presented in Sections 5.1 and 5.2. The
third hypothesis is loosely based on prior FX launch zone trial and error
patterns, which is a focus of the Section 5.2 experiment.
### 5.1 Pre-FX6: Joint Base Lewis McChord, Leschi Town CACTF
The Pre-FX6 Leschi Town CACTF evaluation incorporated mission plans composed
of multiple Building surveil tactics, but varied the UAV launch zone spacing
(Hypothesis I) and the number of waves in which the UAVs are deployed
(Hypothesis II). This experiment was conducted using the CCAST swarm
architecture and associated multi-resolution simulator. Two mission plans were
developed that each incorporated twelve Building surveil tactics. Each
Building surveil tactic required five UAVs, four with a forward-facing camera
payload and the fifth with a downward-facing camera payload. A total of 60
UAVs were required to complete these mission plans, 48 with forward-facing and
12 with downward-facing camera payloads.
#### 5.1.1 Experimental Methodology
##### Independent Variables
All experimental trials required a mission plan that incorporated relevant
tactics. The developed mission plans incorporate two sets of twelve buildings
distributed across the CACTF, as identified in Figure 3. The selected
buildings and resulting mission plans represent independent variables. The
general mission plan focused on the Phase I intelligence gathering aspects of
a typical FX mission and, therefore, on issuing Building surveil tactics to
collect available intelligence information. The building sets were used to
generate multiple mission plans, whose creation is explained below.
The remaining independent variables focus on the swarm vehicles’ placement and
usage of the available launch zone area. The launch zone spacing, or the
distance between UAVs, was varied between 2m and 5m in 1m increments. The
total number of waves focused on how many launch waves a mission plan
contained. A single (1) wave launched all UAVs simultaneously. The remaining
evaluated values divided the UAVs into groups corresponding to 2, 3, 4, or 6
waves.
The number of regions depended on both the number of buildings and the number
of waves, decreasing as the number of waves increases. The number of regions
evaluated (i.e., 12, 6, 4, 3 and 2) was calculated as follows:
$\textit{number of regions}=\textit{number of buildings}/\textit{number of
waves}$. A mission plan incorporating 12 buildings and 6 waves results in 2
regions. Adding buildings (not varied in these evaluations) or using fewer
waves results in a larger number of regions. A qualitative analysis determined that
90 seconds (s) between waves was sufficient, as it generally allows the
current wave of UAVs to launch and begin moving towards their goal before the
next wave was tasked. If the time between waves decreases, then the likelihood
of congestion increases. While a larger time between waves may reduce the
likelihood of congestion, it may also increase congestion if UAVs deployed in
earlier waves return to the launch zone at the same time a new wave launches.
The number of tactic invocations per wave was similarly dependent on the
number of buildings and the number of launch waves. The number of tactic
invocations per wave was always equivalent to the number of regions, or a
single tactic invocation per region, per wave. Thus, the number of tactic
invocations per wave was evaluated using 12, 6, 4, 3, and 2 tactic invocations
per wave. While it is possible to have multiple tactics assigned to a region
during a wave, such assignments are considered outside the scope of this
analysis.
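The design arithmetic above reduces to a few lines; the function name is a
hypothetical illustration.

```python
def mission_design(num_buildings, num_waves):
    """Derive the region count and per-wave tactic invocations from the design rules."""
    assert num_buildings % num_waves == 0, "buildings must divide evenly into waves"
    num_regions = num_buildings // num_waves
    invocations_per_wave = num_regions  # one tactic invocation per region, per wave
    return num_regions, invocations_per_wave

# e.g., 12 buildings with 6 waves -> 2 regions and 2 tactic invocations per wave.
assert mission_design(12, 6) == (2, 2)
assert mission_design(12, 1) == (12, 12)
```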
##### Dependent Variables
The CCAST swarm UAVs launch, ascend to altitude, and must complete their
navigation path planning prior to commencing travel to achieve the assigned
tactics. Thus, an extensive period hovering at altitude due to congestion and
blockage can cause a UAV to consume its battery, resulting in it RTLing
without contributing to the mission objectives, a highly undesirable outcome.
During the FX, the swarm commander may explicitly attempt to “assist” the
vehicle in becoming unblocked. Specifically, the swarm commander may Nudge a
vehicle, which in the case of a UAV causes it to change altitude slightly, in
hopes of unblocking navigable paths. A more severe swarm commander action
Stops the UAV’s current tactic, which is followed by issuing either an RTL or
an entirely new tactic. The swarm commander’s options to assist the vehicle in
becoming unblocked are not incorporated into the evaluation trials.
An independent block occurs when no clear navigation path is available (i.e.,
a navigation path plan cannot be generated), and the UAV continues to search
for a viable path plan. CCAST swarm UAVs mark themselves as blocked
immediately when path planning fails or a mobile object blocks their path. The
path planning process resets after a block has persisted for 10 seconds,
meaning the vehicle marks itself as unblocked and restarts path planning.
After the third reset due to blocks (i.e., 30 seconds), the planner itself
resets and the entire path planning process restarts. These steps are repeated
until either a navigable path is identified, or the UAV’s battery reaches the
Battery RTL threshold, at which point it will RTL. Consecutive blocks that
occur within ten seconds of one another are counted as a single block.
Multiple independent blockages can occur for the same vehicle within the same
tactic execution, or within a shift, since the vehicle can encounter a new
blockage situation as it executes a navigation plan in the environment.
The amount of time a vehicle is blocked is called the block duration. An
independent block duration is the amount of time over which an independent
block occurs. Multiple consecutive blockages are combined into a single
independent block. The corresponding block durations for each of the combined
consecutive blockages are summed to create the block duration. The latitude
and longitude of each block event are also recorded.
Upon trial completion, the total block count and total block duration are
calculated by summing all recorded independent blocks and the independent
block durations, respectively. The independent and total blockage durations
are measured in milliseconds, but the total block duration results are
reported in minutes.
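A hedged sketch of this block bookkeeping follows; the event tuples and the
start-gap merge criterion are assumptions made to illustrate the 10-second
merging rule and the unit conversion described above.

```python
def merge_blocks(events, merge_window_ms=10_000):
    """events: iterable of (start_ms, duration_ms) raw block events for one vehicle."""
    independent = []  # each entry: [first_start_ms, last_end_ms, summed_duration_ms]
    for start, duration in sorted(events):
        if independent and start - independent[-1][1] <= merge_window_ms:
            # Within 10s of the previous block: merge into one independent block.
            independent[-1][1] = max(independent[-1][1], start + duration)
            independent[-1][2] += duration
        else:
            independent.append([start, start + duration, duration])
    return independent

def totals(independent):
    """Total block count, and total block duration converted from ms to minutes."""
    return len(independent), sum(d for _, _, d in independent) / 60_000.0
```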
##### Mission Plan Design
The CCAST FX mission plans are developed prior to shift deployments based on
information pertaining to the number of available vehicles, their types and
payloads, mission objectives and the tactics required to achieve them,
the launch zone and other environmental constraints, prior intelligence
information, etc. Conducting the evaluations requires developing
representative mission plans. Independent mission plans were developed to
account for all combinations of the independent variables. Each mission plan
focused on the Phase I information gathering mission objective. Specifically,
each mission assigned the vehicles to Building surveil tactics. The CCAST
team’s airspace deconfliction heuristics were applied to selecting the
buildings to be surveilled. The selected buildings are depicted in Figure 3.
Region creation allocates the buildings to separate CACTF areas (i.e.,
regions) in order to organize the wave deployments. Region creation is
required for each number of waves. The resulting regions are used to generate
the corresponding mission plan. At a minimum, each region must contain at
least one building to be surveilled. The regions ideally radiate outwards from
the launch zone, resembling pie slices if the launch zone were centered in a
circular CACTF.
The CCAST architecture can signal multiple tactics (e.g., multiple Building
surveils) to be executed simultaneously, which permits launch waves. Once the
regions are identified, where each region contains an equivalent number of
buildings, waves are assigned to the buildings inside each region. The
building assignment is repeated for each number of waves value. The first wave
begins with the outer perimeter of buildings, invoking the furthest Building
surveil tactic from the launch zone in each region. Subsequent waves assign
buildings to the tactics by moving inwards (e.g., closer to the launch zone)
from the last tactic’s building assignment, which allows earlier UAV waves to
clear the launch zone before the next wave launches.
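A sketch of this outermost-first assignment follows; the data shapes are
assumptions, and each region is assumed to contain one building per wave.

```python
import math

def assign_waves(regions, launch_zone_xy):
    """regions: dict region_name -> list of (building_id, (x, y)) buildings.
    Returns dict wave_index -> building_ids (one per region per wave)."""
    waves = {}
    for buildings in regions.values():
        ordered = sorted(buildings,
                         key=lambda b: math.dist(b[1], launch_zone_xy),
                         reverse=True)  # farthest building from the launch zone first
        for wave_idx, (building_id, _) in enumerate(ordered, start=1):
            waves.setdefault(wave_idx, []).append(building_id)
    return waves
```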
Two sets of buildings, Building sets A and B, were used to generate the
mission plans; the buildings are labeled in Figure 3. The specific Leschi Town
Building set assignments for a given number of launch waves are provided in
Table 2. Each table entry decomposes the buildings, represented by their
number from the figure, into the required number of launch waves. Ten mission
plans, five per building allocation (i.e., Building Set A and B), were
developed. While the swarm commander typically launches the mission plan
waves, this experiment used a program script to instantiate the launch waves.
# Waves | Leschi Town Building Set A | Leschi Town Building Set B | Cassidy Building Set
---|---|---|---
1 | 1,2,9,29,33,37, 53,59,60, 62,63,65 | 12,15,25,31,32,35, 51,55,58,61,64,67 | 4c,7,9,12,16,21, 24,28,31,34,37b,43
2 | 1: 1,9,33,62,63,65; 2: 2,29,37,53,59,60 | 1: 12,15,25,55,64,67; 2: 31,32,35,51,58,61 | 1: 4c,12,21,31,34,16; 2: 7,9,24,28,37b,43
3 | 1: 1,9,63,65; 2: 2,29,60,62; 3: 33,37,53,59 | 1: 15,25,64,67; 2: 12,31,51,61; 3: 32,35,55,58 | 1: 4c,12,21,34; 2: 7,9,24,31; 3: 16,28,37b,43
4 | 1: 1,9,63; 2: 2,62,65; 3: 29,33,60; 4: 37,53,59 | 1: 15,64,67; 2: 12,25,61; 3: 35,51,58; 4: 31,32,55 | 1: 12,21,34; 2: 9,24,31; 3: 4c,16,37b; 4: 7,28,43
6 | 1: 1,63; 2: 9,65; 3: 2,62; 4: 29,59; 5: 33,60; 6: 37,53 | 1: 15,64; 2: 25,67; 3: 12,61; 4: 35,51; 5: 31,58; 6: 32,55 | 1: 12,21; 2: 4c,34; 3: 9,31; 4: 16,24; 5: 7,37b; 6: 28,43
Table 2: The Leschi Town and Cassidy CACTFs’ (A and B) Building set
assignments by number of launch waves (# Waves).
##### Launch Zone Configuration
The CCAST multi-resolution swarm simulator requires a launch zone
configuration file for each mission plan. This configuration file defines the
vehicles, their types, and their launch/home locations within the launch zone.
Separate launch zone configuration files must be created for each launch zone
spacing value. The launch zone was configured into two rows of 30 UAVs along
the road, based on the results from Section 4.1.
##### Experiment Execution
The computational complexity of the experimental design will vary depending on
the specific swarm simulator, the number of vehicles in the swarm, and the
mission plan complexity. The analysis of the experimental results is dependent
on the complexity of the produced log files and the amount of generated data
to be analyzed.
Twenty trials were performed for each combination of launch zone spacing and
number of waves. Each number of waves has an independent mission plan,
resulting in five mission plans per Leschi Town Building set (A and B). In
total, 800 trials were executed (i.e., 5 mission plans x 4 launch zone
configurations x 20 trials x 2 building sets = 800), 400 per Building set.
After the final wave
launched, each simulation trial ran for 20 minutes. Prior analysis of the UAV
Swap tactic demonstrated that the average 3DR Solo battery life, the lowest of
all CCAST UAVs, was under 20 minutes [Diehl and Adams, 2022].
#### 5.1.2 Results
An overall heatmap of all locations at which congestion occurred across all
simulations for Building set A, as shown in Figure 5(a), can be used to
identify the locations of potential congestion. Across the Building set A
trials, congestion occurs across the CACTF, as shown in the figure. While
there is some increased congestion near the buildings specified for this set,
the vast majority of the congestion occurs along the launch zone (yellow and
orange in the figure). The heatmap generated for Building set B had a similar
distribution of block locations, with the dominant locations being the launch
zone, followed by the buildings in the set.
A histogram of each block’s start time is provided in Figure 5(b). The
majority of blockages occurred during the initial UAV wave deployments. Recall
that simulation trials containing multiple waves launch additional waves at 90
second intervals (i.e., $1^{st}$ wave: 0 minutes, $2^{nd}$: 1.5 minutes,
$3^{rd}$: 3 minutes, $4^{th}$: 4.5 minutes, and $6^{th}$: 7.5 minutes). The
histogram demonstrates that the majority of the blocks began when the mission
plan’s UAV waves launch, after which the number of new blocks is much lower.
Once the vehicles launched, the remaining block instances are related to
longer duration blockages in the launch zone, blockages out on the CACTF near
the buildings to be surveilled, or when the vehicles return to the launch
zone.
(a) Heatmap of the location of congestion instances.
(b) Histogram of the block start times.
Figure 5: Joint Base Lewis McChord, Leschi Town CACTF congestion instance
location heatmap across all Building set A simulations (a), and a histogram of
block instance start times in one-minute increments (b).
(a) Building set A - block count.
(b) Building set B - block count.
(c) Building set A - block duration.
(d) Building set B - block duration.
Figure 6: Joint Base Lewis McChord, Leschi Town CACTF’s congestion median
total block count and duration box plots by building set, number of launch
waves, and launch zone spacing.
The median block counts and block durations for both building sets by number
of waves and launch zone spacing are provided in Figure 6. The Building set A
results shown in Figure 6(a) indicate that 3 waves, regardless of launch zone
spacing, result in the fewest blocks, with 2 waves having the second lowest
block count. Regardless of the launch zone spacing, the number of blocks
increased with 4 and 6 waves. Overall, Building set A’s total block count
significantly decreased with 2 and 3 deployment waves, but increased with 4 or
more waves. Heatmaps for Building set A’s 1, 3, and 6 wave results across all
spacings are provided in Figure 7. The 3 wave block count (Figure 7(b)) is
substantially lower than that of the 1 and 6 wave results (Figure 7(a) and c,
respectively). A 5 (number of waves) $\times$ 4 (launch zone spacing) between-
groups ANOVA was performed for Building set A’s total block count results. No
significant main effect for the launch zone spacing was found, but a
significant main effect existed for the number of waves
($F(4,12)=189.82,p<0.01$). A posthoc Tukey test (p = 0.05) of the pairwise
differences by the number of waves found that the 2 and 3 wave instances were
both significantly lower than the 1, 4, and 6 wave instances. Additionally,
the block count for the 3 wave instances was significantly lower than the 2
wave results.
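For readers wishing to reproduce this style of analysis, the following hedged
Python sketch pairs a between-groups ANOVA with a posthoc Tukey test using
statsmodels; the DataFrame column names are hypothetical, and the exact model
specification (e.g., interaction terms) behind the reported results is not
stated in the text.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyze(df: pd.DataFrame):
    """df: one row per simulation trial, with block_count, waves, spacing columns."""
    # 5 (waves) x 4 (spacing) between-groups ANOVA, here fit with an interaction term.
    model = ols("block_count ~ C(waves) * C(spacing)", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    # Posthoc Tukey HSD on pairwise differences by number of waves (alpha = 0.05).
    tukey = pairwise_tukeyhsd(df["block_count"], df["waves"], alpha=0.05)
    return anova_table, tukey
```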
(a) One deployment wave.
(b) Three deployment waves.
(c) Six deployment waves.
Figure 7: Joint Base Lewis McChord, Leschi Town CACTF’s Building set A’s
congestion heatmap for 1 (a), 3 (b), and 6 (c) deployment waves.
Generally, little difference existed in Building set A’s median total block
durations with the 2m, 3m, and 4m spacings across 1, 3, 4 and 6 waves, as
depicted in Figure 6(c). However, increasing the launch zone spacing to 5m
lowered the total block duration for any number of waves compared to the other
spacings. Two waves had shorter block durations irrespective of spacing, with
the tightest minimum and maximum ranges. A 5 (number of waves) $\times$ 4
(launch zone spacing) between-groups ANOVA yielded significant main effects
for number of waves ($F(4,12)=44.25,p<0.01$), and launch zone spacing
($F(3,12)=33.97,p<0.01$). A posthoc Tukey test of the pairwise differences by
number of waves found that the 2 wave instances were significantly lower than
the 1, 3, 4, and 6 wave instances. A posthoc Tukey test assessing differences
by launch zone spacing indicated that 5m spacing had a significantly lower
block duration than the 2, 3, and 4m spacings. The block duration for 4m was
also significantly lower than the 2m spacing.
The best- and worst-case configurations for Building Set A were identified.
The best-case occurred for the 5m spacing with 2 launch waves, while the
worst-case had a 2m spacing with 1 wave. Heatmaps for these configurations’
total block count and total block durations are shown in Figure 8. These
heatmaps highlight similarities between the concentrated locations of the
total block count and total block duration metrics, where the concentration of
more blockages and the longest blockages tend to occur in the launch zone. The
worst-case, 2m spacing with 1 launch wave, clearly has more (Figure 8(b)) and
longer (Figure 8(d)) blockages. Noticeably, the block duration metric more
clearly shows the severity of the congestion differences between the best- and
worst-cases. The worst-case’s increased blockage counts and durations appear
to reduce congestion throughout the CACTF, but this is likely due to the
launch zone congestion blocking vehicles from moving out to navigate around
the CACTF as required by the assigned tactics.
(a) Best-case configuration - block count.
(b) Worst-case configuration - block count.
(c) Best-case configuration - block duration.
(d) Worst-case configuration - block duration.
Figure 8: Joint Base Lewis McChord, Leschi Town CACTF’s Building Set A’s
congestion heatmaps for the best-case (5m spacing and 2 waves) total block
count (a) and total block duration (c), and the worst-case (2m spacing and 1
wave) total block count (b) and total block duration (d).
The start times, represented as minutes from the beginning of the mission, of
these best-case and worst-case configuration blockages are shown with
histograms in Figure 9. The best-case configuration with two launch waves
demonstrates that all blockages occur early in the mission. While there are
blockages after the first wave launches (i.e., 1-2 minutes), the number of
blockages increases substantially after the second wave launches (i.e., 2-3
minutes). The number of new blockages drops, until no new blockages are
detected after the $6^{th}$ minute. The worst-case configuration’s single
launch wave immediately results in the largest number of new blocks (i.e., 1-2
minutes). While the number of new blocks decreases substantially after the
first two minutes, recall that the worst-case block durations are much longer
than the best-case configuration (as shown in Figure 8). These longer block
durations clearly result in new blockages for an extended period into the
mission. Whereas the best-case’s new blocks across the CACTF occur early in
the mission, the worst-case’s instances occur throughout the first 18 minutes.
(a) Best-Case configuration - block start time.
(b) Worst-Case configuration - block start time.
Figure 9: Joint Base Lewis McChord, Leschi Town CACTF histograms showing the
block event start times for the best- (a), and worst-case (b).
The Building set B results demonstrate little effect from varying the launch
zone spacing, as shown in Figure 6(b). The 1 and 2 wave results with 2 and 3m
spacings were generally lower than with the 4 and 5m spacings. Increasing the
number of waves consistently decreased the total block counts. Heatmaps by the
number of waves, similar to Figure 7, are not included to conserve space.
These heatmaps reflect the decline with an increased number of waves, but are
otherwise similar to Building set A, with the majority of blocks occurring in
the launch zone and some higher incidences of blocks near the set’s
buildings. The 5 (number of waves) $\times$ 4 (launch zone spacing) between-
groups ANOVA for the total block count values yielded significant main effects
for the number of waves ($F(4,12)=88.44,p<0.01$) and launch zone spacing
($F(3,12)=7.92,p<0.01$). A posthoc Tukey test (p = 0.05) assessing differences
by the number of waves found that there was no significant difference between
the 4 and 6 wave instances, but all other instances were significantly
different. A posthoc Tukey test (p=0.05) of launch zone spacing differences
found that the 2m instances were significantly lower than the 4m and 5m
instances.
Increasing launch zone spacing to 5m for Building set B resulted in lower
median total block durations overall that slightly decreased with increased
number of waves, as shown in Figure 6(d). Overall, the smaller the spacing,
the longer the median total block durations. The 5 (number of waves) $\times$
4 (launch zone spacing) between-groups ANOVA related to total block durations
identified significant main effects for number of waves
($F(4,12)=38.34,p<0.01$), and launch zone spacing ($F(3,12)=56.61,p<0.01$). A
posthoc Tukey test assessing differences by number of waves found that the 1
and 2 wave results were significantly higher than the 3, 4, and 6 wave
results, and the 1 wave results were significantly higher than 2 waves. A
posthoc Tukey test of the launch zone spacing indicated that the 2m and 3m
spacing had significantly higher block durations than 4m and 5m. Additionally,
the 4m spacing block duration was significantly higher than 5m.
Overall, the Leschi Town CACTF’s total block duration significantly decreased
as the spacing between UAVs increased for both Building sets. Total block
duration consistently decreased for Building set A as the spacing increased,
irrespective of the number of waves. After 3 waves for Building set B, the
effects of increasing the spacing on the total block duration became less
prominent. The total block duration for Building set A significantly decreased
with 2 waves, but significantly increased with any additional waves. Building
set B saw a significant reduction in the total block duration as the number of
waves increased up to 3, at which point there was no further reduction in the
total block duration. These seemingly counterintuitive results emphasize the
importance of the mission plan design’s organization of and interdependencies
between tactics and their goal locations in reducing congestion.
### 5.2 Pre-FX6: Fort Campbell, Cassidy CACTF
The Pre-FX6 congestion analysis shifted to the Fort Campbell Cassidy CACTF
with the finalized FX6 schedule. The evaluation hypotheses remained the same,
and the evaluation was conducted in a nearly identical manner to the Leschi
Town CACTF evaluation, Section 5.1.
#### 5.2.1 Experimental Methodology
A notable difference for the Cassidy CACTF evaluation was the addition of a
new independent variable, configuration pattern, which refers to whether the
vehicles are placed in the launch zone using a hexagonal or a square
configuration. The mission plan design was completed as in Section 5.1.1;
however, only a single Building set was analyzed, as detailed in Table 2, due
to the added independent variable and limited time before the start of FX6.
These two changes primarily affect the launch zone configuration, with minor
changes to the evaluation’s execution. The dependent variables remained the
same, total block count and total block duration.
##### Launch Zone Configuration
The configuration pattern must be accommodated within the launch zone
configuration file. A configuration file was created for each combination of
launch zone spacing and configuration pattern (i.e., square and hexagonal),
resulting in eight total configuration files. Each square configuration
pattern used 6 rows with 10 columns of UAVs, as described in Section 4.2. The
analyzed launch zone spacing between vehicles were 2, 3, 4, and 5m. The
hexagonal configuration was created using the square layout, and adjusted the
spacing between rows of vehicles to $\textit{Launch zone
spacing}\times\sqrt{3}/2$. Every other vehicle row was shifted by
$\textit{Launch zone spacing}/2$ m laterally (i.e., half a column sideways).
These adjustments ensured that vehicles continued to conform to the minimum
safety distance requirements, while also consuming less overall space than the
square configuration.
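The construction reduces to a coordinate transformation, sketched below with
an assumed origin and orientation.

```python
import math

def square_layout(rows, cols, spacing):
    """Square grid: rows and columns both at the full launch zone spacing."""
    return [(r * spacing, c * spacing) for r in range(rows) for c in range(cols)]

def hexagonal_layout(rows, cols, spacing):
    """Hexagonal grid: row pitch compressed to spacing*sqrt(3)/2, with every
    other row shifted laterally by spacing/2."""
    row_pitch = spacing * math.sqrt(3) / 2.0
    return [(r * row_pitch, c * spacing + (spacing / 2.0 if r % 2 else 0.0))
            for r in range(rows) for c in range(cols)]

# e.g., for the analyzed 6-row x 10-column UAV block at a 2m spacing, hexagonal
# rows sit ~1.73m apart instead of 2m, saving overall depth while preserving the
# full 2m nearest-neighbor separation between adjacent-row vehicles.
```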
##### Experiment Execution
Twenty trials were performed for each combination of launch zone spacing,
number of waves, and configuration pattern. Five total mission plans were
created based on the number of waves. 800 total trials were executed (i.e., 5
mission plans x 4 launch zone spacing values x 2 configuration patterns x 20
trials = 800). Each simulation trial ran for 20 minutes after the final wave
was deployed.
#### 5.2.2 Results
The Cassidy CACTF heatmaps for the square (Figure 10(a)) and hexagonal
(Figure 10(b)) configurations indicate the locations at which all blockages
occurred across all simulation trials, and their frequency. Similar to the
Leschi Town CACTF’s heatmap, the majority of the congestion occurred in or
near the launch area, with congestion also occurring along heavily traveled
routes and near the buildings to which Building surveil tactics were assigned.
The hexagonal configuration does have a larger number of blocks just north of
the launch zone, as compared to the square layout. Generally, the congestion
away from the launch area is aligned with the buildings to which Building
surveil tactics were assigned.
Similar to the Joint Base Lewis McChord Leschi Town CACTF Building set A total
blockage start time histogram, the Cassidy CACTF blockages, shown in Figures
10(c) and d, occurred much more frequently at the start of the mission plans
and appear associated with the launch wave timings.
The Cassidy CACTF’s overall size is about half that of the Leschi Town CACTF.
Further, the Cassidy CACTF launch zone, as specified pre-FX6, was 520$m^{2}$,
or 40% the size of the Leschi Town’s launch zone. The smaller Cassidy launch
zone and more compact CACTF led to a substantially larger number of block
instances. This larger number of total blockages occurred throughout the
mission and across the CACTF. The square configuration generally had more
total blockages at the start of the mission, even though there was less
congestion at the choke point north of the launch zone. The hexagonal
configuration generally had a higher sustained number of new blockages after 7
minutes, which led to this configuration resulting in more total blockages.
There is an uptick in blockage instances between 14 and 19 minutes for both
configurations, which is associated with UAVs returning to the launch zone due
to low battery or task completion.
(a) Square - block locations and counts.
(b) Hexagonal - block locations and counts.
(c) Square - block count by block start time.
(d) Hexagonal - block count by block start times.
Figure 10: The Fort Campbell, Cassidy CACTF simulated congestion total block
location heatmaps for the square (a) and hexagonal configuration (b), as well
as the total block count by block start time histograms for the square (c) and
hexagonal configurations (d).
(a) Square layout - block count.
(b) Hexagonal layout - block count.
(c) Square layout - block duration.
(d) Hexagonal layout - block duration.
Figure 11: Fort Campbell, Cassidy CACTF’s Building set congestion results: box
plots (medians) of block count and duration by layout configuration, number of
launch waves, and launch zone spacing.
The Cassidy CACTF’s median block count results, by configuration pattern, are
presented in Figures 11(a) and b. Both configurations’ results show that a 2m
spacing has the lowest block count across all numbers of waves. The
differences between the maximum and minimum block counts with the hexagonal
configuration were larger for all spacings greater than 2m. A 5 (number of
waves) $\times$ 4 (launch zone spacing) $\times$ 2 (configuration pattern)
between-groups ANOVA performed for the total block counts yielded significant
main effects for launch zone spacing ($F(3,12)=103.60,p<0.01$) and
configuration pattern ($F(1,12)=92.38,p<0.01$). No significant main effect
was found for the number of waves. The posthoc Tukey test of the launch zone
spacings determined that the total block count for 2m spacing is significantly
lower than for 3m, 4m, and 5m spacings. A posthoc Tukey test of the
differences by configuration pattern indicated that the hexagonal
configuration had a significantly higher block count than a square
configuration.
The median block duration results by configuration pattern for the Cassidy
CACTF are presented in Figures 11(c) and d, respectively. The square
configuration with the 2m and 3m spacings resulted in longer block durations
across all numbers of waves, which was also true with the hexagonal layout for
1, 2 and 3 waves. The square configuration overall had less difference between
the minimum and maximum block durations. A 5 (number of waves) $\times$ 4
(launch zone spacing) $\times$ 2 (configuration pattern) between-groups ANOVA
performed on the total block durations identified significant main effects for
number of waves ($F(4,12)=10.84,p<0.01$), launch zone spacing
($F(3,12)=228.78,p<0.01$), and configuration pattern ($F(1,12)=3.18,p<0.01$).
A posthoc Tukey test by the number of waves indicated that 3 waves were
significantly lower than the 1 or 6 wave instances. The posthoc Tukey test of
launch zone spacings found that 2m instances were significantly higher than
the 3, 4, and 5m spacings. Additionally, the total block duration for 3m
instances was significantly higher than the 4m or 5m. A posthoc Tukey test of
the pairwise differences by configuration pattern discovered a significantly
higher block duration for the hexagonal layout. Heatmaps for the Cassidy CACTF
results by number of launch waves did not present any substantial differences
and are excluded in favor of brevity.
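The factorial analyses above can be sketched as follows; this is a minimal,
illustrative example on hypothetical per-trial totals (the factor levels and
synthetic values are assumptions, not the CCAST data), showing how the main
effects and a posthoc Tukey test would be computed with statsmodels.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
rows = []
# Hypothetical totals: one row per (waves, spacing, pattern) trial.
for waves, spacing, pattern in itertools.product([1, 2, 3, 4, 6],
                                                 [2, 3, 4, 5],
                                                 ["square", "hex"]):
    duration = (400 - 40 * spacing          # longer blocks at tight spacing
                + (30 if pattern == "hex" else 0)
                - 5 * waves + rng.normal(0, 10))
    rows.append({"waves": waves, "spacing": spacing,
                 "pattern": pattern, "duration": duration})
df = pd.DataFrame(rows)

# Main effects of each factor on total block duration.
model = ols("duration ~ C(waves) + C(spacing) + C(pattern)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Posthoc Tukey HSD, here pairwise over the launch zone spacing levels.
print(pairwise_tukeyhsd(df["duration"], df["spacing"]))
```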
(a) Best-case configuration - block count.
(b) Worst-case configuration - block count.
(c) Best-case configuration - block duration.
(d) Worst-case configuration - block duration.
Figure 12: The Fort Campbell, Cassidy CACTF’s Building set’s congestion
heatmap for the best-case (square layout, 5m spacing and 4 waves), and the
worst-case (hexagonal layout, 2m spacing and 4 waves) configurations’ total
block count, (a) and (b) respectively, and total block duration, (c) and (d)
respectively.
A comparison of the subjective best-case (square layout, 5m spacing, 4 waves)
and worst-case (hexagonal layout, 2m spacing, and 4 waves) block counts and
block durations is shown in Figure 12. The results are similar in many ways
to the Joint Base Lewis McChord Leschi Town results. The block counts are
higher (Figure 12(b)) and have longer durations (Figure 12(d)) for the worst-
case configuration, compared to the same results for the best-case (Figures
12a and c, respectively). There is a choke point slightly north of the launch
zone, which is more prominent with the worst-case configuration. The best-case
configuration results in increased congestion at the CACTF’s outer regions,
which in contrast demonstrates that fewer vehicles in the worst-case
configuration navigated beyond the launch zone due to the severe congestion.
(a) Best-Case configuration - block start time.
(b) Worst-Case configuration - block start time.
Figure 13: The Fort Campbell, Cassidy CACTF’s best- (a) and worst-case (b)
block start time histograms.
As with the overall blockage start time histogram, the best- and worst-case
blockage start time histograms demonstrate increases in congestion at the
start of the mission associated with the timing of each case’s four launch
waves, see Figures 13(a) and b, respectively. While the best-case Cassidy
configuration shows a steep drop off in new blockages around minute 6, new
blockages continue to occur throughout the mission deployment. An increase in
new blockages occurs, beginning at the $14^{th}$ minute and continuing until
the $19^{th}$ minute. This increase is due to UAVs returning to the launch
zone after completing their tactics or due to a low battery. The substantially
smaller launch zone and the cited choke point both contribute to this
increased congestion and new blockage instances. Since both the best- and
worst-case instances incorporate four launch waves, the start of the mission
for the worst-case is similar in terms of the number of new blockages. While
there is an overall decrease in the total blockages after minute 6, the total
number of worst-case new blockages is sustained at a higher level throughout
the mission as compared to the best-case.
Overall, the Cassidy CACTF’s block count was significantly lower for 2m launch
zone spacing than all other spacing values. Additionally, the hexagonal
configuration pattern had a significantly higher total block count than the
square pattern. The number of waves had no significant impact on the total
block count.
Cassidy’s total block duration significantly decreased as the spacing
increased. Total block durations for the 4m and 5m spacings were both
significantly lower than for the 2m and 3m spacings. The total block duration was significantly
reduced when increasing the number of waves from 1 to 3, but increased with
more waves. The total block duration for a hexagonal configuration was
significantly higher overall than with the square layout.
### 5.3 Joint Pre-FX Discussion
The total number of blocks for both the Leschi Town and Cassidy CACTFs
increased as the spacing between vehicles increased. This result is
counterintuitive until it is compared with the total block duration results.
The highest total block duration occurs when the total block count is at its
lowest, which suggests that as congestion worsens, the number of individual
blockages decreases, while their severity may increase significantly.
Conversely, as congestion decreases, the number of blocks may actually
increase, but the resulting blockages may not be as severe and may have
shorter durations. The total block count alone can be an unreliable congestion
metric; for that reason, the total block duration appears to be more reliable
for measuring congestion. This phenomenon is visible in Figures 8 and 12,
where the block duration heatmaps more prominently present the congestion
differences between the configurations.
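As a minimal illustration of the two metrics compared above, the following
sketch computes both from a list of blockage events; the event tuples and
their values are hypothetical, not the CCAST log schema.

```python
# Hypothetical blockage events, each logged as (start_s, end_s).
blocks = [(12.0, 25.5), (14.2, 14.9), (60.0, 185.0)]

total_block_count = len(blocks)
total_block_duration = sum(end - start for start, end in blocks)

# 3 blocks, but 139.2 s blocked: the duration metric captures the blockage
# severity that the raw count alone misses.
print(total_block_count, total_block_duration)
```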
Choke points can arise based on the launch zone area and the CACTF layout, as
seen in the heatmaps for both CACTFs. The Leschi Town launch zone is longer
and more centrally located, with tactics assigned across the breadth of the
CACTF; as a result, the launch zone itself becomes a choke point. The same
heatmaps for the Cassidy CACTF identify a choke point with the major increase
in congestion just north of the launch zone, typically associated with the
direction of the buildings to be surveilled. The more compact Cassidy CACTF
and launch zone result in this choke point congestion. A larger (i.e., longer)
launch zone was expected to alleviate this particular choke point.
Hypothesis I stated that congestion decreases with an increased launch zone
spacing, which was supported for both CACTFs. A decrease in congestion
occurred for both CACTFs as the launch zone spacing increased from 2m to 4m.
Increasing the launch zone spacing further to 5m had little to no effect,
except for Leschi Town’s Building set A, where 5m spacing further decreased
congestion.
Hypothesis II stated that congestion decreases with more deployment waves,
which was partially supported for both the Leschi Town and Cassidy CACTFs. The
Leschi Town Building set A’s mission plans’ congestion improved with 2 and 3
waves, but any additional waves increased congestion due to longer duration
blockages, as visible with the heatmaps in Figure 7. Leschi Town’s Building
set B’s mission plans’ congestion decreased, or was unchanged, with increased
waves. Congestion for the Cassidy CACTF mission plans with the square
configuration fell as the number of waves increased for the larger spacings
(i.e., $>2$m). Further, the maximum and minimum congestion metrics were tighter
with the square configuration. The hexagonal configuration’s overall values
for more than 2m spacing were similar across the spacings and numbers of
waves, even though the maximum and minimum values covered a broader range.
Leschi Town’s Building sets provided differing results. No major discrepancies
existed between the selected buildings, with similar building location
distributions and wave assignment distributions. However, minor differences
exist (e.g., buildings 33 and 37 in Building set A, which are on opposite
sides of the street at the western end of the launch zone), and such selection
choices appear to lead to choke points that increase congestion. Two waves
balanced the UAV launch zone traffic significantly better than the other wave
counts for Building set A, but two waves did not similarly affect the results
for Building set B. These changes in congestion with differing numbers of
deployment waves are visible in Figure 7, with a major reduction between 3
waves and either 1 or 6 waves. Building set B’s congestion reductions occurred
with up to 4 waves, with 5 waves performing similarly.
The results support the notion that more waves can reduce congestion; however,
too many waves can increase congestion. Blockages tended to begin either at
the beginning of the mission or about twenty minutes into the mission (i.e.,
when vehicles take off and when they return from low battery or tactics
completion). Both the Leschi Town and Cassidy CACTFs’ results suggest the
possibility of identifying an “optimal” number of waves for a set of tactic
targets; although there is no guarantee that the optimal value will remain the
same for different tactic combinations. For example, 2 waves were optimal for
Leschi Town Building set A, but 3 waves were optimal for Cassidy’s hexagonal
configuration.
Hypothesis III stated that a hexagonal layout will use less overall space than
a square layout, while not increasing congestion. This hypothesis was
rejected. Even though the hexagonal layout succeeded in using less overall
space than the square layout, while maintaining the minimum distance between
vehicles, the level of congestion was significantly higher. While not
explicitly a requirement of the CCAST system, the additional space provided by
a square layout was shown to significantly decrease congestion. The downside
of a hexagonal layout can be mitigated by increasing the launch zone spacing.
## 6 Post-FX6 Congestion Analysis
The FX6 CCAST swarm deployments presented an opportunity to mine the log files
to understand the prevalence of congestion by vehicle blockages and blockage
durations. The FX mission plans were leveraged to conduct a simulation-based
congestion analysis. The evaluation’s Hypothesis IV states that a simulated
and a real deployment with identical swarm composition and mission plans will
have similar congestion patterns.
The early, short FX6 integration and dry run shifts focused on validating
system capabilities that often incorporate fewer vehicles, minimal mission
plans, and minimal simultaneous tactic instantiations. The longer, later
shifts were intended to deploy the swarm to achieve the mission objectives,
but some of those shifts also focused on OFFSET Sprinter integration
validations. The sprinters’ projects are designed to develop technology the
integrator teams potentially need, but are unable to develop themselves.
Because the integrated sprinter technologies were UAV focused, CCAST’s main
swarm UAVs were grounded during the sprinter integration testing. High
sustained winds, with gusts up to 29 MPH on November 17th (the last shift
during which CCAST deployed as the only swarm on the CACTF), created unique
hardware-based challenges that resulted in abnormal mission operations. The
remaining FX6 shifts were “joint shifts” during which both OFFSET integrator
teams deployed their swarms simultaneously on the CACTF. The teams shared
their swarm vehicles’ telemetry, which populated the shift log files with
information that was difficult to differentiate from that of the CCAST swarm
vehicles. Unfortunately, this joint
deployment was not announced prior to the FX, so the log files were not
adjusted to facilitate the necessary analysis distinctions.
The November 16th FX6 shift results are analyzed due to a few key
characteristics. The number of vehicles staged in the launch zone was 91: 81
UAVs and 10 UGVs. During the shift, 74 unique vehicles were deployed, many of
them multiple times. During this shift, the CCAST LTE network did not
experience any outages, which resulted in consistent logging to support this
analysis. This particular shift’s mission plan purposely launched multiple
simultaneous Building surveil tactics at the very start of the shift, intended
to deploy as many vehicles as possible, creating a higher likelihood of
generating congestion.
The launch zone configuration was created using field notes of the vehicles’
staged positions in the FX6 launch zone. The evaluation’s independent variable
is simply whether the trial was real or simulated. The dependent variables are
counts (FX6 data) and descriptive statistics (simulation data) of the
timestamped blockages and assigned tactics, as well as the block durations.
##### Mission Plan Design
The FX6 mission plan for each shift deployment was logged. The CCAST simulator
can execute the FX6 mission plans; however, prior to using the mission plans
for the simulated evaluation, a few modifications were necessary. These
modifications were made to ensure simulation trials were as true-to-real as
possible, and no modifications fundamentally changed the underlying mission
plan.
The CCAST system supports live-virtual deployments, in which the swarm has
both hardware and virtual vehicles that are treated identically from a system
operation perspective. This feature enables increasing the swarm size
significantly and facilitates substituting virtual vehicles when conditions
(e.g., high winds) constrain the hardware vehicle deployments. This feature
provides many benefits, but the virtual UAVs deployed during live-virtual
shifts do not encounter congestion or create congestion for the hardware UAVs.
The FX6 mission plans often incorporated virtual vehicles. The mission plan’s
virtual vehicle tactics were independent of the hardware vehicle tactics,
meaning no tactics were assigned a mix of virtual and hardware vehicles. This
distinction allowed for removing all tactics assigned to the virtual UAVs and
UGVs from the FX6’s Nov. $16^{th}$ mission plan for use in the simulation
evaluation. The FX mission plan incorporated hardware UGVs, but they were
excluded from the presented Nov $16^{th}$ UAV congestion analysis results and
were not included in the simulation analysis mission plan.
The mission plan file is saved after the plan is instantiated during an FX
shift, which means the dispatcher has allocated hardware vehicles to the
plan’s tactics. As a result, the saved mission plan incorporates the hardware
vehicles’ unique identifiers (e.g., tail numbers). These identifier
assignments were removed to prevent the dispatcher from attempting to deploy
non-existent hardware UAVs during the simulation.
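A purely illustrative sketch of this identifier scrubbing follows; it assumes,
hypothetically, that a mission plan serializes as JSON with a per-tactic
assigned_vehicles list of tail numbers — the actual CCAST mission plan format
is not specified here, and the file names are placeholders.

```python
import json

# Hypothetical file names and schema; the real CCAST plan format may differ.
with open("fx6_nov16_plan.json") as fh:
    plan = json.load(fh)

for tactic in plan.get("tactics", []):
    # Drop hardware tail numbers so the dispatcher re-allocates vehicles.
    tactic.pop("assigned_vehicles", None)

with open("fx6_nov16_plan_sim.json", "w") as fh:
    json.dump(plan, fh, indent=2)
```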
The CCAST simulator emulates the different swarm UAVs, including their
payloads, but the vehicle dynamics are the same regardless of the UAV type.
Therefore, only 3DR Solos were used as proxies for the hardware UAVs in the
launch zone configuration file.
##### Experiment Execution
The November 16th shift’s mission plan supported a two-hour deployment. The
mission plan was designed to deploy as many hardware-only vehicles for as many
tactic sorties as possible during the first thirty minutes of the shift, thus
presenting the highest likelihood of generating swarm congestion. The
DARPA-provided FX6 scenario had a very dense adversarial distribution, which
resulted in large numbers of vehicles becoming neutralized quickly. The
neutralized UAVs automatically return to the launch zone. During the shift 74
unique vehicles were deployed, and many were deployed multiple times. A mobile
medic was used to revive the neutralized UAVs just before the 15 minute point
in the mission. After the UAVs were revived, the swarm commander redeployed
them. Therefore, the decision was made to limit the simulation trial runs to a
total of sixty minutes, and the overall analysis to the first sixty minutes of
the FX6 shift. Twenty trials were performed using the CCAST simulator and the
adjusted mission plan.
### 6.1 Results
Heatmaps of the total block count and total block durations for both the
actual FX6 Nov $16^{th}$ shift and the simulation evaluation using the shift’s
mission plan are provided in Figure 14 (note that since there are many fewer
block count map instances for this data, the instances are enlarged to make
them visible). The mission plan deployed as many vehicles as possible at
the shift’s start; thus, one expects launch zone congestion, which is visible
for both the real FX6, see Figure 14(a), and simulated block counts, see
Figure 14(b). The initial mission plan for UAVs focused on Building surveil
tactics to the left side of the Cassidy CACTF. Blocks occurred across a
broader range of CACTF locations during the actual shift due to the swarm
commander issuing tactics to vehicles. The block duration heatmaps show some
of the longest blockages occurring in the launch area for both the real and
simulated results, as shown in Figures 14(c) and d, respectively. Differences
in blockages between the FX6 and the simulation results do exist. For example,
no blockages occur in the actual shift results at the building visible in the
upper left corner of the figure, but a large number of blockages occurred for
that same building with the simulation results. This difference arises from
the fact that this building was heavily fortified and the real UAVs were
neutralized when near the building, which mitigated the congestion around that
building. The simulator does not contain the adversarial artifacts; thus, the
virtual UAVs were not neutralized when near this same building.
(a) FX6 - total block count.
(b) Simulated - total block count.
(c) FX6 - total block duration.
(d) Simulated - total block duration.
Figure 14: Heatmaps of the real FX6 Nov $16^{th}$ shift’s block locations (a)
and total block durations (c) vs. the simulated shift’s block locations (b)
and total block durations (d).
The results histograms facilitate comparisons between the single FX6 Nov
$16^{th}$ shift results and the simulation’s multiple trials results. The FX6
total block count, assigned tactic count, and total block durations are
provided in Figures 15(a), c, and e, respectively. The corresponding
simulation results’ means and standard deviations across all trials are
provided in Figures 15(b), d, and f, respectively.
The FX6 Phase I mission plan was loaded and deployed at the start of the
shift, which is represented in the number of tactics issued between 0 and 1
minute, as shown in Figure 15(c). While it appears that 8 is a small number of
tactics for the mission plan, in fact the eight Building surveil tactics each
incorporated the surveillance of multiple buildings. Note that each histogram
bucket represents values greater than or equal to the first number and less
than the second number (i.e., half-open intervals; a brief illustration
follows this paragraph). After sending the initial mission
plan signal, the swarm commander manually issued a variety of tactics (e.g.,
Nudging vehicles, Stopping tactics) to relieve the congestion, as well as new
tactic assignments for vehicles still in the launch zone or for those vehicles
whose tactics had been stopped. These tactics were issued primarily between 5
and 15 minutes, but the number of new UAV blocks was substantially lower
during this time frame. The adversary dense scenario resulted in a very large
number of UAVs being neutralized during this same time period, which caused
neutralized UAVs to RTL. The majority of the deployed UAVs had RTL’ed by 15
minutes into the mission, either due to being neutralized, having completed
the assigned tactics, or having consumed the available battery power. Hence,
the very low number of new blocks (one) between 10 and 15 minutes.
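The bucket convention noted above matches numpy’s default half-open binning; a
minimal illustration with hypothetical block start times follows (numpy closes
only the final bin).

```python
import numpy as np

start_times = np.array([0.2, 0.9, 1.0, 4.7, 5.0, 14.3])  # minutes, hypothetical
counts, edges = np.histogram(start_times, bins=[0, 1, 5, 10, 15])
print(counts)  # [2 2 1 1]: 1.0 falls in the [1, 5) bucket, not [0, 1)
```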
At approximately 15 minutes, the mobile medic was used to revive all
neutralized UAVs, after which all UAV batteries were replaced. Once the UAVs
were restarted after the battery swap, the swarm commander began issuing
tactics to deploy as many vehicles as possible, which is reflected in the
number of tactics issued from 15 minutes to 30 minutes, as shown in Figure
15(c). The number of blockages increased at the same time, see Figure 15(a),
reflecting the swarm commander’s tactic issuing activity.
The FX6 Phase II mission plan was loaded and deployed 54 minutes into the
shift, providing the last opportunity for significant congestion to arise. The
FX6 figures reveal that issuing the mission plan’s associated tactics
generated new blockages. During the remainder of the shift, the swarm
commander generated and issued new tactics. While it is known that the mobile
medic and another round of battery replacements occurred later in the shift,
it was not recorded exactly when those events occurred.
An analysis of the actual FX6 total block durations, see Figure 15(e),
indicates that the majority of the durations were short. A total of 241 block
durations were less than or equal to one minute. The majority (195) of
those blocks lasted less than 30 seconds. Very few blocks had a duration
longer than 1 minute.
(a) FX6 - total block count.
(b) Simulated - mean total block count.
(c) FX6 - total tactic calls.
(d) Simulated - mean total tactic calls.
(e) FX6 - total block durations.
(f) Simulated - mean total block durations.
Figure 15: The FX6 Nov $16^{th}$ shift’s total block count (a), total tactic
calls (c) and total block durations (e) vs. the simulation’s total block count
(b), total tactic calls (d), and total block duration (f) means.
The goal of the simulation evaluation was to retain the authenticity of the
initial FX6 mission plan, particularly the first 30 minutes; therefore, the
swarm commander generated tactics were not included in the simulation
evaluation’s mission plan. As a result, the simulation shows tactics being
issued only at the start of the mission, unlike the FX6 shift. The removal of
the large number of tactics issued manually by the swarm commander explains
the discrepancies between the real and simulated tactic counts.
The simulation’s new block count at the start of the mission execution was
higher (mean = 38, standard deviation = 9.37), see Figure 15(b), than the
count of blocks during the FX6 shift, 31. The simulation additionally had a
higher mean count of new blocks through the first five minutes. Since only the
mission plan was used for the simulation analysis, no new tactics were created
after the mission plan signals were generated at the mission start, as shown
in Figure 15(d). Unlike the hardware vehicles, the virtual vehicles were
unable to be neutralized; thus, no congestion was generated in the simulation
results from this factor. The new blockages between 5 and 20 minutes are most
likely due to UAVs RTL’ing, but may also be due to UAVs that were delayed from
taking off when the tactic was received. The FX6 and simulated trials’ block
durations were very similar, see Figures 15(e) and f, respectively. The FX6
shift results show slightly longer blockage durations, about 20 seconds, as
compared to the mean simulation blockage duration.
The block durations generated by the simulation trials, see Figure 15(f), were
similar to the actual FX6 results in Figure 15(e). Most blocks lasted less
than one minute. The majority of these block durations were less than or equal
to 30 seconds (mean = 203, standard deviation = 23.6). Similar to the FX6
results, the number of remaining block durations that were less than or equal
to one minute was substantially smaller (mean = 17, standard deviation = 3.2).
Very few blocks had a duration longer than 1 minute.
Pearson correlations were used to analyze the relationships between the number
of tactic calls and the number of generated blocks. A comparison of the FX6
tactic call results with the generated blocks found a positive correlation
($r(59)=0.4549,p<0.01$). A similar positive correlation was found for the
simulated mission tactic call results compared to the generated block counts
($r(59)=0.4573,p<0.01$).
The Pearson correlations comparing the FX6 results with the simulation trial
results for the tactic calls resulted in a positive correlation
($r(59)=0.5633,p<0.01$). The analysis of the total block counts from the FX6
results compared to the simulation results found a significant, highly
positive correlation ($r(59)=0.8159,p<0.01$). Lastly, the analysis of the
individual durations of blocks from FX6 compared to the simulation results
found a significant, highly positive correlation ($r(19)=0.9962,p<0.01$).
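A minimal sketch of this correlation analysis, assuming hypothetical
histograms over 61 one-minute bins (so the degrees of freedom are $n-2=59$,
matching the $r(59)$ notation above):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
fx6_counts = rng.poisson(3, size=61)                   # hypothetical FX6 bins
sim_counts = fx6_counts + rng.normal(0, 1.5, size=61)  # correlated simulation

r, p = pearsonr(fx6_counts, sim_counts)
print(f"r({len(fx6_counts) - 2}) = {r:.4f}, p = {p:.4g}")
```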
### 6.2 Post-FX Discussion
The results show that Hypothesis IV was partially supported across the FX6 and
simulation generated results. The analyzed mission plans were similar;
however, the presence of swarm commander dispatched tactics in the real data
lowered the tactic call correlation. The swarm commander generated tactics
appeared to have weakened the positive relationship; even so, the real FX6 and
simulated blockages showed a strong correlation in when blockages occurred and
how long those blockages lasted.
The positive correlation between the data sets supports using simulated
analyses to inform pre-deployment decisions that impact mission planning and
seek to reduce the impacts of congestion, as in Section 5. This post-FX
analysis, with the simulated comparison, focused on the locations, durations,
as well as how many and when blocks occurred. Prior to a mission deployment,
it will not be possible to know exactly how the adversary will impact the
mission plan; therefore, the lack of adversarial components that neutralize
the UAVs in the simulation trials is acceptable. The time, cost, and effort
associated with deploying a large hardware swarm are and will continue to be
very high; thus, improving the outcomes associated with such deployments and
the effectiveness of mission plans is essential. A simulator that emulates the
adversarial agents and neutralizes the UAVs was beyond the scope of this
program, but is necessary if the intention is to fully understand the
potential mission plan’s impact on the associated congestion and mission
outcomes.
## 7 Conclusion
The DARPA OFFSET program requirements that sought to maximize the swarm size,
while minimizing the available launch zone area, created one set of constraints
resulting in increased blockages between vehicles attempting to depart the
launch zone. A CCAST procedural decision to require all deployed vehicles to
return to the location from which they deployed in the launch zone upon tactic
completion, neutralization, or low power supply, particularly when applied to
multi-rotor UAVs, did have some impact on increasing the number of vehicle
blockages and associated congestion. This early decision supported two
objectives. The first objective was the ability to recover all swarm vehicles
within the DARPA specified shift breakdown period. The second objective was to
ensure that vehicles returned to a location where human CCAST team members
were able to physically replace their batteries during shifts that lasted
multiple hours. While the CCAST UGVs can continue working towards achieving
mission objectives during the longest field exercise shifts of 3.5 hours, the
less expensive, commercially available off-the-shelf UAVs quickly consume a
single battery’s power supply in roughly 10 to 20 minutes.
Achieving the DARPA OFFSET mission objectives necessitates the ability to
continually redeploy the UAVs after battery replacements. Two additional
interrelated constraints are associated with swarm vehicles’ sensing
capabilities. The need to minimize the cost of individual vehicles in order to
scale the swarm to 250 vehicles implies that the vehicles’ sensor payloads and
computation processing capabilities cannot support rapid, accurate detection
and avoidance maneuvers, especially for UAVs. As such, the CCAST vehicles,
when deployed outdoors, rely on GPS to localize themselves and deconflict with
other vehicles. However, the vehicles’ relatively small size compared to the
larger error associated with the GPS signals, particularly when attempting to
avoid mid-air collisions between UAVs in and around the launch zone, resulted
in the establishment of minimum safety distances between vehicles during
launch zone staging. As the swarm size scales up, the question becomes which
constraints can be relaxed to maximize safety and perform the mission, while
minimizing congestion.
The congestion analysis clearly found that 240 vehicles were unable to fit
inside the DARPA designated launch zone without violating the CCAST defined
safety distances between vehicles that were intended to account for GPS error
and avoid mid-air vehicle collisions. Since a mission objective is to deploy
large numbers of vehicles simultaneously, congestion will occur. Therefore,
additional analyses focused on how to safely deploy a swarm of 240 vehicles
using waves of deployments, while also reducing CCAST’s safety distance
requirements and minimizing congestion in the launch zone.
The total block count metric was found to be an insufficient measure of
congestion, as it can lead to incorrect interpretations and is unable to
differentiate between blockage severity. The total block duration metric was
the more meaningful congestion metric, due to its ability to account for
different blockages lasting different durations. Longer total block durations
in and around the launch zone led to fewer total blocks, as vehicles were
unable to resolve the blockages and move throughout the CACTF.
Congestion decreased as the distance between platforms increased, with
diminishing returns after four meters. The use of deployment waves proved to
be another avenue for significantly reducing congestion; however, careful
consideration of the number of waves relative to the anticipated UAV tactic
assignment deployment durations is critical. The use of too many waves
actually increased congestion, even with 90 seconds between waves. The optimal
number of waves is dependent on the exact composition of mission plans;
however, even the use of two waves led to a significant reduction in
congestion.
The final DARPA OFFSET field exercise presented an opportunity to compare the
incidence of blockages and the severity of congestion from an actual swarm
deployment with a simulation of that deployment’s mission plan, as a means of
demonstrating the efficacy of using the CCAST multi-resolution simulator to
analyze congestion mitigation strategies. The strong correlation between the
actual and simulated swarm deployments supports using the CCAST simulator to
investigate congestion mitigation trade-offs.
The immediately actionable takeaways for deploying swarms in constrained launch
areas apply to measuring and reducing robot swarm congestion. First, a
smaller launch area led to more congestion and choke points from the launch
area into the CACTF. Second, congestion is best assessed using a combination
of the block count and the total block duration. Third, using a larger launch
zone spacing between vehicles (i.e., 4m or 5m) consistently
reduced congestion. Fourth, two deployment waves, and sometimes more, always
reduced congestion; however, high numbers of waves with shorter durations
between wave launches need to be avoided due to potential increases in
congestion.
#### Acknowledgments
This research was developed with funding from the Defense Advanced Research
Projects Agency (DARPA). The authors thank Drs. Shane Clark and David Diller,
Kerry Moffitt, and their CCAST team collaborators from Raytheon BBN and SIFT,
LLC. The views, opinions, and findings expressed are those of the authors and
are not to be interpreted as representing the official views or policies of
the Department of Defense or the U.S. Government.
DISTRIBUTION STATEMENT A: Approved for public release: distribution unlimited.
## References
* 3DR, nd 3DR (n.d.). 3DR: Solo. http://www.3dr.com/company/about-3dr/solo/. Accessed on February 6, 2022, URL.
* Aion, nd Aion (n.d.). R1 autonomous rover UGV. https://www.aionrobotics.com/r1. Accessed on February 6, 2022, URL.
* Arul and Manocha, 2020 Arul, S. H. and Manocha, D. (2020). DCAD: Decentralized collision avoidance with dynamics constraints for agile quadrotor swarms. IEEE Robotics and Automation Letters, 5(2):1191–1198.
* Arul et al., 2019 Arul, S. H., Sathyamoorthy, A. J., Patel, S., Otte, M., Xu, H., Lin, M. C., and Manocha, D. (2019). LSwarm: Efficient collision avoidance for large swarms with coverage constraints in complex urban scenes. IEEE Robotics and Automation Letters, 4(4):3940–3947.
* Carlino et al., 2013 Carlino, D., Boyles, S. D., and Stone, P. (2013). Auction-based autonomous intersection management. In IEEE International Conference on Intelligent Transportation Systems, pages 529–534.
* Chung et al., 2018 Chung, S.-J., Paranjape, A. A., Dames, P. M., Shen, S., and Kumar, V. R. (2018). A survey on aerial swarm robotics. IEEE Transactions on Robotics, 34:837–855.
* Clark et al., 2021 Clark, S., Usbeck, K., Diller, D., and Schantz, R. E. (2021). CCAST: A framework and practical deployment of heterogeneous unmanned system swarms. GetMobile: Mobile Comp. and Comm., 24(4):17–26.
* DARPA, nd DARPA (n.d.). Defense Advanced Research Projects Agency: OFFensive Swarm-Enabled Tactics. http://www.darpa.mil/work-with-us/offensive-swarm-enabled-tactics. Accessed on February 6, 2022, URL.
* Davis et al., 2018 Davis, D. T., Chung, T. H., Clement, M. R., and Day, M. A. (2018). Multi-swarm infrastructure for swarm versus swarm experimentation. In Groß, R., Kolling, A., Berman, S., Frazzoli, E., Martinoli, A., Matsuno, F., and Gauci, M., editors, International Symposium on Distributed Autonomous Robotic Systems, pages 649–663.
* Diehl and Adams, 2022 Diehl, G. and Adams, J. A. (2022). Battery variability management for swarms. In Matsuno, F., Azuma, S., and Yamamoto, M., editors, Distributed Autonomous Robotic Systems. DARS 2021. Springer Proceedings in Advanced Robotics, pages 214–226. Springer.
* Dono and Chung, 2013 Dono, T. and Chung, T. (2013). Optimized transit planning and landing of aerial robotic swarms. In IEEE International Conference on Robotics and Automation, pages 1843–1850.
* Dono, 2012 Dono, T. F. (2012). Optimized landing of autonomous unmanned aerial vehicle swarms. Master’s thesis, Naval Postgraduate School.
* Erdelj et al., 2019 Erdelj, M., Saif, O., Natalizio, E., and Fantoni, I. (2019). UAVs that fly forever: Uninterrupted structural inspection through automatic UAV replacement. Ad Hoc Networks, 94:101612.
* Fabra et al., 2020 Fabra, F., Wubben, J., Calafate, C. T., Cano, J. C., and Manzoni, P. (2020). Efficient and coordinated vertical takeoff of UAV swarms. In IEEE Vehicular Technology Conference.
* Ghrist et al., 2005 Ghrist, R., O’Kane, J. M., and LaValle, S. M. (2005). Pareto optimal coordination on roadmaps. In Algorithmic Foundations of Robotics VI, Springer Tracts on Advanced Robotics, volume 17, pages 171–186. Springer.
* Grossman, 1988 Grossman, D. (1988). Traffic control of multiple robot vehicles. IEEE Journal on Robotics and Automation, 4(5):491–497.
* Hernández et al., 2021 Hernández, D., Cecília, J. M., Calafate, C. T., Cano, J.-C., and Manzoni, P. (2021). The Kuhn-Munkres algorithm for efficient vertical takeoff of UAV swarms. In IEEE Vehicular Technology Conference.
* HighGreat, 2021 HighGreat (2021). 5200 drone light show, breaking 4 world records – high great. https://youtu.be/n9tu-L59YqQ. Accessed on February 6, 2022, URL.
* Ikemoto et al., 2004 Ikemoto, Y., Hasegawa, Y., Fukuda, T., and Matsuda, K. (2004). Zipping, weaving: Control of vehicle group behavior in non-signalized intersection. In IEEE International Conference on Robotics and Automation, volume 5, pages 4387–4391.
* Inalhan et al., 2002 Inalhan, G., Stipanovic, D. M., and Tomlin, C. J. (2002). Decentralized optimization, with application to multiple aircraft coordination. In IEEE Conference on Decision and Control, volume 1, pages 1147–1155.
* Lerman and Galstyan, 2002 Lerman, K. and Galstyan, A. (2002). Mathematical model of foraging in a group of robots: Effect of interference. Autonomous Robots, 13(2):127–141.
* Li et al., 2017 Li, G., Švogor, I., and Beltrame, G. (2017). Self-adaptive pattern formation with battery-powered robot swarms. In NASA/ESA Conference on Adaptive Hardware and Systems, pages 253–260.
* Marcolino and Chaimowicz, 2009 Marcolino, L. S. and Chaimowicz, L. (2009). Traffic control for a swarm of robots: Avoiding target congestion. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1955–1961.
* ModalAI, nda ModalAI (n.d.a). Seeker micro-development drone. https://www.modalai.com/products/seeker. Accessed on February 6, 2022, URL.
* ModalAI, ndb ModalAI (n.d.b). Voxl m500 - development drone for PX4 GPS-denied navigation and obstacle avoidance. http://www.modalai.com/products/voxl-m500. Accessed on February 6, 2022, URL.
* Naderi et al., 2015 Naderi, K., Rajamäki, J., and Hämäläinen, P. (2015). RT-RRT*: A real-time path planning algorithm based on RRT*. In ACM SIGRAPH Conference on Motion in Games, page 113–118.
* Nazarov et al., 2021 Nazarov, A., Vospitanyuk, A., Pasechnikov, R., and Pasechnikov, I. (2021). Landing system of a swarm of small unmanned aerial vehicles. Journal of Physics: Conference Series, 1925(1):012045.
* Olson, 2011 Olson, E. (2011). AprilTag: A robust and flexible visual fiducial system. In IEEE International Conference on Robotics and Automation, pages 3400–3407.
* Saha et al., 2016 Saha, I., Ramaithitima, R., Kumar, V., Pappas, G. J., and Seshia, S. A. (2016). Implan: Scalable incremental motion planning for multi-robot systems. In ACM/IEEE International Conference on Cyber-Physical Systems, pages 1–10.
* Schroeder et al., 2019 Schroeder, A., Trease, B., and Arsie, A. (2019). Balancing robot swarm cost and interference effects by varying robot quantity and size. Swarm Intelligence, 13(1):1–19.
* Shah et al., 2018 Shah, S., Dey, D., Lovett, C., and Kapoor, A. (2018). AirSim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics, pages 621–635.
* Skorobogatov et al., 2019 Skorobogatov, G., Barrado, C., and Salamí, E. (2019). Multiple UAV systems: A survey. Unmanned Systems, 08(2):149–169.
* Soriano Marcolino et al., 2017 Soriano Marcolino, L., Tavares dos Passos, Y., Fonseca de Souza, A. A., dos Santos Rodrigues, A., and Chaimowicz, L. (2017). Avoiding target congestion on the navigation of robotic swarms. Autonomous Robots, 41(6):1297–1320.
* Turpin et al., 2013 Turpin, M., Michael, N., and Kumar, V. (2013). Trajectory planning and assignment in multirobot systems. In Frazzoli, E., Lozano-Perez, T., Roy, N., and Rus, D., editors, Algorithmic Foundations of Robotics X, pages 175–190.
* Uvify, nd Uvify (n.d.). UVify IFO-S. http://www.uvify.com/ifo-s/. Accessed on February 6, 2022, URL.
|
# Discussions on the nature of GLEAM-X J162759.5$-$523504.3
H. Tong School of Physics and Materials Science, Guangzhou University,
Guangzhou 510006, China
###### Abstract
The nature of the long period radio transient GLEAM-X J162759.5$-$523504.3
(GLEAM-X J1627 for short) is discussed. We try to understand both its radio
emission and pulsation in the neutron star scenario, as an alternative to the
white dwarf model. We think that: (1) From the radio emission point of view,
GLEAM-X J1627 can be a radio-loud magnetar. (2) From the rotational evolution
point of view, GLEAM-X J1627 is unlikely to be an isolated magnetar. (3) The
1091 s period is unlikely to be the precession period. (4) GLEAM-X J1627 may be
a radio-loud magnetar spun down by a fallback disk. (5) The pulsar death line
is modified by the presence of a fallback disk or a twisted magnetic field. In
both cases, a higher maximum acceleration potential can be obtained. This may
explain why GLEAM-X J1627 still shows radio emission with such a long
pulsation period. (6) General constraints on the neutron star magnetic field
and initial disk mass are given analytically. Possible ways to discriminate
between different models are also discussed.
stars: magnetar – pulsars: general – pulsars: individual (GLEAM-X
J162759.5$-$523504.3)
## 1 Introduction
Recently, a transient radio source with a possible period of 1091 seconds
(about 18 minutes) was reported (Hurley-Walker et al. 2022). In the discovery
paper, it is thought to be a long period radio-emitting magnetar. We would
like to comment on this possibility and give our discussions about the nature
of GLEAM-X J162759.5$-$523504.3 (hereafter GLEAM-X J1627 for short).
After more than 50 years of observations, we know a lot about pulsars and
magnetars. Regarding their rotational behaviors, the slowest radio pulsar at
present is PSR J0250$+$5854 with a period of 23.5 seconds (Tan et al. 2018).
It may be spun down by magnetospheric processes, possibly involving magnetic
field decay (Kou et al. 2019). Possible precession signals in pulsars (Stairs
et al. 2000, Ashton et al. 2017; with period about 1000 days) and magnetars
(Makishima et al. 2014, 2019, with period about 0.5 days) have also been
found. The precession may be free precession (Ashton et al. 2017; Makishima et
al. 2019) or forced precession due to the presence of a fallback disk (Qiao et
al. 2003). Possible periods of 16 and 159 days have also been reported in two
fast radio bursts (The CHIME/FRB Collaboration 2020; Rajwade et al. 2020).
These long periods may be due to a binary origin or forced precession due to a
fallback disk (Lyutikov et al. 2020; Yang & Zou 2020; Ioka & Zhang 2020; Tong
et al. 2020).
The central compact object inside the supernova remnant RCW 103 is confirmed
to be a magnetar (D’Ai et al. 2016; Rea et al. 2016). Its 6.6 hour period may
be the rotational period of the central magnetar (De Luca et al. 2006; D’Ai et
al. 2016; Rea et al. 2016). It may be spun down by the presence of a fallback
disk (Tong et al. 2016). Different combinations of magnetic field strength and
fallback disk mass may explain the behavior of normal magnetars with periods
about 10 s and the magnetar with the 6.6 hour period (Tong et al. 2016). At
present, the magnetar inside RCW 103 is the slowest isolated neutron star.
Compared with previous observations, GLEAM-X J1627’s period of 1091 seconds is
not very surprising. It is long compared with that of normal pulsars and
normal magnetars. However, it is much shorter than that of the RCW 103
magnetar and the possible precession signals in pulsars, magnetars, and fast
radio bursts, etc.
By applying previous experience with pulsars and magnetars, we think that
GLEAM-X J1627 may be a radio-loud magnetar spun down by a fallback disk. From
figure 1 in Tong et al. (2016), a fallback disk accreting neutron star can
naturally result in periods of about $10^{3}\ \rm s$, which we think is the
case for GLEAM-X J1627. Therefore, GLEAM-X J1627 (with a period of about
$1091\ \rm s$) may be an intermediate object between normal magnetars (with
periods of about $10\ \rm s$) and the magnetar inside RCW 103 (with a period
of 6.6 hours).
### 1.1 Summary of observations
From Hurley-Walker et al. (2022), GLEAM-X J1627 has a flux of (5-40) Jy, is
observed in the frequency range (72-231) MHz, is at a distance of about $1.3\ \rm
kpc$, has a brightness temperature $\sim 10^{16}\ \rm K$ (which requires a
coherent emission process), has a period of $1091\ \rm s$, has an upper limit
on period derivative $\dot{P}<1.2\times 10^{-9}\ \rm s\ s^{-1}$, and has an
upper limit of X-ray luminosity $L_{x}<10^{32}\ \rm erg\ s^{-1}$.
From the observational flux and distance, the isotropic radio luminosity is
estimated to be:
$L_{\rm iso}\sim f\times\nu\times 4\pi d^{2}$ (1)
$=4\times 10^{30}\ {\rm erg\ s^{-1}}\left(\frac{d}{1.3\ \rm kpc}\right)^{2}\frac{f}{10\ \rm Jy}\frac{\nu}{200\ \rm MHz},$ (2)
where $f$ is the typical observed flux, $\nu$ is the observational frequency,
and $d$ is the source distance. A more exact calculation of the radio
luminosity will require the beam radius, duty cycle, and spectral information
(Szary et al.
2014). From the observed period and period derivative, a lower limit on the
characteristic age is:
$\tau_{c}=\frac{P}{2\dot{P}}>1.4\times 10^{4}\ {\rm yr}.$ (3)
An upper limit on the characteristic magnetic field is (surface dipole
magnetic field strength at the equator):
$B_{c}=3.2\times 10^{19}\sqrt{P\dot{P}}\,I_{45}^{1/2}R_{6}^{-3}\ \rm G$ (4)
$<3.7\times 10^{16}\,I_{45}^{1/2}R_{6}^{-3}\ \rm G,$ (5)
where $I_{45}$ is the moment of inertia in units of $10^{45}\ \rm g\ cm^{2}$,
and $R_{6}$ is the star radius in units of $10^{6}\ \rm cm$. An upper limit on the
rotational energy loss rate is:
$\dot{E}=\frac{4\pi^{2}I\dot{P}}{P^{3}}<3.6\times 10^{28}I_{45}\ \rm erg\
s^{-1}.$ (6)
For a typical neutron star with $I_{45}\approx 1$, $R_{6}\approx 1$, the radio
luminosity of GLEAM-X J1627 is larger than the neutron star’s rotational
energy loss rate. Possible beaming may soften this problem. Detailed
calculations for GLEAM-X J1627 can be found in Erkut (2022). However, for
normal pulsars, their radio luminosity is always much smaller than the
rotational energy loss rate. Therefore, even considering the effect of
beaming, GLEAM-X J1627 is very different from normal radio pulsars. Hence, the
problem of GLEAM-X J1627 is twofold (Hurley-Walker et al. 2022): (1) what is
the energy budget for the radio emission, and (2) what is the origin of its
long pulsation period? Any modeling of GLEAM-X J1627 should address these two
problems simultaneously.
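As a numerical check, the following minimal sketch (in cgs units) reproduces
the order-of-magnitude estimates of eqs. (1)-(6) from the observed values; the
constants and unit conversions are standard.

```python
import math

P, Pdot = 1091.0, 1.2e-9                  # s; s/s (upper limit)
f = 10.0 * 1e-23                          # 10 Jy in erg s^-1 cm^-2 Hz^-1
nu, d = 200e6, 1.3 * 3.086e21             # Hz; 1.3 kpc in cm
I, yr = 1e45, 3.156e7                     # g cm^2; seconds per year

L_iso = f * nu * 4 * math.pi * d**2       # ~4e30 erg/s, eqs. (1)-(2)
tau_c = P / (2 * Pdot) / yr               # ~1.4e4 yr, eq. (3)
B_c = 3.2e19 * math.sqrt(P * Pdot)        # ~3.7e16 G, eqs. (4)-(5)
E_dot = 4 * math.pi**2 * I * Pdot / P**3  # ~3.6e28 erg/s, eq. (6)

# The isotropic radio luminosity exceeds the spin-down power:
print(f"L_iso = {L_iso:.1e} erg/s > E_dot = {E_dot:.1e} erg/s")
```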
As can be seen from eq.(6), a white dwarf will have a much higher rotational
energy loss rate compared with that of the pulsar case, because the moment of
inertia of the white dwarf is much larger than that of the neutron star.
Therefore, a white dwarf model can easily account for the energy budget and
long pulsation period (Loeb & Maoz 2022; Katz 2022). Coincidentally, the white
dwarf model was also proposed as an alternative model for magnetar
observations (Paczynski 1990; Malheiro et al. 2012).
As an alternative to the white dwarf model, we will try to provide an
understanding of GLEAM-X J1627 in the neutron star scenario.
## 2 On the nature of GLEAM-X J1627
### 2.1 From the radio emission point of view, GLEAM-X J1627 can be a radio-
loud magnetar
The mean flux (averaged over the whole period) of radio pulsars is of order
$\rm mJy$, while their peak flux is of order $\rm Jy$ (Lyne & Graham-Smith
2012). For the radio emitting magnetar XTE J1810$-$197, at $3.3\ \rm kpc$, the
peak flux is about $10\ \rm Jy$ (Camilo et al. 2006). The third radio
emitting magnetar PSR J1622$-$4950 is radio-loud while in X-ray quiescence
with $L_{X}\leq 10^{33}\ \rm erg\ s^{-1}$ (Levin et al. 2010; Anderson et al.
2012). Therefore, both the radio luminosity and low X-ray luminosity of
GLEAM-X J1627 may be similar to a radio-loud magnetar in X-ray quiescence, as
also noted by the discovery paper (Hurley-Walker et al. 2022). In this
scenario, the radio emission of GLEAM-X J1627 is powered by the magnetic
energy of a magnetar. Future discovery of long period sources with high X-ray
luminosity and X-ray outburst (i.e. radio-loud magnetar not in X-ray
quiescence) will give direct support for the magnetar scenario, similar to the
confirmation of magnetar inside RCW 103 (D’Ai et al. 2016; Rea et al. 2016).
### 2.2 From the rotational evolution point of view, GLEAM-X J1627 is
unlikely to be an isolated magnetar
Both radio pulsars and radio-loud magnetars lie above a fiducial
pulsar death line on the $P-\dot{P}$ diagram (Ruderman & Sutherland 1975; Zhou
et al. 2017). This fiducial death line can be defined as: the maximum
acceleration potential across the polar cap region equals $10^{12}\ \rm V$
(Ruderman & Sutherland 1975; Zhou et al. 2017):
$\Phi_{\rm max}=\frac{B_{\rm p}R^{3}\Omega^{2}}{2c^{2}}\equiv 10^{12}\ \rm V,$ (7)
where $\Omega=2\pi/P$ is the angular velocity of the neutron star, and $B_{\rm
p}=6.4\times 10^{19}\sqrt{P\dot{P}}$ is the surface magnetic field at the pole
region, which is two times the commonly reported equatorial surface magnetic
field (Lyne & Graham-Smith 2012). Although the definition of this pulsar death
line involves the acceleration potential across the polar cap, it is just a
fiducial death line when plotted on the $P-\dot{P}$ diagram of pulsars (Zhou
et al. 2017). For a $1091\ \rm s$ pulsar to lie above this fiducial pulsar
death line, the required period derivative is $\dot{P}\geq 10^{-8}\ \rm s\
s^{-1}$. However, the observational upper limit on the period derivative is
$\dot{P}\leq 1.2\times 10^{-9}\ \rm s\ s^{-1}$. Therefore, GLEAM-X J1627 is
unlikely to lie above the pulsar death line. One way to overcome this
difficulty is to invoke physical definitions of pulsar death lines (Zhang et
al. 2000).
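The requirement above can be checked with a minimal sketch (cgs units):
inverting eq. (7) at $P=1091\ \rm s$ gives a required period derivative of
order $10^{-8}\ \rm s\ s^{-1}$, well above the observed upper limit.

```python
import math

P, R, c = 1091.0, 1e6, 3e10          # s, cm, cm/s
Omega = 2 * math.pi / P
Phi_death = 1e12 / 299.79            # 10^12 V in statvolts (cgs)

# Invert eq. (7) for the pole field, then use B_p = 6.4e19 sqrt(P Pdot).
B_p = 2 * c**2 * Phi_death / (R**3 * Omega**2)
Pdot = (B_p / 6.4e19)**2 / P
print(f"required Pdot ~ {Pdot:.1e} s/s vs. observed limit 1.2e-9 s/s")
```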
Furthermore, according to the observational upper limit on period derivative,
the required surface magnetic field is: $B_{c}\leq 3.7\times 10^{16}\ \rm G$
and characteristic age: $\tau_{c}\geq 1.4\times 10^{4}\ \rm yr$ (see the above
summary of observations). However, for a neutron star with a magnetic field of
$10^{16}\ \rm G$, its persistent X-ray luminosity will also be relatively high
(Vigano et al. 2013). This is in contradiction with the upper limit on X-ray
luminosity $L_{x}\leq 10^{32}\ \rm erg\ s^{-1}$. For normal magnetars, the
typical magnetic field is $\sim 10^{15}\ \rm G$, with luminosity
$10^{33}-10^{35}\ \rm erg\ s^{-1}$ (Coti Zelati et al. 2018). If the true
period derivative of GLEAM-X J1627 is two orders of magnitude smaller:
$\dot{P}\sim 10^{-11}\ \rm s\ s^{-1}$, the requirement on the magnetic field
strength will be softened (down to $10^{15}\ \rm G$). However, the required
timescale to spin down to the long rotational period will be $\tau_{c}\sim
10^{6}\ \rm yr$. The magnetic field strength will decay significantly during
this long timescale (Rea et al. 2010; Vigano et al. 2013; Kou et al. 2019).
With only a magnetospheric braking mechanism, it is hard to spin down a
neutron star to a period of $1091\ \rm s$.
In conclusion, from the rotational evolution point of view, GLEAM-X J1627 is
unlikely to lie above the pulsar death line, and is unlikely to have been spun
down to its present long period.
### 2.3 The 1091s period is unlikely to be the precession period
For normal pulsars, the typical rotational period is: $P\sim 0.1\ \rm s$. If
the neutron star is deformed under the influence of an internal toroidal
magnetic field of $B_{t}\sim 10^{16}\ \rm G$, the ellipticity of the neutron
star is (Makishima et al. 2019; Tong et al. 2020):
$\varepsilon\sim 10^{-4}\left(\frac{B_{t}}{10^{16}\ \rm G}\right)^{2}.$ (8)
The corresponding period of free precession is: $P_{\rm
precession}=P/\varepsilon\sim 10^{3}\ \rm s$. This may explain the pulsation
period of GLEAM-X J1627 (Eksi & Sasmaz 2022).
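A minimal numerical sketch of this estimate, combining eq. (8) with a typical
$0.1\ \rm s$ rotation period:

```python
B_t = 1e16                            # internal toroidal field, G
epsilon = 1e-4 * (B_t / 1e16)**2      # ellipticity, eq. (8)
P_spin = 0.1                          # typical pulsar rotation period, s
P_prec = P_spin / epsilon             # free precession period
print(f"epsilon = {epsilon:.0e}, P_precession = {P_prec:.0f} s")  # ~1e3 s
```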
However, as discussed in the discovery paper (Hurley-Walker et al. 2022), the
period of GLEAM-X J1627 is very accurate: $\sigma_{P}/P<5\times 10^{-7}$.
Therefore, an exactly periodic mechanism is preferred, i.e. a rotational or
orbital period (Hurley-Walker et al. 2022), while free precession may only
result in quasi-periodicity of neutron stars (Stairs et al. 2000; Ashton et
al. 2017). The reason for quasi-periodicity may be twofold: (1) the fluid core of the
neutron star will result in damping of the oscillation (Shaham 1977; Sedrakian
et al. 1999); (2) The spin-down torque due to the magnetosphere, both near
field and far field, will cause the precession to be torqued precession
instead of free precession (Gao et al. 2020). Furthermore, as stated above, a
magnetic field strength of $10^{16}\ \rm G$ is hard to reconcile with the low X-ray
luminosity (Vigano et al. 2013).
A neutron star may also experience forced precession in the presence of a
fallback disk (Qiao et al. 2003; Tong et al. 2020). However, the corresponding
precession period is several days to tens of days (eq.(7) and (10) in
Tong et al. 2020). The 1000-day period in PSR B1828$-$11 and the 16-day/159-day
periods in fast radio bursts may be due to forced precession by a fallback
disk. However, the 18 minute period of GLEAM-X J1627 is too short to be
explained by forced precession.
In conclusion, the $1091\ \rm s$ period of GLEAM-X J1627 is unlikely to be due
to precession, either free precession or forced precession.
### 2.4 GLEAM-X J1627 as a radio-loud magnetar spun down by a fallback disk
Normal magnetars have typical periods of about $10\ \rm s$ (Olausen & Kaspi 2014).
Two normal magnetars (4U 0142+61 and 1E 2259+586) may have passive fallback
disks (Wang et al. 2006; Kaplan et al. 2009). The central compact object
inside supernova remnant RCW 103 has a pulsation period about $6.6$ hours (De
Luca et al. 2006; D’Ai et al. 2016; Rea et al. 2016). It may be spin-down by
the presence of a fallback disk (Tong et al. 2016). A magnetar+fallback disk
system may provide a unified explanation for normal magnetars, magnetars with
fallback disks, and the magnetar inside RCW 103. Then it is natural that some
source with period lying between $10\ \rm s$ and $6.6$ hours can be seen.
Applying the modeling in Tong et al. (2016), the calculations for GLEAM-X
J1627 are shown in figure 1. The major input is a high magnetic field neutron
star, spun down by a self-similar fallback disk, under a unified spin-up and
spin-down accretion torque. In figure 1, the neutron star magnetic field is
chosen as $4\times 10^{14}\ \rm G$, ten times the critical magnetic field,
similar to the radio-loud magnetar PSR J1622$-$4950 (Levin et al. 2010). Three
typical initial disk masses are shown: $10^{-3}\ \rm M_{\odot}$, $10^{-4}\ \rm
M_{\odot}$, and $10^{-5}\ \rm M_{\odot}$. An initial disk mass of about
$10^{-3}-10^{-4}\ \rm M_{\odot}$ may explain the $1091\ \rm s$ period of
GLEAM-X J1627. When the disk mass is small, e.g. $10^{-5}\ \rm M_{\odot}$, the
disk cannot enter the neutron star magnetosphere. This is because a smaller
disk mass will result in a smaller mass accretion rate and a larger accretion
magnetosphere radius. When the magnetospheric radius is larger than the
neutron star light cylinder radius, the disk will not interact with the
neutron star and it will be a passive fallback disk. This may correspond to
the fallback disks in the magnetars 4U 0142+61 and 1E 2259+586 (Wang et al.
2006; Kaplan et al. 2009), as pointed out in Tong et al. (2016). From figure
1, it can be seen that there is a large parameter space (magnetic field,
initial disk mass) for the rotational evolution of GLEAM-X J1627. The
calculations in Ronchi et al. (2022) are similar to Tong et al. (2016) and the
calculations here. Ronchi et al. (2022) is highly numerical, while the
calculations here are to a large extent analytical; numerical calculations are
employed mainly in the final step.
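The disk-interaction condition invoked above can be illustrated with a minimal
sketch (cgs units) comparing the standard magnetospheric (Alfven) radius
$r_{\rm m}=(\mu^{4}/2GM\dot{M}^{2})^{1/7}$ (Shapiro & Teukolsky 1983) with the
light cylinder radius; the accretion rates below are illustrative assumptions,
not fitted values.

```python
import math

G, c = 6.674e-8, 3e10                    # cgs
M = 1.4 * 1.989e33                       # 1.4 solar masses, g
B, R, P = 4e14, 1e6, 1091.0              # field (G), radius (cm), period (s)
mu = B * R**3                            # magnetic dipole moment, G cm^3
R_lc = c * P / (2 * math.pi)             # light cylinder radius, cm

for Mdot in (1e14, 1e6):                 # illustrative accretion rates, g/s
    R_m = (mu**4 / (2 * G * M * Mdot**2))**(1 / 7)
    print(f"Mdot={Mdot:.0e} g/s: R_m={R_m:.1e} cm, R_lc={R_lc:.1e} cm, "
          f"disk interacts: {R_m < R_lc}")
```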
Therefore, from our previous experience with the $6.6$ hour magnetar inside
RCW 103, GLEAM-X J1627 may be a magnetar spun down by a fallback disk. Its
$1091\ \rm s$ period lies between that of normal magnetars and that of the
magnetar inside RCW 103. Combining the radio emission and timing requirements,
GLEAM-X J1627 may be a radio-loud magnetar spun down by a fallback disk.
Figure 1: Rotational evolution of magnetars in the presence of a fallback
disk. The magnetic field is chosen as $4\times 10^{14}\ \rm G$. The solid,
dashed, and dotted lines are for initial disk mass of $10^{-3}\ \rm
M_{\odot}$, $10^{-4}\ \rm M_{\odot}$, $10^{-5}\ \rm M_{\odot}$, respectively.
GLEAM-X J1627 is represented by a line, since its age is unknown. The two
magnetars with possible fallback disks (4U 0142+61 and 1E 2259+586) and the
magnetar inside RCW 103 with a pulsation period of $6.6$ hours are also shown.
The calculations are stopped at an age of $2\times 10^{4}\ \rm yr$, which is
the typical age of fallback disks.
### 2.5 Modification of pulsar death line for long period radio pulsars
For a large scale dipole magnetic field, the potential drop across the polar
cap with angular extent $\theta_{\rm pc}$ is (Ruderman & Sutherland 1975; Tong
2016):
$\Phi_{\rm max}=\frac{B_{\rm p}R^{2}\Omega}{2c}\sin^{2}\theta_{\rm pc}.$ (9)
The dipole field line equation is: $r=r_{e}\sin^{2}\theta$, where $r_{e}$ is
the maximum radial extent of the field lines. When the light cylinder is
chosen as the maximum radial extent, the corresponding maximum acceleration
potential is the commonly reported case, shown in eq.(7). This is the
fiducial pulsar death line (Zhou et al. 2017), shown in figure 2. According
to this fiducial pulsar death line, GLEAM-X J1627 already lies below the death
line. The question is: how can a 1091s neutron star still have radio
emissions?
There are two possible physical effects that may help overcome this
difficulty: an active fallback disk or a twisted magnetosphere. If the
fallback disk around GLEAM-X J1627 is still active, then both effects can
contribute. If the fallback disk is no longer active, and GLEAM-X J1627 can
now be treated as an isolated magnetar, then only the latter effect is
possible. Whether the fallback disk is active or not is not known at present
(Ronchi et al. 2022; Gencali et al. 2022; Rea et al. 2022).
(1) Death line modified by a fallback disk. In the disk accretion case, the
magnetospheric radius defines the maximum radial extent of the closed field
lines (Ghosh & Lamb 1979; Shapiro & Teukolsky 1983). In accretion equilibrium,
the corotation radius is equal to the magnetospheric radius (Fu & Li 2013).
Therefore, the corotation radius defines the maximum radial extent of the
closed field lines. Then the maximum acceleration potential across the polar
cap is:
$\Phi_{\rm max,disk}=\frac{B_{\rm p}R^{3}\Omega^{2}}{2c^{2}}\frac{R_{\rm
lc}}{R_{\rm co}},$ (10)
where $R_{\rm lc}$ and $R_{\rm co}$ are the light cylinder radius and
corotation radius, respectively. In the presence of fallback disk accretion,
the potential drop across the polar cap is enhanced by a factor $R_{\rm
lc}/R_{\rm co}$. And the definition of pulsar death line will be modified by
the presence of a fallback disk: $\Phi_{\rm max,disk}\equiv 10^{12}\ \rm V$.
For GLEAM-X J1627 with a pulsation period of $1091\ \rm s$, the potential drop
is $\Phi_{\rm max,disk}=1.6\times 10^{11}B_{14}R_{6}^{3}\ \rm V$. For a
magnetic field of several times $10^{14}\ \rm G$, the potential drop can
approach the critical value of $10^{12}\ \rm V$. Therefore, considering the
presence of a fallback disk, GLEAM-X J1627 may still have a high enough
potential to accelerate particles and produce radio emission. Its transient
nature may be because it lies near the pulsar death line.
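As a quick sanity check of the numbers above, the disk-enhanced potential can
be evaluated directly; the following Python sketch (with illustrative fiducial
values $B=10^{14}\ \rm G$, $R=10^{6}\ \rm cm$, and $M=1.4\ \rm M_{\odot}$, all
our assumptions) reproduces the quoted $1.6\times 10^{11}\ \rm V$:

```python
import math

G, c = 6.674e-8, 2.998e10            # CGS units
M = 1.4 * 1.989e33                   # neutron star mass [g]
B, R, P = 1e14, 1e6, 1091.0          # field [G], radius [cm], period [s]

Omega = 2 * math.pi / P
R_lc = c / Omega                             # light cylinder radius [cm]
R_co = (G * M / Omega**2) ** (1.0 / 3.0)     # corotation radius [cm]

phi_lc = B * R**3 * Omega**2 / (2 * c**2)    # fiducial polar-cap potential [statvolt]
phi_disk = phi_lc * (R_lc / R_co)            # eq.(10) [statvolt]

print(f"Phi_max,disk = {phi_disk * 299.79:.2e} V")   # ~1.6e11 V, as quoted above
```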
Normally, radio emission ceases during accretion, as demonstrated by the
transitional millisecond radio pulsars (Papitto & de Martino 2022). For
accreting neutron stars, the accretion may only occur in an annular region of
the polar cap (Ghosh & Lamb 1978; Frank et al. 2002), due to the finite width
of the boundary layer at the magnetospheric radius. Therefore, the core
region of the polar cap may still permit particle acceleration and radio
emission. This possibility was originally discussed in the fallback disk model
for the observations of anomalous X-ray pulsars and soft gamma-ray repeaters
(Ertan et al. 2009; Trumper et al. 2010), as an alternative to the magnetar
model. The difference between fallback accreting neutron stars and normal
accreting neutron stars may be that the former are spinning down (instead of
spinning up) due to the decreasing mass accretion rate of the fallback disk
(Chatterjee et al. 2000; Alpar 2001).
(2) Death line for a twisted magnetic field. Magnetars may have twisted
magnetic fields compared with those of normal pulsars (Thompson et al. 2002;
Beloborodov 2009; Pavan et al. 2009). A twisted magnetic field will result in
a larger polar cap (Tong 2019), which in turn results in a larger potential
drop across the polar cap.
For a twisted dipole field, the radial dependence of the magnetic field is
$B(r)\propto r^{-(2+n)}$ (Wolfson 1995), where $n=1$ corresponds to the dipole
case, $n=0$ to the split monopole case, and $0<n<1$ to a twisted dipole. Due
to the inflation of field lines in the radial direction of a twisted dipole
field, more field lines become open and a larger polar cap is expected (Tong
2019). According to eq.(12) in Tong (2019), the polar cap for a twisted
dipole field is:
$\sin^{2}\theta_{\rm pc}\approx\left(\frac{R}{R_{\rm lc}}\right)^{n}.$ (11)
Again, $n=1$ corresponds to the dipole case. According to eq.(9), the maximum
acceleration potential for a twisted dipole field is:
$\Phi_{\rm max,twist}=\frac{B_{\rm
p}R^{3}\Omega^{2}}{2c^{2}}\left(\frac{R_{\rm lc}}{R}\right)^{1-n}.$ (12)
For $n=1$, the maximum acceleration potential returns to the dipole case. The
death line in the case of a twisted dipole field may be defined as: $\Phi_{\rm
max,twist}\equiv 10^{12}\ \rm V$.
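The effect of the twist can be illustrated numerically; the sketch below
(parameter values are illustrative assumptions) evaluates eq.(12) for several
values of $n$, showing that a stronger twist (smaller $n$) raises the
potential and thus lowers the death line:

```python
import math

c = 2.998e10                         # CGS
B, R, P = 4e14, 1e6, 1091.0          # illustrative field [G], radius [cm], period [s]
Omega = 2 * math.pi / P
R_lc = c / Omega

def phi_twist(n):
    """Maximum acceleration potential of eq.(12) for B(r) ~ r^-(2+n), in volts."""
    phi = B * R**3 * Omega**2 / (2 * c**2) * (R_lc / R) ** (1 - n)
    return phi * 299.79              # statvolt -> V

for n in (1.0, 0.8, 0.5):
    print(f"n = {n}: Phi_max,twist = {phi_twist(n):.2e} V")
# n = 1 recovers the pure dipole; a stronger twist (smaller n) raises the
# potential and lowers the death line on the P-Pdot diagram.
```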
The distribution of long period radio pulsars on the $P$-$\dot{P}$ diagram is
shown in figure 2. The five long period radio pulsars include GLEAM-X J1627
(Hurley-Walker et al. 2022), the recently discovered $76\ \rm s$ radio pulsar
PSR J0901$-$4046 (Caleb et al. 2022), and the three previously known long
period radio pulsars (Tan et al. 2018). The fiducial pulsar death line, the
death line modified by the presence of a fallback disk, and the death line
for a twisted dipole field (for $n=0.8$) are shown. The presence of a fallback
disk or a twisted magnetic field will lower the position of the death line on
the $P$-$\dot{P}$ diagram. These two effects may explain why GLEAM-X J1627
and other long period radio pulsars can still have radio emission.
Figure 2: Distribution of long period radio pulsars (red circles) on the
$P$-$\dot{P}$ diagram of pulsars. The fiducial pulsar death line, the death
line modified by the fallback disk, and the death line for a twisted dipole
field ($n=0.8$) are also shown. It can be seen that a fallback disk or a
twisted magnetosphere may help to explain why long period radio pulsars can
still have radio emission. The $P$-$\dot{P}$ diagram of various pulsar-like
objects is updated from figure 2 in Kou et al. (2019).
### 2.6 Constraint on the magnetic field and disk mass
The major inputs for a neutron star+fallback disk system are the neutron
star's magnetic field strength and the initial disk mass (Chatterjee et al.
2000; Alpar 2001; Wang et al. 2006; Tong et al. 2016). The light cylinder
radius, magnetospheric radius, and mass accretion rate (for a self-similar
fallback disk) can all be expressed analytically. Therefore, some analytical
constraints on the magnetic field and initial disk mass can be obtained.
A neutron star with a fallback disk will (1) first spin down under its own
magnetic dipole field, (2) enter the propeller regime and be quickly spun
down, and (3) reach accretion equilibrium with the disk (Tong et al. 2016).
In order for the fallback disk to enter the neutron star's light cylinder,
the magnetospheric radius should be smaller than the light cylinder radius:
$R_{\rm m}(t)\leq R_{\rm lc}(t),$ (13)
where the two radii both evolve with time. The light cylinder radius is
$R_{\rm lc}=P(t)c/2\pi\propto\mu\ t^{1/2}$, where $\mu$ is the magnetic dipole
moment; the period evolution with time can be approximated by dipole braking
(eq.(11) in Tong 2016; eq.(5.18) in Lyne & Graham-Smith 2012) before the star
interacts with the fallback disk. The magnetospheric radius scales as $R_{\rm
m}\propto\mu^{4/7}\ t^{5/14}$ (Tong et al. 2016; see footnote 7 there for the
definition of the magnetospheric radius and eq.(4) there for the accretion
rate as a function of time). The lower limit on the magnetic field strength
in order for the disk to enter the neutron star light cylinder is:
$B\geq 4\times 10^{13}\left(\frac{M_{\rm d,0}}{10^{-3}\ \rm
M_{\odot}}\right)^{-2/3}\left(\frac{t}{10^{4}\ \rm yr}\right)^{-1/3}\ \rm G,$
(14)
where $M_{\rm d,0}$ is the initial disk mass, $t$ is the typical age of a
fallback disk. The initial mass of the fallback disk may be in the range
$(10^{-6},\ 0.1)\ \rm M_{\odot}$ (Michel 1988; Chevalier 1989; Wang et al.
2006; Perna et al. 2014).
The neutron star will be quickly spun down during the ejector phase and reach
accretion equilibrium with the fallback disk. When the magnetospheric radius
is equal to the corotation radius, the corresponding period is defined as the
equilibrium period (eq.(9) in Tong et al. 2016):
$P_{\rm eq}=915B_{15}^{6/7}\dot{M}_{\rm acc,17}^{-3/7}\ {\rm s}\propto
B^{6/7}t^{3\alpha/7},$ (15)
where $\alpha=5/4$ for a Kramers opacity dominated disk. In order to spin the
neutron star down to the observed pulsation period in less than the typical
age $t$, it is required that $P_{\rm eq}\geq P_{\rm obs}$, where $P_{\rm
obs}$ is the observed pulsation period. The lower limit on the magnetic field
is:
$\scriptsize B\geq 3.7\times 10^{14}\left(\frac{M_{\rm d,0}}{10^{-3}\ \rm
M_{\odot}}\right)^{1/2}\left(\frac{t}{10^{4}\ \rm
yr}\right)^{-5/8}\left(\frac{P_{\rm obs}}{10^{3}\ \rm s}\right)^{7/6}\ \rm G.$
(16)
The above two constraints on the magnetic field as a function of initial disk
mass are plotted in figure 3. As can be seen there, for a longer pulsation
period $P_{\rm obs}$ the required magnetic field is higher. This is why
magnetars are always invoked for long period pulsars, both isolated and
accreting ones. For a self-similar fallback disk, the initial disk mass is
proportional to the initial mass accretion rate (eq.(2) in Tong et al. 2016).
Therefore, figure 3 here and figure 5 in Ronchi et al. (2022) are consistent
with each other: figure 5 in Ronchi et al. (2022) presents specific
calculations for GLEAM-X J1627, while figure 3 here gives general constraints
on fallback accreting neutron stars.
From figure 3, more analytical constraints can be obtained. In the allowed
region, there exists a lower limit on the magnetic field. Combining eq.(14)
and eq.(16), the lower limit on the magnetic field is:
$B\geq 1.4\times 10^{14}\left(\frac{P_{\rm obs}}{10^{3}\ \rm
s}\right)^{2/3}\left(\frac{t}{10^{4}\ \rm yr}\right)^{-1/2}\ \rm G.$ (17)
For a longer period, the required magnetic field is also higher; the
intersection point between the two lines moves up and to the left. The
initial mass of the fallback disk may have a lower limit of about $10^{-6}\
\rm M_{\odot}$ (Michel 1988; Chevalier 1989; Wang et al. 2006; Perna et al.
2014), and the neutron star magnetic field may have an upper limit of about
$10^{16}\ \rm G$ (Duncan & Thompson 1992; Olausen & Kaspi 2014). Combining
these two constraints, there exists an upper limit on the period of fallback
accreting neutron stars. From eq.(16), and considering the limits on disk
mass and magnetic field, the upper limit on the neutron star period is:
$\scriptsize P_{\rm obs}\leq 3\times 10^{5}\left(\frac{B}{10^{16}\ \rm
G}\right)^{6/7}\left(\frac{M_{\rm d,0}}{10^{-6}\ \rm
M_{\odot}}\right)^{-3/7}\left(\frac{t}{10^{4}\ \rm yr}\right)^{15/28}\ \rm s,$
(18)
which is about several days for a disk age of about $10^{4}$ years. These
analytical constraints can be applied to more long period radio pulsars in
the future.
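For convenience, the constraints of eqs.(14) and (16)-(18) can be collected
into a small script; the following Python sketch (the specific parameter
values are illustrative assumptions) evaluates them for a GLEAM-X J1627-like
case:

```python
def B_min_enter_lc(Md0_em3, t4):
    """Eq.(14): minimum B [G] for the disk to enter the light cylinder."""
    return 4e13 * Md0_em3 ** (-2.0 / 3.0) * t4 ** (-1.0 / 3.0)

def B_min_spindown(Md0_em3, t4, Pobs_e3):
    """Eq.(16): minimum B [G] to reach the observed period within age t."""
    return 3.7e14 * Md0_em3 ** 0.5 * t4 ** (-5.0 / 8.0) * Pobs_e3 ** (7.0 / 6.0)

def B_min_overall(Pobs_e3, t4):
    """Eq.(17): lower limit on B combining eqs.(14) and (16)."""
    return 1.4e14 * Pobs_e3 ** (2.0 / 3.0) * t4 ** (-0.5)

def P_max(B16, Md0_em6, t4):
    """Eq.(18): maximum period [s] of fallback accreting neutron stars."""
    return 3e5 * B16 ** (6.0 / 7.0) * Md0_em6 ** (-3.0 / 7.0) * t4 ** (15.0 / 28.0)

# GLEAM-X J1627-like case: P_obs = 1.091e3 s, t = 1e4 yr, M_d0 = 1e-3 M_sun
print(f"B >= {B_min_enter_lc(1, 1):.1e} G (disk enters light cylinder)")
print(f"B >= {B_min_spindown(1, 1, 1.091):.1e} G (spin down to 1091 s)")
print(f"B >= {B_min_overall(1.091, 1):.1e} G (combined lower limit)")
print(f"P <= {P_max(1, 1, 1):.1e} s (~several days)")
```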
Figure 3: Constraints on the magnetic field and initial disk mass. The black
solid line is the lower limit on the magnetic field for the disk to enter the
neutron star light cylinder (eq.14). The blue solid line is the lower limit
on the magnetic field required to spin the star down to the observed
pulsation period (eq.16), for $P_{\rm obs}=10^{3}\ \rm s$. The blue dashed
line is for $P_{\rm obs}=10^{4}\ \rm s$. The typical age of the fallback disk
is chosen as $10^{4}\ \rm yr$.
## 3 Discussions
As an alternative to the white dwarf model, the long period radio transient
GLEAM-X J1627 is modeled as a radio-loud magnetar spun down by a fallback
disk. Future observations may help to discriminate between the different
modelings, as discussed below.
### 3.1 Comparison with other modelings
The radio emission and long pulsation period of GLEAM-X J1627 can both be
explained naturally in the white dwarf model (Loeb & Maoz 2022; Katz 2022).
The physics of white dwarf pulsars may be similar to that of pulsars
(Goldreich & Julian 1969; Ruderman & Sutherland 1975). Optical observations
may help to discriminate between the white dwarf and the neutron star model
(Hurley-Walker et al. 2022; Rea et al. 2022). Since neutron stars can sustain
shorter periods, future observations of more radio transients with shorter
periods may also help to clarify whether they are neutron stars or white
dwarfs.
It cannot be excluded that the long pulsation period of GLEAM-X J1627 is due
to precession (Eksi & Sasmaz 2022). However, the exactness of the period may
favor a rotational or orbital origin (Hurley-Walker et al. 2022). Our
previous experience with pulsars, magnetars, and fast radio bursts tells us
that precession may only result in quasi-periodicity (Stairs et al. 2000;
Makishima et al. 2019; Tong et al. 2020). Future period observations of more
sources may tell us whether their periods are exact or quasi-periodic.
Furthermore, if two periods can be found in one source (one spin period plus
one modulation period), then a precession or orbital origin may be preferred.
A normal neutron star (with $B\sim 10^{12}\ \rm G$) with a fallback disk may
also explain the long period of GLEAM-X J1627 (Gencali et al. 2022). The
accretion equilibrium period depends on both the magnetic field and the
accretion rate (see eq.(15)): $P_{\rm eq}\propto B^{6/7}\dot{M}^{-3/7}$. For
a low magnetic field, a low mass accretion rate is required to produce the
same period. Then the required initial disk mass should be smaller and the
typical age of the system should be larger. This is consistent with the
quantitative result of Gencali et al. (2022). The difference between Gencali
et al. (2022) and the calculation here (section 2.4) may be due to different
modeling of the disk evolution with time and different accretion torques.
For a normal neutron star at a period of $1091\ \rm s$, it is unclear whether
it can lie above the pulsar death line (see discussions in section 2.5). In
our opinion, this is one difficulty for a normal neutron star. Future
period-derivative observations of more sources may give us some information
about the age of the neutron star. However, the theoretical period-derivative
depends strongly on whether the disk is still active or not; the
period-derivative constraint may therefore be model dependent. For a young
neutron star+fallback disk system, the surrounding supernova remnant may
still be visible (De Luca et al. 2006). Therefore, future detection of a
supernova remnant associated with sources like GLEAM-X J1627 would support a
young neutron star origin.
H.Tong would like to thank Gao Yong for helpful discussions on precession of
magnetars. This work is supported by National SKA Program of China (No.
2020SKA0120300) and NSFC (12133004).
## References
* (1) Alpar, M. A. 2001, ApJ, 554, 1245
* (2) Anderson, G. E., Gaensler, B. M., Slane, P. O., et al. 2012, ApJ, 751, 53
* (3) Ashton, G., Jones, D. I., & Prix, R. 2017, MNRAS, 467, 164
* (4) Beloborodov, A. M. 2009, ApJ, 703, 1044
* (5) Camilo, F., Ransom, S. M., Halpern, J. P., et al. 2006, Nature, 442, 892
* (6) Chatterjee, P., Hernquist, L., & Narayan, R. 2000, ApJ, 534, 373
* (7) Chen K., & Ruderman M. 1993, ApJ, 402, 264
* (8) Chevalier, R. A. 1989, ApJ, 346, 847
* (9) CHIME/FRB collaboration: Amiri, M., Andersen, B. C., Bandura, K. M., et al. 2020, Nature, 582, 351
* (10) Caleb, M., Heywood, I., Rajwade, K., et al. 2022, NatAs, 6, 828
* (11) Coti Zelati, F., Rea, N., Pons, J. A., et al. 2018, MNRAS, 474, 961
* (12) D’Ai, A., Evans, P. A., Burrows, P. A., et al. 2016, MNRAS, 463, 2394
* (13) De Luca, A., Caraveo, P. A., Mereghetti, S., et al. 2006, Science, 313, 814
* (14) Duncan R. C., & Thompson C. 1992, ApJ, 392, L9
* (15) Eksi, K. Y., & Sasmaz, S. 2022, arXiv:2202.05160
* (16) Erkut, M. H. 2022, MNRAS, 514, L41
* (17) Ertan, U., Eksi, K. Y., Erkut, M. H., & Alpar, M. A. 2009, ApJ, 702, 1309
* (18) Frank, J., King, A., & Raine, D. 2002, Accretion power in astrophysics, Cambridge University Press, Cambridge
* (19) Fu, L., & Li, X. D. 2013, ApJ, 775, 124
* (20) Gao, Y., Shao, L., Xu, R., et al. 2020, MNRAS, 498, 1826
* (21) Gencali, A. A., Ertan, U., & Alpar, M. A. 2022, MNRAS, 513, L68
* (22) Ghosh, P., & Lamb, F. K. 1978, ApJL, 233, L83
* (23) Ghosh, P., & Lamb, F. K. 1979, ApJ, 234, 296
* (24) Goldreich, P., & Julian, W. H. 1969, ApJ, 157, 869
* (25) Hurley-Walker, N., Zhang, X., Bahramian, A., et al. 2022, Nature, 601, 526
* (26) Ioka, K., & Zhang, B. 2020, ApJL, 893, L26
* (27) Katz, J. I. 2022, arXiv:2203.08112
* (28) Kaplan, D. L., Chakrabarty, D., Wang, Z. et al. 2009, ApJ, 700, 149
* (29) Kou, F. F., Tong, H., Xu, R. X., & Zhou, X. 2019, ApJ, 876, 131
* (30) Levin, L., Bailes, M., Bates, S., et al. 2010, ApJL, 721, L33
* (31) Loeb, A., & Maoz, D. 2022, RNAAS, 6, 27
* (32) Lyne, A. G., & Graham-Smith, F. 2012, Pulsar astronomy (4th ed.), Cambridge University Press, Cambridge
* (33) Lyutikov, M., Barkov, M., & Giannios, D. 2020, ApJL, 893, L39
* (34) Makishima, K., Enoto, T., Hiraga, H. S., et al. 2014, PRL, 112, 171102
* (35) Makishima, K., Murakami, H., Enoto, T., & Nakazawa, K. 2019, PASJ, 71, 15
* (36) Malheiro, M., Rueda, J. A., & Ruffini, R. 2012, PASJ, 65, 56
* (37) Michel, F. C. 1988, Nature, 333, 644
* (38) Olausen, S. A., & Kaspi, V. M. 2014, ApJS, 212, 6
* (39) Paczynski, B. 1990, ApJL, 365, L9
* (40) Papitto, A., & de Martino, D. 2022, Transitional millisecond pulsars, in S. Bhattacharyya et al. (eds.), Millisecond Pulsars, ASSL, 465
* (41) Pavan, L., Turolla, R., Zane, S., & Nobili, L. 2009, MNRAS, 395, 753
* (42) Perna, R., Duffell, P., Cantiello, M., et al. 2014, ApJ, 781, 119
* (43) Qiao, G. J., Xue, Y. Q., Xu, R. X., et al. 2003, A&A, 407, L25
* (44) Rajwade, K. M., Mickaliger, M. B., Stappers, B. W., et al. 2020, MNRAS, 495, 3551
* (45) Rea, N., Esposito, P., Turolla, R., et al. 2010, Science, 330, 944
* (46) Rea, N., Borghese, A., Esposito, P., et al. 2016, ApJ, 828, L13
* (47) Rea, N., Coti Zelati, F., Dehman, C., et al. 2022, arXiv:2210.01903 (ApJ in press)
* (48) Ronchi, M., Rea, N., Graber, V., Hurley-Walker, N. 2022, ApJ, 934, 184
* (49) Ruderman, M. A., & Sutherland, P. G. 1975, ApJ, 196, 51
* (50) Sedrakian, A., Wasserman, I., & Cordes, J. M. 1999, ApJ, 524, 341
* (51) Shaham, J. 1977, ApJ, 214, 251
* (52) Shapiro, S. L., & Teukolsky S. A. 1983, Black holes, white dwarfs, and neutron stars, John Wiley & Sons, New York
* (53) Stairs, H., Lyne, A. G., & Shemar, S. L. 2000, Nature, 406, 484
* (54) Szary, A., Zhang, B., Melikidze, G. I., et al. 2014, ApJ, 784, 59
* (55) Tan, C. M., Bassa, C. G., Cooper, S., et al. 2018, ApJ, 866, 54
* (56) Thompson, C., Lyutikov, M., & Kulkarni, S. R. 2002, ApJ, 574, 332
* (57) Tong, H. 2016, SCPMA, 59, 5752 (arXiv:1506.04605)
* (58) Tong, H., Wang, W., Liu, X. W., & Xu, R. X. 2016, ApJ, 833, 265
* (59) Tong, H. 2019, MNRAS, 489, 3769
* (60) Tong, H., Wang, W., & Wang, H. G. 2020, RAA, 20, 142
* (61) Trumper, J. E., Zezas, A., Ertan, U., & Kylafis, N. D. 2010, A&A, 518, A46
* (62) Vigano, D., Rea, N., Pons, J. A., et al. 2013, MNRAS, 434, 123
* (63) Wang, Z., Chakrabarty, D., & Kaplan, D. L. 2006, Nature, 440, 772
* (64) Wolfson, R. 1995, ApJ, 443, 810
* (65) Yang, H., & Zou, Y. C. 2020, ApJL, 893, L31
* (66) Zhang, B., Harding, A. K., & Muslimov, A. G. 2000, ApJ, 531, L135
* (67) Zhou, X., Tong, H., Zhu, C., & Wang, N. 2017, MNRAS, 472, 2403
# PIM-QAT: Neural Network Quantization for Processing-In-Memory (PIM) Systems
Qing Jin1,*, Zhiyu Chen2,*, Jian Ren3, Yanyu Li1, Yanzhi Wang1, Kaiyuan Yang2
1Northeastern University, 2Rice University, 3Snap Inc.
*Equal contribution
###### Abstract
Processing-in-memory (PIM), an increasingly studied neuromorphic hardware,
promises orders-of-magnitude energy and throughput improvements for deep
learning inference. Leveraging the massively parallel and efficient analog
computing
inside memories, PIM circumvents the bottlenecks of data movements in
conventional digital hardware. However, an extra quantization step (i.e. PIM
quantization), typically with limited resolution due to hardware constraints,
is required to convert the analog computing results into digital domain.
Meanwhile, non-ideal effects extensively exist in PIM quantization because of
the imperfect analog-to-digital interface, which further compromises the
inference accuracy. Due to hardware limitations, PIM systems decompose the
bulky matrix multiplication into smaller subsets, making the computing flow
fundamentally different from the conventionally quantized models. In this
paper, we propose a method for training quantized networks to incorporate PIM
quantization, which is ubiquitous to all PIM systems. Specifically, we propose
a PIM quantization aware training (PIM-QAT) algorithm, and introduce rescaling
techniques during backward and forward propagation by analyzing the training
dynamics to facilitate training convergence. We also propose two techniques,
namely batch normalization (BN) calibration and adjusted precision training,
to suppress the adverse effects of non-ideal linearity and stochastic thermal
noise involved in real PIM chips. Our method is validated on three mainstream
PIM decomposition schemes, and physically on a prototype chip. Compared with
directly deploying conventionally trained quantized models on PIM systems,
which fails because it does not take this extra quantization step into
account, our method provides significant improvement. It also achieves
inference accuracy on PIM systems comparable to that of conventionally
quantized models on digital hardware, across the CIFAR10 and CIFAR100
datasets, using various network depths for the most popular network topology.
## 1 Introduction
Recent progress of deep learning has witnessed great success in a wide range
of applications at the cost of enormous computations and energy budget. To
alleviate the resource constraints and enable deep learning inference on
pervasive mobile and edge devices, extensive research has been conducted on
algorithm optimizations for conventional digital hardware (e.g. GPU, CPU),
with the goals of compressing models and reducing the number of operations
(Choi et al.,, 2018; Jin et al.,, 2019; 2020; Liu et al.,, 2020; Ye et al.,,
2018; Zhang et al.,, 2018; Zhou et al.,, 2016; Sun et al.,, 2020; Mishra et
al.,, 2017). On the other hand, hardware innovations for deep learning focus
on building dedicated devices with optimized dataflow and data reuse to
minimize data movement (Chen et al.,, 2016; 2014; Du et al.,, 2015; Jouppi et
al.,, 2017), which is the well-known energy and latency bottleneck in deep
learning and many other data-centric computations (Horowitz,, 2014).
Figure 1: Comparison between conventional digital systems (left) and the
processing-in-memory (PIM) systems (middle), and their quantization effect
(right). Unlike conventional digital systems where quantization is only
applied once on both inputs and weights for efficient integer convolution,
processing-in-memory (PIM) systems require an extra quantization due to the
limited resolution of the analog-to-digital interface. As shown on the right,
unlike conventional digital quantization, which can flexibly quantize a
sub-range with an arbitrarily small LSB via scaling and clipping, PIM quantization
in state-of-the-art PIM systems (Rekhi et al.,, 2019; Biswas and
Chandrakasan,, 2019; Jia et al., 2021b, ; Lee et al., 2021b, ; Lee et al.,
2021a, ) typically performs direct bit-truncating (i.e. discarding the LSBs
and keeping the MSBs), mainly because accurate scaling operations in analog
domain will lead to unaffordable energy and area overhead that is potentially
even larger than the whole PIM system (Lee et al., 2021a, ). This direct bit-
truncating introduces significant information loss (Rekhi et al.,, 2019) and
severely deteriorates the accuracy of models running on it. PIM quantization
is thus drastically different from and more challenging than its digital
counterpart. Note that the input to PIM quantization can have a much smaller
range than 32-bit integers; here we use “INT32” to denote the most general
case.
_Processing in-memory_ (PIM), inspired by neuromorphic engineering, attracts
increasing attention as a potential hardware solution to data movement
bottlenecks (Ambrogio et al.,, 2018; Ielmini and Wong,, 2018; Jia et al.,,
2020; Prezioso et al.,, 2015; Xue et al.,, 2020; Yao et al.,, 2020; Zhang et
al.,, 2017). By performing computations directly inside the weight storage
memories, PIM promises significantly reduced data traffic between the memory
and computing units. The merits of PIM over conventional digital hardware are
threefold. First, the data movement energy and latency can be alleviated.
Second, massively parallel computing, like multiply-and-accumulate (MAC), in
memory arrays greatly amortizes total energy and area. Third, the computation
in memory is essentially analog, which is known to be more efficient than
digital computation at low precision. As an example, a recent PIM
demonstration (Yao et al.,, 2020) achieves 110$\times$ higher energy
efficiency and 30$\times$ better compute density than a TESLA V100 GPU.
Meanwhile, PIM systems can be built upon various types of integrated memory
technologies, from static random-access memory (SRAM), which scales well with
Moore’s law, to emerging non-volatile memories that store an analog weight in
a tiny unit, e.g. resistive random-access memory (ReRAM) (Prezioso et al.,,
2015; Xue et al.,, 2020; Yao et al.,, 2020) and phase change memory (PCM)
(Ambrogio et al.,, 2018; Joshi et al.,, 2020).
Despite the forthcoming efficiency and throughput gains, PIM systems require
an extra quantization step to digitize the analog MAC results, because the
high-precision scaling multiplication is more efficient in the digital domain
(see Fig. 1). However, this extra quantization has limited resolution
(typically 5-8 bits) due to hardware constraints and thus leads to
significant inference accuracy loss. Moreover, as shown on the right of Fig.
1, conventional digital quantization is more flexible to quantize in a very
small sub-range of the whole output by scaling, clipping and rescaling before
bit-truncating, which effectively achieves an arbitrarily small LSB. On the
contrary, PIM quantization involved in modern PIM systems (Rekhi et al.,,
2019; Biswas and Chandrakasan,, 2019; Jia et al., 2021b, ; Lee et al., 2021b,
; Lee et al., 2021a, ) typically only supports direct bit-truncating, mainly
because accurate scaling operations in analog domain will lead to unaffordable
energy and area overhead that is potentially even larger than the whole PIM
system (Lee et al., 2021a, ). This direct bit-truncating introduces
significant information loss (Rekhi et al.,, 2019), which makes PIM
quantization drastically different from and more challenging than its digital
counterpart. Furthermore, the inevitable non-idealities in PIM quantization,
including the imperfect linearity and random thermal noise of the
analog-to-digital converters (ADCs), aggravate the side effects of the
low-resolution quantization and reduce a conventionally quantized model to
random guessing, as shown in Fig. 2. Limited by the memory array size and
analog computing precision, as well as to reduce the input range for smaller
quantization errors, PIM systems compute MACs in a $k$-bit-serial fashion
($1\leq k\leq$ input/weight bit-width) and decompose the channels into
multiple subsets. The partial sums of the PIM output are then re-combined via
digital shift-and-adds or accumulation (see Fig. 1). As the computing flow is
fundamentally different from that of conventional models, a new method
specialized for PIM systems, taking the decomposition, quantization, and
recombination into account, is highly desired.
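For concreteness, the following Python sketch illustrates the computing flow
described above for the bit-serial scheme; the subset size, bit-widths, and
the MSB-keeping truncation rule are illustrative assumptions rather than the
behavior of any specific chip:

```python
import math
import numpy as np

def pim_mac_bit_serial(w, x, N=9, b_a=4, b_pim=5, w_max=7):
    """Bit-serial PIM MAC: channel decomposition + MSB-keeping partial-sum ADC."""
    full_bits = math.ceil(math.log2(N * w_max + 1)) + 1   # signed partial-sum range
    drop = max(0, full_bits - b_pim)                      # LSBs lost at the ADC
    y = 0
    for start in range(0, len(w), N):                     # channel decomposition
        ws, xs = w[start:start + N], x[start:start + N]
        for b in range(b_a):                              # bit-serial over input bits
            partial = int(np.dot(ws, (xs >> b) & 1))      # ideal 1-bit analog MAC
            partial = (partial >> drop) << drop           # keep only the MSBs
            y += partial << b                             # shift-and-add recombine
    return y

rng = np.random.default_rng(0)
w = rng.integers(-7, 8, size=27)                          # 4-bit signed weights
x = rng.integers(0, 16, size=27)                          # 4-bit unsigned activations
print(pim_mac_bit_serial(w, x), int(np.dot(w, x)))        # PIM result vs. exact MAC
```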
In this paper, we systematically analyze the discrepancies between PIM and
conventional digital hardware, and propose _PIM quantization aware training_
(PIM-QAT) for deep neural networks. Note that in this work we focus on the
extra quantization step involved in all types of PIM systems, as mentioned
above, and only consider imperfect linearity and stochastic thermal noise.
More sophisticated cases of hard-to-model non-linearities caused by inaccurate
storage of weights or other effects like data retention issues are out of
scope of this work, as they are less general but specific to some types of PIM
systems, such as ReRAM. Our method is ideally suited to SRAM PIM, where only
non-idealities coming from ADCs play a role. However, the problem of PIM
quantization is general enough and ubiquitous to all other types of PIM
systems, which share the same computing flow as ours despite their different
memory technologies and hardware topologies, including PCM and ReRAM PIMs.
Therefore, our method is general and will greatly benefit models running on
these systems. We summarize our contributions as follows:
* •
We propose PIM-QAT based on a basic assumption, the generalized
straight-through estimator (GSTE). GSTE is a generalization of the well-known
straight-through
estimator (STE) (Bengio et al.,, 2013), which has been adopted in conventional
quantization (Zhou et al.,, 2016).
* •
We study the training dynamics unique to the PIM-QAT, and propose scaling
techniques for both forward and backward propagation during training to tackle
convergence problems.
* •
We leverage Batch Normalization (BN) calibration to close the gap between
idealized training and real-case inference on real PIM systems with
fabrication and run-time variations.
* •
We further propose an adjusted precision training algorithm and study the
potential relations between training precision and the effective number of
bits (ENOB) of the actual physical PIM system for inference.
* •
We test the proposed method on three major PIM decomposition schemes (native,
bit serial, differential) that cover the majority of PIM hardware designs. We
extensively evaluate the method on a silicon prototype of SRAM PIM with
realistic non-idealities. A micrograph of the prototype chip is shown in Fig.
2.
Figure 2: Imperfect MAC in a processing in-memory (PIM) system and our
proposed solution (workflow). Compared to conventional digital systems, the
extra low-resolution quantization step in PIM systems introduces significant
information loss, making models trained with conventional quantization
techniques fail. The two non-idealities of this extra quantization, namely
imperfect linearity (which we simply denote as non-linearity) and stochastic
thermal noise, further aggravate the errors of PIM-quantization. On the
contrary, our method takes into account the ideal PIM quantization during
training, and applies BN calibration and adjusted precision training algorithm
to alleviate the impact of the two non-ideal effects. Note that non-idealities
are not directly modeled during training because PIM quantization in different
chips exhibits different linearity and noise behaviors due to inter-die
variations. Therefore, training with limited non-ideal samples may lead to
biased results. Our techniques reduce the accuracy gap and improve the
robustness for real PIM systems.
## 2 Background and Related Work
#### Processing In-Memory Hardware
Low-precision PIM quantization is ubiquitous in state-of-the-art PIM systems.
Depending on different accuracy targets and model sizes, the quantization
resolution ranges from 1-bit (Yin et al.,, 2020) to 8-bit (Jia et al., 2021a,
), and most of them introduce large quantization errors. The possible levels
of the analog MAC results can be up to 67.5$\times$ larger than the
quantization levels (Lee et al., 2021b, ). On the other hand, different PIM
systems adopt different decomposition strategies. The maximum number of
elements ($N$) in one analog MAC is an important parameter because a larger
$N$ brings more energy savings, but also extends the levels of analog MACs
(which is proportional to $N$). In reality, $N$ is selected from 9 (Yoon et
al.,, 2021) to 2304 (Valavi et al.,, 2019), making the effect of channel-wise
decomposition unique in different PIM systems. Meanwhile, weights are stored
in different formats as digital memories (e.g. SRAM) only store 1-bit data in
each cell while analog memories (e.g. ReRAM) have multi-state storage, and
inputs are decomposed depending on the resolution of digital-to-analog
converters (DACs). As a result, different memory topologies lead to different
PIM decomposition schemes and quantization errors. Our proposed method unifies
all the design choices above and tackles the quantization challenges under
various hardware settings.
Despite the potential accuracy loss, PIM is a promising approach for deep
learning applications due to its high energy efficiency. Table 1 summarizes
the efficiency of V100 GPU (Mujtaba,, 2017), TPU (Jouppi et al.,, 2017), ReRAM
PIM (Yao et al.,, 2020), and our SRAM PIM prototype, which represents “peak”
energy efficiency at 100% utilization of the hardware. Training techniques
specific to PIM systems are thus in urgent demand.
Table 1: Energy efficiency of different hardware.
Hardware | V100 GPU | TPU | ReRAM PIM | SRAM PIM (Ours)
---|---|---|---|---
Efficiency (TOPS/W) | 0.1 | 2.3 | 11 | 49.6
#### Analog Computing/PIM Aware Quantization
Several prior studies (Rekhi et al.,, 2019; He et al.,, 2019; Joshi et al.,,
2020; Long et al.,, 2020) improve inference accuracy by incorporating PIM non-
idealities or quantization effects into training. He _et al_. (He et al.,,
2019) and Joshi _et al_. (Joshi et al.,, 2020) develop a noise-injection
approach to tolerate the data storage errors (e.g. conductance drift,
inaccurate data programming, IR drop, etc.) that exist in multi-state non-
volatile memories. However, both studies fail to model PIM quantization in a
practical way: they either ignore the quantization step during inference
(Joshi et al.,, 2020) or assume a power-hungry analog scaling operation (He
et al.,, 2019). Q-PIM (Long et al.,, 2020) simplifies the model quantization
without the need of retraining, yet it ignores all analog non-idealities and
only supports digital PIM platforms, which have limited applications. On the
other hand, Rekhi _et al_. (Rekhi et al.,, 2019) propose
a more general analog/mixed signal (AMS) error model, where PIM quantization
together with its non-idealities are summarized into an additive noise
determined by the effective number of bits (ENOB) of the whole system. Such a
high-level abstraction is broadly applicable to different PIM decomposition
schemes without considering the detailed implementations, but it also renders
sub-optimal results. As shown in Table 2, it is unclear how to estimate ENOB
for complex PIM decomposition schemes such as bit serial and differential.
Meanwhile, different ENOBs require individually trained models, and the
underlying assumption of having a sufficiently large $N$ for central limit
theorem does not hold for many practical PIM systems. In this paper, we
attempt to solve this discrepancy by incorporating a more interpretable and
white-box model for any given PIM hardware in the training procedure.
Table 2: Comparison of training methods for neural networks applied on
processing in-memory (PIM) systems.
Method | Native | Bit Serial | Differential
---|---|---|---
Baseline | ✗ | ✗ | ✗
AMS | ✓ | ✗ | ✗
Ours | ✓ | ✓ | ✓
## 3 PIM Quantization Aware Training
In this section, we first describe a generic model of the extra quantization
involved in typical PIM systems, and introduce our basic assumption -
generalized straight-through estimator (GSTE). Based on these, we propose our
PIM-QAT method (Fig. 2), including two scaling techniques to stabilize
training dynamics, BN calibration to adapt to fabrication variations of PIM
hardware, and an adjusted precision training approach to account for
stochastic thermal noise and imperfect linearity together with its chip-to-
chip variations.
### 3.1 Problem Definition
Multiply-and-accumulate (MAC) is the basic operation involved in typical
neural networks, including convolution, recurrent, fully-connected, as well
as attention layers. Compared to software implementation with a digital
system, where the inner product of weights $W_{i}$ and inputs $x_{i}$ is
given by
$y=\sum\limits_{i=1}^{N}W_{i}x_{i}$, the output of inner product implemented
on a generic PIM system can be formulated as
$\widetilde{y}_{\mathrm{PIM}}=\bm{\mathsf{Q}}(\bm{\mathsf{NL}}(\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i});b_{\mathrm{PIM}})+\varepsilon$
(1)
Here, $\widetilde{Q}_{i}\in[-1,1]$ and $\widetilde{q}_{i}\in[0,1]$ are
quantized weights and activations, with $b_{w}$ and $b_{a}$ bits,
respectively. $\bm{\mathsf{Q}}$ and $\bm{\mathsf{NL}}$ denote quantization and
imperfect linearity, and $\varepsilon$ is the stochastic thermal noise
introduced by the system. $b_{\mathrm{PIM}}$ is the precision for PIM
quantization $\bm{\mathsf{Q}}$. Eqn. (1) represents one MAC operation in PIM
system (see Analog Computing in Fig. 1), and is generic to different PIM
decomposition schemes including native, bit serial, as well as differential
schemes (see Sec. 4, also see Appendix A1). Note that the variations of
$\widetilde{Q}_{i}$ and $\widetilde{q}_{i}$ are not considered here as those
non-idealities are only general in analog memories (e.g., ReRAM) but have
minor effects in digital memories (e.g., SRAM). We leave this feature as a
future investigation.
### 3.2 Generalized Straight-Through Estimator
In order to take full advantage of PIM systems and adapt neural networks to
them, we need to make training aware of $\bm{\mathsf{Q}}$. For this purpose,
we first revisit conventional quantization-aware training
targeting digital accelerators. Generally, in order to back-propagate through
a quantized neural network, where the non-differentiable function
$\mathrm{round}(\cdot)$ is extensively used, the typical practice is to adopt
the straight-through estimator (Bengio et al.,, 2013) as proposed in (Zhou et
al.,, 2016), where for a real input $r_{i}\in[0,1]$, the derivative of
quantized output with respect to the input is given by
$\frac{\partial}{\partial
r_{i}}\Big{(}\frac{1}{2^{k}-1}\mathrm{round}\big{(}(2^{k}-1)r_{i}\big{)}\Big{)}=1$
(2)
Here, $k$ is the number of bits for quantization.
To evaluate the effect of $\bm{\mathsf{Q}}$ involved in PIM systems for both
forward and backward propagation, we first generalize the STE result in
equation (2) to a stronger yet more flexible assumption, which we name the
generalized straight-through estimator (GSTE) and summarize in Assumption 1.
###### Assumption 1 (Generalized STE)
The differential of the round function is given by
$\mathrm{d}\,\mathrm{round}(x)=\xi\cdot\mathrm{d}x$ (3)
where $\xi$ is a scaling factor assigned empirically.
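As an illustration, a minimal PyTorch sketch of GSTE is given below; the
function and variable names are ours, and $\xi=1$ recovers the standard STE
of equation (2):

```python
import torch

class RoundGSTE(torch.autograd.Function):
    """Exact rounding in forward; gradient scaled by xi in backward (eq.(3))."""
    @staticmethod
    def forward(ctx, x, xi):
        ctx.xi = xi
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        return ctx.xi * grad_output, None   # no gradient for the constant xi

def quantize(r, k, xi=1.0):
    """Uniform k-bit quantization of r in [0, 1] with GSTE backward."""
    s = 2 ** k - 1
    return RoundGSTE.apply(r * s, xi) / s

r = torch.rand(4, requires_grad=True)
quantize(r, k=4, xi=0.5).sum().backward()
print(r.grad)                               # every entry equals xi = 0.5
```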
Note that GSTE can also be viewed as a definition for the differential of the
discontinuous function $\mathrm{round}(\cdot)$, and equation (2) can be easily
derived from it by setting $\xi=1$. In practice, $\xi$ can be set to different
values for different scenarios (for example, for different bit-widths or
inputs). We will elaborate more on this point in the following. GSTE will
serve as the basis for our whole analysis, and as shown in the Appendix, from
GSTE we can derive the following theorem for PIM-QAT.
###### Theorem 1 (PIM Quantization Aware Training)
For ideal PIM systems with PIM decomposition schemes including native, bit
serial, as well as differential, where the extra quantization taken into
account during forward propagation is ideal without imperfect linearity or
noise involved, the backward propagation takes exactly the same form as that
for conventional quantization, with only the quantized quantity involved are
adjusted accordingly. Specifically, for a PIM system with quantized weight
$\widetilde{Q}_{i}\in[-1,1]$ of $b_{w}$ bits and quantized input
$\widetilde{q}_{i}\in[0,1]$ of $b_{a}$ bits, the forward and backward
propagation are given by
$\displaystyle\mathbf{Forward\colon}$
$\displaystyle\widetilde{y}_{\mathrm{PIM}}=\bm{\mathsf{Q}}\bigg{(}\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i};b_{\mathrm{PIM}}\bigg{)}$
(4a) $\displaystyle\mathbf{Backward\colon}$
$\displaystyle\mathrm{d}\widetilde{y}_{\mathrm{PIM}}=\xi\cdot\mathrm{d}\bigg{(}\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i}\bigg{)}$
(4b)
respectively, where $N$ is the total number of MACs of the inner product and
$b_{\mathrm{PIM}}$ is PIM bit-width. For conventional quantization with
digital accelerator, we have $b_{\mathrm{PIM}}=+\infty$ and the forward
propagation is reduced to the typical case of
$\widetilde{y}=\sum\limits_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i}$.
Theorem 1 demonstrates that the quantization introduced by PIM systems only
alters the forward propagation and the calculated output values, but does not
change the way derivatives are taken with respect to inputs and weights.
Additionally, it enables awareness of such quantization during gradient
calculation, which is critical for the optimization of neural networks
targeting PIM systems.
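A minimal sketch of the forward pass in equation (4a) is given below; it
reuses RoundGSTE from the sketch above, and the uniform mapping of the MAC
result over $[-N,N]$ is an illustrative assumption:

```python
import torch

def pim_quantize(y, N, b_pim, xi=1.0):
    """Ideal PIM quantization Q(y; b_PIM) of an N-element MAC, y in [-N, N]."""
    step = 2.0 * N / (2 ** b_pim - 1)             # ADC least significant bit
    return RoundGSTE.apply(y / step, xi) * step   # RoundGSTE from the sketch above

def pim_inner_product(q_w, q_a, b_pim, xi=1.0):
    """Forward pass of eq.(4a); q_w in [-1, 1], q_a in [0, 1]."""
    N = q_w.shape[-1]
    y = (q_w * q_a).sum(dim=-1)                   # analog MAC result
    return pim_quantize(y, N, b_pim, xi)          # extra PIM quantization step
```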
### 3.3 Rescaling
With Theorem 1, we are ready to incorporate PIM quantization during training.
However, this does not guarantee good performance, which also relies on a
stable training determined by training dynamics (He et al.,, 2015; Poole et
al.,, 2016; Schoenholz et al.,, 2016; Yang and Schoenholz,, 2017). In a well-
trained model, gradients from different layers should be on the same order to
guarantee backward information propagation, in order to avoid gradient
exploding/vanishing problems (Bengio et al.,, 1994; Hochreiter,, 1991;
Hochreiter et al.,, 2001; Pascanu et al.,, 2013). As shown in Appendix A3, PIM
quantization has a scale-enlarging effect, especially for low bit-width. To
understand the impact of this effect, we first introduce the following
theorem.
###### Theorem 2 (Training Dynamics)
For a neural network composed of repeated blocks, where each block is a
sequential of a fully-connected layer, some nonlinear effect (for example, the
PIM quantization operation), an extra scaling, a batch normalization layer,
and the nonlinear activation $\varphi(\cdot)$, as defined as following
$\displaystyle x^{(l+1)}_{i}$ $\displaystyle=\varphi(y^{(l)}_{i})$ (5a)
$\displaystyle y^{(l)}_{i}$
$\displaystyle=\gamma^{(l)}_{i}\frac{z^{(l)}_{i}-\mu^{(l)}_{i}}{\sigma^{(l)}_{i}}+\beta^{(l)}_{i}$
(5b) $\displaystyle z^{(l)}_{i}$
$\displaystyle=\eta^{(l)}\widetilde{z}^{(l)}_{i}$ (5c)
$\displaystyle\widetilde{z}^{(l)}_{i}$
$\displaystyle=f(W^{(l)}_{ij},x^{(l)}_{j})\sim\rho^{(l)}\sum_{j=1}^{n^{(l)}}W^{(l)}_{ij}x^{(l)}_{j}$
(5d)
where $x^{(l)}$ is the input to the $l$-th block, $W^{(l)}$ is the weight
matrix of the fully-connected layer, ${n^{(l)}}$ is the number of input
neurons, $f$ represents the nonlinear effect, $\rho^{(l)}$ is introduced to
demonstrate the effect of the nonlinearity on the scale of output standard
deviation, $\eta^{(l)}$ is an extra scaling factor introduced and explained in
the following, and $\gamma^{(l)}$, $\beta^{(l)}$, $\sigma^{(l)}$, $\mu^{(l)}$
are parameters and running statistics of the batch norm layer. If the
differential of the nonlinear effect $f$ is given by
$\mathrm{d}\widetilde{z}^{(l)}_{i}=\xi^{(l)}\cdot\mathrm{d}\bigg{(}\sum_{j=1}^{n^{(l)}}W^{(l)}_{ij}x^{(l)}_{j}\bigg{)}$
(6)
where $\xi^{(l)}$ is the scaling factor for backward propagation inside the
$l$-th layer, then for zeroth order approximation (mean-field assumption), the
activation gradient variance ratio between two adjacent layers is given by
$\displaystyle\frac{\mathbb{VAR}[\partial_{x^{(l)}}\mathcal{L}]}{\mathbb{VAR}[\partial_{x^{(l+1)}}\mathcal{L}]}\approx\left(\frac{\xi^{(l)}}{\rho^{(l)}}\right)^{2}\cdot\frac{n^{(l+1)}}{n^{(l)}}$
(7)
Theorem 2 indicates that the scale ratio between activation gradients from two
adjacent layers depends on the scaling factors introduced during forward and
backward for the nonlinear effect.
Based on the results in (7), we find that if we do not introduce an extra
scaling factor as in (6) but follow the conventional practice of STE (in
other words, $\xi^{(l)}=1$ for all $l$), the scale-enlarging effect may cause
gradient exploding/vanishing problems. Proper initialization as proposed in
He et al., (2015) is not effective in this case. Experiments demonstrate that
for some PIM decomposition schemes (such as bit serial and differential) and
sufficiently low bit-widths (such as $7$-bit), training does not converge.
To overcome this problem, we propose to scale the gradient according to (6),
and determine the necessary scale by also calculating the standard deviation
of the result from software quantization. Specifically, the scaling factor in
(6) is given by
$\xi=\sqrt{\frac{\mathbb{VAR}[y_{\mathrm{PIM}}]}{\mathbb{VAR}[y]}}$ (8)
where $y_{\mathrm{PIM}}$ is the result with PIM system and $y$ is that with
conventional software. Note that this only introduces extra computation
during training, as the scale factor is only needed in the backward pass to
stabilize training, and it does not impact the inference procedure.
Experiments demonstrate that this backward scaling solves the problem for
cases that otherwise do not give reasonable results.
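A minimal sketch of estimating the factor in equation (8) is given below; it
reuses pim_quantize from the previous sketch, and estimating the variances
from a single batch is an illustrative simplification:

```python
import torch

def backward_scale(q_w, q_a, b_pim):
    """Estimate xi = sqrt(Var[y_PIM] / Var[y]) from one batch, as in eq.(8)."""
    with torch.no_grad():
        y = (q_w * q_a).sum(dim=-1)                     # software MAC
        y_pim = pim_quantize(y, q_w.shape[-1], b_pim)   # PIM MAC (sketch above)
        xi = (y_pim.var() / y.var().clamp_min(1e-12)).sqrt()
    return xi.item()
```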
Besides scaling the backward pass, we find that scaling during the forward
pass with a predefined constant factor helps training, especially for low
bit-widths, such as those below $5$-bit. Even for higher precision,
introducing extra scaling can still be beneficial. However, as shown in
equation (7), the gradient ratio does not depend on this factor, as it is
absorbed into the running variance of the following batch normalization
layers. We conjecture this is related to the numerical stability of the
computation, but the underlying mechanism is still unclear to us and we leave
it as future work. We list the scaling factors that we find best in practice
in the Appendix.
### 3.4 BN Calibration
So far we have discussed PIM systems with ideal quantization, where the PIM
quantization is perfectly linear and free of stochastic thermal noise. Real
systems exhibit two non-ideal effects. First, circuit non-idealities in the
analog-to-digital conversion degrade the quantization
linearity. Second, random fluctuations in the circuit will add thermal noise
on the quantized output. Moreover, the imperfection accompanying the linear
mapping varies from chip to chip, and a unified model describing such
variation accurately is lacking. On the other hand, direct training with
injected noise can deteriorate the training progress (for example, if the
injected noise is too large), and the noise energy differs among real
systems. Consequently, it is almost impossible to directly account for these
effects during training, especially in backward propagation.
Experiments demonstrate that the non-idealities have the potential to change
the BN statistics (see Appendix A3), and following (Yu and Huang,, 2019), we
propose to use a small portion of training data and calibrate BN running
statistics before evaluation. For both BN calibration and final inference, we
apply exactly the same real-case non-idealities. We find this can
significantly improve the performance, especially when the non-ideal effect is
strong (e.g., more imperfect linearity or larger injected noise). Note that
Joshi et al., (2020) exploit batch normalization statistics calibration for
accuracy retention in PCM systems, which is a different problem from ours. In
Appendix A7, we present more experiments,
where we find that BN calibration is able to reduce the impact of gain and
offset in PIM quantization and thus alleviates hardware calibration efforts.
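A minimal sketch of the BN calibration procedure is given below; the model
and data loader names are placeholders, and running a fixed number of
calibration batches is an illustrative choice:

```python
import torch

def calibrate_bn(model, calib_loader, num_batches=20):
    """Re-estimate BN running statistics under real-chip non-idealities."""
    bn_types = (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d)
    for m in model.modules():
        if isinstance(m, bn_types):
            m.reset_running_stats()
            m.momentum = None          # None -> cumulative moving average
    model.train()                      # BN updates its statistics in train mode
    with torch.no_grad():              # weights are never updated
        for i, (images, _) in enumerate(calib_loader):
            if i >= num_batches:
                break
            model(images)              # forward pass with PIM non-idealities active
    model.eval()
```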
### 3.5 Adjusted Precision Training
Figure 3: Computing error as a function of the standard deviation of additive
noise in our 7-bit PIM chip.
Besides BN calibration, we study the possibility of employing different
precisions for training and inference. The reasoning behind this is that the
non-idealities only affect the least significant bits of the involved
quantization mapping, which effectively reduces the number of distinguishable
output levels of the PIM system. To quantify this reduction, ADC designs
typically use a metric called the effective number of bits (ENOB), which can
be adopted here. As an example, Fig. 3 shows that the standard deviation
of MAC computing errors in a 7-bit PIM system equals that of ideal lower-bit
PIM systems when random noise is added. Note that this adjusted precision
training method accounts for both noise injection and imperfect linearity.
Depending on the quantization bit-width, noise level, and form of imperfect
linearity, the optimal training precision varies but is always expected to be
smaller than the ideal PIM resolution.
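As a rough illustration (our back-of-envelope assumption, not an exact rule
from the chip), the training precision can be adjusted by matching the total
error variance $\Delta^{2}/12+\sigma^{2}$ of a noisy $b$-bit quantizer to the
step size of an ideal lower-bit one:

```python
import math

def adjusted_train_bits(b_pim, sigma_over_lsb):
    """Effective training precision when noise of std sigma (in LSBs) is added."""
    lost = 0.5 * math.log2(1.0 + 12.0 * sigma_over_lsb ** 2)
    return max(1, round(b_pim - lost))

for s in (0.1, 0.35, 1.0):
    print(s, adjusted_train_bits(7, s))   # e.g. a 7-bit chip at varying noise
```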
## 4 Experiments
Table 3: Effect of PIM quantization on accuracy of neural networks (ResNet20
for CIFAR10) trained with different methods for native scheme ($N=9$).
Baseline refers to model trained with conventional quantization method (Jin et
al.,, 2020). AMS refers to model trained with the method in (Rekhi et al.,,
2019).
$b_{\mathrm{PIM}}$ | Method | Acc. | $b_{\mathrm{PIM}}$ | Method | Acc.
---|---|---|---|---|---
3 | Baseline | 8.3 | 6 | Baseline | 89.2
 | AMS | 73.3 | | AMS | 90.3
 | Ours | 81.7 | | Ours | 90.9
4 | Baseline | 27.2 | 7 | Baseline | 91.0
 | AMS | 85.0 | | AMS | 90.7
 | Ours | 87.7 | | Ours | 91.0
5 | Baseline | 80.5 | $+\infty$ | Baseline | 91.6
 | AMS | 89.0 | | |
 | Ours | 90.7 | | |
#### Native Scheme.
We first investigate the possibility of directly applying a conventionally
quantized model on a PIM system, which serves as the baseline for our
comparison. To this end, we take the native scheme as an example, and fix the
number of multiplications per analog MAC to $9$, namely we use a unit channel
of $1$ to split the input channels. We experiment on CIFAR10 with ResNet20,
and the results are summarized in Table 3. Our method significantly
outperforms the baseline, especially for ultra-low bit-widths. As shown in
Table 3, the AMS method in (Rekhi et al.,, 2019) is supposed to work for the
native scheme; it indeed improves over the baseline but shows inferior
performance to ours. These results demonstrate that PIM quantization has a
non-negligible impact on the final accuracy, and it is necessary to take this
quantization into account during training for optimal inference accuracy on
PIM systems.
#### Real Chip Results.
We experiment on CIFAR10 and CIFAR100, with several ResNet models as well as
a modified VGGNet11 following (Jia et al.,, 2020). We also use different
numbers of unit channels, namely $8$ and $16$, to split the input channels,
corresponding to $72$ and $144$ computing units, respectively. As shown in
Table 4, our method provides significantly better results than the baseline.
Specifically, prediction by the baseline models is barely better than a
random guess, meaning that the non-idealities of the real chip corrupt the
behavior of neural networks trained this way. In contrast, our method gives
results comparable to those on a digital system (the software results),
meaning that the trained models are robust to real-case non-idealities.
Moreover, VGGNet shows less accuracy loss than ResNet because the more
redundant model has better tolerance of the real-chip non-idealities; it is
widely used for PIM platforms with high-accuracy requirements (Jia et al.,
2021b, ; Lee et al., 2021b, ). Note that using a smaller $N$ typically leads
to better performance, especially on CIFAR100, at the cost of reduced
throughput and energy efficiency.
Table 4: Accuracy with the 7-bit real chip (with the real chip curve from
Figure A1 and a noise level of 0.35) of the bit-serial PIM system for
different datasets and models. Note that PIM systems are hundreds of times
more efficient than software systems.
Dataset | Model | Method | N | Acc.
---|---|---|---|---
CIFAR10 | ResNet20 | Software | - | 91.6
 | | Baseline | 72 | 13.9
 | | Baseline | 144 | 10.9
 | | Ours | 72 | 89.7
 | | Ours | 144 | 89.1
 | ResNet32 | Software | - | 92.5
 | | Baseline | 72 | 10.0
 | | Baseline | 144 | 10.1
 | | Ours | 72 | 90.6
 | | Ours | 144 | 89.3
 | ResNet44 | Software | - | 92.8
 | | Baseline | 72 | 10.5
 | | Baseline | 144 | 10.0
 | | Ours | 72 | 90.6
 | | Ours | 144 | 90.7
 | ResNet56 | Software | - | 92.4
 | | Baseline | 72 | 10.0
 | | Baseline | 144 | 10.0
 | | Ours | 72 | 90.7
 | | Ours | 144 | 90.4
 | VGGNet11$\dagger$ | Software | - | 93.7
 | | Baseline | 72 | 10.0
 | | Baseline | 144 | 9.9
 | | Ours | 72 | 94.2
 | | Ours | 144 | 94.0
CIFAR100 | ResNet20 | Software | - | 67.0
 | | Baseline | 72 | 1.8
 | | Baseline | 144 | 1.3
 | | Ours | 72 | 62.6
 | | Ours | 144 | 61.8
 | ResNet56 | Software | - | 70.3
 | | Baseline | 72 | 1.0
 | | Baseline | 144 | 1.1
 | | Ours | 72 | 65.3
 | | Ours | 144 | 63.5
 | VGGNet11$\dagger$ | Software | - | NA
 | | Baseline | 72/144 | NA
 | | Ours | 72/144 | NA
* $\dagger$
The architecture is the same as in (Jia et al.,, 2020).
* *
Larger $N$ indicates higher efficiency but more information loss during
quantization.
#### Other PIM Decomposition Schemes.
We further verify our method on the three most common PIM decomposition
schemes: native, differential, and bit serial (see Appendix A1). We
experiment on ideal PIM with different inference resolutions and noise levels.
As shown in Figure 5, we compare our method with the baseline using BN
calibration on ResNet20 with the CIFAR10 dataset. It is clear that for all
schemes, across different resolutions and noise levels, our method is
consistently superior, especially at high noise levels and for the
differential and bit-serial schemes, both of which are more practical and
complex than the native one. This justifies that our proposed method is
applicable to a wide range of PIM implementations.
Figure 4: The desirable training resolutions (TR) for different inference
resolutions (IR) and noise levels, with the bit-serial scheme (ResNet20 on
CIFAR10).
Figure 5: Performance of ResNet20 on CIFAR10 with ideal PIM of different
schemes and PIM resolutions. Note that $N=9$ for the native scheme and
$N=144$ for the differential and bit serial schemes.
#### Adjusted Precision Training.
Here we provide ablation studies on adjusted precision training, using an
ideal PIM system with the bit serial scheme as an example. For different
inference resolutions and noise levels, the best accuracy with the optimal
training resolution is illustrated in Fig. 4, where the accuracy is listed
directly and different colors denote different training precision
adjustments. We find that for low noise levels, it is optimal to train the
model with the same resolution as that used for inference, while for larger
noise it is better to use a smaller one due to the reduced ENOB. Moreover, we
find that the noise level threshold for adjusting the training resolution
depends on the absolute value of the inference resolution: a higher inference
resolution tends to be more sensitive to noise and requires precision
adjustment at a lower noise level threshold. There is a clear correlation
between the precision reduction and the ENOB, but they are not exactly the
same. This should be related to the varying sensitivity of inference to each
MAC operation. A more in-depth analysis of the relation between ENOB and the
training setting is beyond the scope of this work and left for future study.
Our analysis and experiments demonstrate that naively deploying a neural
network quantized with conventional methods on PIM systems is problematic and
ineffective, and that PIM quantization has a non-negligible impact on the
final performance. Incorporating it into training is critical and improves
the accuracy to a large extent. This work also provides a desirable starting
point for future research incorporating hardware-specific behaviors into
algorithm co-design for energy-efficient analog computing systems. Such
efforts will bridge the gap between hardware and software developments to
achieve unprecedented energy efficiency while maintaining competitive neural
network performance.
## 5 Conclusion
In this paper, we systematically study the problem of training a neural
network for deployment on processing in-memory (PIM) systems, a promising
candidate for next-generation deep learning hardware, and we provide a method
for the extra quantization step that is unique to PIM systems yet ubiquitous
across all types of PIM implementations. Specifically, we formulate the
problem and analyze the forward and backward propagation to enable PIM
quantization-aware training. We study the training dynamics of our method,
and propose rescaling techniques for both forward and backward propagation to
avoid gradient exploding/vanishing issues. We also study the discrepancy
between training and inference, where more realistic non-ideal effects, such
as imperfect linearity and stochastic thermal noise, are involved but
difficult to incorporate during backward propagation. To this end, we propose
to leverage the BN calibration technique and introduce adjusted precision
training. Finally, we present experimental results demonstrating the
potential relationship between training and inference bit-widths, together
with the noise level and effective number of bits of the PIM system used for
inference.
## References
* Ambrogio et al., (2018) Ambrogio, S., Narayanan, P., Tsai, H., Shelby, R. M., Boybat, I., Di Nolfo, C., Sidler, S., Giordano, M., Bodini, M., Farinha, N. C., et al. (2018). Equivalent-accuracy accelerated neural-network training using analogue memory. Nature, 558(7708):60–67.
* Bengio et al., (2013) Bengio, Y., Léonard, N., and Courville, A. (2013). Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432.
* Bengio et al., (1994) Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157–166.
* Biswas and Chandrakasan, (2019) Biswas, A. and Chandrakasan, A. P. (2019). Conv-sram: An energy-efficient sram with in-memory dot-product computation for low-power convolutional neural networks. IEEE Journal of Solid-State Circuits, 54(1):217–230.
* Chen et al., (2014) Chen, Y., Luo, T., Liu, S., Zhang, S., He, L., Wang, J., Li, L., Chen, T., Xu, Z., Sun, N., et al. (2014). Dadiannao: A machine-learning supercomputer. In 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture, pages 609–622. IEEE.
* Chen et al., (2016) Chen, Y.-H., Krishna, T., Emer, J. S., and Sze, V. (2016). Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits, 52(1):127–138.
* Choi et al., (2018) Choi, J., Wang, Z., Venkataramani, S., Chuang, P. I.-J., Srinivasan, V., and Gopalakrishnan, K. (2018). Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085.
* Du et al., (2015) Du, Z., Fasthuber, R., Chen, T., Ienne, P., Li, L., Luo, T., Feng, X., Chen, Y., and Temam, O. (2015). ShiDianNao: Shifting vision processing closer to the sensor. In Proceedings of the 42nd Annual International Symposium on Computer Architecture, pages 92–104.
* Gray et al., (2009) Gray, P. R., Hurst, P. J., Lewis, S. H., and Meyer, R. G. (2009). Analysis and design of analog integrated circuits. John Wiley & Sons, 5th edition.
* He et al., (2015) He, K., Zhang, X., Ren, S., and Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034.
* He et al., (2019) He, Z., Lin, J., Ewetz, R., Yuan, J.-S., and Fan, D. (2019). Noise injection adaption: End-to-end reram crossbar non-ideal effect adaption for neural network mapping. In Proceedings of the 56th Annual Design Automation Conference 2019, pages 1–6.
* Hochreiter, (1991) Hochreiter, S. (1991). Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Universität München, 91(1).
* Hochreiter et al., (2001) Hochreiter, S., Bengio, Y., Frasconi, P., Schmidhuber, J., et al. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies.
* Horowitz, (2014) Horowitz, M. (2014). 1.1 computing’s energy problem (and what we can do about it). In 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pages 10–14. IEEE.
* Ielmini and Wong, (2018) Ielmini, D. and Wong, H.-S. P. (2018). In-memory computing with resistive switching devices. Nature Electronics, 1(6):333–343.
* Jia et al., (2021a) Jia, H., Ozatay, M., Tang, Y., Valavi, H., Pathak, R., Lee, J., and Verma, N. (2021a). 15.1 a programmable neural-network inference accelerator based on scalable in-memory computing. In 2021 IEEE International Solid-State Circuits Conference (ISSCC), volume 64, pages 236–238. IEEE.
* Jia et al., (2021b) Jia, H., Ozatay, M., Tang, Y., Valavi, H., Pathak, R., Lee, J., and Verma, N. (2021b). Scalable and programmable neural network inference accelerator based on in-memory computing. IEEE Journal of Solid-State Circuits.
* Jia et al., (2020) Jia, H., Valavi, H., Tang, Y., Zhang, J., and Verma, N. (2020). A programmable heterogeneous microprocessor based on bit-scalable in-memory computing. IEEE Journal of Solid-State Circuits, pages 1–1.
* Jiang et al., (2019) Jiang, Z., Yin, S., Seo, J.-S., and Seok, M. (2019). C3SRAM: In-memory-computing SRAM macro based on capacitive-coupling computing. IEEE Solid-State Circuits Letters, 2(9):131–134.
* Jin et al., (2019) Jin, Q., Yang, L., and Liao, Z. (2019). Towards efficient training for neural network quantization. arXiv preprint arXiv:1912.10207.
* Jin et al., (2020) Jin, Q., Yang, L., Liao, Z., and Qian, X. (2020). Neural network quantization with scale-adjusted training. In The 31st British Machine Vision Conference (BMVC).
* Joshi et al., (2020) Joshi, V., Le Gallo, M., Haefeli, S., Boybat, I., Nandakumar, S. R., Piveteau, C., Dazzi, M., Rajendran, B., Sebastian, A., and Eleftheriou, E. (2020). Accurate deep neural network inference using computational phase-change memory. Nature communications, 11(1):1–13.
* Jouppi et al., (2017) Jouppi, N. P. et al. (2017). In-datacenter performance analysis of a tensor processing unit. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), pages 1–12.
* Lee et al., (2021a) Lee, E., Han, T., Seo, D., Shin, G., Kim, J., Kim, S., Jeong, S., Rhe, J., Park, J., Ko, J. H., and Lee, Y. (2021a). A charge-domain scalable-weight in-memory computing macro with dual-SRAM architecture for precision-scalable DNN accelerators. IEEE Transactions on Circuits and Systems I: Regular Papers, 68(8):3305–3316.
* Lee et al., (2021b) Lee, J., Valavi, H., Tang, Y., and Verma, N. (2021b). Fully row/column-parallel in-memory computing SRAM macro employing capacitor-based mixed-signal computation with 5-b inputs. In 2021 Symposium on VLSI Circuits, pages 1–2. IEEE.
* Liu et al., (2020) Liu, S., Ren, B., Shen, X., and Wang, Y. (2020). Cocopie: Making mobile ai sweet as pie–compression-compilation co-design goes a long way. arXiv preprint arXiv:2003.06700.
* Long et al., (2020) Long, Y., Lee, E., Kim, D., and Mukhopadhyay, S. (2020). Q-pim: A genetic algorithm based flexible dnn quantization method and application to processing-in-memory platform. In 2020 57th ACM/IEEE Design Automation Conference (DAC), pages 1–6. IEEE.
* Maloberti, (2007) Maloberti, F. (2007). Data converters. Springer Science & Business Media.
* Mishra et al., (2017) Mishra, A., Nurvitadhi, E., Cook, J. J., and Marr, D. (2017). Wrpn: Wide reduced-precision networks. arXiv preprint arXiv:1709.01134.
* Mujtaba, (2017) Mujtaba, H. (2017). Nvidia volta gv100 12nm finfet gpu detailed – tesla v100 specifications include 21 billion transistors, 5120 cuda cores, 16 gb hbm2 with 900 gb/s bandwidth. Wccftech.
* Pascanu et al., (2013) Pascanu, R., Mikolov, T., and Bengio, Y. (2013). On the difficulty of training recurrent neural networks. In International conference on machine learning, pages 1310–1318. PMLR.
* Pelgrom, (2013) Pelgrom, M. J. (2013). Analog-to-digital conversion. Springer, 2nd edition.
* Poole et al., (2016) Poole, B., Lahiri, S., Raghu, M., Sohl-Dickstein, J., and Ganguli, S. (2016). Exponential expressivity in deep neural networks through transient chaos. arXiv preprint arXiv:1606.05340.
* Prezioso et al., (2015) Prezioso, M., Merrikh-Bayat, F., Hoskins, B. D., Adam, G. C., Likharev, K. K., and Strukov, D. B. (2015). Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature, 521(7550):61–64.
* Razavi, (2016) Razavi, B. (2016). Design of analog CMOS integrated circuits. McGraw-Hill Education, 2nd edition.
* Rekhi et al., (2019) Rekhi, A. S., Zimmer, B., Nedovic, N., Liu, N., Venkatesan, R., Wang, M., Khailany, B., Dally, W. J., and Gray, C. T. (2019). Analog/mixed-signal hardware error modeling for deep learning inference. In Proceedings of the 56th Annual Design Automation Conference 2019, pages 1–6.
* Sansen, (2007) Sansen, W. M. (2007). Analog design essentials, volume 859. Springer Science & Business Media.
* Schoenholz et al., (2016) Schoenholz, S. S., Gilmer, J., Ganguli, S., and Sohl-Dickstein, J. (2016). Deep information propagation. arXiv preprint arXiv:1611.01232.
* Si et al., (2020) Si, X., Tu, Y.-N., Huang, W.-H., Su, J.-W., Lu, P.-J., Wang, J.-H., Liu, T.-W., Wu, S.-Y., Liu, R., Chou, Y.-C., Zhang, Z., Sie, S.-H., Wei, W.-C., Lo, Y.-C., Wen, T.-H., Hsu, T.-H., Chen, Y.-K., Shih, W., Lo, C.-C., Liu, R.-S., Hsieh, C.-C., Tang, K.-T., Lien, N.-C., Shih, W.-C., He, Y., Li, Q., and Chang, M.-F. (2020). 15.5 A 28nm 64Kb 6T SRAM computing-in-memory macro with 8b MAC operation for AI edge chips. In 2020 IEEE International Solid-State Circuits Conference (ISSCC), pages 246–248.
* Su et al., (2020) Su, J.-W., Si, X., Chou, Y.-C., Chang, T.-W., Huang, W.-H., Tu, Y.-N., Liu, R., Lu, P.-J., Liu, T.-W., Wang, J.-H., Zhang, Z., Jiang, H., Huang, S., Lo, C.-C., Liu, R.-S., Hsieh, C.-C., Tang, K.-T., Sheu, S.-S., Li, S.-H., Lee, H.-Y., Chang, S.-C., Yu, S., and Chang, M.-F. (2020). 15.2 A 28nm 64Kb Inference-Training Two-Way Transpose Multibit 6T SRAM Compute-in-Memory Macro for AI Edge Chips. In 2020 IEEE International Solid- State Circuits Conference (ISSCC), pages 240–242.
* Sun et al., (2020) Sun, X., Wang, N., Chen, C.-Y., Ni, J., Agrawal, A., Cui, X., Venkataramani, S., El Maghraoui, K., Srinivasan, V. V., and Gopalakrishnan, K. (2020). Ultra-low precision 4-bit training of deep neural networks. Advances in Neural Information Processing Systems, 33.
* Valavi et al., (2019) Valavi, H., Ramadge, P. J., Nestler, E., and Verma, N. (2019). A 64-Tile 2.4-Mb In-Memory-Computing CNN Accelerator Employing Charge-Domain Compute. IEEE Journal of Solid-State Circuits, 54(6):1789–1799.
* Xue et al., (2020) Xue, C.-X. et al. (2020). A cmos-integrated compute-in-memory macro based on resistive random-access memory for ai edge devices. Nature Electronics, pages 1–10.
* Yang and Schoenholz, (2017) Yang, G. and Schoenholz, S. S. (2017). Mean field residual networks: On the edge of chaos. arXiv preprint arXiv:1712.08969.
* Yao et al., (2020) Yao, P., Wu, H., Gao, B., Tang, J., Zhang, Q., Zhang, W., Yang, J. J., and Qian, H. (2020). Fully hardware-implemented memristor convolutional neural network. Nature, 577(7792):641–646.
* Ye et al., (2018) Ye, S., Zhang, T., Zhang, K., Li, J., Xu, K., Yang, Y., Yu, F., Tang, J., Fardad, M., Liu, S., et al. (2018). Progressive weight pruning of deep neural networks using admm. arXiv preprint arXiv:1810.07378.
* Yin et al., (2020) Yin, S., Jiang, Z., Seo, J.-S., and Seok, M. (2020). XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks. IEEE Journal of Solid-State Circuits.
* Yoon et al., (2021) Yoon, J.-H., Chang, M., Khwa, W.-S., Chih, Y.-D., Chang, M.-F., and Raychowdhury, A. (2021). A 40-nm, 64-Kb, 56.67 TOPS/W voltage-sensing computing-in-memory/digital RRAM macro supporting iterative write with verification and online read-disturb detection. IEEE Journal of Solid-State Circuits.
* Yu and Huang, (2019) Yu, J. and Huang, T. S. (2019). Universally slimmable networks and improved training techniques. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1803–1811.
* Yue et al., (2020) Yue, J., Yuan, Z., Feng, X., He, Y., Zhang, Z., Si, X., Liu, R., Chang, M.-F., Li, X., Yang, H., and Liu, Y. (2020). 14.3 A 65nm computing-in-memory-based CNN processor with 2.9-to-35.8TOPS/W system energy efficiency using dynamic-sparsity performance-scaling architecture and energy-efficient inter/intra-macro data reuse. In 2020 IEEE International Solid- State Circuits Conference (ISSCC), pages 234–236.
* Zhang et al., (2017) Zhang, J., Wang, Z., and Verma, N. (2017). In-memory computation of a machine-learning classifier in a standard 6T SRAM array. IEEE Journal of Solid-State Circuits, 52(4):915–924.
* Zhang et al., (2018) Zhang, T., Ye, S., Zhang, K., Tang, J., Wen, W., Fardad, M., and Wang, Y. (2018). A systematic dnn weight pruning framework using alternating direction method of multipliers. In Proceedings of the European Conference on Computer Vision (ECCV), pages 184–199.
* Zhou et al., (2016) Zhou, S., Wu, Y., Ni, Z., Zhou, X., Wen, H., and Zou, Y. (2016). Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160.
## A1 Proof of Theorems
Here we present detailed proofs of Theorems 1 and 2. We first formulate the
quantization procedure of PIM systems under several popular schemes and then
derive the results of Theorem 1. After that, we analyze the training dynamics
of a generic neural network to prove Theorem 2.
### A1.1 PIM Quantization-Aware Training
To prove Theorem 1, we first present the quantization procedure of PIM systems
with the native, differential, and bit-serial schemes. The output of a linear
layer is given by
$\widetilde{y}=\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i}$ (A1)
where $\widetilde{Q}_{i}\in[-1,1]$ is the quantized weight of $b_{w}$ bits and
$\widetilde{q}_{i}\in[0,1]$ is the quantized input of $b_{a}$ bits.
For PIM systems, due to the limited resolution ($m$ bits) of the
digital-to-analog converter for the inputs, the inputs are first decomposed
into sub-arrays of $m$ bits. In other words, we have
$\displaystyle\widetilde{q}_{i}=\sum_{k=0}^{b_{a}/m-1}\widetilde{q}^{(m)}_{i,k}\Delta^{k}$ (A2a)
$\displaystyle\widetilde{q}^{(m)}_{i,k}=\frac{1}{2^{b_{a}}-1}q^{(m)}_{i,k}$ (A2b)
$\displaystyle\Delta=2^{m}$ (A2c)
with $q^{(m)}_{i,k}\in\\{0,1,\dots,\Delta-1\\}$.
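As a concrete illustration, here is a minimal NumPy sketch of this decomposition (our own addition, not code from the original hardware stack), assuming the inputs are already given as integer codes $q_{i}\in\\{0,\dots,2^{b_{a}}-1\\}$:

```python
import numpy as np

def decompose_inputs(q_int, b_a=4, m=2):
    """Split b_a-bit integer input codes into b_a/m sub-words of m bits (A2)."""
    delta = 2 ** m
    chunks = []
    for _ in range(b_a // m):
        chunks.append(q_int % delta)   # q^{(m)}_{i,k} in {0, ..., delta - 1}
        q_int = q_int // delta
    return np.stack(chunks)            # least-significant sub-word first

q_int = np.array([0, 7, 15])           # b_a = 4-bit codes
chunks = decompose_inputs(q_int)
# Reconstruction check: sum_k q^{(m)}_k * delta**k recovers the original code.
assert np.all((chunks * (4 ** np.arange(2))[:, None]).sum(0) == q_int)
```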
#### Native Scheme
For a PIM system with the native scheme, the output is given by
$\displaystyle\widetilde{y}_{\mathrm{PIM}}=\bm{\mathsf{Q}}\bigg(\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i};\,b_{\mathrm{PIM}}\bigg)$ (A3a)
$\displaystyle=\sum_{k=0}^{b_{a}/m-1}\Delta^{k}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}\cdot\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}^{(m)}_{i,k}\Big)$ (A3b)
With the GSTE assumption, we can derive its differential as
$\displaystyle\mathrm{d}\widetilde{y}_{\mathrm{PIM}}=\sum_{k=0}^{b_{a}/m-1}\Delta^{k}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}\cdot\mathrm{d}\Big[\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}^{(m)}_{i,k}\Big)\Big]$ (A4a)
$\displaystyle=\sum_{k=0}^{b_{a}/m-1}\Delta^{k}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}\cdot\xi\cdot\mathrm{d}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}^{(m)}_{i,k}\Big)$ (A4b)
$\displaystyle=\xi\cdot\mathrm{d}\Big(\sum_{i=1}^{N}\widetilde{Q}_{i}\sum_{k=0}^{b_{a}/m-1}\widetilde{q}^{(m)}_{i,k}\Delta^{k}\Big)$ (A4c)
$\displaystyle=\xi\cdot\mathrm{d}\bigg(\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i}\bigg)$ (A4d)
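To make the backward rule (A4d) concrete, here is a minimal PyTorch sketch of the scaled straight-through estimator it relies on; the class name, the helper `pim_quantize`, and the constant $\xi$ passed in are our own illustrative choices, not part of any released codebase:

```python
import torch

class ScaledRoundSTE(torch.autograd.Function):
    """round() in the forward pass; gradient scaled by a constant xi (GSTE)."""

    @staticmethod
    def forward(ctx, x, xi):
        ctx.xi = xi
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # d round(x)/dx is taken as the constant xi, as assumed in (A4b).
        return ctx.xi * grad_output, None

def pim_quantize(y, b_pim, n, delta, b_a, xi=1.0):
    """Native-scheme ADC quantization of a partial MAC result, per (A3b)."""
    scale = (2 ** b_pim - 1) * (2 ** b_a - 1) / (n * (delta - 1))
    return ScaledRoundSTE.apply(scale * y, xi) / scale
```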
#### Differential Scheme
For a PIM system with the differential scheme, the weight is first decomposed
into positive and negative parts, as
$\widetilde{Q}_{i}=\widetilde{Q}_{i}^{+}+\widetilde{Q}_{i}^{-}$ (A5)
where all elements in $\widetilde{Q}_{i}^{+}$ are positive and those in
$\widetilde{Q}_{i}^{-}$ are negative. Its differential is given by
$\mathrm{d}\widetilde{Q}_{i}=\mathrm{d}\widetilde{Q}_{i}^{+}+\mathrm{d}\widetilde{Q}_{i}^{-}$ (A6)
The output is the combination of these two parts as
$\displaystyle\widetilde{y}_{\mathrm{PIM}}=\bm{\mathsf{Q}}\bigg(\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i};\,b_{\mathrm{PIM}}\bigg)$ (A7a)
$\displaystyle=\sum_{k=0}^{b_{a}/m-1}\Delta^{k}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}\cdot\Big[\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i}^{+}\widetilde{q}^{(m)}_{i,k}\Big)-\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}(-\widetilde{Q}_{i}^{-})\widetilde{q}^{(m)}_{i,k}\Big)\Big]$ (A7b)
Taking the differential on both sides gives
$\displaystyle\mathrm{d}\widetilde{y}_{\mathrm{PIM}}=\mathrm{d}\sum_{k=0}^{b_{a}/m-1}\Delta^{k}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}\cdot\Big[\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i}^{+}\widetilde{q}^{(m)}_{i,k}\Big)-\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}(-\widetilde{Q}_{i}^{-})\widetilde{q}^{(m)}_{i,k}\Big)\Big]$ (A8a)
$\displaystyle=\sum_{k=0}^{b_{a}/m-1}\Delta^{k}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}\cdot\Big\\{\mathrm{d}\Big[\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i}^{+}\widetilde{q}^{(m)}_{i,k}\Big)\Big]-\mathrm{d}\Big[\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}(-\widetilde{Q}_{i}^{-})\widetilde{q}^{(m)}_{i,k}\Big)\Big]\Big\\}$ (A8b)
$\displaystyle=\sum_{k=0}^{b_{a}/m-1}\Delta^{k}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}\cdot\Big[\xi\cdot\mathrm{d}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i}^{+}\widetilde{q}^{(m)}_{i,k}\Big)-\xi\cdot\mathrm{d}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}(-\widetilde{Q}_{i}^{-})\widetilde{q}^{(m)}_{i,k}\Big)\Big]$ (A8c)
$\displaystyle=\xi\cdot\mathrm{d}\Big(\sum_{i=1}^{N}\widetilde{Q}_{i}^{+}\sum_{k=0}^{b_{a}/m-1}\widetilde{q}^{(m)}_{i,k}\Delta^{k}\Big)+\xi\cdot\mathrm{d}\Big(\sum_{i=1}^{N}\widetilde{Q}_{i}^{-}\sum_{k=0}^{b_{a}/m-1}\widetilde{q}^{(m)}_{i,k}\Delta^{k}\Big)$ (A8d)
$\displaystyle=\xi\cdot\mathrm{d}\Big(\sum_{i=1}^{N}\widetilde{Q}_{i}^{+}\widetilde{q}_{i}\Big)+\xi\cdot\mathrm{d}\Big(\sum_{i=1}^{N}\widetilde{Q}_{i}^{-}\widetilde{q}_{i}\Big)$ (A8e)
$\displaystyle=\xi\cdot\mathrm{d}\Big(\sum_{i=1}^{N}(\widetilde{Q}_{i}^{+}+\widetilde{Q}_{i}^{-})\widetilde{q}_{i}\Big)$ (A8f)
$\displaystyle=\xi\cdot\mathrm{d}\bigg(\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i}\bigg)$ (A8g)
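A minimal PyTorch sketch of the differential-scheme forward pass in (A7), assuming a `quantize` callable such as the `pim_quantize` sketch above; the function and variable names are ours:

```python
def pim_mac_differential(Q, q, quantize):
    """Differential-scheme MAC: quantize the two partial sums separately (A7)."""
    Q_pos = Q.clamp(min=0.0)   # \tilde{Q}^+ : positive entries, zeros elsewhere
    Q_neg = Q.clamp(max=0.0)   # \tilde{Q}^- : negative entries, zeros elsewhere
    # Both partial MACs are nonnegative, so a unipolar ADC model applies to each.
    return quantize(Q_pos @ q) - quantize((-Q_neg) @ q)
```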
#### Bit Serial Scheme
For a PIM system with the bit-serial scheme, the weight is first decomposed
into bits as
$\widetilde{Q}_{i}=\sum_{k=0}^{b_{w}-1}(-1)^{\delta_{k,b_{w}-1}}\widetilde{Q}_{i,k}2^{k}$ (A9)
where
$\widetilde{Q}_{i,k}=\frac{1}{2^{b_{w}-1}-1}Q_{i,k}$ (A10)
and $Q_{i,k}\in\\{0,1\\}$. The output is obtained for each bit separately and
then summed, as
$\displaystyle\widetilde{y}_{\mathrm{PIM}}=\bm{\mathsf{Q}}\bigg(\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i};\,b_{\mathrm{PIM}}\bigg)$ (A11a)
$\displaystyle=\sum_{k=0}^{b_{w}-1}\sum_{l=0}^{b_{a}/m-1}(-1)^{\delta_{k,b_{w}-1}}2^{k}\Delta^{l}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{w}-1}-1)(2^{b_{a}}-1)}\cdot\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{w}-1}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i,k}\widetilde{q}^{(m)}_{i,l}\Big)$ (A11b)
The differential can thus be determined as
$\displaystyle\mathrm{d}\widetilde{y}_{\mathrm{PIM}}=\mathrm{d}\sum_{k=0}^{b_{w}-1}\sum_{l=0}^{b_{a}/m-1}(-1)^{\delta_{k,b_{w}-1}}2^{k}\Delta^{l}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{w}-1}-1)(2^{b_{a}}-1)}\cdot\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{w}-1}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i,k}\widetilde{q}^{(m)}_{i,l}\Big)$ (A12a)
$\displaystyle=\sum_{k=0}^{b_{w}-1}\sum_{l=0}^{b_{a}/m-1}(-1)^{\delta_{k,b_{w}-1}}2^{k}\Delta^{l}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{w}-1}-1)(2^{b_{a}}-1)}\cdot\mathrm{d}\Big[\mathrm{round}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{w}-1}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i,k}\widetilde{q}^{(m)}_{i,l}\Big)\Big]$ (A12b)
$\displaystyle=\sum_{k=0}^{b_{w}-1}\sum_{l=0}^{b_{a}/m-1}(-1)^{\delta_{k,b_{w}-1}}2^{k}\Delta^{l}\cdot\frac{N(\Delta-1)}{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{w}-1}-1)(2^{b_{a}}-1)}\cdot\xi\cdot\mathrm{d}\Big(\frac{(2^{b_{\mathrm{PIM}}}-1)(2^{b_{w}-1}-1)(2^{b_{a}}-1)}{N(\Delta-1)}\sum_{i=1}^{N}\widetilde{Q}_{i,k}\widetilde{q}^{(m)}_{i,l}\Big)$ (A12c)
$\displaystyle=\xi\cdot\mathrm{d}\Big(\sum_{i=1}^{N}\sum_{k=0}^{b_{w}-1}(-1)^{\delta_{k,b_{w}-1}}\widetilde{Q}_{i,k}2^{k}\sum_{l=0}^{b_{a}/m-1}\widetilde{q}^{(m)}_{i,l}\Delta^{l}\Big)$ (A12d)
$\displaystyle=\xi\cdot\mathrm{d}\bigg(\sum_{i=1}^{N}\widetilde{Q}_{i}\widetilde{q}_{i}\bigg)$ (A12e)
With (A4d), (A8g) and (A12e), we prove Theorem 1.
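For intuition, here is a minimal NumPy sketch of the bit-serial decomposition (A9)–(A11), with the per-bit-plane ADC quantization omitted for brevity; all names are our own, and the two's-complement encoding is our reading of the sign convention in (A9):

```python
import numpy as np

def bit_serial_mac(Q_int, q, b_w=4):
    """MAC with two's-complement weight bit planes processed serially (A9)-(A11).

    Q_int: integer weight codes in {-(2**(b_w-1) - 1), ..., 2**(b_w-1) - 1}
    q:     real-valued input vector
    """
    # Two's-complement bits; arithmetic right shift sign-extends negatives.
    bits = ((Q_int[:, None] >> np.arange(b_w)) & 1).astype(float)  # (N, b_w)
    partials = bits.T @ q              # one analog partial sum per bit plane
    weights = 2.0 ** np.arange(b_w)
    weights[-1] *= -1.0                # MSB carries the (-1)^{delta} sign of (A9)
    return (weights * partials).sum() / (2 ** (b_w - 1) - 1)

rng = np.random.default_rng(0)
Q_int = rng.integers(-7, 8, size=16)
q = rng.uniform(0, 1, size=16)
assert np.isclose(bit_serial_mac(Q_int, q), (Q_int / 7.0) @ q)
```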
### A1.2 Training Dynamics
Here we analyze the training dynamics to prove Theorem 2. Our analysis is
similar to that presented in (Jin et al., 2019); it is a zeroth-order
approximation based on mean-field theory, where different quantities are
assumed to be independent (although some of them are in fact dependent,
especially the gradients, as described following Axiom 3.2 in (Yang and
Schoenholz, 2017)).
We want to analyze the training dynamics of a generic neural network with
repeated blocks, each composed of a linear layer, some nonlinear effect,
forward scaling, batch normalization, and an output activation function:
$\displaystyle x^{(l+1)}_{i}=\varphi(y^{(l)}_{i})$ (A13a)
$\displaystyle y^{(l)}_{i}=\gamma^{(l)}_{i}\frac{z^{(l)}_{i}-\mu^{(l)}_{i}}{\sigma^{(l)}_{i}}+\beta^{(l)}_{i}$ (A13b)
$\displaystyle z^{(l)}_{i}=\eta^{(l)}\widetilde{z}^{(l)}_{i}$ (A13c)
$\displaystyle\widetilde{z}^{(l)}_{i}=f(W^{(l)}_{ij},x^{(l)}_{j})$ (A13d)
$\displaystyle\widetilde{z}^{(l)}_{i}\sim\rho^{(l)}\sum_{j=1}^{n^{(l)}}W^{(l)}_{ij}x^{(l)}_{j}$ (A13e)
$\displaystyle\mathrm{d}\widetilde{z}^{(l)}_{i}=\xi^{(l)}\cdot\mathrm{d}\left(\sum_{j=1}^{n^{(l)}}W^{(l)}_{ij}x^{(l)}_{j}\right)$ (A13f)
where $\rho^{(l)}$ denotes the modification of the output variance by the
nonlinear effect and $\xi^{(l)}$ is the scaling factor introduced during
backward propagation. From this we can derive the following statistics:
$\displaystyle\mathbb{VAR}[y^{(l)}_{i}]\approx(\gamma^{(l)}_{i})^{2}$ (A14a)
$\displaystyle\mathbb{E}[(x^{(l)}_{i})^{2}]\approx\mathbb{E}[(\varphi^{\prime}(y^{(l)}_{i}))^{2}]\mathbb{VAR}[y^{(l)}_{i}]$ (A14b)
$\displaystyle\approx\mathbb{E}[(\varphi^{\prime}(y^{(l)}_{i}))^{2}](\gamma^{(l)}_{i})^{2}$ (A14c)
$\displaystyle(\sigma^{(l)}_{i})^{2}=\mathbb{VAR}[z^{(l)}_{i}]$ (A14d)
$\displaystyle=(\eta^{(l)})^{2}(\rho^{(l)})^{2}n^{(l)}\mathbb{VAR}[W^{(l)}_{ij}]\mathbb{E}[(x^{(l)}_{j})^{2}]$ (A14e)
$\displaystyle\approx(\eta^{(l)})^{2}(\rho^{(l)})^{2}n^{(l)}\mathbb{VAR}[W^{(l)}_{ij}]\mathbb{E}[(\varphi^{\prime}(y^{(l)}_{j}))^{2}](\gamma^{(l)}_{j})^{2}$ (A14f)
where (A14b) is valid if the activation $\varphi$ is quasi-linear.
We first estimate the gradient of the batch normalization layer as follows:
$\displaystyle\frac{\partial\mu^{(l)}_{i}}{\partial z^{(l)}_{i}}=\frac{1}{m_{B}}$ (A15a)
$\displaystyle\frac{\partial\sigma^{(l)}_{i}}{\partial z^{(l)}_{i}}=\frac{1}{m_{B}}\frac{z^{(l)}_{i}-\mu^{(l)}_{i}}{\sigma^{(l)}_{i}}$ (A15b)
$\displaystyle\frac{\partial y^{(l)}_{i}}{\partial z^{(l)}_{i}}=\gamma^{(l)}_{i}\left[\frac{1}{\sigma^{(l)}_{i}}-\frac{1}{\sigma^{(l)}_{i}}\frac{\partial\mu^{(l)}_{i}}{\partial z^{(l)}_{i}}+(z^{(l)}_{i}-\mu^{(l)}_{i})\frac{-1}{(\sigma^{(l)}_{i})^{2}}\frac{\partial\sigma^{(l)}_{i}}{\partial z^{(l)}_{i}}\right]$ (A15c)
$\displaystyle=\gamma^{(l)}_{i}\left[\frac{1}{\sigma^{(l)}_{i}}-\frac{1}{\sigma^{(l)}_{i}}\frac{1}{m_{B}}-\frac{1}{\sigma^{(l)}_{i}}\frac{1}{m_{B}}\left(\frac{z^{(l)}_{i}-\mu^{(l)}_{i}}{\sigma^{(l)}_{i}}\right)^{2}\right]$ (A15d)
$\displaystyle\approx\frac{\gamma^{(l)}_{i}}{\sigma^{(l)}_{i}}\qquad(m_{B}\gg 1)$ (A15e)
where we have assumed that the batch size is large enough, which is typically
satisfied in practice.
The gradient of the loss $\mathcal{L}$ with respect to the input can be easily
calculated as
$\partial_{x^{(l)}_{j}}\mathcal{L}=\xi^{(l)}\sum_{i=1}^{n^{(l+1)}}W^{(l)}_{ij}\eta^{(l)}\frac{\gamma^{(l)}_{i}}{\sigma^{(l)}_{i}}\varphi^{\prime}(y^{(l)}_{i})\partial_{x^{(l+1)}_{i}}\mathcal{L}$ (A16)
from which we can derive the variance of the gradient, based on the mean-field
assumption, as
$\displaystyle\mathbb{VAR}[\partial_{x^{(l)}_{j}}\mathcal{L}]=n^{(l+1)}(\xi^{(l)})^{2}(\eta^{(l)})^{2}\left(\frac{\gamma^{(l)}_{i}}{\sigma^{(l)}_{i}}\right)^{2}\mathbb{VAR}[W^{(l)}_{ij}]\,\mathbb{E}[(\varphi^{\prime}(y^{(l)}_{i}))^{2}]\,\mathbb{VAR}[\partial_{x^{(l+1)}_{i}}\mathcal{L}]$ (A17a)
Substituting (A14f), we have
$\displaystyle\mathbb{VAR}[\partial_{x^{(l)}_{j}}\mathcal{L}]=\frac{n^{(l+1)}(\xi^{(l)})^{2}}{n^{(l)}(\rho^{(l)})^{2}}\,\mathbb{VAR}[\partial_{x^{(l+1)}_{i}}\mathcal{L}]=\left(\frac{\xi^{(l)}}{\rho^{(l)}}\right)^{2}\cdot\frac{n^{(l+1)}}{n^{(l)}}\cdot\mathbb{VAR}[\partial_{x^{(l+1)}_{i}}\mathcal{L}]$ (A18a)
where the factors $(\eta^{(l)})^{2}$, $(\gamma^{(l)})^{2}$, $\mathbb{VAR}[W^{(l)}_{ij}]$, and $\mathbb{E}[(\varphi^{\prime}(y^{(l)}))^{2}]$ from (A17a) and (A14f) cancel between numerator and denominator.
Ignoring spatial dependence of all statistics, we have
$\boxed{\frac{\mathbb{VAR}[\partial_{x^{(l)}}\mathcal{L}]}{\mathbb{VAR}[\partial_{x^{(l+1)}}\mathcal{L}]}\approx\left(\frac{\xi^{(l)}}{\rho^{(l)}}\right)^{2}\cdot\frac{n^{(l+1)}}{n^{(l)}}}$ (A19)
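As a sanity check of (A19), the following toy NumPy simulation (our own, with $\eta$ and $\gamma$ set to 1 and an identity-like activation so that $\varphi^{\prime}=1$) propagates a gradient through one block and compares the empirical variance ratio to $(\xi/\rho)^{2}\,n^{(l+1)}/n^{(l)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, xi, rho = 256, 512, 0.8, 1.3

W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
x = rng.normal(size=n_in)
g_out = rng.normal(size=n_out)                # upstream gradient dL/dx^{(l+1)}

z = rho * (W @ x)                             # nonlinear effect inflates scale by rho
sigma = z.std()                               # BN statistic; sigma ~ rho here
g_in = xi * (W.T @ ((1.0 / sigma) * g_out))   # backward pass of (A16)

print(g_in.var() / g_out.var())               # empirical variance ratio
print((xi / rho) ** 2 * n_out / n_in)         # prediction of (A19)
```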
## A2 Experiment Settings
Figure A1: Imperfect MAC outputs from a PIM chip.
### A2.1 General Experiment Settings
Our method is evaluated using several ResNet models on CIFAR. Weights and
inputs are quantized to $4$ bits, and $b_{\mathrm{PIM}}$ varies from $3$ to
$10$. The first convolution layer and the final fully-connected layer for
classification are implemented on a digital system, i.e.,
$b_{\mathrm{PIM}}=+\infty$ for these two layers. To accurately evaluate the
inference accuracy on actual hardware with variations, non-linearity, and
noise, we evaluate the proposed method using physical models of a
state-of-the-art SRAM PIM chip prototype. Each PIM SRAM macro in the chip
computes 32 analog MACs ($N\leq 144$, $b_{\mathrm{PIM}}=7$) in parallel. The
measured 32 transfer functions, shown in Fig. A1, capture all the nonlinearity
and mismatch of the physical chip. Random noise in computation, which follows
a Gaussian distribution and is fully characterized by its root-mean-square
(RMS) error (Gray et al., 2009; Razavi, 2016; Sansen, 2007; Pelgrom, 2013;
Maloberti, 2007), is measured to be 0.35 LSB. Due to the small size of the
prototype chip, running all images of the test dataset through the chip is
infeasible in time, so we build a hardware-calibrated physical model to
quickly, accurately, and flexibly simulate the inference accuracy of real
hardware; this is a widely adopted practice in custom hardware research
because of the prohibitive cost of building a full-scale chip for large DNNs
(Yue et al., 2020; Su et al., 2020; Si et al., 2020; Jia et al., 2020; Jiang
et al., 2019). We have experimentally confirmed that the model and a real
physical chip produce identical MAC and inference results. It is also worth
noting that the non-idealities present in this SRAM PIM chip are
representative of those of various types of PIM hardware.
To verify the effectiveness of our method, we experiment with ResNet20,
ResNet32, ResNet44, and ResNet56 on CIFAR10 and with ResNet20 on CIFAR100.
Following previous practice (Jin et al., 2020), weights and inputs of
convolution and fully-connected layers are quantized to $4$ bits
($b_{w}=b_{a}=4$), including the first and last layers, except that inputs to
the first layer are kept at $8$ bits, and we do not apply normalization to
these images. Batch normalization layers and biases in the final
fully-connected layer are kept at full precision. As stated above, the
quantization resolution $b_{\mathrm{PIM}}$ varies from $3$ to $10$, and the
first convolution layer and the final fully-connected layer are implemented
digitally. For CIFAR10 and CIFAR100, the $1\times 1$ convolution layers for
residual connections require much less computation and thus are also
implemented on the digital system. For the differential and bit-serial
schemes, inputs are first split along the channel dimension into sub-tensors,
each with a unit channel size of $16$, corresponding to $N=144$ for $3\times
3$ convolutions, and processed separately before summing the final PIM
outputs. For the native scheme, we instead use a unit channel size of $1$ and
thus $N=9$, to match the experiment setting of Rekhi et al. (2019), which uses
$N=8$. For real-curve results, since we have $32$ curves (ADC components) in
total, each serving $8$ outputs with $b_{w}=4$ bits, the output channels are
split with a unit output channel size of $8$.
The input image is randomly cropped to $32\times 32$ and randomly flipped
horizontally during training, and applied directly without augmentation for
inference. All models are trained from scratch for $200$ epochs with a
multi-step learning rate schedule, where the initial learning rate is $0.1$
and is reduced by a factor of $10$ at the $100$-th and $150$-th epochs. We use
the SGD optimizer with Nesterov momentum of $0.9$, and weight decay is set to
$0.0001$. The batch size is $128$ for all experiments. We apply constant
rescaling to all layers, including the convolution layers, in contrast to only
the last fully-connected layer as suggested in (Jin et al., 2019), even though
batch normalization is applied in the model. All experiments are run on one
GeForce GTX 1080 GPU with $12$ GB of memory.
Weights are quantized with a modified DoReFa scheme, without mapping between
intervals of $[-1,1]$ and $[0,1]$. Specifically, the quantized weights are
given as
$\displaystyle Q_{i}=\frac{s}{2^{b_{w}-1}-1}\cdot\mathrm{round}\Big((2^{b_{w}-1}-1)\cdot\frac{\tanh(W_{i})}{\max\limits_{k}|\tanh(W_{k})|}\Big)$ (A20a)
$\displaystyle s=\frac{1}{\sqrt{n_{\mathrm{out}}\mathbb{VAR}[Q_{i}]}}$ (A20b)
where $n_{\mathrm{out}}$ denotes the number of output neurons of the linear
layer. For the native scheme, since the output can also be negative, we adopt
the same quantization function.
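A minimal PyTorch sketch of this modified DoReFa quantizer with the scale adjustment of (A20b); in actual training the `round` would be wrapped in a straight-through estimator, and taking the variance over the unscaled quantized weights is our reading of the formula:

```python
import torch

def quantize_weights(W, b_w=4, n_out=64):
    """Modified DoReFa weight quantization with scale adjustment (A20)."""
    levels = 2 ** (b_w - 1) - 1
    W_norm = torch.tanh(W) / torch.tanh(W).abs().max()   # in [-1, 1]
    Q = torch.round(levels * W_norm) / levels            # quantized, unscaled
    s = 1.0 / torch.sqrt(n_out * Q.var())                # (A20b)
    return s * Q
```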
### A2.2 Error Analysis Experiment Settings
#### Computing Error Analysis (Fig. 3)
For this example, we first obtain the MAC results via uniform random sampling
of the output space and apply PIM quantization together with noise injection.
By comparing the ideal output with the noisy quantized output for different
noise levels, we obtain the errors, from which we estimate their standard
deviation for each noise level. These standard deviations are then normalized
by that of the noiseless quantization.
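A minimal NumPy sketch of this procedure (the $b_{\mathrm{PIM}}$ value and sample count are illustrative; injecting the noise in units of LSBs before rounding is one plausible reading of the setup):

```python
import numpy as np

rng = np.random.default_rng(0)
levels = 2 ** 7 - 1                            # b_PIM = 7
y = rng.uniform(0.0, 1.0, size=100_000)        # uniformly sampled MAC outputs

def noisy_pim(y, rms_lsb):
    noise = rng.normal(0.0, rms_lsb, size=y.shape)   # thermal noise, in LSBs
    return np.round(y * levels + noise) / levels

base_std = (noisy_pim(y, 0.0) - y).std()       # noiseless quantization error
for rms in [0.35, 1.0, 2.0]:
    err_std = (noisy_pim(y, rms) - y).std()
    print(rms, err_std / base_std)             # normalized error std
```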
## A3 Scale-Enlarging Effect of PIM Quantization
Figure A2: Impact of PIM quantization on the output scale measured by
standard deviation. $\rho$ is defined as in (5d).
Here we show the scale-enlarging effect of PIM quantization. Specifically, we
study an idealized noiseless system to examine the effect of $\bm{\mathsf{Q}}$
on the standard deviation of the outputs. As an example, we experiment on a
toy convolution with the bit-serial scheme and calculate the ratio of standard
deviations between the outputs of the PIM system and those from conventional
quantization on a digital accelerator. For this purpose, we set the number of
input channels to $16$ and that of output channels to $32$. The kernel size is
$3\times 3$, and both inputs and weights are quantized to $4$ bits. We
experiment on a random batch of $100$ data points, each distributed uniformly
on $[0,1]$ before quantization. Weights are randomly sampled from a normal
distribution under the Kaiming initialization condition (He et al., 2015) and
quantized with the previously mentioned modified DoReFa scheme (Zhou et al.,
2016), given by (A20a). We plot this ratio against the PIM bit-width
($b_{\mathrm{PIM}}$) and obtain such curves for different numbers of input
channels. As illustrated in Figure A2, we find that the difference between the
two scenarios is not significant at high precision, as expected. However, at
moderate precision, such as $5\sim 7$ bits, they start to diverge, and at
ultra-low bit-widths, such as $3\sim 4$ bits, the discrepancy can be as large
as $2\sim 4\times$.
## A4 Impact of Non-idealities on BN Statistics
Figure A3: Impact of nonlinearity and noise non-idealities on the running
statistics. Note that these are the results of a single sampling run.
In this section, we study the impact of non-ideal effects on BN statistics. We
experiment with a toy example of a one-layer convolution implemented on ideal
or real PIM systems and calculate the running statistics of the output for
different noise levels. For this purpose, we use the same toy experiment
setting as in Section A3 and set the unit output channel size for real-curve
inference to $8$. The results are illustrated in Figure A3. We find that the
output statistics can change by as much as $30\%$, which might have a
significant impact on the model’s final output, especially if its behavior is
sensitive to these values.
## A5 Scaling Factors for Forward Rescaling
Here we list the rescaling factors for forward propagation in Table A1. We
find that they depend on the PIM resolution $b_{\mathrm{PIM}}$ and the PIM
decomposition scheme. Moreover, they can even differ across software package
versions. As mentioned in the text, the underlying reason is still unclear to
us.
Table A1: Scaling factor for forward rescaling for different PIM resolutions $b_{\mathrm{PIM}}$ and different PIM decomposition schemes.

$b_{\mathrm{PIM}}$ | Native | Differential | Bit Serial
---|---|---|---
3 | 100 | 1000 | 100
4 | 20 | 1000 | 30
5 | 1 | 1000 | 30
6 | 1 | 1000 | 30
7 | 1 | 1000 | 1.03
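For instance, applying the bit-serial column of Table A1 as the forward scaling factor $\eta$ of (A13c) might look as follows (a sketch; the dictionary below simply transcribes the table, and given the version dependence noted above these values should be re-verified per setup):

```python
# Bit-serial forward-rescaling factors from Table A1, keyed by b_PIM.
FWD_SCALE_BIT_SERIAL = {3: 100, 4: 30, 5: 30, 6: 30, 7: 1.03}

def rescale_forward(z_pim, b_pim):
    """z = eta * z~, as in (A13c); eta depends on b_PIM and the scheme."""
    return FWD_SCALE_BIT_SERIAL[b_pim] * z_pim
```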
## A6 Ablation Study
Here we provide a more in-depth ablation study of our methods, including PIM
quantization-aware training, the rescaling techniques, and batch normalization
calibration.
### A6.1 PIM-QAT and Rescaling
We first study the effectiveness of PIM-QAT together with the rescaling
techniques. To this end, we compare the baseline with a model trained with
PIM-QAT, without BN calibration or adjusted-bit training. The two rescaling
techniques for forward and backward propagation are applied, and we use a
noiseless ideal PIM system, without any real chip curve. Table A2 compares the
results of the bit-serial scheme for several different
$b_{\mathrm{PIM}}$'s, which are also plotted in Figure A4. It can be seen that
for low $b_{\mathrm{PIM}}$, our method is significantly better than the
baseline. Specifically, for $b_{\mathrm{PIM}}<9$, our method gives better
results, and at ultra-low bit-widths, such as $3$ bits, where baseline models
are no better than random guessing, our method still achieves a reasonable
top-1 accuracy of $61.8\%$. We also find that for sufficiently high
$b_{\mathrm{PIM}}$ (larger than $8$), the baseline can be better. This is
reasonable, as a noiseless PIM with such high precision introduces almost no
precision loss.
Table A2: Accuracy of ResNet20 on CIFAR10 with idealized bit-serial PIM
quantization, where noise and real chip curves are not involved. For our
results, we use the rescaling techniques for both forward and backward
propagation, as described in the text.

$b_{\mathrm{PIM}}$ | Method | Acc. | $b_{\mathrm{PIM}}$ | Method | Acc.
---|---|---|---|---|---
3 | Baseline | 10.0 | 7 | Baseline | 85.8
3 | Ours | 61.8 | 7 | Ours | 90.8
4 | Baseline | 10.2 | 8 | Baseline | 90.3
4 | Ours | 77.2 | 8 | Ours | 90.8
5 | Baseline | 11.0 | 9 | Baseline | 91.2
5 | Ours | 86.5 | 9 | Ours | 90.8
6 | Baseline | 41.1 | 10 | Baseline | 91.6
6 | Ours | 89.5 | 10 | Ours | 90.8
Figure A4: Comparison of our PIM quantization-aware training method with the
baseline on idealized noiseless bit-serial PIM systems with different
resolutions.
### A6.2 Rescaling
We then study the rescaling techniques we propose for both forward and
backward propagation. As listed in Table A3 and shown in Fig. A5, for the
bit-serial scheme, if $b_{\mathrm{PIM}}$ is lower than $6$, omitting either
the forward or the backward rescaling makes training unstable. These
experiments demonstrate that both rescaling techniques we propose are
necessary and beneficial for stabilizing the training dynamics of the neural
network. Experiments on the native and differential schemes give similar
results.
Table A3: Ablation study of forward and backward rescaling techniques for bit-
serial PIM systems with different resolutions. The accuracy results are based
on ResNet20 on CIFAR10.
$b_{\mathrm{PIM}}$ | Forward Rescaling | Backward Rescaling | Acc.
---|---|---|---
3 | N | N | 10.0
3 | N | Y | 17.1
3 | Y | Y | 61.8
4 | N | N | 61.0
4 | N | Y | 76.7
4 | Y | Y | 77.2
5 | N | N | 10.3
5 | N | Y | 17.5
5 | Y | Y | 86.5
6 | N | N | 10.3
6 | N | Y | 89.1
6 | Y | Y | 89.5
7 | N | N | 88.8
7 | N | Y | 91.0
7 | Y | Y | 90.8
Figure A5: Learning curve comparison with ResNet20 on CIFAR10 for bit-serial
PIM system.
### A6.3 BN Calibration
Figure A6: Effect of BN calibration for bit-serial PIM systems with idealized
and real-curve quantization. BN calibration helps both our method and the
baseline.
Besides the training techniques discussed above, the discrepancy between
training with idealized quantization and inference with real-case
non-idealities, including non-linearity and noise, is dealt with via BN
calibration. To verify its effectiveness, we compare results with and without
BN calibration for both the baseline and our method, and illustrate the
results for $7$-bit ideal and real PIM in Figure A6. We find that BN
calibration significantly improves the results in all cases, especially those
with the real PIM system. More interestingly, it also improves the baseline
results, yet the performance remains unsatisfactory and significantly worse
than ours. These experiments demonstrate that the change of BN running
statistics caused by the nonlinearity and noise effects of real PIM systems
has a strong impact on the predictive capability of the neural network, and
that this can be alleviated to a large extent with a simple yet effective
software solution, without extra training effort.
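For reference, BN calibration can be implemented in a few lines of PyTorch; the sketch below (our own, following the standard recalibration recipe of Yu & Huang (2019)) resets the running statistics and re-estimates them by forwarding calibration batches through the model with the real PIM behavior enabled:

```python
import torch

@torch.no_grad()
def calibrate_bn(model, loader, device="cuda"):
    """Re-estimate BN running statistics under the deployed (noisy) kernels."""
    bn_types = (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d)
    for m in model.modules():
        if isinstance(m, bn_types):
            m.reset_running_stats()
            m.momentum = None          # None -> cumulative moving average
    model.train()                      # BN updates running stats in train mode
    for x, _ in loader:
        model(x.to(device))            # forward passes only; no weight updates
    model.eval()
```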
## A7 More Study of BN Calibration
Figure A7: Generated idealized 7-bit curves with gain and offset variations, where $N=72$ for (a) and $N=144$ for (b). Gains and offsets are both sampled from normal distributions, with offsets $\sim\mathcal{N}(0,2.04)$ and gains $\sim\mathcal{N}(1,0.024)$; their standard deviations are determined from real-chip testing results.

Table A4: Accuracy of ResNet20 and ResNet56 on CIFAR10 with idealized bit-serial PIM quantization with gain and offset variations, where noise and real chip curves are not involved.

Depth | N | Gain & Offset Variation | BN Calib. | Acc.
---|---|---|---|---
20 | 72 | N | - | 91.2
20 | 72 | Y | N | 10.0
20 | 72 | Y | Y | 90.7
20 | 144 | N | - | 90.8
20 | 144 | Y | N | 10.0
20 | 144 | Y | Y | 90.6
56 | 72 | N | - | 92.2
56 | 72 | Y | N | 10.0
56 | 72 | Y | Y | 91.7
56 | 144 | N | - | 90.3
56 | 144 | Y | N | 10.1
56 | 144 | Y | Y | 89.7
Here we present further study of the effectiveness of BN calibration and
demonstrate that it is also beneficial for hardware calibration. Specifically,
we use several PIM quantization transfer curves with variations in gain and
offset, as illustrated in Fig. A7. The gain and offset variations are
extracted from a real chip before hardware calibration. As shown in Table A4,
directly applying these curves to a pretrained model leads to random-guess
results, while BN calibration is able to repair the model and recover a
reasonable final accuracy.
# Multi-annotator Deep Learning:
A Probabilistic Framework for Classification
Marek Herde <EMAIL_ADDRESS>
Intelligent Embedded Systems
University of Kassel
Kassel, Hesse, Germany

Denis Huseljic <EMAIL_ADDRESS>
Intelligent Embedded Systems
University of Kassel
Kassel, Hesse, Germany

Bernhard Sick <EMAIL_ADDRESS>
Intelligent Embedded Systems
University of Kassel
Kassel, Hesse, Germany
###### Abstract
Solving complex classification tasks using deep neural networks typically
requires large amounts of annotated data. However, corresponding class labels
are noisy when provided by error-prone annotators, e.g., crowdworkers.
Training standard deep neural networks leads to subpar performances in such
multi-annotator supervised learning settings. We address this issue by
presenting a probabilistic training framework named multi-annotator deep
learning (MaDL). A downstream ground truth and an annotator performance model
are jointly trained in an end-to-end learning approach. The ground truth model
learns to predict instances’ true class labels, while the annotator
performance model infers probabilistic estimates of annotators’ performances.
A modular network architecture enables us to make varying assumptions
regarding annotators’ performances, e.g., an optional class or instance
dependency. Further, we learn annotator embeddings to estimate annotators’
densities within a latent space as proxies of their potentially correlated
annotations. Together with a weighted loss function, we improve the learning
from correlated annotation patterns. In a comprehensive evaluation, we examine
three research questions about multi-annotator supervised learning. Our
findings show MaDL’s state-of-the-art performance and robustness against many
correlated, spamming annotators.
## 1 Introduction
Supervised deep neural networks (DNNs) have achieved great success in many
classification tasks (Pouyanfar et al., 2018). In general, these DNNs require
a vast amount of annotated data for their successful employment (Algan &
Ulusoy, 2021). However, acquiring top-quality class labels as annotations is
time-intensive and/or financially expensive (Herde et al., 2021). Moreover,
the overall annotation load may exceed a single annotator’s workforce (Uma et
al., 2021). For these reasons, multiple non-expert annotators, e.g.,
crowdworkers, are often tasked with data annotation (Zhang, 2022; Gilyazev &
Turdakov, 2018). Annotators’ missing domain expertise can lead to erroneous
annotations, known as noisy labels. Further, even expert annotators cannot be
assumed to be omniscient because additional factors, such as missing
motivation, fatigue, or an annotation task’s ambiguity (Vaughan, 2018), may
decrease their performances. A popular annotation quality assurance option is
the acquisition of multiple annotations per data instance with subsequent
aggregation (Zhang et al., 2016), e.g., via majority rule. The aggregated
annotations are proxies of the ground truth (GT) labels to train DNNs.
Aggregation techniques operate exclusively on the basis of annotations. In
contrast, model-based techniques use feature or annotator information and thus
work well in low-redundancy settings, e.g., with just one annotation per
instance (Khetan et al., 2018). Through predictive models, these techniques
jointly estimate instances’ GT labels and annotators’ performances (APs) by
learning and inferring interdependencies between instances, annotators, and
their annotations. As a result, model-based techniques cannot only predict GT
labels and APs for training instances but also for test instances, i.e., they
can be applied in transductive and inductive learning settings (Vapnik, 1995).
Despite ongoing research, several challenges still need to be addressed in
multi-annotator supervised learning. To introduce these challenges, we
exemplarily look at the task of animal classification in Fig. 1. Eight
annotators have been queried to provide annotations for the image of a jaguar.
Such a query is difficult because jaguars have remarkable similarities to
other predatory cats, e.g., leopards. Accordingly, the obtained annotations
indicate a strong disagreement between the leopard and jaguar class. Simply
taking the majority vote of these annotations results in leopard as a wrongly
estimated GT label. Therefore, advanced multi-annotator supervised learning
techniques leverage annotation information from other (similar) annotated
images to estimate APs. However, producing accurate AP estimates is difficult
because one needs to learn many annotation patterns. Otherwise, the estimated
GT labels will be biased, e.g., when APs are exclusively modeled as a function
of annotators. In this case, we cannot identify annotators who are only
knowledgeable about certain classes or regions in the feature space. Another
challenge in multi-annotator supervised learning concerns potential (latent)
correlations between annotators. In our animal annotation task, we illustrate
this issue by visualizing three latent groups of similarly behaving
annotators. Although we assume that the annotators work independently of each
other, they can still share common or statistically correlated error patterns
(Chu et al., 2021). This is particularly problematic if a group of ordinary
persons strongly outvotes a much smaller group of professionals. Considering
prior information about the annotators, i.e., annotator features or metadata
(Zhang et al., 2023), can help to identify these groups. Moreover, prior
information enables a model to inductively learn performances for annotators
who have provided few or no annotations. In our example, zoological interest
could be a good indicator for this purpose. While the inductive learning of
APs for annotators poses an additional challenge to the already complex task,
its use may be beneficial for further applications, e.g., optimizing the
annotator selection in an active learning setting (Herde et al., 2021) or
training annotators to improve their own knowledge (Daniel et al., 2018).
Figure 1: Animal classification as an illustration of a multi-annotator
supervised learning problem.
In this article, we address the above challenges by making the following
contributions:

* We propose multi-annotator deep learning (MaDL) as a probabilistic and modular classification framework. In an end-to-end training via a weighted maximum-likelihood approach, it learns embeddings of annotators to account for possible correlations among them.
* We specify six properties concerning the estimation of APs and application scenarios for categorizing related multi-annotator supervised learning techniques.
* Associated with these properties, we formulate three research questions (RQs), which we experimentally investigate, including comparisons of MaDL to related techniques.
The remainder of this article is structured as follows: In Section 2, we
formally introduce the problem setting of supervised learning from multiple
annotators. Subsequently, we identify central properties of multi-annotator
supervised learning techniques as a basis for categorizing related works and
pointing out their differences to MaDL in Section 3. Section 4 explains the
details of our MaDL framework. Experimental evaluations of MaDL and related
techniques are presented regarding RQs associated with the aforementioned
properties in Section 5. Finally, we conclude and give an outlook regarding
future research work in Section 6.
## 2 Problem Setting
In this section, we formalize the assumptions and objectives of multi-
annotator supervised learning for classification tasks.
Prerequisites: Without loss of generality, we represent a data instance as a
vector ${\mathbf{x}\coloneqq(x^{(1)},...,x^{(D)})^{\mathrm{T}}}$,
$D\in\mathbb{N}_{>0}$ in a $D$-dimensional real-valued input or feature space
$\Omega_{X}\coloneqq\mathbb{R}^{D}$. The $N\in\mathbb{N}_{>0}$ instances
jointly form a matrix
$\mathbf{X}\coloneqq(\mathbf{x}_{1},...,\mathbf{x}_{N})^{\mathrm{T}}$ and
originate from an unknown probability density function $\Pr(\mathbf{x})$. For
each observed instance $\mathbf{x}_{n}\sim\Pr(\mathbf{x})$, there is a GT
class label ${y_{n}\in\Omega_{Y}\coloneqq\\{1,\dots,C\\}}$. Each GT label
$y_{n}$ is assumed to be drawn from an unknown conditional distribution:
$y_{n}\sim\Pr(y\mid\mathbf{x}_{n})$. We denote the GT labels as the vector
$\mathbf{y}\coloneqq(y_{1},...,y_{N})^{\mathrm{T}}$. These GT labels are
unobserved since there is no omniscient annotator. Instead, we consider
multiple error-prone annotators. For the sake of simplicity, we represent an
annotator through individual features as a vector
$\mathbf{a}_{m}\in\Omega_{A}\coloneqq\mathbb{R}^{O},O\in\mathbb{N}_{>0}$. If
no prior annotator information is available, the annotators’ features are
defined through one-hot encoded vectors, i.e.,
$\Omega_{A}\coloneqq\\{\mathbf{e}_{1},\dots,\mathbf{e}_{M}\\}$ with
$\mathbf{a}_{m}\coloneqq\mathbf{e}_{m}$, to identify each annotator uniquely.
Otherwise, annotator features may provide information specific to the general
annotation task, e.g., the zoological interest when annotating animal images
or the years of experience in clinical practice when annotating medical data.
Together, the $M\in\mathbb{N}_{>0}$ annotators form a matrix
$\mathbf{A}\coloneqq(\mathbf{a}_{1},\dots,\mathbf{a}_{M})^{\mathrm{T}}$. We
denote the annotation assigned by annotator $\mathbf{a}_{m}$ to instance
$\mathbf{x}_{n}$ through $z_{nm}\in\Omega_{Y}\cup\\{\otimes\\}$, where
$z_{nm}=\otimes$ indicates that an annotation is unobserved, i.e., not
provided. An observed annotation is assumed to be drawn from an unknown
conditional distribution:
$z_{nm}\sim\Pr(z\mid\mathbf{x}_{n},\mathbf{a}_{m},y)$. Multiple annotations
for an instance $\mathbf{x}_{n}$ can be summarized as a vector
$\mathbf{z}_{n}\coloneqq(z_{n1},...,z_{nM})^{\mathrm{T}}$. Thereby, the set
$\mathcal{A}_{n}\coloneqq\\{m\mid m\in\\{1,\dots,M\\}\wedge
z_{nm}\in\Omega_{Y}\\}$ represents the indices of the annotators who assigned
an annotation to an instance $\mathbf{x}_{n}$. Together, the annotations of
all observed instances form the matrix
$\mathbf{Z}\coloneqq(\mathbf{z}_{1},...,\mathbf{z}_{N})^{\mathrm{T}}$. We
further assume there is a subset of annotators whose annotated instances are
sufficient to approximate the GT label distribution, i.e., together, these
annotated instances allow us to correctly differentiate between all classes.
Otherwise, supervised learning is hardly possible without explicit prior
knowledge about the distributions of GT labels and/or APs. Moreover, we expect
that the annotators independently decide on instances’ annotations and that
their APs are time-invariant.
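To make the notation concrete, here is a minimal NumPy sketch of these quantities (entirely illustrative; we encode the missing-annotation symbol $\otimes$ as $-1$):

```python
import numpy as np

N, M, C = 6, 3, 4                            # instances, annotators, classes
rng = np.random.default_rng(0)
X = rng.normal(size=(N, 2))                  # instance features (D = 2)
A = np.eye(M)                                # one-hot annotator features
Z = np.full((N, M), -1)                      # -1 encodes an unobserved annotation
observed = rng.random((N, M)) < 0.6          # each annotator labels ~60% of X
Z[observed] = rng.integers(0, C, size=observed.sum())
A_n = [np.flatnonzero(Z[n] >= 0) for n in range(N)]   # annotator index sets
```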
Objectives: Given these prerequisites, the first objective is to train a
downstream GT model, which approximates the optimal GT decision function
$y_{\mathrm{GT}}:\Omega_{X}\rightarrow\Omega_{Y}$ by minimizing the expected
loss across all classes:
$\displaystyle y_{\mathrm{GT}}(\mathbf{x})\coloneqq\operatorname*{arg\,min}_{y^{\prime}\in\Omega_{Y}}\left(\mathbb{E}_{y\mid\mathbf{x}}\left[L_{\mathrm{GT}}(y,y^{\prime})\right]\right).$ (1)
Thereby, we define the loss function
${L_{\text{GT}}:\Omega_{Y}\times\Omega_{Y}\rightarrow\\{0,1\\}}$ through the
zero-one loss:
$L_{\text{GT}}(y,y^{\prime})\coloneqq\delta(y\neq y^{\prime})\coloneqq\begin{cases}0,&\text{if }y=y^{\prime},\\\ 1,&\text{if }y\neq y^{\prime}.\end{cases}$ (2)
As a result, an optimal GT model for classification tasks can accurately
predict the GT labels of instances.
###### Proposition 1.
Assuming $L_{\mathrm{GT}}$ to be the zero-one loss in Eq. 2, the Bayes optimal
prediction for Eq. 1 is given by:
$\displaystyle y_{\mathrm{GT}}(\mathbf{x})=\operatorname*{arg\,max}_{y^{\prime}\in\Omega_{Y}}\left(\Pr(y^{\prime}\mid\mathbf{x})\right).$ (3)
When learning from multiple annotators, the APs are further quantities of
interest. Therefore, the second objective is to train an AP model, which
approximates the optimal AP decision function
${y_{\mathrm{AP}}:\Omega_{X}\times\Omega_{A}\rightarrow\\{0,1\\}}$ by
minimizing the following expected loss:
$y_{\mathrm{AP}}(\mathbf{x},\mathbf{a})\coloneqq\operatorname*{arg\,min}_{y^{\prime}\in\\{0,1\\}}\left(\mathbb{E}_{y\mid\mathbf{x}}\left[\mathbb{E}_{z\mid\mathbf{x},\mathbf{a},y}\left[L_{\mathrm{AP}}\left(y^{\prime},L_{\mathrm{GT}}\left(y,z\right)\right)\right]\right]\right).$ (4)
Defining $L_{\mathrm{AP}}$ and $L_{\mathrm{GT}}$ as zero-one loss, an optimal
AP model for classification tasks can accurately predict the zero-one loss of
annotator’s class labels, i.e., whether an annotator $\mathbf{a}$ provides a
false, i.e., $y_{\mathrm{AP}}(\mathbf{x},\mathbf{a})=1$, or correct, i.e.,
$y_{\mathrm{AP}}(\mathbf{x},\mathbf{a})=0$, class label for an instance
$\mathbf{x}$.
###### Proposition 2.
Assuming both $L_{\mathrm{AP}}$ and $L_{\mathrm{GT}}$ to be the zero-one loss,
as defined in Eq. 2, the Bayes optimal prediction for Eq. 4 is given by:
$\displaystyle y_{\mathrm{AP}}(\mathbf{x},\mathbf{a})=\delta\left(\sum_{y\in\Omega_{Y}}\Pr(y\mid\mathbf{x})\Pr(z=y\mid\mathbf{x},\mathbf{a},y)<0.5\right).$ (5)
We refer to Appendix A for the proofs of Proposition 1 and Proposition 2.
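As a toy numerical illustration of Propositions 1 and 2 (our own example values, assuming the class posterior and the annotator's per-class correctness probabilities are known):

```python
import numpy as np

p_y = np.array([0.2, 0.5, 0.3])        # Pr(y | x) for C = 3 classes
p_correct = np.array([0.9, 0.4, 0.7])  # Pr(z = y | x, a, y) per true class y

y_gt = int(p_y.argmax())               # Proposition 1: Bayes-optimal GT label
p_right = float((p_y * p_correct).sum())      # chance the annotator is correct
y_ap = int(p_right < 0.5)              # Proposition 2: 1 predicts a false label

print(y_gt, p_right, y_ap)             # 1, 0.2*0.9 + 0.5*0.4 + 0.3*0.7 = 0.59, 0
```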
## 3 Related Work
This section discusses existing multi-annotator supervised learning techniques
targeting our problem setting of Section 2. Since we focus on the AP next to
the GT estimation, we restrict our discussion to techniques capable of
estimating both target types. In this context, we analyze related research
regarding three aspects, i.e., GT models, AP models, and algorithms for
training these models.
Ground truth model: The first multi-annotator supervised learning techniques
employed logistic regression models (Raykar et al., 2010; Kajino et al., 2012;
Rodrigues et al., 2013; Yan et al., 2014) for classification. Later, different
kernel-based variants of GT models, e.g., Gaussian processes, were developed
(Rodrigues et al., 2014; Long et al., 2016; Gil-Gonzalez et al., 2021).
Rodrigues et al. (2017) focused on documents and extended topic models to the
multi-annotator setting. More recently, several techniques were proposed to
train DNNs for large-scale and especially image classification tasks with
noisy annotations (Albarqouni et al., 2016; Guan et al., 2018; Khetan et al.,
2018; Rodrigues & Pereira, 2018; Yang et al., 2018; Tanno et al., 2019; Cao et
al., 2019; Platanios et al., 2020; Zhang et al., 2020; Gil-González et al.,
2021; Rühling Cachay et al., 2021; Chu et al., 2021; Li et al., 2022; Wei et
al., 2022; Gao et al., 2022). MaDL follows this line of work and also employs
a (D)NN as the GT model.
Annotator performance model: An AP model is typically seen as an auxiliary
part of the GT model since it provides AP estimates for increasing the GT
model’s performance. In this article, we reframe an AP model’s use in a more
general context because accurately assessing APs can be crucial in improving
several applications, e.g., human-in-the-loop processes (Herde et al., 2021)
or knowledge tracing (Piech et al., 2015). For this reason, we analyze
existing AP models regarding six properties, which we identified as relevant
while reviewing literature about multi-annotator supervised learning.
(P1) Class-dependent annotator performance: The simplest AP representation is
an overall accuracy value per annotator. On the one hand, AP models estimating
such accuracy values have low complexity and thus do not overfit (Rodrigues et
al., 2013; Long et al., 2016). On the other hand, they may be overly general
and cannot assess APs on more granular levels. Therefore, many other AP models
assume a dependency between APs and instances’ GT labels. Class-dependent AP
models typically estimate confusion matrices (Raykar et al., 2010; Rodrigues
et al., 2014; 2017; Khetan et al., 2018; Tanno et al., 2019; Platanios et al.,
2020; Gao et al., 2022; Li et al., 2022), which indicate annotator-specific
probabilities of mistaking one class for another, e.g., recognizing a jaguar
as a leopard. Alternatively, weights of annotation aggregation functions (Cao
et al., 2019; Rühling Cachay et al., 2021) or noise-adaption layers (Rodrigues
& Pereira, 2018; Chu et al., 2021; Wei et al., 2022) can be interpreted as
non-probabilistic versions of confusion matrices. MaDL estimates probabilistic
confusion matrices or less complex approximations, e.g., the elements on their
diagonals.
(P2) Instance-dependent annotator performance: In many real-world
applications, APs are additionally instance-dependent (Yan et al., 2014)
because instances of the same class can strongly vary in their feature values.
For example, recognizing animals in blurry images is more difficult than in
high-resolution images. Hence, several AP models estimate the probability of
obtaining a correct annotation as a function of instances and annotators
(Kajino et al., 2012; Yan et al., 2014; Guan et al., 2018; Yang et al., 2018;
Gil-Gonzalez et al., 2021; Gil-González et al., 2021). Combining instance- and
class-dependent APs results in the most complex AP models, which estimate a
confusion matrix per instance-annotator pair (Platanios et al., 2020; Zhang et
al., 2020; Rühling Cachay et al., 2021; Chu et al., 2021; Gao et al., 2022; Li
et al., 2022). MaDL also employs an AP model of this type. However, it
optionally allows dropping the instance and class dependency, which can
benefit classification tasks where each annotator provides only a few
annotations.
(P3) Annotator correlations: Although most techniques assume that annotators
do not collaborate, they can still have correlations regarding their
annotation patterns, e.g., by sharing statistically correlated error patterns
(Chu et al., 2021). Gil-Gonzalez et al. (2021) proposed a kernel-based
approach where a matrix quantifies such correlations for all pairs of
annotators. Inspired by weak supervision, Cao et al. (2019) and Rühling Cachay
et al. (2021) employ an aggregation function that takes all annotations per
instance as input to model annotator correlations. Gil-González et al. (2021)
introduce a regularized chained DNN whose weights encode correlations. Wei et
al. (2022) jointly model the annotations of all annotators as outputs and thus
take account of potential correlated mistakes. Chu et al. (2021) consider
common annotation noise through a noise adaptation layer shared across
annotators. Moreover, similar to our MaDL framework, they learn embeddings of
annotators. Going beyond this, MaDL exploits these embeddings to determine
annotator correlations.
(P4) Robustness to spamming annotators: Especially on crowdsourcing platforms,
there have been several reports of workers spamming annotations (Vuurens et
al., 2011), e.g., by randomly guessing or permanently providing the same
annotation. Such spamming annotators can strongly harm the learning process.
As a result, multi-annotator supervised learning techniques should ideally be
robust to these types of annotation noise. Cao et al. (2019) employ an
information-theoretic approach to separate expert annotators from possibly
correlated spamming annotators. Rühling Cachay et al. (2021) empirically
demonstrated that their weakly supervised learning technique is robust to large
numbers of randomly guessing annotators. MaDL ensures this robustness by
training via a weighted likelihood function, assigning high weights to
independent annotators whose annotation patterns have no or only slight
statistical correlations to the patterns of other annotators.
(P5) Prior annotator information: On crowdsourcing platforms, requesters may
acquire prior information about annotators (Daniel et al., 2018), e.g.,
through surveys, annotation quality tests, or publicly available profiles.
Several existing AP models leverage such information to improve learning.
For example, conjugate prior probability distributions, e.g., Dirichlet
distributions, offer a straightforward way to include prior estimates of
class-dependent accuracies (Raykar et al., 2010; Albarqouni et al., 2016;
Rodrigues et al., 2017). Other techniques (Platanios et al., 2020; Chu et al.,
2021), including our MaDL framework, do not directly expect prior accuracy
estimates but work with all types of prior information that can be represented
as vectors of annotator features.
(P6) Inductive learning of annotator performance: Accurate AP estimates can be
beneficial in various applications, e.g., guiding an active learning strategy
to select accurate annotators (Yang et al., 2018). For this purpose, it is
necessary that a multi-annotator supervised learning technique can inductively
infer APs for non-annotated instances. Moreover, an annotation process is
often a dynamic system where annotators leave and enter. Hence, it is highly
interesting to inductively estimate the performances of newly entered
annotators, e.g., through annotator features as used by Platanios et al. (2020)
and MaDL.
Training: Several multi-annotator supervised learning techniques employ the
expectation-maximization (EM) algorithm for training (Raykar et al., 2010;
Rodrigues et al., 2013; Yan et al., 2014; Long et al., 2016; Albarqouni et
al., 2016; Guan et al., 2018; Khetan et al., 2018; Yang et al., 2018;
Platanios et al., 2020). GT labels are modeled as latent variables and
estimated during the E step, while the GT and AP models’ parameters are
optimized during the M step. The exact optimization in the M step depends on
the underlying models. Typically, a variant of gradient descent (GD), e.g.,
quasi-Newton methods, is employed, or a closed-form solution exists, e.g., for
AP models with instance-independent AP estimates. Other techniques take a
Bayesian view of the models’ parameters and therefore resort to expectation
propagation (EP) (Rodrigues et al., 2014; Long et al., 2016) or variational
inference (VI) (Rodrigues et al., 2017). As approximate inference methods are
computationally expensive and may lead to suboptimal results, several end-to-
end training algorithms have been proposed. Gil-Gonzalez et al. (2021)
introduced a localized kernel alignment-based relevance analysis that
optimizes via GD. Through a regularization term penalizing differences
between GT and AP model parameters, Kajino et al. (2012) formulated a convex loss
function for logistic regression models. Rodrigues & Pereira (2018), Gil-
González et al. (2021), and Wei et al. (2022) jointly train the GT and AP
models by combining them into a single DNN with noise adaptation layers. Chu et
al. (2021) follow a similar approach with two types of noise adaptation layers:
one shared across annotators and one individual for each annotator. Gil-
González et al. (2021) employ a regularized chained DNN to estimate GT labels
and AP performances jointly. In favor of probabilistic AP estimates, Tanno et
al. (2019), Zhang et al. (2020), Li et al. (2022), and MaDL avoid noise
adaptation layers but employ loss functions suited for end-to-end learning. Cao
et al. (2019) and Rühling Cachay et al. (2021) jointly learn an aggregation
function in combination with the AP and GT models.
Table 1 summarizes and completes the aforementioned discussion by categorizing
multi-annotator supervised learning techniques according to their GT model, AP
model, and training algorithm. Thereby, the AP model is characterized by the
six previously discussed properties (P1–P6). We assign ✓ if a property is
supported, ✗ if not supported, and ✦ if partially supported. More precisely, ✦
is assigned to property P5 if the technique can include prior annotator
information but needs a few adjustments and to property P6 if the technique
requires some architectural changes to learn the performances of new
annotators inductively. For property P4, a ✓ indicates that the authors have
shown that their proposed technique learns in the presence of many spamming
annotators.
Table 1: Literature categorization of multi-annotator supervised learning techniques. Empty cells in the Ground Truth Model and Training columns repeat the entry from the row above (merged cells in the original table).

Reference | Ground Truth Model | Training | P1 | P2 | P3 | P4 | P5 | P6
---|---|---|---|---|---|---|---|---
Kajino et al. (2012) | Logistic Regression Model | GD | ✗ | ✓ | ✗ | ✗ | ✗ | ✗
Raykar et al. (2010) | | EM & GD | ✓ | ✗ | ✗ | ✗ | ✓ | ✗
Rodrigues et al. (2013) | | | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
Yan et al. (2014) | | | ✗ | ✓ | ✗ | ✗ | ✦ | ✦
Rodrigues et al. (2017) | Topic Model | VI & GD | ✓ | ✗ | ✗ | ✗ | ✓ | ✗
Rodrigues et al. (2014) | Kernel-based Model | EP | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
Long et al. (2016) | | EM & EP & GD | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
Gil-Gonzalez et al. (2021) | | GD | ✗ | ✓ | ✓ | ✗ | ✗ | ✗
Albarqouni et al. (2016) | (Deep) Neural Network | EM & GD | ✓ | ✗ | ✗ | ✗ | ✓ | ✗
Yang et al. (2018) | | | ✗ | ✓ | ✗ | ✗ | ✦ | ✦
Khetan et al. (2018) | | | ✓ | ✗ | ✗ | ✗ | ✦ | ✦
Platanios et al. (2020) | | | ✓ | ✓ | ✗ | ✗ | ✓ | ✓
Rodrigues & Pereira (2018) | | GD | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
Guan et al. (2018) | | | ✗ | ✓ | ✗ | ✗ | ✗ | ✗
Tanno et al. (2019) | | | ✓ | ✗ | ✗ | ✗ | ✦ | ✦
Cao et al. (2019) | | | ✓ | ✗ | ✓ | ✓ | ✗ | ✗
Zhang et al. (2020) | | | ✓ | ✓ | ✓ | ✗ | ✦ | ✦
Gil-González et al. (2021) | | | ✗ | ✓ | ✓ | ✓ | ✗ | ✗
Rühling Cachay et al. (2021) | | | ✓ | ✓ | ✓ | ✓ | ✗ | ✗
Chu et al. (2021) | | | ✓ | ✓ | ✓ | ✓ | ✓ | ✗
Li et al. (2022) | | | ✓ | ✓ | ✗ | ✗ | ✦ | ✦
Wei et al. (2022) | | | ✓ | ✗ | ✓ | ✗ | ✗ | ✗
Gao et al. (2022) | | | ✓ | ✓ | ✗ | ✗ | ✦ | ✦
MaDL (2023) | | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
## 4 Multi-annotator Deep Learning
In this section, we present our modular probabilistic MaDL framework. We start
with a description of its underlying probabilistic model. Subsequently, we
introduce its GT and AP models’ architectures. Finally, we explain our end-to-
end training algorithm.
### 4.1 Probabilistic Model
The four nodes in Fig. 2 depict the random variables of an instance
$\mathbf{x}$, a GT label $y$, an annotator $\mathbf{a}$, and an annotation
$z$. Arrows indicate probabilistic dependencies between them. The random
variables of an instance $\mathbf{x}$ and an annotator $\mathbf{a}$ have no
incoming arrows and thus no causal dependencies on other random variables.
In contrast, the distribution of a latent GT label $y$ depends on its
associated instance $\mathbf{x}$. For classification problems, the probability
of observing $y=c$ as GT label of an instance $\mathbf{x}$ can be modeled
through a categorical distribution:
$\displaystyle\Pr(y=c\mid\mathbf{x})\coloneqq\mathrm{Cat}(y=c\mid\bm{p}(\mathbf{x}))\coloneqq\prod_{k=1}^{C}\left(p^{(k)}(\mathbf{x})\right)^{\delta(k=c)}=p^{(c)}(\mathbf{x}),$
(6)
where
$\mathbf{p}:\Omega_{X}\rightarrow\Delta\coloneqq\{\mathbf{p}\in[0,1]^{C}\mid\sum_{c=1}^{C}p^{(c)}=1\}$
denotes the function outputting an instance’s true class-membership
probabilities. The outcome of an annotation process may depend on the
annotator’s features, an instance’s features, and the latent GT label. A
function $\mathbf{P}:\Omega_{X}\times\Omega_{A}\rightarrow[0,1]^{C\times C}$
outputting a row-wise normalized confusion matrix per instance-annotator pair
can capture these dependencies. The probability that an annotator $\mathbf{a}$
annotates an instance $\mathbf{x}$ of class $y=c$ with the annotation $z=k$
can then be modeled through a categorical distribution:
$\displaystyle\Pr(z=k\mid\mathbf{x},\mathbf{a},y=c)$
$\displaystyle\coloneqq\text{Cat}\left(z=k\,\middle|\,\mathbf{P}^{(c,:)}(\mathbf{x},\mathbf{a})\right)\coloneqq\prod_{l=1}^{C}\left(P^{(c,l)}(\mathbf{x},\mathbf{a})\right)^{\delta(l=k)}=P^{(c,k)}(\mathbf{x},\mathbf{a}),$
(7)
where the vector $\mathbf{P}^{(c,:)}(\mathbf{x},\mathbf{a})\in\Delta$
denotes the $c$-th row of the confusion matrix
$\mathbf{P}(\mathbf{x},\mathbf{a})$.
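To illustrate the probabilistic model, the following minimal sketch marginalizes Eq. 7 over the latent GT label (this marginalization reappears in Eq. 14 below). It is written in Python with PyTorch; all values and variable names are illustrative assumptions rather than outputs of the actual models.

```python
# A minimal sketch with assumed, illustrative values; not MaDL's actual API.
import torch

C = 3                                  # number of classes
p = torch.tensor([0.7, 0.2, 0.1])      # class-membership probabilities p(x) (Eq. 6)
P = torch.full((C, C), 0.1)            # row-stochastic confusion matrix P(x, a) (Eq. 7)
P.fill_diagonal_(0.8)                  # annotator is correct with probability 0.8 per class

# Pr(z = k | x, a) = sum_c Pr(y = c | x) * P[c, k] = (P^T p)[k]
annotation_probs = P.T @ p
assert torch.isclose(annotation_probs.sum(), torch.tensor(1.0))
print(annotation_probs)                # tensor([0.5900, 0.2400, 0.1700])
```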
Figure 2: Probabilistic graphical model of MaDL.
### 4.2 Model Architectures
Now, we introduce how MaDL’s GT and AP models are designed to approximate the
functions of true class-membership probabilities $\mathbf{p}$ and true
confusion matrices $\mathbf{P}$ for the respective instances and annotators.
Fig. 3 illustrates the architecture of the GT (purple) and AP (green) models
within our MaDL framework. Solid lines indicate mandatory components, while
dashed lines express optional ones.
Figure 3: Architectures of MaDL’s GT and AP models.
The GT model with parameters $\bm{\theta}$ is a (D)NN (cf. 4 in Fig. 3), which
takes an instance $\mathbf{x}$ as input to approximate its true class-
membership probabilities $\mathbf{p}(\mathbf{x})$ via
$\mathbf{\hat{p}}_{\bm{\theta}}(\mathbf{x})$. We define its decision function
in analogy to the Bayes optimal prediction in Eq. 3 through
$\displaystyle\hat{y}_{\bm{\theta}}(\mathbf{x})\coloneqq\operatorname*{arg\,max}_{y\in\Omega_{Y}}\left(\hat{p}_{\bm{\theta}}^{(y)}(\mathbf{x})\right).$
(8)
Figure 4: MaDL’s residual block combining annotator and instance embedding.
The architecture of the AP model with parameters $\bm{\omega}$ comprises
mandatory and optional components. We start by describing its most general
form, which consists of three (D)NNs and estimates annotator-, class-, and
instance-dependent APs. Annotator features $\mathbf{a}$ are propagated through
a first (D)NN (cf. 1 in Fig. 3) to learn an annotator embedding
${\mathbf{\widetilde{a}}\in\mathbb{R}^{R},R\in\mathbb{N}_{\geq 1}}$. During
training, we will use such embeddings for quantifying correlations between
annotators. Analogously, we propagate raw instance features $\mathbf{x}$ or a
representation learned by the GT model’s hidden layers through a second (D)NN
(cf. 2 in Fig. 3) for learning an instance embedding
${\mathbf{\widetilde{x}}\in\mathbb{R}^{Q},Q\in\mathbb{N}_{\geq 1}}$.
Subsequently, instance and annotator embeddings $\mathbf{\widetilde{x}}$ and
$\mathbf{\widetilde{a}}$ are combined through a third and final (D)NN (cf. 3
in Fig. 3) for approximating the true confusion matrix
$\mathbf{P}(\mathbf{x},\mathbf{a})$ via
$\mathbf{\hat{P}}_{\bm{\omega}}(\mathbf{x},\mathbf{a})$. Various architectures
for combining embeddings have already been proposed in the literature
(Fiedler, 2021). We adopt a solution from recommender systems where often
latent factors of users and items are combined (Zhang et al., 2019).
Concretely, in DNN 3, we use an outer product-based layer outputting
$\mathbf{\widetilde{o}}\in\mathbb{R}^{F},F\in\mathbb{N}_{\geq 1}$ to model the
interactions between instance and annotator embeddings (Qu et al., 2016). The
concatenation of $\mathbf{\widetilde{a}},\mathbf{\widetilde{x}}$, and
$\mathbf{\widetilde{o}}$ is propagated through a residual block (He et al.,
2016), whose architecture is visualized in Fig. 4. There, we add only the
annotator embedding $\mathbf{\widetilde{a}}$ to the learned mapping
$\mathbf{h}(\mathbf{\widetilde{a}},\mathbf{\widetilde{x}},\mathbf{\widetilde{o}})\in\mathbb{R}^{R}$.
The motivation behind this modification is that the annotator embeddings,
informing about an annotator’s individuality, are likely to be the most
influential inputs for estimating confusion matrices as APs. Empirical
investigations showed that an embedding size of $R=Q=F=16$ is a robust
default. Finally, we define the AP model's decision function in analogy to the
Bayes optimal prediction in Eq. 5 through
$\displaystyle\hat{y}_{\bm{\theta},\bm{\omega}}(\mathbf{x},\mathbf{a})\coloneqq\delta\left(\sum_{c=1}^{C}\hat{p}_{\bm{\theta}}^{(c)}(\mathbf{x})\cdot\hat{P}_{\bm{\omega}}^{(c,c)}(\mathbf{x},\mathbf{a})<0.5\right)\coloneqq\delta\left(\underbrace{\hat{p}_{\bm{\theta},\bm{\omega}}(\mathbf{x},\mathbf{a})}_{\text{predicted
correctness probability}}<0.5\right).$ (9)
An AP model estimating a confusion matrix per instance-annotator pair can be
overly complex if there are only a few annotations per annotator or the number
of classes is high (Rodrigues et al., 2013). In such settings, ignoring the
instance features as input of the AP model may be beneficial. Alternatively,
we can constrain a confusion matrix’s degrees of freedom by reducing the
number of output neurons of the AP model. For example, we might estimate only
the diagonal elements of the confusion matrix and assume that the remaining
probability mass per row is uniformly distributed. Further, we can either
estimate each diagonal element individually (corresponding to $C$ output
neurons) or approximate them via a single scalar (corresponding to one output
neuron). Appendix G illustrates such confusion matrices with varying degrees
of freedom.
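To make the data flow of Fig. 3 concrete, the following condensed PyTorch sketch assembles the general AP model from the three (D)NNs described above. All module names, hidden-layer sizes, and the exact residual design are simplifying assumptions; Figs. 3 and 4 and the released code base remain authoritative.

```python
# A condensed sketch of the general AP model, assuming simplified layer designs.
import torch
import torch.nn as nn

class APModelSketch(nn.Module):
    def __init__(self, n_annot_feat, n_inst_feat, n_classes, R=16, Q=16, F=16):
        super().__init__()
        self.C = n_classes
        self.annot_embed = nn.Sequential(nn.Linear(n_annot_feat, R), nn.ReLU())  # DNN 1
        self.inst_embed = nn.Sequential(nn.Linear(n_inst_feat, Q), nn.ReLU())    # DNN 2
        self.outer_proj = nn.Linear(Q * R, F)              # outer product-based layer
        self.residual = nn.Sequential(                     # residual block (cf. Fig. 4)
            nn.Linear(R + Q + F, R), nn.ReLU(), nn.Linear(R, R),
        )
        self.out = nn.Linear(R, n_classes * n_classes)     # confusion matrix logits

    def forward(self, x, a):
        a_emb = self.annot_embed(a)                        # annotator embedding
        x_emb = self.inst_embed(x)                         # instance embedding
        outer = torch.einsum("bq,br->bqr", x_emb, a_emb)   # embedding interactions
        o_emb = self.outer_proj(outer.flatten(1))
        h = self.residual(torch.cat([a_emb, x_emb, o_emb], dim=-1))
        v = a_emb + h                                      # add only the annotator embedding
        logits = self.out(v).view(-1, self.C, self.C)
        return torch.softmax(logits, dim=-1)               # row-stochastic confusion matrices
```

For example, `APModelSketch(n_annot_feat=8, n_inst_feat=2, n_classes=3)(torch.randn(4, 2), torch.randn(4, 8))` yields row-stochastic confusion matrices of shape (4, 3, 3), one per instance-annotator pair.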
### 4.3 End-to-end Training
Given the probabilistic model and accompanying architectures of the GT and AP
models, we propose an algorithm for jointly learning their parameters. A
widespread method for training probabilistic models is to maximize the
likelihood of the observed data with respect to the model parameters. Assuming
that the joint distributions of annotations $\mathbf{Z}$ are conditionally
independent for given instances $\mathbf{X}$, we can specify the likelihood
function as follows:
$\Pr(\mathbf{Z}\mid\mathbf{X},\mathbf{A};\bm{\theta},\bm{\omega})=\prod_{n=1}^{N}\Pr(\mathbf{z}_{n}\mid\mathbf{x}_{n},\mathbf{A};\bm{\theta},\bm{\omega}).$
(10)
We further assume that the distributions of annotations $\mathbf{z}_{n}$ for a
given instance $\mathbf{x}_{n}$ are conditionally independent across
annotators. Thus, we can simplify the likelihood function:
$\displaystyle\Pr(\mathbf{Z}\mid\mathbf{X},\mathbf{A};\bm{\theta},\bm{\omega})$
$\displaystyle=\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m};\bm{\theta},\bm{\omega}).$
(11)
Leveraging our probabilistic model in Fig. 2, we can express the probability
of obtaining a certain annotation as an expectation with respect to an
instance’s (unknown) GT class label:
$\displaystyle\Pr(\mathbf{Z}\mid\mathbf{X},\mathbf{A};\bm{\theta},\bm{\omega})$
$\displaystyle=\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\mathbb{E}_{y_{n}\mid\mathbf{x}_{n};\bm{\theta}}\left[\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m},y_{n};\bm{\omega})\right]$
(12)
$\displaystyle=\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\left(\sum_{y_{n}=1}^{C}\Pr(y_{n}\mid\mathbf{x}_{n};\bm{\theta})\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m},y_{n};\bm{\omega})\right)$
(13)
$\displaystyle=\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\mathbf{e}^{\mathrm{T}}_{z_{nm}}\underbrace{\mathbf{\hat{P}}^{\mathrm{T}}_{\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{\hat{p}}_{\bm{\theta}}(\mathbf{x}_{n})}_{\text{annotation
probabilities}},$ (14)
where $\mathbf{e}_{z_{nm}}$ denotes the one-hot encoded vector of annotation
$z_{nm}$. Taking the logarithm of this likelihood function and converting the
maximization into a minimization problem, we get
$L_{\mathbf{X},\mathbf{A},\mathbf{Z}}(\bm{\theta},\bm{\omega})\coloneqq-\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\ln\left(\mathbf{e}^{\mathrm{T}}_{z_{nm}}\mathbf{\hat{P}}^{\mathrm{T}}_{\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{\hat{p}}_{\bm{\theta}}(\mathbf{x}_{n})\right)$
(15)
as cross-entropy loss function for learning annotation probabilities by
combining the outputs of the GT and AP models (cf. blue components in Fig. 3).
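Translated into code, Eq. 15 amounts to a few lines; the following PyTorch sketch assumes one observed annotation per row and illustrates only the computation, not MaDL's actual implementation.

```python
# A minimal sketch of the loss in Eq. 15, assuming one annotation per row.
import torch

def annotation_nll(p_hat, P_hat, z):
    """p_hat: (N, C) class-membership probabilities from the GT model,
    P_hat: (N, C, C) confusion matrices from the AP model,
    z: (N,) observed annotations as class indices."""
    # Annotation probabilities (P^T p) per instance-annotator pair (cf. Eq. 14).
    probs = torch.einsum("nck,nc->nk", P_hat, p_hat)
    return -torch.log(probs.gather(1, z[:, None]).squeeze(1)).sum()
```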
Yet, directly employing this loss function for learning may yield poor results
for two reasons.
Initialization: Reason number one has been noted by Tanno et al. (2019), who
showed that such a loss function cannot ensure the separation of the AP and GT
label distributions. This is because infinitely many combinations of class-
membership probabilities and confusion matrices perfectly comply with the true
annotation probabilities, e.g., by swapping the rows of the confusion matrix,
as the following example shows:
$\displaystyle\underbrace{\mathbf{P}^{\mathrm{T}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{p}(\mathbf{x}_{n})}_{\text{true probabilities}}=\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=\underbrace{\mathbf{\hat{P}}^{\mathrm{T}}_{\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{\hat{p}}_{\bm{\theta}}(\mathbf{x}_{n})}_{\text{predicted probabilities}}.$ (16)
Existing approaches resolve this issue by favoring certain combinations, e.g.,
diagonally dominant confusion matrices. Typically, one can
achieve this via regularization (Tanno et al., 2019; Zhang et al., 2020; Li et
al., 2022) and/or suitable initialization of the AP model’s parameters
(Rodrigues & Pereira, 2018; Wei et al., 2022). We rely on the latter approach
because it permits encoding prior knowledge about APs. Concretely, we
approximate an initial confusion matrix for any instance-annotator pair
$(\mathbf{x}_{n},\mathbf{a}_{m})$ through
$\displaystyle\mathbf{\hat{P}}_{\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\coloneqq\begin{pmatrix}\texttt{softmax}\left((\mathbf{v}^{\mathrm{T}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{W}+\mathbf{B})^{(1,:)}\right)\\\vdots\\\texttt{softmax}\left((\mathbf{v}^{\mathrm{T}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{W}+\mathbf{B})^{(C,:)}\right)\end{pmatrix}\approx\eta\mathbf{I}_{C}+\frac{1-\eta}{C-1}\left(\mathbf{1}_{C}-\mathbf{I}_{C}\right),$ (17)
where $\mathbf{I}_{C}\in\mathbb{R}^{C\times C}$ denotes an identity matrix,
$\mathbf{1}_{C}\in\mathbb{R}^{C\times C}$ an all-one matrix, and
$\eta\in(0,1)$ the prior probability of obtaining a correct annotation. For
example, in a binary classification problem, the initial confusion matrix
would approximately take the following values:
$\mathbf{\hat{P}}_{\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\approx\begin{pmatrix}\eta&1-\eta\\1-\eta&\eta\end{pmatrix}.$ (18)
The outputs of the softmax functions represent the confusion matrix’s rows.
Provided that the initial AP model’s last layer’s weights
$\mathbf{W}\in\mathbb{R}^{H\times C\times C},H\in\mathbb{N}_{>0}$ satisfy
$\mathbf{v}^{\mathrm{T}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{W}\approx\mathbf{0}_{C}\in\mathbb{R}^{C\times
C}$ for the hidden representation
$\mathbf{v}(\mathbf{x}_{n},\mathbf{a}_{m})\in\mathbb{R}^{H}$ of each instance-
annotator pair, we approximate Eq. 17 by initializing the biases
$\mathbf{B}\in\mathbb{R}^{C\times C}$ of our AP model’s output layer via
$\displaystyle\mathbf{B}\coloneqq\ln\left(\frac{\eta\cdot(C-1)}{1-\eta}\right)\mathbf{I}_{C}.$
(19)
By default, we set $\eta=0.8$ to assume trustworthy annotators a priori.
Accordingly, initial class-membership probability estimates are close to the
annotation probability estimates.
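The bias initialization in Eq. 19 is a one-liner; the following sketch, using the default values stated above, checks that a row-wise softmax of $\mathbf{B}$ approximately recovers the prior confusion matrix of Eq. 18.

```python
# A sketch of the bias initialization in Eq. 19 with the stated defaults.
import math
import torch

C, eta = 2, 0.8
B = math.log(eta * (C - 1) / (1 - eta)) * torch.eye(C)
print(torch.softmax(B, dim=-1))  # ≈ [[0.8, 0.2], [0.2, 0.8]], i.e., Eq. 18
```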
Figure 5: Visualization of annotator embeddings (left) accompanied by an
exemplary calculation of annotator probability densities and annotator weights
(right). In this example, the densities are approximately proportional to the
cluster sizes, i.e., $\Pr(\mathbf{a}_{1}\mid\mathbf{A})\propto 1$,
$\Pr(\mathbf{a}_{2}\mid\mathbf{A})\approx\dots\approx\Pr(\mathbf{a}_{5}\mid\mathbf{A})\propto 4$,
and $\Pr(\mathbf{a}_{6}\mid\mathbf{A})\approx\dots\approx\Pr(\mathbf{a}_{8}\mid\mathbf{A})\propto 3$,
yielding the weights $w(\mathbf{a}_{1})\approx\nicefrac{8}{3}$,
$w(\mathbf{a}_{2})\approx\dots\approx w(\mathbf{a}_{5})\approx\nicefrac{8}{(4\cdot 3)}$,
and $w(\mathbf{a}_{6})\approx\dots\approx w(\mathbf{a}_{8})\approx\nicefrac{8}{(3\cdot 3)}$.
Annotator weights: Reason number two has been noted by Cao et al. (2019), who
proved that maximum-likelihood solutions fail when there are strong annotator
correlations, i.e., annotators with significant statistical correlations in
their annotation patterns. To address this issue, we explore the annotator
correlations in the latent space of the learned annotator embeddings. For this
purpose, we assume that annotators with similar embeddings share correlated
annotation patterns. Recalling our example in Fig. 1, this assumption implies
that annotators of the same latent group are located near each other. The left
plot of Fig. 5 visualizes this assumption for a two-dimensional embedding
space, where the eight annotators are arranged into three clusters as proxies
of the three latent annotator groups. We aim to extend our loss function so
that its evaluation is independent of the annotator groups’ cardinalities. For
our example, we view the three annotator groups as three independent
annotators of equal importance. To this purpose, we extend the original
likelihood function in Eq. 11 by annotator weights, such that we obtain the
weighted likelihood function:
$\displaystyle\Pr(\mathbf{Z}\mid\mathbf{X},\mathbf{A};\bm{\theta},\bm{\omega},\mathbf{w})$
$\displaystyle=\prod_{n=1}^{N}\prod_{m\in\mathcal{A}_{n}}\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m};\bm{\theta},\bm{\omega})^{w(\mathbf{a}_{m})},$
(20)
where
$\mathbf{w}\coloneqq(w(\mathbf{a}_{1}),\dots,w(\mathbf{a}_{M}))^{\mathrm{T}}\in\mathbb{R}_{\geq
0}^{M}$ denotes a vector of non-negative annotator weights. From a
probabilistic perspective, we can interpret such a weight $w(\mathbf{a}_{m})$
as the effective number of observations (or copies) per annotation of
annotator $\mathbf{a}_{m}$. Interpreting the annotators $\mathbf{A}$ as
samples from a continuous latent space, we define an annotator weight
$w(\mathbf{a}_{m})$ to be inversely proportional to an annotator’s
$\mathbf{a}_{m}$ probability density:
$w(\mathbf{a}_{m})\coloneqq\frac{\Pr(\mathbf{a}_{m}\mid\mathbf{A})^{-1}}{Z},\quad Z\coloneqq M^{-1}\sum_{m=1}^{M}\Pr(\mathbf{a}_{m}\mid\mathbf{A})^{-1},$ (21)
provided that $\Pr(\mathbf{a}_{1}\mid\mathbf{A}),\dots,\Pr(\mathbf{a}_{M}\mid\mathbf{A})>0$.
The normalization term $Z\in\mathbb{R}_{>0}$ ensures that the number of
effective annotations remains equal to the number of annotators, i.e.,
$\sum_{m=1}^{M}w(\mathbf{a}_{m})=M$. On the right side of our example in Fig.
5, we expect that an annotator’s probability density is approximately
proportional to the cardinality of the group to which the annotator belongs.
As a result, we assign high (low) weights to annotators belonging to small
(large) groups. Inspecting the exemplary annotator weights and adding the
weights per annotator group, we observe that each group provides the same
number of effective annotations, i.e., $\nicefrac{8}{3}$. More generally,
we support our definition of the annotator weights by the following theorem,
whose proof is given in Appendix A.
###### Theorem 1.
Let there be $G\in\{1,\dots,M\}$ non-empty, disjoint annotator groups, which
we denote as sets of indices such that
$\mathcal{A}^{(1)}\cup\dots\cup\mathcal{A}^{(G)}=\{1,\dots,M\}$. Further
assume that the annotators within each group $g\in\{1,\dots,G\}$ share
identical annotation patterns for the observed instances, i.e.,

$\forall n\in\{1,\dots,N\},\forall m,l\in\mathcal{A}^{(g)}:z_{nm}=z_{nl}\wedge\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m})=\Pr(z_{nl}\mid\mathbf{x}_{n},\mathbf{a}_{l}),$ $(\dagger)$

and that the annotators' probability densities are proportional to their
respective groups' cardinalities, i.e.,

$\forall m\in\{1,\dots,M\}:\Pr(\mathbf{a}_{m}\mid\mathbf{A})\propto\sum_{g=1}^{G}\delta(m\in\mathcal{A}^{(g)})\,|\mathcal{A}^{(g)}|.$ $(\star)$

Then, the true weighted log-likelihood function for all $M$ annotators reduces
to the log-likelihood for $G$ annotators:

$\sum_{n=1}^{N}\sum_{m=1}^{M}w(\mathbf{a}_{m})\ln\left(\Pr(z_{nm}\mid\mathbf{x}_{n},\mathbf{a}_{m})\right)\propto\sum_{n=1}^{N}\sum_{g=1}^{G}\ln\left(\Pr(z_{nm_{g}}\mid\mathbf{x}_{n},\mathbf{a}_{m_{g}})\right),$

where $m_{g}\in\mathcal{A}^{(g)}$ represents the index of an arbitrary
annotator of the $g$-th annotator group.
Intuitively, Theorem 1 confirms that each group $\mathcal{A}^{(g)}$,
independent of its cardinality $|\mathcal{A}^{(g)}|$, equally contributes to
the weighted log-likelihood function. This way, we avoid any bias toward a
large group of highly correlated annotators during learning. Typically, the
assumptions $(\dagger)$ and $(\star)$ of Theorem 1 do not hold exactly in
practice because there are no annotator groups with perfectly identical
annotation patterns. Therefore, we estimate the degree of correlation between
annotators by computing similarities between their embeddings
$\mathbf{\widetilde{A}}\coloneqq(\mathbf{\widetilde{a}}_{1},\dots,\mathbf{\widetilde{a}}_{M})^{\mathrm{T}}$
as the basis for a nonparametric annotator probability density estimation:
$\displaystyle\Pr\left(\mathbf{a}_{m}\mid\mathbf{A}\right)\approx\Pr\left(\mathbf{\widetilde{a}}_{m}\mid\mathbf{\widetilde{A}},k_{\gamma}\right)\propto\sum_{l=1}^{M}k_{\gamma}\left(\texttt{no\_grad}\left(\mathbf{\widetilde{a}}_{l}\right),\texttt{no\_grad}\left(\mathbf{\widetilde{a}}_{m}\right)\right),$
(22)
where $k_{\gamma}:\mathbb{R}^{R}\times\mathbb{R}^{R}\rightarrow\mathbb{R}_{\geq 0}$
denotes a kernel function and $\gamma\in\mathbb{R}_{>0}$ its kernel scale. The
expression $\texttt{no\_grad}(\mathbf{\widetilde{a}}_{m})\in\mathbb{R}^{R}$
indicates that no gradient regarding the learned annotator embedding
$\mathbf{\widetilde{a}}_{m}$ is computed, which is necessary to decouple the
learning of embeddings from the computation of annotator weights. Otherwise,
we would learn annotator embeddings that optimize the annotator weights
instead of reflecting the annotation patterns. Although many kernel (or similarity)
functions are conceivable, we will focus on the popular Gaussian kernel:
$\displaystyle k_{\gamma}\left(\texttt{no\_grad}\left(\mathbf{\widetilde{a}}_{m}\right),\texttt{no\_grad}\left(\mathbf{\widetilde{a}}_{l}\right)\right)\propto\exp\left(-\gamma\,\|\texttt{no\_grad}\left(\mathbf{\widetilde{a}}_{m}\right)-\texttt{no\_grad}\left(\mathbf{\widetilde{a}}_{l}\right)\|_{2}^{2}\right)$
(23)
with $\|\cdot\|_{2}$ denoting the Euclidean norm. Typically, the kernel scale
$\gamma$ needs to fit the observed data, i.e., the annotator embeddings in our
case. Defining it a priori is therefore challenging, so we treat $\gamma$ as a
learnable parameter subject to a prior distribution. Concretely, we employ the
gamma distribution for this purpose:
$\displaystyle\Pr\left(\gamma\mid\alpha,\beta\right)\coloneqq\mathrm{Gam}\left(\gamma\mid\alpha,\beta\right)\coloneqq\frac{\beta^{\alpha}}{\Gamma(\alpha)}\gamma^{\alpha-1}\exp\left(-\beta\gamma\right),$
(24)
where $\Gamma$ is the gamma function and
$\alpha\in\mathbb{R}_{>1},\beta\in\mathbb{R}_{>0}$ are hyperparameters. Based
on experiments, we set $\alpha=1.25,\beta=0.25$ such that the mode
$\nicefrac{(\alpha-1)}{\beta}=1$ defines the initial value of $\gamma$ before
optimization, while the variance $\nicefrac{\alpha}{\beta^{2}}=20$ is high, in
favor of flexible learning.
As a weighted loss function, we finally get
$\displaystyle
L_{\mathbf{X},\mathbf{A},\mathbf{Z},\alpha,\beta}(\bm{\theta},\bm{\omega},\gamma)$
$\displaystyle\coloneqq-\frac{1}{|\mathbf{Z}|}\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\left(\hat{w}_{\gamma}(\mathbf{a}_{m})\ln\left(\mathbf{e}^{\mathrm{T}}_{z_{nm}}\mathbf{\hat{P}}^{\mathrm{T}}_{\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\mathbf{\hat{p}}_{\bm{\theta}}(\mathbf{x}_{n})\right)\right)-\ln\left(\mathrm{Gam}\left(\gamma\mid\alpha,\beta\right)\right),$
(25) $\displaystyle|\mathbf{Z}|$
$\displaystyle\coloneqq\sum_{n=1}^{N}\sum_{m=1}^{M}\delta(z_{nm}\in\Omega_{Y}),$
(26)
where $\hat{w}_{\gamma}(\mathbf{a}_{m})$ denotes that the annotator weights
$w(\mathbf{a}_{m})$ are estimated by learning the kernel scale $\gamma$. The
number of annotations $|\mathbf{Z}|$ is a normalization factor, which accounts
for potentially unevenly distributed annotations across mini-batches when
using stochastic GD.
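Putting Eqs. 21 to 25 together, a possible implementation of the annotator weighting and the weighted loss could look as follows. This PyTorch sketch omits mini-batching, drops the constant term of the gamma prior, and uses illustrative names throughout; it is a sketch under these assumptions, not MaDL's actual code.

```python
# A sketch of the annotator weighting (Eqs. 21-23) and the weighted loss (Eq. 25).
import torch

def annotator_weights(A_emb, gamma):
    """A_emb: (M, R) annotator embeddings; detached to decouple the weight
    computation from embedding learning (cf. no_grad in Eq. 22)."""
    A = A_emb.detach()
    sq_dists = torch.cdist(A, A) ** 2
    density = torch.exp(-gamma * sq_dists).sum(dim=1)  # Eq. 22 with Gaussian kernel
    inv_density = density ** -1
    return inv_density / inv_density.mean()            # Eq. 21: weights sum to M

def weighted_loss(p_hat, P_hat, z, m_idx, w, gamma, alpha=1.25, beta=0.25):
    """One annotation per row: p_hat (N, C), P_hat (N, C, C), z (N,) annotations,
    m_idx (N,) annotator indices, w (M,) weights, gamma a learnable scalar tensor."""
    probs = torch.einsum("nck,nc->nk", P_hat, p_hat)   # annotation probabilities
    nll = -(w[m_idx] * torch.log(probs.gather(1, z[:, None]).squeeze(1))).mean()
    log_prior = (alpha - 1) * torch.log(gamma) - beta * gamma  # ln Gam(γ|α,β) up to a constant
    return nll - log_prior                             # Eq. 25
```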
Given the loss function in Eq. 25, we present the complete end-to-end training
algorithm of MaDL in Algorithm 1 and an example in Appendix B. During each
training step, we recompute the annotator weights and use them as the basis
for the weighted loss function to optimize the AP and GT models’ parameters.
After training, the optimized model parameters $(\bm{\theta},\bm{\omega})$ can
be used to make probabilistic predictions, e.g., class-membership
probabilities $\mathbf{\hat{p}}_{\bm{\theta}}(\mathbf{x})$ (cf. Fig. 3) and
annotator confusion matrix
$\mathbf{\hat{P}}_{\bm{\omega}}(\mathbf{x},\mathbf{a})$ (cf. Fig. 3), or to
decide on distinct labels, e.g., class label
$\hat{y}_{\bm{\theta}}(\mathbf{x})$ (cf. Eq. 8) and annotation error
$\hat{y}_{\bm{\theta},\bm{\omega}}(\mathbf{x},\mathbf{a})$ (cf. Eq. 9).
input: instances $\mathbf{X}$, annotators $\mathbf{A}$, annotations $\mathbf{Z}$, number of training epochs $E$, mini-batch size $B$, initial model parameters $(\bm{\theta},\bm{\omega})$, prior annotation accuracy $\eta$, gamma distribution parameters $(\alpha,\beta)$;
start: initialize biases $\mathbf{B}$ of the AP model's output layer using $\eta$ (cf. Eq. 19);
start: initialize kernel scale $\gamma\coloneqq\nicefrac{(\alpha-1)}{\beta}$;
for epoch $e\in\{1,\dots,E\}$ do
  for sampled mini-batch $\mathbf{\overline{X}}\coloneqq(\mathbf{x}_{i_{1}},\dots,\mathbf{x}_{i_{B}})^{\mathrm{T}},\mathbf{\overline{Z}}\coloneqq(\mathbf{z}_{i_{1}},\dots,\mathbf{z}_{i_{B}})^{\mathrm{T}}$ with $\{i_{1},\dots,i_{B}\}\subset\{1,\dots,N\}$ do
    for $b\in\{i_{1},\dots,i_{B}\}$ do
      compute class-membership probabilities $\mathbf{\hat{p}}_{\bm{\theta}}(\mathbf{x}_{b})$ (cf. Fig. 3);
      for $m\in\{1,\dots,M\}$ do compute confusion matrix $\mathbf{\hat{P}}_{\bm{\omega}}(\mathbf{x}_{b},\mathbf{a}_{m})$ (cf. Fig. 3); end
    end
    for $(m,l)\in\{1,\dots,M\}^{2}$ do compute similarity $k_{\gamma}(\texttt{no\_grad}(\mathbf{\widetilde{a}}_{m}),\texttt{no\_grad}(\mathbf{\widetilde{a}}_{l}))$ (cf. Eq. 23); end
    for $m\in\{1,\dots,M\}$ do compute annotator weight $w(\mathbf{a}_{m})\approx\hat{w}_{\gamma}(\mathbf{a}_{m})$ (cf. Eq. 21 and Eq. 22); end
    optimize parameters $\bm{\theta},\bm{\omega},\gamma$ with respect to $L_{\mathbf{\overline{X}},\mathbf{A},\mathbf{\overline{Z}},\alpha,\beta}(\bm{\theta},\bm{\omega},\gamma)$ (cf. Eq. 25);
  end
end
output: optimized model parameters $(\bm{\theta},\bm{\omega})$
Algorithm 1: End-to-end training algorithm of MaDL.
## 5 Experimental Evaluation
This section investigates three RQs regarding the properties P1–P6 (cf.
Section 3) of multi-annotator supervised learning. We divide the analysis of
each RQ into four parts, which are (1) a takeaway summarizing the key
insights, (2) a setup describing the experiments, (3) a qualitative study, and
(4) a quantitative study. The qualitative studies intuitively explain our
design choices about MaDL, while the quantitative studies compare MaDL’s
performance to related techniques. Note that we analyze each RQ in the context
of a concrete evaluation scenario. Accordingly, the results provide potential
indications for an extension to related scenarios. As this section’s starting
point, we overview the general experimental setup, whose code base is publicly
available at https://www.github.com/ies-research/multi-annotator-deep-learning.
### 5.1 Experimental Setup
We base our experimental setup on the problem setting in Section 2.
Accordingly, the goal is to evaluate the predictions of GT and AP models
trained via multi-annotator supervised learning techniques. For this purpose,
we perform experiments on several datasets with class labels provided by
error-prone annotators, with models of varying hyperparameters, and in
combination with a collection of different evaluation scores.
Datasets: We conduct experiments for the tabular and image datasets listed by
Table 2. labelme and music are actual crowdsourcing datasets, while we
simulate annotators for the other five datasets. For the labelme dataset,
Rodrigues & Pereira (2018) performed a crowdsourcing study to annotate a
subset of 1000 out of 2688 instances of eight different classes as training
data. This dataset consists of images, but due to its small training set size,
we follow the idea of Rodrigues & Pereira and transform it into a tabular
dataset by utilizing the features of a pretrained VGG-16 (Simonyan &
Zisserman, 2015) as inputs. There are class labels obtained from 59 different
annotators, and on average, about 2.5 class labels are assigned to an
instance. music is another crowdsourcing dataset, where 700 of 1000 audio
files are classified into ten music genres by 44 annotators, and on average,
about 2.9 class labels are assigned to a file. We use the features extracted
by Rodrigues et al. (2013) from the audio files for training and inference.
The artificial toy dataset with two classes and two features serves to visualize
our design choices about MaDL. We generate this dataset via a Gaussian mixture
model. Frey & Slate (1991) published the letter dataset to recognize a pixel
display, represented through statistical moments and edge counts, as one of
the 26 capital letters of the Modern English alphabet. The datasets
fmnist, cifar10, and svhn represent typical image benchmark classification
tasks, each with ten classes but different object types to recognize. Appendix
F presents a separate case study on cifar100 to investigate the outcomes on
datasets with more classes.
Table 2: Overview of datasets and associated base network architectures.

Dataset | Annotators | Instances | Classes | Features | Base Network Architecture
---|---|---|---|---|---
Tabular Datasets
toy | simulated | 500 | 2 | 2 | MLP (Rodrigues & Pereira, 2018)
letter (Frey & Slate, 1991) | simulated | 20000 | 26 | 16 | MLP (Rodrigues & Pereira, 2018)
labelme (Rodrigues & Pereira, 2018) | real-world | 2688 | 8 | 8192 | MLP (Rodrigues & Pereira, 2018)
music (Rodrigues et al., 2013) | real-world | 1000 | 10 | 124 | MLP (Rodrigues & Pereira, 2018)
Image Datasets
fmnist (Xiao et al., 2017) | simulated | 70000 | 10 | 1 $\times$ 28 $\times$ 28 | LeNet-5 (LeCun & Cortes, 1998)
cifar10 (Krizhevsky, 2009) | simulated | 60000 | 10 | 3 $\times$ 32 $\times$ 32 | ResNet-18 (He et al., 2016)
svhn (Netzer et al., 2011) | simulated | 99289 | 10 | 3 $\times$ 32 $\times$ 32 | ResNet-18 (He et al., 2016)
Network Architectures: Table 2 lists the base network architectures selected
to meet the datasets’ requirements. These architectures are starting points
for designing the GT and AP models, which we adjust according to the
respective multi-annotator supervised learning technique. For the tabular
datasets, we follow Rodrigues & Pereira (2018) and train a multilayer
perceptron (MLP) with a single fully connected layer of 128 neurons as a
hidden layer. A modified LeNet-5 architecture (LeCun & Cortes, 1998), a simple
convolutional neural network, serves as the basis for fmnist as a gray-scale
image dataset, while we employ a ResNet-18 (He et al., 2016) for cifar10 and
svhn as RGB image datasets. We refer to our code base for remaining details,
e.g., on the use of rectified linear units (ReLU, Glorot et al. 2011) as
activation functions.
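As an illustration, a minimal PyTorch version of the MLP base architecture for the tabular datasets (a single fully connected hidden layer with 128 neurons and ReLU activations) could read as follows; the helper name is an assumption of ours.

```python
# A minimal sketch of the MLP base architecture for the tabular datasets.
import torch.nn as nn

def make_mlp(n_features, n_classes, n_hidden=128):
    return nn.Sequential(
        nn.Linear(n_features, n_hidden),
        nn.ReLU(),                        # ReLU activations (Glorot et al., 2011)
        nn.Linear(n_hidden, n_classes),   # class logits
    )
```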
Annotator simulation: For the datasets without real-world annotators, we adopt
simulation strategies from related work (Yan et al., 2014; Cao et al., 2019;
Rühling Cachay et al., 2021; Wei et al., 2022) and simulate annotators
according to the following five types:
Adversarial: These annotators provide false class labels on purpose. In our
case, such an annotator provides a correct class label with a probability of
0.05.
Randomly guessing: These annotators provide class labels drawn from a uniform
categorical distribution. As a result, such an annotator provides a correct
class label with a probability of $\nicefrac{1}{C}$.
Cluster-specialized: These annotators' performances considerably vary across
the clusters found by the $k$-means clustering algorithm. For images, we
cluster the latent representations of the ResNet-18 pretrained on ImageNet
(Russakovsky et al., 2015). In total, there are $k=10$ clusters. For each
annotator, we randomly define five weak and five expert clusters. An annotator
provides a correct class label with a probability of 0.95 for an expert
cluster and with a probability of 0.05 for a weak cluster.
Common: These annotators are simulated based on the identical clustering
employed for the cluster-specialized annotators. However, their APs vary less
between the clusters. Concretely, we randomly draw a correctness probability
value in the range $[\nicefrac{1}{C},1]$ for each cluster-annotator pair.
Class-specialized: These annotators' performances considerably vary across the
classes to which instances can belong. For each annotator, we randomly define
$\lfloor\nicefrac{C}{2}\rfloor$ weak and $\lceil\nicefrac{C}{2}\rceil$ expert
classes. An annotator provides a correct class label with a probability of
0.95 for an expert class and with a probability of 0.05 for a weak class.
We simulate annotation mistakes by randomly selecting false class labels.
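To illustrate, the following numpy sketch simulates a class-specialized annotator with the probabilities stated above; the function name and the random-number handling are assumptions for illustration.

```python
# A sketch simulating a class-specialized annotator (correctness 0.95/0.05).
import numpy as np

def simulate_class_specialized(y, n_classes, rng):
    """y: (N,) GT class labels; returns one simulated annotation per instance."""
    expert = set(rng.permutation(n_classes)[: int(np.ceil(n_classes / 2))])
    z = y.copy()
    for n, y_n in enumerate(y):
        p_correct = 0.95 if y_n in expert else 0.05
        if rng.random() > p_correct:  # mistake: randomly select a false class label
            z[n] = rng.choice([c for c in range(n_classes) if c != y_n])
    return z

z = simulate_class_specialized(np.array([0, 1, 2, 1]), 3, np.random.default_rng(0))
```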
Table 3 lists four annotator sets (blueish rows) with varying numbers of
annotators per annotator type (first five columns) and annotation ratios (last
column). Each annotator set is associated with a concrete RQ. A copy flag
indicates that the annotators in the respective types provide identical
annotations. This way, we follow Wei et al. (2022), Cao et al. (2019), and
Rühling Cachay et al. (2021) to simulate strong correlations between
annotators. For example, the entry "1 + 11 copies" of the annotator set
correlated indicates twelve cluster-specialized annotators, of which one
annotator is independent, while the remaining eleven annotators share
identical annotation patterns, i.e., they are copies of each other. The
simulated annotator correlations are not directly observable because the
copied annotators likely annotate different instances. This is because of the
annotation ratios, e.g., a ratio of 0.2 indicates that each annotator provides
annotations for only 20% of randomly chosen
instances. The annotation ratios are well below 1.0 because, in practice
(especially in crowdsourcing applications), it is unrealistic for every
annotator to annotate every instance. We refer to Appendix E for the
results of a case study with higher annotation ratios for cifar10.
Table 3: Simulated annotator sets for each RQ.

Annotator Set | Adversarial | Common | Cluster-specialized | Class-specialized | Random | Annotation Ratio
---|---|---|---|---|---|---
independent (RQ1) | 1 | 6 | 2 | 1 | 0 | 0.2
correlated (RQ2) | 11 copies | 6 | 1 + 11 copies | 11 copies | 0 | 0.2
random-correlated (RQ2) | 1 | 6 | 2 | 1 | 90 copies | 0.2
inductive (RQ3) | 10 | 60 | 20 | 10 | 0 | 0.02
Evaluation scores: Since we are interested in quantitatively assessing GT and
AP predictions, we need corresponding evaluation scores. In this context, we
interpret the prediction of APs as a binary classification problem with the AP
model predicting whether an annotator provides the correct or a false class
label for an instance. Next to categorical predictions, the GT and AP models
typically provide probabilistic outputs, which we examine regarding their
quality (Huseljic et al., 2021). We list our evaluation scores in the
following, where arrows indicate which scores need to be maximized
($\uparrow$) or minimized ($\downarrow$):
Accuracy (ACC, $\uparrow$) is probably the most popular score for assessing
classification performances. For the GT estimates, it describes the fraction
of correctly classified instances, whereas it is the fraction of (potential)
annotations correctly identified as false or correct for the AP estimates:
$\displaystyle\text{GT-
ACC}(\mathbf{X},\mathbf{y},\hat{y}_{\bm{\theta}})\coloneqq\frac{1}{N}\sum_{n=1}^{N}\delta\left(y_{n}=\hat{y}_{\bm{\theta}}(\mathbf{x}_{n})\right),$
(27) $\displaystyle\text{AP-
ACC}(\mathbf{X},\mathbf{y},\mathbf{Z},\hat{y}_{\bm{\theta},\bm{\omega}})\coloneqq\frac{1}{|\mathbf{Z}|}\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\delta\left(\delta\left(y_{n}\neq
z_{nm}\right)=\hat{y}_{\bm{\theta},\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\right).$
(28)
Maximizing both scores corresponds to the Bayes optimal predictions in Eq. 3
and Eq. 5.
Balanced accuracy (BAL-ACC, $\uparrow$) is a variant of ACC designed for imbalanced
classification problems (Brodersen et al., 2010). For the GT estimation, the
idea is to compute the ACC score for each class of instances separately and
then average them. Since our datasets are fairly balanced in their
distributions of class labels, we use this evaluation score only for assessing
AP estimates. We may encounter highly imbalanced binary classification
problems per annotator, where a class represents either a false or correct
annotation. For example, an adversarial annotator provides mostly false
annotations. Therefore, we extend the definition of BAL-ACC by computing the
ACC scores for each annotator-class pair separately and averaging them.
Negative log-likelihood (NLL, $\downarrow$) is not only used as a typical loss function for training
(D)NNs but can also be used to assess the quality of probabilistic estimates:
$\displaystyle\text{GT-
NLL}(\mathbf{X},\mathbf{y},\mathbf{\hat{p}}_{\bm{\theta}})\coloneqq-\frac{1}{N}\sum_{n=1}^{N}\ln\left(\hat{p}^{(y_{n})}_{\bm{\theta}}(\mathbf{x}_{n})\right),$
(29) $\displaystyle\text{AP-
NLL}(\mathbf{X},\mathbf{y},\mathbf{Z},\hat{p}_{\bm{\theta},\bm{\omega}})\coloneqq$
$\displaystyle-\frac{1}{|\mathbf{Z}|}\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\Big{(}\delta\left(y_{n}=z_{nm}\right)\ln\left(\hat{p}_{\bm{\theta},\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\right)+\delta\left(y_{n}\neq
z_{nm}\right)\ln\left(1-\hat{p}_{\bm{\theta},\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\right)\Big{)}.$
(30)
Moreover, NLL is a proper scoring rule (Ovadia et al., 2019) such that the
best score corresponds to a perfect prediction.
Brier score (BS, $\downarrow$), proposed by Brier (1950), is another proper scoring rule,
which measures the squared error between predicted probability vectors and
one-hot encoded target vectors:
$\displaystyle\text{GT-
BS}(\mathbf{X},\mathbf{y},\mathbf{\hat{p}}_{\bm{\theta}})\coloneqq\frac{1}{N}\sum_{n=1}^{N}||\mathbf{e}_{y_{n}}-\mathbf{\hat{p}}_{\bm{\theta}}(\mathbf{x}_{n})||_{2}^{2},$
(31) $\displaystyle\text{AP-
BS}(\mathbf{X},\mathbf{y},\mathbf{Z},\hat{p}_{\bm{\theta},\bm{\omega}})\coloneqq\frac{1}{|\mathbf{Z}|}\sum_{n=1}^{N}\sum_{m\in\mathcal{A}_{n}}\left(\delta\left(y_{n}=z_{nm}\right)-\hat{p}_{\bm{\theta},\bm{\omega}}(\mathbf{x}_{n},\mathbf{a}_{m})\right)^{2}.$
(32)
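For clarity, the GT variants of these scores can be computed as in the following numpy sketch (names are illustrative); the AP variants follow analogously per instance-annotator pair.

```python
# A sketch of the GT evaluation scores in Eqs. 27, 29, and 31.
import numpy as np

def gt_scores(p_hat, y):
    """p_hat: (N, C) predicted class-membership probabilities, y: (N,) GT labels."""
    n, c = p_hat.shape
    acc = np.mean(p_hat.argmax(axis=1) == y)                    # GT-ACC (Eq. 27)
    nll = -np.mean(np.log(p_hat[np.arange(n), y]))              # GT-NLL (Eq. 29)
    bs = np.mean(np.sum((np.eye(c)[y] - p_hat) ** 2, axis=1))   # GT-BS (Eq. 31)
    return acc, nll, bs
```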
In the literature, there exist many further evaluation scores, particularly
for assessing probability calibration (Ovadia et al., 2019). As a
comprehensive evaluation of probabilities is beyond this article’s scope, we
focus on the aforementioned proper scoring rules. Accordingly, we have omitted
other evaluation scores, such as the expected calibration error (Naeini et
al., 2015), which is not a proper scoring rule.
Multi-annotator supervised learning techniques: By default, we train MaDL via
the weighted loss function in Eq. 25 using the hyperparameter values from
Section 4 and the most general architecture depicted by Fig. 3. Next to the
ablations as part of analyzing the three RQs, we present an ablation study on
the hyperparameters of MaDL in Appendix C and a practitioner’s guide with
concrete recommendations in Appendix G. We evaluate MaDL compared to a subset
of the related techniques presented in Section 3. This subset consists of
techniques that (1) provide probabilistic GT estimates for each instance, (2)
provide probabilistic AP estimates for each instance-annotator pair, and (3)
train a (D)NN as the GT model. Moreover, we focus on recent techniques with
varying training algorithms and properties P1–P6 (cf. Section 3). As a result,
we select crowd layer (CL, Rodrigues & Pereira, 2018), regularized estimation
of annotator confusion (REAC, Tanno et al., 2019), learning from imperfect
annotators (LIA, Platanios et al., 2020), common noise adaption layers (CoNAL,
Chu et al., 2021), and union net (UNION, Wei et al., 2022). Further, we
aggregate annotations through the majority rule as a lower baseline (LB) and
use the GT class labels as an upper baseline (UB). We adopt the architectures
of MaDL’s GT and AP models for both baselines. The GT model then trains via
the aggregated annotations (LB) or the GT class labels (UB). The AP model
trains using the aggregated annotations (LB) or the GT class labels (UB) to
optimize the annotator confusion matrices. Unless explicitly stated otherwise, no multi-
annotator supervised learning technique can access annotator features
containing prior knowledge about the annotators.
Experiment: An experiment’s run starts by splitting a dataset into train,
validation, and test sets. For music and labelme, these splits are predefined,
while for the other datasets, we randomly select 75% of the samples for
training, 5% for validation, and 20% for testing.
Following Rühling Cachay et al. (2021), a small validation set with GT class
labels enables a fair comparison by allowing suitable hyperparameter values to
be selected for the optimizer of each multi-annotator supervised learning technique.
We employ the AdamW (Loshchilov & Hutter, 2019) optimizer, where the learning
rates $\{0.01,0.005,0.001\}$ and weight decays $\{0.0,0.001,0.0001\}$ are
tested. We decay learning rates via a cosine annealing schedule (Loshchilov &
Hutter, 2017) and set the optimizer’s mini-batch size to 64. For the datasets
music and labelme, we additionally perform experiments with 8 and 16 as mini-
batch sizes due to their smaller number of instances and, thus, higher
sensitivity to the mini-batch size. The number of training epochs is set to
100 for all techniques except for LIA, which we train for 200 epochs due to
its EM algorithm. After training, we select the models with the best
validation GT-ACC across the epochs. Each experiment is run five times with
different parameter initializations and data splits (except for labelme and
music). We report quantitative results as means and standard deviations over
the best five runs determined via the validation GT-ACC.
### 5.2 RQ1: Do class- and instance-dependent modeled APs improve learning?
(Properties P1, P2)
Takeaway: Estimating class- (property P1) and instance-dependent (property P2)
APs leads to superior performances of the GT and AP models. This observation
is especially true for GT models trained on datasets with real-world
annotators whose annotation patterns are unknown.
Setup: We address RQ1 by evaluating multi-annotator supervised learning
techniques with varying AP assumptions. We simulate ten annotators for the
datasets without real-world annotators according to the annotator set
independent in Table 3. Each simulated annotator provides class labels for
20% of randomly selected training instances.
Next to the related multi-annotator supervised learning techniques and the two
baselines, we evaluate six variants of MaDL denoted via the scheme MaDL(P1,
P2). Property P1 refers to the estimation of potential class-dependent APs.
There, we differentiate between the options class-independent (I), partially
(P) class-dependent, and fully (F) class-dependent APs. We implement them by
constraining the annotator confusion matrices’ degrees of freedom. Concretely,
class-independent refers to a confusion matrix approximated by estimating a
single scalar, partially class-dependent refers to a confusion matrix
approximated by estimating its diagonal elements, and fully class-dependent
refers to estimating each matrix element individually (cf. Appendix G).
Property P2 indicates whether the APs are estimated as a function of instances
(X) or not ($\overline{\text{X}}$). Each combination of options for
properties P1 and P2 defines one variant. For example, MaDL(X, F) is the
default MaDL variant estimating instance- and fully class-dependent APs.
Qualitative study: Fig. 6 visualizes MaDL’s predictive behavior for the
artificial dataset toy. Thereby, each row represents the predictions of a
different MaDL variant. Since this is a binary classification problem, the
variant MaDL(X, P) is identical to MaDL(X, F), and MaDL($\overline{\text{X}}$,
P) is identical to MaDL($\overline{\text{X}}$, F). The first column visualizes
instances as circles colored according to their GT labels, plots the class-
membership probabilities predicted by the respective GT model as contours
across the feature space, and depicts the decision boundary for classification
as a black line. The last four columns show the class labels provided by four
of the ten simulated annotators. The instances’ colors indicate the class
labels provided by an annotator, their forms mark whether the class labels are
correct (circle) or false (cross) annotations, and the contours across the
feature space visualize the AP model’s predicted annotation correctness
probabilities. The GT models of the variants MaDL($\overline{\text{X}}$, F),
MaDL(X, I), and MaDL(X, F) successfully separate the instances of both
classes, whereas the GT model of MaDL($\overline{\text{X}}$, I) fails in this
task. Likely, the missing consideration of instance- and class-dependent APs
explains this observation. Further, the class-membership probabilities of the
successful MaDL variants reflect instances’ actual class labels but exhibit
the overconfident behavior typical of deterministic (D)NNs, particularly for
feature space regions without observed instances (Huseljic et al., 2021).
Investigating the estimated APs for the adversarial annotator (second column),
we see that each MaDL variant correctly predicts low APs (indicated by the
white-colored contours) across the feature space. When comparing the AP
estimates for the class-specialized annotator (fifth column), clear
differences between MaDL($\overline{\text{X}}$, I) and the other three
variants of MaDL are visible. Since MaDL($\overline{\text{X}}$, I) ignores any
class dependency regarding APs, it cannot differentiate between classes of
high and low APs. In contrast, the AP predictions of the other three variants
reflect the class structure learned by the respective GT model and thus can
separate between weak and expert classes. The performances of the cluster-
specialized and common annotators depend on the regions of the feature space.
Therefore, only the variants MaDL(X, I) and MaDL(X, F) can separate clusters
of low and high APs. For example, both variants successfully identify the two
weak clusters of the cluster-specialized annotator. Analogous to the class-
membership probabilities, the AP estimates are overconfident for feature space
regions without observed instances.
Figure 6: Visualization of MaDL's predictive behavior for the two-dimensional
dataset toy. Rows show the variants MaDL($\overline{\text{X}}$, I),
MaDL($\overline{\text{X}}$, F), MaDL(X, I), and MaDL(X, F); columns show the
classification task and the adversarial, cluster-specialized, common, and
class-specialized annotators.
Quantitative study: Table 4 presents the numerical evaluation results for the
two datasets with real-world annotators. There, we only report the GT models’
test results since no annotations for the test instances are available to
assess the AP models’ test performances. Table 5 presents the GT and AP
models’ test results for the four datasets with simulated annotators. Both
tables indicate whether a technique models class-dependent (property P1)
and/or instance-dependent (property P2) APs. Generally, training with GT
labels as UB achieves the best performances, while the LB with annotations
aggregated according to the majority rule leads to the worst ones. The latter
observation confirms that leveraging AP estimates during training is
beneficial. Moreover, these AP estimates are typically meaningful,
corresponding to BAL-ACC values above 0.5. An exception is
MaDL($\overline{\text{X}}$, I) because, by design, this variant estimates only
a constant performance per annotator across the feature space. Comparing MaDL(X,
F) as the most general variant to related techniques, we observe that it
achieves competitive or superior results for all datasets and evaluation
scores. Next to MaDL(X, F), CoNAL often delivers better results than the
competitors. When we investigate the performances of the MaDL variants with
instance-independent APs, we find that MaDL($\overline{\text{X}}$, F) achieves
the most robust performances across all datasets. In particular, for the
datasets with real-world annotators, the ACC of the respective GT model is
superior. This observation suggests that modeling class-dependent APs
(property P1) is beneficial. We recognize a similar trend for the MaDL
variants with instance-dependent APs (property P2). Comparing each pair of
MaDL variants with X and $\overline{\text{X}}$, we observe that instance-
dependent APs often improve GT and, in particular, AP estimates. The advantage
of class- and instance-dependent APs is confirmed by CoNAL as a strong
competitor of MaDL(X, F). LIA’s inferior performance contrasts this, although
LIA estimates class- and instance-dependent APs. The difference in training
algorithms can likely explain this observation. While MaDL(X, F) and CoNAL
train via an end-to-end algorithm, LIA trains via the EM algorithm, leading to
higher runtimes and introducing additional sensitive hyperparameters, e.g.,
the number of EM iterations and training epochs per M step.
Table 4: Results regarding RQ1 for datasets with real-world annotators. Best and second best performances are highlighted per dataset and evaluation score in the original, excluding the performances of the UB. The first three score columns refer to the GT model on music, the last three to the GT model on labelme.

Technique | P1 | P2 | music: ACC $\uparrow$ | music: NLL $\downarrow$ | music: BS $\downarrow$ | labelme: ACC $\uparrow$ | labelme: NLL $\downarrow$ | labelme: BS $\downarrow$
---|---|---|---|---|---|---|---|---
UB | ✓ | ✓ | 0.785$\pm$0.020 | 0.710$\pm$0.037 | 0.314$\pm$0.027 | 0.914$\pm$0.003 | 0.580$\pm$0.112 | 0.150$\pm$0.003
LB | ✓ | ✓ | 0.646$\pm$0.045 | 1.096$\pm$0.103 | 0.492$\pm$0.051 | 0.810$\pm$0.015 | 0.724$\pm$0.155 | 0.294$\pm$0.024
CL | ✓ | ✗ | 0.675$\pm$0.015 | 1.672$\pm$0.400 | 0.524$\pm$0.021 | 0.857$\pm$0.011 | 1.774$\pm$1.155 | 0.250$\pm$0.014
REAC | ✓ | ✗ | 0.705$\pm$0.023 | 0.893$\pm$0.081 | 0.410$\pm$0.033 | 0.843$\pm$0.006 | 0.833$\pm$0.088 | 0.254$\pm$0.006
UNION | ✓ | ✗ | 0.682$\pm$0.013 | 1.396$\pm$0.143 | 0.501$\pm$0.027 | 0.855$\pm$0.004 | 1.074$\pm$0.340 | 0.248$\pm$0.011
LIA | ✓ | ✓ | 0.658$\pm$0.023 | 1.158$\pm$0.047 | 0.498$\pm$0.020 | 0.813$\pm$0.010 | 0.976$\pm$0.234 | 0.295$\pm$0.009
CoNAL | ✓ | ✓ | 0.708$\pm$0.031 | 0.964$\pm$0.081 | 0.423$\pm$0.035 | 0.866$\pm$0.004 | 2.740$\pm$1.304 | 0.247$\pm$0.023
MaDL($\overline{\text{X}}$, I) | ✗ | ✗ | 0.718$\pm$0.010 | 0.871$\pm$0.027 | 0.394$\pm$0.009 | 0.815$\pm$0.009 | 0.616$\pm$0.125 | 0.276$\pm$0.017
MaDL($\overline{\text{X}}$, P) | ✦ | ✗ | 0.720$\pm$0.018 | 0.871$\pm$0.030 | 0.396$\pm$0.009 | 0.811$\pm$0.012 | 0.630$\pm$0.128 | 0.281$\pm$0.022
MaDL($\overline{\text{X}}$, F) | ✓ | ✗ | 0.725$\pm$0.015 | 0.977$\pm$0.064 | 0.403$\pm$0.019 | 0.859$\pm$0.007 | 1.008$\pm$0.278 | 0.240$\pm$0.014
MaDL(X, I) | ✗ | ✓ | 0.713$\pm$0.027 | 0.876$\pm$0.041 | 0.402$\pm$0.022 | 0.816$\pm$0.008 | 0.559$\pm$0.027 | 0.276$\pm$0.010
MaDL(X, P) | ✦ | ✓ | 0.714$\pm$0.014 | 0.909$\pm$0.036 | 0.398$\pm$0.013 | 0.811$\pm$0.009 | 0.771$\pm$0.160 | 0.289$\pm$0.016
MaDL(X, F) | ✓ | ✓ | 0.743$\pm$0.018 | 0.877$\pm$0.030 | 0.381$\pm$0.012 | 0.867$\pm$0.004 | 0.623$\pm$0.124 | 0.214$\pm$0.008
Table 5: Results regarding RQ1 for datasets with simulated annotators. Best and second best performances are highlighted per dataset and evaluation score in the original, excluding the performances of the UB. The first three score columns refer to the GT model, the last four to the AP model.

Technique | P1 | P2 | GT: ACC $\uparrow$ | GT: NLL $\downarrow$ | GT: BS $\downarrow$ | AP: ACC $\uparrow$ | AP: NLL $\downarrow$ | AP: BS $\downarrow$ | AP: BAL-ACC $\uparrow$
---|---|---|---|---|---|---|---|---|---
letter (independent)
UB | ✓ | ✓ | 0.961$\pm$0.003 | 0.130$\pm$0.006 | 0.059$\pm$0.004 | 0.770$\pm$0.001 | 0.488$\pm$0.003 | 0.315$\pm$0.002 | 0.709$\pm$0.001
LB | ✓ | ✓ | 0.878$\pm$0.004 | 0.980$\pm$0.021 | 0.385$\pm$0.008 | 0.664$\pm$0.004 | 0.624$\pm$0.003 | 0.433$\pm$0.003 | 0.666$\pm$0.004
CL | ✓ | ✗ | 0.886$\pm$0.013 | 1.062$\pm$0.145 | 0.181$\pm$0.020 | 0.663$\pm$0.006 | 0.625$\pm$0.013 | 0.430$\pm$0.010 | 0.601$\pm$0.002
REAC | ✓ | ✗ | 0.936$\pm$0.005 | 0.238$\pm$0.018 | 0.097$\pm$0.007 | 0.685$\pm$0.002 | 0.560$\pm$0.001 | 0.385$\pm$0.001 | 0.604$\pm$0.002
UNION | ✓ | ✗ | 0.905$\pm$0.016 | 0.906$\pm$0.435 | 0.151$\pm$0.030 | 0.670$\pm$0.004 | 0.589$\pm$0.008 | 0.408$\pm$0.006 | 0.605$\pm$0.002
LIA | ✓ | ✓ | 0.897$\pm$0.005 | 0.778$\pm$0.052 | 0.305$\pm$0.021 | 0.669$\pm$0.004 | 0.654$\pm$0.010 | 0.447$\pm$0.004 | 0.616$\pm$0.003
CoNAL | ✓ | ✓ | 0.907$\pm$0.016 | 0.813$\pm$0.354 | 0.143$\pm$0.027 | 0.723$\pm$0.018 | 0.555$\pm$0.024 | 0.372$\pm$0.020 | 0.663$\pm$0.017
MaDL($\overline{\text{X}}$, I) | ✗ | ✗ | 0.934$\pm$0.003 | 0.269$\pm$0.035 | 0.100$\pm$0.004 | 0.607$\pm$0.001 | 0.627$\pm$0.000 | 0.444$\pm$0.000 | 0.500$\pm$0.000
MaDL($\overline{\text{X}}$, P) | ✦ | ✗ | 0.935$\pm$0.005 | 0.235$\pm$0.013 | 0.099$\pm$0.006 | 0.692$\pm$0.001 | 0.556$\pm$0.001 | 0.381$\pm$0.001 | 0.606$\pm$0.003
MaDL($\overline{\text{X}}$, F) | ✓ | ✗ | 0.933$\pm$0.005 | 0.255$\pm$0.025 | 0.100$\pm$0.005 | 0.691$\pm$0.002 | 0.556$\pm$0.001 | 0.381$\pm$0.001 | 0.606$\pm$0.002
MaDL(X, I) | ✗ | ✓ | 0.938$\pm$0.006 | 0.247$\pm$0.043 | 0.092$\pm$0.008 | 0.770$\pm$0.004 | 0.492$\pm$0.016 | 0.316$\pm$0.007 | 0.708$\pm$0.004
MaDL(X, P) | ✦ | ✓ | 0.940$\pm$0.004 | 0.242$\pm$0.045 | 0.090$\pm$0.004 | 0.770$\pm$0.006 | 0.496$\pm$0.020 | 0.316$\pm$0.009 | 0.708$\pm$0.005
MaDL(X, F) | ✓ | ✓ | 0.935$\pm$0.006 | 0.303$\pm$0.092 | 0.098$\pm$0.009 | 0.766$\pm$0.004 | 0.491$\pm$0.006 | 0.317$\pm$0.004 | 0.702$\pm$0.005
fmnist (independent)
UB | ✓ | ✓ | 0.909$\pm$0.002 | 0.246$\pm$0.005 | 0.131$\pm$0.003 | 0.756$\pm$0.001 | 0.485$\pm$0.001 | 0.321$\pm$0.001 | 0.704$\pm$0.001
LB | ✓ | ✓ | 0.883$\pm$0.001 | 0.903$\pm$0.003 | 0.385$\pm$0.001 | 0.644$\pm$0.007 | 0.645$\pm$0.005 | 0.453$\pm$0.004 | 0.585$\pm$0.007
CL | ✓ | ✗ | 0.892$\pm$0.002 | 0.312$\pm$0.008 | 0.158$\pm$0.004 | 0.674$\pm$0.002 | 0.580$\pm$0.001 | 0.402$\pm$0.001 | 0.623$\pm$0.001
REAC | ✓ | ✗ | 0.894$\pm$0.003 | 0.309$\pm$0.011 | 0.155$\pm$0.004 | 0.703$\pm$0.001 | 0.535$\pm$0.001 | 0.364$\pm$0.000 | 0.641$\pm$0.001
UNION | ✓ | ✗ | 0.893$\pm$0.002 | 0.305$\pm$0.006 | 0.155$\pm$0.003 | 0.674$\pm$0.002 | 0.570$\pm$0.002 | 0.395$\pm$0.002 | 0.622$\pm$0.001
LIA | ✓ | ✓ | 0.858$\pm$0.002 | 1.017$\pm$0.016 | 0.442$\pm$0.008 | 0.665$\pm$0.024 | 0.628$\pm$0.017 | 0.437$\pm$0.016 | 0.613$\pm$0.027
CoNAL | ✓ | ✓ | 0.894$\pm$0.004 | 0.304$\pm$0.009 | 0.155$\pm$0.004 | 0.725$\pm$0.016 | 0.521$\pm$0.018 | 0.351$\pm$0.016 | 0.679$\pm$0.018
MaDL($\overline{\text{X}}$, I) | ✗ | ✗ | 0.896$\pm$0.003 | 0.340$\pm$0.006 | 0.161$\pm$0.004 | 0.590$\pm$0.000 | 0.638$\pm$0.000 | 0.453$\pm$0.000 | 0.500$\pm$0.000
MaDL($\overline{\text{X}}$, P) | ✦ | ✗ | 0.894$\pm$0.001 | 0.307$\pm$0.003 | 0.155$\pm$0.001 | 0.705$\pm$0.001 | 0.534$\pm$0.000 | 0.363$\pm$0.000 | 0.640$\pm$0.001
MaDL($\overline{\text{X}}$, F) | ✓ | ✗ | 0.894$\pm$0.002 | 0.307$\pm$0.006 | 0.155$\pm$0.003 | 0.705$\pm$0.000 | 0.534$\pm$0.000 | 0.363$\pm$0.000 | 0.640$\pm$0.000
MaDL(X, I) | ✗ | ✓ | 0.895$\pm$0.003 | 0.291$\pm$0.005 | 0.150$\pm$0.003 | 0.752$\pm$0.004 | 0.490$\pm$0.004 | 0.325$\pm$0.003 | 0.699$\pm$0.004
MaDL(X, P) | ✦ | ✓ | 0.899$\pm$0.003 | 0.286$\pm$0.006 | 0.147$\pm$0.003 | 0.751$\pm$0.003 | 0.489$\pm$0.004 | 0.324$\pm$0.003 | 0.698$\pm$0.005
MaDL(X, F) | ✓ | ✓ | 0.896$\pm$0.002 | 0.288$\pm$0.006 | 0.148$\pm$0.003 | 0.750$\pm$0.005 | 0.491$\pm$0.005 | 0.326$\pm$0.005 | 0.697$\pm$0.006
cifar10 (independent)
UB | ✓ | ✓ | 0.933$\pm$0.002 | 0.519$\pm$0.026 | 0.118$\pm$0.004 | 0.710$\pm$0.001 | 0.547$\pm$0.001 | 0.369$\pm$0.001 | 0.658$\pm$0.001
LB | ✓ | ✓ | 0.789$\pm$0.004 | 1.081$\pm$0.031 | 0.460$\pm$0.015 | 0.575$\pm$0.021 | 0.673$\pm$0.006 | 0.481$\pm$0.006 | 0.547$\pm$0.011
CL | ✓ | ✗ | 0.833$\pm$0.003 | 0.536$\pm$0.012 | 0.242$\pm$0.004 | 0.664$\pm$0.001 | 0.604$\pm$0.002 | 0.420$\pm$0.001 | 0.613$\pm$0.001
REAC | ✓ | ✗ | 0.839$\pm$0.003 | 0.581$\pm$0.010 | 0.245$\pm$0.003 | 0.676$\pm$0.003 | 0.580$\pm$0.006 | 0.397$\pm$0.004 | 0.625$\pm$0.002
UNION | ✓ | ✗ | 0.834$\pm$0.003 | 0.595$\pm$0.022 | 0.249$\pm$0.005 | 0.668$\pm$0.001 | 0.592$\pm$0.001 | 0.410$\pm$0.001 | 0.617$\pm$0.002
LIA | ✓ | ✓ | 0.805$\pm$0.003 | 1.102$\pm$0.035 | 0.469$\pm$0.016 | 0.622$\pm$0.024 | 0.645$\pm$0.014 | 0.453$\pm$0.014 | 0.579$\pm$0.019
CoNAL | ✓ | ✓ | 0.838$\pm$0.005 | 0.530$\pm$0.021 | 0.236$\pm$0.008 | 0.668$\pm$0.001 | 0.600$\pm$0.001 | 0.416$\pm$0.001 | 0.616$\pm$0.001
MaDL($\overline{\text{X}}$, I) | ✗ | ✗ | 0.832$\pm$0.006 | 0.583$\pm$0.021 | 0.256$\pm$0.009 | 0.576$\pm$0.010 | 0.646$\pm$0.002 | 0.461$\pm$0.002 | 0.500$\pm$0.000
MaDL($\overline{\text{X}}$, P) | ✦ | ✗ | 0.844$\pm$0.004 | 0.529$\pm$0.014 | 0.231$\pm$0.004 | 0.682$\pm$0.001 | 0.568$\pm$0.001 | 0.390$\pm$0.001 | 0.630$\pm$0.002
MaDL($\overline{\text{X}}$, F) | ✓ | ✗ | 0.840$\pm$0.005 | 0.545$\pm$0.019 | 0.237$\pm$0.006 | 0.681$\pm$0.001 | 0.569$\pm$0.002 | 0.390$\pm$0.001 | 0.630$\pm$0.001
MaDL(X, I) | ✗ | ✓ | 0.843$\pm$0.005 | 0.555$\pm$0.024 | 0.236$\pm$0.008 | 0.697$\pm$0.002 | 0.559$\pm$0.005 | 0.380$\pm$0.003 | 0.646$\pm$0.002
MaDL(X, P) | ✦ | ✓ | 0.845$\pm$0.002 | 0.546$\pm$0.027 | 0.232$\pm$0.005 | 0.697$\pm$0.001 | 0.557$\pm$0.002 | 0.380$\pm$0.001 | 0.646$\pm$0.002
MaDL(X, F) | ✓ | ✓ | 0.846$\pm$0.003 | 0.521$\pm$0.014 | 0.229$\pm$0.005 | 0.697$\pm$0.002 | 0.557$\pm$0.004 | 0.379$\pm$0.002 | 0.646$\pm$0.003
svhn (independent)
UB | ✓ | ✓ | 0.965$\pm$0.000 | 0.403$\pm$0.024 | 0.064$\pm$0.001 | 0.675$\pm$0.002 | 0.567$\pm$0.001 | 0.392$\pm$0.001 | 0.590$\pm$0.004
LB | ✓ | ✓ | 0.930$\pm$0.002 | 0.811$\pm$0.030 | 0.332$\pm$0.015 | 0.581$\pm$0.021 | 0.680$\pm$0.008 | 0.487$\pm$0.008 | 0.540$\pm$0.000
CL | ✓ | ✗ | 0.944$\pm$0.001 | 0.237$\pm$0.008 | 0.085$\pm$0.002 | 0.646$\pm$0.001 | 0.598$\pm$0.001 | 0.419$\pm$0.001 | 0.546$\pm$0.001
REAC | ✓ | ✗ | 0.943$\pm$0.001 | 0.278$\pm$0.048 | 0.096$\pm$0.020 | 0.648$\pm$0.006 | 0.593$\pm$0.015 | 0.414$\pm$0.010 | 0.543$\pm$0.000
UNION | ✓ | ✗ | 0.942$\pm$0.002 | 0.250$\pm$0.005 | 0.087$\pm$0.001 | 0.646$\pm$0.001 | 0.594$\pm$0.001 | 0.416$\pm$0.000 | 0.544$\pm$0.001
LIA | ✓ | ✓ | 0.935$\pm$0.002 | 0.809$\pm$0.162 | 0.333$\pm$0.081 | 0.585$\pm$0.016 | 0.667$\pm$0.023 | 0.476$\pm$0.021 | 0.536$\pm$0.004
CoNAL | ✓ | ✓ | 0.944$\pm$0.002 | 0.246$\pm$0.012 | 0.086$\pm$0.002 | 0.688$\pm$0.036 | 0.560$\pm$0.029 | 0.384$\pm$0.026 | 0.602$\pm$0.050
MaDL($\overline{\text{X}}$, I) | ✗ | ✗ | 0.942$\pm$0.003 | 0.253$\pm$0.023 | 0.093$\pm$0.008 | 0.613$\pm$0.003 | 0.630$\pm$0.003 | 0.446$\pm$0.003 | 0.500$\pm$0.000
MaDL($\overline{\text{X}}$, P) | ✦ | ✗ | 0.940$\pm$0.002 | 0.262$\pm$0.011 | 0.091$\pm$0.003 | 0.652$\pm$0.000 | 0.585$\pm$0.000 | 0.408$\pm$0.000 | 0.544$\pm$0.000
MaDL($\overline{\text{X}}$, F) | ✓ | ✗ | 0.940$\pm$0.002 | 0.264$\pm$0.007 | 0.092$\pm$0.002 | 0.652$\pm$0.001 | 0.585$\pm$0.000 | 0.408$\pm$0.000 | 0.543$\pm$0.001
MaDL(X, I) | ✗ | ✓ | 0.944$\pm$0.003 | 0.240$\pm$0.007 | 0.085$\pm$0.003 | 0.665$\pm$0.001 | 0.575$\pm$0.001 | 0.399$\pm$0.001 | 0.565$\pm$0.001
MaDL(X, P) | ✦ | ✓ | 0.945$\pm$0.002 | 0.245$\pm$0.010 | 0.084$\pm$0.004 | 0.669$\pm$0.002 | 0.572$\pm$0.002 | 0.396$\pm$0.002 | 0.573$\pm$0.005
MaDL(X, F) | ✓ | ✓ | 0.943$\pm$0.001 | 0.254$\pm$0.013 | 0.087$\pm$0.002 | 0.668$\pm$0.003 | 0.572$\pm$0.003 | 0.396$\pm$0.003 | 0.570$\pm$0.006
### 5.3 RQ2: Does modeling correlations between (potentially spamming)
annotators improve learning? (Properties P3, P4)
Takeaway: Modeling correlations between annotators leads to better results in
scenarios with many correlated spamming annotators (property P4). Capturing
the correlations of beneficial annotators does not lead to consistently better
results (property P3). However, estimating and leveraging APs during training
becomes more critical in scenarios with correlated annotators.
Setup: We address RQ2 by evaluating multi-annotator supervised learning
techniques with and without modeling annotator correlations. We simulate two
annotator sets for each dataset without real-world annotators according to
Table 3. The first annotator set, correlated, consists of the same ten
annotators as in RQ1, extended by ten additional copies each of the
adversarial, the class-specialized, and one of the two cluster-specialized
annotators, so that there are 40 annotators in total. The second annotator set,
random-correlated, also consists of the same ten annotators as in RQ1 but is
extended by 90 identical randomly guessing annotators. Each simulated annotator
provides class labels for 20% of randomly selected training instances.
Next to the related multi-annotator supervised
learning techniques and the two baselines, we evaluate two variants of MaDL
denoted via the scheme MaDL(P3). Property P3 refers to the modeling of
potential annotator correlations. There, we differentiate between the variant
MaDL(W) using annotator weights via the weighted loss function (cf. Eq. 25)
and the variant MaDL($\overline{\text{W}}$) training via the loss function
without any weights (cf. Eq. 15). MaDL(W) corresponds to MaDL’s default
variant in this setup.
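As a rough illustration of the idea behind MaDL(W), the following sketch shows an annotator-weighted negative log-likelihood in which each annotator's cross-entropy term is scaled by its weight; the array names and shapes are hypothetical, and the actual form of Eq. 25 may differ.

```python
import numpy as np

def weighted_annotation_nll(annot_probas, annotations, weights, mask):
    """Annotator-weighted negative log-likelihood (illustrative only).

    annot_probas: (n, M, C) predicted probabilities that annotator m assigns
                  class c to instance n (outputs of the annotation model).
    annotations:  (n, M) integer class labels provided by the annotators.
    weights:      (M,) annotator weights; low for correlated/spamming ones.
    mask:         (n, M) boolean, True where annotator m labeled instance n.
    """
    n_idx, m_idx = np.nonzero(mask)
    # Likelihood of each observed annotation under the model.
    lik = annot_probas[n_idx, m_idx, annotations[n_idx, m_idx]]
    nll = -np.log(lik + 1e-12)
    # Scale each annotator's term by its weight; normalizing by the total
    # weight keeps the loss scale comparable across annotator sets.
    return np.sum(weights[m_idx] * nll) / np.sum(weights[m_idx])
```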
Qualitative study: Fig. 7 visualizes MaDL(W)’s learned annotator embeddings
and weights for the dataset letter with the two annotator sets, correlated and
random-correlated, after five training epochs. Based on MaDL(W)’s learned
kernel function, we create the two scatter plots via multi-dimensional scaling
(Kruskal, 1964) for dimensionality reduction. This way, the annotator
embeddings, originally located in an $(R=16)$-dimensional space, are
transformed into a two-dimensional space, where each circle represents one
annotator embedding. A circle’s color indicates to which annotator group the
embedding belongs. The two bar plots visualize the mean annotator weight of
the different annotator groups, again indicated by their respective color.
Analyzing the scatter plot of the annotator set correlated, we observe that
the annotator embeddings’ latent representations approximately reflect the
annotator groups’ correlations. Concretely, there are four clusters. The
center cluster corresponds to the seven independent annotators: one cluster-
specialized annotator and six common annotators. The three clusters in the
outer area represent the three groups of correlated annotators. The bar plot
confirms our goal to assign lower weights to strongly correlated annotators.
For example, the single independent cluster-specialized annotator has a weight
of 4.06, while the eleven correlated cluster-specialized annotators have a
mean weight of 0.43. We make similar observations for the annotator set
random-correlated. The scatter plot shows that the independent annotators also
form a cluster, separated from the cluster of the large group of correlated,
randomly guessing annotators. The single adversarial annotator belongs to the
cluster of randomly guessing annotators since both groups of annotators make
many annotation errors and thus have highly correlated annotation patterns.
Again, the bar plot confirms that the correlated annotators get low weights.
Moreover, these annotator weights are inversely proportional to the size of a
group of correlated annotators. For example, the summed weight of the 90
randomly guessing annotators is similar to the weight of the single
class-specialized annotator.
Figure 7: Visualization of MaDL(W)’s learned similarities between annotator embeddings and associated annotator weights for the annotator sets correlated and random-correlated (scatter plots: Latent Dimension 1 vs. Latent Dimension 2; bar plots: mean annotator weight per annotator group).
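The projection underlying Fig. 7 can be reproduced in spirit with off-the-shelf tools. The sketch below substitutes a placeholder RBF similarity for MaDL(W)'s learned kernel (not restated here), converts it to a dissimilarity, and applies scikit-learn's metric MDS as a simplification of Kruskal's nonmetric variant.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Placeholder for the learned (R=16)-dimensional annotator embeddings.
embeddings = rng.normal(size=(40, 16))

# Assumed RBF similarity standing in for the learned kernel; MDS expects a
# dissimilarity, so we map a similarity in (0, 1] to 1 - similarity.
sq_dists = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
dissimilarity = 1.0 - np.exp(-0.5 * sq_dists)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # (40, 2) points for the scatter plot
```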
Quantitative study: Table 6 presents the GT and AP models’ test performances
for the four datasets with the annotator set correlated and Table 7 for the
annotator set random-correlated. Both tables indicate whether a technique
models correlations between annotators (property P3) and whether the authors
of a technique demonstrated its robustness against spamming annotators
(property P4). Analogous to RQ1, training with GT labels achieves the best
performances (UB), while annotation aggregation via the majority rule leads to
the worst ones (LB). The LB’s significant underperformance confirms the
importance of modeling APs in scenarios with correlated annotators. MaDL(W),
as the default MaDL variant, achieves competitive and often superior results
for all datasets and evaluation scores. In particular, for the annotator set
random-correlated, MaDL(W) outperforms the other techniques, which are
vulnerable to many randomly guessing annotators. This observation is also
confirmed when we compare MaDL(W) to MaDL($\overline{\text{W}}$). In contrast,
there is no consistent performance gain of MaDL(W) over
MaDL($\overline{\text{W}}$) for the annotator set correlated. While CoNAL is
competitive for the annotator set correlated, its performance strongly
degrades for the annotator set random-correlated. The initial E step in LIA’s
EM algorithm estimates the GT class labels via a probabilistic variant of the
majority rule. Similarly to the LB, such an estimate is less accurate for
correlated and/or spamming annotators. Besides MaDL(W), only CL and UNION
consistently outperform the LB by large margins for the annotator set random-
correlated.
Table 6: Results regarding RQ2 for datasets with simulated annotators (annotator set correlated): best and second-best performances are highlighted per dataset and evaluation score while excluding the performances of the UB.
Technique | P3 | P4 | GT: ACC $\uparrow$ | GT: NLL $\downarrow$ | GT: BS $\downarrow$ | AP: ACC $\uparrow$ | AP: NLL $\downarrow$ | AP: BS $\downarrow$ | AP: BAL-ACC $\uparrow$
---|---|---|---|---|---|---|---|---|---
letter (correlated)
UB | ✗ | ✓ | 0.962$\pm$0.004 | 0.129$\pm$0.004 | 0.058$\pm$0.003 | 0.887$\pm$0.002 | 0.305$\pm$0.004 | 0.173$\pm$0.002 | 0.757$\pm$0.002
LB | ✗ | ✗ | 0.762$\pm$0.007 | 1.302$\pm$0.005 | 0.482$\pm$0.004 | 0.682$\pm$0.005 | 0.604$\pm$0.003 | 0.416$\pm$0.002 | 0.602$\pm$0.006
CL | ✗ | ✗ | 0.803$\pm$0.035 | 2.435$\pm$1.218 | 0.318$\pm$0.057 | 0.800$\pm$0.008 | 0.446$\pm$0.016 | 0.285$\pm$0.012 | 0.674$\pm$0.007
REAC | ✗ | ✗ | 0.922$\pm$0.003 | 0.288$\pm$0.065 | 0.115$\pm$0.007 | 0.815$\pm$0.001 | 0.395$\pm$0.001 | 0.249$\pm$0.001 | 0.684$\pm$0.001
UNION | ✓ | ✗ | 0.866$\pm$0.019 | 1.668$\pm$0.322 | 0.224$\pm$0.034 | 0.795$\pm$0.007 | 0.432$\pm$0.007 | 0.278$\pm$0.007 | 0.667$\pm$0.006
LIA | ✗ | ✗ | 0.823$\pm$0.005 | 1.483$\pm$0.018 | 0.569$\pm$0.007 | 0.676$\pm$0.005 | 0.629$\pm$0.004 | 0.436$\pm$0.004 | 0.575$\pm$0.004
CoNAL | ✓ | ✓ | 0.871$\pm$0.015 | 1.380$\pm$0.349 | 0.213$\pm$0.024 | 0.840$\pm$0.014 | 0.390$\pm$0.028 | 0.238$\pm$0.021 | 0.712$\pm$0.014
MaDL($\overline{\text{W}}$) | ✗ | ✗ | 0.946$\pm$0.006 | 0.293$\pm$0.082 | 0.083$\pm$0.009 | 0.883$\pm$0.002 | 0.314$\pm$0.001 | 0.178$\pm$0.002 | 0.751$\pm$0.003
MaDL(W) | ✓ | ✓ | 0.947$\pm$0.003 | 0.282$\pm$0.069 | 0.080$\pm$0.004 | 0.887$\pm$0.001 | 0.308$\pm$0.004 | 0.175$\pm$0.002 | 0.756$\pm$0.001
fmnist (correlated)
UB | ✗ | ✓ | 0.909$\pm$0.002 | 0.246$\pm$0.005 | 0.131$\pm$0.003 | 0.866$\pm$0.002 | 0.333$\pm$0.002 | 0.198$\pm$0.002 | 0.741$\pm$0.002
LB | ✗ | ✗ | 0.787$\pm$0.003 | 1.127$\pm$0.013 | 0.475$\pm$0.007 | 0.668$\pm$0.009 | 0.626$\pm$0.006 | 0.436$\pm$0.006 | 0.580$\pm$0.005
CL | ✗ | ✗ | 0.868$\pm$0.003 | 0.447$\pm$0.020 | 0.217$\pm$0.010 | 0.799$\pm$0.004 | 0.421$\pm$0.004 | 0.270$\pm$0.003 | 0.677$\pm$0.004
REAC | ✗ | ✗ | 0.873$\pm$0.004 | 0.415$\pm$0.012 | 0.196$\pm$0.006 | 0.828$\pm$0.001 | 0.382$\pm$0.001 | 0.237$\pm$0.001 | 0.697$\pm$0.001
UNION | ✓ | ✗ | 0.859$\pm$0.006 | 0.411$\pm$0.018 | 0.205$\pm$0.008 | 0.801$\pm$0.009 | 0.420$\pm$0.014 | 0.269$\pm$0.011 | 0.678$\pm$0.009
LIA | ✗ | ✗ | 0.837$\pm$0.006 | 1.277$\pm$0.008 | 0.553$\pm$0.004 | 0.685$\pm$0.002 | 0.633$\pm$0.001 | 0.441$\pm$0.001 | 0.569$\pm$0.002
CoNAL | ✓ | ✓ | 0.897$\pm$0.002 | 0.299$\pm$0.009 | 0.152$\pm$0.004 | 0.844$\pm$0.001 | 0.356$\pm$0.003 | 0.217$\pm$0.002 | 0.721$\pm$0.001
MaDL($\overline{\text{W}}$) | ✗ | ✗ | 0.904$\pm$0.002 | 0.272$\pm$0.007 | 0.139$\pm$0.003 | 0.863$\pm$0.003 | 0.337$\pm$0.004 | 0.201$\pm$0.004 | 0.737$\pm$0.004
MaDL(W) | ✓ | ✓ | 0.903$\pm$0.002 | 0.273$\pm$0.004 | 0.141$\pm$0.002 | 0.863$\pm$0.003 | 0.338$\pm$0.003 | 0.202$\pm$0.003 | 0.738$\pm$0.003
cifar10 (correlated)
UB | ✗ | ✓ | 0.933$\pm$0.002 | 0.495$\pm$0.017 | 0.118$\pm$0.003 | 0.837$\pm$0.001 | 0.384$\pm$0.001 | 0.235$\pm$0.001 | 0.711$\pm$0.001
LB | ✗ | ✗ | 0.652$\pm$0.014 | 1.309$\pm$0.016 | 0.540$\pm$0.008 | 0.602$\pm$0.011 | 0.623$\pm$0.003 | 0.436$\pm$0.003 | 0.541$\pm$0.008
CL | ✗ | ✗ | 0.850$\pm$0.007 | 0.490$\pm$0.022 | 0.224$\pm$0.011 | 0.799$\pm$0.002 | 0.439$\pm$0.004 | 0.282$\pm$0.003 | 0.674$\pm$0.002
REAC | ✗ | ✗ | 0.856$\pm$0.003 | 0.600$\pm$0.063 | 0.259$\pm$0.025 | 0.775$\pm$0.017 | 0.445$\pm$0.015 | 0.287$\pm$0.012 | 0.648$\pm$0.017
UNION | ✓ | ✗ | 0.858$\pm$0.007 | 0.499$\pm$0.024 | 0.211$\pm$0.009 | 0.800$\pm$0.003 | 0.432$\pm$0.002 | 0.276$\pm$0.002 | 0.675$\pm$0.003
LIA | ✗ | ✗ | 0.776$\pm$0.002 | 1.343$\pm$0.020 | 0.565$\pm$0.009 | 0.741$\pm$0.002 | 0.617$\pm$0.003 | 0.424$\pm$0.003 | 0.617$\pm$0.002
CoNAL | ✓ | ✓ | 0.862$\pm$0.002 | 0.473$\pm$0.005 | 0.213$\pm$0.003 | 0.800$\pm$0.001 | 0.433$\pm$0.003 | 0.277$\pm$0.002 | 0.676$\pm$0.001
MaDL($\overline{\text{W}}$) | ✗ | ✗ | 0.878$\pm$0.004 | 0.439$\pm$0.015 | 0.184$\pm$0.005 | 0.824$\pm$0.004 | 0.398$\pm$0.004 | 0.247$\pm$0.004 | 0.699$\pm$0.004
MaDL(W) | ✓ | ✓ | 0.875$\pm$0.008 | 0.434$\pm$0.020 | 0.188$\pm$0.011 | 0.823$\pm$0.002 | 0.397$\pm$0.003 | 0.248$\pm$0.002 | 0.698$\pm$0.002
svhn (correlated)
UB | ✗ | ✓ | 0.966$\pm$0.001 | 0.382$\pm$0.018 | 0.062$\pm$0.001 | 0.794$\pm$0.003 | 0.414$\pm$0.002 | 0.266$\pm$0.002 | 0.657$\pm$0.004
LB | ✗ | ✗ | 0.900$\pm$0.005 | 1.012$\pm$0.038 | 0.420$\pm$0.017 | 0.624$\pm$0.022 | 0.634$\pm$0.008 | 0.444$\pm$0.007 | 0.567$\pm$0.017
CL | ✗ | ✗ | 0.947$\pm$0.001 | 0.314$\pm$0.044 | 0.116$\pm$0.017 | 0.789$\pm$0.009 | 0.433$\pm$0.001 | 0.281$\pm$0.002 | 0.655$\pm$0.012
REAC | ✗ | ✗ | 0.946$\pm$0.002 | 0.263$\pm$0.012 | 0.097$\pm$0.005 | 0.767$\pm$0.002 | 0.431$\pm$0.001 | 0.283$\pm$0.000 | 0.620$\pm$0.003
UNION | ✓ | ✗ | 0.947$\pm$0.001 | 0.250$\pm$0.025 | 0.089$\pm$0.010 | 0.767$\pm$0.003 | 0.435$\pm$0.003 | 0.286$\pm$0.002 | 0.621$\pm$0.005
LIA | ✗ | ✗ | 0.929$\pm$0.002 | 1.123$\pm$0.023 | 0.477$\pm$0.011 | 0.716$\pm$0.013 | 0.623$\pm$0.010 | 0.431$\pm$0.010 | 0.594$\pm$0.013
CoNAL | ✓ | ✓ | 0.952$\pm$0.000 | 0.231$\pm$0.003 | 0.075$\pm$0.001 | 0.835$\pm$0.003 | 0.379$\pm$0.005 | 0.235$\pm$0.004 | 0.702$\pm$0.004
MaDL($\overline{\text{W}}$) | ✗ | ✗ | 0.950$\pm$0.002 | 0.237$\pm$0.006 | 0.078$\pm$0.003 | 0.790$\pm$0.003 | 0.416$\pm$0.002 | 0.269$\pm$0.002 | 0.652$\pm$0.002
MaDL(W) | ✓ | ✓ | 0.952$\pm$0.001 | 0.227$\pm$0.006 | 0.075$\pm$0.002 | 0.784$\pm$0.003 | 0.420$\pm$0.002 | 0.273$\pm$0.002 | 0.645$\pm$0.004
Table 7: Results regarding RQ2 for datasets with simulated annotators (annotator set random-correlated): best and second-best performances are highlighted per dataset and evaluation score while excluding the performances of the UB.
Technique | P3 | P4 | GT: ACC $\uparrow$ | GT: NLL $\downarrow$ | GT: BS $\downarrow$ | AP: ACC $\uparrow$ | AP: NLL $\downarrow$ | AP: BS $\downarrow$ | AP: BAL-ACC $\uparrow$
---|---|---|---|---|---|---|---|---|---
letter (random-correlated)
UB | ✗ | ✓ | 0.960$\pm$0.003 | 0.131$\pm$0.006 | 0.059$\pm$0.003 | 0.937$\pm$0.002 | 0.212$\pm$0.003 | 0.104$\pm$0.002 | 0.516$\pm$0.002
LB | ✗ | ✗ | 0.056$\pm$0.009 | 3.307$\pm$0.049 | 0.965$\pm$0.004 | 0.088$\pm$0.000 | 9.950$\pm$2.090 | 1.816$\pm$0.002 | 0.500$\pm$0.000
CL | ✗ | ✗ | 0.565$\pm$0.028 | 3.519$\pm$0.455 | 0.682$\pm$0.052 | 0.925$\pm$0.000 | 0.237$\pm$0.004 | 0.124$\pm$0.002 | 0.506$\pm$0.000
REAC | ✗ | ✗ | 0.607$\pm$0.024 | 1.810$\pm$0.127 | 0.561$\pm$0.034 | 0.926$\pm$0.000 | 0.221$\pm$0.004 | 0.116$\pm$0.002 | 0.507$\pm$0.000
UNION | ✓ | ✗ | 0.615$\pm$0.034 | 3.317$\pm$0.582 | 0.625$\pm$0.065 | 0.925$\pm$0.000 | 0.232$\pm$0.004 | 0.122$\pm$0.002 | 0.506$\pm$0.000
LIA | ✗ | ✗ | 0.352$\pm$0.010 | 2.960$\pm$0.035 | 0.932$\pm$0.004 | 0.088$\pm$0.000 | 2.131$\pm$0.137 | 1.474$\pm$0.041 | 0.500$\pm$0.000
CoNAL | ✓ | ✓ | 0.581$\pm$0.015 | 2.325$\pm$0.249 | 0.599$\pm$0.027 | 0.925$\pm$0.000 | 0.236$\pm$0.002 | 0.124$\pm$0.001 | 0.507$\pm$0.000
MaDL($\overline{\text{W}}$) | ✗ | ✗ | 0.548$\pm$0.033 | 1.902$\pm$0.215 | 0.673$\pm$0.064 | 0.801$\pm$0.044 | 0.423$\pm$0.033 | 0.265$\pm$0.027 | 0.506$\pm$0.006
MaDL(W) | ✓ | ✓ | 0.932$\pm$0.003 | 0.277$\pm$0.038 | 0.101$\pm$0.005 | 0.940$\pm$0.000 | 0.204$\pm$0.003 | 0.101$\pm$0.001 | 0.519$\pm$0.001
fmnist (random-correlated)
UB | ✗ | ✓ | 0.909$\pm$0.002 | 0.246$\pm$0.005 | 0.131$\pm$0.003 | 0.888$\pm$0.000 | 0.337$\pm$0.001 | 0.191$\pm$0.000 | 0.520$\pm$0.000
LB | ✗ | ✗ | 0.172$\pm$0.019 | 2.296$\pm$0.005 | 0.899$\pm$0.001 | 0.140$\pm$0.000 | 21.865$\pm$6.169 | 1.703$\pm$0.000 | 0.500$\pm$0.000
CL | ✗ | ✗ | 0.880$\pm$0.003 | 0.462$\pm$0.169 | 0.222$\pm$0.073 | 0.880$\pm$0.003 | 0.347$\pm$0.004 | 0.200$\pm$0.003 | 0.513$\pm$0.002
REAC | ✗ | ✗ | 0.870$\pm$0.003 | 0.470$\pm$0.009 | 0.204$\pm$0.004 | 0.885$\pm$0.000 | 0.342$\pm$0.000 | 0.194$\pm$0.000 | 0.514$\pm$0.000
UNION | ✓ | ✗ | 0.884$\pm$0.002 | 0.387$\pm$0.022 | 0.182$\pm$0.007 | 0.881$\pm$0.000 | 0.345$\pm$0.000 | 0.198$\pm$0.000 | 0.514$\pm$0.000
LIA | ✗ | ✗ | 0.677$\pm$0.008 | 2.094$\pm$0.002 | 0.852$\pm$0.001 | 0.140$\pm$0.000 | 2.067$\pm$0.005 | 1.418$\pm$0.002 | 0.500$\pm$0.000
CoNAL | ✓ | ✓ | 0.858$\pm$0.012 | 0.457$\pm$0.086 | 0.219$\pm$0.031 | 0.882$\pm$0.002 | 0.344$\pm$0.002 | 0.197$\pm$0.002 | 0.516$\pm$0.001
MaDL($\overline{\text{W}}$) | ✗ | ✗ | 0.337$\pm$0.046 | 2.131$\pm$0.090 | 0.855$\pm$0.029 | 0.229$\pm$0.075 | 1.038$\pm$0.146 | 0.814$\pm$0.128 | 0.498$\pm$0.002
MaDL(W) | ✓ | ✓ | 0.896$\pm$0.002 | 0.290$\pm$0.003 | 0.150$\pm$0.002 | 0.889$\pm$0.000 | 0.337$\pm$0.000 | 0.191$\pm$0.000 | 0.520$\pm$0.000
cifar10 (random-correlated)
UB | ✗ | ✓ | 0.932$\pm$0.002 | 0.519$\pm$0.016 | 0.119$\pm$0.004 | 0.886$\pm$0.000 | 0.340$\pm$0.002 | 0.192$\pm$0.001 | 0.515$\pm$0.000
LB | ✗ | ✗ | 0.141$\pm$0.008 | 2.301$\pm$0.002 | 0.900$\pm$0.000 | 0.139$\pm$0.000 | 14.224$\pm$6.699 | 1.704$\pm$0.001 | 0.500$\pm$0.000
CL | ✗ | ✗ | 0.576$\pm$0.023 | 1.395$\pm$0.090 | 0.576$\pm$0.028 | 0.878$\pm$0.000 | 0.353$\pm$0.002 | 0.204$\pm$0.001 | 0.507$\pm$0.000
REAC | ✗ | ✗ | 0.462$\pm$0.010 | 2.093$\pm$0.062 | 0.767$\pm$0.011 | 0.875$\pm$0.001 | 0.353$\pm$0.000 | 0.204$\pm$0.000 | 0.505$\pm$0.001
UNION | ✓ | ✗ | 0.540$\pm$0.049 | 1.517$\pm$0.209 | 0.629$\pm$0.065 | 0.876$\pm$0.002 | 0.355$\pm$0.003 | 0.205$\pm$0.002 | 0.506$\pm$0.002
LIA | ✗ | ✗ | 0.211$\pm$0.014 | 2.273$\pm$0.007 | 0.894$\pm$0.001 | 0.139$\pm$0.000 | 2.096$\pm$0.007 | 1.429$\pm$0.002 | 0.500$\pm$0.000
CoNAL | ✓ | ✓ | 0.555$\pm$0.020 | 1.379$\pm$0.053 | 0.592$\pm$0.020 | 0.876$\pm$0.001 | 0.355$\pm$0.002 | 0.206$\pm$0.002 | 0.506$\pm$0.001
MaDL($\overline{\text{W}}$) | ✗ | ✗ | 0.217$\pm$0.042 | 6.992$\pm$0.386 | 1.219$\pm$0.087 | 0.872$\pm$0.001 | 0.398$\pm$0.011 | 0.229$\pm$0.009 | 0.502$\pm$0.001
MaDL(W) | ✓ | ✓ | 0.822$\pm$0.007 | 0.593$\pm$0.033 | 0.262$\pm$0.010 | 0.885$\pm$0.000 | 0.339$\pm$0.001 | 0.192$\pm$0.001 | 0.514$\pm$0.000
svhn (random-correlated)
UB | ✗ | ✓ | 0.965$\pm$0.001 | 0.399$\pm$0.017 | 0.064$\pm$0.001 | 0.877$\pm$0.000 | 0.349$\pm$0.000 | 0.201$\pm$0.000 | 0.509$\pm$0.001
LB | ✗ | ✗ | 0.190$\pm$0.000 | 2.298$\pm$0.002 | 0.899$\pm$0.000 | 0.138$\pm$0.000 | 24.019$\pm$7.802 | 1.704$\pm$0.001 | 0.500$\pm$0.000
CL | ✗ | ✗ | 0.908$\pm$0.038 | 0.398$\pm$0.226 | 0.143$\pm$0.056 | 0.873$\pm$0.001 | 0.354$\pm$0.002 | 0.205$\pm$0.001 | 0.505$\pm$0.000
REAC | ✗ | ✗ | 0.189$\pm$0.001 | 2.294$\pm$0.003 | 0.898$\pm$0.001 | 0.140$\pm$0.000 | 2.262$\pm$0.734 | 1.384$\pm$0.304 | 0.500$\pm$0.000
UNION | ✓ | ✗ | 0.881$\pm$0.104 | 0.529$\pm$0.553 | 0.179$\pm$0.154 | 0.872$\pm$0.002 | 0.356$\pm$0.008 | 0.206$\pm$0.005 | 0.505$\pm$0.000
LIA | ✗ | ✗ | 0.192$\pm$0.004 | 2.294$\pm$0.004 | 0.898$\pm$0.001 | 0.138$\pm$0.000 | 3.864$\pm$3.540 | 1.483$\pm$0.111 | 0.500$\pm$0.000
CoNAL | ✓ | ✓ | 0.231$\pm$0.048 | 2.933$\pm$0.526 | 0.956$\pm$0.072 | 0.860$\pm$0.000 | 0.414$\pm$0.008 | 0.242$\pm$0.003 | 0.500$\pm$0.000
MaDL($\overline{\text{W}}$) | ✗ | ✗ | 0.243$\pm$0.102 | 6.055$\pm$3.173 | 1.119$\pm$0.230 | 0.575$\pm$0.352 | 0.702$\pm$0.344 | 0.505$\pm$0.319 | 0.500$\pm$0.001
MaDL(W) | ✓ | ✓ | 0.940$\pm$0.002 | 0.244$\pm$0.011 | 0.091$\pm$0.003 | 0.877$\pm$0.000 | 0.349$\pm$0.000 | 0.201$\pm$0.000 | 0.508$\pm$0.000
### 5.4 RQ3: Do annotator features containing prior information about
annotators improve learning and enable inductively learning annotators’
performances? (Properties P5, P6)
Takeaway: Annotator features containing prior information about annotators
improve the learning of GT and AP models (property P5). Furthermore, we can
use these annotator features to inductively estimate the performances of
annotators unavailable during training (property P6).
Setup: We address RQ3 by evaluating multi-annotator supervised learning
techniques with and without using annotator features containing prior
information. For each dataset, we simulate 100 annotators according to the
annotator set inductive in Table 3. However, only 75 annotators provide class
labels for training. Each of them provides class labels for 2% of randomly
selected training instances.
The lower annotation ratio is used to study the generalization across
annotators sharing similar features. The remaining 25 annotators form a test
set to assess AP predictions. We generate annotator features containing prior
information by composing information about annotator type, class-wise APs, and
cluster-wise APs. Fig. 8 provides examples for two annotators based on two
classes and four clusters. We evaluate two variants of LIA, CoNAL, and MaDL,
denoted respectively by the schemes LIA(P5), CoNAL(P5), and MaDL(P5). Property
P5 refers to a technique’s ability to consider prior information about
annotators. We differentiate between the variant with annotator features
containing prior information (A) and the one using one-hot encoded features to
separate between annotators’ identities ($\overline{\text{A}}$).
MaDL($\overline{\text{A}}$) corresponds to MaDL’s default variant in this
setup. We do not evaluate CL, UNION, and REAC since these techniques cannot
handle annotator features.
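To make this feature composition concrete, the following sketch assembles the two example vectors shown in Fig. 8 below; the one-hot encoding of the annotator type and the type vocabulary are our assumptions for illustration.

```python
import numpy as np

# Assumed annotator-type vocabulary; only the type names appearing in the
# simulation are known from the text.
ANNOTATOR_TYPES = ["common", "adversarial", "class-specialized", "cluster-specialized"]

def annotator_feature(ann_type, class_wise_aps, cluster_wise_aps):
    """Compose an annotator feature vector: one-hot type, then class-wise
    APs (two classes), then cluster-wise APs (four clusters)."""
    one_hot = np.eye(len(ANNOTATOR_TYPES))[ANNOTATOR_TYPES.index(ann_type)]
    return np.concatenate([one_hot, class_wise_aps, cluster_wise_aps])

# The two example annotators of Fig. 8:
a1 = annotator_feature("adversarial", [0.04, 0.06], [0.03, 0.07, 0.04, 0.05])
a2 = annotator_feature("cluster-specialized", [0.51, 0.49], [0.95, 0.03, 0.95, 0.07])
```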
Figure 8: Visualization of MaDL(A)’s inductive AP estimates for two unknown annotators. The annotator feature vectors shown in the figure are $\mathbf{a}_{1}=(\text{adversarial},0.04,0.06,0.03,0.07,0.04,0.05)^{\top}$ for the adversarial annotator and $\mathbf{a}_{2}=(\text{cluster-specialized},0.51,0.49,0.95,0.03,0.95,0.07)^{\top}$ for the cluster-specialized annotator, where the first entry encodes the annotator type, the next two entries the class-wise APs, and the last four entries the cluster-wise APs.
Qualitative study: Fig. 8 visualizes AP predictions of MaDL(A) regarding two
exemplary annotators for the dataset toy. The visualization of these AP
predictions is analogous to Fig. 6. Neither of the two annotators provides
class labels during training; the plotted training instances show only
potential annotations to visualize the annotation patterns. The vectors in
Fig. 8 list the annotator features containing prior information for both
annotators, with the entries encoding the annotator type followed by the
class-wise and cluster-wise APs. These meanings are unknown to MaDL(A), such that its AP predictions
exclusively result from generalizing similar annotators’ features and their
annotations available during training. MaDL(A) correctly identifies the left
annotator as adversarial because it predicts low (white) AP scores across the
feature space regions close to training instances. For the right cluster-
specialized annotator, MaDL(A) accurately separates the two weak clusters
(feature space regions with predominantly crosses) with low AP estimates from
the two expert clusters (feature space regions with predominantly circles)
with high AP estimates.
Quantitative study: Table 8 presents the GT and AP models’ test performances
for the four datasets with the simulated annotator set inductive. The table
further indicates whether a technique processes prior information as annotator
features (property P5) and whether a technique can inductively estimate the
performances of annotators unavailable during the training phase (property
P6). Note that the AP results refer to the aforementioned 25 test annotators.
Hence, there are no results (marked as –) for techniques with AP models not
fulfilling property P6. For completeness, we provide the results for the 75
annotators providing class labels for training in Appendix D. As for RQ1 and
RQ2, training with GT labels leads to the best performance results (UB),
whereas learning from annotations aggregated via the majority rule mostly
results in the worst performances (LB). Inspecting the results of MaDL(A)’s GT
model compared to the other techniques, we observe competitive or partially
superior results across all four datasets. Concerning its AP model, we further
note that MaDL(A) provides meaningful AP estimates, indicated by BAL-ACC
values greater than 0.5. Comparing the GT models’ results of each pair of
variants, performance gains for LIA and MaDL demonstrate the potential
benefits of learning from annotator features containing prior annotator
information. In contrast, the GT models’ results of CoNAL(A) and
CoNAL($\overline{\text{A}}$) hardly differ.
Table 8: Results regarding RQ3 for datasets with simulated annotators (annotator set inductive): best and second-best performances are highlighted per dataset and evaluation score while excluding the performances of the UB. The AP models’ results refer to the 25 test annotators providing no class labels for training. An entry – marks a technique whose AP model cannot make predictions for such test annotators.
Technique | P5 | P6 | GT: ACC $\uparrow$ | GT: NLL $\downarrow$ | GT: BS $\downarrow$ | AP: ACC $\uparrow$ | AP: NLL $\downarrow$ | AP: BS $\downarrow$ | AP: BAL-ACC $\uparrow$
---|---|---|---|---|---|---|---|---|---
letter (inductive)
UB | ✓ | ✓ | 0.962$\pm$0.002 | 0.129$\pm$0.003 | 0.058$\pm$0.002 | 0.672$\pm$0.005 | 0.745$\pm$0.047 | 0.457$\pm$0.011 | 0.612$\pm$0.005
LB | ✓ | ✓ | 0.861$\pm$0.005 | 1.090$\pm$0.017 | 0.429$\pm$0.008 | 0.569$\pm$0.008 | 0.730$\pm$0.011 | 0.522$\pm$0.007 | 0.537$\pm$0.006
LIA($\overline{\text{A}}$) | ✗ | ✗ | 0.875$\pm$0.006 | 0.901$\pm$0.060 | 0.350$\pm$0.024 | – | – | – | –
LIA(A) | ✓ | ✓ | 0.876$\pm$0.006 | 1.006$\pm$0.177 | 0.397$\pm$0.074 | 0.609$\pm$0.017 | 1.447$\pm$0.845 | 0.597$\pm$0.105 | 0.545$\pm$0.033
CoNAL($\overline{\text{A}}$) | ✗ | ✗ | 0.875$\pm$0.009 | 0.804$\pm$0.119 | 0.186$\pm$0.010 | – | – | – | –
CoNAL(A) | ✓ | ✗ | 0.874$\pm$0.007 | 0.808$\pm$0.116 | 0.186$\pm$0.011 | – | – | – | –
MaDL($\overline{\text{A}}$) | ✗ | ✗ | 0.911$\pm$0.006 | 0.334$\pm$0.026 | 0.129$\pm$0.008 | – | – | – | –
MaDL(A) | ✓ | ✓ | 0.914$\pm$0.004 | 0.303$\pm$0.009 | 0.124$\pm$0.005 | 0.668$\pm$0.007 | 0.813$\pm$0.115 | 0.471$\pm$0.015 | 0.600$\pm$0.010
fmnist (inductive)
UB | ✓ | ✓ | 0.909$\pm$0.002 | 0.246$\pm$0.005 | 0.131$\pm$0.003 | 0.730$\pm$0.008 | 0.536$\pm$0.019 | 0.357$\pm$0.010 | 0.656$\pm$0.009
LB | ✓ | ✓ | 0.881$\pm$0.002 | 0.876$\pm$0.005 | 0.370$\pm$0.002 | 0.590$\pm$0.023 | 0.681$\pm$0.005 | 0.487$\pm$0.006 | 0.537$\pm$0.010
LIA($\overline{\text{A}}$) | ✗ | ✗ | 0.852$\pm$0.003 | 1.011$\pm$0.020 | 0.436$\pm$0.010 | – | – | – | –
LIA(A) | ✓ | ✓ | 0.855$\pm$0.002 | 0.972$\pm$0.012 | 0.417$\pm$0.006 | 0.674$\pm$0.036 | 0.626$\pm$0.026 | 0.436$\pm$0.024 | 0.601$\pm$0.027
CoNAL($\overline{\text{A}}$) | ✗ | ✗ | 0.889$\pm$0.002 | 0.322$\pm$0.005 | 0.163$\pm$0.003 | – | – | – | –
CoNAL(A) | ✓ | ✗ | 0.890$\pm$0.002 | 0.323$\pm$0.011 | 0.163$\pm$0.005 | – | – | – | –
MaDL($\overline{\text{A}}$) | ✗ | ✗ | 0.895$\pm$0.002 | 0.297$\pm$0.004 | 0.152$\pm$0.002 | – | – | – | –
MaDL(A) | ✓ | ✓ | 0.893$\pm$0.004 | 0.297$\pm$0.008 | 0.153$\pm$0.004 | 0.723$\pm$0.004 | 0.538$\pm$0.003 | 0.362$\pm$0.003 | 0.649$\pm$0.005
cifar10 (inductive)
UB | ✓ | ✓ | 0.931$\pm$0.002 | 0.527$\pm$0.022 | 0.122$\pm$0.003 | 0.686$\pm$0.006 | 0.646$\pm$0.101 | 0.409$\pm$0.016 | 0.613$\pm$0.006
LB | ✓ | ✓ | 0.781$\pm$0.003 | 1.054$\pm$0.035 | 0.447$\pm$0.016 | 0.583$\pm$0.009 | 0.684$\pm$0.004 | 0.490$\pm$0.004 | 0.521$\pm$0.003
LIA($\overline{\text{A}}$) | ✗ | ✗ | 0.798$\pm$0.008 | 1.072$\pm$0.014 | 0.455$\pm$0.006 | – | – | – | –
LIA(A) | ✓ | ✓ | 0.804$\pm$0.004 | 1.056$\pm$0.022 | 0.447$\pm$0.011 | 0.607$\pm$0.020 | 0.670$\pm$0.017 | 0.477$\pm$0.016 | 0.544$\pm$0.010
CoNAL($\overline{\text{A}}$) | ✗ | ✗ | 0.835$\pm$0.002 | 0.576$\pm$0.016 | 0.245$\pm$0.005 | – | – | – | –
CoNAL(A) | ✓ | ✗ | 0.834$\pm$0.006 | 0.574$\pm$0.017 | 0.248$\pm$0.007 | – | – | – | –
MaDL($\overline{\text{A}}$) | ✗ | ✗ | 0.811$\pm$0.008 | 0.626$\pm$0.036 | 0.277$\pm$0.014 | – | – | – | –
MaDL(A) | ✓ | ✓ | 0.837$\pm$0.003 | 0.557$\pm$0.028 | 0.242$\pm$0.006 | 0.698$\pm$0.003 | 0.567$\pm$0.015 | 0.383$\pm$0.004 | 0.617$\pm$0.004
svhn (inductive)
UB | ✓ | ✓ | 0.965$\pm$0.001 | 0.393$\pm$0.015 | 0.063$\pm$0.002 | 0.613$\pm$0.004 | 0.943$\pm$0.113 | 0.511$\pm$0.015 | 0.524$\pm$0.006
LB | ✓ | ✓ | 0.927$\pm$0.002 | 0.805$\pm$0.016 | 0.328$\pm$0.009 | 0.588$\pm$0.010 | 0.704$\pm$0.007 | 0.509$\pm$0.006 | 0.511$\pm$0.007
LIA($\overline{\text{A}}$) | ✗ | ✗ | 0.929$\pm$0.003 | 0.818$\pm$0.133 | 0.336$\pm$0.068 | – | – | – | –
LIA(A) | ✓ | ✓ | 0.932$\pm$0.001 | 0.754$\pm$0.152 | 0.303$\pm$0.079 | 0.603$\pm$0.013 | 0.671$\pm$0.024 | 0.478$\pm$0.022 | 0.513$\pm$0.008
CoNAL($\overline{\text{A}}$) | ✗ | ✗ | 0.941$\pm$0.001 | 0.258$\pm$0.009 | 0.090$\pm$0.003 | – | – | – | –
CoNAL(A) | ✓ | ✗ | 0.942$\pm$0.001 | 0.260$\pm$0.012 | 0.090$\pm$0.002 | – | – | – | –
MaDL($\overline{\text{A}}$) | ✗ | ✗ | 0.928$\pm$0.002 | 0.299$\pm$0.019 | 0.109$\pm$0.005 | – | – | – | –
MaDL(A) | ✓ | ✓ | 0.935$\pm$0.001 | 0.256$\pm$0.009 | 0.098$\pm$0.002 | 0.624$\pm$0.007 | 0.632$\pm$0.013 | 0.444$\pm$0.008 | 0.521$\pm$0.006
## 6 Conclusion
In this article, we made three main contributions. (1) We started with a
formalization of the objectives in multi-annotator supervised learning.
Focusing on AP estimation, we then presented six relevant properties (cf.
P1–P6 in Section 3) for categorizing related techniques in this research area.
(2) Considering these six properties, we proposed our framework MaDL. A
modular, probabilistic design and a weighted loss function modeling annotator
correlations characterize its novelties. (3) We experimentally investigated
the six properties via three RQs. The results confirmed MaDL’s robust and
often superior performance to related multi-annotator supervised learning
techniques. The findings of this article, with a focus on AP estimation,
provide a starting point for several aspects of future research, some examples
of which are given below.
Although the annotator embeddings already contain information about the
annotation patterns concerning instances and classes, MaDL is currently
limited to computing annotator correlations on a global level, i.e., annotator
weights are not an explicit function of instance-annotator pairs. For example,
an extension in this direction may be valuable to quantify correlations in
certain regions of the feature space. Leveraging AP estimates for additional
applications, e.g., selecting the best crowdworkers to obtain high-quality
annotations during a crowdsourcing campaign (Herde et al., 2023), is also of
great value. Another neglected aspect is the study of epistemic uncertainty
(Huseljic et al., 2021). For example, the visualizations for the two-
dimensional dataset in Fig. 6 show high certainty of the GT and AP models in
feature space regions with no observed instances. However, meaningful
epistemic uncertainty estimates are essential in many (safety-critical)
applications (Hüllermeier & Waegeman, 2021) and would improve the
characterization of annotators’ knowledge. During our experiments, we showed
the potential benefit of annotator features. We had no access to a dataset
with prior information from real-world annotators, so we needed a suitable
simulation for these features. Therefore, as also noted by Zhang et al.
(2023), future research may acquire such prior information via crowdsourcing
to verify its benefit. As the concentration of annotators may fluctuate or
annotators may learn during the annotation process, taking time-varying APs
into account is another potential avenue for future research (Donmez et al.,
2010). Furthermore, there are already crowdsourcing approaches (Chang et al.,
2017) and concepts (Calma et al., 2016) supporting collaboration between
annotators. Thus, developing techniques considering or recommending such
collaborations is of practical value (Fang et al., 2012).
Finally, we limited ourselves to empirical performance results and
classification tasks with class labels as annotations. Future investigations
into theoretical performance guarantees of MaDL and into learning with different
annotation types, such as class labels with confidence scores (Berthon et al.,
2021) or partial labels (Yu et al., 2022), are natural next steps. Furthermore, the
extension to related supervised learning tasks, such as semantic segmentation,
sequence classification, and regression, is of interest. The goal of semantic
segmentation is to classify individual pixels (Minaee et al., 2021). A
potential approach to extend MaDL would be to implement its GT model through a
U-Net (Ronneberger et al., 2015) and feed its latent representations as input
to the AP model for estimating pixel-wise confusion matrices per annotator.
Likewise, we may adapt MaDL to be applied to sequence classification tasks,
such as named entity recognition (Li et al., 2020). Concretely, we could
implement the GT model through a BiLSTM-network with softmax outputs (Reimers
& Gurevych, 2017) and feed its latent word representations as inputs to the AP
model for estimating word-wise confusion matrices per annotator. Since both
extensions involve higher computational costs than standard classification
tasks, one may alternatively investigate the estimation of a single (pixel- or
word-independent) confusion matrix per annotator. Regression tasks expect the
prediction of continuous target variables. Therefore, the probabilistic model
of MaDL has to be adapted. For example, the GT model could estimate the mean
and variance of an instance’s target variable, while the AP model learns
annotators’ biases and variances.
#### Broader Impact Statement
Big data is a driving force behind the success of machine learning (Zhou et
al., 2017). Reducing the effort and cost required for annotating this data is
essential for its ongoing development. In this context, MaDL is a possible tool
to leverage the workforce of cost-efficient but error-prone annotators. Yet,
as a central resource for data annotation, crowdsourcing can negatively impact
individuals or even entire communities. Some of these impacts include
exploiting vulnerable individuals who participate in low-wage crowdsourcing
tasks (Schlagwein et al., 2019), producing low-quality data (Daniel et al.,
2018), and outsourcing jobs (Howe, 2008). On the one hand, multi-annotator
supervised learning techniques can improve data quality and support awarding
well-performing crowdworkers. On the other hand, such a technique may
intensify the already existing competition between crowdworkers (Schlagwein et
al., 2019). It also requires tight monitoring to ensure fair assessments of
crowdworkers. Besides the benefits of annotator features containing prior
information about annotators, there are several risks. Collecting and leaking
potentially sensitive personal data about the annotators is such a significant
risk (Xia & McKernan, 2020). Thus, the annotator features must contain only
information relevant to the learning task. Further, a lack of control over
this or other processes can lead to discrimination and bias based on gender,
origin, and other factors (Goel & Faltings, 2019). For these reasons, it is
crucial to consider and address the potential risks via responsible policies
and practices when employing multi-annotator supervised learning techniques.
#### Acknowledgments
We thank Lukas Rauch for the insightful discussions and comments, which
greatly improved this article.
## References
* Albarqouni et al. (2016) Shadi Albarqouni, Christoph Baur, Felix Achilles, Vasileios Belagiannis, Stefanie Demirci, and Nassir Navab. Aggnet: Deep Learning from Crowds for Mitosis Detection in Breast Cancer Histology Images. _IEEE Trans. Med. Imaging_ , 35(5):1313–1321, 2016.
* Algan & Ulusoy (2021) Görkem Algan and Ilkay Ulusoy. Image classification with deep learning in the presence of noisy labels: A survey. _Knowl. Based Syst._ , 215:106771, 2021.
* Arik & Pfister (2021) Sercan Ö Arik and Tomas Pfister. TabNet: Attentive Interpretable Tabular Learning. In _AAAI Conf. Artif. Intell._ , pp. 6679–6687, 2021.
* Berthon et al. (2021) Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, and Masashi Sugiyama. Confidence Scores Make Instance-dependent Label-noise Learning Possible. In _Int. Conf. Machine Learn._ , pp. 825–836, Virtual Conf., 2021\.
* Brier (1950) Glenn W Brier. Verification of Forecasts Expressed in Terms of Probability. _Mon. Weather Rev._ , 78(1):1–3, 1950.
* Brodersen et al. (2010) Kay Henning Brodersen, Cheng Soon Ong, Klaas Enno Stephan, and Joachim M Buhmann. The Balanced Accuracy and Its Posterior Distribution. In _IEEE Int. Conf. Pattern Recognit._ , pp. 3121–3124, Istanbul, Turkey, 2010.
* Calma et al. (2016) Adrian Calma, Jan Marco Leimeister, Paul Lukowicz, Sarah Oeste-Reiß, Tobias Reitmaier, Albrecht Schmidt, Bernhard Sick, Gerd Stumme, and Katharina Anna Zweig. From Active Learning to Dedicated Collaborative Interactive Learning. In _Int. Conf. Archit. Comp. Syst._ , pp. 1–8, Nuremberg, Germany, 2016.
* Cao et al. (2019) Peng Cao, Yilun Xu, Yuqing Kong, and Yizhou Wang. Max-MIG: an Information Theoretic Approach for Joint Learning from Crowds. In _Int. Conf. Learn. Represent._ , New Orleans, LA, 2019.
* Chang et al. (2017) Joseph Chee Chang, Saleema Amershi, and Ece Kamar. Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets. In _CHI Conf. Hum. Factors Comp. Syst._ , pp. 2334–2346, Denver, CO, 2017.
* Chu et al. (2021) Zhendong Chu, Jing Ma, and Hongning Wang. Learning from Crowds by Modeling Common Confusions. In _AAAI Conf. Artif. Intell._ , pp. 5832–5840, Virtual Conf., 2021\.
* Daniel et al. (2018) Florian Daniel, Pavel Kucherbaev, Cinzia Cappiello, Boualem Benatallah, and Mohammad Allahbakhsh. Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions. _ACM Comput. Surv._ , 51(1):1–40, 2018.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. _arXiv:1810.04805_ , 2018.
* Donmez et al. (2010) Pinar Donmez, Jaime Carbonell, and Jeff Schneider. A Probabilistic Framework to Learn from Multiple Annotators with Time-Varying Accuracy. In _SIAM Int. Conf. Data Min._ , pp. 826–837, Columbus, OH, 2010\.
* Fang et al. (2012) Meng Fang, Xingquan Zhu, Bin Li, Wei Ding, and Xindong Wu. Self-Taught Active Learning from Crowds. In _IEEE Int. Conf. Data Min._ , pp. 858–863, Brussels, Belgium, 2012.
* Fiedler (2021) James Fiedler. Simple Modifications to Improve Tabular Neural Networks. _arXiv:2108.03214_ , 2021.
* Frey & Slate (1991) Peter W. Frey and David J. Slate. Letter recognition using Holland-style adaptive classifiers. _Machine Learn._ , 6(2):161–182, 1991.
* Gao et al. (2022) Zhengqi Gao, Fan-Keng Sun, Mingran Yang, Sucheng Ren, Zikai Xiong, Marc Engeler, Antonio Burazer, Linda Wildling, Luca Daniel, and Duane S. Boning. Learning from Multiple Annotator Noisy Labels via Sample-Wise Label Fusion. In _Eur. Conf. Comput. Vis._ , pp. 407–422, Tel Aviv, Israel, 2022\.
* Gil-Gonzalez et al. (2021) J. Gil-Gonzalez, A. Orozco-Gutierrez, and A. Alvarez-Meza. Learning from multiple inconsistent and dependent annotators to support classification tasks. _Neurocomputing_ , 423:236–247, 2021.
* Gil-González et al. (2021) Julián Gil-González, Andrés Valencia-Duque, Andrés Álvarez-Meza, Álvaro Orozco-Gutiérrez, and Andrea García-Moreno. Regularized Chained Deep Neural Network Classifier for Multiple Annotators. _Appl. Sci._ , 11(12):5409, 2021.
* Gilyazev & Turdakov (2018) Ruslan A. Gilyazev and Denis Y. Turdakov. Active Learning and Crowdsourcing: A Survey of Optimization Methods for Data Labeling. _Program. Comput. Softw._ , 44(6):476–491, 2018\.
* Glorot et al. (2011) Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep Sparse Rectifier Neural Networks. In _Int. Conf. Artif. Intell. Stat._ , pp. 315–323, Fort Lauderdale, FL, 2011.
* Goel & Faltings (2019) Naman Goel and Boi Faltings. Crowdsourcing with Fairness, Diversity and Budget Constraints. In _AAAI/ACM Conf. AI, Ethics, Soc._ , pp. 297–304, Honolulu, HI, 2019.
* Gu et al. (2022) Keren Gu, Xander Masotto, Vandana Bachani, Balaji Lakshminarayanan, Jack Nikodem, and Dong Yin. An instance-dependent simulation framework for learning with label noise. _Mach. Learn._ , 2022.
* Guan et al. (2018) Melody Y. Guan, Varun Gulshan, Andrew M. Dai, and Geoffrey E. Hinton. Who Said What: Modeling Individual Labelers Improves Classification. In _AAAI Conf. Artif. Intell._ , pp. 3109–3118, New Orleans, LA, 2018.
* Gupta et al. (2014) Maya R Gupta, Samy Bengio, and Jason Weston. Training Highly Multiclass Classifiers. _J. Mach. Learn. Res._ , 15(1):1461–1492, 2014\.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In _Conf. Comput. Vis. Pattern Recognit._ , pp. 770–778, Las Vegas, NV, 2016.
* He et al. (2018) Xiangnan He, Xiaoyu Du, Xiang Wang, Feng Tian, Jinhui Tang, and Tat-Seng Chua. Outer Product-based Neural Collaborative Filtering. In _Int. Joint Conf. Artif. Intell._ , pp. 2227–2233, Stockholm, Sweden, 2018.
* Herde et al. (2021) Marek Herde, Denis Huseljic, Bernhard Sick, and Adrian Calma. A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification. _IEEE Access_ , 9:166970–166989, 2021.
* Herde et al. (2023) Marek Herde, Denis Huseljic, Bernhard Sick, Ulrich Bretschneider, and Sarah Oeste-Reiß. Who knows best? A Case Study on Intelligent Crowdworker Selection via Deep Learning. In _Int. Workshop on Interact. Adapt. Learn. @ Eur. Conf. Mach. Learn._ , pp. 14–18, Turin, Italy, 2023.
* Hestness et al. (2017) Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep Learning Scaling is Predictable, Empirically. _arXiv:1712.00409_ , 2017.
* Hoiem et al. (2021) Derek Hoiem, Tanmay Gupta, Zhizhong Li, and Michal Shlapentokh-Rothman. Learning Curves for Analysis of Deep Networks. In _Int. Conf. Machine Learn._ , pp. 4287–4296, Virtual Conf., 2021\.
* Howe (2008) Jeff Howe. Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business, 2008.
* Hüllermeier & Waegeman (2021) Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. _Machine Learn._ , 110:457–506, 2021.
* Huseljic et al. (2021) Denis Huseljic, Bernhard Sick, Marek Herde, and Daniel Kottke. Separation of Aleatoric and Epistemic Uncertainty in Deterministic Deep Neural Networks. In _IEEE Int. Conf. Pattern Recognit._ , pp. 9172–9179, Virtual Conf., 2021.
* Kajino et al. (2012) Hiroshi Kajino, Yuta Tsuboi, and Hisashi Kashima. A Convex Formulation for Learning from Crowds. In _AAAI Conf. Artif. Intell._ , pp. 73–79, Toronto, ON, 2012.
* Khetan et al. (2018) Ashish Khetan, Zachary C. Lipton, and Animashree Anandkumar. Learning From Noisy Singly-labeled Data. In _Int. Conf. Learn. Represent._ , Vancouver, BC, 2018.
* Krizhevsky (2009) Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Master’s thesis, University of Toronto, 2009.
* Kruskal (1964) Joseph B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. _Psychometrika_ , 29(1):1–27, 1964.
* LeCun & Cortes (1998) Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits, 1998.
# Richardson–Lucy Deconvolution with a Spatially Variant Point-Spread Function
of Chandra: Supernova Remnant Cassiopeia A as an Example
Yusuke Sakai Department of Physics, Rikkyo University, Toshima-Ku, Tokyo,
171-8501, Japan Shinya Yamada Department of Physics, Rikkyo University,
Toshima-Ku, Tokyo, 171-8501, Japan Toshiki Sato Department of Physics, Rikkyo
University, Toshima-Ku, Tokyo, 171-8501, Japan Department of Physics, School
of Science and Technology, Meiji University, 1-1-1 Higashi Mita, Tama-ku,
Kawasaki, Kanagawa 214-8571, Japan Ryota Hayakawa Department of Physics,
Rikkyo University, Toshima-Ku, Tokyo, 171-8501, Japan International Center
for Quantum-field Measurement Systems for Studies of the Universe and
Particles (QUP), KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Ryota
Higurashi Department of Physics, Rikkyo University, Toshima-Ku, Tokyo,
171-8501, Japan Nao Kominato Department of Physics, Rikkyo University,
Toshima-Ku, Tokyo, 171-8501, Japan
###### Abstract
Richardson–Lucy (RL) deconvolution is one of the classical methods widely used
in X-ray astronomy and other areas. Amid recent progress in image processing,
RL deconvolution still leaves much room for improvement in realistic
situations. One direction is to include the positional dependence of the point-
spread function (PSF), so-called RL deconvolution with a spatially variant PSF
(RLsv). Another is a method for estimating a reliable number of iterations
and the associated uncertainties. We developed a practical method that
incorporates the RLsv algorithm and the estimation of uncertainties. As a
typical example of bright and high-resolution images, the Chandra X-ray image
of the supernova remnant Cassiopeia A was used in this paper. RLsv
deconvolution enables us to uncover the smeared features in the
forward/backward shocks and jet-like structures. We constructed a method to
predict the appropriate number of iterations by using the statistical
fluctuations of the observed images. Furthermore, the uncertainties were estimated by error
propagation from the last iteration, which was phenomenologically tested with
the observed data. Thus, our method is a practically efficient framework to
evaluate the time evolution of the remnants and their fine structures embedded
in high-resolution X-ray images.
Astronomy data analysis (1858), Astronomy image processing (2306), High
angular resolution (2167), X-ray astronomy (1810)
## 1 Introduction
Imaging analysis is critically important for studying diffuse celestial
sources. X-ray astronomy, starting with the first space application of X-ray
CCD in ASCA (Burke et al., 1994), has delivered detailed images of various
celestial objects; e.g., supernova remnants such as SN1006 (Bamba et al.,
2003) and Cassiopeia A (hereafter Cas A; Hwang et al., 2004), and galaxy
clusters such as A2142 (Markevitch et al., 2000). Since the X-ray mirrors used
in Chandra are the largest and most precisely built, exceeding the angular
resolution of Chandra is considered to be challenging. Therefore, enhancing
the technique of imaging analysis has been an essential direction to utilize
the highest spatial resolution and data accumulated over decades.
X-rays are collected primarily by total reflection from the surface of an
X-ray mirror; therefore, the response function for the distribution of focused
X-rays, called the point-spread function (PSF), is nearly energy independent.
On the condition that a PSF is independent of incoming photon energy and the
position of the focal plane, a reverse calculation of a convolution of PSF,
so-called image deconvolution (see the review on deconvolution in astronomy by
Starck et al. (2002)), is highly simplified. There are various deconvolution
methods proposed by assuming that a PSF is constant during the deconvolution
process, such as the deconvolution of Suzaku XIS (Sugizaki et al., 2009). One
of the latest examples is the image restoration algorithm Expectation via
Markov chain Monte Carlo (Esch et al., 2004). It is applied to the double
active galactic nuclei in NGC 6240 (e.g., Fabbiano et al., 2020; Paggi et al.,
2022), succeeding in finely resolving the two cores. Similarly, a classical
method, Richardson–Lucy (RL) deconvolution proposed by Richardson (1972) and
Lucy (1974), is often used (e.g., Grefenstette et al., 2015; Thimmappa et al.,
2020; Sobolenko et al., 2022).
The choice of method depends on the trade-off between accuracy and
computational cost. Relaxing the condition that a PSF is position-independent
and/or energy-independent increases the complexity of the calculation.
RL deconvolution is one of the simplified
methods but still has room for improvement in practical situations. In gamma-
ray astronomy, a PSF can change by one order of magnitude with energy and
incident angle; it is therefore calculated for each event, e.g., in the RL
algorithm optimized for Fermi-LAT and EGRET (Tajima et al., 2007). In contrast, as the number of
photons is much larger in X-ray astronomy, event-by-event reconstruction is
less practical; image-based reconstruction thus can be the first choice in
X-rays. However, there are few studies on extending the RL method, especially
on its application to diffuse sources observed by Chandra. We therefore
explored its applicability to the Chandra data and considered the associated
systematic errors.
In this paper, we implement an RL deconvolution algorithm with a spatially
variant PSF (RLsv), designed for Chandra images. Section 2
describes the principle of the RLsv method. One of the technical difficulties
is reducing computational cost in calculating PSFs. This is solved by
decimating the sampling interval of PSFs, while a side-effect is discussed in
Section 5.1. Section 3 presents an example of its application to a diffuse
source observed by Chandra. We apply the RLsv method to the supernova remnant
of Cas A as an example, because Cas A is bright and extended over the entire
field of view of the ACIS detector, which would be the best target for the
first application. The remnant is intensively studied because of its unique
structure and evolution, e.g., the velocities and thickness of shocked
filaments (Patnaude & Fesen, 2009; Sato et al., 2018; Tsuchioka et al., 2022),
where the method can contribute to advancing our understanding of the
phenomena. In Section 4, we propose a reliable number of stop iterations and
uncertainties of the method. We develop the method to estimate the number of
convergent iterations by generating fluctuations due to statistical errors
during iteration. Furthermore, the uncertainty on the RLsv-deconvolved image
is estimated by using the law of error propagation (e.g., Ku, 1966). As a
result, filaments and ambiguous structures of Cas A are deconvolved to be
sharper with some knowledge of the statistical uncertainties.
Table 1: Basic information on the Chandra Observations of Cas A Used in this Paper

Obs. ID | Obs. Start | Exp. Time | Detector | R.A. | Decl. | Roll
---|---|---|---|---|---|---
 | yyyy mmm dd | (ks) | | (deg) | (deg) | (deg)
4636 | 2004 Apr 20 | 143.48 | ACIS-S | 350.9129 | 58.8412 | 49.7698
4637 | 2004 Apr 22 | 163.50 | ACIS-S | 350.9131 | 58.8414 | 49.7665
4639 | 2004 Apr 25 | 79.05 | ACIS-S | 350.9132 | 58.8415 | 49.7666
5319 | 2004 Apr 18 | 42.26 | ACIS-S | 350.9127 | 58.8411 | 49.7698
5196 | 2004 Feb 8 | 49.53 | ACIS-S | 350.9129 | 58.7933 | 325.5035
## 2 Method
### 2.1 RL Deconvolution
The RL algorithm iteratively estimates a true image from an observed image
using Bayesian inference. It generally assumes that the PSF does not change
with a position in the image. The RL algorithm is expressed by
$W_{i}^{(r+1)}=W_{i}^{(r)}\sum_{k}\frac{P_{ik}H_{k}}{\sum_{j}P_{jk}W_{j}^{(r)}},$
(1)
where $i$ and $j$ index pixels of the image in the sky, and $k$ indexes pixels of the image on the detector. The indices of the summation run over all the
pixels. $W^{(r)}$ is the restored image after $r$ iterations, and $H$ is the
observed image on the ACIS detector. $P_{jk}$ is the probability that a photon
emitted in sky $W$ bin $j$ is measured in data space $H$ bin $k$, or
$P(H_{k}|W_{j})$.
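As a concrete illustration, the following Python sketch implements the update of Equation (1) for a spatially invariant PSF, using convolutions in place of the explicit sums; the function and variable names are ours, and the flat initial guess is an arbitrary choice.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_deconvolve(H, P, n_iter=30, eps=1e-12):
    """Basic RL deconvolution, Eq. (1), for a spatially invariant PSF.

    H : observed 2-D image; P : PSF normalized to unit integral."""
    W = np.full(H.shape, H.mean(), dtype=float)   # flat initial guess for the sky image
    P_flip = P[::-1, ::-1]                        # mirrored PSF for the back projection
    for _ in range(n_iter):
        blurred = fftconvolve(W, P, mode="same")          # sum_j P_jk W_j
        ratio = H / np.maximum(blurred, eps)              # H_k / (P*W)_k
        W = W * fftconvolve(ratio, P_flip, mode="same")   # sum_k P_ik (...)
    return W
```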
### 2.2 RL with a Spatially Variant PSF
Previous Chandra image deconvolution approaches (e.g., Thimmappa et al., 2020;
Sobolenko et al., 2022) used a simplified approximation for the $P_{jk}$
values, i.e., they used the same PSF for each $j$ bin. Here we assume that the
PSF changes as a function of the off-axis angle and the roll angle. As a
consequence, the Chandra RL algorithm is extended. The formula for RLsv is
obtained by rewriting Equation (1) as
$W_{i}^{(r+1)}=W_{i}^{(r)}\sum_{k}\frac{P_{iik}H_{k}}{\sum_{j}P_{jjk}W_{j}^{(r)}}.$
(2)
$P_{jjk}$ refers to a PSF at a position of $j$ (first index) which returns a
probability that an event emitted at $W_{j}$ (second index) is observed at
$H_{k}$ (third index), or $P_{j}(H_{k}|W_{j})$. Computational cost and memory
requirements need to be minimized for calculating the third-order tensor of
the PSF, which is a distinctive feature of the RLsv algorithm. When $H$ is
corrected for slight differences among pixels in effective area and exposures,
its normalization can be chosen arbitrarily. Here we use $H_{k}=N_{k}/A_{k}$,
where $N_{k}$ is the detector count image, and $A_{k}$ is the Hadamard product
of effective area and exposure time.
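To illustrate the structure of Equation (2) without storing the full third-order tensor, one can approximate the spatially variant PSF by a fixed PSF per image tile, as in the following Python sketch; the tiling scheme and all names here are our illustrative assumptions, not the exact implementation of this work.

```python
import numpy as np
from scipy.signal import fftconvolve

def rlsv_step(W, H, psf_of_tile, tile_masks, eps=1e-12):
    """One RLsv update (Eq. 2), approximating P_jjk by one PSF per tile.

    psf_of_tile : list of unit-sum PSF arrays, one per tile
    tile_masks  : list of boolean masks partitioning the image"""
    # forward projection: sum_j P_jjk W_j, assembled tile by tile
    blurred = np.zeros_like(W)
    for P, m in zip(psf_of_tile, tile_masks):
        blurred += fftconvolve(W * m, P, mode="same")
    ratio = H / np.maximum(blurred, eps)
    # back projection: sum_k P_iik ratio_k, using each pixel's own tile PSF
    back = np.zeros_like(W)
    for P, m in zip(psf_of_tile, tile_masks):
        back += m * fftconvolve(ratio, P[::-1, ::-1], mode="same")
    return W * back
```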
### 2.3 RLsv Deconvolution with Total Variation Regularization
There are regularization techniques that enhance the RL method and are readily applicable to the RLsv method as well. Among these, total variation (TV) regularization (Rudin et al., 1992) is effective in handling statistical errors and has been used with the RL method (Dey et al., 2006). The formula for RLsv with the regularization is obtained by rewriting Equation (2) as
RLsv with the regularization is obtained by rewriting Equation (2) as
$W_{i}^{(r+1)}=\frac{W_{i}^{(r)}}{1-\lambda_{\rm{TV}}\textrm{div}\left(\frac{\nabla
W_{i}^{(r)}}{|\nabla
W_{i}^{(r)}|}\right)}\sum_{k}\frac{P_{iik}H_{k}}{\sum_{j}P_{jjk}W_{j}^{(r)}}.$
(3)
The only difference from the RLsv algorithm of Equation (2) is the
regularization term of $1-\lambda_{\rm{TV}}\textrm{div}(\nabla
W_{i}^{(r)}/|\nabla W_{i}^{(r)}|)$, where $\lambda_{\rm{TV}}$ is the
regularization parameter, $\textrm{div}(\cdot)$ is the divergence, and $\nabla
W_{i}^{(r)}$ is the gradient of $W_{i}^{(r)}$. In this paper, we utilize the
parameter of $\lambda_{\rm{TV}}=0.002$, as proposed by Dey et al. (2006).
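The TV term entering Equation (3) can be computed with finite differences; a minimal sketch, assuming a simple np.gradient discretization and a small constant to avoid division by zero, is as follows.

```python
import numpy as np

def tv_divergence(W, eps=1e-8):
    """Compute div( grad W / |grad W| ), the TV term of Eq. (3)."""
    gy, gx = np.gradient(W)                 # finite-difference gradient
    norm = np.sqrt(gx**2 + gy**2) + eps     # avoid division by zero
    return np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)

# The regularized update multiplies the RL ratio term by
# 1 / (1 - 0.002 * tv_divergence(W)), with lambda_TV = 0.002 as in Dey et al. (2006).
```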
### 2.4 Comparison to other methods
Deconvolution methods require an understanding of their applicability to practical conditions, as well as a balance between computational cost and accuracy (for features of various methods, see Naik & Sahu (2013)). The RL method is
well studied and has been used to incorporate regularization (e.g., van Kempen
& van Vliet, 2000; Dey et al., 2006; Yuan et al., 2008; Yongpan et al., 2010)
and the recent trend of deep learning (e.g., Agarwal et al., 2020). For Chandra users, RL deconvolution with a single PSF is frequently used because
the method is already implemented as arestore in the Chandra Interactive
Analysis of Observations (CIAO; Fruscione et al., 2006), Chandra’s standard
data processing package.
Figure 1: Cas A image (Obs. ID=4636) and the two-dimensional probabilities of
the point-spread functions (PSFs). The integral of each PSF is normalized to
be 1. The PSF color scale is a fixed range. The location of the optical axis
is indicated with a green cross.
Compared to other methods, the RL method forces the deconvolved image of each
iteration to be non-negative, and its integral value is conserved.
Additionally, the method converges to the maximum likelihood solution for a
Poisson noise distribution (Shepp & Vardi, 1982), which is suitable for
Chandra images with noise from counting statistics. Depending on the
application, it is less prone to ringing artifacts than inverse PSF-based
methods (e.g., Sekko et al., 1999; Neelamani et al., 2004); see the results of
the comparison by Dalitz et al. (2015). According to White (1994), it is
robust against small errors in the PSF.
Figure 2: (a) X-ray image in the 0.5–7.0 keV band of Cas A obtained with
Chandra. (a-1, 2, 3) Enlarged images specified by the colored frames in (a).
(b) Same as (a), but for the RLsv-deconvolved results. The unit of flux in the
images is $\rm{photons~{}cm^{-2}~{}s^{-1}}$.
## 3 Application to Observed Data
### 3.1 Data selection
Because Cas A is a bright and diffuse X-ray source with a moderately large
apparent diameter, it is an ideal target to demonstrate the RLsv method. It
has been observed by Chandra almost every year since 1999. The Chandra data of
Cas A used in this paper are listed in Table 1: the ACIS-S observations of 2004 with
Obs. ID=4636, 4637, 4639, and 5319. The image size is $1489\times 1488$
pixels, or $743^{\prime\prime}\times 742^{\prime\prime}$ given a unit pixel of
$0.^{\prime\prime}492$. Data processing and analysis were performed using CIAO
version 4.13. The data were reprocessed from the level 1 event files by
chandra_repro. Since the roll angle and optical axis of the four observations
are almost the same (the maximum difference of the optical axis location is
about 4 unit pixels), all the events were merged into one by merge_obs. The
total exposure time was 428.29 ks.
### 3.2 Generating the PSF of Chandra
The Chandra telescope system consists of four pairs of nested reflecting
surfaces, configured in the Wolter type I geometry. The high energy response
is achieved by coating the mirrors with iridium. Chandra has attained the highest angular resolution, $0.^{\prime\prime}492$, among existing X-ray telescopes. Its mirrors have been extensively calibrated on the ground and in orbit (Jerius et al., 2000). The Chandra PSF is position-dependent, mainly
due to aberrations. The RLsv method includes the position dependence of the
PSF, which is useful for highly extended X-ray sources.
Because creating a PSF for each position is computationally expensive, the PSF sampling is decimated at fixed intervals. For reference, creating a single PSF takes several seconds, depending on the computational environment and desired accuracy. The sampling interval of PSFs was chosen to be $35\times 35$ pixels (a total of $43\times 43=1849$ PSFs). The interval was determined empirically by trying several different ranges. In general, the PSFs simulated for each observation should be merged for a precise calculation. Here, the PSF of Obs. ID 4636 was used as representative, since the PSF sampling is decimated in any case. The PSFs
at the lattice points were generated by CIAO’s simulate_psf using the Model of
AXAF Response to X-rays (Wise et al., 1997; Davis et al., 2012) at a
monochromatic energy of 2.3 keV. They were applied to the observed image with
energies from 0.5 to 7.0 keV.
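In the deconvolution, each sky pixel is then assigned the PSF of its nearest lattice point; a minimal lookup sketch is shown below, where the anchoring of the lattice at pixel (0, 0) and the use of rounding rather than truncation are our assumptions.

```python
import numpy as np

def psf_lattice_index(y, x, interval=35, n_grid=43):
    """Map a sky pixel (y, x) to the nearest PSF lattice point on the
    43x43 grid of PSFs sampled every 35x35 pixels (Section 3.2)."""
    iy = int(np.clip(round(y / interval), 0, n_grid - 1))
    ix = int(np.clip(round(x / interval), 0, n_grid - 1))
    return iy, ix
```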
Figure 1 shows all the PSFs sampled every $35\times 35$ pixels. The optical
axis is located at the northeast in the image, where the spread of the PSF is
minimum. As the position is away from the optical axis, the tail of the PSF
increases with its gradual shift of the elliptical axis. Although it is a
trade-off with photon statistics, it is effective to run the RLsv method with
the optimal monoenergetic PSF for each of the multiple energy decompositions
(see Figure 6). This is because, at shorter wavelengths, the effect of diffuse
reflection due to the roughness of the mirror surface is not negligible
(Jerius et al., 2000).
### 3.3 Results of the RLsv method
Figure 2(a) is an observed $\sim$400 ks image using the energy range from 0.5
to 7.0 keV, as explained in Section 3.1. We applied the RLsv method to the image with a PSF sampling interval of $35\times 35$ pixels. The number of iterations is 200; the choice of the iteration number is discussed in Section 4.1. The result of the RLsv method is presented in Figure 2(b). The unit of flux and its range in Figure 2(b) are the same as in Figure 2(a). The overall structures in the RLsv image become more vivid than the original ones. The off-axis regions are improved significantly more than those around the optical axis.
To show the differences more clearly, we present magnified images of the original image in Figures 2(a-1, 2, 3) and the RLsv-image in Figures 2(b-1, 2,
3). The three regions represent a sharp filament in the northeast, complicated
filaments in the north, and a slightly diffuse area in the south. The
filamentary structures in the northeast and north become sharper in the RLsv-
image. We will quantify the filament width in detail and discuss the
systematic uncertainties associated with the method in Section 4.2.
## 4 Uncertainty Estimation
### 4.1 Assessment of the reasonable number of iterations
We considered a way of assessing an appropriate number of iterations, which is one of the issues with the RL method: the method excessively amplifies noise as the number of iterations increases. We propose a method to assess convergence by injecting statistical errors during the iteration. The formula of the method is written as
$W_{i}^{(r+1)}=W_{i}^{(r)}\sum_{k}\frac{P_{iik}G(N_{k})/A_{k}}{\sum_{j}P_{jjk}W_{j}^{(r)}}.$
(4)
The only difference from the RLsv algorithm, Equation (2), is the
$G(N_{k})/A_{k}$ term. $N$ ($\rm{counts}$) is the map of detector counts. $A$
($\rm{photons~{}cm^{-2}~{}s^{-1}}$) is the Hadamard product of effective area
and exposure time. $G(N_{k})$ is a random number generator following a Poisson
distribution with a count in the $k$th pixel of $N_{k}$; i.e.,
$G(N_{k})/A_{k}$ is a flux in units of $\rm{photons~{}cm^{-2}~{}s^{-1}}$. The
reason for normalizing $G(N)$ by dividing it by $A$ is to account for the
slight variations in effective area and exposure time among pixels.
The performance of the RLsv algorithm using Equation (4) is compared to that
using Equation (2). The convergence is evaluated using the mean squared error (MSE) between the images of two consecutive iterations. Figure 3 shows the history of the MSE during the iteration. The curve obtained by the RLsv algorithm with Equation (4) saturates at a certain level, while the other continues to decrease. The saturation is caused by the injection of Poisson fluctuations at each step and is considered an indicator of when to stop. In Figure 3, an iteration number of $\sim$30 seems appropriate.
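The following Python sketch illustrates this convergence assessment: Poisson fluctuations $G(N_{k})/A_{k}$ are injected at every update as in Equation (4), and the MSE between consecutive iterates is recorded until it saturates. The callable step and all names are our assumptions; step should perform one RLsv update as in Equation (2).

```python
import numpy as np

rng = np.random.default_rng()

def mse_history(N, A, step, n_max=200):
    """Track the MSE between consecutive iterates with Poisson fluctuations
    injected at each step (Eq. 4); its saturation marks a stopping point.

    N : detector count image; A : effective-area x exposure image;
    step(W, H) : one RLsv update of Eq. (2) applied to flux image H."""
    W = np.full(N.shape, (N / A).mean(), dtype=float)
    history = []
    for _ in range(n_max):
        H_fluct = rng.poisson(N) / A            # G(N_k) / A_k
        W_next = step(W, H_fluct)
        history.append(np.mean((W_next - W) ** 2))
        W = W_next
    return np.asarray(history)
```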
Figure 3: Residuals of the two images before and after iterations for the
entire region of Cas A vs. the number of iterations. The results of RLsv
method with and without statistical errors are plotted as a blue and an orange
line, respectively.
### 4.2 Assessment of image blurredness
We then designed a simplified method for evaluating a certain level of confidence. The RLsv-deconvolved image should carry an amount of fluctuation similar to that of the observed image. The principle of the method is to propagate the errors of the converged RLsv image into the next step. Our choice of using the last step for the error propagation is simply to ease the task. Here, each error in the observed image is considered statistically independent. Assuming uncertainties only on $H_{k}$ and using the law of error propagation, the image uncertainty can be expressed as
$\begin{split}\sigma_{W^{\prime}_{i}}&=\sqrt{\sum_{k}\left[\frac{\partial}{\partial H_{k}}\left(W_{i}\sum_{k^{\prime}}\frac{P_{iik^{\prime}}H_{k^{\prime}}}{\sum_{j}P_{jjk^{\prime}}W_{j}}\right)\sigma_{H_{k}}\right]^{2}}\\
&=W_{i}\sqrt{\sum_{k}\left(\frac{P_{iik}}{\sum_{j}P_{jjk}W_{j}}\frac{\sqrt{N_{k}}}{A_{k}}\right)^{2}},\end{split}$
(5)
where $W^{\prime}$ is the image at the iteration following any estimated true image $W$, and $\sqrt{N_{k}}$ is the statistical error of $N_{k}$.
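Under the same per-tile approximation as in the earlier sketches, Equation (5) reduces to a back projection of $N_{k}/(A_{k}\sum_{j}P_{jjk}W_{j})^{2}$ with the squared PSF; the following hypothetical Python function illustrates this.

```python
import numpy as np
from scipy.signal import fftconvolve

def rlsv_sigma(W, N, A, psf_of_tile, tile_masks, eps=1e-12):
    """Propagated statistical error of Eq. (5), sigma_{H_k} = sqrt(N_k)/A_k,
    with the spatially variant PSF approximated by one PSF per tile."""
    blurred = np.zeros_like(W)
    for P, m in zip(psf_of_tile, tile_masks):
        blurred += fftconvolve(W * m, P, mode="same")     # sum_j P_jjk W_j
    t = N / (A * np.maximum(blurred, eps)) ** 2           # N_k / (A_k denom_k)^2
    var = np.zeros_like(W)
    for P, m in zip(psf_of_tile, tile_masks):
        var += m * fftconvolve(t, (P**2)[::-1, ::-1], mode="same")  # sum_k P_iik^2 t_k
    return W * np.sqrt(var)
```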
Figure 4: Images and radial profiles of the southeastern filament of Cas A.
(a) Off-axis image of Obs. ID=4636, 4637, 4639, and 5319. (b) Result of RLsv-
image of (a). (c) On-axis image of Obs. ID=5196. (d) Results of the radial
profile in (a), (b), and (c), radially projected from the central compact
object (CCO) using the fan-shaped regions in green. The horizontal axis
represents the distance from the CCO.
We compared the off-axis RL image with the error of Equation (5) to an on-axis
observation in Figure 4. Figures 4(a) and (b) are the off-axis southeastern
images from Figures 2(a) and (b), respectively. Figure 4(c) shows a
southeastern on-axis image of Obs. ID=5196. The exposure time for the on-axis
observation is $\sim$50 ks, resulting in a larger statistical error compared
to the off-axis observation of $\sim$400 ks. The fan-shaped regions in Figures 4(a)–(c), along the filament, are chosen to create the radial profiles, i.e., the one-dimensional profiles of the photons in each region extending from the central compact object (CCO) of Cas A toward the outer regions. These radial profiles were created using dmextract in CIAO. In Figure 4(d), the framed regions from Figures 4(a)–(c) are color-coded as blue, orange, and black, respectively. The error bars of the radial profiles in Figure 4(d) correspond to statistical errors, shown in blue and black, while the result obtained by applying Equation (5) to the 199th iteration of the RLsv image is indicated in orange. From Figure 4(d), the profile of the off-axis
RLsv image agreed with that of the on-axis image within the statistical
errors. This method gives a guideline for a certain level of confidence
associated with the RLsv method.
## 5 Discussion
### 5.1 Enhancement Technique and Possibilities
In this section, further enhancements to the RLsv method are discussed. The first is to reduce the loss caused by down-sampling the PSFs. The positional dependence of the PSF does not contain high-frequency components, so the decimation of PSF sampling should work to some extent. For a small image, such as a core plus jet structure in an active galactic nucleus, keeping a high sampling rate of the PSFs might be possible. However, for a largely extended source such as a supernova remnant or a galaxy cluster, minimizing the sampling rate is critically important for practical use. The stronger the decimation, the more pronounced the segment boundaries become. Taking Cas A as an example, the edges of specific segments clearly appear when the sampling interval is $35\times 35$ pixels. To smooth out the edges, we propose that the PSFs at the boundaries be randomly selected from nearby PSFs (see more details in the Appendix).
Figure 5: Comparison of the results of the RLsv method without and with total
variation regularization, shown in (a) and (b) respectively.
Second, this method can be developed by incorporating several regularization
methods. We implemented an RLsv method incorporating the TV regularization
expressed in Equation (3). Finally, the RLsv method is naturally applied to
color images. By decomposing observed images into several colors (or energy
bands) and generating PSFs for an appropriate energy in each band, an energy-
dependent RLsv method can be realized.
Figure 6: (a) X-ray RGB (red: 0.2–1.2 keV, green: 1.2–2.0 keV and blue:
2.0–7.0 keV) band images of Cas A obtained with Chandra. (a-1, -2): Enlarged
images specified by the colored frames in (a). (b) Same as (a) except for
RLsv-deconvolved in each energy band. The unit of flux in the images is
$\rm{photons~{}cm^{-2}~{}s^{-1}}$.
We compare the RLsv method with and without the TV regularization. We use the
same image as in Section 3.1. The PSF sampling is $35\times 35$ pixels and the
number of iterations is 200. The PSFs’ boundaries are randomly selected
following the Appendix. Figure 5 presents the enlarged eastern image after
applying the RLsv method to the entire region. Figures 5(a) and (b) show the
results of the RLsv method of Equation (2) and its regularized version of Equation (3), respectively. The TV regularization preserves the sharp structures while smoothing out statistical errors. In this way, regularization can be added to the RLsv method.
We implement the RLsv method including these enhancements and apply it to the Cas A observational data described in Section 3.1. Figure 6(a) is the observed image of Section 3.3 divided into three energy bands in RGB: 0.2–1.2 keV (red), 1.2–2.0 keV (green), and 2.0–7.0 keV (blue). Cas A is dominated by thermal radiation below $\sim$4 keV and by nonthermal radiation above $\sim$4 keV. We applied the RLsv method with the TV regularization of Equation (3) to each energy image of Figure 6(a), using the appropriate PSF energy for each band (0.92 keV for red, 1.56 keV for green, and 3.8 keV for blue) based on the official CIAO page.111https://cxc.cfa.harvard.edu/ciao/why/monochromatic_energy.html The sampling interval of PSFs is $35\times 35$ pixels, and the PSF is randomly selected at the sampling boundaries according to the Appendix. The number of iterations is 30, following Section 4.1. The result of the RLsv method is presented in Figure 6(b). The energy dependence in Figures 6(b-1) and (b-2) is clearly visible with this method.
### 5.2 Constraint on the Uncertainty
We propose two complementary ways to obtain a guideline on the stopping condition of the RLsv method. One is to locate the minimum of the residuals by injecting
statistical uncertainties into each update during the iteration process. This
gives a rough estimate of the limit of the iterations. The other is to include
the statistical uncertainties in the last step of the iteration. By combining
the two methods, it is possible to derive uncertainties using the errors
obtained by the latter method at the optimal iteration number estimated by the
former. This is a quick and convenient way to derive the systematic
uncertainties associated with the RLsv method.
The systematic uncertainty in the former method lies in how the optimal number of iterations is defined. One easy way is to use the saturation level of the residuals, as shown in Section 4.1. It is intrinsically difficult to distinguish the signal of celestial objects from statistical noise. This difficulty can be addressed by comparing the deconvolved images without errors to the error-estimated images around the optimal iteration number recommended by the former method. Our approach is a compromise between computational cost and simplicity of use, while keeping the statistical error reasonable.
## 6 Conclusion
We have improved the processing capability of RL deconvolution by
incorporating the positional dependency of the Chandra PSF. The RLsv method is
applied to the entire region of Cas A with an estimation of its limit and
errors, which are based on the phenomenological method for evaluating a
reasonable number of iterations and the uncertainties. The results show that the features of shock waves and jets become sharper than those in the original image, with some quantitative knowledge of the associated errors. The RLsv-deconvolved profile of the off-axis image at the southeastern filament became sharper and agreed with that of the on-axis observation within the statistical
errors. This method is useful for a detailed diagnosis of other extended X-ray
sources obtained by Chandra.
The code used in this paper is available at doi:10.5281/zenodo.8020557.
We would like to thank the anonymous referee for helpful comments and feedback
on this paper. This research has made use of data obtained from the Chandra
Data Archive and the Chandra Source Catalog, and software provided by the
Chandra X-ray Center (CXC) in the application packages CIAO. This work was
supported by JSPS KAKENHI grant Nos. 20K20527, 22H01272, and 20H01941.
## Appendix
Boundaries of the PSF
Figure 7: Illustration of the $9\times 9$ pixels around the intersection of
the PSF switchover, overlaid with the probability weights on selecting PSFs.
The quadrants are named A, B, C, and D for illustrative purposes. The weights
of the probabilities are either 1/3 or 2/3 at the two boundaries, while they
are 1/9, 2/9, or 4/9 at the corners.
Decimating the number of sampled PSFs is an effective approach to minimizing computational cost. However, this technique can introduce side effects at the
boundary of segments when switching between PSFs. The variation in shape
between neighboring PSFs, caused by the sampling interval, leads to
discontinuities in the deconvolution process. To mitigate this issue, a simple
countermeasure is to randomly select adjacent PSFs at their boundaries, which
helps to smooth out the discontinuities. The weights of the probabilities for
selecting PSFs are illustrated in Figure 7. The presence and severity of
artifacts depend on factors such as the dissimilarity in shape between
neighboring PSFs, statistical characteristics of the observed images, and
other relevant factors. Therefore, the presented technique serves as an example, and the range of pixels to be randomized can be optimized for each problem. A minimal sketch of the weighted random selection is given below.
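As a minimal sketch, one may keep a pixel's own segment PSF with weight 2/3 and take the neighbor's with weight 1/3 independently in each direction, which reproduces the corner weights 4/9, 2/9, and 1/9 of Figure 7; the width of the blend zone and all names here are our assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def pick_psf_segment(seg_y, seg_x, near_edge_y, near_edge_x):
    """Randomized PSF selection near segment boundaries (cf. Figure 7).

    Near a boundary, the pixel keeps its own segment's PSF with weight 2/3
    and takes the neighbor's with weight 1/3; applying this independently
    in y and x yields the corner weights 4/9, 2/9, 2/9, and 1/9."""
    if near_edge_y and rng.random() < 1/3:
        seg_y += 1
    if near_edge_x and rng.random() < 1/3:
        seg_x += 1
    return seg_y, seg_x
```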
Figure 8: Comparison of the RLsv method without and with PSF randomization at
the boundaries. (a) PSF images corresponding to (b) and (c). (b) RLsv-image
without correcting the PSF boundaries. (b-1) Enlarged images specified by the
colored frames in (b). The arrows correspond to the boundaries of the PSFs.
(c) Same as (b), but using the randomization of the PSFs.
The result of applying the selection rule in Figure 7 is shown in Figure 8. We
compared the RLsv method for the observed data in the eastern region in
Section 3.1 with a PSF sampling of $35\times 35$ pixels and 200 iterations.
Figure 8(a) shows the PSF images corresponding to Figures 8(b) and (c), where
the white lines are used to clarify the border lines. Figures 8(b) and (c) are the RLsv-deconvolved images without and with the randomization of PSFs, respectively. To illustrate the differences, we present magnified images of
Figures 8(b) and (c) as Figures 8(b-1) and (c-1), respectively. It appears
that the discontinuity at the boundaries of the PSFs, indicated by the green
arrows, is smeared out to some extent.
## References
* Agarwal et al. (2020) Agarwal, C., Khobahi, S., Bose, A., Soltanalian, M., & Schonfeld, D. 2020, in 2020 IEEE International Conference on Image Processing (ICIP), 3299–3303, doi: 10.1109/ICIP40778.2020.9190825
* Bamba et al. (2003) Bamba, A., Yamazaki, R., Ueno, M., & Koyama, K. 2003, The Astrophysical Journal, 589, 827
* Burke et al. (1994) Burke, B., Mountain, R., Daniels, P., Cooper, M., & Dolat, V. 1994, IEEE Transactions on Nuclear Science, 41, 375
* Dalitz et al. (2015) Dalitz, C., Pohle-Frohlich, R., & Michalk, T. 2015, IEEE transactions on ultrasonics, ferroelectrics, and frequency control, 62, 531
* Davis et al. (2012) Davis, J. E., Bautz, M. W., Dewey, D., et al. 2012, in Space Telescopes and Instrumentation 2012: Ultraviolet to Gamma Ray, Vol. 8443, SPIE, 375–386
* Dey et al. (2006) Dey, N., Blanc-Feraud, L., Zimmer, C., et al. 2006, Microscopy research and technique, 69, 260
* Esch et al. (2004) Esch, D. N., Connors, A., Karovska, M., & van Dyk, D. A. 2004, The Astrophysical Journal, 610, 1213
* Fabbiano et al. (2020) Fabbiano, G., Paggi, A., Karovska, M., et al. 2020, The Astrophysical Journal, 902, 49
* Fruscione et al. (2006) Fruscione, A., McDowell, J. C., Allen, G. E., et al. 2006, in Observatory Operations: Strategies, Processes, and Systems, Vol. 6270, SPIE, 586–597
* Grefenstette et al. (2015) Grefenstette, B. W., Reynolds, S. P., Harrison, F. A., et al. 2015, The Astrophysical Journal, 802, 15
* Hwang et al. (2004) Hwang, U., Laming, J. M., Badenes, C., et al. 2004, The Astrophysical Journal, 615, L117
* Jerius et al. (2000) Jerius, D., Donnelly, R. H., Tibbetts, M., et al. 2000, in X-Ray Optics, Instruments, and Missions III, Vol. 4012, SPIE, 17–27
* Ku (1966) Ku, H. H. 1966, Journal of Research of the National Bureau of Standards, 70, 263
* Lucy (1974) Lucy, L. B. 1974, The astronomical journal, 79, 745
* Markevitch et al. (2000) Markevitch, M., Ponman, T., Nulsen, P., et al. 2000, The Astrophysical Journal, 541, 542
* Naik & Sahu (2013) Naik, R. K., & Sahu, P. 2013, in 2013 International Conference on Microwave and Photonics (ICMAP), IEEE, 1–6
* Neelamani et al. (2004) Neelamani, R., Choi, H., & Baraniuk, R. 2004, IEEE Transactions on signal processing, 52, 418
* Paggi et al. (2022) Paggi, A., Fabbiano, G., Nardini, E., et al. 2022, The Astrophysical Journal, 927, 166
* Patnaude & Fesen (2009) Patnaude, D. J., & Fesen, R. A. 2009, The Astrophysical Journal, 697, 535
* Richardson (1972) Richardson, W. H. 1972, JoSA, 62, 55
* Rudin et al. (1992) Rudin, L. I., Osher, S., & Fatemi, E. 1992, Physica D: nonlinear phenomena, 60, 259
* Sato et al. (2018) Sato, T., Katsuda, S., Morii, M., et al. 2018, The Astrophysical Journal, 853, 46
* Sekko et al. (1999) Sekko, E., Thomas, G., & Boukrouche, A. 1999, Signal processing, 72, 23
* Shepp & Vardi (1982) Shepp, L. A., & Vardi, Y. 1982, IEEE transactions on medical imaging, 1, 113
* Sobolenko et al. (2022) Sobolenko, M., Kompaniiets, O., Berczik, P., et al. 2022, Monthly Notices of the Royal Astronomical Society, 517, 1791
* Starck et al. (2002) Starck, J.-L., Pantin, E., & Murtagh, F. 2002, Publications of the Astronomical Society of the Pacific, 114, 1051
* Sugizaki et al. (2009) Sugizaki, M., Kamae, T., & Maeda, Y. 2009, Publications of the Astronomical Society of Japan, 61, S55
* Tajima et al. (2007) Tajima, H., Finazzi, S., Cohen-Tanugi, J., Chiang, J., & Kamae, T. 2007in , American Institute of Physics, 187–189
* Thimmappa et al. (2020) Thimmappa, R., Marchenko, V., Balasubramaniam, K., et al. 2020, The Astrophysical Journal, 903, 109
* Tsuchioka et al. (2022) Tsuchioka, T., Sato, T., Yamada, S., & Uchiyama, Y. 2022, The Astrophysical Journal, 932, 93
* van Kempen & van Vliet (2000) van Kempen, G. M., & van Vliet, L. J. 2000, JOSA A, 17, 425
* White (1994) White, R. L. 1994, in Instrumentation in Astronomy VIII, Vol. 2198, SPIE, 1342–1348
* Wise et al. (1997) Wise, M. W., Huenemoerder, D. P., & Davis, J. E. 1997, in Astronomical Data Analysis Software and Systems VI, Vol. 125, 477
* Yongpan et al. (2010) Yongpan, W., Huajun, F., Zhihai, X., Qi, L., & Chaoyue, D. 2010, Optics & Laser Technology, 42, 845
* Yuan et al. (2008) Yuan, L., Sun, J., Quan, L., & Shum, H.-Y. 2008, Acm Transactions on Graphics (TOG), 27, 1
# NLO corrections to $J/\psi+c+\bar{c}$ photoproduction
Qi-Ming Feng1 and Cong-Feng Qiao1,2 1 School of Physics,
University of Chinese Academy of Sciences, Beijing 100049, China
2 Key Laboratory of Vacuum Physics of CAS, Beijing 100049, China
###### Abstract
Based on the factorization framework of nonrelativistic quantum
chromodynamics, we study the associated $J/\psi+c+\bar{c}$ photoproduction
process at next-to-leading order in $\alpha_{s}$ and leading order in the
velocity expansion. The total cross section and differential cross section in
$p_{T}^{2}$, $W$ and $z$ are presented. The results indicate that the next-to-
leading order corrections are substantial and experimentally testable.
## I Introduction
The study of heavy quarkonium systems presents an exceptional opportunity to explore the nuances of the phenomena that quantum chromodynamics (QCD) involves. In particular, charmonium provides a unique playground for the study of flavor physics and QCD in the charm sector, for which a huge amount of experimental data has been accumulated. The moderate charm energy scale renders the perturbative QCD (pQCD) calculation reliable to some extent, while posing a challenge to higher-order pQCD corrections.
Nonrelativistic QCD (NRQCD) Bodwin:1994jh provides a consistent theoretical
framework for the study of quarkonium production and decays. In NRQCD, the
quarkonium production and decay processes are factorized into two sectors: the
perturbative generation and decay of heavy quark pairs, the dominant
quarkonium quark components, and the nonperturbative hadronization or
dehadronization of these heavy quarks. The perturbative contributions are
represented by the matching coefficients to pQCD calculation, namely the
short-distance coefficients (SDCs), while the nonperturbative hadronization is
described by the matrix elements of process-independent effective operators,
known as the long-distance matrix elements (LDMEs).
Nevertheless, some unsolved problems remain in the application and understanding of NRQCD. The NRQCD factorization formalism
suggests that besides the leading Fock state contribution usually
corresponding to the color singlet mechanism (CSM), higher Fock state
contributions, such as the color octet mechanism (COM), emerge in the
expansion of heavy quark relative velocity $v$ ($v\ll 1$). The proposal of COM
effectively reduces the discrepancies in $J/\psi$ production between next-to-
leading order (NLO) CSM predictions and experimental results across
$e^{+}e^{-}$ collisions at B factories, photoproduction at DESY HERA, and
hadroproduction at Fermilab Tevatron and CERN LHC Chang:2009uj ;
Artoisenet:2009xh ; Campbell:2007ws ; Gong:2008sn ; Lansberg:2010vq ;
Kramer:1994zi ; Kramer:1995nb ; Butenschoen:2009zy ; Butenschoen:2011ks ;
Butenschoen:2012px ; Chao:2012iv ; Ma:2010jj ; Ma:2010yw ; Zhang:2009ym ;
Gong:2012ug . However, the COM introduces considerable uncertainties. In Ref.
Bodwin:2012ft , comparisons between two LDMEs fitted through different
procedures in various collision processes show somewhat incompatible results.
Since different processes rely on distinct sets of LDME data, the process-
independence of COM LDMEs is challenged. Some new methods for fitting COM
LDMEs have been proposed later, but discussing them in detail is beyond the
scope of this text and will be skipped here.
Experimental and theoretical inquiries into inclusive heavy quarkonium
production have spanned several decades (see Brambilla:2010cs ;
Lansberg:2019adr ; QuarkoniumWorkingGroup:2004kpm for reviews). Recent
studies of $J/\psi$ photoproduction in electron-proton ($ep$) collisions
indicate that CS contributions, such as intrinsic charm Flore:2020jau and
higher-order processes like $J/\psi+c+\bar{c}$ Li:2019nlr , are non-negligible.
Inspired by the fact that production processes akin to NLO QCD corrections
exhibit notable contributions Campbell:2007ws ; Chen:2016hju ; Yang:2022yxb ,
we posit that the NLO contributions to the aforementioned 3-body final-state photoproduction process $\gamma+g\to J/\psi+c+\bar{c}$ at $ep$ colliders remain significant. Since the $J/\psi+c+\bar{c}$ final state is
experimentally detectable, theoretical analysis of HERA data holds
significance and provides insights for future $ep$ colliders like EIC, EicC,
and LHeC (FCC-eh).
In this work, based on the framework of NRQCD, we systematically compute the
NLO corrections to the photoproduction process $\gamma+g\to J/\psi+c+\bar{c}$
at leading $v$ expansion in $ep$ collisions. The structure of this paper is
organized as follows. In Section II, we detail the formalism and calculation
of the concerned process. In Section III, the results of numerical evaluation
are presented. The last section is reserved for the summary and conclusions.
## II Formalism and Calculation
Within the framework of NRQCD, the cross section for the photoproduction
process at leading $v$ in $ep$ collision can be formulated as:
$\displaystyle d\sigma(ep\to J/\psi+c+\bar{c})=\int$ $\displaystyle dxd\eta
f_{\gamma/e}(x,Q^{2}_{\max})f_{g/p}(\eta,\mu^{2})$ $\displaystyle\\!\\!\times
d\sigma(\gamma+g\to
c\bar{c}[^{3}\\!S_{1}]+c+\bar{c})\langle\mathcal{O}^{J/\psi}(^{3}\\!S_{1})\rangle\
.$ (1)
Here, $\langle\mathcal{O}^{J/\psi}(^{3}\\!S_{1})\rangle$ is the LDME of a
$c\bar{c}[^{3}\\!S_{1}]$ pair hadronizing into a $J/\psi$ meson.
$f_{g/p}(\eta,\mu^{2})$ is the parton distribution function (PDF) of the
incident gluon, and $\mu$ is the corresponding factorization scale.
$f_{\gamma/e}(x,Q^{2}_{\max})$ is the Weizsacker-Williams approximation (WWA)
function of the photon distribution, defined as:
$\displaystyle
f_{\gamma/e}(x,Q^{2}_{\max})=\frac{\alpha}{2\pi}\left[\frac{1+{(1-x)}^{2}}{x}\ln\frac{Q^{2}_{\max}}{Q^{2}_{\min}(x)}+2m_{e}^{2}x\left(\frac{1}{Q^{2}_{\max}}-\frac{1}{Q^{2}_{\min}(x)}\right)\right]\
,$ (2)
where $m_{e}$ is the electron mass. $Q^{2}_{\min}=m_{e}^{2}x^{2}/(1-x)$ and $Q^{2}_{\max}$ represent the minimum and maximum virtualities of the incident photon, respectively.
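For reference, Equation (2) is straightforward to evaluate numerically; a minimal Python sketch (with the fine-structure constant and electron mass hard-coded, in GeV units) reads:

```python
import math

ALPHA = 1 / 137.035999   # fine-structure constant
M_E = 0.000511           # electron mass in GeV

def f_gamma_e(x, q2_max=2.5):
    """Weizsacker-Williams photon flux of Eq. (2); Q^2 in GeV^2."""
    q2_min = M_E**2 * x**2 / (1 - x)
    return ALPHA / (2 * math.pi) * (
        (1 + (1 - x) ** 2) / x * math.log(q2_max / q2_min)
        + 2 * M_E**2 * x * (1 / q2_max - 1 / q2_min)
    )
```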
In the calculation of the concerned process, the spinor helicity formalism
Kleiss:1985yh ; Qiao:2003ue ; Dixon:1996wi ; Dixon:2013uaa ; Arkani-
Hamed:2017jhn of the scattering amplitude and the conventional amplitude
squaring approach are introduced in evaluating diagrams. Specifically, most of
the diagrams are calculated in helicity amplitudes, except for the Coulomb
divergent part, where the conventional amplitude squaring approach is
employed.
The dipole subtraction method Catani:1996vz ; Catani:2002hc is adopted to
counteract the infrared (IR) poles. Therefore, the total cross section at NLO
can be expressed as:
$\displaystyle\sigma_{tot}$
$\displaystyle=\int_{3\text{-}\mathrm{body}}(d\sigma^{\mathrm{LO}}+d\sigma^{\mathrm{Virtual}}+d\sigma^{C}+\int_{1\text{-}\mathrm{body}}d\sigma^{A})+\int_{4\text{-}\mathrm{body}}(d\sigma^{\mathrm{Real}}-d\sigma^{A})\
,$ (3)
where $d\sigma^{\mathrm{LO}}$, $d\sigma^{\mathrm{Virtual}}$, and
$d\sigma^{\mathrm{Real}}$ are the LO, virtual, and real contributions to the
cross section. $d\sigma^{C}$ represents the collinear subtraction counterterm
arising from the redefinition of the parton distributions. The contribution
$d\sigma^{A}$ represents the dipole counterterm. The correspondence of the
various terms in (3) with helicity amplitudes goes as
$\displaystyle d\sigma^{\mathrm{LO}}\ ,\ d\sigma^{C}\ ,\
\int_{1\text{-}\mathrm{body}}d\sigma^{A}\propto\sum_{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}|\mathcal{A}_{\mathrm{LO}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}|^{2}\
,$ $\displaystyle
d\sigma^{\mathrm{Virtual}}\propto\sum_{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}2\mathrm{Re}\left[{(\mathcal{A}_{\mathrm{LO}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}})}^{*}\mathcal{A}_{\mathrm{Virtual}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}\right]\
,$ $\displaystyle
d\sigma^{\mathrm{Real}}\propto\sum_{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\alpha_{6}}|\mathcal{A}_{\mathrm{Real}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\alpha_{6}}|^{2}\
.$ (4)
Here, $\alpha_{1,2,3,4,5,6}$ represent the helicities (polarizations) of the on-shell particles: $\alpha_{1,2,4,5,6}\in\{+,-\}$ denote the helicities of the initial photon, the initial gluon, the final charm quark, the charm antiquark, and the final emitted gluon, while $\alpha_{3}\in\{+,0,-\}$ denotes the helicity of the $J/\psi$ meson.
$\mathcal{A}_{\mathrm{LO}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}$,
$\mathcal{A}_{\mathrm{Virtual}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}$,
and
$\mathcal{A}_{\mathrm{Real}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\alpha_{6}}$
are helicity amplitudes of LO, virtual corrections, and real corrections,
respectively.
Figure 1: Typical LO Feynman diagrams for $\gamma+g\to J/\psi+c+\bar{c}$. All
other diagrams can be generated by: 1. exchanging initial photon and gluon in
$(a)$, $(b)$ and $(d)$; 2. reversing fermion lines; 3. forming the $J/\psi$ from the other quark-antiquark pair. There are 6 diagrams related to (a), 12
diagrams related to (b), 4 diagrams related to (c), and 8 diagrams related to
(d). Diagrams with no contribution in the end are neglected.
There are 30 non-zero Feynman diagrams at the leading order of the concerned
process, as schematically shown in FIG. 1. Helicity amplitude of the LO
contribution can be expressed as:
$\displaystyle\mathcal{A}_{\mathrm{LO}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}=\sum_{i=1}^{30}\mathcal{A}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}_{i}=\sum_{j=1}^{59}\mathcal{C}_{j}(p_{a}\cdot
p_{b},p_{a}\cdot\varepsilon^{\alpha_{b}}_{b},\varepsilon^{\alpha_{a}}_{a}\cdot\varepsilon^{\alpha_{b}}_{b})\mathcal{\hat{A}}_{j}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}\
,$ (5)
where $p_{a}$ and $p_{b}$ represent on-shell momenta,
$\varepsilon^{\alpha_{a}}_{a}$ and $\varepsilon^{\alpha_{b}}_{b}$ represent
polarization vectors. The total helicity amplitude
$\mathcal{A}_{\mathrm{LO}}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}$
sums over all 30 LO diagrams
$\mathcal{A}_{i}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}$. We
rearrange the summation into 59 distinct combinations of spinor products for
simplification, expressed as
$\mathcal{\hat{A}}_{j}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}$.
$\mathcal{C}_{j}$ represents corresponding coefficients of
$\mathcal{\hat{A}}_{j}^{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}}$,
composed of scalar products of momenta and polarization vectors.
In the calculation of NLO corrections, the ’t Hooft-Veltman scheme (HV)
dimensional regularization (DR) is employed. Some of the NLO diagrams are
shown in FIG. 2.
Figure 2: Typical NLO Feynman diagrams of $\gamma+g\to J/\psi+c+\bar{c}$. $(a)$ and $(b)$ are typical renormalization diagrams, where the $\otimes$ denotes the counterterm of a propagator or a vertex. $(c)\sim(e)$ are typical loop diagrams with UV, IR and Coulomb poles, respectively. The UV pole in (c) can be canceled by introducing renormalization. The IR pole in $(d)$ can be counteracted with real corrections (the pole introduced by the initial gluon) and other loop diagrams (the poles introduced by the $J/\psi$ and the $c$). The Coulomb pole in $(e)$ can be factorized out. $(f)$ is a typical diagram of the real correction process $\gamma+g\to J/\psi+c+\bar{c}+g$.
Loop diagrams are evaluated using two distinct methods, the integrand
reduction and IBP reduction. Helicity amplitudes of loop diagrams without
Coulomb divergence are evaluated using integrand reduction via Laurent-
expansion method Mastrolia:2012bu by the semi-numerical reduction C++ library
Ninja Peraro:2014cba . For the Coulomb divergent diagrams, the Coulomb term
can be expressed as:
$\displaystyle\frac{1}{[{(-\frac{p_{3}}{2}+q)}^{2}-m_{c}^{2}]q^{2}[(\frac{p_{3}}{2}+q)^{2}-m_{c}^{2}]}=\frac{1}{2}\left(\frac{1}{{(q^{2})}^{2}[(-\frac{p_{3}}{2}+q)^{2}-m_{c}^{2}]}+\frac{1}{{(q^{2})}^{2}[{(\frac{p_{3}}{2}+q)}^{2}-m_{c}^{2}]}\right),~{}~{}$
(6)
where only the loop propagators related to the Coulomb poles are depicted.
$p_{3}$ is $J/\psi$ meson momentum, $q$ is the loop momentum, and $m_{c}$ is
the charm quark mass. The introduction of exceptional higher-power loop
propagators $\frac{1}{{(q^{2})}^{2}}$ prevents the evaluation of Coulomb
divergent diagrams using Ninja. Therefore, we employ the integration by parts
(IBP) reduction method to evaluate Coulomb loops by NeatIBP Wu:2023upw .
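The partial fractioning of Equation (6) relies only on the on-shell condition $p_{3}^{2}=4m_{c}^{2}$, so that $(\pm p_{3}/2+q)^{2}-m_{c}^{2}=q^{2}\pm q\cdot p_{3}$; a short SymPy check of the identity, with $Q=q^{2}$ and $s=q\cdot p_{3}$, is:

```python
import sympy as sp

Q, s = sp.symbols("Q s")   # Q = q^2, s = q.p3; on-shell p3^2 = 4 m_c^2
A = Q - s                  # (q - p3/2)^2 - m_c^2
B = Q + s                  # (q + p3/2)^2 - m_c^2
lhs = 1 / (A * Q * B)
rhs = sp.Rational(1, 2) * (1 / (Q**2 * A) + 1 / (Q**2 * B))
assert sp.simplify(lhs - rhs) == 0   # Eq. (6) holds identically
```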
Ultraviolet (UV) singularities are canceled by renormalization. In our calculation, the renormalization constant of the QCD coupling constant, $Z_{g}$, is defined in the modified minimal subtraction ($\overline{\rm MS}$) scheme, while the renormalization constants of the charm quark field $Z_{2}$, the charm quark mass $Z_{m}$, and the gluon field $Z_{3}$ are defined in the on-shell (OS) scheme. The counterterms are given by:
$\displaystyle\delta Z_{2}^{\rm
OS}=-C_{F}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm{UV}}}+\frac{2}{\epsilon_{\rm{IR}}}-3\gamma_{E}+3\ln\frac{4\pi\mu_{r}^{2}}{m^{2}}+4\right]\
,$ $\displaystyle\delta Z_{3}^{\rm
OS}=\frac{\alpha_{s}}{4\pi}\left[(\beta^{\prime}_{0}-2C_{A})(\frac{1}{\epsilon_{\rm{UV}}}-\frac{1}{\epsilon_{\rm{IR}}})-\frac{4}{3}T_{f}\left(\frac{1}{\epsilon_{\rm{UV}}}-\gamma_{E}+\ln\frac{4\pi\mu_{r}^{2}}{m^{2}}\right)\right]\
,$ $\displaystyle\delta Z_{m}^{\rm
OS}=-3C_{F}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm
UV}}-\gamma_{E}+\ln\frac{4\pi\mu_{r}^{2}}{m^{2}}+\frac{4}{3}\right]\ ,$
$\displaystyle\delta Z_{g}^{\overline{\rm
MS}}=-\frac{\beta_{0}}{2}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm
UV}}-\gamma_{E}+\ln(4\pi)\right]\ .$ (7)
Here, $\beta^{\prime}_{0}=(11/3)C_{A}-(4/3)T_{f}n^{\prime}_{f}$ is the one-
loop coefficient of the QCD beta function, $n^{\prime}_{f}=3$ is the number of
light quarks, $\gamma_{E}$ is Euler’s constant, $m$ represents the mass of
charm quark, $C_{A}$ and $T_{f}$ attribute to the color $SU(3)$ group,
$n_{f}=4$ is the number of active quarks, $\mu_{r}$ denotes the
renormalization scale.
In the evaluation of real corrections, as expected, the dipole counterterm
$d\sigma^{A}$ exhibits the same pointwise singular behaviour as
$d\sigma^{\mathrm{Real}}$, and $\int_{1\text{-}\mathrm{body}}d\sigma^{A}$
stands for the analytic integration of $d\sigma^{A}$ over the phase space with
an additional real gluon in dimension $D=4-2\varepsilon$, cancelling out the
remaining analytic $\frac{1}{\varepsilon}$ and $\frac{1}{\varepsilon^{2}}$
divergences in the virtual correction.
In the concerned process, the dipole terms associated with quarkonium cancel
out. As a result, only 3 types of dipole terms remain:
1. 1.
initial gluon emitter with final charm (anti-charm) spectator:
$\mathcal{D}^{gg}_{c},\ \mathcal{D}^{gg}_{\bar{c}}$,
2. 2.
final charm (anti-charm) emitter with initial gluon spectator:
$\mathcal{D}^{g}_{cg},\ \mathcal{D}^{g}_{\bar{c}g}$,
3. 3.
final charm (anti-charm) emitter with final anti-charm (charm) spectator:
$\mathcal{D}_{cg,\bar{c}},\ \mathcal{D}_{\bar{c}g,c}$.
Here, dipole contributions $\mathcal{D}^{gg}_{c}$,
$\mathcal{D}^{gg}_{\bar{c}}$, $\mathcal{D}^{g}_{cg}$,
$\mathcal{D}^{g}_{\bar{c}g}$, $\mathcal{D}_{cg,\bar{c}}$, and
$\mathcal{D}_{\bar{c}g,c}$ are defined in Ref. Catani:2002hc . Hence, the
dipole factorization form of $d\sigma^{A}$ reads:
$\displaystyle
d\sigma^{A}=d\Gamma^{(4)}\left(\sum_{\begin{subarray}{c}i,k=c,\bar{c}\\\ i\neq
k\end{subarray}}\mathcal{D}_{ig,k}+\sum_{i=c,\bar{c}}\mathcal{D}_{ig}^{g}+\sum_{k=c,\bar{c}}\mathcal{D}_{k}^{gg}\right)$
(8)
with $d\Gamma^{(4)}$ being the 4-body phase space including all QCD-independent factors. The integrated dipoles are obtained from Eqs. $(5.23)$, $(5.56)$, and $(5.88)$ of Ref. Catani:2002hc .
Regarding the cancellation of divergences, we extract the IR divergences in loop diagrams by means of the method developed in Ref. Dittmaier:2003bc , which ensures that all IR divergences in one-loop diagrams with more than three loop propagators are expressed as sums of triangles (diagrams with three loop propagators). For example, for an $N$-point loop ($N\geq 3$)
$\displaystyle\int_{\rm{div}}d^{4-2\epsilon}q\prod_{i=0}^{N-1}\frac{1}{D_{i}}=\int_{\rm{div}}d^{4-2\epsilon}q\sum_{i=0}^{N-1}\sum_{\begin{subarray}{c}j=0\\
j\neq i,i+1\end{subarray}}^{N-1}\frac{A_{ij}}{D_{i}D_{i+1}D_{j}}\ ,$ (9)
where $D_{i}$ represents loop propagators. $A_{ij}$ represents the
corresponding coefficients. In the calculation, divergences in virtual
correction and dipole contribution are analytically canceled out.
## III Results
In our numerical calculation, the charm mass takes half of $J/\psi$ mass, i.e.
$m_{c}=1.5\ \rm{GeV}$. The renormalization scale $\mu_{r}$ and the
factorization scale $\mu_{f}$ are set as
$\mu_{r}=\mu_{f}=m_{T}\equiv\sqrt{p_{T}^{2}+4m_{c}^{2}}$, where $m_{T}$ is the
$J/\psi$ transverse mass. Theoretical uncertainties are estimated by varying
the charm quark mass in $m_{c}=1.5\pm 0.1\ \rm{GeV}$ and the scales in the
interval $\frac{1}{2}m_{T}\leq\mu_{r},\mu_{f}\leq 2m_{T}$. The running
coupling constant is determined by the one-loop (two-loop) formula at LO
(NLO), and the PDF set CTEQ6L1 (CTEQ6M) Pumplin:2002vw is used at LO (NLO).
The LDME follows
$\langle\mathcal{O}^{J/\psi}(^{3}\\!S_{1})\rangle=2(2J+1)N_{c}|R(0)|^{2}/{4\pi}$
with $J=1$ for the $J/\psi$ meson, and $N_{c}=3$ is the number of color
charges. The radial wave function $|R(0)|^{2}=1.01\ \rm{GeV}^{3}$ is extracted from $\Gamma(J/\psi\to e^{+}e^{-})=5.55\ \rm{keV}$ ParticleDataGroup:2020ssz ,
thus, we have $\langle\mathcal{O}^{J/\psi}(^{3}\\!S_{1})\rangle=1.45\
\rm{GeV}^{3}$.
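This value follows directly from the quoted formula; a one-line numerical check:

```python
import math

J, N_c, R0_sq = 1, 3, 1.01                        # |R(0)|^2 in GeV^3
ldme = 2 * (2 * J + 1) * N_c * R0_sq / (4 * math.pi)
print(f"{ldme:.2f} GeV^3")                        # -> 1.45 GeV^3
```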
The collision energy is set according to the HERA collider: $27.5\ \rm{GeV}$ for electrons (positrons) and $920\ \rm{GeV}$ for protons. The photon virtuality is constrained to $Q^{2}_{\max}\leq 2.5\ \rm{GeV}^{2}$. To exclude
resolved photoproduction and diffractive production of $J/\psi$, experimental
cuts based on the H1 collaboration measurement H1:2010udv are applied:
$p_{T}>1\ \rm{GeV}$, $60\ \rm{GeV}<W<240\ \rm{GeV}$, and $0.3<z<0.9$. Here,
$W=\sqrt{{(p_{\gamma}+p_{p})}^{2}}$ is the mass of the hadronic final state,
$z=(p_{3}\cdot p_{p})/(p_{\gamma}\cdot p_{p})$ is the elasticity of the
$J/\psi$ meson production process, and $p_{p}$, $p_{\gamma}$ are the momenta
of the incident proton and photon, respectively. Additionally, the feed-down
contribution from the $\psi^{\prime}$ is taken into account, which yields an enhancement factor of about 0.278.
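For completeness, $W$ and $z$ are simple functions of the four-momenta; a small Python helper using the mostly-minus metric (our own utility, not part of the calculation code) is:

```python
import numpy as np

METRIC = np.array([1.0, -1.0, -1.0, -1.0])

def dot4(a, b):
    """Minkowski product of four-vectors (E, px, py, pz)."""
    return float(np.sum(METRIC * a * b))

def W_and_z(p_gamma, p_p, p_psi):
    """Hadronic mass W and elasticity z from the four-momenta."""
    W = np.sqrt(dot4(p_gamma + p_p, p_gamma + p_p))
    z = dot4(p_psi, p_p) / dot4(p_gamma, p_p)
    return W, z
```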
Table 1: Scale and mass dependence of the total cross section at LO (expressed in $\rm{nb}$) in various PDF sets without feed-down contribution. Here, $\mu=\mu_{r}=\mu_{f}$.

PDF sets | $m_{c}\backslash\mu$ | $\frac{1}{2}m_{T}$ | $m_{T}$ | $2m_{T}$
---|---|---|---|---
CTEQ6L1 | $1.4\ \rm{GeV}$ | $0.185$ | $0.111$ | $0.070$
| $1.5\ \rm{GeV}$ | $0.123$ | $0.074$ | $0.046$
| $1.6\ \rm{GeV}$ | $0.084$ | $0.049$ | $0.031$
CT14LO | $1.4\ \rm{GeV}$ | $0.128$ | $0.073$ | $0.046$
| $1.5\ \rm{GeV}$ | $0.084$ | $0.048$ | $0.030$
| $1.6\ \rm{GeV}$ | $0.056$ | $0.032$ | $0.020$
CTEQ6M | $1.4\ \rm{GeV}$ | $0.109$ | $0.066$ | $0.043$
| $1.5\ \rm{GeV}$ | $0.073$ | $0.044$ | $0.029$
| $1.6\ \rm{GeV}$ | $0.051$ | $0.030$ | $0.019$
CT14NLO | $1.4\ \rm{GeV}$ | $0.095$ | $0.063$ | $0.042$
| $1.5\ \rm{GeV}$ | $0.066$ | $0.042$ | $0.028$
| $1.6\ \rm{GeV}$ | $0.045$ | $0.029$ | $0.019$
CT18NLO | $1.4\ \rm{GeV}$ | $0.094$ | $0.063$ | $0.042$
| $1.5\ \rm{GeV}$ | $0.065$ | $0.042$ | $0.028$
| $1.6\ \rm{GeV}$ | $0.045$ | $0.029$ | $0.019$
As a result, the total cross section of the concerned process at NLO (LO) is
$\displaystyle\sigma_{tot}=0.118^{+0.168}_{-0.065}\ (0.074^{+0.111}_{-0.043})\
\rm{nb}.$ (10)
Our LO result agrees with that in Ref. Li:2009zzu when taking the same inputs. The NLO corrections yield a $K$ factor of about $1.60$, which is a prominent
enhancement of the cross section. It is evident that the error on the total
cross section is large. In Table 1, we present the errors of the LO total
cross section on the charm quark mass and scales across various PDF sets. The
cross section with each PDF set shows strong sensitivity to both charm quark
mass and scales, particularly to scales. Results using the NLO PDF sets
CTEQ6M, CT14NLO Dulat:2015mca , and CT18NLO Hou:2019qau exhibit stronger
dependence on scales compared to those obtained using LO PDF sets CTEQ6L1 and
CT14LO Dulat:2015mca .
In estimating the number of the concerned process events at HERA, we consider
that the H1 collaboration reconstructed the $J/\psi$ meson candidates through
the decay channel $J/\psi\to\mu^{+}\mu^{-}$ in the photoproduction process,
and the photoproduction sample corresponds to an integrated luminosity of
$\mathcal{L}=165\ \rm{pb}^{-1}$ H1:2010udv . According to our numerical
results, with a branching fraction of
$\Gamma(J/\psi\to\mu^{+}\mu^{-})/\Gamma_{tot}\simeq 6\%$, the number of
reconstructed $J/\psi+c+\bar{c}$ events in the photoproduction process at NLO
(LO) is about $521\sim 2833\ (304\sim 1829)$ at HERA. Here, we omit the
unclear tagging efficiency of $c$-jets.
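The event-yield estimate follows from $N=\sigma\,\mathcal{L}\,\mathrm{Br}$; the short check below reproduces the quoted range up to rounding of the cross-section bounds.

```python
L = 165.0    # integrated luminosity in pb^-1 (H1 sample)
BR = 0.06    # Br(J/psi -> mu+ mu-)
for sigma_nb in (0.118 - 0.065, 0.118 + 0.168):    # NLO lower/upper bounds in nb
    print(round(sigma_nb * 1e3 * L * BR))           # -> ~525 and ~2831 events
```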
Figure 3: The differential cross section in (a) $p_{T}^{2}$, (b) $W$, and (c)
$z$ distributions of the photoproduction process $\gamma+g\to
J/\psi+c+\bar{c}$ at LO and NLO in the CSM. The shaded bands indicate the
theoretical uncertainties with upper bound for
$\mu_{r},\mu_{f}=\frac{1}{2}m_{T}$ and $m_{c}=1.4\ \rm{GeV}$ and lower bound
for $\mu_{r},\mu_{f}=2m_{T}$ and $m_{c}=1.6\ \rm{GeV}$.
The differential cross section distributions in $p_{T}^{2}$, $W$, and $z$ are presented in FIG. 3 $(a)\sim(c)$, respectively. The $p_{T}^{2}$ distribution of $J/\psi$, as shown in FIG. 3$(a)$, is presented in the range $1\ \rm{GeV}^{2}<p_{T}^{2}<100\ \rm{GeV}^{2}$. Compared to the $p_{T}^{2}$ distribution of the inclusive process $\gamma+g\to J/\psi+X$ at NLO in the CSM in Ref. Butenschoen:2009zy , the $J/\psi+c+\bar{c}$ process has a much milder drop as $p_{T}^{2}$ increases. Although the differential cross section of the concerned process is much lower than that of the $J/\psi+X$ production process at low $p_{T}$, in the region of $60\ \rm{GeV}^{2}<p_{T}^{2}<100\ \rm{GeV}^{2}$ the ratio $\sigma(\gamma+g\to J/\psi+c+\bar{c})/\sigma(\gamma+g\to J/\psi+X)$ tends to about $1/2$. This means that the concerned process makes a non-negligible contribution to $J/\psi$ photoproduction at $ep$ colliders, particularly in the large-$p_{T}$ region.
The $W$ and $z$ distributions in FIG. 3$(b)\sim(c)$ significantly undershoot
the inclusive $\gamma+g\to J/\psi+X$ process in the CSM in Ref.
Butenschoen:2009zy . The $W$ distribution of the concerned process, the CS
$J/\psi+X$ production process, and the H1 measurement show a similar trend.
However, the $z$ distribution shows a completely different trend compared to
the CS $J/\psi+X$ production process and H1 data. In contrast to the moderate
$z$ distribution trends in Ref. Butenschoen:2009zy and Ref. H1:2010udv , the
concerned process exhibits a rapid decrease in distribution with increasing
$z$, particularly at large $z$.
The discrepancy in the $z$ distribution may originate from the distinct dynamics of these processes. As the elasticity $z$ increases, the $J/\psi$ meson tends to move parallel to the incident photon, which reduces the total momentum of the final charm quark and antiquark. Since the invariant mass of this pair is greater than the sum of the charm and anticharm quark masses, $m_{c+\bar{c}}=\sqrt{{(p_{c}+p_{\bar{c}})}^{2}}>2m_{c}$, the dynamics of the two (anti-)charm quarks are constrained, and the cross section is accordingly limited. For the $\gamma+g\to J/\psi+X$ process, where $X=g,q$ is a light particle, there is no such constraint in $z$, so the $z$ distribution is flat. This can be confirmed by comparing the upper, middle, and lower bounds: as $m_{c}$ drops from $1.6\ \rm{GeV}$ to $1.4\ \rm{GeV}$, the distribution tends to become flatter.
## IV Summary
In this work, we investigate the photoproduction of $J/\psi+c+\bar{c}$ at the $ep$ collider HERA at NLO in QCD and LO in $v$ within the framework of NRQCD. Numerical results show that the NLO QCD corrections are prominent in the concerned process. We also present the differential cross sections in $p_{T}$, $W$ and $z$. The concerned process makes a significant contribution to $J/\psi$ photoproduction at large $p_{T}$. Compared with the $p_{T}$ distribution of the inclusive $\gamma+g\to J/\psi+X$ process in the CSM, the concerned process has a much milder drop as $p_{T}$ increases. The trend of the $W$ distribution generally agrees with that of the CS $J/\psi+X$ photoproduction process and the H1 data, while being lower by a factor of approximately $10$. The $z$ distribution shows a steep drop, which differs from that of the inclusive process and the H1 data. This difference may originate from the distinct dynamics of these processes and calls for further data analysis. Future $ep$ colliders, such as the EIC, EicC, and LHeC (FCC-eh), are expected to produce the concerned process abundantly owing to their high luminosity. Furthermore, some of them are capable of detecting particle polarizations; a theoretical analysis of the polarizations in the concerned process awaits future investigation.
Acknowledgments
This work was supported in part by the National Key Research and Development
Program of China under Contract No. 2020YFA0406400, and the National Natural
Science Foundation of China (NSFC) under the Grant 12235008.
## References
* (1) G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev. D 51 (1995), 1125-1171 [erratum: Phys. Rev. D 55 (1997), 5853] doi:10.1103/PhysRevD.55.5853 [arXiv:hep-ph/9407339 [hep-ph]].
* (2) C. H. Chang, R. Li and J. X. Wang, Phys. Rev. D 80, 034020 (2009) doi:10.1103/PhysRevD.80.034020 [arXiv:0901.4749 [hep-ph]].
# Extracting Dynamical Frequencies from Invariants of Motion in Finite-Dimensional Nonlinear Integrable Systems

Chad E. Mitchell, Robert D. Ryne, Kilean Hwang
Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA

Sergei Nagaitsev (also at the University of Chicago, Chicago, Illinois 60637, USA), Timofey Zolkin
Fermi National Accelerator Laboratory, Batavia, IL 60510, USA
###### Abstract
Integrable dynamical systems play an important role in many areas of science,
including accelerator and plasma physics. An integrable dynamical system with
$n$ degrees of freedom (DOF) possesses $n$ nontrivial integrals of motion, and
can be solved, in principle, by covering the phase space with one or more
charts in which the dynamics can be described using action-angle coordinates.
To obtain the frequencies of motion, both the transformation to action-angle
coordinates and its inverse must be known in explicit form. However, no
general algorithm exists for constructing this transformation explicitly from
a set of $n$ known (and generally coupled) integrals of motion. In this paper
we describe how one can determine the dynamical frequencies of the motion as
functions of these $n$ integrals in the absence of explicitly-known action-
angle variables, and we provide several examples.
## I Introduction
Integrable dynamical systems play an important role in many areas of science,
including accelerator [1, 2] and plasma physics. It is well-known that an
$n$-DOF integrable system can be solved, in principle, by constructing action-
angle coordinates. However, in general such action-angle coordinates are
defined only locally, and break down near critical phase space structures
(e.g., the separatrix of the nonlinear pendulum). In addition, the canonical
transformation to action-angle coordinates is difficult to obtain in explicit
closed form for even the simplest systems. In practice, this can be an
obstacle to extracting the dynamical frequencies of motion of the system,
which are often the primary quantities of interest. Finally, the trend in
mechanics is to move toward results that can be expressed in a geometric form,
independent of a specific choice of coordinates.
In this paper, we propose a method to find the $n$ dynamical frequencies of an
integrable symplectic map or a Hamiltonian flow without knowledge of the
transformation to action-angle coordinates. This result is motivated by the
Mineur-Arnold formula [3, 4, 5, 6], which states that the $n$ action
coordinates $I_{j}$ can be constructed as path integrals of the form:
$I_{j}=\frac{1}{2\pi}\oint_{\gamma_{j}}\sum_{k=1}^{n}p_{k}dq_{k},\quad(j=1,\ldots,n),$
(1)
where the $\gamma_{j}$ define $n$ appropriately-chosen closed paths (cycles)
in the invariant level set (Appendix A). We will show that an explicit
integral formula analogous to (1) can be obtained for the $n$ dynamical
frequencies. This result is a generalization to arbitrary dimension of a
result described in [7], which is valid for the special case when $n=1$.
It is emphasized that this procedure is developed for the narrow class of
Hamiltonian systems (or symplectic maps) with a sufficient number of exactly-
known invariants, and not for arbitrary Hamiltonian systems. However,
experience suggests that this procedure may be used to extract and to
understand the frequency behavior of systems for which “approximate
invariants” can be constructed, which exhibit sufficiently small variation
over the time scale of interest. Such quantities can sometimes be constructed
analytically or numerically [8, 9].
The structure of this paper is as follows. Section II provides a brief summary
of background definitions regarding integrable maps and flows. Section III
motivates the concept of the tunes (or equivalently, the rotation vector) of
an integrable symplectic map. Section IV contains the main technical result of
this paper (16), relating the tunes of an integrable symplectic map to its
dynamical invariants. Section V describes the mathematical properties of this
solution, together with its proof. In Section VI, we describe how this result
can be applied to determine the characteristic frequencies of an integrable
Hamiltonian flow. Section VII illustrates the application of these results
using two numerical examples. Conclusions are provided in Section VIII. There
are four Appendices.
## II Integrable Maps and Flows
For simplicity, we take the phase space $M$ to be an open subset of
$\mathbb{R}^{2n}$ with its standard symplectic form. In any local set of
canonical coordinates $(q_{1},\ldots,q_{n},p_{1},\ldots,p_{n})$, the
symplectic form is represented by the matrix:
$J=\begin{pmatrix}0&I_{n\times n}\\\ -I_{n\times n}&0\end{pmatrix}.$ (2)
We will frequently use the fact that $J^{T}=J^{-1}=-J$.
Let $\mathcal{M}:M\rightarrow M$ denote a symplectic map. A smooth function
$f:M\rightarrow\mathbb{R}$ is said to be an invariant of the map $\mathcal{M}$
if:
$f\circ\mathcal{M}=f.$ (3)
The map $\mathcal{M}$ is said to be completely integrable if there exists a
set of $n$ invariants $f_{k}$ such that: i) the invariants Poisson-commute:
$\\{f_{j},f_{k}\\}=0$ $(j,k=1,\ldots,n)$, and ii) the set of gradient vectors
$\nabla f_{k}$ $(k=1,\ldots,n)$ is linearly independent at every point of $M$,
except for a possible set of zero measure (phase space volume) [10, 11, 12].
Similarly, if $H:M\rightarrow\mathbb{R}$ denotes a smooth Hamiltonian
function, the flow generated by $H$ is said to be completely integrable if the
conditions i)-ii) apply, with the invariant condition (3) replaced by the
local condition $\\{f,H\\}=0$.
To analyze the behavior of such a map or a flow, let
$\mathcal{F}:M\rightarrow\mathbb{R}^{n}$ denote the momentum mapping, the
function that takes each point in the phase space to its $n$-tuple of
invariants [3]:
$\mathcal{F}(\zeta)=(f_{1}(\zeta),\ldots,f_{n}(\zeta)),\quad\quad\zeta\in M.$
(4)
Each orbit is then confined to lie in some level set of $\mathcal{F}$ of the
form:
$M_{c}=\\{\zeta\in M:\mathcal{F}(\zeta)=c\\},\quad\quad c\in\mathbb{R}^{n}.$
(5)
The level set (5) is said to be regular if the linear map $D\mathcal{F}$,
represented by the Jacobian matrix of $\mathcal{F}$, is surjective (rank $n$)
everywhere on $M_{c}$. In this case, $M_{c}$ is a smooth surface of dimension
$n$. Assuming that $M_{c}$ is also compact and connected, the Liouville-Arnold
theorem [3, 4, 5, 6] states that $M_{c}$ may be smoothly transformed by a
symplectic change of coordinates into the standard $n$-torus $\mathbb{T}^{n}$,
and application of the map (or flow) corresponds to rotation about this torus
with a fixed frequency vector, which we wish to determine.
## III Tunes of an Integrable Map
Let $\mathcal{M}$ be an integrable symplectic map, and let $M_{c}$ be one of
its regular level sets. By the Liouville-Arnold theorem, there exists a
neighborhood of the level set $M_{c}$ in which there is a set of canonical
action-angle coordinates $\zeta=(\phi_{1},\ldots,\phi_{n},I_{1},\ldots,I_{n})$
in which the map takes the form
$\mathcal{M}({\phi},{I})=({\phi}^{f},{I}^{f})$, where:
${I}^{f}={I},\quad\quad{\phi}^{f}={\phi}+2\pi\nu({I})\quad\operatorname{mod}2\pi.$
(6)
The coordinates $(\phi,I)$ in (6) are not unique [13]. However, the quantities
$\nu_{j}$ $(j=1,\ldots,n)$, called the tunes of $\mathcal{M}$, have a
coordinate-invariant physical meaning, described as follows.
If $F$ denotes any observable, given by a smooth real-valued function defined
in our neighborhood of $M_{c}$, then $F$ may be expressed as a uniformly
convergent Fourier series in the angle coordinates $\phi$, so that:
$F(\phi,I)=\sum_{k\in\mathbb{Z}^{n}}a_{k}(I)e^{ik\cdot\phi},\quad\quad
a_{k}\in\mathbb{C}.$ (7)
Applying the map $\mathcal{M}$ in the form (6) $N$ times shows that:
$F(\mathcal{M}^{N}(\phi,I))=\sum_{k\in\mathbb{Z}^{n}}a_{k}(I)e^{ik\cdot(\phi+2\pi\nu(I)N)}.$ (8)
From (8), it follows that there exist smooth complex-valued functions $F_{k}$
$(k\in\mathbb{Z}^{n})$ on our neighborhood of $M_{c}$ such that:
$F\circ\mathcal{M}^{N}=\sum_{k\in\mathbb{Z}^{n}}F_{{k}}e^{i2\pi({k}\cdot{\nu})N}.$
(9)
One sees from (9) that any time series obtained by following an observable $F$
(defined on the level set $M_{c}$) during iteration of the map $\mathcal{M}$
contains contributions at the discrete set of frequencies:
$\Omega_{\nu}=\\{k\cdot\nu+k_{0}:k\in\mathbb{Z}^{n},k_{0}\in\mathbb{Z}\\}.$
(10)
Algorithms to determine the basic frequencies $\nu_{j}$ $(j=1,\ldots,n)$ from
a series of the form (9) are well-established [14, 15].
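For readers who wish to experiment, the following is a minimal numerical sketch of the Fourier-peak idea behind such frequency-extraction algorithms; it is not the refined method of [14, 15], and the function name, window choice, and grid-refinement step are illustrative assumptions.

```python
import numpy as np

def dominant_tune(series):
    """Estimate the dominant tune (frequency per iteration) of a
    turn-by-turn series: coarse FFT peak, then local refinement of the
    windowed Fourier amplitude. A sketch, not the NAFF algorithm."""
    N = len(series)
    z = np.asarray(series, dtype=complex) - np.mean(series)
    w = np.hanning(N)                      # window to sharpen the peak
    nu0 = np.argmax(np.abs(np.fft.fft(w * z))) / N
    t = np.arange(N)
    amp = lambda nu: np.abs(np.sum(w * z * np.exp(-2j * np.pi * nu * t)))
    grid = nu0 + np.linspace(-1.0 / N, 1.0 / N, 2001)
    return grid[np.argmax([amp(nu) for nu in grid])]

# A synthetic series with tune 0.2137 is recovered to high accuracy.
t = np.arange(4096)
print(dominant_tune(np.exp(2j * np.pi * 0.2137 * t)))
```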
Note that knowledge of the set of frequencies (10) does not specify the vector
$\nu\in\mathbb{R}^{n}$ uniquely. To see this, let
$\nu^{\prime}=U\nu+m,$ (11)
where $m\in\mathbb{Z}^{n}$ is any $n$-tuple of integers and $U$ is any
unimodular integer matrix (an $n\times n$ integer matrix with
$\operatorname{det}U=\pm 1$). This implies that $U$ is invertible, $U^{-1}$ is
also a unimodular integer matrix, and $U$ defines an invertible linear
transformation from $\mathbb{Z}^{n}$ to $\mathbb{Z}^{n}$. The same conclusion
holds for $U^{T}$. By making the transformation of integer indices
$k=U^{T}k^{\prime}$, the sum in (9) becomes:
$F\circ\mathcal{M}^{N}=\sum_{k^{\prime}\in\mathbb{Z}^{n}}F_{U^{T}{k^{\prime}}}e^{i2\pi({k^{\prime}}\cdot{\nu^{\prime}})N},$
(12)
which takes the same form as (9), with $\nu$ replaced by $\nu^{\prime}$. A
similar argument starting from (10) shows that
$\Omega_{\nu^{\prime}}=\Omega_{\nu}$. Thus, the vector $\nu$ is at best
defined only up to transformations of the form (11) [16].
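This equivalence can be checked directly in a small numerical experiment; the rational tunes below are an arbitrary illustrative choice that makes the two sets (10) finite and exactly comparable.

```python
from fractions import Fraction

nu = (Fraction(1, 5), Fraction(2, 7))     # rational tunes: finite sets below
U = ((2, 1), (1, 1))                       # det U = 1: unimodular
m = (3, -2)
nu_p = tuple(U[i][0] * nu[0] + U[i][1] * nu[1] + m[i] for i in range(2))

def omega_set(nu, K=30):
    """Fractional parts of k . nu over a box of integer vectors k, cf. (10)."""
    return {(a * nu[0] + b * nu[1]) % 1
            for a in range(-K, K + 1) for b in range(-K, K + 1)}

print(omega_set(nu) == omega_set(nu_p))    # True: (11) preserves the set (10)
```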
Indeed, one can construct action-angle coordinates in which the map
$\mathcal{M}$ has the form (6) with the tunes $\nu^{\prime}$ given by (11). In
terms of the original coordinates $(\phi,I)$, let:
$I^{\prime}=U^{-T}I,\quad\quad\phi^{\prime}=U\phi\quad\operatorname{mod}2\pi.$
(13)
The quantities $(\phi^{\prime}_{1},\ldots,\phi^{\prime}_{n})$ define periodic
angle coordinates on the torus $\mathbb{T}^{n}$, since $\phi^{A}=\phi^{B}$
$\operatorname{mod}2\pi\Leftrightarrow U\phi^{A}=U\phi^{B}$
$\operatorname{mod}2\pi$, by the condition that $U$ be a unimodular integer
matrix. The transformation (13) is easily verified to be symplectic. The map
$\mathcal{M}$ in the coordinates $(\phi^{\prime},I^{\prime})$ takes the form:
$I^{\prime f}=I^{\prime},\quad\quad\phi^{\prime
f}=\phi^{\prime}+2\pi\nu^{\prime}(I^{\prime})\quad\operatorname{mod}2\pi,$
(14)
where
$\nu^{\prime}(I^{\prime})=U\nu(U^{T}I^{\prime})+m.$ (15)
Since points on the level set $M_{c}$ satisfy a condition of the form
$I_{0}=I=U^{T}I^{\prime}$ for some constant $I_{0}\in\mathbb{R}^{n}$, it
follows that (11) holds on $M_{c}$, as claimed.
The vector $\nu$ is called the rotation vector of the map $\mathcal{M}$
corresponding to the level set $M_{c}$ [17]. Two rotation vectors $\nu$ and
$\nu^{\prime}$ will be said to be equivalent if there exists a relation of the
form (11). In practice, one would like a natural method to select a unique
representative from each equivalence class. In addition, one would like the
selected vector $\nu$ to vary smoothly with the invariant value
$c\in\mathbb{R}^{n}$. If the map $\mathcal{M}$ decouples when expressed using
a particular choice of canonical coordinates, then the $n$ tunes can be chosen
(up to a permutation) to correspond to rotation angles in each of the $n$
conjugate phase planes. If the system is coupled, then selecting a natural
choice of representative is a more subtle issue. However, note that the
rotation vector $\nu$ may always be chosen so that $0\leq\nu_{j}\leq 1/2$
$(j=1,\ldots,n)$.
The precise choice of the rotation vector is closely related to geometric
considerations. In the following section, we will see that there is a
correspondence between the rotation vector and the choice of certain paths
lying in the invariant torus. It is of interest to study the relationships
between the analytic properties of the rotation vector and the topology of
these curves. However, for the remainder of this paper, we content ourselves
with demonstrating that all results are valid up to an equivalence of the form
(11).
## IV Tunes from invariants
Let $\mathcal{M}$ be an integrable symplectic map with momentum mapping
$\mathcal{F}$, as defined in (4). The goal of this paper is to demonstrate
that on any regular level set of $\mathcal{F}$, the tunes
$\nu=(\nu_{1},\ldots,\nu_{n})^{T}$ may be expressed using a set of $n(n+1)$
path integrals over the level set, in the form:
$S=-\int_{\gamma}(D\mathcal{F}^{+})^{T}Jd{\zeta},$ (16a)
$R_{jk}=\left(-\oint_{\gamma_{k}}(D\mathcal{F}^{+})^{T}Jd{\zeta}\right)_{j},$ (16b)
$\nu=R^{-1}S.$ (16c)
Here $\nu$ and $S$ are real $n$-vectors, $R$ is a real $n\times n$ matrix, and
$J$ is the $2n\times 2n$ matrix of the symplectic form (2). It will be shown
that the matrix $R$ is, in fact, invertible.
In (16), $\gamma$ is a parameterized path in the level set $M_{c}$ from any
point $\zeta\in M_{c}$ to its image $\mathcal{M}(\zeta)$ under the map.
Likewise, the $\gamma_{k}$ ($k=1,\ldots,n$) are parameterized closed paths in
the level set $M_{c}$, and these must be chosen to form a basis for the group
of 1-cycles in $M_{c}$. (See Appendix A.) We will show that the resulting
value of $\nu\in\mathbb{R}^{n}$ is independent, modulo the equivalence (11),
of the choice of the paths $\gamma$ and $\gamma_{k}$. Furthermore, the precise
value of $\nu$ depends only on the topology of the curves $\gamma$ and
$\gamma_{k}$. Intuitively, the paths $(\gamma_{1},\ldots,\gamma_{n})$ specify
$n$ independent “winding directions” around the level set $M_{c}$, and the
tunes $(\nu_{1},\ldots,\nu_{n})$ specify the fraction of a cycle (in each
direction) by which a point is moved under application of the map $\mathcal{M}$.
Finally, $D\mathcal{F}^{+}$ denotes any $2n\times n$ right matrix inverse of
the $n\times 2n$ Jacobian matrix $D\mathcal{F}$. Since
$\operatorname{rank}(D\mathcal{F})=n$ on the level set $M_{c}$, such a right
inverse exists at every point on $M_{c}$. It is convenient to use the Moore-
Penrose inverse of $D\mathcal{F}$, given explicitly by:
$D\mathcal{F}^{+}=(D\mathcal{F})^{T}\left[(D\mathcal{F})(D\mathcal{F})^{T}\right]^{-1}.$
(17)
By the rank assumption on $D\mathcal{F}$, the matrix appearing in square
brackets in (17) is always invertible. It follows that the matrix elements of
$D\mathcal{F}^{+}$ are smooth, bounded functions when restricted to the level
set $M_{c}$, and the path integrals in (16) are convergent and finite.
Appendix B describes important properties of the matrix $D\mathcal{F}^{+}$
that are used in the remainder of this paper.
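As a concrete illustration of how (16)-(17) might be evaluated numerically, consider the following sketch; the function names, the uniform sampling, and the trapezoidal quadrature are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def tunes_from_invariants(DF, gamma, dgamma, cycles, dcycles, nt=2001):
    """Evaluate (16) by quadrature. DF(zeta): the n x 2n Jacobian of the
    momentum mapping; gamma/dgamma: the open path and its tangent on [0,1];
    cycles/dcycles: the closed basis cycles and their tangents."""
    t = np.linspace(0.0, 1.0, nt)

    def integral(path, dpath):
        n2 = len(np.asarray(path(0.0)))
        J = np.block([[np.zeros((n2 // 2, n2 // 2)), np.eye(n2 // 2)],
                      [-np.eye(n2 // 2), np.zeros((n2 // 2, n2 // 2))]])
        vals = []
        for ti in t:
            DFp = np.linalg.pinv(DF(path(ti)))       # Moore-Penrose, (17)
            vals.append(-DFp.T @ J @ np.asarray(dpath(ti)))
        V = np.array(vals)
        dt = t[1] - t[0]
        return 0.5 * dt * (V[:-1] + V[1:]).sum(axis=0)   # trapezoidal rule

    S = integral(gamma, dgamma)
    R = np.column_stack([integral(ck, dck) for ck, dck in zip(cycles, dcycles)])
    return np.linalg.solve(R, S)                          # nu = R^{-1} S, (16c)
```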
### IV.1 Simple Example
Consider the 2D linear symplectic map described in matrix form as:
$\begin{pmatrix}q^{f}\\\
p^{f}\end{pmatrix}=\begin{pmatrix}\cos\Psi&\sin\Psi\\\
-\sin\Psi&\cos\Psi\end{pmatrix}\begin{pmatrix}q\\\ p\end{pmatrix},$ (18)
which arises naturally in the study of the simple harmonic oscillator. In this
case $n=1$ and an invariant is given by:
$f(q,p)=\frac{1}{2}(q^{2}+p^{2}).$ (19)
The level set $M_{c}=\\{(q,p)\in\mathbb{R}^{2}:f(q,p)=c\\}$ is regular for any
$c>0$, corresponding to the circle of radius $\sqrt{2c}$ with center at the
origin. (See Fig. 1.) We therefore express the two curves $\gamma$ and
$\gamma_{1}$ appearing in (16) as:
$\gamma(t)=(\sqrt{2c}\cos\alpha(t),\sqrt{2c}\sin\alpha(t)),\quad a\leq t\leq b,$ (20a)
$\gamma_{1}(t)=(\sqrt{2c}\cos\beta(t),\sqrt{2c}\sin\beta(t)),\quad c\leq t\leq d,$ (20b)
where $\alpha$ and $\beta$ are (smooth) real-valued functions of some
parameter $t$. The definitions of $\gamma$ and $\gamma_{1}$ in (16) require
only that the functions $\alpha$ and $\beta$ satisfy:
$\alpha(b)=\alpha(a)-\Psi-2\pi m,\quad\beta(d)=\beta(c)\mp 2\pi,$ (21)
where $m$ may be any integer. (In order to serve as a basis cycle, the curve
$\gamma_{1}$ must wind around the circle exactly once, in either direction.)
One verifies using (19) that, since $\mathcal{F}=f$, we have:
$D\mathcal{F}=\begin{pmatrix}q&p\end{pmatrix},\quad\quad
D\mathcal{F}^{+}=\frac{1}{q^{2}+p^{2}}\begin{pmatrix}q\\\ p\end{pmatrix}.$
(22)
Using these results in (16) gives:
$S=-\int_{a}^{b}(D\mathcal{F}^{+})^{T}J\gamma^{\prime}(t)dt=-\int_{a}^{b}\alpha^{\prime}(t)dt=\Psi+2\pi m,$
$R=-\int_{c}^{d}(D\mathcal{F}^{+})^{T}J\gamma_{1}^{\prime}(t)dt=-\int_{c}^{d}\beta^{\prime}(t)dt=\pm 2\pi.$
This yields the following result for the tune $\nu$ of the map (18):
$\nu=R^{-1}S=\pm\left(\frac{\Psi}{2\pi}+m\right),\quad\quad m\in\mathbb{Z}.$
(23)
Figure 1: Illustration of the map (18), showing one of the level sets $M_{c}$
$(c>0)$ of the invariant $f$ in (19) and the two curves $\gamma$ (red) and
$\gamma_{1}$ (black) used to evaluate (16). Although not shown here, each
curve is allowed to change direction during transit. The curve $\gamma$ may
wind around the origin multiple times.
This result is expected, since (18) represents a clockwise rotation in the
phase space by the angle $\Psi$. If we think of the basis cycle $\gamma_{1}$
as defining an orientation of the circle $M_{c}$ (i.e., defining the clockwise
or the counterclockwise direction to be positive), then $\nu$ represents the
fraction of a cycle that is completed as we move along the curve $\gamma$,
completing one iteration of the map. The sign in (23) is determined by the
direction of $\gamma$, while the integer $m$ counts the number of complete
turns that the curve $\gamma$ winds about the origin. Note that the final
result is independent of the parametrization (20), as defined by the choices
of the functions $\alpha$ and $\beta$.
The purpose of this example is to illustrate the result (16) in its simplest
possible setting. More sophisticated examples are considered in Section VII,
in Appendices C-D, and in the reference [7].
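For instance, feeding this example into the tunes_from_invariants sketch of the previous section, with $\alpha(t)=-\Psi t$ (so $m=0$) and $\beta(t)=-2\pi t$ (one winding), might look as follows; the parameter values are arbitrary.

```python
import numpy as np

c, Psi = 0.5, 0.7
r = np.sqrt(2 * c)
DF = lambda z: np.array([[z[0], z[1]]])    # DF = (q  p), from (22)

# alpha(t) = -Psi t (so m = 0) and beta(t) = -2 pi t (one winding).
gamma  = lambda t: r * np.array([np.cos(Psi * t), -np.sin(Psi * t)])
dgamma = lambda t: r * Psi * np.array([-np.sin(Psi * t), -np.cos(Psi * t)])
cyc    = lambda t: r * np.array([np.cos(2 * np.pi * t), -np.sin(2 * np.pi * t)])
dcyc   = lambda t: 2 * np.pi * r * np.array([-np.sin(2 * np.pi * t),
                                             -np.cos(2 * np.pi * t)])

print(tunes_from_invariants(DF, gamma, dgamma, [cyc], [dcyc]))
print(Psi / (2 * np.pi))   # both ~ 0.11141, in agreement with (23)
```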
## V Properties of the Solution
In this section, we discuss the properties of the general solution (16), and
we provide its mathematical proof.
### V.1 Path integrals in the level set
If $A:M\rightarrow\mathbb{R}^{n\times 2n}$ is a smooth matrix-valued function
on the phase space, and if $\gamma:[a,b]\rightarrow M$ is a smooth
parametrized path, then an integral of the form (16) is to be interpreted as:
$\int_{\gamma}Ad\zeta=\int_{a}^{b}A(\gamma(t))\gamma^{\prime}(t)dt,$ (24)
where $\gamma^{\prime}(t)$ is the $2n$-vector tangent to $\gamma$ at $t$. For
any path $\gamma$ confined to a level set of $\mathcal{F}$, $\mathcal{F}$ is
invariant along $\gamma$, and applying the chain rule gives that:
$0=\frac{d}{dt}(\mathcal{F}\circ\gamma)(t)=D\mathcal{F}({\gamma(t)})\gamma^{\prime}(t).$
(25)
Since this holds for every such path $\gamma$, motivated by (24) we will
denote (25) more simply as:
$(D\mathcal{F})d{\zeta}=0.$ (26)
Since it follows from (26) that $Jd{\zeta}\in
J\operatorname{ker}(D\mathcal{F})$, we have from (113) that:
$(D\mathcal{F}^{+})(D\mathcal{F})Jd{\zeta}=Jd{\zeta}.$ (27)
Since $(D\mathcal{F}^{+})(D\mathcal{F})$ is symmetric, as is easily verified,
we have:
$(D\mathcal{F})^{T}(D\mathcal{F}^{+})^{T}Jd{\zeta}=Jd{\zeta}.$ (28)
The identity (28) allows us to prove many results on coordinate and path
independence of the integrals in (16).
As an example, let $B$ denote any right matrix inverse of $D\mathcal{F}$. Then
$B^{T}$ is a left inverse of $(D\mathcal{F})^{T}$. Multiplying (28) on the
left by $B^{T}$ gives:
$(D\mathcal{F}^{+})^{T}Jd{\zeta}=B^{T}Jd{\zeta},$ (29)
which shows that we could replace $D\mathcal{F}^{+}$ by any right matrix
inverse of $D\mathcal{F}$ in the integrals (16) without changing the result.
### V.2 Coordinate-independence
Let $\zeta^{\prime}$ denote a vector of new phase space coordinates related to
$\zeta$ by an arbitrary symplectic coordinate transformation $\mathcal{N}$, so
that
$\zeta^{\prime}=\mathcal{N}(\zeta).$ (30)
Let all quantities expressed in these new coordinates be denoted with ′. Then
it is straightforward to verify that:
$d\zeta^{\prime}=(D\mathcal{N})d\zeta,\quad
D\mathcal{F}^{\prime}=(D\mathcal{F})(D\mathcal{N})^{-1}.$ (31)
Since the map $\mathcal{N}$ is symplectic:
$(D\mathcal{N})^{T}J(D\mathcal{N})=J.$ (32)
To simplify notation, let $dv$ denote the form appearing in the integrals
(16), namely
$dv=(D\mathcal{F}^{+})^{T}Jd\zeta.$ (33)
Writing down the identity (28) in the primed coordinates, we have:
$(D\mathcal{F^{\prime}})^{T}{dv}^{\prime}=Jd\zeta^{\prime}.$ (34)
Making the substitutions of (31) into (34) gives:
$D\mathcal{N}^{-T}(D\mathcal{F})^{T}dv^{\prime}=J(D\mathcal{N})d\zeta.$ (35)
Multiplying both sides by $D\mathcal{N}^{T}$ gives
$(D\mathcal{F})^{T}dv^{\prime}=(D\mathcal{N})^{T}J(D\mathcal{N})d\zeta.$ (36)
Applying the symplectic condition (32) gives:
$(D\mathcal{F})^{T}dv^{\prime}=Jd\zeta.$ (37)
Finally, multiplying both sides by $(D\mathcal{F}^{+})^{T}$ and noting that
this is a left inverse of $(D\mathcal{F})^{T}$ gives:
$dv^{\prime}=(D\mathcal{F}^{+})^{T}Jd\zeta=dv.$ (38)
Since (16) can be written as:
$S=-\int_{\gamma}dv,\quad R_{jk}=\left(-\oint_{\gamma_{k}}dv\right)_{j},$ (39)
it follows from (38) that for a fixed choice of paths $\gamma$ and
$\gamma_{k}$ $(k=1,\ldots,n)$ each integral in (39) is independent of the
choice of canonical coordinates.
### V.3 Reduced forms in canonical coordinates
Consider canonical coordinates given by
${\zeta}=(q_{1},\ldots,q_{n},p_{1},\ldots,p_{n})^{T}$. We may express the
$n\times 2n$ matrix $D\mathcal{F}$ in terms of two $n\times n$ blocks, which
correspond to partial derivatives with respect to the variables
$q=(q_{1},\ldots,q_{n})$ and $p=(p_{1},\ldots,p_{n})$, respectively:
$D\mathcal{F}=\begin{pmatrix}D_{q}\mathcal{F}&D_{p}\mathcal{F}\end{pmatrix}.$
(40)
Let $dv$ be defined as in (33). Then using identity (28) gives:
$(D\mathcal{F})^{T}dv=Jd{\zeta}.$ (41)
Expressing this in terms of its $n\times n$ blocks using (2) and (40) gives:
$\begin{pmatrix}D_{q}\mathcal{F}^{T}dv\\\
D_{p}\mathcal{F}^{T}dv\end{pmatrix}=\begin{pmatrix}d{p}\\\
-d{q}\end{pmatrix}.$ (42)
In the special case that the matrix $(D_{q}\mathcal{F})^{T}$ is invertible
along the integration path, we may use the first row in (42) to give:
$dv=(D_{q}\mathcal{F})^{-T}d{p}.$ (43)
Noting the definition of $dv$ it follows that:
$S=-\int_{\gamma}(D_{q}\mathcal{F})^{-T}d{p},$ (44a)
$R_{jk}=\left(-\oint_{\gamma_{k}}(D_{q}\mathcal{F})^{-T}d{p}\right)_{j},\quad\nu=R^{-1}S.$ (44b)
Alternatively, in the special case that the matrix $(D_{p}\mathcal{F})^{T}$ is
invertible along the integration path, we may use the second row in (42) to
give:
$dv=-(D_{p}\mathcal{F})^{-T}d{q}.$ (45)
Noting the definition of $dv$ it follows that:
$S=\int_{\gamma}(D_{p}\mathcal{F})^{-T}d{q},$ (46a)
$R_{jk}=\left(\oint_{\gamma_{k}}(D_{p}\mathcal{F})^{-T}d{q}\right)_{j},\quad\nu=R^{-1}S.$ (46b)
In the special case of one degree of freedom $(n=1)$, the expression (46)
reduces to the expression appearing in [7]. Another example, for a map with
two degrees of freedom $(n=2)$ separable in polar coordinates, is provided in
Appendix D.
### V.4 Proof of the result
By the Liouville-Arnold theorem for integrable symplectic maps, there exists a
neighborhood of the level set $M_{c}$ in which there is a set of canonical
action-angle coordinates $\zeta=(\phi_{1},\ldots,\phi_{n},I_{1},\ldots,I_{n})$
in which the map takes the form
$\mathcal{M}({\phi},{I})=({\phi}^{f},{I}^{f})$, where:
${I}^{f}={I},\quad\quad{\phi}^{f}={\phi}+2\pi\nu({I})\quad\operatorname{mod}2\pi,$
(47)
and the invariants $f_{k}$ are functions of the action coordinates only, so
that:
$D_{\phi}\mathcal{F}=0,\quad\quad
D\mathcal{F}=\begin{pmatrix}0&D_{I}\mathcal{F}\end{pmatrix}.$ (48)
Since we have assumed that $D\mathcal{F}$ is of full rank, it follows from
(48) that $D_{I}\mathcal{F}$ is invertible, and we may apply the result (46)
to obtain:
${S}=\int_{\gamma}(D_{I}\mathcal{F})^{-T}d{\phi}.$ (49)
Since the invariants are functions of the action coordinates only, the matrix
$D_{I}\mathcal{F}$ is constant along the integration path $\gamma$, and we
need only evaluate an integral of the form:
$\int_{\gamma}d\phi=\Delta\phi+2\pi m,$ (50)
where $\Delta\phi=(\Delta\phi_{1},\ldots,\Delta\phi_{n})$ denotes the net
change in the angle coordinates $(\phi_{1},\ldots,\phi_{n})$, when taken to
lie in the range $[0,2\pi)$, and $m=(m_{1},\ldots,m_{n})\in\mathbb{Z}^{n}$
denotes the number of times the path $\gamma$ winds around the torus with
respect to the angles $\phi_{1},\ldots,\phi_{n}$, respectively. Using (47) and
(50) in (49) gives:
${S}=2\pi(D_{I}\mathcal{F})^{-T}(\nu+m),\quad m\in\mathbb{Z}^{n}.$ (51)
Similarly, we have
$R_{jk}=\left(\oint_{\gamma_{k}}(D_{I}\mathcal{F})^{-T}d{\phi}\right)_{j}.$
(52)
By definition, the closed paths $\gamma_{k}$ $(k=1,\ldots,n)$ form a basis for
the group of 1-cycles on $M_{c}$. Consider the coordinate curves
$\tilde{\gamma}_{k}:[0,1]\rightarrow M_{c}$, given in action-angle coordinates
by:
$\tilde{\gamma}_{k}(t)=(0,\ldots,0,2\pi t,0,\ldots,0),$ (53)
where the nontrivial entry corresponds to the $k$th angle coordinate. Then the
paths $\tilde{\gamma}_{k}$ $(k=1,\ldots,n)$ also form a basis for the group of
1-cycles on $M_{c}$. The change of basis is represented by some unimodular
integer matrix $U$, so that:
$\oint_{\gamma_{k}}d\phi=\sum_{l=1}^{n}U_{kl}\oint_{\tilde{\gamma}_{l}}d\phi.$
(54)
However,
$\oint_{\tilde{\gamma}_{l}}d\phi=\int_{0}^{1}(2\pi e_{l})dt=2\pi e_{l}.$ (55)
It follows that the $l$-th component of (54) is given by:
$\left(\oint_{\gamma_{k}}d\phi\right)_{l}=2\pi U_{kl},$ (56)
so using (52) gives:
$R=2\pi(D_{I}\mathcal{F})^{-T}U^{T}.$ (57)
Since $U^{T}$ is invertible, it follows that the matrix $R$ is invertible and
we have:
$R^{-1}S=U^{-T}(\nu+m)=U^{\prime}\nu+m^{\prime},$ (58)
where $U^{\prime}=U^{-T}$ is a unimodular integer matrix, and
$m^{\prime}=U^{-T}m$ is an $n$-vector of integers. It follows that (58) yields
the vector of tunes $\nu$ appearing in (47), up to an equivalence of the form
(11). Coordinate-independence then shows that the same is true of the
expression in (16).
More can be said. If the basis cycles $\gamma_{1},\ldots,\gamma_{n}$ are
initially chosen to be homologous to the coordinate curves
$\tilde{\gamma}_{1},\ldots,\tilde{\gamma}_{n}$, then $U^{\prime}=I_{n\times
n}$, and (58) correctly yields the vector of tunes $\nu$ modulo 1. Otherwise,
by making a change of coordinates of the form (13), one may transform to
action-angle coordinates in which the tunes appearing in (58) are equal to
those in (47), modulo 1. Thus we may assume, without loss of generality, that
the initial action-angle coordinates are chosen such that the coordinate
curves (53) are homologous to the basis cycles $\gamma_{k}$ $(k=1,\ldots,n)$.
In this way, the choice of basis cycles fixes the tunes uniquely mod 1.
This proof also demonstrates that the expression (16) is independent of the
choice of the initial point $\zeta$ and the paths $\gamma$, $\gamma_{k}$. This
occurs because we can transform to coordinates in which the integrand is
constant along these paths, and the path dependence of each integral is
determined only by the net change in the angular coordinates along each path.
In particular, the result depends only on the homotopy class of the paths
$\gamma$ and $\gamma_{k}$.
### V.5 Changing the set of invariants
In the previous subsection, we showed that (16) correctly produces the
dynamical tunes of the map $\mathcal{M}$. The proof uses the fact that (16) is
invariant under a change of coordinates for the domain of $\mathcal{F}$ (the
phase space). In fact, (16) is also invariant under a change of coordinates
for the range of $\mathcal{F}$ (which is $\mathbb{R}^{n}$). More precisely,
let $f^{\prime}=(f_{1}^{\prime},\ldots,f_{n}^{\prime})$ denote a new set of
invariants that is related to the previous set of invariants
$f=(f_{1},\ldots,f_{n})$ through a smooth coordinate transformation
$\mathcal{A}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$, so that
${f}^{\prime}=\mathcal{A}({f}).$ (59)
Let all quantities expressed in these new coordinates be denoted with ′. Then
by definition we have:
$\mathcal{F}^{\prime}=\mathcal{A}\circ\mathcal{F},\quad\quad
D\mathcal{F}^{\prime}=(D\mathcal{A})(D\mathcal{F}).$ (60)
Let the quantity $dv$ be defined as in (33). The identity (28) in the primed
coordinates is:
$(D\mathcal{F}^{\prime})^{T}dv^{\prime}=J^{\prime}d\zeta^{\prime}.$ (61)
Using (60) and noting that $J^{\prime}=J$ and $d\zeta^{\prime}=d\zeta$ gives
$(D\mathcal{F})^{T}(D\mathcal{A})^{T}dv^{\prime}=Jd\zeta.$ (62)
Multiplying both sides by $(D\mathcal{F}^{+})^{T}$ and noting that this is a
left inverse of $(D\mathcal{F})^{T}$ gives:
$(D\mathcal{A})^{T}dv^{\prime}=(D\mathcal{F}^{+})^{T}Jd\zeta=dv.$ (63)
Thus, we have:
$dv^{\prime}=(D\mathcal{A})^{-T}dv.$ (64)
Since the level sets of $\mathcal{F}$ and $\mathcal{F}^{\prime}$ coincide, we
assume that we use the same paths $\gamma$ and $\gamma_{k}$ to integrate (64)
on both sides of the equality. Note that $D\mathcal{A}$ is evaluated at the
point $\mathcal{F}(\zeta)$, so it depends on the invariants only and is
therefore constant along the integration path. It follows that:
${S}^{\prime}=(D\mathcal{A})^{-T}{S},\quad\quad
R^{\prime}=(D\mathcal{A})^{-T}R,$ (65)
and therefore
${\nu}^{\prime}=R^{-1}(D\mathcal{A})^{T}(D\mathcal{A})^{-T}{S}=R^{-1}S={\nu}.$
(66)
This shows that the vector of tunes ${\nu}\in\mathbb{R}^{n}$ does not change
under a transformation (59) of the invariants.
One may simplify the proof in the previous subsection as follows. In addition
to using action-angle coordinates to evaluate (16), one may choose to
transform the invariants $(f_{1},\ldots,f_{n})$ to the set of action
coordinates $(I_{1},\ldots,I_{n})$ using an invertible transformation
$\mathcal{A}({f})={I}$. Using these coordinates for the domain and range of
$\mathcal{F}$, we have $D_{I}\mathcal{F}^{\prime}=I_{n\times n}$, the
identity, and the integrals (49,52) take a trivial form. We chose not to take
this approach, in order to illustrate explicitly the path independence of the
separate factors ${S}$ and $R$.
## VI Frequencies of Hamiltonian flows
Let $\mathcal{M}$ denote the period-1 map associated with an integrable
Hamiltonian $H$. Expressing the dynamics in action-angle form, we have:
$I(t)=I(0),\quad{\phi}(t)={\phi}(0)+{\omega}({I(0)})t,$ (67)
where the frequency vector $\omega=(\omega_{1},\ldots,\omega_{n})$ is given
by:
$\omega_{k}=\frac{\partial H}{\partial I_{k}}.$ (68)
The period-1 map is given by $\mathcal{M}({\phi},I)=({\phi}^{f},I^{f})$, where
$I^{f}=I,\quad{\phi}^{f}={\phi}+2\pi\nu({I}),\quad{\nu}=\frac{{\omega}}{2\pi}.$
(69)
It follows that we may apply the result for integrable maps (16) to extract
the frequency vector ${\omega}$ without knowledge of the actions $I$ that
appear in (68).
Of the many available choices for the path $\gamma$, we may choose an integral
curve of the Hamiltonian flow. Along this curve,
$\frac{d{\zeta}}{dt}={J}\nabla H({\zeta}).$ (70)
Assume that $H$ is given by some function $G$ of the invariants, so that
$H=G\circ\mathcal{F}$. Then
$DH=(DG)(D\mathcal{F}),$ (71)
and
$\frac{d{\zeta}}{dt}=J(D\mathcal{F})^{T}(DG)^{T}.$ (72)
Using this as the path $\gamma$ in (16), and noting that application of the
map corresponds to moving from $t=0$ to $t=1$:
${S}=\int_{0}^{1}(D\mathcal{F}^{+})^{T}(D\mathcal{F})^{T}(DG)^{T}dt.$ (73)
Since $(D\mathcal{F}^{+})^{T}$ is a left inverse of $(D\mathcal{F})^{T}$, and
the matrix $DG$ is constant along the path, it follows that:
${S}=DG^{T},\quad\quad{\nu}=R^{-1}DG^{T}.$ (74)
In the special case that $H=f_{1}$, then $DG^{T}={e}_{1}$ and
${\omega}=2\pi R^{-1}{e}_{1},$ (75)
where ${e}_{1}=(1,0,0,\ldots,0)^{T}$. Note that the result (74) no longer
requires explicit knowledge of the period-1 map $\mathcal{M}$, which has been
eliminated in favor of the Hamiltonian $H$.
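In code, once the matrix $R$ of cycle integrals has been computed (for example with the sketch of Section IV), equation (75) is a single linear solve; the helper below is an illustrative wrapper.

```python
import numpy as np

def flow_frequencies(R):
    """Frequency vector omega = 2 pi R^{-1} e_1 of (75), valid when H = f_1.
    R is the n x n matrix of cycle integrals from (16b)."""
    e1 = np.zeros(R.shape[0])
    e1[0] = 1.0
    return 2 * np.pi * np.linalg.solve(R, e1)
```

For a general $H=G\circ\mathcal{F}$, one replaces $e_{1}$ by $(DG)^{T}$ evaluated at the level-set value $c$, following (74).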
Let us check the coordinate-invariant expression (74) by evaluating the matrix
$R$ using action-angle coordinates. In these coordinates,
$R_{jk}=\left(\oint_{\gamma_{k}}(D_{I}\mathcal{F})^{-T}d{\phi}\right)_{j}.$
(76)
Since the matrix $D_{I}\mathcal{F}$ is constant along the integration path, it
follows that:
$R=2\pi(D_{I}\mathcal{F})^{-T}.$ (77)
But then:
${\nu}=R^{-1}{S}=\frac{1}{2\pi}(D_{I}\mathcal{F})^{T}(DG)^{T}.$ (78)
Finally, evaluating expression (71) in terms of its $n\times n$ blocks gives:
$\begin{pmatrix}D_{\phi}H&D_{I}H\end{pmatrix}=DG\begin{pmatrix}D_{\phi}\mathcal{F}&D_{I}\mathcal{F}\end{pmatrix},$
(79)
so that:
$D_{I}H=(DG)(D_{I}\mathcal{F}).$ (80)
Using this result in (78) and multiplying by $2\pi$ then gives:
${\omega}=(D_{I}H)^{T},\quad\text{or}\quad\omega_{j}=\frac{\partial
H}{\partial I_{j}},$ (81)
which is (68).
## VII Numerical Examples
To illustrate the application of (16) using practical examples, the results of
this paper were used to determine: 1) the dynamical frequencies of one
nonlinear Hamiltonian flow, and 2) the tunes of one nonlinear symplectic map,
both defined on the phase space $\mathbb{R}^{4}$. Appendix C illustrates in
detail how (16) can also be used to correctly produce the tunes of a stable
linear symplectic map of arbitrary dimension.
### VII.1 Integrable Hénon-Heiles Hamiltonian
Consider the Hamiltonian given by (for $\lambda>0$):
$H=\frac{1}{2}\left(p_{x}^{2}+p_{y}^{2}+x^{2}+y^{2}\right)+\lambda\left(x^{2}y+\frac{y^{3}}{3}\right).$
(82)
This is the usual Hénon-Heiles Hamiltonian [18], except that the sign of the
$y^{3}$ term is reversed. It is known that (82) is integrable, with two
invariants of the form [19, 20]:
$f_{1}=H,\quad\quad
f_{2}=p_{x}p_{y}+xy+\lambda\left(xy^{2}+\frac{x^{3}}{3}\right).$ (83)
An analysis of (83) shows that an invariant level set $M_{c}$ for some
$c\in\mathbb{R}^{2}$ contains a connected component $M_{c}^{0}$ near the
origin that is regular and compact provided that:
$0\leq c_{1}-c_{2}\leq\frac{1}{6\lambda^{2}},\quad\quad 0\leq c_{1}+c_{2}\leq\frac{1}{6\lambda^{2}}.$ (84)
For orbits on $M_{c}^{0}$, we wish to evaluate the characteristic frequency
vector ${\omega}=(\omega_{1},\omega_{2})^{T}$ using (75).
To evaluate the path integrals appearing in the matrix $R$, we need to choose
two basis cycles $\gamma_{1}$, $\gamma_{2}$ lying in the two-dimensional
surface $M_{c}^{0}$. One approach is to consider the curve obtained by
intersecting $M_{c}^{0}$ with the hyperplane $y=kx$ $(k\in\mathbb{R})$. Using
(83) to solve for $p_{x}$ and $p_{y}$ locally in terms of the coordinates $x$,
$y$ and setting $y=kx$ gives the parameterized curve segment:
$t\mapsto(t,kt,p_{x}(t),p_{y}(t)),$ (85)
where $p_{x}(t)$ is given by:
$p_{x}(t)=\pm\sqrt{\frac{1}{2}(c_{1}+c_{2})-\frac{1}{4}(k+1)^{2}t^{2}-\frac{\lambda}{6}(k+1)^{3}t^{3}}\pm\sqrt{\frac{1}{2}(c_{1}-c_{2})-\frac{1}{4}(k-1)^{2}t^{2}-\frac{\lambda}{6}(k-1)^{3}t^{3}},$ (86)
and the signs of the two terms may be chosen independently. In each case,
$p_{y}(t)$ is given by reversing the sign of the second term in (86). To
construct the cycle $\gamma_{1}$, one must then paste together curve segments
that utilize the appropriate signs in (86) to produce a closed path. For
convenience, the closed path $\gamma_{2}$ is obtained using the same
procedure, for the choice of intersecting hyperplane $y=-kx$. Independence of
the two cycles $\gamma_{1}$ and $\gamma_{2}$ will be explored momentarily.
In the coordinates $(x,y,p_{x},p_{y})$, note that the Jacobian matrix of the
momentum mapping is given by:
$D\mathcal{F}=\begin{pmatrix}x+2\lambda
xy&y+\lambda(x^{2}+y^{2})&p_{x}&p_{y}\\\
y+\lambda(x^{2}+y^{2})&x+2xy\lambda&p_{y}&p_{x}\end{pmatrix},$ (87)
and its Moore-Penrose inverse (17) can be evaluated explicitly. Alternatively,
we may use only the $2\times 2$ momentum block $D_{p}\mathcal{F}$ by applying
(46), provided we avoid points where $p_{x}=0$ or $p_{y}=0$. Evaluating the
integrals in (16) numerically along the paths $\gamma_{1}$ and $\gamma_{2}$ to
produce the matrix $R$, and using (75) to produce the frequency vector
$\omega$ yields the results shown in Fig. 2.
This system can also be solved exactly. Note that by making the symplectic
coordinate transformation:
$q_{1}=\frac{1}{\sqrt{2}}(y+x),\quad p_{1}=\frac{1}{\sqrt{2}}(p_{y}+p_{x}),$ (88)
$q_{2}=\frac{1}{\sqrt{2}}(y-x),\quad p_{2}=\frac{1}{\sqrt{2}}(p_{y}-p_{x}),$ (89)
the Hamiltonian decouples as:
$H=H_{1}+H_{2},\quad\quad
H_{j}=\frac{1}{2}\left(p_{j}^{2}+q_{j}^{2}\right)+\frac{\lambda\sqrt{2}}{3}q_{j}^{3},$
(90)
and the invariants take the form:
$f_{1}=H_{1}+H_{2},\quad\quad f_{2}=H_{1}-H_{2}.$ (91)
Periodic motion in the coordinate $q_{j}$ $(j=1,2)$ occurs between two turning
points $q_{j}^{min}$, $q_{j}^{max}$ when:
$0\leq H_{j}\leq\frac{1}{12\lambda^{2}}=H_{max},$ (92)
with period given by:
$T_{j}=\oint\left(\frac{dq_{j}}{dt}\right)^{-1}dq_{j}=2\int_{{q_{j}^{min}}}^{q_{j}^{max}}\frac{dq_{j}}{\sqrt{2H_{j}-q_{j}^{2}-2\lambda\sqrt{2}q_{j}^{3}/3}}.$
(93)
The corresponding frequency $\omega_{j}=2\pi/T_{j}$ is given explicitly by:
$\omega_{j}=\frac{\pi\sqrt{\zeta_{bj}-\zeta_{aj}}}{\sqrt{6}K\left(\frac{\zeta_{cj}-\zeta_{bj}}{\zeta_{aj}-\zeta_{bj}}\right)},$
(94)
where $K$ denotes the complete elliptic integral of the first kind, and
$\zeta_{aj}$, $\zeta_{bj}$, $\zeta_{cj}$ denote the three roots of the
polynomial:
$P_{j}(\zeta)=2\zeta^{3}+3\zeta^{2}-(H_{j}/H_{max}),$ (95)
ordered such that $\zeta_{aj}<\zeta_{bj}<0<\zeta_{cj}$ for $j=1,2$.
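The exact expression (94) is straightforward to evaluate numerically; the sketch below assumes that $K$ takes the parameter $m$, as in scipy.special.ellipk (a convention assumption), and recovers the small-oscillation limit $\omega\rightarrow 1$ as $H_{j}\rightarrow 0$.

```python
import numpy as np
from scipy.special import ellipk

def omega_exact(Hj, lam=1.0):
    """Evaluate (94) for one decoupled degree of freedom of (90)."""
    Hmax = 1.0 / (12.0 * lam**2)
    roots = np.sort(np.roots([2.0, 3.0, 0.0, -Hj / Hmax]).real)
    za, zb, zc = roots                      # zeta_a < zeta_b < 0 < zeta_c
    mpar = (zc - zb) / (za - zb)            # negative; ellipk accepts m < 1
    return np.pi * np.sqrt(zb - za) / (np.sqrt(6.0) * ellipk(mpar))

print(omega_exact(1e-8))        # -> ~1.0, the small-oscillation limit
print(omega_exact(0.9 / 12.0))  # closer to the separatrix, omega < 1
```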
Figure 2 shows a comparison between the result obtained by numerically
evaluating the path integrals in (75) and the exact solution in (94). This
result is shown for $k=1/2$. By varying $k$, one may study the dependence on
the choice of cycles $\gamma_{1}$ and $\gamma_{2}$. For example, Fig. 3 shows
that the frequencies $\omega_{1}$, $\omega_{2}$ on the level set
$(c_{1},c_{2})=(0.1,0.03)$ are independent of $k$, for $0.4<k<4.5$. Beyond
this range, the two cycles obtained by intersecting $M_{c}^{0}$ with the
hyperplanes $y=kx$ and $y=-kx$ fail to be independent, and the matrix $R$ is
not invertible. In this case, at least one of the two cycles must be modified
if (75) is to be used.
Figure 2: Frequencies of the Hamiltonian (82) with $\lambda=1$, shown for the level set $M_{c}^{0}$ defined by $(f_{1},f_{2})=(c_{1},c_{2})$. Dots correspond to the analytical expression given in (94), while solid curves correspond to the result obtained using (16). (a) The value $\omega_{1}$ is shown for $6\lambda^{2}(c_{1}-c_{2})=1/2$. (b) The value $\omega_{2}$ is shown for $6\lambda^{2}(c_{1}+c_{2})=1/2$. In both cases, a separatrix is approached as the horizontal axis approaches 1.

Figure 3: Demonstration that the frequencies of the Hamiltonian (82) ($\lambda=1$) obtained using (75) are unchanged under deformation of the cycles $\gamma_{1}$ and $\gamma_{2}$. These are defined by intersection of the level set $M_{c}^{0}$ with the hyperplanes $y=kx$ and $y=-kx$, respectively. The results are shown for the case $c_{1}=0.1$, $c_{2}=0.03$.
### VII.2 Integrable 4D McMillan Map
Consider the symplectic map
$\mathcal{M}:\mathbb{R}^{4}\rightarrow\mathbb{R}^{4}$ given by
$\mathcal{M}(x,y,p_{x},p_{y})=(x^{f},y^{f},p_{x}^{f},p_{y}^{f})$, where:
$x^{f}=p_{x},\quad\quad p_{x}^{f}=-x+\frac{ap_{x}}{1+b(p_{x}^{2}+p_{y}^{2})},$ (96a)
$y^{f}=p_{y},\quad\quad p_{y}^{f}=-y+\frac{ap_{y}}{1+b(p_{x}^{2}+p_{y}^{2})},$ (96b)
and $a,b>0$. This is a 4D analogue of the so-called McMillan mapping [21]. It
is known that (96) is integrable, with two invariants of the form:
$f_{1}=x^{2}+y^{2}+p_{x}^{2}+p_{y}^{2}-a(xp_{x}+yp_{y})+b(xp_{x}+yp_{y})^{2},$ (97a)
$f_{2}=xp_{y}-yp_{x}.$ (97b)
We wish to evaluate the tunes of this map using (16).
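Before evaluating path integrals it is prudent to confirm the invariants numerically; the following sketch iterates (96) and checks that (97) is preserved along the orbit (the parameter values and initial condition are arbitrary).

```python
import numpy as np

def mcmillan(z, a=1.6, b=1.0):
    """One application of the 4D McMillan map (96)."""
    x, y, px, py = z
    d = 1.0 + b * (px**2 + py**2)
    return np.array([px, py, -x + a * px / d, -y + a * py / d])

def invariants(z, a=1.6, b=1.0):
    """The two invariants (97)."""
    x, y, px, py = z
    tau = x * px + y * py
    return np.array([x**2 + y**2 + px**2 + py**2 - a * tau + b * tau**2,
                     x * py - y * px])

z0 = np.array([0.3, -0.1, 0.2, 0.4])
z = z0.copy()
for _ in range(1000):
    z = mcmillan(z)
print(invariants(z) - invariants(z0))   # ~ [0, 0] up to roundoff
```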
The cycles $\gamma_{1}$ and $\gamma_{2}$ can be defined, as before, by taking
the intersection of $M_{c}$ with hypersurfaces of the form $G_{j}(x,y)=0$ for
smooth functions $G_{j}$ $(j=1,2)$, chosen to make $\gamma_{1}$ and
$\gamma_{2}$ independent. One must also choose an arbitrary initial point
$\zeta\in M_{c}$ and a path $\gamma$ to its image $\mathcal{M}(\zeta)$. An
example of a regular invariant level set is shown in Fig. 4, together with two
independent basis cycles $\gamma_{1}$ and $\gamma_{2}$, and the path $\gamma$.
In the coordinates $(x,y,p_{x},p_{y})$, note that the Jacobian matrix of the
momentum mapping is given by:
$D_{q}\mathcal{F}=\begin{pmatrix}-ap_{x}+2(x+bp_{x}\tau)&-ap_{y}+2(y+bp_{y}\tau)\\\
p_{y}&-p_{x}\end{pmatrix},$ (98)
$D_{p}\mathcal{F}=\begin{pmatrix}-ax+2(p_{x}+bx\tau)&-ay+2(p_{y}+by\tau)\\\
-y&x\end{pmatrix},$ (99)
where $\tau=xp_{x}+yp_{y}$. Using these results, the integrals in (16) can be
evaluated numerically to obtain the rotation vector $\nu$ as a function of the
two invariants.
Figure 4: (Orange) Level set $(f_{1},f_{2})=(2,0.5)$ of the 4D McMillan map
(96) with $a=1.6$, $b=1$. The apparent self-intersections of the 2D surface
are an artifact of projection into $\mathbb{R}^{3}$. This is shown together
with examples of basis cycles $\gamma_{1}$ and $\gamma_{2}$ and the path
$\gamma$ used to evaluate the tunes $\nu_{1}$, $\nu_{2}$ from (16).
This system can also be solved exactly [7]. Figure 5 shows a comparison
between the exact solution provided in [7] and the solution obtained using the
above procedure. The agreement confirms that the tunes can be accurately
determined from (16), without the construction of action-angle coordinates or
knowledge of a coordinate system in which the dynamics is separable.
Figure 5: Tunes $\nu_{1}$, $\nu_{2}$ of the 4D McMillan map (96) with $a=1.6$,
$b=1$, shown for the invariant level set defined by
$(f_{1},f_{2})=(c_{1},c_{2})$. Dots correspond to the analytical expression
given in [7], while solid curves correspond to the result obtained using (16).
Compare Figure 5 of [7].
## VIII Conclusions
Integrable Hamiltonian systems and symplectic maps play important roles in
many areas of science, as well as providing an active area of contemporary
mathematical research [3]. However, the standard techniques for exact solution
of these systems are difficult to apply, except in the simplest cases. This
paper provides an explicit formula (16) that connects the $n$ tunes of an
integrable symplectic map (on a phase space of dimension $2n$) with its $n$
invariants of motion. The same formula can be used to extract the $n$
dynamical frequencies of a Hamiltonian flow (Section VI). By construction, the
formula is invariant under a canonical (symplectic) change of coordinates and
can be expressed in a geometric form that is coordinate-free. The construction
of action-angle coordinates is not required.
This formula is consistent with an expression previously obtained for 2D
integrable symplectic maps [7], and it reproduces exactly known results for
dynamical frequencies that have been independently obtained for several
nonlinear benchmark problems (Section VII). A demonstration that this result
correctly reproduces the tunes of a linear symplectic map of any dimension is
found in Appendix C, and additional special cases of low dimension are treated
in Appendix D.
In practice, this formula can be used to extract the dynamical frequencies of
the orbits of an integrable system without the need for numerical tracking,
which is especially useful when studying the dependence of the dynamical
frequencies on the choice of the initial condition or system parameters.
Evaluation of (16) requires only that one parameterize a set of paths in the
invariant level set, which is often done by solving locally for one of the
phase space variables in terms of the others. Note that this result can also
be applied to extract approximate dynamical frequencies of orbits (of a
symplectic map or a Hamiltonian flow) when a sufficient number of approximate
invariants are known.
Most importantly, the expression (16) captures, in a precise way, the
connection between the geometry of an integrable system and its dynamical
behavior, providing first-principles insight into the physics of such systems.
## IX Acknowledgments
The authors thank A. Valishev and the IOTA collaboration team at Fermilab for
discussions. This work was supported by the Director, Office of Science of the
U.S. Department of Energy under Contracts No. DE-AC02-05CH11231 and DE-
AC02-07CH11359, and made use of computer resources at the National Energy
Research Scientific Computing Center. The authors acknowledge support from the
U.S. DOE Early Career Research Program under the Office of High Energy
Physics.
## Appendix A: Cycles on the Torus
The closed paths $\gamma_{k}$ $(k=1,\ldots,n)$ appearing in (16) must lie
within the invariant level set $M_{c}$, and they must form a basis for the
group of 1-cycles on $M_{c}$. A proper discussion of the latter condition
requires the use of (singular) homology [22]. However, intuition for this
condition can be obtained by visualizing several examples for the special case
when $n=2$ (dimension 4).
In this case, each regular level set $M_{c}$ can be smoothly deformed into the
standard 2-torus, defined by:
$\mathbb{T}^{2}=\\{(q_{1},q_{2},p_{1},p_{2})\in\mathbb{R}^{4}:(\forall
j)q_{j}^{2}+p_{j}^{2}=1\\}.$ (100)
Let $q:\mathbb{R}^{2}\rightarrow\mathbb{T}^{2}$ denote the function given by:
$q(t_{1},t_{2})=(\cos 2\pi t_{1},\cos 2\pi t_{2},\sin 2\pi t_{1},\sin 2\pi
t_{2}).$ (101)
Let $\gamma:[a,b]\rightarrow\mathbb{T}^{2}$ be any continuous path with
$\gamma(a)=\gamma(b)$. A lift of $\gamma$ is a continuous map
$\tilde{\gamma}:[a,b]\rightarrow\mathbb{R}^{2}$ such that
$\gamma=q\circ\tilde{\gamma}$. For any closed path $\gamma$, define its index
by:
$[\gamma]=\tilde{\gamma}(b)-\tilde{\gamma}(a)\in\mathbb{Z}^{2}.$ (102)
It can be verified that the index does not depend on the specific choice of
the lift $\tilde{\gamma}$. It is also invariant under continuous deformations
of the path $\gamma$. Intuitively, $[\gamma]$ is a pair of integers denoting
how many times the path $\gamma$ “winds around” the torus with respect to each
of its two “holes”. Two closed paths $\gamma_{1}$ and $\gamma_{2}$ will be
said to form a basis for the group of 1-cycles on $\mathbb{T}^{2}$ when
$[\gamma_{1}]$ and $[\gamma_{2}]$ form a basis for the additive group
$\mathbb{Z}^{2}$ over the integers.
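Numerically, the index of a sampled closed path on $\mathbb{T}^{2}$ can be computed from a lift by angle unwrapping, as in the following sketch (the sampling density is an assumption; the path must be resolved finely enough for unwrapping to succeed).

```python
import numpy as np

def index_of_cycle(qs, ps):
    """Winding index [gamma] of (102) for a closed path sampled on T^2:
    qs, ps are (N, 2) arrays of (q1, q2) and (p1, p2) along the path,
    with first and last samples equal. Angle unwrapping supplies a lift."""
    lifted = np.unwrap(np.arctan2(ps, qs), axis=0)
    return np.rint((lifted[-1] - lifted[0]) / (2 * np.pi)).astype(int)

# The coordinate cycle with lift (103): t -> (t, 0), so the index is (1, 0).
t = np.linspace(0.0, 1.0, 400)
qs = np.stack([np.cos(2 * np.pi * t), np.ones_like(t)], axis=1)
ps = np.stack([np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
print(index_of_cycle(qs, ps))   # [1 0]
```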
The simplest example of a basis on $\mathbb{T}^{2}$ is shown in Fig. 6(a). The
paths $\gamma_{1}$ and $\gamma_{2}$ can be represented by the lifts:
$\tilde{\gamma}_{1}(t)=(t,0),\quad\tilde{\gamma}_{2}(t)=(0,t),\quad 0\leq t\leq 1,$ (103)
so that $[\gamma_{1}]=(1,0)$ and $[\gamma_{2}]=(0,1)$. Any two paths obtained by continuous deformation of $\gamma_{1}$ and $\gamma_{2}$ also form a basis.
Fig. 6(b) illustrates an example of two closed paths that do not form a basis
on $\mathbb{T}^{2}$. In fact, if $-\gamma_{2}$ denotes the path $\gamma_{2}$
traversed in the opposite direction, then the path $-\gamma_{2}$ can be
continuously deformed into $\gamma_{1}$.
A less obvious example of a basis on $\mathbb{T}^{2}$ is given in Fig. 6(c).
In this example, $[\gamma_{1}]=(0,1)$ and $[\gamma_{2}]=(1,-1)$. The number of
such possible bases is infinite, but bases whose cycles have larger index
become increasingly difficult to visualize.
Figure 6: Examples of 1-cycles on the torus $\mathbb{T}^{2}$. One of the two
holes has been made larger than the other, in order to embed the torus in
$\mathbb{R}^{3}$ without self-intersection. (a) Two basis cycles with
$[\gamma_{1}]=(1,0)$ and $[\gamma_{2}]=(0,1)$. (b) Two cycles that do not form
a basis, with $[\gamma_{1}]=(1,0)$, $[\gamma_{2}]=(-1,0)$. (c) Two basis
cycles with $[\gamma_{1}]=(1,-1)$ and $[\gamma_{2}]=(0,1)$.
## Appendix B: Properties of the Moore-Penrose inverse
The Poisson bracket condition that $\\{f_{j},f_{k}\\}=0$ for all $j$, $k$ is
equivalent to the matrix identity:
$(D\mathcal{F})J(D\mathcal{F})^{T}=0.$ (104)
It follows from (17) and (104) that $D\mathcal{F}^{+}$ satisfies the two
conditions:
$(D\mathcal{F})(D\mathcal{F}^{+})=I_{n\times
n},\quad(D\mathcal{F}){J}(D\mathcal{F}^{+})=0.$ (105)
Consider the linear map corresponding to $(D\mathcal{F}^{+})(D\mathcal{F})$.
This map is a linear projection since:
$(D\mathcal{F}^{+}D\mathcal{F})^{2}=D\mathcal{F}^{+}D\mathcal{F}.$ (106)
We examine its null space ($\operatorname{ker}$) and range
($\operatorname{im}$). Using the leftmost identity in (105), we obtain:
$\operatorname{ker}(D\mathcal{F}^{+}D\mathcal{F})=\operatorname{ker}(D\mathcal{F}).$
(107)
Similarly, it follows from the rightmost identity in (105) that:
$\operatorname{im}(D\mathcal{F}^{+}D\mathcal{F})\subseteq\operatorname{ker}(D\mathcal{F}{J}).$
(108)
It is straightforward to verify that
$\operatorname{ker}(D\mathcal{F}{J})=J\operatorname{ker}(D\mathcal{F})$ (109)
and since $J$ is invertible,
$\operatorname{dim}(J\operatorname{ker}(D\mathcal{F}))=\operatorname{dim}(\operatorname{ker}(D\mathcal{F})).$
(110)
Since $\operatorname{rank}(D\mathcal{F})=n$ by assumption, it follows by the
rank-nullity theorem that
$\operatorname{dim}(\operatorname{ker}(D\mathcal{F}))=n$. By (107-110), the
two subspaces in (108) have the same dimension $n$, and it follows that they
coincide:
$\operatorname{im}(D\mathcal{F}^{+}D\mathcal{F})=J\operatorname{ker}(D\mathcal{F}).$
(111)
Thus, at every point in the phase space $M$ we have the direct-sum
decomposition:
$\mathbb{R}^{2n}=\operatorname{ker}(D\mathcal{F})\oplus
J\operatorname{ker}(D\mathcal{F}),$ (112)
and the projection $P$ onto the second summand is given by:
$P=(D\mathcal{F}^{+})(D\mathcal{F}).$ (113)
The two conditions (105) therefore determine $D\mathcal{F}^{+}$ uniquely. For
if $B$ is any $2n\times n$ matrix satisfying the two conditions (105), then for any vector $\zeta\in\mathbb{R}^{n}$,
$(D\mathcal{F})JB\zeta=0,$ (114)
so that $B\zeta$ lies in
$\operatorname{ker}(D\mathcal{F}J)=J\operatorname{ker}(D\mathcal{F})$, and
therefore:
$B\zeta=PB\zeta=(D\mathcal{F}^{+})(D\mathcal{F})B\zeta=(D\mathcal{F}^{+})\zeta.$
(115)
The results (112-113) are used in Section V.1.
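As a quick numerical sanity check, the explicit formula (17) agrees with the Moore-Penrose inverse computed by standard linear-algebra routines whenever $D\mathcal{F}$ has full row rank; the random test matrix below does not enforce the Poisson-commutation condition (104), so only the left identity in (105) is verified.

```python
import numpy as np

rng = np.random.default_rng(0)
DF = rng.standard_normal((2, 4))                  # n = 2; full row rank a.s.
DFp = DF.T @ np.linalg.inv(DF @ DF.T)             # explicit formula (17)
assert np.allclose(DFp, np.linalg.pinv(DF))       # matches the pseudoinverse
assert np.allclose(DF @ DFp, np.eye(2))           # left identity in (105)
```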
## Appendix C: Treatment of Linear Maps
Consider a linear symplectic map on the phase space $M=\mathbb{R}^{2n}$, represented by a $2n\times 2n$ real symplectic matrix $T$ (we reserve the symbol $R$ for the matrix of cycle integrals in (16)). Suppose that the $2n$ eigenvalues of $T$ are distinct and lie on the unit circle. It follows that the eigenvalues of $T$ occur in complex-conjugate pairs, and one may select $n$ eigenvalues $\lambda_{j}$ and (complex) eigenvectors $\psi_{j}$ so that for $j=1,\ldots,n$:
$T\psi_{j}=\lambda_{j}\psi_{j},\quad\quad T\bar{\psi}_{j}=\bar{\lambda}_{j}\bar{\psi}_{j},\quad\quad|\lambda_{j}|=1.$ (116)
Following [23], we introduce the angular bracket notation:
$\langle{u,v\rangle}=-i\bar{u}^{T}Jv,\quad\quad u,v\in\mathbb{C}^{2n}.$ (117)
Then the eigenvectors $\psi_{j}$ may be indexed and normalized such that for
$l,m=1,\ldots,n$:
$\langle{\psi_{l},\psi_{m}\rangle}=\delta_{l,m},$ (118a)
$\langle{\bar{\psi}_{l},\bar{\psi}_{m}\rangle}=-\delta_{l,m},$ (118b)
$\langle{\psi_{l},\bar{\psi}_{m}\rangle}=\langle{\bar{\psi}_{l},\psi_{m}\rangle}=0.$ (118c)
Since the eigenvalues $\lambda_{j}$, $\bar{\lambda}_{j}$ $(j=1,\ldots,n)$ are
all distinct, the vectors $\psi_{j}$,$\bar{\psi}_{j}$ $(j=1,\ldots,n)$ form a
basis for $\mathbb{C}^{2n}$. Using this fact, together with the conditions
(118), it follows that any $\zeta\in\mathbb{R}^{2n}$ may be written uniquely
as:
$\zeta=2\mathcal{R}e\sum_{k=1}^{n}\langle{\zeta,\psi_{k}\rangle}\psi_{k}.$ (119)
Consider the set of quadratic functions $f_{k}$ given for
$\zeta\in\mathbb{R}^{2n}$ by:
$f_{k}(\zeta)=\left|\langle{\zeta,\psi_{k}\rangle}\right|^{2}\quad\quad(k=1,\ldots,n).$
(120)
Then each $f_{k}$ is invariant under the linear map since:
$f_{k}(T\zeta)=\left|\langle{T\zeta,\psi_{k}\rangle}\right|^{2}=\left|\langle{\zeta,T^{-1}\psi_{k}\rangle}\right|^{2}=f_{k}(\zeta).$ (121)
To obtain the second equality, we used the symplectic condition $T^{T}JT=J$, and to obtain the third equality, we used the facts that $T^{-1}\psi_{k}=\lambda_{k}^{-1}\psi_{k}$ and $|\lambda_{k}^{-1}|=1$, which follow from (116).
Using (120), one may verify that the Jacobian matrix $Df_{k}(\zeta)$ at the
point $\zeta\in\mathbb{R}^{2n}$ acts on vectors $v$ to give:
$Df_{k}(\zeta)v=2\mathcal{R}e\langle{\zeta,\psi_{k}\rangle}\langle{\psi_{k},v\rangle},\quad
v\in\mathbb{R}^{2n}.$ (122)
Likewise, the Jacobian matrix of the momentum mapping $D\mathcal{F}(\zeta)$ at
any point $\zeta\in\mathbb{R}^{2n}$ becomes:
$D\mathcal{F}(\zeta)=\begin{pmatrix}Df_{1}(\zeta)\\\ \vdots\\\
Df_{n}(\zeta)\end{pmatrix}.$ (123)
Using (123), the Poisson bracket condition (104) takes the form:
$(Df_{j})J(Df_{k})^{T}=0,\quad j,k=1,\ldots,n$ (124)
where we have suppressed the dependence on $\zeta$. This follows from the
orthogonality conditions (118), using (122).
Define a $2n\times n$ matrix $B$ by:
$B=\begin{pmatrix}b_{1}&\cdots&b_{n}\end{pmatrix},$ (125)
where the $b_{k}$ are real $2n$-vectors given by:
$b_{k}=\mathcal{R}e\left(\psi_{k}/\langle{\zeta,\psi_{k}\rangle}\right),$
(126)
which are defined, provided that $f_{k}(\zeta)\neq 0$. Then it follows from
(123) and (125) that
$[D\mathcal{F}(\zeta)B]_{jk}=Df_{j}(\zeta)b_{k}=2\mathcal{R}e\langle{\zeta,\psi_{j}\rangle}\langle{\psi_{j},b_{k}\rangle},$
(127)
where in the last equality we used (122). However,
$\langle{\psi_{j},b_{k}\rangle}=\frac{1}{2}\left(\frac{\langle{\psi_{j},\psi_{k}\rangle}}{\langle{\zeta,\psi_{k}\rangle}}+\frac{\langle{\psi_{j},\overline{\psi}_{k}\rangle}}{{\langle{\psi_{k},\zeta\rangle}}}\right)=\frac{\delta_{jk}}{2\langle{\zeta,\psi_{k}\rangle}},$
(128)
by the orthonormality conditions, so that
$[D\mathcal{F}(\zeta)B]_{jk}=\delta_{jk},$ (129)
and $B$ is a right matrix inverse of $D\mathcal{F}(\zeta)$. This shows that
$\operatorname{rank}(D\mathcal{F}(\zeta))=n$, provided $f_{k}(\zeta)\neq 0$
for all $k=1,\ldots,n$.
We now examine the regular level sets of the momentum mapping $\mathcal{F}$,
which take the form:
$M_{c}=\\{\zeta\in\mathbb{R}^{2n}:f_{k}(\zeta)=c_{k},k=1,\ldots,n\\},$ (130)
where $c_{k}\neq 0$ for all $k$. Note that by (120) we have
$f_{k}(\zeta)=c_{k}\Leftrightarrow\langle{\zeta,\psi_{k}\rangle}=\sqrt{c_{k}}e^{it_{k}},$
(131)
for some real phase angle $t_{k}$. It follows from (119) that:
$\zeta\in
M_{c}\Leftrightarrow\zeta=2\mathcal{R}e\sum_{k=1}^{n}\sqrt{c_{k}}e^{it_{k}}\psi_{k},$
(132)
for some real $t_{1},\ldots,t_{n}$. Given a point $\zeta\in M_{c}$, applying the map $T$ gives:
$T\zeta=2\mathcal{R}e\sum_{k=1}^{n}\sqrt{c_{k}}e^{it_{k}}T\psi_{k}=2\mathcal{R}e\sum_{k=1}^{n}\sqrt{c_{k}}e^{i(t_{k}+\phi_{k})}\psi_{k},$ (133)
where in the last equality we have introduced the angle $\phi_{k}$ by
$\lambda_{k}=e^{i\phi_{k}}$. Define the path $\gamma:[0,1]\rightarrow M_{c}$
by:
$\gamma(t)=2\mathcal{R}e\sum_{k=1}^{n}\sqrt{c_{k}}e^{it\phi_{k}}\psi_{k}.$
(134)
The tangent vector takes the form:
$\gamma^{\prime}(t)=2\mathcal{R}e\sum_{k=1}^{n}i\phi_{k}\sqrt{c_{k}}e^{it\phi_{k}}\psi_{k}.$
(135)
We can now evaluate the vector quantity $S$ appearing in (16). By (125), its
components take the form:
$S_{k}=\left(-\int_{\gamma}B^{T}Jd\zeta\right)_{k}=-\int_{0}^{1}b_{k}^{T}J\gamma^{\prime}(t)dt.$
(136)
Using the explicit form for the tangent vector (135) gives:
$S_{k}=2\mathcal{R}e\sum_{j=1}^{n}\phi_{j}\sqrt{c_{j}}\int_{0}^{1}e^{it\phi_{j}}\langle{b_{k},\psi_{j}\rangle}dt.$
(137)
Now using (128) we have:
$S_{k}=\mathcal{R}e\phi_{k}\sqrt{c_{k}}\int_{0}^{1}\frac{e^{it\phi_{k}}}{\langle{\psi_{k},\gamma(t)\rangle}}dt.$
(138)
Using the explicit form of the path (134) gives:
$\langle{\psi_{k},\gamma(t)\rangle}=\sum_{j=1}^{n}\sqrt{c_{j}}e^{it\phi_{j}}\langle{\psi_{k},\psi_{j}\rangle}+\sum_{j=1}^{n}\sqrt{c_{j}}e^{-it\phi_{j}}\langle{\psi_{k},\overline{\psi}_{j}\rangle},$
(139)
which gives, using the conditions (118),
$\langle{\psi_{k},\gamma(t)\rangle}=\sqrt{c_{k}}e^{it\phi_{k}}.$ (140)
Using this in (138), the integral evaluates trivially, giving:
$S_{k}=\phi_{k}.$ (141)
For the basis cycles $\gamma_{k}$ $(k=1,\ldots,n)$, we will take paths
$\gamma_{k}:[0,1]\rightarrow M_{c}$ given by:
$\gamma_{k}(t)=2\mathcal{R}e\sqrt{c_{k}}e^{i2\pi t}\psi_{k},$ (142)
with tangent vectors
$\gamma_{k}^{\prime}(t)=2\mathcal{R}e\sqrt{c_{k}}(2\pi i)e^{i2\pi t}\psi_{k}.$
(143)
Then we have:
$R_{jk}=\left(-\oint_{\gamma_{k}}B^{T}Jd\zeta\right)_{j}=-\int_{0}^{1}b_{j}^{T}J\gamma_{k}^{\prime}(t)dt.$
(144)
Using the explicit form for the tangent vector gives:
$R_{jk}=2\mathcal{R}e\sqrt{c_{k}}(2\pi)\int_{0}^{1}e^{i2\pi
t}\langle{b_{j},\psi_{k}\rangle}dt.$ (145)
Now using (128) we have:
$R_{jk}=\mathcal{R}e2\pi\sqrt{c_{k}}\delta_{jk}\int_{0}^{1}\frac{e^{i2\pi
t}}{\langle{\psi_{j},\gamma_{k}(t)\rangle}}dt.$ (146)
Since this is nonzero only when $j=k$, we have in this case, using the path (142), that:
$\langle{\psi_{k},\gamma_{k}(t)\rangle}=\sqrt{c_{k}}e^{i2\pi t}.$ (147)
It follows that the integral in (146) evaluates trivially, so that:
$R_{jk}=2\pi\delta_{jk},$ (148)
so $R=2\pi I_{n\times n}$, and therefore (16) gives the tunes:
$\nu=R^{-1}S,\quad\quad\nu_{k}=\frac{\phi_{k}}{2\pi}\quad(k=1,\ldots,n),$
(149)
which are expressed in terms of the eigenvalues $\lambda_{k}=e^{i\phi_{k}}$,
as expected [23].
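As a quick numerical illustration (added here, with an assumed phase advance), the tunes can be read off directly from the eigenvalues of the one-turn matrix of a linearly stable symplectic map:

```python
import numpy as np

phi = 0.31                                    # assumed phase advance
R = np.array([[np.cos(phi),  np.sin(phi)],
              [-np.sin(phi), np.cos(phi)]])   # elliptic 2x2 symplectic map
lam = np.linalg.eigvals(R)                    # eigenvalues e^{+i phi}, e^{-i phi}
tunes = np.angle(lam[lam.imag > 0]) / (2 * np.pi)
print(tunes)                                  # ~ [phi / (2 pi)]
```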
The freedom in (11) can be explored by making alternative choices for the
paths $\gamma$ and $\gamma_{k}$, after noting that a general smooth path
$\gamma:[0,1]\rightarrow M_{c}$ takes the form:
$\gamma(t)=2\mathcal{R}e\sum_{j=1}^{n}\sqrt{c_{j}}e^{ig_{j}(t)}\psi_{j},$
(150)
where $g:[0,1]\rightarrow\mathbb{R}^{n}$ is a smooth path in $\mathbb{R}^{n}$.
## Appendix D: Special Cases in Low Dimension
Consider a symplectic map
$\mathcal{M}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ given by:
$(q^{f},p^{f})=\mathcal{M}(q,p),$ (151)
together with a smooth function $f:\mathbb{R}^{2}\rightarrow\mathbb{R}$
satisfying:
$f(q^{f},p^{f})=f(q,p),$ (152)
so that $f$ is an invariant of the map $\mathcal{M}$. Evaluating (44,46) in
the special case $n=1$ shows that the rotation number of $\mathcal{M}$ on the
level set $f=c$ is given by [7]:
$\nu=\frac{\int_{q}^{q^{f}}\left(\frac{\partial f}{\partial
p}\right)^{-1}\,dq}{\oint\left(\frac{\partial f}{\partial
p}\right)^{-1}\,dq}=\frac{\int_{p}^{p^{f}}\left(-\frac{\partial f}{\partial
q}\right)^{-1}\,dp}{\oint\left(-\frac{\partial f}{\partial
q}\right)^{-1}\,dp},$ (153)
where each integral is taken along a path lying on the curve $f=c$, which may be parameterized by solving locally for $q$ as a function of $p$ or vice versa.
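A minimal numerical sketch of (153) may help clarify its use. Here the invariant is taken to be the toy function $f(q,p)=(q^{2}+p^{2})/2$ and the map is assumed to be a rotation by an angle $\mu$, so the exact rotation number is $\nu=\mu/2\pi$:

```python
import numpy as np

mu, c = 0.47, 0.5                              # assumed rotation angle; level set f = c
r = np.sqrt(2 * c)

th0 = 0.3                                      # initial point, chosen away from p = 0
q0, p0 = r * np.cos(th0), r * np.sin(th0)
qf = q0 * np.cos(mu) - p0 * np.sin(mu)         # one application of the map
pf = q0 * np.sin(mu) + p0 * np.cos(mu)
thf = np.arctan2(pf, qf) % (2 * np.pi)         # angle of the image point

def path_integral(a, b, m=20001):
    # Integrate (df/dp)^{-1} dq along f = c, parameterized by the angle th.
    # The grid must avoid p = 0 exactly; in general one switches charts
    # there by solving for q as a function of p, as described in the text.
    th = np.linspace(a, b, m)
    p = r * np.sin(th)
    return np.trapz(-r * np.sin(th) / p, th)   # df/dp = p and dq = -r sin(th) dth

nu = path_integral(th0, thf) / path_integral(th0, th0 + 2 * np.pi)
print(nu, mu / (2 * np.pi))                    # the two printed values agree
```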
As a special case with $n=2$, consider a symplectic map given in canonical
polar coordinates as:
$(r^{f},\theta^{f},p_{r}^{f},p_{\theta}^{f})=\mathcal{M}(r,\theta,p_{r},p_{\theta}),$
(154)
together with two invariants $f_{1}$ and $f_{2}$ of the form:
$\displaystyle f_{1}(r,\theta,p_{r},p_{\theta})$
$\displaystyle=f(r,p_{r},p_{\theta}),$ (155a) $\displaystyle
f_{2}(r,\theta,p_{r},p_{\theta})$ $\displaystyle=p_{\theta}.$ (155b)
Here $f$ is any smooth function of three variables. The first invariant is independent of the angle coordinate, while the second invariant is just the angular momentum. Choose $\gamma_{1}$ to be a closed curve in the invariant level set $(f_{1},f_{2})=(c_{1},c_{2})$ obtained after setting $\theta=$const. This curve can be parameterized by solving locally for $r$ as a function of $p_{r}$ or vice versa. Choose $\gamma_{2}$ to be a closed curve in the same invariant level set obtained after setting $r=$const, allowing $\theta$ to vary from $0$ to $2\pi$.
Evaluating (44,46) shows that the rotation vector $\nu=(\nu_{r},\nu_{\theta})$
can be written in terms of tunes associated with radial and angular motion as:
$\displaystyle\nu_{r}$
$\displaystyle=\frac{\int_{r}^{r^{f}}\left(\frac{\partial f}{\partial
p_{r}}\right)^{-1}\,dr}{\oint\left(\frac{\partial f}{\partial
p_{r}}\right)^{-1}\,dr}=\frac{\int_{p_{r}}^{p_{r}^{f}}\left(\frac{\partial
f}{\partial r}\right)^{-1}\,dp_{r}}{\oint\left(\frac{\partial f}{\partial
r}\right)^{-1}\,dp_{r}},$ (156a) $\displaystyle\nu_{\theta}$
$\displaystyle=\nu_{r}\frac{\Delta_{\theta}}{2\,\pi}-\frac{\Delta_{\theta}^{\prime}}{2\,\pi}+\frac{\delta\theta}{2\,\pi},$
(156b)
where the integrals are taken over all or part of the path $\gamma_{1}$ and:
$\displaystyle\Delta_{\theta}^{\prime}$
$\displaystyle=\int_{r}^{r^{f}}\frac{\partial f}{\partial
p_{\theta}}\left(\frac{\partial f}{\partial
p_{r}}\right)^{-1}\,dr=\int_{p_{r}}^{p_{r}^{f}}\frac{\partial f}{\partial
p_{\theta}}\left(-\frac{\partial f}{\partial r}\right)^{-1}\,dp_{r},$
$\displaystyle\Delta_{\theta}$ $\displaystyle=\oint\frac{\partial f}{\partial
p_{\theta}}\left(\frac{\partial f}{\partial
p_{r}}\right)^{-1}\,dr=\oint\frac{\partial f}{\partial
p_{\theta}}\left(-\frac{\partial f}{\partial r}\right)^{-1}\,dp_{r},$
$\displaystyle\delta\theta$ $\displaystyle=\theta^{f}-\theta.$ (157)
## References
* [1] V. Danilov and S. Nagaitsev, “Nonlinear Accelerator Lattices with One and Two Analytic Invariants,” Phys. Rev. ST Accel. Beams 13, 084002 (2010).
* [2] V. Danilov and S. Nagaitsev, “Accelerator-Feasible $N$-Body Nonlinear Integrable System,” Phys. Rev. ST Accel. Beams 17, 124402 (2014).
* [3] A. V. Bolsinov and A. T. Fomenko, Integrable Hamiltonian Systems: Geometry, Topology, Classification, Chapman & Hall/CRC Press, Boca Raton, 2004.
* [4] V. I. Arnold, Mathematical Methods of Classical Mechanics, 2nd ed., Springer, NY, 1989.
* [5] R. Abraham and J. Marsden, Foundations of Mechanics, 2nd ed., Addison-Wesley Publishing Co., Inc. Redwood City, CA, 1978.
* [6] J. Moser and E. Zehnder, Notes on Dynamical Systems, AMS, Courant Institute of Mathematical Sciences, 2005.
* [7] S. Nagaitsev and T. Zolkin, “Betatron Frequency and the Poincaré Rotation Number,” Phys. Rev. Accel. Beams 23, 054001 (2020).
* [8] T. Bertalan et al, “On Learning Hamiltonian Systems from Data,” Chaos 29, 121107 (2019).
* [9] Z. Liu and M. Tegmark, “Machine Learning Conservation Laws from Trajectories,” Phys. Rev. Lett. 126, 180604 (2021).
* [10] M. Bruschi et al, “Integrable Symplectic Maps,” Physica D 49, 273-294 (1991).
* [11] J. D. Meiss, “Symplectic Maps, Variational Principles, and Transport,” Rev. Mod. Phys. 64, 795 (1992).
* [12] H. Ito, “Integrable Symplectic Maps and their Birkhoff Normal Forms,” Tohoku Math. J. 49, 73-114 (1997).
* [13] F. Fassò, “Notes on Finite Dimensional Integrable Hamiltonian Systems”, Università di Padova, 1999, www.math.unipd.it/fasso/, Proposition 2.3, p. 24.
* [14] J. Laskar, “Introduction to Frequency Map Analysis”, in Hamiltonian Systems with Three or More Degrees of Freedom, NATO ASI Series, Springer, Dordrecht, 1999, pp. 134-150.
* [15] J. Laskar, “Frequency Analysis for Multi-Dimensional Systems. Global Dynamics and Diffusion”, Physica D 67, 257-281 (1993).
* [16] L. Cabrer and D. Mundici, “Classifying Orbits of the Affine Group Over the Integers,” Ergod. Theory Dyn. Syst. 37, 440-453 (2017).
* [17] A. Katok and B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press, Cambridge, UK, 1995, Section 14.7, p.483.
* [18] M. Hénon and C. Heiles, “The Applicability of the Third Integral of Motion: Some Numerical Experiments”, Astron. J. 69, 73-79 (1964).
* [19] Y. Aizawa and N. Saito, “On the Stability of Isolating Integrals. I. Effect of the Perturbation in the Potential Function”, J. Phys. Soc. Jpn. 32, 1636-1640 (1972).
* [20] M. Blaszak and S. Rauch-Wojciechowski, “A Generalized Henon-Heiles System and Related Integrable Newton Equations”, J. Math. Phys. 35, 1693 (1994).
* [21] E. M. McMillan, in Topics in Modern Physics, a Tribute to E.V. Condon, edited by E. Brittin and H. Odabasi (Colorado Associated University Press, Boulder, CO, 1971), pp. 219-244.
* [22] A. Hatcher, Algebraic Topology, Cambridge University Press, Cambridge, UK, 2002.
* [23] A. J. Dragt, Lie Methods for Nonlinear Dynamics with Applications to Accelerator Physics, University of Maryland (2011).
H. Moeini
G.H. Bordbar
# Neutron star calculations with the phenomenological three-nucleon force
<EMAIL_ADDRESS><EMAIL_ADDRESS>Department of Physics, School
of Science, Shiraz University, Shiraz, 71454, Iran
###### Abstract
In this work, we have studied the effect of the three-nucleon interaction on the structure of neutron stars. In our calculations, we have treated neutron star matter as beta-stable nuclear matter. We have put the results concerning the TBF effect in perspective against two-body results and other calculations of three-nucleon interactions, using the Urbana $\it{v_{14}}$ potential and the parabolic approximation of the nuclear-matter energy to treat asymmetric nuclear matter. Solving the Tolman-Oppenheimer-Volkoff equation, we have then estimated bulk properties of neutron stars and investigated how well the present calculations agree with the expected dynamical-stability condition.
###### keywords:
three-nucleon interaction, neutron star structure
###### pacs:
21.65.+f, 21.30.-x, 21.30.Fe, 21.60.-n, 26.60.-c
## 1 Introduction
The notion of introducing three-nucleon forces (TBF) has proven indispensable in deriving bulk properties of symmetric nuclear matter such as the saturation density, energy, symmetry energy, and incompressibility – interest in the last of these stems from the physics of neutron stars and the evolution of supernovae. The TBF effect in the equation of state (EOS) of high-density nuclear matter is envisaged to be substantial and, as such, vital in addressing high-energy heavy-ion collisions and properties of dense objects such as neutron stars [1, 2, 3]. As the maximum mass of such objects is known to depend sensitively on the EOS [4], their bulk properties, such as the radius at maximum mass, can thus be influenced by TBF. This is especially the case at high nuclear-matter densities, where there is also considerable interest in, for instance, modified gravities for studying astrophysical dynamics, matter instability, and the singularities appearing in collapse processes of compact objects [5, 6, 7]. Thus, neutron stars can be viewed as astrophysical laboratories to test the nuclear-matter EOS at high densities, since recent discoveries of neutron stars of about 1.97 [8], 2.01 [9], 2.10 [10], and 2.3 $M_{\odot}$ [11] – which are heavier than most of those observed in binary systems, of $1.2-1.6~{}M_{\odot}$ [12, 13] – have challenged many of the EOS models.
By predicting a greater abundance of compact objects – which have profound significance for experimental astrophysics – modified gravity theories stand out in favoring the existence of super-massive structures of smaller radii than foreseen by general relativity. These theories also provide a framework for describing the distribution of compact objects, employing an equation of state within their own context [14, 15]. Hence, it is imperative that gravity theories with suitable frameworks be able to address the effects of mass, EOS parameters, and electric charge – over the largest ranges of possible values – that could fulfill the stability requirements [16]. It is important to have suitable frameworks that allow searching for models which present a smooth matching between two different space-times at a separation hypersurface of compact objects, such as isotropic perfect-fluid stars supported by thin shells in modified gravity [17]. As such, one could derive, among other quantities, surface energy densities as well as the various ingredients of the surface pressures at the separation hypersurface [14].
The lowest order constrained variational method (LOCV) was established for
$v_{8}$ [18], $v_{12}$ and Urbana $v_{14}$ (U$v_{14}$) [19], Argonne $v_{14}$
(A$v_{14}$) [20], and A$v_{18}$ [21] potentials and has delivered comparable
results to variational methods that incorporate many-body contributions [22].
Using LOCV, we have studied bulk properties of symmetric nuclear and pure
neutron matter [23, 24, 25, 26, 27, 28, 29, 30, 31] as well as asymmetric
nuclear matter [22, 32, 33, 34, 35], especially in connection with neutron
star properties [36, 37, 38, 39, 40, 41]. It should be stated that, in what
follows, what we refer to as the TBF effect is specifically assumed to be the
combined effects of a two-pion-exchange potential and a phenomenological
repulsive term [42, 43].
As with other potentials, since the U$v_{14}$ or A$v_{14}$ potentials (hereafter referred to as UA$\it{v_{14}}$) fitted to two-nucleon data underestimate the binding energies of light nuclei (like $^{3}$H and $^{4}$He) and at the same time overbind nuclear matter, a three-body term is introduced to take into account the required binding adjustments and also the theoretical anticipation of the existence of non-nucleonic resonances like $\Delta$ states, which are overlooked in building up two-nucleon potentials [3]. Previously, we have
reported on the symmetric nuclear matter calculations within the LOCV
framework employing UA$\it{v_{14}}$ potentials and accounting for the
phenomenological TBF effect based upon the UVII three-nucleon potential [44].
Our U$\it{v_{14}}$ calculations resulted in closer values of saturation
energy, incompressibility, and symmetry energy to the empirical values than
the A$\it{v_{14}}$ results did. As such, we have presented here our results
using U$\it{v_{14}}$ and investigated the TBF effect on the pure neutron and
beta-stable matter and, hence, on neutron stars purely made out of nucleons.
In this regard, a parabolic approximation of the energy of asymmetric matter
was employed to derive the EOS.
In what follows, we first present a short review of the zero-temperature two-
and three-nucleon interactions and energy contributions in the UA models,
using the correlation functions derived within the LOCV formalism. Next, we
provide an overview of the beta-stability condition and how the bulk
properties of a beta-stable neutron star were derived under the assumption of
hydrostatic equilibrium formulated within general relativity by the TOV
equation [45, 46, 47]. Finally, we present the results and conclusions.
## 2 Two- and three-nucleon interactions
Below pion-production energies, the low-energy Hamiltonian can be approximated
by taking into account only two- and three-body terms as [48]:
$H=-\sum_{{i\leq A}}\frac{\hbar^{2}}{2m}\nabla^{2}_{i}+\sum_{{i<j\leq A}}V_{ij}+\sum_{{i<j<k\leq A}}V_{ijk},$ (1)
where $V_{ij}$ and $V_{ijk}$ stand for two-body and three-body potentials,
respectively. The two-body potential, constrained by $NN$ scattering data, is
constructed in the UA$\it{v_{14}}$ models on the basis of fourteen operators
($O_{12}$) and takes the following form [19]:
$\displaystyle V(12)=\sum_{p=1}^{14}v^{(p)}(r_{12})O^{(p)}_{12}$ (2)
The three-body potential is assumed to be comprised of a phenomenological
medium-range repulsive term $V_{ijk}^{R}$ and a long-range attractive term
corresponding to two-pion exchange $V_{ijk}^{2\pi}$ as follows [49, 50, 3,
51]:
$\displaystyle
V_{ijk}=V_{ijk}^{R}+V_{ijk}^{2\pi}=U\sum_{cyc}T^{2}_{\pi}(r_{ij})T^{2}_{\pi}(r_{ik})+$
$\displaystyle
A_{2\pi}\sum_{cyc}\Big{(}\\{X_{ij}^{\pi},X_{ik}^{\pi}\\}\\{{\boldsymbol{\tau}}_{i}\cdot{\boldsymbol{\tau}}_{j},{\boldsymbol{\tau}}_{i}\cdot{\boldsymbol{\tau}}_{k}\\}+$
$\displaystyle\frac{1}{4}[X_{ij}^{\pi},X_{ik}^{\pi}][{\boldsymbol{\tau}}_{i}\cdot{\boldsymbol{\tau}}_{j},{\boldsymbol{\tau}}_{i}\cdot{\boldsymbol{\tau}}_{k}]\Big{)}$
(3)
where
$\displaystyle X_{ij}^{\pi}$ $\displaystyle=$ $\displaystyle
Y_{\pi}(r_{ij}){\boldsymbol{\sigma}}_{i}\cdot{\boldsymbol{\sigma}}_{j}+T_{\pi}(r_{ij})\textbf{S}_{ij},$
$\displaystyle Y_{\pi}(r)$ $\displaystyle=$
$\displaystyle\frac{e^{-m_{\pi}r}}{m_{\pi}r}\big{(}1-e^{-cr^{2}}\big{)},$
$\displaystyle T_{\pi}(r)$ $\displaystyle=$
$\displaystyle\Big{(}1+\frac{3}{m_{\pi}r}+\frac{3}{m_{\pi}^{2}r^{2}}\Big{)}Y_{\pi}(r)\big{(}1-e^{-cr^{2}}\big{)},$
$\displaystyle\textbf{S}_{ij}$ $\displaystyle=$ $\displaystyle
3({\boldsymbol{\sigma}}_{i}\cdot\hat{\textbf{r}}_{ij})({\boldsymbol{\sigma}}_{j}\cdot\hat{\textbf{r}}_{ij})-{\boldsymbol{\sigma}}_{i}\cdot{\boldsymbol{\sigma}}_{j},$
(4)
the details of which, including calculation of the constants
$A_{2\pi}=-0.0331$ MeV and $U=0.0045$ MeV for the U$v_{14}$ potential as well
as the two- and three-body nucleon-nucleon energy contributions, were
presented in [44]. As such, inter-particle interactions were accounted for by
employing inter-nucleon correlation functions $f(ij)$ calculated within the
LOCV formalism [22]. Hence, the expectation value of the three-nucleon
interaction was shown to relate to the three-body radial distribution function
defined as [52]:
$\displaystyle
g(\textbf{r}_{1},\textbf{r}_{2},\textbf{r}_{3})=f^{2}(r_{12})f^{2}(r_{23})f^{2}(r_{13})g_{F}(\textbf{r}_{1},\textbf{r}_{2},\textbf{r}_{3})$
(5)
in which $g_{F}(\textbf{r}_{1},\textbf{r}_{2},\textbf{r}_{3})$ is the so-called three-body radial distribution function for the ground state of the interaction-free Fermi gas.
It should be noted that in our previous work, using LOCV in conjunction with different two-body potentials, we had investigated the EOS of nuclear matter in the presence of the three-nucleon interaction (TNI) [23]. Here, the effect of TNI plus U$v_{14}$ is but an approximation of the effect of $V_{ijk}$, in which TNI is assumed to be composed of repulsive (TNR) and attractive (TNA) terms – accounting for the effects of $l=0$ and $l\neq 0$, respectively. The three-nucleon repulsion term is taken to be an exponential factor $e^{-\gamma\rho}$ multiplying the intermediate-range part of $V(12)$, namely $v_{I}^{(p)}(r_{12})$. The exponential factor is introduced to also approximate higher-than-third-order interactions, where the third-order interactions correspond to $-\gamma\rho v_{I}^{(p)}(r_{12})$ terms with a more complicated spin-isospin dependence than $V_{ijk}^{R}$ [19, 3].
## 3 Beta-stable matter and the neutron star calculations
As the EOS of nucleonic matter is expected to either govern or have a direct influence on the bulk properties of the neutron star, we briefly lay out the framework for the envisaged connection between the microscopic EOS and the neutron star’s bulk properties, such as its maximum mass. We shall employ the
U$\it{v_{14}}$ potential in conjunction with a three-body contribution,
calculated based upon the phenomenological UVII model addressed in Sec. 2. The
beta-stability condition requires the inclusion of leptonic relativistic
contributions to the energy content of the neutron star:
$E_{lep}=\sum_{i}\frac{{m_{i}}^{4}c^{5}}{8{\pi}^{2}{\hbar}^{3}\rho}\Bigg{(}\frac{\hbar
k_{i}}{m_{i}c}\Big{[}1+\Big{(}\frac{\hbar
k_{i}}{m_{i}c}\Big{)}^{2}\Big{]}^{1/2}\Big{[}1+2\Big{(}\frac{\hbar
k_{i}}{m_{i}c}\Big{)}^{2}\Big{]}-\sinh^{-1}\Big{(}\frac{\hbar
k_{i}}{m_{i}c}\Big{)}\Bigg{)}$ (6)
in which $i$ runs over electrons and muons, and $k_{i}$ represents their
respective Fermi momenta, which are related as dictated by the following beta-
stability condition:
$\mu_{n}=\mu_{p}+\mu_{e}=\mu_{p}+\mu_{\mu}$ (7)
in which $\mu_{j}$ (in MeV) stands for the chemical potential of neutrons,
protons, electrons, or muons. Hence, knowing that $\rho=\rho_{p}+\rho_{n}$ (in fm$^{-3}$) and assuming the charge neutrality condition
$\rho_{p}=\rho_{e}+\rho_{\mu}$ for relativistic electrons and muons with
chemical potentials of approximately $\hbar
c\big{(}3\pi^{2}\rho_{e,\mu}\big{)}^{1/3}$, we used the parabolic
approximation for the energy of asymmetric matter [53]:
$\displaystyle
E(\rho,\rho_{p})=\frac{3}{5}\frac{\hbar^{2}}{2m_{N}}\big{(}3\pi^{2}\rho\big{)}^{2/3}\Big{[}(\rho_{p}/\rho)^{5/3}+$
$\displaystyle(1-\rho_{p}/\rho)^{5/3}\Big{]}+V_{0}(\rho)+(1-2\rho_{p}/\rho)^{2}E_{symm}(\rho)$
(8)
in which the first term is the Fermi-gas kinetic energy
$T_{F}(\rho,\rho_{p}/\rho)$ – with $T_{F}(\rho,\rho_{p}/\rho)+V_{0}(\rho)$
representing the symmetric nuclear-matter energy – resulting in the following
relation to be used in conjunction with the above relations for extracting the
nucleonic and leptonic densities of beta-stable matter:
$\displaystyle\mu_{n}-\mu_{p}=\frac{\hbar^{2}}{2m_{N}}\big{(}3\pi^{2}\rho\big{)}^{2/3}\Big{[}\big{(}1-\rho_{p}/\rho\big{)}^{2/3}-$
$\displaystyle\big{(}\rho_{p}/\rho\big{)}^{2/3}\Big{]}+4\big{(}1-2\rho_{p}/\rho\big{)}E_{symm}(\rho)$
(9)
$V_{0}(\rho)$ and the symmetry energy $E_{symm}(\rho)$ were obtained from the
symmetric nuclear matter and pure neutron matter calculations, assuming the
parabolic approximation.
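To make the procedure concrete, the following sketch solves Eqs. (7)–(9) for the proton fraction by root-finding. The symmetry-energy parametrization and the electrons-only equilibrium are simplifying assumptions made purely for illustration; the actual calculations use $E_{symm}(\rho)$ extracted from the LOCV results and include muons:

```python
import numpy as np
from scipy.optimize import brentq

hbarc = 197.327                                 # MeV fm
mN = 939.0                                      # nucleon rest energy, MeV

def E_symm(rho):
    # Illustrative symmetry energy in MeV (assumed parametrization)
    return 32.0 * (rho / 0.16) ** 0.6

def proton_fraction(rho):
    # Solve Eq. (9) with mu_n - mu_p = mu_e and rho_e = rho_p (electrons only)
    kin = hbarc**2 / (2 * mN) * (3 * np.pi**2 * rho) ** (2 / 3)
    def residual(x):                            # x = rho_p / rho
        dmu = kin * ((1 - x) ** (2 / 3) - x ** (2 / 3)) + 4 * (1 - 2 * x) * E_symm(rho)
        return dmu - hbarc * (3 * np.pi**2 * x * rho) ** (1 / 3)
    return brentq(residual, 1e-9, 0.5)

for rho in (0.16, 0.32, 0.48):                  # nucleon densities in fm^-3
    print(rho, proton_fraction(rho))
```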
Astrophysically, a star’s equilibrium is reached owing to the balance between
internal pressure and gravitational force. Such balance is expressed by an
underlying hydrostatic equilibrium equation (HEE) established by Tolman,
Oppenheimer, and Volkoff (TOV) within the framework of Einstein gravity. Using
the TOV equation, which governs general-relativistic hydrostatic equilibrium:
$\frac{dP}{dr}=-\frac{G}{r^{2}}\Big{[}\epsilon(r)+P(r)/c^{2}\Big{]}\frac{m(r)+4\pi
r^{3}P(r)/c^{2}}{1-2Gm(r)/rc^{2}},$ (10)
the bulk properties (mass and radius) of the beta-stable neutron star were
thus calculated as a function of the central pressure $P_{c}$ (in MeV/fm$^{3}$) and mass density $\epsilon_{c}$ (in g/cm$^{3}$). Here, $G$,
$\epsilon(r)=\rho\big{[}E/N(\rho)+m_{N}c^{2}\big{]}$, and $m(r)$ are,
respectively, the gravitational constant, the mass density at distance $r$
from the center of the assumed spherical neutron star of radius $R$, and the
total mass enclosed within a sphere of radius $r<R$. The neutron-star mass is
thus $m(R)$ and $R$ is obtained by integrating the TOV equation from $r=0$ to
$r=R$, at which point the pressure is assumed to vanish effectively (see [54]
for details).
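A minimal sketch of such a TOV integration is given below. It works in geometrized units ($G=c=M_{\odot}=1$) and assumes a simple polytropic EOS as a stand-in; the actual calculations instead use the microscopic U$v_{14}$(+TBF) EOS supplemented by a crust EOS:

```python
import numpy as np

K, Gamma = 100.0, 2.0                           # illustrative polytrope P = K * eps^Gamma

def eps_of_P(P):
    return (P / K) ** (1.0 / Gamma)             # invert the polytropic EOS

def integrate_star(eps_c, dr=1e-4):
    # Euler integration of Eq. (10) outward until the pressure vanishes
    P_c = K * eps_c ** Gamma
    r, P, m = dr, P_c, 0.0
    while P > 1e-10 * P_c:
        eps = eps_of_P(P)
        dPdr = -(eps + P) * (m + 4 * np.pi * r**3 * P) / (r * (r - 2 * m))
        m += 4 * np.pi * r**2 * eps * dr
        P += dPdr * dr
        r += dr
    return m, r                                 # mass and radius in code units

M, R = integrate_star(1.28e-3)                  # assumed central density, code units
print(M, 1.477 * R)                             # 1 length unit = G*Msun/c^2 ~ 1.477 km
```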
Figure 1: Various U$\it{v_{14}}$ results for the binding energy per nucleon,
as a function of nucleon density, of beta-stable as well as neutron matter in
presence/absence of the TBF contribution. The data labeled as Bordbar-Riazi
were extracted from [72].
## 4 Results
Fig. 1 compares various U$\it{v_{14}}$ results for the mean binding energy of
beta-stable as well as neutron matter in the presence and absence of the three-body contribution, estimated as TBF or TNI. Our results for various
particle densities of the beta-stable matter are shown in Fig. 2 and the
pressure, sound-velocity, and dynamical stability results are presented in
Figs. 3, 4, and 7. The results in Figs. 5 and 6 are derived based on the
solutions of the TOV equation.
### 4.1 Binding energy
Our calculations, using the U$\it{v_{14}}$ potential and introducing a TBF effect based on the phenomenological UVII model [44], resulted in saturation density, incompressibility, and symmetry energy values of, respectively, about 0.364 (0.178) fm$^{-3}$, 302 (193) MeV, and 44.8 (29.2) MeV for the U$\it{v_{14}}$ (U$\it{v_{14}}$+TBF) potential. These are to be compared with the empirical values of, respectively, 0.17 fm$^{-3}$, 230$\pm 40$ MeV [55], and 32$\pm 1$ MeV
[56]. The results indicated that the TBF effect has worked in the direction of
increasing the core stiffness of the effective potential. This is also
reflected in the binding energy results per nucleon ($E/N$) of neutron as well
as beta-stable matter in Fig. 1. It is clear that the pure neutron matter,
with or without TBF, would correspond to a stiffer EOS than the beta-stable
matter. Considering the beta-stable two-body results of Bordbar-Riazi and this
work together with their slight differences over the range of densities shown
in this figure, the inclusion of TNI as compared to TBF would seem to have had
a smaller effect on stiffening the potential for densities below about 0.7 fm$^{-3}$. This is reversed for densities above about 1 fm$^{-3}$, where the inclusion of TNI, as compared to TBF, appears to result in significantly higher energies at high density. This could partly be attributed to the exponential construct of the U$\it{v_{14}}$+TNI model, which incorporates higher-than-three-body terms by superposing forces of alternating signs. It could also be attributed to the more complex dependence of $V_{ijk}$ on spin and isospin in the U$\it{v_{14}}$+TNI model than the plain central force of $V_{ijk}^{R}$ in Eq. 3.
It is to be noted that although both of the beta-stable two-body calculations in this figure consistently indicate a smaller stiffness as compared with the pure neutron-matter calculations, this would seem not to be the case when the effects of either TNI or TBF are added to the corresponding two-body contributions and compared with the neutron-matter results in the presence of TBF. The sharp deviation of the beta-stable results plus the TNI effect from the neutron-matter results plus the TBF effect indicates that, especially at larger densities, the neutron-matter results with TNI inclusion would be considerably stiffer than those with TBF inclusion represented in this figure.
Figure 2: Various particle densities, as a function of nucleon density, of
beta-stable matter with and without the TBF contribution. The dotted and dash-
dotted data represent the case in which the beta-stability equilibrium is only
governed through $n\leftrightarrow p+e^{-}$.
### 4.2 Particle densities in beta-stable nuclear matter
Eq. 8 allows for calculating the proton fraction under beta-stable
equilibrium. Fig. 2 compares the electron, muon, and proton densities expected
in beta-stable nuclear matter, assuming that $n\leftrightarrow p+\mu^{-}$ becomes energetically allowed above the nuclear-matter density, at which point the electron chemical potential surpasses the muon mass. Clearly, the muon contribution has ensured a significant increase of the proton density, especially at higher nucleonic densities. However, the difference between $E/N$ in the two cases of electrons-only (an equilibrium governed by $n\leftrightarrow p+e^{-}$ alone) and electrons-plus-muons (an equilibrium governed by both $n\leftrightarrow p+e^{-}$ and $n\leftrightarrow p+\mu^{-}$) is not as significant. This difference in the electrons-only case is estimated to increase relative to the electrons-plus-muons case up to a maximum of about 7.8% (U$\it{v_{14}}$ at $\rho=0.67$ fm$^{-3}$) and 2.6% (U$\it{v_{14}}$+TBF at $\rho=0.59$ fm$^{-3}$). In contrast to the behavior of $E/N$, the relative difference in the proton density between the electrons-only and electrons-plus-muons cases is found to increase with $\rho$, reaching a maximum of about 33% (U$\it{v_{14}}$) and 34% (U$\it{v_{14}}$+TBF).
Larger short-range repulsions are expected at high densities, as the short-
range repulsion between nucleon pairs that make up isospin singlets dominates
the one between isospin triplets [57]. Hence, pure neutron matter is to be
expected at high enough densities. The reason this is not reflected in the Fig. 2 data with the TBF effect could partly be that the central-force repulsion term $V_{ijk}^{R}$ assumed in the TBF construction does not account for complex spin and isospin dependencies as it should in order to provide a microscopic approach to the repulsion force. Hence, the particular form of $V_{ijk}^{R}$ in the TBF construction could be one of the reasons we do not witness the onset of pure neutron matter as we approach high densities. As such, further analysis using more realistic nucleon-nucleon models could help pinpoint such problems (e.g., regarding Fig. 2), especially concerning the TBF form and the expected effect of $V_{ijk}^{R}$.
Figure 3: Pressure of beta-stable and neutron matter for different potentials,
as a function of nucleon density. The data labeled as Bordbar-Riazi and
Bordbar-Hayati were extracted from [72] and [4], respectively.
### 4.3 Pressure
Assuming proton and neutron densities of $\rho_{p}$ and $\rho_{n}$ with
$\rho=\rho_{p}+\rho_{n}$, the nuclear-matter pressure is obtained as:
$P={\rho}^{2}\frac{\partial{E(\rho_{p},\rho_{n})}}{\partial{\rho}}$ (11)
Fig. 3 represents our parabolic approximation results for the pressure of the
beta-stable and neutron matter with and without TBF. The results indicate
generally that accounting for the three-body contribution as TBF or TNI
increases the pressure considerably and, in accordance with the results of
Fig. 1, makes the equation of state much stiffer. Considering the effect of
three-body interactions on $E/N$ and assuming the overall incompressibility
$9{\rho}^{2}\frac{{\partial}^{2}{(E/N)}}{\partial{\rho}^{2}}$, it is to be
expected that the three-body effect would add to the incompressibility at a
given density – in agreement with the pressure curves in Fig. 3. The neutron-
matter calculations plus the TNI effect predict drastically higher pressures as compared with the TBF effect. This is in accordance with the final notes in Sec. 4.1 and results from the stiffer potential predicted in the case of TNI inclusion.
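Numerically, Eq. (11) amounts to differentiating a tabulated energy curve; a minimal sketch with a hypothetical stand-in for the U$\it{v_{14}}$ tables would be:

```python
import numpy as np

rho = np.linspace(0.05, 1.0, 200)               # nucleon density, fm^-3
E = -16.0 + 120.0 * (rho - 0.16) ** 2           # hypothetical E/N curve, MeV
P = rho**2 * np.gradient(E, rho)                # Eq. (11): pressure in MeV fm^-3
```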
Figure 4: Sound speed in beta-stable and neutron matter with and without TBF.
The inset shows how the corresponding nucleon density would change with the
mass density. Figure 5: Neutron star’s mass in units of the Sun’s mass
($M_{\odot}$) as a function of its central density ($\epsilon_{c}$).
Given the nuclear-matter pressure, it is interesting to investigate the sound
speed in the neutron star’s interior as a function of density,
$v(\epsilon)=\sqrt{\partial P(\epsilon)/\partial\epsilon}$; the requirement $v<c$ is one of the vital conditions in addressing the EOS stability [58]. Fig. 4
compares the results for beta-stable and neutron matter, based on
U$\it{v_{14}}$ and U$\it{v_{14}}$+TBF potentials. A common feature of the
results is that they all respect causality, in that the sound speed does not exceed the speed of light over the investigated densities of up to 1.5 fm$^{-3}$. A clear effect of TBF is the overall increase of the sound speed as
compared with the two-nucleon results. This is a reflection of the
corresponding pressure results in Fig. 3, taking into account the small
differences of nucleon-density variations against mass density
($\partial\rho/\partial\epsilon$; see the inset of Fig. 4) as opposed to the
sizable differences of pressure variations against nucleon density ($\partial
P/\partial\rho$; see Fig. 3). Indeed, at densities smaller than about 0.5 fm$^{-3}$, it is primarily the rate of pressure change with nucleon density that determines the sound speed in both beta-stable and neutron matter, with and without TBF. Hence, as the pressure variations of the various results with density converge at small densities, so do the sound speed values. At ever higher
densities, the two factors – namely, the decrease of $\rho$ variations with
$\epsilon$ due to the TBF effect and the increase of pressure variations with
$\rho$ – go against one another to influence the sound speed. Although the
two-nucleon results in Fig. 4 appear at high densities to approach the ones
with TBF, it is the dominant effect of $\partial P/\partial\rho$ that would
guarantee higher sound speeds in the presence of TBF as compared with the two-nucleon results. In the same line of argument, and based on the TBF results in Fig. 3, the growing differences (at ever larger $\rho$ values) of $\partial P/\partial\rho$ between neutron and beta-stable matter seem to have been diminished by the counter-effect of the corresponding $\partial\rho/\partial\epsilon$ results. However, it is less clear how to relate the relative changes of the sound speed results of the two-nucleon cases (beta-stable and neutron) to their corresponding $\partial P/\partial\rho$ behavior in Fig. 3. This is partly because the two pressure slopes do not seem to diverge monotonically as a function of $\rho$, contrary to what the corresponding TBF results indicate. As such, the two-body neutron matter results above about 1.2 fm$^{-3}$ are suspect – seen either from the relative change of the pressure slope in neutron matter and beta-stable matter, or judged more decisively from the sound speed in neutron matter, which starts to decline unreasonably from about 1.2 fm$^{-3}$ upwards. Though the parabolic approximation has no say in the two-body results of neutron matter – as opposed to the beta-stable matter – the sound speed outcomes raise suspicion about the neutron matter results at high densities, and this affects the projection of the maximum supportable mass for the neutron star.
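On a tabulated EOS, the causality check described above reduces to a numerical derivative; the following sketch uses a hypothetical stand-in table rather than the actual U$\it{v_{14}}$ results:

```python
import numpy as np

c = 2.998e10                                    # cm/s
eps = np.linspace(2.0e14, 3.0e15, 300)          # mass density, g/cm^3
P = 1.0e10 * eps**1.6                           # hypothetical EOS table, erg/cm^3

v = np.sqrt(np.gradient(P, eps))                # v = sqrt(dP/d eps), cm/s
assert np.all(v < c), "causality violated somewhere on the table"
```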
Figure 6: Neutron star’s mass in units of the Sun’s mass ($M_{\odot}$) as a
function of its radius.
### 4.4 Neutron star’s mass, radius, and dynamical stability
Integrating the TOV equation allows for predicting how the mass or radius of the neutron star would change with its central density and pressure. In our calculations, we have taken into account a crust equation of state before calculating the neutron-star properties. As such, Fig. 5 shows the variation of the neutron star’s mass with its central density, and Fig. 6 puts into perspective the relation between the mass and radius of a neutron star that is either made purely of neutrons or is in beta-stable equilibrium, assuming a governing U$\it{v_{14}}$ potential in the presence and absence of TBF or TNI.
Table 1: Different properties of neutron stars calculated in different works, in the absence of magnetic fields. Left column indicates the reference to the work. Next three columns show neutron star’s maximum mass ($M$) and its corresponding radius ($R$) and central density. Other columns to the right represent the corresponding Schwarzschild radius $R_{Sch}$, mean density $\overline{\epsilon}$, compactness factor $\sigma$, gravitational redshift $z$, Kretschmann scalar $K$, and the GR compactness limit. Our results constitute the last four rows. Here, G, c, and $M_{\odot}$ refer to the gravitational constant, light speed, and the Sun’s mass, respectively.

Ref. | $M$ $[M_{\odot}]$ | $R$ $[km]$ | $\epsilon_{c}/10^{15}$ $[g/cm^{3}]$ | $R_{Sch}$ $[km]$ | $\overline{\epsilon}/10^{15}$ $[g/cm^{3}]$ | $\sigma$ | $z$ | $K/10^{-7}$ $[1/m^{2}]$ | $\frac{4c^{2}R}{9G}$ $[M_{\odot}]$
---|---|---|---|---|---|---|---|---|---
[39] | 1.68 | 8.42 | - | 4.96 | 1.34 | 0.59 | 0.56 | 0.29 | 2.53
[73] | 1.69 | 8.59 | - | 4.99 | 1.27 | 0.58 | 0.54 | 0.27 | 2.58
[41] | 1.68 | 9.00 | - | 4.96 | 1.09 | 0.55 | 0.49 | 0.23 | 2.71
$\beta$-stable matter: | | | | | | | | |
U$\it{v_{14}}$ | 1.59 | 6.96 | 5.37 | 4.70 | 2.24 | 0.67 | 0.75 | 0.48 | 2.09
U$\it{v_{14}}$+TBF | 1.89 | 9.36 | 3.26 | 5.58 | 1.09 | 0.60 | 0.57 | 0.23 | 2.82
neutron matter: | | | | | | | | |
U$\it{v_{14}}$ | 1.50 | 8.13 | 4.38 | 4.43 | 1.32 | 0.54 | 0.48 | 0.28 | 2.45
U$\it{v_{14}}$+TBF | 1.91 | 9.59 | 3.19 | 5.64 | 1.03 | 0.59 | 0.56 | 0.22 | 2.88
Along with other calculations, Table 1 shows our calculations for the maximum
mass and the corresponding radius of neutron stars – under beta-stability
equilibrium as well as made of pure neutron matter – based on which the values
of few characteristic parameters were obtained. These include the
Schwarzschild radius $R_{Sch}=2GM/c^{2}$, mean density
$\overline{\epsilon}=3M/4\pi R^{3}$, compactness factor $\sigma=R_{Sch}/R$,
gravitational redshift $z=\frac{1}{\sqrt{1-2GM/c^{2}R}}-1$, Kretschmann scalar
$K=4\sqrt{3}GM/c^{2}R^{3}$ [59, 60], and Buchdahl-Bondi upper mass limit
$M_{max}\leq 4c^{2}R/9G$ [61, 62, 63]. Since our results for the radius of the neutron star are larger than the corresponding Schwarzschild radii associated with the respective maximum masses, none of our hypothesized neutron stars made of either pure neutron or beta-stable matter (with and without TBF) is expected to end up as a black hole. In general, the TBF effect has translated into an
increased $R_{Sch}$, which is clearly what we expect also from the neutron
star’s maximum mass. Despite the expected increase in both the maximum mass and the corresponding neutron star’s volume due to the TBF effect, the resulting average density appears to shrink relatively ($\Delta\overline{\epsilon}/\overline{\epsilon}$) by about 54% and 22% in the case of beta-stable and neutron matter, respectively. Hence, a lower average density together with a higher overall pressure due to TBF (see Fig. 3) means that, as the neutron star’s overall pressure increases through the TBF effect, so does the average inter-nucleon distance. Thus, it is not surprising that, for a given neutron star mass, the TBF effect as compared to the lack thereof has resulted in a larger radius (see Fig. 6). A similar situation arises, in either the presence or absence of TBF, when considering the overall pressure of pure neutron matter, which is higher than that of beta-stable matter, in contrast to the resulting average density of a neutron star made purely of neutrons, which is smaller than its average density in beta-stable equilibrium. The compactness factor, which is a measure of the gravity strength, is proportional to $M/R$ and approximately resembles the behavior of the gravitational redshift as a function of radius. The Kretschmann scalar $K$ is a measure of the neutron star’s curvature at its surface and, due to an extra dependence on $R^{-2}$, resembles $\sigma$ or $z$ to a lesser degree, so that its values for the neutron matter corresponding to $M_{max}$ have appeared in a different order than the $\sigma$ or $z$ values. The
which is the upper mass limit for a static spherical neutron star of constant
density. The fact that the maximum-mass values are obtained to be smaller than
Buchdahl-Bondi limit is another indication that the hypothesized neutron stars
in this work (made of pure neutron or beta-stable matter bound by U$v_{14}$
potential, in presence or absence of TBF, and governed by the TOV equation)
would not turn into black hole. The dynamical stability, which was defined by
Chandrasekhar [64], is a concept introduced to check the neutron star’s
stability against infinitesimal radial adiabatic perturbations and is
fulfilled so long as the adiabatic index $\gamma=\frac{\epsilon
c^{2}+P}{c^{2}P}\frac{dP}{d\epsilon}>4/3$, which has been checked for many
astrophysical cases including [65, 66, 67]. Fig. 7 represents our results for
the adiabatic index as a function of density, showing that the dynamical-
stability condition is satisfied for the hypothesized neutron stars studied
over $\rho\leq 1.5$ fm-3.
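The stability criterion can be evaluated on the same kind of tabulated EOS; a short sketch, again with a hypothetical stand-in table, reads:

```python
import numpy as np

c = 2.998e10                                    # cm/s
eps = np.linspace(2.0e14, 3.0e15, 300)          # mass density, g/cm^3
P = 1.0e10 * eps**1.6                           # hypothetical EOS table, erg/cm^3

gamma = (eps * c**2 + P) / (c**2 * P) * np.gradient(P, eps)
assert np.all(gamma > 4.0 / 3.0)                # Chandrasekhar dynamical stability
```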
Figure 7: Adiabatic index versus density, for $\rho>0.07$ fm$^{-3}$. The full circles, empty circles, and empty squares on each curve correspond to $\rho=1.5$, 1.6, and 1.65 fm$^{-3}$, respectively.
Table 2 puts into perspective the measured mass and, in some cases – where the measurement of the radius succeeded through the complex procedures involved in observation – the radius of a few neutron stars. The masses span about one to two times the mass of the Sun. Given a measured mass, the data on the right side show our calculations for the radius. The calculations correspond to pure neutron as well as beta-stable matter, in which electrons and muons both contribute to holding up the equilibrium. Some fields were left empty, since our results would not predict masses as large as the measured ones. Incidentally, all our results are compatible with the observed masses smaller than about 1.50 $M_{\odot}$ (see Table 1, left column) in that they could work out a radius corresponding to the observed mass. But, for masses above 1.59 $M_{\odot}$, they would deliver a radius only when TBF is accounted for; hence, they could only amount to a radius for two of the observed masses (above 1.59 $M_{\odot}$) in Table 2. Parenthetically, our pure-neutron and beta-stable results both agree – in the presence of TBF – with the measured radius of VelaX-1 [68] within the uncertainties. The reason our calculations could
not work out a radius for masses as high as about 2 $M_{\odot}$ could partly
be due to the possibility of quark-hadron-phase existence within the neutron
star, in which case our model of a neutron star – purely made of nucleonic
matter – would break down. Indeed, there are studies on PSRJ0348+0432 and
PSRJ1614-2230 (see Table 2) arguing that there may exist a region of quark-
hybrid matter within their core [69, 70], or that compact stars with masses
close to 2 $M_{\odot}$ (like the three cases in Table 2) are compatible with the presence of deconfined quark matter at their core [71].
Table 2: Measured mass and radius of a few neutron stars through observation. Right columns: our estimates for the radius corresponding to the measured mass.

Name [Ref.] | $M~{}[M_{\odot}]$ | $R~{}[km]$ | beta-stable U$\it{v_{14}}$ | beta-stable U$\it{v_{14}}$+TBF | neutron-matter U$\it{v_{14}}$ | neutron-matter U$\it{v_{14}}$+TBF
---|---|---|---|---|---|---
SMC X-1 [74] | $1.05\pm 0.09$ | - | 8.39 | 11.20 | 9.08 | 11.80
Cen X-3 [74] | $1.24\pm 0.24$ | - | 8.25 | 11.09 | 8.88 | 11.63
LMC X-4 [74] | $1.31\pm 0.14$ | - | 8.16 | 11.04 | 8.77 | 11.55
V395 CAR/2S 0921C630 [75] | $1.44\pm 0.10$ | - | 7.89 | 10.92 | 8.49 | 11.40
PSRJ0740+6620 [10] | $2.10$ | $12\pm 2$ | - | - | - | -
PSRJ0348+0432 [9] | $2.01$ | $13\pm 2$ | - | - | - | -
PSRJ1614-2230 [8] | $1.97$ | $12\pm 2$ | - | - | - | -
VelaX-1 [68] | $1.80$ | $11\pm 2$ | - | 10.19 | - | 10.62
4U1608-52 [76] | $1.74$ | $9\pm 1$ | - | 10.39 | - | 10.82
## 5 Summary and conclusions
Performing calculations for asymmetric nuclear matter with the help of the parabolic approximation and the U$v_{14}$ potential, we have investigated the effect of a newly constructed phenomenological three-nucleon force built by exploiting two-body correlations – derived using the LOCV method and the concept of the three-body radial distribution function – the details of which were discussed in [44]. Applying the method to the specific cases of
pure neutron and beta-stable matter allowed us to assess the TBF effect on
various particle densities as well as the bulk properties of neutron stars.
These included the influence of TBF on the sound speed and adiabatic index as
well as how the mass and radius of the neutron star would change with its
central density and pressure, and as a result what would be its maximum mass
and corresponding radius. Obtaining the neutron star’s maximum mass has a
special importance in that it indicates that the degeneracy pressure of
nucleons would be enough not to allow the neutron stars with $M\leq M_{max}$
to turn into black holes [54].
The TBF effect seemed to have been in the direction of increasing the neutron
star’s maximum mass and decreasing the central density associated with maximum
mass. Investigating the dependence of the radius on the central density
showed, generally, that the radius would decrease as the central density
increases. More specifically, at small values of central density or pressure,
the radius would experience a relatively sharp drop as the central density or
pressure grows. Beyond a certain central pressure or density (around $5\times 10^{14}$ g/cm$^{3}$ with TBF and $9\times 10^{14}$ g/cm$^{3}$ without TBF), there appears a drastic change where the radius would not shrink as sharply. Our
hypothesized neutron star, constructed using U$v_{14}$ potential+TBF+parabolic
approximation+TOV equation, could predict a radius for all the observed masses
below 1.89 $M_{\odot}$ (beta-stability results) or 1.91 $M_{\odot}$ (neutron-
matter results). In the observation case of VelaX-1 [68], both of the radius
results (neutron and beta-stable matter) agreed with the observed one within
the reported uncertainties.
Knowing that there are inherent problems regarding the U$v_{14}$ potential and
in order to study the significance of the proposed TBF and its implications
for the beta-stable matter and neutron star’s stability, we are encouraged to
further investigate the prospects of the TBF effect in conjunction with more
realistic nucleon-nucleon models constructed on the basis of large-scale
scattering databases at intermediate energies.
## Acknowledgments
We wish to thank the Shiraz University Research Council.
Data Availability Statement: No data are associated with this manuscript.
## References
* [1] A. Kievsky, M. Viviani, L. Girlanda, and L.E. Marcucci, Phys. Rev. C 81, 044003 (2010).
* [2] A. Lovato, O. Benhar, S. Fantoni, and K. Schmidt, Phys. Rev. C 85, 024003 (2012).
* [3] R.B. Wiringa, V. Fiks, and A. Fabrocini, Phys. Rev. C 38(2), 1010 (1988).
* [4] G.H. Bordbar and M. Hayati, Int. J. Mod. Phys. A 21, 7, 1555 (2006).
* [5] K. Bamba, S. Nojiri, and D. Odintsov, arXiv: 1101.2820 (2011).
* [6] Z. Yousaf, K. Bamba, and M.Z. Bhatti, Phys. Rev. D 93, 124048 (2016).
* [7] M.Z. Bhatti, Z. Yousaf, and M. Yousaf, Mod. Phys. Lett. A 38, no.12n13, p.2350067 (2023).
* [8] P. Demorest, T. Pennucci, S. Ransom, M. Roberts, and J. Hessels, Nature 467, 1081 (2010).
* [9] J. Antoniadis, P.C.C. Freire, N. Wex et al., Science 340, 6131 (2013).
* [10] H.T. Cromartie, E. Fonseca, S.M. Ransom, P.B. Demorest, Z. Arzoumanian et al., Nat. Astron. 4, 72 (2020).
* [11] M. Linares, T. Shahbaz, and J. Casares, Astrophys. J. 859, 54 (2018).
* [12] F. Ozel, D. Psaltis, R. Narayan, and A.S. Villarreal, Astrophys. J. 757, 55 (2012).
* [13] F. Foucart, arXiv:astro-ph.HE 2006.10570v (2020).
* [14] M.Z. Bhatti, Z. Yousaf, and M. Yousaf, New Astronomy 106, 102132 (2024).
* [15] M.Z. Bhatti, Z. Yousaf, and M. Yousaf, Int. J. Mod. Phys. D 31, no.16, p.2250116 (2022).
* [16] M. Yousaf, M.Z. Bhatti, and Z. Yousaf, Nucl Phys. B 995, 116328 (2023).
* [17] M.Z. Bhatti, M. Yousaf, and Z. Yousaf, General Relativity and Gravitation 55, 16 (2023).
* [18] I.E. Lagaris and V.R. Pandharipande, Nucl. Phys. A 334, 217 (1980).
* [19] I.E. Lagaris and V.R. Pandharipande, Nucl. Phys. A 359, 331 (1981).
* [20] R.B. Wiringa, R.A. Smith, and T.L. Ainsworth, Phys. Rev. C 29, 1207 (1984).
* [21] R.B. Wiringa, V.G.J. Stokes, and R. Schiavilla, Phys. Rev. C 51, 38 (1995).
* [22] G.H. Bordbar and M. Modarres, Phys. Rev. C 57, 714 (1998).
* [23] G.H. Bordbar and M. Modarres, J. Phys. G 23, 1631 (1997).
* [24] G.H. Bordbar Int. J. Mod. Phys. A 18, 3629 (2003).
* [25] M. Bigdeli, G.H. Bordbar, and Z. Rezaei, Phys. Rev. C 80, 343101 (2009).
* [26] Z. Rezaei and G.H. Bordbar, Eur. Phys. J. A 53, 43 (2017).
* [27] G.H. Bordbar and Z. Rezaei, Phys. Lett. B 718, 1125 (2013).
* [28] G.H. Bordbar and M. Bigdeli, Phys. Rev. C 75, 0458041 (2007).
* [29] G.H. Bordbar and M. Bigdeli, Phys. Rev. C 78, 0543151 (2008).
* [30] G.H. Bordbar, Z. Rezaei, and A. Montakhab, Phys. Rev. C 83, 0443101 (2011).
* [31] Z. Rezaei, M. Bigdeli, and G.H. Bordbar, Int. J. Mod. Phys. E 24, 1550075 (2015).
* [32] M. Modarres and G.H. Bordbar, Phys. Rev. C 58, 2781 (1998).
* [33] G.H. Bordbar and M. Bigdeli, Phys. Rev. C 77, 0158051 (2008).
* [34] G.H. Bordbar and M. Bigdeli, Phys. Rev. C 76, 0358031 (2007).
* [35] M. Bigdeli, G.H. Bordbar, and A. Poostforush, Phys. Rev. C 82, 0343091 (2010).
* [36] G.H. Bordbar, S.H. Hendi, and B. Eslam Panah, Eur. Phys. J. Plus 131, 315 (2016).
* [37] Z. Rezaei and G.H. Bordbar, Eur. Phys. J. A 52, 132 (2016).
* [38] S.H. Hendi, G.H. Bordbar, B. Eslam Panah, and S. Panahiyan, JCAP 07, 004 (2017).
* [39] B. Eslam Panah, G.H. Bordbar, S.H. Hendi, R. Ruffini, Z. Rezaei and R. Moradi, Astrophys. J. 848, 24 (2017).
* [40] B. Eslam Panah, T. Yazdizadeh, and G.H. Bordbar, Eur. Phys. J. C 79, 815 (2019).
* [41] G.H. Bordbar and M. Karami, Eur. Phys. J. C 82, 74 (2022).
* [42] J. Carlson, V.R. Pandharipande, and R.B. Wiringa, Nucl. Phys. A 401, 59 (1983).
* [43] B.S. Pudliner, V.R. Pandharipande, J. Carlson, and R.B. Wiringa, Phys. Rev. Lett. 74, 4396 (1995).
* [44] H. Moeini and G.H. Bordbar, Nucl. Phys. A 1017, 122339 (2022).
* [45] R.C. Tolman, Proc. Natl. Acad. Sci. USA 20, 169 (1934).
* [46] R.C. Tolman, Phys. Rev. J. Arch. 55, 364 (1939).
* [47] J.R. Oppenheimer and G.M. Volkoff, Phys. Rev. J. Arch. 55, 374 (1939).
* [48] M.N. Harakeh, J.H. Koch, and O. Scholten, in Proceedings of a NATO Advanced Study Institute on Correlations and Clustering Phenomena in Subatomic Physics, (Netherlands, Dronten, 1997).
* [49] I. Fujita and H. Miyazawa, Prog. Theor. Phys. 17, 360 (1957).
* [50] B.F. Gibson and B.H.J. McKellar, Few-Body Systems 3, 143 (1988).
* [51] I.E. Lagaris and V.R. Pandharipande, Nucl. Phys. A 359, 349 (1981).
* [52] J.W. Clark, Prog. Part. Nucl. Phys. 2, 89 (1979).
* [53] I.E. Lagaris and V.R. Pandharipande, Nucl. Phys. A 369, 470 (1981).
* [54] S. Shapiro and S. Teukolsky, Black Holes, White Dwarfs, and Neutron Stars, (Wiley, New York, 1983).
* [55] E. Khan, J. Margueron, and I. Vidaña, Phys. Rev. Lett. 109, 092501 (2012).
* [56] M. Baldo and G.F. Burgio, Prog. Part. Nucl. Phys. 91, 203 (2016).
* [57] V.R. Pandharipande and V.K. Garde, Phys. Lett. B 39, 608 (1972).
* [58] H. Abreu, H. Hernandez, and L.A. Nunes, Class. Quantum Grav. 24, 4631 (2007).
* [59] D. Psaltis, Living Rev. Relativ. 11, 9 (2008).
* [60] K.Y. Eksi, C. Gungor, and M.M. Turkoglu, Phys. Rev. D 89, 063003 (2014).
* [61] H.A. Buchdahl, Phys. Rev. 116, 1027 (1959).
* [62] H. Bondi, Proc. R. Soc. Lond. A 282, 303 (1964).
* [63] H.A. Buchdahl, Astrophys. J. 146, 275 (1966).
* [64] S. Chandrasekhar, Astrophys. J. 140, 417 (1964).
* [65] H. Kunstem, MNRAS 232, 163 (1988).
* [66] M.K. Mak and T. Harko, Eur. Phys. J. C 73, 2585 (2013).
* [67] M. Kalam, S.M. Hossein, and S. Molla, arXiv:1510.07015v1 [gr-qc], (2015).
* [68] K.L. Rawls et al., Astrophys. J. 730, 25 (2011).
* [69] M. Orsaria, H. Rodrigues, F. Weber, and G.A. Contrera, Phys. Rev. D 87, 023001 (2013).
* [70] M. Orsaria, H. Rodrigues, F. Weber, and G.A. Contrera, Phys. Rev. C 89, 015806 (2014).
* [71] R. Lastowiecki, D. Blaschke, T. Fischer, and T. Klahn, Phys. Part. Nucl. 46, 843 (2015).
* [72] G.H. Bordbar and N. Riazi, Astrophys. Space Sci. 282, 563 (2002).
* [73] G.H. Bordbar and Z. Rezaei, Res. Astron. Astrophys. 13, 197 (2013).
* [74] A. van der Meer, L. Kapper, M.H. van Kerkwijk, and E.P.J. van den Heuvel, in Interacting Binaries: Accretion, Evolution, and Outcomes, American Institute of Physics Conference Series 797, (2005).
* [75] D. Steeghs and P.G. Jonker, Astrophys. J. 669, L85 (2007).
* [76] T. Guver, F. Ozel, A. Cebrera-Lavers, and P. Wroblewski, Astrophys. J. 712, 964 (2010).
# UMix: Improving Importance Weighting for Subpopulation Shift via
Uncertainty-Aware Mixup
Zongbo Han1, Zhipeng Liang2, Fan Yang3, Liu Liu3, Lanqing Li3, Yatao Bian3,
Peilin Zhao3, Bingzhe Wu3, Changqing Zhang1, Jianhua Yao3
1College of Intelligence and Computing, Tianjin University, 2Hong Kong University of Science and Technology, 3Tencent AI Lab
Equal contribution. ‡ Supported by 2021 Tencent Rhino-Bird Research Elite Training Program. § Work done during an internship at Tencent AI Lab. $\dagger$ Corresponding authors.<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Subpopulation shift widely exists in many real-world machine learning
applications, referring to the training and test distributions containing the
same subpopulation groups but varying in subpopulation frequencies. Importance reweighting is a common way to handle the subpopulation shift issue by imposing constant or adaptive sampling weights on each sample in the training dataset. However, some recent studies have recognized that most of these approaches fail to improve the performance over empirical risk minimization, especially when applied to over-parameterized neural networks. In this work, we propose a simple yet practical framework, called uncertainty-aware mixup (UMix), to mitigate the overfitting issue in over-parameterized models by reweighting the “mixed” samples according to the sample uncertainty. A training-trajectory-based uncertainty estimation is incorporated in the proposed UMix for each sample to flexibly characterize the subpopulation distribution.
We also provide insightful theoretical analysis to verify that UMix achieves
better generalization bounds than prior works. Further, we conduct extensive
empirical studies across a wide range of tasks to validate the effectiveness
of our method both qualitatively and quantitatively. Code is available at this
URL.
## 1 Introduction
Empirical risk minimization (ERM) typically faces challenges from distribution
shift, which refers to the difference between training and test distributions
[61, 27, 3]. One common type of distribution shift is subpopulation shift
wherein the training and test distributions consist of the same subpopulation
groups but differ in subpopulation frequencies [6, 8]. Many practical research
problems (e.g., fairness of machine learning and class imbalance) can be considered special cases of subpopulation shift [32, 21, 28]. For example,
in the setting of fair machine learning, we train the model on a training
dataset with biased demographic subpopulations and test it on an unbiased test
dataset [32, 21]. Therefore the essential goal of fair machine learning is to
mitigate the subpopulation shift between training and test datasets.
Many approaches have been proposed for solving this problem. Among these
approaches, importance weighting (IW) is a classical yet effective technique
that imposes static or adaptive weights on each sample when building the weighted empirical loss. Therefore, each subpopulation group contributes comparably to
the final training objective. Specifically, there are normally two ways to
achieve importance reweighting. Early works propose to reweight the sample
inverse proportionally to the subpopulation frequencies (i.e., static weights)
[61, 59, 13, 58, 11, 42], such as class-imbalanced learning approaches [13,
11, 42]. Alternatively, a more flexible way is to reweight individual samples
adaptively according to training dynamics [66, 74, 47, 72, 35, 48, 40, 62].
Distributionally robust optimization (DRO) is one of the most representative
methods in this line, which minimizes the loss over the worst-case
distribution in a neighborhood of the empirical training distribution. A
commonly used dual form of DRO can be seen as a special case of importance
reweighting wherein the sampling weights are updated based on the current loss
[52, 24, 38, 25] in an alternated manner.
However, some recent studies have shown both empirically and theoretically
that these IW methods could fail to achieve better worst-case subpopulation
performance compared with ERM. Empirically, prior works [10, 58] recognize
that various IW methods tend to exacerbate overfitting, which leads to a
diminishing effect on stochastic gradient descent (SGD) over training epochs
especially when they are applied to over-parameterized neural networks (NNs).
Theoretically, previous studies prove that for over-parameterized neural
networks, reweighting algorithms do not improve over ERM because their
implicit biases are (almost) equivalent [73, 59, 68]. In addition, some prior
works also point out that using conventional regularization techniques such as
weight decay cannot significantly improve the performance of IW [58].
To this end, we introduce a novel technique called uncertainty-aware mixup (UMix), which reweights the mixed samples according to the uncertainty within the mini-batch while mitigating overfitting. Specifically, we employ the well-known mixup technique to produce “mixed” augmented samples. Then we train the model on these mixed samples to make sure it always sees “novel” samples, so that the effect of IW will not dissipate even at the end of training. To enforce the model to perform fairly well on all subpopulations, we further efficiently reweight the mixed samples according to the uncertainty of the original samples. The weighted mixup loss function is induced by combining the weighted losses of the corresponding two original samples. At a high level, this approach augments training samples in an uncertainty-aware manner, i.e., putting more focus on samples with higher prediction uncertainties, which are likely to belong to minority subpopulations. We also show that UMix provides an additional theoretical benefit, achieving a tighter generalization bound than weighted ERM [41, 40, 72, 38]. The contributions of this paper are:
* •
We propose a simple and practical approach called uncertainty-aware mixup
(UMix) to improve previous IW methods by reweighting the mixed samples, which
provides a new framework to mitigate overfitting in over-parameterized neural
networks.
* •
Under the proposed framework, we provide a theoretical analysis showing that UMix achieves a tighter generalization bound than weighted ERM.
* •
We perform extensive experiments on a wide range of tasks, where the proposed
UMix achieves excellent performance in both group-oblivious and group-aware
settings.
Comparison with existing works. Here we discuss the key differences between UMix and other works. In contrast to most IW methods (e.g., CVaR-DRO [38] and JTT [41]), UMix employs a mixup strategy to improve previous IW methods and mitigate model overfitting. Among these methods, JTT [41] and LISA [70] are the two works most closely related to ours. Specifically, JTT is a two-stage optimization framework in which an additional network is used to build an error set, whose samples are then upweighted in the subsequent training stage. LISA also modifies mixup to improve model robustness against distribution shift; however, LISA mixes samples within the same subpopulation or with the same label, and thus needs additional subpopulation information. In contrast, UMix introduces sample weights into the vanilla mixup strategy by quantitatively measuring sample uncertainties, without any subpopulation information. In addition, our work is orthogonal to LISA, i.e., our weight-building strategy could be used to improve LISA’s performance. In practice, our method consistently outperforms previous approaches that do not use subpopulation information and even achieves competitive performance against methods that leverage subpopulation information. We also provide a theoretical analysis explaining why UMix works better than weighted ERM [41, 40, 72, 38].
## 2 Related Work
### 2.1 Importance weighting
To improve model robustness against subpopulation shift, importance weighting (IW) is a classical yet effective technique that imposes a static or adaptive weight on each sample and then builds a weighted empirical loss, so that each subpopulation has comparable strength in the final training objective. There are typically two ways to achieve importance reweighting, i.e., using static or adaptive importance weights.
Static methods. Naive reweighting approaches perform static reweighting based on the distribution of the training samples [61, 59, 13, 58, 11, 42]. Their core motivation is to make different subpopulations contribute comparably to the training objective. The most intuitive way is to set the weight of each sample to be inversely proportional to the number of samples in its subpopulation [61, 59, 58]. Other methods derive sample weights from the effective number of samples [13], subpopulation margins [11], or Bayesian networks [42].
Adaptive methods. In contrast to the above static methods, a more flexible way is to assign each individual sample an adaptive weight that varies according to the training dynamics [66, 74, 47, 72, 35, 48, 40, 62]. Distributionally robust optimization (DRO) is one of the most representative methods in this line; it minimizes the loss over the worst-case distribution in a neighborhood of the empirical training distribution. A commonly used dual form of DRO can be considered a special case of importance reweighting in which the sample weights are updated based on the current loss [52, 24, 38, 25] in an alternating manner. For example, in the group-aware setting (i.e., when we know which subpopulation each sample belongs to), GroupDRO [58] introduces an online optimization algorithm to update the weight of each group. In the group-oblivious setting, [66, 35, 47, 48] model the problem as a (regularized) minimax game, in which one player minimizes the loss by optimizing the model parameters and the other maximizes the loss by assigning weights to the samples.
### 2.2 Uncertainty quantification
The core of our method is high-quality uncertainty quantification for each sample, and many approaches have been proposed for this goal. The uncertainty of deep learning models comprises epistemic (model) uncertainty and aleatoric (data) uncertainty [30]. To obtain epistemic uncertainty, Bayesian neural networks (BNNs) [53, 45, 15, 30] replace the deterministic weight parameters of the model with distributions. Unlike BNNs, ensemble-based methods obtain epistemic uncertainty by training multiple models and ensembling them [36, 22, 2, 26]. Aleatoric uncertainty captures the inherent noise in the data, which is usually learned as a function of the data [30, 37, 54]. Uncertainty quantification has been successfully applied in many fields such as multimodal learning [44, 20, 19], multitask learning [31, 14], and reinforcement learning [29, 39]. Unlike previous methods, ours focuses on estimating the epistemic uncertainty of training samples under subpopulation shift and upweighting uncertain samples, thereby improving the performance of the minority subpopulations, which exhibit high uncertainty.
## 3 Method
In this section, we introduce the technical details of UMix. The key idea of UMix is to exploit uncertainty information to upweight mixed samples, thereby encouraging the model to perform uniformly well on all subpopulations. We first introduce the basic procedure of UMix and then present how to obtain the high-quality uncertainty estimates that form its fundamental building block.
### 3.1 Background
The necessary background and notation are provided here. Let the input and label spaces be $\mathcal{X}$ and $\mathcal{Y}$, respectively. We are given a training dataset $\mathcal{D}$ with $N$ samples $\{(x_{i},y_{i})\}_{i=1}^{N}$ drawn i.i.d. from a probability distribution $P$. We consider the setting where the training distribution $P$ is a mixture of $G$ predefined subpopulations, i.e., $P=\sum_{g=1}^{G}k_{g}P_{g}$, where $k_{g}$ and $P_{g}$ denote the $g$-th subpopulation’s proportion and distribution, respectively. Our goal is to obtain a model $f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}$, parameterized by $\theta\in\Theta$, that performs well on all subpopulations.
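To make the setting concrete, the following minimal sketch builds a toy dataset from such a mixture $P=\sum_{g}k_{g}P_{g}$; the group proportions, Gaussian components, and label rule are hypothetical choices for illustration, not part of our method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy mixture with G = 2 subpopulations: a majority group
# (k_1 = 0.9) and a minority group (k_2 = 0.1), each Gaussian in R^2.
proportions = np.array([0.9, 0.1])                      # k_g
means = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]   # centers of P_g

def sample_dataset(n):
    g = rng.choice(2, size=n, p=proportions)             # subpopulation index
    x = np.stack([rng.normal(means[gi], 1.0) for gi in g])
    y = (x[:, 0] > 0).astype(int) ^ g                    # label rule differs by group
    return x, y, g

x, y, g = sample_dataset(1000)
print("group sizes:", np.bincount(g))                    # heavily imbalanced, e.g. ~[900, 100]
```

A model trained by ERM on such data is dominated by the majority group, which is exactly the failure mode discussed next.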
The well-known empirical risk minimization (ERM) algorithm ignores the subpopulations and minimizes the expected risk $\mathbb{E}[\ell(\theta,x,y)]$, where $\ell$ denotes the loss function. This leads the model to pay more attention to the majority subpopulations in the training set, resulting in poor performance on the minority subpopulations. For example, ERM-based models may learn spurious correlations that exist in majority subpopulations but not in minority subpopulations [58]. The proposed method aims to learn a model that is robust against subpopulation shift via importance weighting.
Previous works on improving robustness to subpopulation shift investigate several settings, i.e., group-aware and group-oblivious [72, 41, 58]. Most prior works assume that the group label is available during training [58, 70]; this is called the group-aware setting. However, training group labels may be unavailable: in many real applications it is hard to extract group label information, and such information may also be withheld due to privacy concerns. This paper studies the group-oblivious setting, in which no group information is available for training examples. This requires the model to identify underperforming samples and pay more attention to them during training.
### 3.2 Importance-weighted mixup
UMix employs an aggressive data augmentation strategy called uncertainty-aware mixup to mitigate overfitting. Specifically, vanilla mixup [75, 76] constructs virtual training examples (i.e., mixed samples) by linearly interpolating between inputs/features and the corresponding labels:
$\widetilde{x}_{i,j}=\lambda
x_{i}+(1-\lambda)x_{j},\;\widetilde{y}_{i,j}=\lambda y_{i}+(1-\lambda)y_{j},$
(1)
where $(x_{i},y_{i}),(x_{j},y_{j})$ are two samples drawn at random from the empirical training distribution and $\lambda\in[0,1]$ is usually sampled from a beta distribution. Vanilla mixup then optimizes the following loss function:
$\mathbb{E}_{\\{(x_{i},y_{i}),(x_{j},y_{j})\\}}[\ell(\theta,\widetilde{x}_{i,j},\widetilde{y}_{i,j})].$
(2)
When the cross-entropy loss is employed, Eq. 2 can be rewritten as:
$\mathbb{E}_{\\{(x_{i},y_{i}),(x_{j},y_{j})\\}}[\lambda\ell(\theta,\widetilde{x}_{i,j},y_{i})+(1-\lambda)\ell(\theta,\widetilde{x}_{i,j},y_{j})].$
(3)
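This rewriting is immediate because the cross-entropy loss is linear in the label: writing $p_{\theta}(\widetilde{x}_{i,j})$ for the predicted probability vector and taking $y_{i},y_{j}$ as one-hot vectors,

$\ell(\theta,\widetilde{x}_{i,j},\widetilde{y}_{i,j})=-\widetilde{y}_{i,j}^{\top}\log p_{\theta}(\widetilde{x}_{i,j})=-\big(\lambda y_{i}+(1-\lambda)y_{j}\big)^{\top}\log p_{\theta}(\widetilde{x}_{i,j})=\lambda\ell(\theta,\widetilde{x}_{i,j},y_{i})+(1-\lambda)\ell(\theta,\widetilde{x}_{i,j},y_{j}).$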
Eq. 3 can thus be seen as a linear combination (mixup) of $\ell(\theta,\widetilde{x}_{i,j},y_{i})$ and $\ell(\theta,\widetilde{x}_{i,j},y_{j})$. Unfortunately, since vanilla mixup does not account for subpopulations with poor performance, it has been shown experimentally to be non-robust against subpopulation shift [70]. To this end, we introduce a simple yet effective method called UMix, which applies a weighted linear combination to the original losses in Eq. 3 to encourage the learned model to pay more attention to samples with poor performance.
In contrast to previous IW methods, UMix applies the importance weights to the mixed samples. To do this, we first estimate the uncertainty of each sample and then use this quantity to construct the importance weight (the higher the uncertainty, the higher the weight, and vice versa). For the $i$-th sample $x_{i}$, we denote its importance weight by $w_{i}$. Given the importance weights, we form a weighted linear combination of $\ell(\theta,\widetilde{x}_{i,j},y_{i})$ and $\ell(\theta,\widetilde{x}_{i,j},y_{j})$:
$\mathbb{E}_{\\{(x_{i},y_{i}),(x_{j},y_{j})\\}}[{\color[rgb]{.75,0,.25}{w}_{i}}\lambda\ell(\theta,\widetilde{x}_{i,j},y_{i})+{\color[rgb]{.75,0,.25}w_{j}}(1-\lambda)\ell(\theta,\widetilde{x}_{i,j},y_{j})],$
(4)
where ${w}_{i}$ and ${w}_{j}$ denote the importance weights of the $i$-th and $j$-th samples, respectively. In practice, to balance UMix against normal training, we introduce a hyperparameter $\sigma$ denoting the probability of applying UMix. The full training pseudocode of UMix is shown in Algorithm 1.
Input: Training dataset $\mathcal{D}$ and the corresponding importance weights $\mathbf{w}=[w_{1},\cdots,w_{N}]$, hyperparameter $\sigma$ controlling the probability of applying UMix, and parameter $\alpha$ of the beta distribution;
1 for _each iteration_ do
2 Obtain training samples $(x_{i},y_{i})$, $(x_{j},y_{j})$ and the corresponding weights $w_{i}$, $w_{j}$;
3 Sample $p\sim\mathrm{Uniform}(0,1)$;
4 if $p<\sigma$ then sample $\lambda\sim\mathrm{Beta}(\alpha,\alpha)$; else set $\lambda=0$;
5 Obtain the mixed input $\widetilde{x}_{i,j}=\lambda x_{i}+(1-\lambda)x_{j}$;
6 Compute the loss ${\color[rgb]{.75,0,.25}{w}_{i}}\lambda\ell(\theta,\widetilde{x}_{i,j},y_{i})+{\color[rgb]{.75,0,.25}w_{j}}(1-\lambda)\ell(\theta,\widetilde{x}_{i,j},y_{j})$;
7 Update the model parameters $\theta$ to minimize the loss with an optimization algorithm.
Algorithm 1 The training pseudocode of UMix.
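For concreteness, the following PyTorch sketch implements one training step of Algorithm 1 under stated assumptions: classification with cross-entropy, and partner samples drawn by shuffling the mini-batch. Function and variable names are ours, not from the released code.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def umix_step(model, optimizer, x, y, w, alpha=1.0, sigma=0.5):
    """One UMix training step following Algorithm 1 (a sketch).
    x: (B, ...) inputs, y: (B,) integer labels,
    w: (B,) precomputed importance weights from Algorithm 2."""
    # Pair each sample i with a shuffled partner j within the mini-batch.
    idx = torch.randperm(x.size(0))
    x_j, y_j, w_j = x[idx], y[idx], w[idx]

    # With probability sigma mix the pair; otherwise lam = 0, which reduces
    # to plain weighted training on the partner sample (line 4 of Alg. 1).
    if torch.rand(1).item() < sigma:
        lam = Beta(alpha, alpha).sample().item()
    else:
        lam = 0.0
    x_mix = lam * x + (1.0 - lam) * x_j

    logits = model(x_mix)
    # Weighted linear combination of the two per-sample losses (Eq. 4).
    loss_i = F.cross_entropy(logits, y, reduction="none")
    loss_j = F.cross_entropy(logits, y_j, reduction="none")
    loss = (w * lam * loss_i + w_j * (1.0 - lam) * loss_j).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```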
### 3.3 Uncertainty-aware importance weights
We now present how to obtain the uncertainty-aware importance weights for training. In the group-oblivious setting, the key is to find samples with high uncertainty. For example, DRO-based algorithms construct the uncertainty set from the current loss [52, 24, 38, 25]. It has been shown experimentally that the uncertain samples found in this way change constantly during training [41], so these methods do not consistently upweight the minority subpopulations. We therefore introduce a sampling-based, stable uncertainty estimate that better characterizes the subpopulation shift.
Given a well-trained neural classifier $f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}$ producing the predicted class $\hat{f}_{\theta}(x)$, a simple way to quantify the uncertainty of a sample is whether the sample is correctly classified. However, as pointed out in previous work [36], a single model cannot accurately characterize this uncertainty. We therefore propose to obtain the uncertainty through Bayesian sampling from the model posterior distribution $p(\theta;\mathcal{D})$. Specifically, given a sample $(x_{i},y_{i})$, we define its training uncertainty as:
$u_{i}=\int\kappa(y_{i},\hat{f}_{\theta}(x_{i}))\,p(\theta;\mathcal{D})\,d\theta,\quad\text{where}\;\kappa(y_{i},\hat{f}_{\theta}(x_{i}))=\begin{cases}0,&\text{ if }y_{i}=\hat{f}_{\theta}(x_{i})\\ 1,&\text{ if }y_{i}\neq\hat{f}_{\theta}(x_{i})\end{cases}.$ (5)
Then we can approximate Eq. 5 with $T$ Monte Carlo samples as $u_{i}\approx\frac{1}{T}\sum_{t=1}^{T}\kappa(y_{i},\hat{f}_{\theta_{t}}(x_{i}))$, where each $\theta_{t}\in\Theta$ is obtained by minimizing the expected risk. In practice, sampling $\{\theta_{t}\}_{t=1}^{T}$ from the posterior (i.e., $\theta_{t}\sim p(\theta;\mathcal{D})$) is computationally expensive and sometimes even intractable, since multiple models must be trained or extra approximation error is introduced. Inspired by the recent Bayesian learning paradigm SWAG [46], we propose to use information from the historical training trajectory to approximate the sampling process. More specifically, we train a model with ERM and save the predictions $\hat{f}_{\theta_{t}}(x_{i})$ of each sample at each training epoch $t$. Then, to avoid the influence of inaccurate predictions at the beginning of training, we estimate the uncertainty using only the predictions made after the first $T_{s}-1$ epochs:
$u_{i}\approx\frac{1}{T}\sum_{t=T_{s}}^{T_{s}+T}\kappa(y_{i},\hat{f}_{\theta_{t}}(x_{i})).$ (6)
We empirically show that this approximation yields reliable uncertainty estimates in Sec. B.4 of the Appendix.
To obtain reasonable importance weights, we posit that samples with high uncertainty should receive higher weights and vice versa. A reasonable importance weight is therefore linearly and positively related to the corresponding uncertainty,
$w_{i}=\eta u_{i}+c,$ (7)
where $\eta\in\mathbb{R}_{+}$ is a hyperparameter and $c\in\mathbb{R}_{+}$ is a constant that keeps the weight positive. In practice, we set $c$ to 1.
The whole process for obtaining training importance weights is shown in
Algorithm 2.
Input: Training dataset $\mathcal{D}$, sampling start epoch $T_{s}$, number of samples $T$, and upweighting hyperparameter $\eta$;
Output: The training importance weights $\mathbf{w}=[w_{1},\cdots,w_{N}]$;
1 for _each epoch $t$_ do
2 Train $f_{\theta}$ by minimizing the expected risk $\mathbb{E}[\ell(\theta,x_{i},y_{i})]$;
3 Save the predictions $\{\hat{f}_{\theta_{t}}(x_{i})\}_{i=1}^{N}$ of the current epoch $t$;
4 Obtain the uncertainty of each sample with $u_{i}\approx\frac{1}{T}\sum_{t=T_{s}}^{T_{s}+T}\kappa(y_{i},\hat{f}_{\theta_{t}}(x_{i}))$;
5 Obtain the importance weight of each sample with $w_{i}=\eta u_{i}+c$.
Algorithm 2 The process for obtaining the training importance weights.
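A minimal NumPy sketch of Algorithm 2's post-processing, assuming the per-epoch predicted classes have already been saved as an $(E,N)$ array; the helper name and array layout are our own, not from the released code.

```python
import numpy as np

def importance_weights(pred_history, labels, T_s, T, eta, c=1.0):
    """Compute uncertainty-aware importance weights (Eqs. 6-7); a sketch.

    pred_history: (E, N) array, pred_history[t, i] is the predicted class
                  of sample i after epoch t (saved while training with ERM).
    labels:       (N,) ground-truth classes.
    """
    window = pred_history[T_s : T_s + T]     # T epochs after the warm-up phase
    kappa = (window != labels[None, :])      # 1 if misclassified (Eq. 5)
    u = kappa.mean(axis=0)                   # Monte Carlo estimate (Eq. 6)
    w = eta * u + c                          # linear mapping (Eq. 7)
    return u, w

# Hypothetical usage: skip the first 5 noisy epochs, average the next 10.
# u, w = importance_weights(pred_history, labels, T_s=5, T=10, eta=5.0)
```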
Remark. Total uncertainty can be divided into epistemic and aleatoric uncertainty [30]. In the proposed method, samples are weighted only by epistemic uncertainty, obtained by sampling models along the training trajectory, which can be seen as an efficient way of sampling from the model posterior. Moreover, we do not model inherent noise in the training samples (aleatoric uncertainty), since it is usually intractable to distinguish noisy samples from minority samples under subpopulation shift.
Why does this estimation approach work? Recent work has empirically shown that easy-to-classify samples are learned earlier during training than hard-to-classify ones [18], and that hard-to-classify samples are also more likely to be forgotten by neural networks [64]. The frequency with which samples are correctly classified during training can serve as supervision in confidence calibration [51], and Snapshot performs ensemble learning on several local-minima models along the optimization path [26]. The proposed method is inspired by these observations and algorithms. During training, samples from the minority subpopulations are classified correctly less often, which corresponds to higher training uncertainty; conversely, samples from the majority subpopulations have lower training uncertainty because they are classified correctly more often. In Sec. B.5 of the Appendix, we show the accuracy of different subpopulations during training to empirically validate this claim, and in Sec. C of the Appendix we explain in detail why we choose the history-based uncertainty estimate.
## 4 Experiments
In this section, we conduct experiments on multiple datasets with subpopulation shift to answer the following questions. Q1 Effectiveness (I): In the group-oblivious setting, does the proposed method outperform other algorithms? Q2 Effectiveness (II): How does UMix perform without group labels in the validation set? Q3 Effectiveness (III): Although our method does not use training group labels, does it perform better than algorithms that do? Q4 Effectiveness (IV): Can UMix improve model robustness against domain shift, where the training and test distributions contain different subpopulations? Q5 Qualitative analysis: Are the obtained uncertainties of the training samples trustworthy? Q6 Ablation study: What is the key factor behind the performance improvement of our method?
### 4.1 Setup
We briefly present the experimental setup here, including the datasets, evaluation metrics, model selection, and comparison methods. Please refer to Sec. B in the Appendix for a more detailed setup.
Datasets. We perform experiments on three datasets with multiple subpopulations: Waterbirds [58], CelebA [43], and CivilComments [9]. We also validate UMix in a domain shift scenario, a more challenging distribution shift problem in which the training and test data contain different subpopulations. To this end, we use a medical dataset called Camelyon17 [5, 33], which consists of pathological images from five hospitals: the training data come from three hospitals, while the validation and test data are sampled from the remaining ones.
Evaluation metrics. Consistent with existing works [70, 33, 56], we report the average accuracy on Camelyon17 over 10 different random seeds. On the other datasets, we repeat each experiment 3 times and report the average and worst-case accuracy over all subpopulations. The trade-off between average and worst-case accuracy is a well-known challenge [21]. In this paper, we emphasize worst-case accuracy, which matters more than average accuracy in some application scenarios; for example, in fairness-related applications, we should pay more attention to the performance of the minority groups to reduce the gap to the majority groups and ensure the fairness of the machine learning decision system.
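For reference, the two metrics can be computed as follows; this is a sketch of the standard definitions (sample-level average accuracy and minimum per-group accuracy), with group ids assumed to be integers:

```python
import numpy as np

def group_accuracies(y_true, y_pred, groups):
    """Return (average accuracy, worst-case accuracy over subpopulations)."""
    accs = [np.mean(y_pred[groups == g] == y_true[groups == g])
            for g in np.unique(groups)]
    return np.mean(y_pred == y_true), min(accs)
```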
Model selection. Following prior works [41, 72], we assume the group labels of validation samples are available and select the best model by worst-case accuracy over all subpopulations on the validation set. We also conduct model selection based on average accuracy to show the impact of validation group label information on our method.
Comparisons in the group-oblivious setting. Here we list the baselines used in the group-oblivious setting. (1) ERM trains the model using standard empirical risk minimization. (2) Focal loss [40] downweights the loss of well-classified examples according to the current classification confidence. (3) DRO-based methods, including CVaR-DRO and $\chi^{2}$-DRO [38] as well as CVaR-DORO and $\chi^{2}$-DORO [72], minimize the loss over the worst-case distribution in a neighborhood of the empirical training distribution. (4) JTT [41] constructs an error set and upweights the samples in it to improve the worst-case performance over all subpopulations.
Comparisons in the group-aware setting. To better demonstrate the performance of the proposed method, we compare it with multiple methods that use training group labels, including IRM [3], IB-IRM [1], V-REx [34], CORAL [63], GroupDRO [58], DomainMix [69], Fish [60], and LISA [70].
Mixup-based comparison methods. We also compare with vanilla mixup and in-group mixup, where vanilla mixup is performed on arbitrary pairs of samples and in-group mixup is performed on samples with the same label and from the same subpopulation.
### 4.2 Experimental results
We present experimental results and discussions to answer the above-posed
questions.
Q1 Effectiveness (I). Since our algorithm does not need training group labels, we conduct experiments to verify its superiority over current group-oblivious algorithms. The results are shown in Table 1, and we make the following observations: (1) UMix achieves the best worst-case accuracy on all three datasets; for example, on CelebA, UMix achieves a worst-case accuracy of 85.3%, while the second best is 81.1%. (2) ERM consistently attains the best average accuracy but typically the lowest worst-case accuracy; the underlying reason is that the dominance of the majority subpopulations during training leads to poor performance on the minority subpopulations. (3) UMix shows competitive average accuracy; for example, on CelebA it achieves 90.1%, outperforming all other IW/DRO methods.
Q2 Effectiveness (II). We evaluate on Waterbirds and CelebA without using the validation group labels. Specifically, after each training epoch we evaluate the current model on the validation set and keep the model with the best average accuracy; we then test the saved model on the test set. The results are shown in Table 2: without the validation group information, the worst-case accuracy of our method drops slightly while the average accuracy improves slightly.
Q3 Effectiveness (III). We further compare with algorithms that require training group labels; the results are shown in Table 3. UMix, without using group labels, is quite competitive with these group-aware algorithms. Benefiting from the uncertainty-aware mixup, UMix usually ranks in the top three in both average and worst-case accuracy; for example, on Waterbirds it achieves the best average accuracy of 93.0% and the second-best worst-case accuracy of 90.0%.
Table 1: Comparison results with other methods in the group-oblivious setting. The best results are in bold and blue. Full results with standard deviations are in Table 6 in the Appendix.

| Method | Waterbirds Avg. | Waterbirds Worst | CelebA Avg. | CelebA Worst | CivilComments Avg. | CivilComments Worst | Camelyon17 Avg. |
|---|---|---|---|---|---|---|---|
| ERM | 97.0% | 63.7% | 94.9% | 47.8% | 92.2% | 56.0% | 70.3% |
| Focal Loss [40] | 87.0% | 73.1% | 88.4% | 72.1% | 91.2% | 60.1% | 68.1% |
| CVaR-DRO [38] | 90.3% | 77.2% | 86.8% | 76.9% | 89.1% | 62.3% | 70.5% |
| CVaR-DORO [72] | 91.5% | 77.0% | 89.6% | 75.6% | 90.0% | 64.1% | 67.3% |
| $\chi^{2}$-DRO [38] | 88.8% | 74.0% | 87.7% | 78.4% | 89.4% | 64.2% | 68.0% |
| $\chi^{2}$-DORO [72] | 89.5% | 76.0% | 87.0% | 75.6% | 90.1% | 63.8% | 68.0% |
| JTT [41] | 93.6% | 86.0% | 88.0% | 81.1% | 90.7% | 67.4% | 69.1% |
| Ours | 93.0% | 90.0% | 90.1% | 85.3% | 90.6% | 70.1% | 75.1% |
Table 2: Experimental results when the group labels in the validation set are available or not.

| Group labels in validation set? | Waterbirds Avg. ACC | Waterbirds Worst-case ACC | CelebA Avg. ACC | CelebA Worst-case ACC |
|---|---|---|---|---|
| Yes | 93.00% | 90.00% | 90.10% | 85.30% |
| No | 93.60% | 88.90% | 90.40% | 84.60% |
Table 3: Comparison results with the algorithms using training group labels (our method does not depend on this type of information). Results of baseline models are from [70]. The best three results are in bold brown or bold blue, and the color indicates whether the training group labels are used. Full results with standard deviations are in Table 7 in the Appendix.

| Method | Group labels in train set? | Waterbirds Avg. | Waterbirds Worst | CelebA Avg. | CelebA Worst | CivilComments Avg. | CivilComments Worst | Camelyon17 Avg. |
|---|---|---|---|---|---|---|---|---|
| IRM [3] | Yes | 87.5% | 75.6% | 94.0% | 77.8% | 88.8% | 66.3% | 64.2% |
| IB-IRM [1] | Yes | 88.5% | 76.5% | 93.6% | 85.0% | 89.1% | 65.3% | 68.9% |
| V-REx [34] | Yes | 88.0% | 73.6% | 92.2% | 86.7% | 90.2% | 64.9% | 71.5% |
| CORAL [63] | Yes | 90.3% | 79.8% | 93.8% | 76.9% | 88.7% | 65.6% | 59.5% |
| GroupDRO [58] | Yes | 91.8% | 90.6% | 92.1% | 87.2% | 89.9% | 70.0% | 68.4% |
| DomainMix [69] | Yes | 76.4% | 53.0% | 93.4% | 65.6% | 90.9% | 63.6% | 69.7% |
| Fish [60] | Yes | 85.6% | 64.0% | 93.1% | 61.2% | 89.8% | 71.1% | 74.7% |
| LISA [70] | Yes | 91.8% | 89.2% | 92.4% | 89.3% | 89.2% | 72.6% | 77.1% |
| Ours | No | 93.0% | 90.0% | 90.1% | 85.3% | 90.6% | 70.1% | 75.1% |
Q4 Effectiveness (IV). We conduct comparison experiments on Camelyon17 to investigate the effectiveness of our algorithm under domain shift. The results are shown in the last columns of Tables 1 and 3. In the group-oblivious setting, the proposed method achieves the best average accuracy on Camelyon17 (75.1%, while the second best is 70.3%), as shown in Table 1. Meanwhile, in Table 3, benefiting from upweighting the mixed samples with poor performance, our method achieves competitive generalization on Camelyon17 compared with algorithms that use training group labels.
Figure 1: Visualization of the obtained uncertainty with kernel density estimation on the (a) Waterbirds and (b) CelebA datasets, where group size refers to the number of samples in the group.
Q5 Qualitative analysis. To investigate the reasonableness of the estimated uncertainty, we visualize the density of the uncertainty for different groups using kernel density estimation. As shown in Fig. 1, the estimated uncertainties correlate with the training sample size of each group: on both Waterbirds and CelebA, the average uncertainties of minority groups are much higher, while those of majority groups are much lower.
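A sketch of how such a figure can be produced with a Gaussian KDE per group (SciPy/Matplotlib); the exact styling of Fig. 1 is not reproduced and the function name is ours:

```python
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

def plot_uncertainty_density(u, groups):
    """One kernel density estimate of training uncertainties per group."""
    xs = np.linspace(0.0, 1.0, 200)
    for g in np.unique(groups):
        ug = u[groups == g]
        plt.plot(xs, gaussian_kde(ug)(xs),
                 label=f"group {g} (size {ug.size})")
    plt.xlabel("training uncertainty $u_i$")
    plt.ylabel("density")
    plt.legend()
    plt.show()
```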
Q6 Ablation study. Finally, we conduct an ablation study comparing with vanilla mixup and in-group mixup; the results are shown in Table 4. Compared with ERM, vanilla mixup does not significantly improve worst-case accuracy, and in-group mixup, which uses group labels, improves it only slightly. A possible reason is that these mixup-based methods do not increase the influence of minority subpopulations on the training objective. Although our method does not use the group labels of the training samples, it still significantly improves the worst-case accuracy.
Table 4: Comparison with ERM and mixup-based methods. Results of baseline models are from [70]. The best results are in bold brown or bold blue, and the color indicates whether the training group labels are used. Full results with standard deviations are in Table 8 in the Appendix.

| Method | Group labels in train set? | Waterbirds Avg. | Waterbirds Worst | CelebA Avg. | CelebA Worst | CivilComments Avg. | CivilComments Worst | Camelyon17 Avg. |
|---|---|---|---|---|---|---|---|---|
| ERM | No | 97.0% | 63.7% | 94.9% | 47.8% | 92.2% | 56.0% | 70.3% |
| vanilla mixup | No | 81.0% | 56.2% | 95.8% | 46.4% | 90.8% | 67.2% | 71.2% |
| in-group mixup | Yes | 88.7% | 68.0% | 95.2% | 58.3% | 90.8% | 69.2% | 75.5% |
| Ours | No | 93.0% | 90.0% | 90.1% | 85.3% | 90.6% | 70.1% | 75.1% |
## 5 Theory
In this section, we provide a theoretical understanding of the generalization ability of UMix. At a high level, we prove that our method achieves a better generalization error bound than traditional IW methods that do not use mixup. For simplicity, our analysis focuses on generalized linear models (GLMs). The roadmap of our analysis is to first approximate the mixup loss and then study the generalization bound from a Rademacher complexity perspective. To introduce the theoretical framework, we first present the basic settings.
Basic settings. Our analysis focuses on GLM model classes whose loss function $\ell$ has the form $\ell(\theta,x,y)=A(\theta^{\top}x)-y\theta^{\top}x$, where $x\in\mathbb{R}^{d}$ is the input, $\theta\in\mathbb{R}^{d}$ is the parameter, $y\in\mathbb{R}$ is the label, and $A(\cdot)$ is the log-partition function.
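For instance (a standard fact about GLMs, not specific to this paper), logistic regression with $y\in\{0,1\}$ corresponds to
$A(z)=\log(1+e^{z}),\qquad\ell(\theta,x,y)=\log(1+e^{\theta^{\top}x})-y\,\theta^{\top}x,$
with $A^{\prime\prime}(z)=\sigma(z)(1-\sigma(z))\in(0,1/4]$ for the sigmoid $\sigma$.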
Recalling the subpopulation-shift setting, we assume that the population distribution $P$ consists of $G$ subpopulations, where the $g$-th subpopulation has proportion $k_{g}$ and follows the distribution $P_{g}$; that is, $P=\sum_{g=1}^{G}k_{g}P_{g}$. We denote the (uncentered) covariance matrix of the $g$-th subpopulation by $\Sigma_{X}^{g}=\mathbb{E}_{(x,y)\sim P_{g}}[xx^{\top}]$. For simplicity, we consider the case where a shared weight $w_{g}$ is assigned to all samples from the $g$-th subpopulation. The main goal of our theoretical analysis is to characterize the generalization ability of the model learned by Algorithm 1. Formally, we analyze the upper bound of the weighted generalization error
$\displaystyle\operatorname{GError}(\theta)=\mathbb{E}_{(x,y)\sim P}[w(x,y)\ell(\theta,x,y)]-\frac{1}{N}\sum_{i=1}^{N}w(x_{i},y_{i})\ell(\theta,x_{i},y_{i}),$
where the function $w(x,y)$ returns the weight of the subpopulation to which the sample $(x,y)$ belongs.
We first present our main result: a subpopulation-heterogeneity-dependent bound for the above generalization error.
###### Theorem 5.1.
Suppose $A(\cdot)$ is $L_{A}$-Lipschitz continuous. Then there exist constants $L,B>0$ such that for any $\theta$ satisfying $\theta^{\top}\Sigma_{X}\theta\leq\gamma$, the following holds with probability at least $1-\delta$:
$\displaystyle\operatorname{GError}(\theta)\leq 2L\cdot L_{A}\cdot\max\Big\{\Big(\frac{\gamma(\delta/2)}{\rho}\Big)^{1/4},\Big(\frac{\gamma(\delta/2)}{\rho}\Big)^{1/2}\Big\}\cdot\sqrt{{\color[rgb]{.75,0,.25}\frac{\operatorname{rank}(\Sigma_{X})}{n}}}+B\sqrt{\frac{\log(2/\delta)}{2n}},$
where $\gamma(\delta)$ is a constant depending on $\delta$, $\Sigma_{X}=\sum_{g=1}^{G}k_{g}w_{g}\Sigma_{X}^{g}$, and $\rho$ is a constant related to the data distribution, formally introduced in Assumption 5.1.
We will show later that the output of Algorithm 1 satisfies the constraint $\theta^{\top}\Sigma_{X}\theta\leq\gamma$, so Theorem 5.1 provides a theoretical understanding of our algorithm. In contrast to weighted ERM, the improvement of UMix lies in the red term, which partially reflects the heterogeneity of the training subpopulations. Specifically, this term becomes $\sqrt{d/n}$ in the weighted ERM setting (see more detailed theoretical comparisons in the Appendix). Our bound is therefore tighter when the intrinsic dimension of the data is small (i.e., $\operatorname{rank}(\Sigma_{X})\ll d$).
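To illustrate when this matters, the following sketch (with hypothetical dimensions) checks that data lying in an $r$-dimensional subspace yields $\operatorname{rank}(\Sigma_{X})\approx r\ll d$, so $\sqrt{\operatorname{rank}(\Sigma_{X})/n}$ is far smaller than the $\sqrt{d/n}$ term of weighted ERM:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 512, 8, 1000            # ambient dim d, intrinsic dim r << d

# Data lying in an r-dimensional subspace: x = U z with U a d x r basis,
# so the (uncentered) covariance Sigma_X has rank at most r.
U = np.linalg.qr(rng.normal(size=(d, r)))[0]
X = rng.normal(size=(n, r)) @ U.T
Sigma = X.T @ X / n
print(np.linalg.matrix_rank(Sigma, tol=1e-8), "<<", d)   # ~8 << 512
```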
The proof of Theorem 5.1 follows this roadmap: (1) we first show that the model learned with UMix falls into a specific hypothesis set $\mathcal{W}_{\gamma}$; (2) we analyze the Rademacher complexity of this hypothesis set and obtain a complexity upper bound (Lemma A.3); (3) finally, we obtain the generalization bound via complexity-based learning theory [7] (Theorem 8 therein). More details of the proof can be found in the Appendix.
As discussed in the Appendix, compared with the non-mixup algorithm, weighted mixup can be seen as approximately adding the regularization term $\frac{C}{n}[\sum_{i=1}^{n}w_{i}A^{\prime\prime}(x_{i}^{\top}\theta)]\theta^{\top}\widehat{\Sigma}_{X}\theta$ for some constant $C$, which motivates us to study the hypothesis space
$\displaystyle\mathcal{W}_{\gamma}\coloneqq\{x\mapsto\theta^{\top}x\ \text{such that }\theta\text{ satisfies }\mathbb{E}_{x,y}[w(x,y)A^{\prime\prime}(x^{\top}\theta)]\theta^{\top}\Sigma_{X}\theta\leq\gamma\},$
for some constant $\gamma$.
To further derive the generalization bound, we also need the following assumption, which is satisfied by general GLMs when $\theta$ has bounded $\ell_{2}$ norm; it is adopted in, e.g., [4, 76].
###### Assumption 5.1 ($\rho$-retentive).
We say the distribution of $x$ is $\rho$-retentive for some $\rho\in(0,1/2]$ if, for any non-zero vector $v\in\mathbb{R}^{d}$ and given the event that $\theta\in\mathcal{W}_{\gamma}$, where $\theta$ is the output of Algorithm 1, we have
$\displaystyle\mathbb{E}_{x}^{2}[A^{\prime\prime}(x^{\top}v)]\geq\rho\cdot\min\{1,\mathbb{E}_{x}(v^{\top}x)^{2}\}.$
Finally, we derive the Rademacher complexity of $\mathcal{W}_{\gamma}$; the proof of Theorem 5.1 then follows by combining Lemma A.3 with Theorem 8 of [7].
###### Lemma 5.1.
Assume that the distribution of $x_{i}$ is $\rho$-retentive, i.e., satisfies Assumption 5.1. Then the empirical Rademacher complexity of $\mathcal{W}_{\gamma}$ satisfies
$\displaystyle\operatorname{Rad}(\mathcal{W}_{\gamma},\mathcal{S})\leq\max\Big\{\Big(\frac{\gamma(\delta)}{\rho}\Big)^{1/4},\Big(\frac{\gamma(\delta)}{\rho}\Big)^{1/2}\Big\}\cdot\sqrt{\frac{\operatorname{rank}(\Sigma_{X})}{n}},$
with probability at least $1-\delta$.
## 6 Conclusion
In this paper, we propose a novel method called UMix to improve model robustness against subpopulation shift. We propose a simple yet reliable approach to estimate sample uncertainties and integrate them into the mixup strategy, so that UMix mitigates overfitting and thereby improves over prior IW methods. Our method consistently outperforms previous approaches on commonly used benchmarks. Furthermore, UMix enjoys a theoretical advantage: the learned model comes with a subpopulation-heterogeneity-dependent generalization bound. How to leverage subpopulation information to further improve UMix is a promising direction for future work.
## Acknowledgements
This work was supported in part by the National Key Research and Development
Program of China under Grant 2019YFB2101900, the National Natural Science
Foundation of China (61976151, 61925602, 61732011).
## References
* [1] Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. In NeurIPS, 2021.
* [2] Javier Antorán, James Urquhart Allingham, and José Miguel Hernández-Lobato. Depth uncertainty in neural networks. In NeurIPS, 2020.
* [3] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
* [4] Raman Arora, Peter Bartlett, Poorya Mianjy, and Nathan Srebro. Dropout: Explicit forms and capacity control. In ICML, 2021.
* [5] Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the camelyon17 challenge. IEEE transactions on medical imaging, 38(2):550–560, 2018.
* [6] Solon Barocas and Andrew D Selbst. Big data’s disparate impact. Calif. L. Rev., 104:671, 2016.
* [7] Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
* [8] Steffen Bickel, Michael Brückner, and Tobias Scheffer. Discriminative learning for differing training and test distributions. In ICML, 2007.
* [9] Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Nuanced metrics for measuring unintended bias with real data for text classification. In WWW, 2019.
* [10] Jonathon Byrd and Zachary Lipton. What is the effect of importance weighting in deep learning? In ICML, 2019.
* [11] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In NeurIPS, 2019.
* [12] Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, and James Cheng. Invariance principle meets out-of-distribution generalization on graphs. arXiv preprint arXiv:2202.05441, 2022.
* [13] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In CVPR, 2019.
* [14] Didan Deng, Liang Wu, and Bertram E Shi. Iterative distillation for better uncertainty estimates in multitask emotion recognition. In ICCV, 2021.
* [15] John Denker and Yann LeCun. Transforming neural-net output levels to probability distributions. In NeurIPS, 1990.
* [16] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
* [17] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.
* [18] Yonatan Geifman, Guy Uziel, and Ran El-Yaniv. Bias-reduced uncertainty estimation for deep neural classifiers. In ICLR, 2019.
* [19] Yu Geng, Zongbo Han, Changqing Zhang, and Qinghua Hu. Uncertainty-aware multi-view representation learning. In AAAI, 2021.
* [20] Zongbo Han, Changqing Zhang, Huazhu Fu, and Joey Tianyi Zhou. Trusted multi-view classification with dynamic evidential fusion. IEEE TPAMI, 2022.
* [21] Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In ICML, 2018.
* [22] Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew Mingbo Dai, and Dustin Tran. Training independent subnetworks for robust prediction. In ICLR, 2021.
* [23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
* [24] Weihua Hu, Gang Niu, Issei Sato, and Masashi Sugiyama. Does distributionally robust supervised learning give robust classifiers? In ICML, 2018.
* [25] Zhaolin Hu and L Jeff Hong. Kullback-leibler divergence constrained distributionally robust optimization. Available at Optimization Online, pages 1695–1724, 2013.
* [26] Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. Snapshot ensembles: Train 1, get m for free. In ICLR, 2017.
* [27] Jiayuan Huang, Arthur Gretton, Karsten Borgwardt, Bernhard Schölkopf, and Alex Smola. Correcting sample selection bias by unlabeled data. In NeurIPS, 2006.
* [28] Nathalie Japkowicz. The class imbalance problem: Significance and strategies. In IJCAI, 2000.
* [29] Gabriel Kalweit and Joschka Boedecker. Uncertainty-driven imagination for continuous deep reinforcement learning. In Conference on Robot Learning, pages 195–206. PMLR, 2017.
* [30] Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In NeurIPS, 2017.
* [31] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR, 2018.
* [32] Allison Koenecke, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R Rickford, Dan Jurafsky, and Sharad Goel. Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14):7684–7689, 2020.
* [33] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In ICML, 2021.
* [34] David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In ICML, 2021.
* [35] Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, and Ed Chi. Fairness without demographics through adversarially reweighted learning. In NeurIPS, 2020.
* [36] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017.
* [37] Quoc V Le, Alex J Smola, and Stéphane Canu. Heteroscedastic gaussian process regression. In ICML, 2005.
* [38] Daniel Levy, Yair Carmon, John C Duchi, and Aaron Sidford. Large-scale methods for distributionally robust optimization. In NeurIPS, 2020.
* [39] Kevin Li, Abhishek Gupta, Ashwin Reddy, Vitchyr H Pong, Aurick Zhou, Justin Yu, and Sergey Levine. Mural: Meta-learning uncertainty-aware rewards for outcome-driven reinforcement learning. In ICML, 2021.
* [40] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017.
* [41] Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In ICML, 2021.
* [42] Wei Liu and Sanjay Chawla. Class confidence weighted knn algorithms for imbalanced data sets. In Pacific-Asia conference on knowledge discovery and data mining, pages 345–356. Springer, 2011.
* [43] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
* [44] Huan Ma, Zongbo Han, Changqing Zhang, Huazhu Fu, Joey Tianyi Zhou, and Qinghua Hu. Trustworthy multimodal regression with mixture of normal-inverse gamma distributions. NeurIPS, 2021.
* [45] David JC MacKay. A practical bayesian framework for backpropagation networks. Neural computation, 4(3):448–472, 1992.
* [46] Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for bayesian uncertainty in deep learning. In NeurIPS, 2019.
* [47] Paul Michel, Tatsunori Hashimoto, and Graham Neubig. Modeling the second player in distributionally robust optimization. In ICLR, 2021.
* [48] Paul Michel, Tatsunori Hashimoto, and Graham Neubig. Distributionally robust models with parametric likelihood ratios. In ICLR, 2022.
* [49] Microsoft. Neural Network Intelligence, 1 2021.
* [50] John Stuart Mill. Utilitarianism. In Seven masterpieces of philosophy, pages 337–383. Routledge, 2016.
* [51] Jooyoung Moon, Jihyo Kim, Younghak Shin, and Sangheum Hwang. Confidence-aware learning for deep neural networks. In ICML, 2020.
* [52] Hongseok Namkoong and John C Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. In NeurIPS, 2016.
* [53] Radford M Neal. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012.
* [54] David A Nix and Andreas S Weigend. Estimating the mean and variance of the target probability distribution. In ICNN, 1994.
* [55] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
* [56] Vihari Piratla, Praneeth Netrapalli, and Sunita Sarawagi. Focus on the common good: Group distributional robustness follows. In ICLR, 2022.
* [57] John Rawls. Justice as fairness: A restatement. Harvard University Press, 2001.
* [58] Shiori Sagawa*, Pang Wei Koh*, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks. In ICLR, 2020.
* [59] Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of why overparameterization exacerbates spurious correlations. In ICML, 2020.
* [60] Yuge Shi, Jeffrey Seely, Philip HS Torr, N Siddharth, Awni Hannun, Nicolas Usunier, and Gabriel Synnaeve. Gradient matching for domain generalization. arXiv preprint arXiv:2104.09937, 2021.
* [61] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
* [62] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. In NeurIPS, 2019.
* [63] Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In European conference on computer vision, pages 443–450. Springer, 2016.
* [64] Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J. Gordon. An empirical study of example forgetting during deep neural network learning. In ICLR, 2019.
* [65] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In ICML, 2019.
* [66] Junfeng Wen, Chun-Nam Yu, and Russell Greiner. Robust learning under uncertain test distributions: Relating covariate shift to model misspecification. In ICML, 2014.
* [67] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. Huggingface’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019.
* [68] Da Xu, Yuting Ye, and Chuanwei Ruan. Understanding the role of importance weighting for deep learning. In ICLR, 2021.
* [69] Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. Adversarial domain adaptation with domain mixup. In AAAI, 2020.
* [70] Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, and Chelsea Finn. Improving out-of-distribution robustness via selective augmentation. In ICML, 2022.
* [71] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In CVPR, 2019.
* [72] Runtian Zhai, Chen Dan, Zico Kolter, and Pradeep Ravikumar. Doro: Distributional and outlier robust optimization. In ICML, 2021.
* [73] Runtian Zhai, Chen Dan, Zico Kolter, and Pradeep Ravikumar. Understanding why generalized reweighting does not improve over erm. arXiv preprint arXiv:2201.12293, 2022.
* [74] Runtian Zhai, Chen Dan, Arun Suggala, J Zico Kolter, and Pradeep Ravikumar. Boosted cvar classification. In NeurIPS, 2021.
* [75] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018.
* [76] Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, and James Zou. How does mixup help with robustness and generalization? In ICLR, 2021.
## Checklist
1. 1.
For all authors…
1. (a)
Do the main claims made in the abstract and introduction accurately reflect
the paper’s contributions and scope? [Yes]
2. (b)
Did you describe the limitations of your work? [Yes] See Sec. D in Appendix.
3. (c)
Did you discuss any potential negative societal impacts of your work? [Yes]
See Sec. D in Appendix.
4. (d)
Have you read the ethics review guidelines and ensured that your paper
conforms to them? [Yes]
2. 2.
If you are including theoretical results…
1. (a)
Did you state the full set of assumptions of all theoretical results? [Yes]
See Sec. 5.
2. (b)
Did you include complete proofs of all theoretical results? [Yes] See Sec. A
in Appendix.
3. 3.
If you ran experiments…
1. (a)
Did you include the code, data, and instructions needed to reproduce the main
experimental results (either in the supplemental material or as a URL)? [Yes]
Code has been released.
2. (b)
Did you specify all the training details (e.g., data splits, hyperparameters,
how they were chosen)? [Yes] See Sec. B in Appendix.
3. (c)
Did you report error bars (e.g., with respect to the random seed after running
experiments multiple times)? [Yes] See Sec. B in Appendix.
4. (d)
Did you include the total amount of compute and the type of resources used
(e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Sec. B in
Appendix.
4. 4.
If you are using existing assets (e.g., code, data, models) or
curating/releasing new assets…
1. (a)
If your work uses existing assets, did you cite the creators? [Yes]
2. (b)
Did you mention the license of the assets? [Yes]
3. (c)
Did you include any new assets either in the supplemental material or as a
URL? [No]
4. (d)
Did you discuss whether and how consent was obtained from people whose data
you’re using/curating? [No] The datasets used are all publicly available
datasets.
5. (e)
Did you discuss whether the data you are using/curating contains personally
identifiable information or offensive content? [No] The datasets used are all
publicly available datasets.
5. 5.
If you used crowdsourcing or conducted research with human subjects…
1. (a)
Did you include the full text of instructions given to participants and
screenshots, if applicable? [N/A] We didn’t conduct research with human
subjects.
2. (b)
Did you describe any potential participant risks, with links to Institutional
Review Board (IRB) approvals, if applicable? [N/A] We didn’t conduct research
with human subjects.
3. (c)
Did you include the estimated hourly wage paid to participants and the total
amount spent on participant compensation? [N/A] We didn’t conduct research
with human subjects.
Appendix
###### Contents
1. 1 Introduction
2. 2 Related Work
1. 2.1 Importance weighting
2. 2.2 Uncertainty quantification
3. 3 Method
1. 3.1 Background
2. 3.2 Importance-weighted mixup
3. 3.3 Uncertainty-aware importance weights
4. 4 Experiments
1. 4.1 Setup
2. 4.2 Experimental results
5. 5 Theory
6. 6 Conclusion
7. A Proofs
8. B Experimental details
1. B.1 Backbone model
2. B.2 Datasets details
3. B.3 Implementation details
4. B.4 Uncertainty quantification results on simulated dataset
5. B.5 Training accuracy throughout training
6. B.6 Additional results
9. C Justification for choosing historical-based uncertainty score
10. D Societal impact and limitations
1. D.1 Societal impact
2. D.2 Limitations and future works
## Appendix A Proofs
In this appendix, we prove Theorem 5.1 of Section 5. We consider the following optimization objective, which is the expected version of our weighted mixup loss (Eq. 4):
$\displaystyle L_{n}^{\text{mix}}(\theta,S)=\frac{1}{n^{2}}\sum^{n}_{i,j=1}\mathbb{E}_{\lambda\sim D_{\lambda}}[\lambda w_{i}l(\theta,\tilde{x}_{i,j},y_{i})+(1-\lambda)w_{j}l(\theta,\tilde{x}_{i,j},y_{j})],$
where the loss function is $l(\theta,x,y)=h(f_{\theta}(x))-yf_{\theta}(x)$, with $h(\cdot)$ and $f_{\theta}(\cdot)$ twice differentiable for all $\theta\in\Theta$. We compare it with the standard weighted loss function
$\displaystyle L_{n}^{std}(\theta,S)=\frac{1}{n}\sum_{i=1}^{n}w_{i}[h(f_{\theta}(x_{i}))-y_{i}f_{\theta}(x_{i})].$
###### Lemma A.1.
The weighted mixup loss can be rewritten as
$\displaystyle
L_{n}^{mix}(\theta,S)=L_{n}^{std}(\theta,S)+\sum_{i=1}^{3}\mathcal{R}_{i}(\theta,S)+\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}\left[(1-\lambda)^{2}\varphi(1-\lambda)\right],$
where $\tilde{\mathcal{D}}_{\lambda}$ is a mixture of two Beta distributions, i.e., $\frac{\alpha}{\alpha+\beta}Beta(\alpha+1,\beta)+\frac{\beta}{\alpha+\beta}Beta(\beta+1,\alpha)$, and $\varphi(\cdot)$ is some function with $\lim_{a\rightarrow 0}\varphi(a)=0$.
Moreover,
$\displaystyle\mathcal{R}_{1}(\theta,S)$
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[1-\lambda]}{n}\sum_{i=1}^{n}w_{i}\left(h^{\prime}\left(f_{\theta}\left(x_{i}\right)\right)-y_{i}\right)\nabla
f_{\theta}\left(x_{i}\right)^{\top}\mathbb{E}_{r_{x}\sim\mathcal{D}_{X}}\left[r_{x}-x_{i}\right]$
$\displaystyle\mathcal{R}_{2}(\theta,S)$
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}\left[(1-\lambda)^{2}\right]}{2n}\sum_{i=1}^{n}w_{i}h^{\prime\prime}\left(f_{\theta}\left(x_{i}\right)\right)\nabla
f_{\theta}\left(x_{i}\right)^{\top}\mathbb{E}_{r_{x}\sim\mathcal{D}_{X}}\left[\left(r_{x}-x_{i}\right)\left(r_{x}-x_{i}\right)^{\top}\right]\nabla
f_{\theta}\left(x_{i}\right)$ $\displaystyle\mathcal{R}_{3}(\theta,S)$
$\displaystyle=\frac{\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}\left[(1-\lambda)^{2}\right]}{2n}\sum_{i=1}^{n}w_{i}\left(h^{\prime}\left(f_{\theta}\left(x_{i}\right)\right)-y_{i}\right)\mathbb{E}_{r_{x}\sim\mathcal{D}_{X}}\left[\left(r_{x}-x_{i}\right)\nabla^{2}f_{\theta}\left(x_{i}\right)\left(r_{x}-x_{i}\right)^{\top}\right].$
###### Proof.
The corresponding mixup version is
$\displaystyle L_{n}^{\text{mix}}(\theta,S)$
$\displaystyle=\frac{1}{n^{2}}\mathbb{E}_{\lambda\sim
Beta(\alpha,\beta)}\sum_{i,j=1}^{n}[\lambda
w_{i}h(f_{\theta}(\tilde{x}_{i,j}(\lambda)))-\lambda w_{i}y_{i}$
$\displaystyle\qquad\qquad\qquad\qquad\qquad+(1-\lambda)w_{j}h(f_{\theta}(\tilde{x}_{i,j}(\lambda)))-(1-\lambda)w_{j}y_{j}]$
$\displaystyle=\frac{1}{n^{2}}\mathbb{E}_{\lambda\sim
Beta(\alpha,\beta)}\mathbb{E}_{B\sim
Bern(\lambda)}\sum_{i,j=1}^{n}[w_{i}B(h(f_{\theta}(\tilde{x}_{i,j}))-y_{i})$
$\displaystyle\qquad\qquad\qquad\qquad\qquad+w_{j}(1-B)(h(f_{\theta}(\tilde{x}_{i,j}))-y_{j})]$
$\displaystyle=\frac{1}{n^{2}}\sum_{i,j=1}^{n}\Big\{\frac{\alpha}{\alpha+\beta}\mathbb{E}_{\lambda\sim Beta(\alpha+1,\beta)}w_{i}[h(f_{\theta}(\tilde{x}_{i,j}))-y_{i}]$
$\displaystyle\qquad\qquad\qquad\qquad\qquad+\frac{\beta}{\alpha+\beta}\mathbb{E}_{\lambda\sim Beta(\alpha,\beta+1)}w_{j}[h(f_{\theta}(\tilde{x}_{i,j}))-y_{j}]\Big\}$
$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}w_{i}\mathbb{E}_{\lambda\sim\tilde{D}_{\lambda}}\mathbb{E}_{r_{x}\sim D_{x}^{w}}[h(f_{\theta}(\lambda x_{i}+(1-\lambda)r_{x}))-y_{i}f_{\theta}(\lambda x_{i}+(1-\lambda)r_{x})]$
$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}w_{i}\mathbb{E}_{\lambda\sim\tilde{D}_{\lambda}}l_{\check{x}_{i},y_{i}}(\theta),$
where $D_{x}^{w}=\frac{1}{n}\sum_{i=1}^{n}w_{i}\delta_{x_{i}}$ and $\check{x}_{i}=\lambda x_{i}+(1-\lambda)r_{x}$.
We let $a=1-\lambda$ (written $a$ to avoid clashing with the Beta parameter $\alpha$) and $\psi_{i}(a)=l_{\check{x}_{i},y_{i}}(\theta)$. Since $\psi_{i}$ is twice differentiable, we have
$\displaystyle l_{\check{x}_{i},y_{i}}(\theta)=\psi_{i}(a)=\psi_{i}(0)+\psi_{i}^{\prime}(0)a+\frac{1}{2}\psi_{i}^{\prime\prime}(0)a^{2}+a^{2}\varphi_{i}(a).$
By the proof of Lemma 3.1 in [76] we know
$\displaystyle\psi_{i}^{\prime}(0)$
$\displaystyle=\left(h^{\prime}\left(f_{\theta}\left(x_{i}\right)\right)-y_{i}\right)\nabla
f_{\theta}\left(x_{i}\right)^{\top}\left(r_{x}-x_{i}\right),$
$\displaystyle\psi_{i}^{\prime\prime}(0)$
$\displaystyle=h^{\prime\prime}\left(f_{\theta}\left(x_{i}\right)\right)\nabla
f_{\theta}\left(x_{i}\right)^{\top}\left(r_{x}-x_{i}\right)\left(r_{x}-x_{i}\right)^{\top}\nabla
f_{\theta}\left(x_{i}\right)$
$\displaystyle\quad+\left(h^{\prime}\left(f_{\theta}\left(x_{i}\right)\right)-y_{i}\right)\left(r_{x}-x_{i}\right)^{\top}\nabla^{2}f_{\theta}\left(x_{i}\right)\left(r_{x}-x_{i}\right).$
∎
###### Lemma A.2.
Consider a centralized dataset, i.e., $\frac{1}{n}\sum_{i=1}^{n}x_{i}=0$. Then we have
$\displaystyle\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}[L_{n}^{mix}(\theta,\tilde{S})]\approx
L_{n}^{std}(\theta,S)+\frac{1}{2n}[\sum_{i=1}^{n}w_{i}A^{\prime\prime}(x_{i}^{\top}\theta)]\mathbb{E}_{\lambda\sim\tilde{\mathcal{D}}_{\lambda}}(\frac{(1-\lambda)^{2}}{\lambda^{2}})\theta^{\top}\widehat{\Sigma}_{X}\theta,$
where $\widehat{\Sigma}_{X}=\frac{1}{n}\sum_{i=1}^{n}w_{i}x_{i}x_{i}^{\top}$,
and the expectation is taken with respect to the randomness of $\lambda$.
###### Proof.
For a GLM, the prediction is invariant to rescaling of the training data, so we consider the rescaled dataset $\tilde{S}=\{(\tilde{x}_{i},y_{i})\}_{i=1}^{n}$ with $\tilde{x}_{i}=\frac{1}{\lambda}(\lambda x_{i}+(1-\lambda)r_{x})$. For a GLM, the mixed standard loss function is
$\displaystyle L_{n}^{std}(\theta,\tilde{S})=\frac{1}{n}\sum_{i=1}^{n}w_{i}l_{\check{x}_{i},y_{i}}(\theta)=\frac{1}{n}\sum_{i=1}^{n}-w_{i}(y_{i}\tilde{x}_{i}^{\top}\theta-A(\tilde{x}_{i}^{\top}\theta)).$
By the proof of Lemma 3.3 in [76], taking the expectation with respect to the randomness of $\lambda$ and $r_{x}$ yields the following second-order approximation of the GLM loss:
$\displaystyle\mathbb{E}[L_{n}^{std}(\theta,\tilde{S})]\approx L_{n}^{std}(\theta,S)+\frac{1}{2n}[\sum_{i=1}^{n}w_{i}A^{\prime\prime}(x_{i}^{\top}\theta)]\mathbb{E}(\frac{(1-\lambda)^{2}}{\lambda^{2}})\theta^{\top}\widehat{\Sigma}_{X}\theta,$
where $\widehat{\Sigma}_{X}=\frac{1}{n}\sum_{i=1}^{n}w_{i}x_{i}x_{i}^{\top}$.
∎
###### Lemma A.3.
Assume that the distribution of $x_{i}$ is $\rho$-retentive, i.e., it satisfies Assumption 5.1. Then the empirical Rademacher complexity of $\mathcal{W}_{r}$ satisfies
$\displaystyle\operatorname{Rad}(\mathcal{W}_{r},\mathcal{S})\leq\max\left\{\left(\frac{\gamma(\delta)}{\rho}\right)^{1/4},\left(\frac{\gamma(\delta)}{\rho}\right)^{1/2}\right\}\cdot\sqrt{\frac{\operatorname{rank}(\Sigma_{X})}{n}},$
with probability at least $1-\delta$ for some constant $\gamma(\delta)$ that
only depends on $\delta$.
###### Proof.
The proof mainly follows [76]. By definition, given $n$ i.i.d. Rademacher random variables $\xi_{1},\ldots,\xi_{n}$, the empirical Rademacher complexity is
$\operatorname{Rad}\left(\mathcal{W}_{\gamma},S\right)=\mathbb{E}_{\xi}\sup_{a(\theta)\cdot\theta^{\top}\Sigma_{X}\theta\leq\gamma}\frac{1}{n}\sum_{i=1}^{n}\xi_{i}\theta^{\top}x_{i}.$
Let $\tilde{x}_{i}=\Sigma_{X}^{\dagger/2}x_{i}$, $a(\theta)=\mathbb{E}_{x}\left[A^{\prime\prime}\left(x^{\top}\theta\right)\right]$, and $v=\Sigma_{X}^{1/2}\theta$. The $\rho$-retentiveness condition implies
$a(\theta)^{2}\geq\rho\cdot\min\left\{1,\mathbb{E}_{x}\left(\theta^{\top}x\right)^{2}\right\}\geq\rho\cdot\min\left\{1,\theta^{\top}\Sigma_{X}\theta\right\},$
and therefore $a(\theta)\cdot\theta^{\top}\Sigma_{X}\theta\leq\gamma$ implies that
$\|v\|^{2}=\theta^{\top}\Sigma_{X}\theta\leq\max\left\{\left(\frac{\gamma}{\rho}\right)^{1/2},\frac{\gamma}{\rho}\right\}.$
As a result,
$\displaystyle\operatorname{Rad}\left(\mathcal{W}_{\gamma},S\right)$
$\displaystyle=\mathbb{E}_{\xi}\sup_{a(\theta)\cdot\theta^{\top}\Sigma_{X}\theta\leq\gamma}\frac{1}{n}\sum_{i=1}^{n}\xi_{i}\theta^{\top}x_{i}$
$\displaystyle=\mathbb{E}_{\xi}\sup_{a(\theta)\cdot\theta^{\top}\Sigma_{X}\theta\leq\gamma}\frac{1}{n}\sum_{i=1}^{n}\xi_{i}v^{\top}\tilde{x}_{i}$
$\displaystyle\leq\mathbb{E}_{\xi}\sup_{\|v\|^{2}\leq\left(\frac{\gamma}{\rho}\right)^{1/2}\vee\frac{\gamma}{\rho}}\frac{1}{n}\sum_{i=1}^{n}\xi_{i}v^{\top}\tilde{x}_{i}$
$\displaystyle\leq\frac{1}{n}\cdot\left(\frac{\gamma}{\rho}\right)^{1/4}\vee\left(\frac{\gamma}{\rho}\right)^{1/2}\cdot\mathbb{E}_{\xi}\left\|\sum_{i=1}^{n}\xi_{i}\tilde{x}_{i}\right\|$
$\displaystyle\leq\frac{1}{n}\cdot\left(\frac{\gamma}{\rho}\right)^{1/4}\vee\left(\frac{\gamma}{\rho}\right)^{1/2}\cdot\sqrt{\mathbb{E}_{\xi}\left\|\sum_{i=1}^{n}\xi_{i}\tilde{x}_{i}\right\|^{2}}\qquad\text{(by Jensen's inequality)}$
$\displaystyle=\frac{1}{n}\cdot\left(\frac{\gamma}{\rho}\right)^{1/4}\vee\left(\frac{\gamma}{\rho}\right)^{1/2}\cdot\sqrt{\sum_{i=1}^{n}\tilde{x}_{i}^{\top}\tilde{x}_{i}},$
where the last equality uses $\mathbb{E}[\xi_{i}\xi_{j}]=\delta_{ij}$.
Consequently, taking the expectation over the draw of the sample and using Jensen's inequality once more,
$\displaystyle\operatorname{Rad}\left(\mathcal{W}_{\gamma}\right):=\mathbb{E}_{S}\left[\operatorname{Rad}\left(\mathcal{W}_{\gamma},S\right)\right]$
$\displaystyle\leq\frac{1}{n}\cdot\left(\frac{\gamma}{\rho}\right)^{1/4}\vee\left(\frac{\gamma}{\rho}\right)^{1/2}\cdot\sqrt{\sum_{i=1}^{n}\mathbb{E}_{x_{i}}\left[\tilde{x}_{i}^{\top}\tilde{x}_{i}\right]}$
$\displaystyle=\frac{1}{\sqrt{n}}\cdot\left(\frac{\gamma}{\rho}\right)^{1/4}\vee\left(\frac{\gamma}{\rho}\right)^{1/2}\cdot\sqrt{\operatorname{rank}\left(\Sigma_{X}\right)},$
since $\mathbb{E}_{x_{i}}[\tilde{x}_{i}^{\top}\tilde{x}_{i}]=\operatorname{tr}(\Sigma_{X}^{\dagger/2}\Sigma_{X}\Sigma_{X}^{\dagger/2})=\operatorname{rank}(\Sigma_{X})$.
Based on this bound on the Rademacher complexity, Theorem 5.1 follows by directly applying Theorem 8 of [7]. ∎
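For intuition, the empirical Rademacher complexity of the ball $\{\|v\|\leq R\}$ appearing in the proof can be estimated by Monte Carlo, using the identity $\sup_{\|v\|\leq R}v^{\top}z=R\|z\|$. A minimal sketch (our illustration, with synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, R = 500, 20, 1.0
X = rng.normal(size=(n, d))              # rows play the role of x_tilde_i

def rad_hat(X, R, n_mc=2000):
    """Monte Carlo estimate of E_xi sup_{||v|| <= R} (1/n) sum_i xi_i v^T x_i."""
    n = X.shape[0]
    xi = rng.choice([-1.0, 1.0], size=(n_mc, n))   # Rademacher draws
    sums = xi @ X                                  # each row: sum_i xi_i x_i
    return R / n * np.linalg.norm(sums, axis=1).mean()

print(rad_hat(X, R))   # compare with the sqrt(rank(Sigma_X) / n) scale above
```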
## Appendix B Experimental details
In this section, we present the experimental setup in detail. Specifically, we describe the backbone model for each dataset in Sec. B.1, the datasets in detail in Sec. B.2, the implementation details in Sec. B.3, uncertainty quantification results on a simulated dataset in Sec. B.4, the training accuracy of different subpopulations throughout the training process in Sec. B.5, and additional results in Sec. B.6.
### B.1 Backbone model
Within each dataset, we keep the same model architecture as in previous work [70]: ResNet-50 [23] for Waterbirds and CelebA, DistilBERT [16] for CivilComments, and DenseNet-121 for Camelyon17. For ResNet-50, we use the PyTorch [55] implementation pre-trained on ImageNet. For DistilBERT, we employ the HuggingFace [67] implementation and start from the pre-trained weights. As in previous work [70], we train DenseNet-121 without pretraining. A minimal sketch of how such backbones can be instantiated is given below.
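The following sketch (our illustration, not the authors' released code) shows how backbones of this kind can be instantiated with torchvision and HuggingFace Transformers; flag names such as `pretrained` may differ across library versions.

```python
import torch.nn as nn
import torchvision.models as tvm
from transformers import AutoModelForSequenceClassification

def resnet50_backbone(num_classes: int = 2) -> nn.Module:
    model = tvm.resnet50(pretrained=True)          # ImageNet-pretrained weights
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def densenet121_backbone(num_classes: int = 2) -> nn.Module:
    model = tvm.densenet121(pretrained=False)      # trained from scratch
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

def distilbert_backbone(num_classes: int = 2):
    # starts from the pre-trained DistilBERT weights with a fresh head
    return AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=num_classes)
```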
### B.2 Datasets details
We describe the datasets used in the experiments in detail and summarize the
datasets in Table 4.
* •
WaterBirds [58]. The task of this dataset is to distinguish whether a bird is a waterbird or a landbird. According to the background and the label of an image, this dataset has four predefined subpopulations, i.e., “landbirds on land”, “landbirds on water”, “waterbirds on land”, and “waterbirds on water”. In the training set, the largest subpopulation is “landbirds on land” with 3,498 samples, while the smallest subpopulation is “waterbirds on land” with only 56 samples.
* •
CelebA [43]. CelebA is a well-known large-scale face dataset. As in previous works [58, 41], we employ this dataset to predict the color of the human hair
as “blond” or “not blond”. There are four predefined subpopulations based on
gender and hair color, i.e., “dark hair, female”, “dark hair, male”, “blond
hair, female” and “blond hair, male” with 71,629, 66,874, 22,880, and 1,387
training samples respectively.
* •
CivilComments [9]. For this dataset, the task is to classify whether an online
comment is toxic or not, where according to the demographic identities (e.g.,
Female, Male, and White) and labels, 16 overlapping subpopulations can be
defined. We use 269,038, 45,180, and 133,782 samples as training, validation,
and test datasets respectively.
* •
Camelyon17 [5, 33]. Camelyon17 is a pathological image dataset with over 450,000 lymph-node scans, used to determine whether a patch contains cancer tissue. The training data is drawn from three hospitals, while the validation and test data are sampled from other hospitals. However, due to different staining methods, even samples from the same hospital have different distributions. Therefore, we cannot obtain reliable subpopulation labels for Camelyon17.
Table 4: Summary of the datasets used in the experiments. Datasets | Labels | Groups | Population type | Data type | Backbone model
---|---|---|---|---|---
Waterbirds | 2 | 2 | Label×Group | Image | ResNet-50
CelebA | 2 | 2 | Label×Group | Image | ResNet-50
CivilComments | 2 | 8 | Label×Group | Text | DistilBERT-uncased
Camelyon17 | 2 | 5 | Group | Image | DenseNet-121
### B.3 Implementation details
In this section, we present the implementation details of all approaches. We implement our method in the codebase released with the WILDS datasets [33]. For some comparative methods, including ERM, IRM [3], IB-IRM [1], V-REx [34], CORAL [63], Group DRO [58], DomainMix [69], Fish [60], LISA [70], vanilla mixup, and in-group mixup, we directly use the results reported in previous work [70]. For JTT [41], we directly report the results from the paper on the Waterbirds and CelebA datasets; on the CivilComments dataset, where a different backbone model is employed, we reimplement the algorithm for a fair comparison. As with the proposed method, we reimplement the other methods in the codebase released with the WILDS datasets. We employ vanilla mixup on the WaterBirds and Camelyon17 datasets; on the CelebA and CivilComments datasets, we employ cutmix [71] and manifoldmix [65], respectively. For all approaches, we tune all hyperparameters with the AutoML toolkit NNI [49] based on validation performance (a sketch is given after Table 5). We then run each experiment multiple times with different seeds on a machine with 8 Tesla V100 GPUs to obtain the average performance and standard deviation. The selected hyperparameters for Algorithm 1 and Algorithm 2 are listed in Table 5.
Table 5: Hyperparameter settings for Algorithm 1 and Algorithm 2.
| WaterBirds | CelebA | CivilComments | Camelyon17
---|---|---|---|---
Learning rate | 1e-5 | 1e-4 | 5e-5 | 1e-5
Weight decay | 1 | 1e-4 | 1e-4 | 1e-2
Batch size | 64 | 128 | 128 | 32
Optimizer | SGD | SGD | AdamW | SGD
Hyperparameter $\alpha$ | 0.5 | 1.5 | 0.5 | 0.5
Hyperparameter $\sigma$ | 0.5 | 0.5 | 1 | 1
Maximum Epoch | 300 | 20 | 10 | 5
(a) Hyperparameter settings for Algorithm 1.
| WaterBirds | CelebA | CivilComments | Camelyon17
---|---|---|---|---
Learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-3
Weight decay | 1 | 1e-1 | 1e-2 | 1e-2
Batch size | 64 | 128 | 128 | 32
Optimizer | SGD | SGD | AdamW | SGD
Start epoch $T_{s}$ | 50 | 0 | 0 | 0
Sampling epoch $T$ | 50 | 5 | 5 | 5
Hyperparameter $\eta$ | 80 | 50 | 3 | 5
(b) Hyperparameter settings for Algorithm 2.
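For completeness, the hyperparameter tuning with NNI described in Sec. B.3 can be sketched as follows. This is our illustration of a typical NNI trial script, not the authors' released code; the training loop is a placeholder, and the search-space keys merely mirror Table 5.

```python
import nni

def train_and_evaluate(lr, weight_decay, alpha, sigma):
    # placeholder standing in for one full run of Algorithm 1;
    # should return the validation metric used for model selection
    return 0.0

# defaults mirroring Table 5; the NNI tuner overrides them per trial
params = {"lr": 1e-5, "weight_decay": 1e-2, "alpha": 0.5, "sigma": 0.5}
params.update(nni.get_next_parameter())
nni.report_final_result(train_and_evaluate(**params))
```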
### B.4 Uncertainty quantification results on simulated dataset
We conduct a toy experiment to show that the uncertainty quantification works well on a dataset with subpopulation shift. Specifically, we construct a four-moons dataset (i.e., a dataset with four subpopulations) as shown in Fig. 2. We compare our approximation (i.e., Eq. 6) with the following ensemble-based approximation:
$u_{i}\approx\frac{1}{T}\sum_{t=1}^{T}\kappa(y_{i},\hat{f}_{\theta_{t}}(x_{i})),\qquad\theta_{t}\sim p(\theta;\mathcal{D}).$
(8)
Specifically, we train $T$ models and then ensemble them; a minimal sketch is given below. The quantification results are shown in Fig. 3. We observe that (1) the proposed historical-based uncertainty quantification method works well on the simulated dataset; and (2) compared with the ensemble-based method, the proposed method better characterizes the subpopulation shift.
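A minimal sketch of the ensemble baseline in Eq. (8) is given below (our illustration). Here $\kappa$ is taken to be the predicted probability of the true label, which is an assumption on our part.

```python
import torch

@torch.no_grad()
def ensemble_uncertainty(models, x, y):
    """models: list of T trained classifiers; x: (B, ...) inputs;
    y: (B,) integer labels. Returns per-sample uncertainty in [0, 1]."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])   # (T, B, C)
    kappa = probs[:, torch.arange(y.numel()), y]                  # (T, B)
    return 1.0 - kappa.mean(dim=0)        # high value = high uncertainty
```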
Figure 2: Simulated dataset with four different subpopulations. Groups 0 and 2 share one label, and Groups 1 and 3 share the other.
(a) Ours
(b) Ensemble
Figure 3: Visualization of the obtained uncertainty with kernel density
estimation on simulated dataset, where group size refers to the sample number
of the group.
### B.5 Training accuracy throughout training
We show in Fig. 4 how the training accuracy changes during training on the CelebA and Waterbirds datasets, to empirically illustrate why the proposed estimation approach works. From the experimental results, we observe that, during training, easy groups with sufficient samples are fitted well early, whereas hard, under-represented groups are not. For example, on the CelebA dataset, Group 0 and Group 1, with about 72K and 67K training samples, quickly reach over 95% accuracy. The accuracy on Group 2, which has about 23K training samples, increases more slowly and finally reaches around 84%. The accuracy on Group 3, which has only about 1K training samples, is the lowest. Meanwhile, on the Waterbirds dataset, the samples of the hard-to-classify group (e.g., Group 1) are also more likely to be forgotten by the neural network.
(a) CelebA
(b) Waterbirds
Figure 4: Training accuracy over the course of training on different groups of the CelebA and Waterbirds datasets.
### B.6 Additional results
In this section, we present the full results with standard deviations in Tables 6, 7, and 8.
Table 6: Full comparison results with other methods in the group-oblivious setting, where NA indicates that the standard deviation is not available in the original paper [41]. The best results are in bold blue. | Waterbirds | CelebA
---|---|---
| Avg. | Worst | Avg. | Worst
ERM | 97.0 ± 0.2% | 63.7 ± 1.9% | 94.9 ± 0.2% | 47.8 ± 3.7%
Focal Loss [40] | 87.0 ± 0.5% | 73.1 ± 1.0% | 88.4 ± 0.3% | 72.1 ± 3.8%
CVaR-DRO [38] | 90.3 ± 1.2% | 77.2 ± 2.2% | 86.8 ± 0.7% | 76.9 ± 3.1%
CVaR-DORO [72] | 91.5 ± 0.7% | 77.0 ± 2.8% | 89.6 ± 0.4% | 75.6 ± 4.2%
$\chi^{2}$-DRO [38] | 88.8 ± 1.5% | 74.0 ± 1.8% | 87.7 ± 0.3% | 78.4 ± 3.4%
$\chi^{2}$-DORO [72] | 89.5 ± 2.0% | 76.0 ± 3.1% | 87.0 ± 0.6% | 75.6 ± 3.4%
JTT [41] | 93.6 ± (NA)% | 86.0 ± (NA)% | 88.0 ± (NA)% | 81.1 ± (NA)%
Ours | 93.0 ± 0.5% | 90.0 ± 1.1% | 90.1 ± 0.4% | 85.3 ± 4.1%
| CivilComments | Camelyon17
| Avg. | Worst | Avg.
ERM | 92.2 ± 0.1% | 56.0 ± 3.6% | 70.3 ± 6.4%
Focal Loss [40] | 91.2 ± 0.5% | 60.1 ± 0.7% | 68.1 ± 4.8%
CVaR-DRO [38] | 89.1 ± 0.4% | 62.3 ± 0.7% | 70.5 ± 5.1%
CVaR-DORO [72] | 90.0 ± 0.4% | 64.1 ± 1.4% | 67.3 ± 7.2%
$\chi^{2}$-DRO [38] | 89.4 ± 0.7% | 64.2 ± 1.3% | 68.0 ± 6.7%
$\chi^{2}$-DORO [72] | 90.1 ± 0.5% | 63.8 ± 0.8% | 68.0 ± 7.5%
JTT [41] | 90.7 ± 0.3% | 67.4 ± 0.5% | 69.1 ± 6.4%
Ours | 90.6 ± 0.4% | 70.1 ± 0.9% | 75.1 ± 5.9%
Table 7: Full comparison results with algorithms that use training group labels (our method does not depend on this type of information). Results of baseline models are from [70]. The best three results are in bold brown or bold blue, and the color indicates whether the training group labels are used. | Group labels | Waterbirds | CelebA
---|---|---|---
| in train set? | Avg. | Worst | Avg. | Worst
IRM | Yes | 87.5 ± 0.7% | 75.6 ± 3.1% | 94.0 ± 0.4% | 77.8 ± 3.9%
IB-IRM | Yes | 88.5 ± 0.6% | 76.5 ± 1.2% | 93.6 ± 0.3% | 85.0 ± 1.8%
V-REx | Yes | 88.0 ± 1.0% | 73.6 ± 0.2% | 92.2 ± 0.1% | 86.7 ± 1.0%
CORAL | Yes | 90.3 ± 1.1% | 79.8 ± 1.8% | 93.8 ± 0.3% | 76.9 ± 3.6%
GroupDRO | Yes | 91.8 ± 0.3% | 90.6 ± 1.1% | 92.1 ± 0.4% | 87.2 ± 1.6%
DomainMix | Yes | 76.4 ± 0.3% | 53.0 ± 1.3% | 93.4 ± 0.1% | 65.6 ± 1.7%
Fish | Yes | 85.6 ± 0.4% | 64.0 ± 0.3% | 93.1 ± 0.3% | 61.2 ± 2.5%
LISA | Yes | 91.8 ± 0.3% | 89.2 ± 0.6% | 92.4 ± 0.4% | 89.3 ± 1.1%
Ours | No | 93.0 ± 0.5% | 90.0 ± 1.1% | 90.1 ± 0.4% | 85.3 ± 4.1%
| Group labels | CivilComments | Camelyon17
| in train set? | Avg. | Worst | Avg.
IRM | Yes | 88.8 ± 0.7% | 66.3 ± 2.1% | 64.2 ± 8.1%
IB-IRM | Yes | 89.1 ± 0.3% | 65.3 ± 1.5% | 68.9 ± 6.1%
V-REx | Yes | 90.2 ± 0.3% | 64.9 ± 1.2% | 71.5 ± 8.3%
CORAL | Yes | 88.7 ± 0.5% | 65.6 ± 1.3% | 59.5 ± 7.7%
GroupDRO | Yes | 89.9 ± 0.5% | 70.0 ± 2.0% | 68.4 ± 7.3%
DomainMix | Yes | 90.9 ± 0.4% | 63.6 ± 2.5% | 69.7 ± 5.5%
Fish | Yes | 89.8 ± 0.4% | 71.1 ± 0.4% | 74.7 ± 7.1%
LISA | Yes | 89.2 ± 0.9% | 72.6 ± 0.1% | 77.1 ± 6.5%
Ours | No | 90.6 ± 0.5% | 70.1 ± 0.9% | 75.1 ± 5.9%
Table 8: Full comparison with ERM and mixup based methods. Results of baseline models are from [70]. The best results are in bold brown or bold blue and the color indicates whether the train group label is used. | Group labels | Waterbirds | CelebA
---|---|---|---
| in train set? | Avg. | Worst | Avg. | Worst
ERM | No | 97.0 ± 0.2% | 63.7 ± 1.9% | 94.9 ± 0.2% | 47.8 ± 3.7%
vanilla mixup | No | 81.0 ± 0.2% | 56.2 ± 0.2% | 95.8 ± 0.0% | 46.4 ± 0.5%
in-group mixup | Yes | 88.7 ± 0.3% | 68.0 ± 0.4% | 95.2 ± 0.3% | 58.3 ± 0.9%
Ours | No | 93.0 ± 0.5% | 90.0 ± 1.1% | 90.1 ± 0.4% | 85.3 ± 4.1%
| Group labels | CivilComments | Camelyon17
| in train set? | Avg. | Worst | Avg.
ERM | No | 92.2 ± 0.1% | 56.0 ± 3.6% | 70.3 ± 6.4%
vanilla mixup | No | 90.8 ± 0.8% | 67.2 ± 1.2% | 71.2 ± 5.3%
in-group mixup | Yes | 90.8 ± 0.6% | 69.2 ± 0.8% | 75.5 ± 6.7%
Ours | No | 90.6 ± 0.5% | 70.1 ± 0.9% | 75.1 ± 5.9%
## Appendix C Justification for choosing historical-based uncertainty score
We employ information from the historical training trajectory to approximate the sampling process because it is simple and effective in practice. In contrast to typical uncertainty quantification methods such as Bayesian learning or model ensembles [17, 36], which need to sample or save multiple DNN models and run inference on all of them, our method significantly reduces the computational and memory-storage cost by reusing information already produced along the training trajectory; a minimal sketch is given below. Meanwhile, our method achieves quite promising final accuracy compared with these alternatives. In summary, we choose an uncertainty score that achieves satisfactory performance while being more memory- and computation-efficient.
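A minimal sketch of this history-based alternative (our illustration; the class name, the choice of $\kappa$ as the true-label probability, and the update schedule are all assumptions):

```python
import torch

class HistoryUncertainty:
    """Accumulates kappa(y_i, f_theta_t(x_i)) along the training trajectory,
    so no extra models need to be stored or re-evaluated afterwards."""

    def __init__(self, n_samples: int):
        self.kappa_sum = torch.zeros(n_samples)
        self.n_updates = torch.zeros(n_samples)

    @torch.no_grad()
    def update(self, idx, logits, y):
        # idx: dataset indices of the current minibatch
        kappa = logits.softmax(-1)[torch.arange(y.numel()), y]
        self.kappa_sum[idx] += kappa.cpu()
        self.n_updates[idx] += 1

    def uncertainty(self):
        # 1 - running mean of kappa; high value = high uncertainty
        return 1.0 - self.kappa_sum / self.n_updates.clamp(min=1)
```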
## Appendix D Societal impact and limitations
### D.1 Societal impact
Algorithmic fairness and justice are closely related to our work. Philosophically, there are two different views on justice. Jeremy Bentham held that “the greatest good for the greatest number” can be seen as justice [50]; ERM can be considered to inherit this spirit, as it pays more attention to minimizing the risks of the majority subpopulations. In contrast, Rawlsian distributive justice [57] argues that we should maximize the welfare of the worst-off group. The proposed method and other IW-based methods can be seen as the practice of Rawlsian distributive justice, since they focus more on the minority subpopulations. In practice, however, the proposed method and other IW-based methods may sacrifice the average accuracy. Therefore, practitioners using the proposed method need to consider carefully what fairness and justice mean in their social context when deciding whether to sacrifice average accuracy in order to improve worst-case accuracy.
### D.2 Limitations and future works
Even though the proposed method achieves excellent performance, it still has some potential limitations. (1) Like other IW-based methods, the proposed method may sacrifice the average accuracy; it is therefore important and valuable to analyze this phenomenon theoretically and to explore novel ways of improving the worst-case accuracy without sacrificing the average accuracy in future work. (2) Although our method does not require training-set group labels, leveraging unreliable subpopulation information (e.g., noisy subpopulation labels) to improve UMix would be a promising research topic; for example, when such labels are available, UMix could be combined with existing importance-weighting methods. (3) As in previous IW-based methods, label noise is not considered, which may lead to over-focusing on noisy samples; distinguishing minority samples from mislabeled samples under subpopulation shift remains a challenging open problem. (4) At the same time, this work only considers subpopulation shifts on Euclidean data, so generalizing IW-based methods to graph-structured data, under the guidance of an invariance principle on graphs such as that of [12], is also a promising future direction. We leave these as important future works.
# Coleman-Gurtin type equations with dynamic boundary conditions
Ciprian G. Gal1 and Joseph L. Shomberg2
1Department of Mathematics, Florida International University, Miami, FL 33199, USA
2Department of Mathematics and Computer Science, Providence College, Providence, RI 02918, USA
###### Abstract.
We present a new formulation and generalization of the classical theory of heat conduction with or without fading memory, which includes the usual heat equation subject to a dynamic boundary condition as a special case. We investigate the well-posedness of systems consisting of Coleman-Gurtin type equations subject to dynamic boundary conditions, also with memory. Nonlinear terms are defined in the interior of the domain and on the boundary, and are subject to either classical dissipation assumptions or to a nonlinear balance condition in the sense of [11]. Additionally, we do not assume that the interior and the boundary share the same memory kernel.
###### Key words and phrases:
Coleman-Gurtin equation, dynamic boundary conditions with memory, heat
conduction, heat equations.
###### 2000 Mathematics Subject Classification:
35B25, 35B40, 35B41, 35K57, 37L30, 45K05.
###### Contents
1. Introduction
2. Derivation of the model equations
3. Past history formulation and functional setup
4. Variational formulation and well-posedness
## 1\. Introduction
In recent years there has been an explosive growth in theoretical results concerning dissipative infinite-dimensional systems with memory, including models arising in the theory of heat conduction in special materials and the theory of phase transitions. The mathematical and physical literature,
concerned primarily with qualitative/quantitative properties of solutions to
these models, is quite extensive and much of the work before 2002 is largely
referenced in the survey paper by Grasselli and Pata [19]. More recent results
and updates can be found in [7, 8, 9, 10] (cf. also [16, 17]). A basic evolution equation considered in these references is that of a homogeneous and isotropic heat conductor occupying a $d$-dimensional (bounded) domain $\Omega$ with sufficiently smooth boundary $\Gamma=\partial\Omega$; it reads
$\partial_{t}u-\omega\Delta
u-\left(1-\omega\right)\int_{0}^{\infty}k_{\Omega}\left(s\right)\Delta
u\left(x,t-s\right)ds+f\left(u\right)=0,$ (1.1)
in $\Omega\times\left(0,\infty\right).$ Here $u=u\left(t\right)$ is the (absolute) temperature distribution, $\omega>0$, $r=-f\left(u\left(t\right)\right)$ is a temperature-dependent heat supply, and $k_{\Omega}:[0,\infty)\rightarrow\mathbb{R}$ is a continuous, nonnegative, summable function which is smooth on $(0,\infty)$ and vanishes at infinity. As usual, (1.1) is derived from the energy balance equation
$\partial_{t}e+\text{div}\left(q\right)=r$
together with the constitutive relationships:
$e=e_{\infty}+c_{0}u,\text{ }q=-\omega\nabla
u-\left(1-\omega\right)\int_{0}^{\infty}k_{\Omega}\left(s\right)\nabla
u\left(x,t-s\right)ds,$ (1.2)
for some constants $e_{\infty},c_{0}>0$. Equation (1.1) is always subject to
either homogeneous Dirichlet ($u=0$) or Neumann boundary conditions
($\partial_{n}u=0$) on $\Gamma\times\left(0,\infty\right)$. The first one
asserts that the temperature is kept constant and close to a given reference
temperature at $\Gamma$ for all time $t>0$, while the second “roughly” states
that the system is thermally isolated from outside interference. This equation
is also usually supplemented by the “initial” condition
$\widetilde{u}:(-\infty,0]\rightarrow\mathbb{R}$ such that
$u_{\mid t\in(-\infty,0]}=\widetilde{u}\text{ in }\Omega.$ (1.3)
These choices of boundary conditions, although they substantially simplify the mathematical analysis of (1.1)-(1.3), are actually debatable in practice, since in many such systems it is usually difficult, if not impossible, to keep the temperature constant at $\Gamma$ for all positive times without exerting some additional kind of control at $\Gamma$ for $t>0$. A matter of principle also
arises for thermally isolated systems in which, in fact, the correct physical
boundary condition for (1.1) turns out to be the following
$q\cdot
n=\omega\partial_{n}u+\left(1-\omega\right)\int_{0}^{\infty}k_{\Omega}\left(s\right)\partial_{n}u\left(x,t-s\right)ds=0\text{
on }\Gamma\times\left(0,\infty\right)\text{,}$ (1.4)
see, for instance, [5, Section 6]. Indeed, the condition $\partial_{n}u=0$ on
$\Gamma\times\left(0,\infty\right)$ implies (1.4), say when $u$ is a
sufficiently smooth solution of (1.1)-(1.3), but clearly the converse cannot
hold in general.
In the classical theory of heat conduction, it is common to model a wide range
of diffusive phenomena including heat propagation in homogeneous isotropic
conductors, but generally it is assumed, as above, that surface (i.e.,
boundary) conditions are completely static or stationary. In some important
cases this perspective neglects the contribution of boundary sources to the
total heat content of the conductor. A first step to remedy this situation was taken by Goldstein [18] for heat equations. The approach presented there introduces dynamic boundary conditions in an _ad hoc_ fashion and lacks some rigor in the case of reaction-diffusion equations. In the next section of the
paper we will make use of the usual physical principles and present a new
formulation and generalization of the classical theory. Our general approach
follows that of Coleman and Mizel [5] which regards the second law of
thermodynamics as included among the laws of physics and which is compatible
with the principle of equipresence in the sense of Truesdell and Toupin (see
Section 2). Thus, this new formulation is expected to give a solid foundation
to the arguments employed in derivations of the heat equation with “dynamic”
boundary conditions developed in Goldstein [18], or in models for phase
transitions developed in Gal and Grasselli [13, 14]. Accounting for the
presence of boundary sources, the new formulation naturally leads to dynamic
boundary conditions for the temperature function $u$ and that contain the
above static conditions (especially, (1.4)) as special cases (see Section 2).
In particular, we derive on $\Gamma\times\left(0,\infty\right),$ the following
boundary condition for (1.1):
$\displaystyle\partial_{t}u-\nu\Delta_{\Gamma}u+\omega\partial_{n}u+g\left(u\right)$
$\displaystyle+\left(1-\omega\right)\int_{0}^{\infty}k_{\Omega}\left(s\right)\partial_{n}u\left(x,t-s\right)ds+\left(1-\nu\right)\int_{0}^{\infty}k_{\Gamma}\left(s\right)\left(-\Delta_{\Gamma}+\beta\right)u\left(x,t-s\right)ds$
(1.5) $\displaystyle=0,$
for some $\nu\in\left(0,1\right)$ and $\beta>0$. Here
$k_{\Gamma}:[0,\infty)\rightarrow\mathbb{R}$ is also a smooth nonnegative,
summable function over $(0,\infty)$ such that $k_{\Gamma}$ is vanishing at
infinity. The last two boundary terms on the left-hand side of equation (1.5)
are due to contributions coming from a (linear) heat exchange rate between the
bulk $\Omega$ and the boundary $\Gamma$, and boundary fluxes, respectively
(cf. Section 2).
Our goal in this paper is to extend the previous well-posedness results of [7,
8, 9, 10, 19, 16, 17] and [11, 12, 15] in the following directions:
* •
by allowing general boundary processes to take place also on $\Gamma$, equation (1.1) is now subject to boundary conditions of the form (1.5);
* •
we consider more general functions $f,g\in C^{1}\left(\mathbb{R}\right)$
satisfying either classical dissipation assumptions, or more generally,
nonlinear balance conditions allowing for bad behavior of $f,g$ at infinity;
* •
we develop a general framework allowing for both weak and smooth initial data for (1.1), (1.5), and possibly _different_ memory functions $k_{\Omega},k_{\Gamma}$;
* •
we extend a Galerkin approximation scheme whose explicit construction is
crucial for the existence of strong solutions.
The paper is organized as follows. In Section 2, we present a rigorous formulation, together with examples in which (1.5) naturally occurs for (1.1). In Section 3, we provide the functional setup. In Section 4, we prove theorems concerning the well-posedness of the system, based on (1.1), (1.5), generated by the new formulation.
## 2\. Derivation of the model equations
To begin, let us consider a bounded domain $\Omega\subset\mathbb{R}^{d}$ which is occupied by a rigid body. The region $\Omega$ is bounded by a boundary $\Gamma:=\partial\Omega$ which is assumed to be at least Lipschitz continuous. As usual, a thermodynamic process taking place in
$\Omega$ is defined by five basic functions, that is, the specific internal
energy $e_{\Omega}\left(x,t\right)$, the specific entropy
$\eta_{\Omega}=\eta_{\Omega}\left(x,t\right)$, the heat flux
$q=q\left(x,t\right)$, the absolute temperature $u=u\left(x,t\right)>0$ and
the heat supply $h_{\Omega}\left(x,t\right)\,$, absorbed by the material at
$x\in\Omega,$ and possibly furnished by the external world (i.e.,
thermodynamic processes that occur outside of $\Omega$). All these quantities,
defined per unit volume and unit time, are scalars except for
$q\in\mathbb{R}^{d}$ which is a vector. The classical theory [4, 5] of heat
conduction in the body $\Omega$ ignores any heat contribution which may be
supplied from processes taking place on $\Gamma$ and, hence, this situation is
never modelled by the theory. This is the case in many applications in calorimetry, going back to problems that occurred as early as the mid-1950s; see [3, Chapter I, Section 1.9, pg. 22-24]. A typical example arises when a
given body $\Omega$ is in perfect thermal contact with a thin metal sheet $\Gamma=\partial\Omega$, possibly made of a different material, which completely insulates the body $\Omega$ from direct contact with, say, a well-stirred hot or cold fluid.
The assumption made is that the metal sheet $\Gamma$ is sufficiently thin such
that the temperature $v\left(t\right)$ at any point on $\Gamma$ is constant
across its thickness. Since the sheet $\Gamma$ is in contact with a fluid it
will either heat or cool the body $\Omega$ in which case the heat supplied to
$\Omega$ is due to both $\Gamma$ and the body of fluid, not to mention the
fact that the temperature distribution in the sheet is also affected by heat
transfer between $\Gamma$ and the interior $\Omega$. Since the outershell
$\Gamma$ is in perfect contact with the body $\Omega$, it is reasonable to
assume by continuity that the temperature distribution $u\left(t\right)$ in
$\Omega,$ in an infinitesimal layer near $\Gamma$ is equal to
$v\left(t\right)$, for all times $t>\delta$, that is,
$u\left(t\right)_{\mid\Gamma}=v\left(t\right)$ for all $t>\delta$; they need
not, of course, be equal at $t=\delta$, where $\delta$ is the (initial)
starting time. When $\rho_{1},$ $\rho_{2}$ correspond to the densities of
$\Omega$ and $\Gamma$, respectively, and $c_{1},c_{2}$ denote the heat
capacities of $\Omega$ and $\Gamma,$ respectively, this example can be
modelled by the balance equation
$\rho_{1}c_{1}\partial_{t}u=-\text{div}\left(q\right)+h_{\Omega}\text{ in
}\Omega\times\left(\delta,\infty\right),$ (2.1)
suitably coupled with an equation for $\Gamma$, by considering the heat
balance of an element of area of the sheet $\Gamma$, which is
$\rho_{2}c_{2}\partial_{t}u=q\cdot
n-\text{div}_{\Gamma}\left(q_{\Gamma}\right)+l_{\Gamma}\text{ in
}\Gamma\times\left(\delta,\infty\right).$ (2.2)
Here $n\in\mathbb{R}^{d}$ denotes the exterior unit normal vector to $\Gamma$, $l_{\Gamma}\left(x,t\right)$ is an external heat supply and $q_{\Gamma}$ is a tangential heat flux on $\Gamma$, while $\text{div}_{\Gamma}$ is the surface divergence whose definition is given below. Note that the correct term to couple the balance
equations for $\Omega$ and $\Gamma$ is given by $q\cdot n$, since this is used
to quantify a (linear) heat exchange rate across $\Gamma$ from $\Omega$ in all
directions normal to the boundary $\Gamma$. The system (2.1)-(2.2) is also
important in control problems for the heat equation, say when a specific
temperature distribution at the boundary $\Gamma$ is desired (see [21]).
As mentioned earlier, in the classical theory of heat conduction one usually
ignores boundary contributions by either prescribing the temperature on
$\Gamma$ or assuming that the flux across the surface $\Gamma$ from $\Omega$
is null, or simply, by invoking Newton’s law of cooling which states that the
flux across the surface is directly proportional to temperature differences
between the surface and the surrounding medium. In the sequel, it is our goal
to include general boundary processes into the classical theory of heat
conduction. To this end, in order to define a complete thermodynamic process
in $\overline{\Omega}=\Omega\cup\Gamma$, as in the previous example, we need
to add four more response functions, that is, the specific surface energy
$e_{\Gamma}\left(x,t\right),$ the specific surface entropy density
$\eta_{\Gamma}\left(x,t\right)$, the tangential heat flux
$q_{\Gamma}=q_{\Gamma}\left(x,t\right)\in\mathbb{R}^{d-1}$, and the external
heat supply $h_{\Gamma}\left(x,t\right),$ all defined for $x\in\Gamma,$ per
unit area and unit time. It is assumed that the absolute (local) temperature
$u\left(\cdot,t\right)$ is sufficiently smooth up to $\overline{\Omega}$ as a
function of the spatial coordinate. We now introduce the following definition.
* •
We say that the set of nine time-dependent variables constitutes a _complete_
_thermodynamic process_ in $\overline{\Omega}$ if the following conservation
law holds, not only for $\overline{\Omega}$, but also for any subdomain
$\Omega_{0}\subset\Omega$ and any part $\Gamma_{0}\subset\Gamma$:
$\int_{\Omega}\overset{\centerdot}{e}_{\Omega}dx+\int_{\Gamma}\overset{\centerdot}{e}_{\Gamma}d\sigma=-\int_{\Omega}\text{div}\left(q\right)dx-\int_{\Gamma}\text{div}_{\Gamma}\left(q_{\Gamma}\right)d\sigma+\int_{\Omega}h_{\Omega}dx+\int_{\Gamma}h_{\Gamma}d\sigma.$
(2.3)
In (2.3), $dx$ denotes the volume element, $d\sigma$ is the element of surface
area and the superimposed dot denotes the time-derivative. Note that in
general, the external heat supply $h_{\Gamma}$ on $\Gamma$ must also depend,
possibly in a nonlinear fashion, on the heat content exchanged across $\Gamma$
from $\Omega$, i.e., $h_{\Gamma}=f\left(q\cdot n\right)+l_{\Gamma}$, for some
function $f$, where $l_{\Gamma}$ accounts either for the heat supply coming
solely from $\Gamma$ or some other source outside of $\Gamma$, see the above
example (2.1)-(2.2). In order to give a rigorous definition of $\text{div}_{\Gamma}\left(q_{\Gamma}\right),$ we regard $\Gamma$ as a compact Riemannian manifold without boundary, endowed with the natural metric inherited
from $\mathbb{R}^{d}$, given in local coordinates by $\mathbf{\tau}$ and with
fundamental form $\left(\mathbf{\tau}_{ij}\right)_{i,j=1,...,d-1}$. A scalar-
valued function $w\in C^{\infty}\left(\Gamma\right)$ induces an element of the
dual space of $T_{x}\Gamma$ via the directional derivative of tangential
vectors at $x\in\Gamma$. Clearly, $T_{x}\Gamma$ is a Hilbert space when
endowed with scalar product induced from $\mathbb{R}^{d}$. For a tangential
vector field $q_{\Gamma}\in C^{\infty}\left(\Gamma\right),$ that is,
$q_{\Gamma}\left(x\right)\in T_{x}\Gamma,$ for $x\in\Gamma,$ the surface divergence $\text{div}_{\Gamma}\left(q_{\Gamma}\right)$ is given, in the local coordinates $\mathbf{\tau}$ for $\Gamma,$ by
$\text{div}_{\Gamma}q_{\Gamma}\left(\mathbf{\tau}\right)=\frac{1}{\sqrt{\left|\mathbf{\tau}\right|}}\sum_{i=1}^{d-1}\partial_{i}(\sqrt{\left|\mathbf{\tau}\right|}q_{i}\left(\mathbf{\tau}\right)),$
where $q_{i}$ are the components of $q_{\Gamma}$ with respect to the basis
$\left\\{\partial_{1}\mathbf{\tau,...,\partial}_{d-1}\mathbf{\tau}\right\\}$
of $T_{x}\Gamma$ and
$\left|\mathbf{\tau}\right|=\det\left(\mathbf{\tau}_{ij}\right)$. Moreover, we
can define the surface gradient $\nabla_{\Gamma}u$ as a unique element of
$T_{x}\Gamma$ corresponding to this dual space element via a natural
isomorphism, that is,
$\nabla_{\Gamma}u\left(\mathbf{\tau}\right)=\sum_{i,j=1}^{d-1}\mathbf{\tau}_{ij}\partial_{j}u\left(\mathbf{\tau}\right)\partial_{i}\mathbf{\tau,}$
with respect to the canonical basis
$\left\\{\partial_{1}\mathbf{\tau,...,\partial}_{d-1}\mathbf{\tau}\right\\}$
of $T_{x}\Gamma$. For a multi-index $\alpha\in\mathbb{N}_{0}^{m}$, the
operator $\nabla_{\Gamma}^{\alpha}u$ is defined by taking iteratively the
components of $\nabla_{\Gamma}u.$ It is worth emphasizing that our form of the
first law (2.3) is equivalent to
$\overset{\centerdot}{e}_{\Omega}=-\text{div}\left(q\right)+h_{\Omega}\text{
in \ }\Omega\text{, and
}\overset{\centerdot}{e}_{\Gamma}=-\text{div}_{\Gamma}\left(q_{\Gamma}\right)+h_{\Gamma}\text{
on }\Gamma,$ (2.4)
under suitable smoothness assumptions on the response functions involved in
(2.4). Equation (2.3) may be called the law of conservation of total energy or
the _extended_ First Law of Thermodynamics. For each such complete
thermodynamic process, let us define the total rate of production of entropy
in $\overline{\Omega}=\Omega\cup\Gamma$ to be
$\Upsilon:=\int_{\Omega}\overset{\centerdot}{\eta}_{\Omega}dx+\int_{\Gamma}\overset{\centerdot}{\eta}_{\Gamma}d\sigma-\int_{\Omega}\frac{h_{\Omega}}{u}dx+\int_{\Omega}\text{div}\left(\frac{q}{u}\right)dx+\int_{\Gamma}\text{div}_{\Gamma}\left(\frac{q_{\Gamma}}{u}\right)d\sigma-\int_{\Gamma}\frac{h_{\Gamma}}{u}d\sigma,$
(2.5)
where we regard $q/u$ as a vectorial flux of entropy in $\Omega$,
$h_{\Omega}/u$ as a scalar supply of entropy produced by radiation from inside
the body $\Omega$, $h_{\Gamma}/u$ is viewed as a scalar supply of entropy
produced by radiation from $\Gamma$ and $q_{\Gamma}/u$ is a tangential flux of
entropy on $\Gamma$. More precisely, we define $\Upsilon$ to be the difference
between the total rate of change in entropy of $\overline{\Omega}$ and that
rate of change which comes from the heat supplies in both $\Omega$ and
$\Gamma$, and both the inward and tangential fluxes. We postulate the following extended version of the Second Law of Thermodynamics.
* •
For every complete thermodynamic process in $\overline{\Omega}$ the inequality
$\Upsilon\geq 0$ (2.6)
must hold for all $t$, not only in $\overline{\Omega}$, but also on all subdomains $\Omega_{0}\subset\Omega$ and all parts $\Gamma_{0}\subset\Gamma$, respectively. (When (2.6) holds on all parts $\Omega_{0}\subset\Omega$, it is understood that all the boundary integrals in (2.5) drop out; in the same fashion, when (2.6) is satisfied for all parts $\Gamma_{0}\subset\Gamma$, the bulk integrals are omitted from the definition of $\Upsilon$.) For
obvious reasons, we will refer to the inequality $\Upsilon\geq 0$ as the
_extended_ Clausius-Duhem inequality. Finally, a complete thermodynamic
process is said to be _admissible_ in $\overline{\Omega}$ if it is compatible
with a set of constitutive conditions given on the response functions
introduced above, at each point of $\overline{\Omega}$ and at all times $t$.
Of course, for the postulate to hold, the various response functions must obey
some restrictions, including the usual ones which are consequences of the
_classical_ Clausius-Duhem inequality. In particular, the entropy
$\eta_{\Omega}$ at each point $x\in\Omega$ must be determined only by a
function of the specific internal energy $e_{\Omega},$ and the temperature $u$
at $x\in\Omega$ is determined only by a relation involving $e_{\Omega}$ and
$\eta_{\Omega}$. More precisely, it turns out that for the postulate to hold
on any $\Omega_{0}\subset\Omega$, both the internal energy $e_{\Omega}$ and
the entropy function $\eta_{\Omega}$ must be constitutively independent of any
higher-order stress tensors $\nabla^{\gamma}u$ for any $\gamma\geq 1$, such
that they are only functions of the local temperature, i.e., it follows that
$e_{\Omega}=e_{\Omega}\left(u\right)\text{ and
}\eta_{\Omega}=\eta_{\Omega}\left(u\right),$ (2.7)
respectively, cf. [5, Theorem 1, pg. 251]. Indeed, our postulate implies that
the local form of the second law must hold also on any subdomain $\Omega_{0}$
of $\Omega$; this implies that
$\gamma_{\Omega}:=\left(\overset{\centerdot}{\eta}_{\Omega}-\frac{h_{\Omega}}{u}+\text{div}\left(\frac{q}{u}\right)\right)\geq
0\text{ in }\Omega$ (2.8)
and
$\gamma_{\Gamma}:=\left(\overset{\centerdot}{\eta}_{\Gamma}-\frac{h_{\Gamma}}{u}+\text{div}_{\Gamma}\left(\frac{q_{\Gamma}}{u}\right)\right)\geq
0\text{ on }\Gamma.$ (2.9)
From [5], we know that $\gamma_{\Omega}\geq 0$ in the body $\Omega$ if and
only if
$q\cdot\nabla u\leq 0,$ (2.10)
for all values $u$, $\nabla u$, $\ldots$, $\nabla^{\gamma}u$, with $q=q\left(u,\nabla
u,\nabla^{2}u,...,\nabla^{\gamma}u\right)$. This inequality is called the heat
conduction inequality in $\Omega$. In fact, this inequality was established in
[20] under more general constitutive assumptions on $\eta_{\Omega},q$ and
$e_{\Omega}$, excluding memory effects, as functionals of the entropy field
over the entire body $\Omega$ at the same time.
We now find a necessary and sufficient set of restrictions on the remaining
functions $\eta_{\Gamma},$ $e_{\Gamma}$, $q_{\Gamma}$. As in [5], we assume a
formulation of constitutive equations to be compatible with the Principle of
Equipresence in the sense of Truesdell and Toupin [26, pg. 293], which
basically states that “a variable present as an independent variable in one
constitutive equation should be so present in all”. In the present
formulation, the material at $x\in\Gamma$ is characterized by the response
functions $\widehat{\eta}_{\Gamma},$ $\widehat{e}_{\Gamma}$ and
$\widehat{q}_{\Gamma},$ which give the functions
$\eta_{\Gamma}\left(x,t\right),$ $e_{\Gamma}\left(x,t\right)$ and
$q_{\Gamma}\left(x,t\right)$, respectively, when the values
$\nabla_{\Gamma}^{j}u\left(x,t\right)$ are known for $j=0,1,2,...,\alpha.$
Dropping the hats for the sake of convenience and by force of this principle,
we assume that
$\displaystyle e_{\Gamma}$
$\displaystyle=e_{\Gamma}\left(u,\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right),$
(2.11) $\displaystyle\eta_{\Gamma}$
$\displaystyle=\eta_{\Gamma}\left(u,\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right),$
(2.12) $\displaystyle q_{\Gamma}$
$\displaystyle=q_{\Gamma}\left(u,\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right).$
(2.13)
Furthermore, we assume that for any fixed values of $\nabla_{\Gamma}^{j}u,$ the response function $e_{\Gamma}$ is smooth in the first variable $u$, with $\frac{\partial e_{\Gamma}}{\partial u}\left(u,\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right)\neq 0.$ This implies that there exist new response functions, say
$\widetilde{\eta}_{\Gamma},$ $\widetilde{e}_{\Gamma}$ and
$\widetilde{q}_{\Gamma},$ which can be used to write (2.11)-(2.13) in the
following form:
$\displaystyle u$
$\displaystyle=\widetilde{u}\left(e_{\Gamma},\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right),$
(2.14) $\displaystyle\eta_{\Gamma}$
$\displaystyle=\widetilde{\eta}_{\Gamma}\left(e_{\Gamma},\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right),$
(2.15) $\displaystyle q_{\Gamma}$
$\displaystyle=\widetilde{q}_{\Gamma}\left(e_{\Gamma},\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right).$
(2.16)
For each fixed values of the tensors $\nabla_{\Gamma}^{j}u,$
$j=0,1,2,...,\alpha$, the variable
$\widetilde{u}\left(\cdot,\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right)$
is determined through the inverse function of $e_{\Gamma},$ given by (2.11),
such that $\widetilde{\eta}_{\Gamma}$ and $\widetilde{q}_{\Gamma}$ are defined
by
$\displaystyle\widetilde{\eta}_{\Gamma}\left(e_{\Gamma},\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right)$
$\displaystyle=\eta_{\Gamma}\left(\widetilde{u}\left(e_{\Gamma},\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right),\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right),$
$\displaystyle\widetilde{q}_{\Gamma}\left(e_{\Gamma},\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right)$
$\displaystyle=q_{\Gamma}\left(\widetilde{u}\left(e_{\Gamma},\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right),\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right).$
Note that with $u\left(x,t\right)$ specified for all $x$ and $t$, equations
(2.11)-(2.13) give $\eta_{\Gamma}\left(x,t\right),$
$e_{\Gamma}\left(x,t\right)$ and $q_{\Gamma}\left(x,t\right),$ for all $x$ and
$t,$ in which case the local form of the First Law (see also (2.4)) determines
also $h_{\Gamma}$. In particular, every temperature distribution
$u\left(x,t\right)>0$ with $x$ varying over $\Gamma$, determines a unique
complete thermodynamic process in $\Gamma$. By a standard argument in [5, pg.
249], in (2.11)-(2.13) we may regard not only
$e_{\Gamma},\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u$
as independent variables, but also their time-derivatives
$\overset{\centerdot}{e}_{\Gamma}$, $\nabla_{\Gamma}\overset{\centerdot}{u},$
$\nabla_{\Gamma}^{2}\overset{\centerdot}{u},...,$
$\nabla_{\Gamma}^{\alpha}\overset{\centerdot}{u},$ to form a set of quantities
which can be chosen independently at one fixed point $x\in\Gamma$ and time.
For each complete thermodynamic process in $\overline{\Omega}$, the second
energy balance equation (2.4) allows us to write (2.9) as
$\gamma_{\Gamma}=\overset{\centerdot}{\eta}_{\Gamma}-\frac{h_{\Gamma}}{u}+\text{div}_{\Gamma}\left(\frac{q_{\Gamma}}{u}\right)=\overset{\centerdot}{\eta}_{\Gamma}-\frac{\overset{\centerdot}{e}_{\Gamma}}{u}+q_{\Gamma}\cdot\nabla_{\Gamma}\left(\frac{1}{u}\right).$
(2.17)
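For the reader's convenience, the second equality in (2.17) can be checked in one line; the following short computation (our addition) combines the surface product rule with the local First Law on $\Gamma$ from (2.4), i.e., $h_{\Gamma}=\overset{\centerdot}{e}_{\Gamma}+\text{div}_{\Gamma}(q_{\Gamma})$:

```latex
% Surface product rule, then substitute h_Gamma from (2.4):
\operatorname{div}_{\Gamma}\!\Big(\frac{q_{\Gamma}}{u}\Big)
  = \frac{1}{u}\,\operatorname{div}_{\Gamma}(q_{\Gamma})
    + q_{\Gamma}\cdot\nabla_{\Gamma}\Big(\frac{1}{u}\Big),
\qquad\text{so}\qquad
-\frac{h_{\Gamma}}{u}+\operatorname{div}_{\Gamma}\!\Big(\frac{q_{\Gamma}}{u}\Big)
  = -\frac{\overset{\centerdot}{e}_{\Gamma}}{u}
    + q_{\Gamma}\cdot\nabla_{\Gamma}\Big(\frac{1}{u}\Big).
```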
Since $q_{\Gamma}$ and $\eta_{\Gamma}$ must be given by (2.16) and (2.15), at
any point $\left(x,t\right),$ we have
$\overset{\centerdot}{\eta}_{\Gamma}=\frac{\partial\widetilde{\eta}_{\Gamma}}{\partial
e_{\Gamma}}\overset{\centerdot}{e}_{\Gamma}+\sum\nolimits_{j=1}^{\alpha}\left(\frac{\partial\widetilde{\eta}_{\Gamma}}{\partial
u_{,l_{1}l_{2}...l_{j}}}\right)\overset{\centerdot}{u}_{,l_{1}l_{2}...l_{j}},$
where the summation convention is used and where in local coordinates of
$\Gamma$,
$u_{,l_{1}l_{2}...l_{j}}=(\nabla_{\Gamma}^{j}u)_{l_{1}l_{2}...l_{j}}$. It
follows that
$\gamma_{\Gamma}=\left(\frac{\partial\widetilde{\eta}_{\Gamma}}{\partial
e_{\Gamma}}-\frac{1}{\widetilde{u}}\right)\overset{\centerdot}{e}_{\Gamma}+\sum\nolimits_{j=1}^{\alpha}\left(\frac{\partial\widetilde{\eta}_{\Gamma}}{\partial
u_{,l_{1}l_{2}...l_{j}}}\right)\overset{\centerdot}{u}_{,l_{1}l_{2}...l_{j}}-\frac{1}{u^{2}}\widetilde{q}_{\Gamma}\cdot\nabla_{\Gamma}u.$
(2.18)
In order for $\gamma_{\Gamma}\geq 0$ to hold on $\Gamma$ (but also on all
parts $\Gamma_{0}\subset\Gamma$), according to (2.9) and our postulate, it is
necessary and sufficient that
$\frac{\partial\widetilde{\eta}_{\Gamma}}{\partial
e_{\Gamma}}=\frac{1}{\widetilde{u}},\text{ and
}\frac{\partial\widetilde{\eta}_{\Gamma}}{\partial
u_{,l_{1}l_{2}...l_{j}}}=0,\text{ }j=1,2,...,\alpha.$ (2.19)
It follows from (2.19) that the functions $\widetilde{\eta}_{\Gamma}$ and
$\widetilde{u}_{\Gamma}$ from (2.14)-(2.15) cannot depend on
$\nabla_{\Gamma}u$, $\nabla_{\Gamma}^{2}u$, $...$,
$\nabla_{\Gamma}^{\alpha}u$, and they must reduce to functions of the scalar
variable $e_{\Gamma}$ only, i.e.,
$\eta_{\Gamma}=\widetilde{\eta}_{\Gamma}\left(e_{\Gamma}\right),$
$u=\widetilde{u}_{\Gamma}\left(e_{\Gamma}\right)$. These functions must also obey the first equation of (2.19); hence, the variables $\nabla_{\Gamma}u$, $\nabla_{\Gamma}^{2}u$, $...$, $\nabla_{\Gamma}^{\alpha}u$ must also be dropped from equations (2.14) and (2.15) to get
$e_{\Gamma}=e_{\Gamma}\left(u\right)\text{ and
}\eta_{\Gamma}=\eta_{\Gamma}\left(u\right).$ (2.20)
Consequently, with this reduction we observe that (2.18) becomes
$\gamma_{\Gamma}=-\frac{1}{u^{2}}\widetilde{q}_{\Gamma}\cdot\nabla_{\Gamma}u,$
for all temperature fields $u>0$ and $q_{\Gamma}$ given by (2.16). Thus, in
order to have $\gamma_{\Gamma}\geq 0$ on $\Gamma$, it is necessary and
sufficient that $\widetilde{q}_{\Gamma}\cdot\nabla_{\Gamma}u\leq 0,$ or
equivalently,
$q_{\Gamma}\left(u,\nabla_{\Gamma}u,\nabla_{\Gamma}^{2}u,...,\nabla_{\Gamma}^{\alpha}u\right)\cdot\nabla_{\Gamma}u\leq
0,$ (2.21)
for all values $u$, $\nabla_{\Gamma}u$, $\nabla_{\Gamma}^{2}u$, $...,$
$\nabla_{\Gamma}^{\alpha}u.$ We call (2.21) the heat conduction inequality on
$\Gamma$. Therefore, we have established that a necessary and sufficient
condition for the _extended_ Clausius-Duhem inequality to hold for all
complete thermodynamic processes on $\overline{\Omega}$ is that both the
conduction inequalities (2.10) and (2.21), in $\Omega$ and $\Gamma$ respectively,
hold. An interesting consequence is that the following choices
$q=-\kappa_{\Omega}\left(u\right)\nabla u$ and
$q_{\Gamma}=-\kappa_{\Gamma}\left(u\right)\nabla_{\Gamma}u,$ where
$\kappa_{\Omega},\kappa_{\Gamma}>0$ are the thermal conductivities of $\Omega$
and $\Gamma$, respectively, are covered by this theory. Such choices were
assumed by the theories developed in [13], [14], [18] for the system
(2.1)-(2.2).
Motivated by the above result, we now wish to investigate more general
constitutive conditions for the response functions involved in (2.5), by
allowing them to depend also explicitly on histories up to time $t$ of the
temperature and/or the temperature gradients at $x$. Following the approach of
[4], using the abbreviations $g_{\Omega}:=\nabla u$,
$g_{\Gamma}:=\nabla_{\Gamma}u$, we consider a fixed point
$x\in\overline{\Omega}$, and define the functions $u^{t}$, $g_{\Omega}^{t},$
$g_{\Gamma}^{t}$ as the histories up to time $t$ of the temperature and the
temperature gradients at $x$. More precisely, we let
$u^{t}\left(x,s\right)=u\left(x,t-s\right),\quad g_{\Omega}^{t}\left(x,s\right)=g_{\Omega}\left(x,t-s\right),\quad g_{\Gamma}^{t}\left(x,s\right)=g_{\Gamma}\left(x,t-s\right),$
for all $\,s\in[0,\infty)$, on which these functions are well-defined. For a
complete thermodynamic process in $\overline{\Omega}$, we define the following
energy densities on $\Omega$ and $\Gamma$, respectively, by
$\psi_{\Omega}:=e_{\Omega}-u\eta_{\Omega},\text{
}\psi_{\Gamma}:=e_{\Gamma}-u\eta_{\Gamma}.$ (2.22)
Of course, knowledge of $e_{\Omega},e_{\Gamma}$ and $\eta_{\Omega},\eta_{\Gamma}$ obviously determines $\psi_{\Omega}$ and $\psi_{\Gamma}$ by these relations. We now consider a new generalization of
the constitutive equations for (2.7), (2.20) and the bulk and surface fluxes
$q,$ $q_{\Gamma},$ respectively. We shall investigate the implications that
the second law (2.6) has on these functions. We assume that the material at
$x\in\overline{\Omega}$ is characterized by three constitutive functionals
$P_{\Omega}$, $H_{\Omega}$ and $q$, in the bulk $\Omega$, and three more
constitutive functionals $P_{\Gamma},$ $H_{\Gamma}$ and $q_{\Gamma}$, on the
surface $\Gamma$, which give the present values of
$\psi_{\Omega},\psi_{\Gamma},\eta_{\Omega},\eta_{\Gamma},q$ and $q_{\Gamma}$
at any $x$, whenever the histories are specified at $x$. Note that the
restrictions of the functions $u^{t}$, $g_{\Omega}^{t},$ $g_{\Gamma}^{t}$ on
the open interval $\left(0,\infty\right)$, denoted here by $u_{r}^{t}$,
$g_{\Omega,r}^{t},$ $g_{\Gamma,r}^{t}$, are called past histories. Since a
knowledge of the histories $\left(u^{t},g_{\Omega}^{t},g_{\Gamma}^{t}\right)$
is equivalent to a knowledge of the past histories
$\left(u_{r}^{t},g_{\Omega,r}^{t},g_{\Gamma,r}^{t}\right),$ and the present
values $u^{t}\left(0\right)=u,$
$g_{\Omega}^{t}\left(0\right)=g_{\Omega}\left(t\right),$
$g_{\Gamma}^{t}\left(0\right)=g_{\Gamma}\left(t\right),$ it suffices to
consider
$\left\\{\begin{array}[]{ll}\psi_{\Omega}=P_{\Omega}\left(u^{t},g_{\Omega}^{t}\right),&\psi_{\Gamma}=P_{\Gamma}\left(u^{t},g_{\Gamma}^{t}\right),\\\
\eta_{\Omega}=H_{\Omega}\left(u^{t},g_{\Omega}^{t}\right),&\eta_{\Gamma}=H_{\Gamma}\left(u^{t},g_{\Gamma}^{t}\right),\\\
q=q\left(u^{t},g_{\Omega}^{t}\right),&q_{\Gamma}=q_{\Gamma}\left(u^{t},g_{\Gamma}^{t}\right),\end{array}\right.$
(2.23)
where the Principle of Equipresence is assumed in (2.23). We further suppose
that all the functionals in (2.23) obey the principle of fading memory as
formulated in [6] (cf. also [20, Section 5]). In particular, this assumption
means that “deformations and temperatures experienced in the distant past
should have less effect on the present values of the entropies, energies,
stresses, and heat fluxes than deformations and temperatures which occurred in
the recent past”. Such assumptions can be made precise through the so-called
“memory” functions $m_{\Omega},$ $m_{\Gamma},$ which characterize the rate at
which the memory fades both in the body $\Omega$ and on the surface $\Gamma$,
respectively. In particular, we may assume that both functions
$m_{S}\left(\cdot\right),$ $S\in\left\\{\Omega,\Gamma\right\\}$, are positive,
continuous functions on $\left(0,\infty\right)$ decaying sufficiently fast to
zero as $s\rightarrow\infty$. In this case, we let $D_{S}$ denote the common
domain for the functionals $P_{S},H_{S}$ and $q_{S}$ ($q_{\Omega}=q$), as the
set of all pairs $\left(u^{t},g_{S}^{t}\right)$ for which $u^{t}>0$ and
$\left\|\left(u^{t},g_{S}^{t}\right)\right\|<\infty$, where
$\left\|\left(u^{t},g_{S}^{t}\right)\right\|^{2}:=\left|u^{t}\left(0\right)\right|^{2}+\left|g_{S}^{t}\left(0\right)\right|^{2}+\int_{0}^{\infty}\left|u^{t}\left(s\right)\right|^{2}m_{S}\left(s\right)ds+\int_{0}^{\infty}\left(g_{S}^{t}\left(s\right)\cdot
g_{S}^{t}\left(s\right)\right)m_{S}\left(s\right)ds,$ (2.24)
and where $S\in\left\\{\Omega,\Gamma\right\\}$. Furthermore, for each
$S\in\left\\{\Omega,\Gamma\right\\}$ we assume as in [4] that $P_{S}$,
$H_{S}$, and $q_{S}$ ($q_{\Omega}=q$) are continuous over $D_{S}$ with respect
to the norm (2.24), but also that $P_{S}$ is continuously differentiable over
$D_{S}$ in the sense of Fréchet, and that the corresponding functional
derivatives are jointly continuous in their arguments.
In order to observe the set of restrictions that the postulate (2.6) puts on
the response functions, we recall (2.4) and substitute (2.22) into the local
forms (2.8), (2.9) to derive the following (local) forms of the extended
Clausius-Duhem inequality on $\overline{\Omega}$:
$\left\\{\begin{array}[]{ll}\overset{\centerdot}{\psi}_{\Omega}+\overset{\centerdot}{u}\eta_{\Omega}+\frac{1}{u}q_{\Omega}\cdot\nabla
u\leq 0&\text{in }\Omega,\\\
\overset{\centerdot}{\psi}_{\Gamma}+\overset{\centerdot}{u}\eta_{\Gamma}+\frac{1}{u}q_{\Gamma}\cdot\nabla_{\Gamma}u\leq
0&\text{on }\Gamma.\end{array}\right.$ (2.25)
We recall that a complete thermodynamic process is _admissible_ in
$\overline{\Omega}$ if it is compatible with the set of constitutive
conditions given in (2.23) at each point $x$ and at all times $t$. Since we
believe that our postulate (2.6) _should_ hold for all time-dependent
variables compatible with the extended law of balance of energy in (2.3), it
follows from [4, Theorem 6] (cf. also [6, Section 6, Theorem 1]) that the
Clausius-Duhem inequalities (2.25) imply for each
$S\in\left\\{\Omega,\Gamma\right\\}$ that
* •
The instantaneous derivatives of $P_{S}$ and $H_{S}$ with respect to $g_{S}$
are zero; more precisely,
$D_{g_{S}}P_{S}=D_{g_{S}}H_{S}=0.$
* •
The functional $H_{S}$ is determined by the functional $P_{S}$ through the
entropy relation:
$H_{S}=-D_{u}P_{S}.$
* •
The modified heat conduction inequalities
$\frac{1}{u^{2}}\left(q_{S}\cdot g_{S}\right)\leq\sigma_{S},\text{
}S\in\left\\{\Omega,\Gamma\right\\},$
(with $q_{\Omega}=q$) hold for all smooth processes in $\overline{\Omega}$ and
for all $t$.
Above, $\sigma_{S}$ denotes the internal/boundary dissipation
$\sigma_{S}\left(t\right):=-\frac{1}{u\left(t\right)}\left[\delta_{u}P_{S}\left(u^{t},g_{S}^{t}\mid\overset{\centerdot}{u}_{r}^{t}\right)+\delta_{g_{S}}P_{S}\left(u^{t},g_{S}^{t}\mid\overset{\centerdot}{g}_{S,r}^{t}\right)\right],$
at time $t$, corresponding to the histories $\left(u^{t},g_{S}^{t}\right)$,
where $\overset{\centerdot}{u}$ is the present rate of change of $u$ at $x$,
$\overset{\centerdot}{u}_{r}^{t}$ is the past history of the rate of change of
$u$ at $x,$ and so on. Moreover, $D_{g_{S}}P_{S}$, $\delta_{u}P_{S}$ and
$\delta_{g_{S}}P_{S}$ denote the following linear differential operators
$\displaystyle D_{g_{S}}P_{S}\left(u^{t},g_{S}^{t}\right)\cdot l$
$\displaystyle=\left(\frac{\partial}{\partial
y}P_{S}\left(u_{r}^{t},g_{S,r}^{t},u,g_{S}+yl\right)\right)_{y=0},$
$\displaystyle\delta_{u}P_{S}\left(u^{t},g_{S}^{t}\mid k\right)$
$\displaystyle=\left(\frac{\partial}{\partial
y}P_{S}\left(u_{r}^{t}+yk,g_{S,r}^{t},u,g_{S}\right)\right)_{y=0},$
$\displaystyle\delta_{g_{S}}P_{S}\left(u^{t},g_{S}^{t}\mid\kappa\right)$
$\displaystyle=\left(\frac{\partial}{\partial
y}P_{S}\left(u_{r}^{t},g_{S,r}^{t}+y\kappa,u,g_{S}\right)\right)_{y=0},$
with identities which hold clearly for $\left(u^{t},g_{S}^{t}\right)\in
D_{S},$ $S\in\left\\{\Omega,\Gamma\right\\}$, $l\in\mathbb{R}^{\zeta_{S}}$
($\zeta_{\Omega}=d$, $\zeta_{\Gamma}=d-1$), and all $\left(k,\kappa\right)$
such that
$\int_{0}^{\infty}\left|k\left(s\right)\right|^{2}m_{S}\left(s\right)ds<\infty,\int_{0}^{\infty}\left|\kappa\left(s\right)\right|^{2}m_{S}\left(s\right)ds<\infty.$
To derive a simple model which is sufficiently general (see (2.28)-(2.29)
below), we need to consider a set of constitutive equations for $e_{S},q_{S}$,
$S\in\left\\{\Omega,\Gamma\right\\}$, which comply with the above implications
that the second law has on the response functions associated with a given
complete thermodynamic process in $\overline{\Omega}$. A fairly general
assumption is to consider small variations in the absolute temperature and
temperature gradients on both $\Omega$ and $\Gamma$, respectively, from
equilibrium reference values (cf. (2.1)-(2.2)). We take
$e_{\Omega}\left(u\right)=e_{\Omega,\infty}+\rho_{\Omega}c_{\Omega}u,\text{
}e_{\Gamma}\left(u\right)=e_{\Gamma,\infty}+\rho_{\Gamma}c_{\Gamma}u,$
where the involved positive constants $e_{S,\infty},$ $c_{S},$ $\rho_{S}$
denote the internal energies at equilibrium, the specific heat capacities and
material densities of $S\in\left\\{\Omega,\Gamma\right\\}$, respectively. In
addition, we assume that the internal and boundary fluxes satisfy the
following constitutive equations:
$\begin{array}[]{l}q\left(t\right)=-\omega\nabla u-\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)\nabla u^{t}\left(s\right)ds,\\[1mm] q_{\Gamma}\left(t\right)=-\nu\nabla_{\Gamma}u-\left(1-\nu\right)\int_{0}^{\infty}m_{\Gamma}\left(s\right)\nabla_{\Gamma}u^{t}\left(s\right)ds,\end{array}$
(2.26)
for some constants $\omega,\nu\in\left(0,1\right)$. Of course, when $m_{S}=0$,
$S\in\left\\{\Omega,\Gamma\right\\}$, we recover in (2.26) the usual Fourier
laws. Thus, in this context the constants $\omega,\nu$ correspond to the
instantaneous conductivities of $\Omega$ and $\Gamma$, respectively.
Furthermore, we assume in (2.4) nonlinear temperature-dependent heat sources
$h_{S},$ $S\in\left\\{\Omega,\Gamma\right\\}$; namely, we take
$h_{\Omega}\left(t\right):=-f\left(u\left(t\right)\right)-\alpha\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)u\left(x,t-s\right)ds,\qquad h_{\Gamma}\left(t\right):=-g\left(u\left(t\right)\right)-q\cdot n-\beta\left(1-\nu\right)\int_{0}^{\infty}m_{\Gamma}\left(s\right)u\left(x,t-s\right)ds,$
(2.27)
for some $\alpha,\beta>0$, where the boundary source $h_{\Gamma}$ is also
assumed to depend linearly on the heat transported from inside $\Omega$ in the
direction normal to the boundary $\Gamma$. With these assumptions, (2.4)
yields the following system with memory
$\displaystyle\partial_{t}u-\omega\Delta
u-\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)\Delta
u\left(x,t-s\right)ds+f\left(u\right)$ (2.28)
$\displaystyle+\alpha\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)u\left(x,t-s\right)ds$
$\displaystyle=0,$
in $\Omega\times\left(0,\infty\right),$ subject to the boundary condition
$\displaystyle\partial_{t}u-\nu\Delta_{\Gamma}u+\omega\partial_{n}u+\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)\partial_{n}u\left(x,t-s\right)ds$
(2.29)
$\displaystyle+\left(1-\nu\right)\int_{0}^{\infty}m_{\Gamma}\left(s\right)\left(-\Delta_{\Gamma}+\beta\right)u\left(x,t-s\right)ds+g\left(u\right)$
$\displaystyle=0,$
on $\Gamma\times\left(0,\infty\right).$
It is worth emphasizing that the different choice
$e_{\Gamma}\left(u\right)=e_{\Gamma,\infty}$ in (2.4) leads to a formulation
in which the boundary condition (2.29) is no longer dynamic, in the sense that
it no longer contains the term $\partial_{t}u$. This stationary boundary
condition can also be reduced to (1.4) by a suitable choice of the parameters
$\beta,$ $\nu$ and the kernel $m_{\Gamma}$ involved in (2.26) and (2.27). On
the other hand, it is clear that if we (formally) choose $m_{S}=\delta_{0}$
(the Dirac mass at zero), for each $S\in\left\\{\Omega,\Gamma\right\\}$, then
equations (2.28)-(2.29) reduce to the following system
$\left\\{\begin{array}[]{ll}\partial_{t}u-\Delta
u+\overline{f}\left(u\right)=0,&\text{in
}\Omega\times\left(0,\infty\right),\\\
\partial_{t}u-\Delta_{\Gamma}u+\partial_{n}u+\overline{g}\left(u\right)=0,&\text{on
}\Gamma\times\left(0,\infty\right),\end{array}\right.$ (2.30)
where $\overline{g}\left(x\right):=g\left(x\right)+\left(1-\nu\right)\beta x,$
$\overline{f}\left(x\right):=f\left(x\right)+\left(1-\omega\right)\alpha x$,
$x\in\mathbb{R}$. The latter system has recently been investigated quite
extensively in many contexts (e.g., phase-field systems, heat conduction
phenomena with both a dissipative and a non-dissipative source $\overline{g}$,
Stefan problems, and many more). We refer the reader to the investigations
pertaining to the system (2.30) in [1, 11, 12, 13, 14, 15], and the references
therein.
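As a quick formal check of this reduction (recall that, formally, $\int_{0}^{\infty}\delta_{0}\left(s\right)v\left(t-s\right)ds=v\left(t\right)$ for any continuous history $v$), equation (2.28) becomes
$\partial_{t}u-\omega\Delta u-\left(1-\omega\right)\Delta u+f\left(u\right)+\alpha\left(1-\omega\right)u=\partial_{t}u-\Delta u+\overline{f}\left(u\right)=0,$
while, on $\Gamma$, the terms $-\nu\Delta_{\Gamma}u+\left(1-\nu\right)\left(-\Delta_{\Gamma}+\beta\right)u$ and $\omega\partial_{n}u+\left(1-\omega\right)\partial_{n}u$ in (2.29) collapse to $-\Delta_{\Gamma}u+\left(1-\nu\right)\beta u$ and $\partial_{n}u$, respectively, which is exactly the second equation of (2.30).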
## 3\. Past history formulation and functional setup
As in [8] (cf. also [19]), we can introduce the so-called integrated past
history of $u$, i.e., the auxiliary variable
$\eta^{t}\left(x,s\right)=\int_{0}^{s}u\left(x,t-y\right)dy,$
for $s,t>0$. Setting
$\mu_{\Omega}\left(s\right)=-\omega^{-1}\left(1-\omega\right)m_{\Omega}^{\prime}\left(s\right),\text{ }\mu_{\Gamma}\left(s\right)=-\nu^{-1}\left(1-\nu\right)m_{\Gamma}^{\prime}\left(s\right),$
(3.1)
and assuming that $m_{S},$ $S\in\left\\{\Omega,\Gamma\right\\}$, is
sufficiently smooth and vanishes at $\infty$, a formal integration by parts in
(2.28)-(2.29) yields
$\displaystyle\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)\Delta
u\left(x,t-s\right)ds$
$\displaystyle=\omega\int_{0}^{\infty}\mu_{\Omega}\left(s\right)\Delta\eta^{t}\left(x,s\right)ds,$
$\displaystyle\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)\partial_{n}u\left(x,t-s\right)ds$
$\displaystyle=\omega\int_{0}^{\infty}\mu_{\Omega}\left(s\right)\partial_{n}\eta^{t}\left(x,s\right)ds$
and
$\left(1-\nu\right)\int_{0}^{\infty}m_{\Gamma}\left(s\right)\left(-\Delta_{\Gamma}u\left(t-s\right)+\beta
u\left(t-s\right)\right)ds=\nu\int_{0}^{\infty}\mu_{\Gamma}\left(s\right)\left(-\Delta_{\Gamma}\eta^{t}\left(s\right)+\beta\eta^{t}\left(s\right)\right)ds.$
(3.2)
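For the reader's convenience, here is the formal computation behind the first of these identities: since $\partial_{s}\eta^{t}\left(x,s\right)=u\left(x,t-s\right)$, an integration by parts gives
$\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)\Delta u\left(x,t-s\right)ds=\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}\left(s\right)\partial_{s}\Delta\eta^{t}\left(x,s\right)ds=-\left(1-\omega\right)\int_{0}^{\infty}m_{\Omega}^{\prime}\left(s\right)\Delta\eta^{t}\left(x,s\right)ds=\omega\int_{0}^{\infty}\mu_{\Omega}\left(s\right)\Delta\eta^{t}\left(x,s\right)ds,$
the boundary terms vanishing because $\eta^{t}\left(x,0\right)=0$ and $m_{\Omega}$ decays at infinity (sufficient decay being assumed), and the last equality being (3.1); the remaining identities are obtained in the same way. Note also that, by definition, $\partial_{t}\eta^{t}\left(s\right)=u\left(t\right)-u\left(t-s\right)=u\left(t\right)-\partial_{s}\eta^{t}\left(s\right)$, which is the transport equation (3.5) below.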
Thus, we consider the following formulation.
Problem P. Find a pair of functions $\left(u,\eta^{t}\right)$ such that
$\partial_{t}u-\omega\Delta
u-\omega\int_{0}^{\infty}\mu_{\Omega}\left(s\right)\Delta\eta^{t}\left(s\right)ds+\alpha\omega\int_{0}^{\infty}\mu_{\Omega}\left(s\right)\eta^{t}\left(x,s\right)ds+f\left(u\right)=0,$
(3.3)
in $\Omega\times\left(0,\infty\right),$
$\displaystyle\partial_{t}u-\nu\Delta_{\Gamma}u+\omega\partial_{n}u+\omega\int_{0}^{\infty}\mu_{\Omega}\left(s\right)\partial_{n}\eta^{t}\left(s\right)ds$
(3.4)
$\displaystyle+\nu\int_{0}^{\infty}\mu_{\Gamma}\left(s\right)\left(-\Delta_{\Gamma}\eta^{t}\left(s\right)+\beta\eta^{t}\left(s\right)\right)ds+g\left(u\right)$
$\displaystyle=0,$
on $\Gamma\times\left(0,\infty\right),$ and
$\partial_{t}\eta^{t}\left(s\right)+\partial_{s}\eta^{t}\left(s\right)=u\left(t\right),\text{
in }\overline{\Omega}\times\left(0,\infty\right),$ (3.5)
subject to the boundary condition
$\eta^{t}\left(0\right)=0\text{, in
}\overline{\Omega}\times\left(0,\infty\right)$ (3.6)
and initial conditions
$u\left(0\right)=u_{0}\text{ in }\Omega,\text{ }u\left(0\right)=v_{0}\text{ on
}\Gamma,$ (3.7)
and
$\eta^{0}\left(s\right)=\eta_{0}\text{ in }\Omega\text{,
}\eta^{0}\left(s\right)=\xi_{0}\text{ on }\Gamma.$ (3.8)
Note that we do not require that the boundary traces of $u_{0}$ and $\eta_{0}$
equal $v_{0}$ and $\xi_{0}$, respectively. Thus, we are solving a much more
general problem in which equation (3.3) is interpreted as an evolution
equation in the bulk $\Omega$ properly coupled with the equation (3.4) on the
boundary $\Gamma.$ Finally, we note that $\eta_{0},\xi_{0}$ are defined by
$\displaystyle\eta_{0}$ $\displaystyle=$
$\displaystyle\int_{0}^{s}u_{0}\left(x,-y\right)dy,\text{ in }\Omega\text{,
for }s>0,$ $\displaystyle\xi_{0}$ $\displaystyle=$
$\displaystyle\int_{0}^{s}v_{0}\left(x,-y\right)dy,\text{ on }\Gamma\text{,
for }s>0.$
However, from now on both $\eta_{0}$ and $\xi_{0}$ will be regarded as
independent of the initial data $u_{0},v_{0}.$ Indeed, below we will consider
a problem that is more general than the original one. In order to give a
more rigorous notion of solutions for problem (3.3)-(3.8), we need to
introduce some terminology and the functional setting associated with this
system.
In the sequel, we denote by $\left\|\cdot\right\|_{L^{2}\left(\Gamma\right)}$
and $\left\|\cdot\right\|_{L^{2}\left(\Omega\right)}$ the norms on
$L^{2}\left(\Gamma\right)$ and $L^{2}\left(\Omega\right)$, whereas the inner
products in these spaces are denoted by
$\left\langle\cdot,\cdot\right\rangle_{L^{2}\left(\Gamma\right)}$ and
$\left\langle\cdot,\cdot\right\rangle_{L^{2}\left(\Omega\right)},$
respectively. Furthermore, the norms on $H^{s}\left(\Omega\right)$ and
$H^{s}\left(\Gamma\right),$ for $s>0,$ will be indicated by
$\left\|\cdot\right\|_{H^{s}}$ and
$\left\|\cdot\right\|_{H^{s}\left(\Gamma\right)}$, respectively. The symbol
$\left\langle\cdot,\cdot\right\rangle$ stands for the duality pairing between
a generic Banach space $V$ and its dual $V^{\ast}$; $(u,v)^{\mathrm{tr}}$ will
also simply denote the vector-valued function $\binom{u}{v}.$ Constants below
may depend on various structural parameters such as $|\Omega|$, $|\Gamma|$,
$\ell_{1},$ $\ell_{2}$, etc., and may change from line to line. Furthermore,
we denote by $K(R)$ a generic monotonically increasing function of $R>0,$
whose specific dependence on other parameters will be made explicit on
occurrence.
Let us now define the basic functional setup for (3.3)-(3.8). From this point
on, we assume that $\Omega$ is a bounded domain of $\mathbb{R}^{3}$ with
boundary $\Gamma$ of class $\mathcal{C}^{2}$. Consider the space
$\mathbb{X}^{2}=L^{2}\left(\overline{\Omega},d\mu\right),$ where
$d\mu=dx_{\mid\Omega}\oplus d\sigma,$ $dx$ denoting the Lebesgue measure on
$\Omega$ and $d\sigma$ the natural surface measure on $\Gamma$. It is easy to
see that $\mathbb{X}^{2}$ may be identified with
$L^{2}\left(\Omega,dx\right)\oplus L^{2}\left(\Gamma,d\sigma\right)$, endowed
with the norm
$\left\|u\right\|_{\mathbb{X}^{2}}^{2}=\int\limits_{\Omega}\left|u\left(x\right)\right|^{2}dx+\int\limits_{\Gamma}\left|u\left(x\right)\right|^{2}d\sigma.$
Moreover, if we identify every $u\in C\left(\overline{\Omega}\right)$ with
$U=\left(u_{\mid\Omega},u_{\mid\Gamma}\right)\in C\left(\Omega\right)\times
C\left(\Gamma\right)$, we may also define $\mathbb{X}^{2}$ to be the
completion of $C\left(\overline{\Omega}\right)$ in the norm
$\left\|\cdot\right\|_{\mathbb{X}^{2}}$. In general, any function
$u\in\mathbb{X}^{2}$ will be of the form $u=\binom{u_{1}}{u_{2}}$ with
$u_{1}\in L^{2}\left(\Omega,dx\right)$ and $u_{2}\in
L^{2}\left(\Gamma,d\sigma\right),$ and there need not be any connection
between $u_{1}$ and $u_{2}$. From now on, the inner product in the Hilbert
space $\mathbb{X}^{2}$ will be denoted by
$\left\langle\cdot,\cdot\right\rangle_{\mathbb{X}^{2}}.$ Next, recall that the
Dirichlet trace map
${\mathrm{tr_{D}}}:C^{\infty}\left(\overline{\Omega}\right)\rightarrow
C^{\infty}\left(\Gamma\right),$ defined by
${\mathrm{tr_{D}}}\left(u\right)=u_{\mid\Gamma}$ extends to a linear
continuous operator ${\mathrm{tr_{D}}}:H^{r}\left(\Omega\right)\rightarrow
H^{r-1/2}\left(\Gamma\right),$ for all $r>1/2$, which is onto for $1/2<r<3/2.$
This map also possesses a bounded right inverse
${\mathrm{tr_{D}}}^{-1}:H^{r-1/2}\left(\Gamma\right)\rightarrow
H^{r}\left(\Omega\right)$ such that
${\mathrm{tr_{D}}}\left({\mathrm{tr_{D}}}^{-1}\psi\right)=\psi,$ for any
$\psi\in H^{r-1/2}\left(\Gamma\right)$. We can thus introduce the subspaces of
$H^{r}\left(\Omega\right)\times H^{r-1/2}\left(\Gamma\right)$ and
$H^{r}\left(\Omega\right)\times H^{r}\left(\Gamma\right)$, respectively, by
$\displaystyle\mathbb{V}_{0}^{r}$ $\displaystyle:=\\{U=\left(u,\psi\right)\in
H^{r}\left(\Omega\right)\times
H^{r-1/2}\left(\Gamma\right):{\mathrm{tr_{D}}}\left(u\right)=\psi\\},$ (3.9)
$\displaystyle\mathbb{V}^{r}$
$\displaystyle:=\\{U=\left(u,\psi\right)\in\mathbb{V}_{0}^{r}:{\mathrm{tr_{D}}}\left(u\right)=\psi\in
H^{r}\left(\Gamma\right)\\},$
for every $r>1/2,$ and note that $\mathbb{V}_{0}^{r},$ $\mathbb{V}^{r}$ are
not product spaces. However, we have the dense and compact embeddings
$\mathbb{V}_{0}^{r_{1}}\subset\mathbb{V}_{0}^{r_{2}},$ for any
$r_{1}>r_{2}>1/2$ (by definition, this is also true for the scale of spaces
$\mathbb{V}^{r_{1}}\subset\mathbb{V}^{r_{2}}$). Naturally, the norms on the
spaces $\mathbb{V}_{0}^{r},$ $\mathbb{V}^{r}$ are defined by
$\|U\|_{\mathbb{V}_{0}^{r}}^{2}:=\|u\|_{H^{r}}^{2}+\|\psi\|_{H^{r-1/2}(\Gamma)}^{2},\text{
}\|U\|_{\mathbb{V}^{r}}^{2}:=\|u\|_{H^{r}}^{2}+\|\psi\|_{H^{r}(\Gamma)}^{2}.$
(3.10)
In particular, on the spaces $\mathbb{V}^{1}$ and $\mathbb{V}_{0}^{1}$ we may
work with the following equivalent norms:
$\|U\|_{\mathbb{V}^{1}}:=\left(\omega\|\nabla u\|_{L^{2}\left(\Omega\right)}^{2}+\nu\|\nabla_{\Gamma}\psi\|_{L^{2}(\Gamma)}^{2}+\beta\nu\left\|\psi\right\|_{L^{2}\left(\Gamma\right)}^{2}\right)^{1/2},\text{ }\nu>0,$
$\|U\|_{\mathbb{V}_{0}^{1}}:=\left(\omega\|\nabla u\|_{L^{2}\left(\Omega\right)}^{2}+\alpha\omega\left\|u\right\|_{L^{2}\left(\Omega\right)}^{2}\right)^{1/2}.$
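Let us briefly justify the equivalence (a sketch, relying on the Sobolev–Poincaré inequality (4.8) stated in Section 4): for $U=\left(u,\psi\right)\in\mathbb{V}^{1}$ with $\psi={\mathrm{tr_{D}}}\left(u\right)$, the only contribution from (3.10) with $r=1$ not explicitly present in $\|U\|_{\mathbb{V}^{1}}$ is $\|u\|_{L^{2}\left(\Omega\right)}$, and
$\left\|u\right\|_{L^{2}\left(\Omega\right)}\leq\left\|u-\left\langle u\right\rangle_{\Gamma}\right\|_{L^{2}\left(\Omega\right)}+\left|\Omega\right|^{1/2}\left|\left\langle u\right\rangle_{\Gamma}\right|\leq C_{\Omega}\left\|\nabla u\right\|_{L^{2}\left(\Omega\right)}+\left(\left|\Omega\right|/\left|\Gamma\right|\right)^{1/2}\left\|\psi\right\|_{L^{2}\left(\Gamma\right)},$
so that $\|u\|_{L^{2}\left(\Omega\right)}$ is controlled by $\|U\|_{\mathbb{V}^{1}}$ (recall $\omega,\nu\in\left(0,1\right)$ and $\beta>0$ there); for $\|U\|_{\mathbb{V}_{0}^{1}}$ one uses the term $\alpha\omega\left\|u\right\|_{L^{2}\left(\Omega\right)}^{2}$ directly, the $H^{1/2}\left(\Gamma\right)$ part being handled by the trace theorem.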
Now we introduce the spaces for the memory vector-valued function
$\left(\eta,\xi\right)$. For a given measurable function $\theta_{S},$
$S\in\left\\{\Omega,\Gamma\right\\}$, defined on $\mathbb{R}_{+},$
nonnegative and not identically zero, and a real
Hilbert space $W$ (with inner product denoted by
$\left\langle\cdot,\cdot\right\rangle_{\mathrm{W}}$), let
$L_{\theta_{S}}^{2}\left(\mathbb{R}_{+};W\right)$ be the Hilbert space of
$W$-valued functions on $\mathbb{R}_{+}$, endowed with the following inner
product
$\left\langle\phi_{1},\phi_{2}\right\rangle_{L_{\theta_{S}}^{2}\left(\mathbb{R}_{+};W\right)}=\int_{0}^{\infty}\theta_{S}(s)\left\langle\phi_{1}\left(s\right),\phi_{2}\left(s\right)\right\rangle_{\mathrm{W}}ds.$
(3.11)
Moreover, for each $r>1/2$ we define
$L_{\theta_{\Omega}\oplus\theta_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{r}\right)\simeq
L_{\theta_{\Omega}}^{2}\left(\mathbb{R}_{+};\mathbb{V}_{0}^{r}\right)\oplus
L_{\theta_{\Gamma}}^{2}\left(\mathbb{R}_{+};H^{r}\left(\Gamma\right)\right)$
as the Hilbert space of $\mathbb{V}^{r}$-valued functions
$\left(\eta,\xi\right)^{\mathrm{tr}}$ on $\mathbb{R}_{+}$ endowed with the
inner product
$\left\langle\binom{\eta_{1}}{\xi_{1}},\binom{\eta_{2}}{\xi_{2}}\right\rangle_{L_{\theta_{\Omega}\oplus\theta_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{r}\right)}=\int_{0}^{\infty}\left(\theta_{\Omega}(s)\left\langle\eta_{1}\left(s\right),\eta_{2}\left(s\right)\right\rangle_{H^{r}}+\theta_{\Gamma}(s)\left\langle\xi_{1}\left(s\right),\xi_{2}\left(s\right)\right\rangle_{H^{r}\left(\Gamma\right)}\right)ds.$
Consequently, for $r>1/2$ we set
$\mathcal{M}_{\Omega}^{0}:=L_{\mu_{\Omega}}^{2}\left(\mathbb{R}_{+};L^{2}\left(\Omega\right)\right)\text{,
}\mathcal{M}_{\Omega}^{r}:=L_{\mu_{\Omega}}^{2}(\mathbb{R}_{+};\mathbb{V}_{0}^{r})\text{,
}\mathcal{M}_{\Gamma}^{r}:=L_{\mu_{\Gamma}}^{2}(\mathbb{R}_{+};H^{r}\left(\Gamma\right))$
and
$\mathcal{M}_{\Omega,\Gamma}^{0}:=L_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{X}^{2}\right),\text{
}\mathcal{M}_{\Omega,\Gamma}^{r}:=L_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{r}\right).$
Clearly, because of the topological identification
$H^{r}\left(\Omega\right)\simeq\mathbb{V}_{0}^{r}$, one has the inclusion
$\mathcal{M}_{\Omega,\Gamma}^{r}\subset\mathcal{M}_{\Omega}^{r}$ for each
$r>1/2$. In the sequel, we will also consider Hilbert spaces of the form
$W_{\mu_{\Omega}}^{k,2}\left(\mathbb{R}_{+};\mathbb{V}_{0}^{r}\right)$ for
$k\in\mathbb{N}$. When it is convenient, we will also use the notation
$\mathcal{H}_{\Omega,\Gamma}^{0,1}:=\mathbb{X}^{2}\times\mathcal{M}_{\Omega,\Gamma}^{1}\text{,
}\mathcal{H}_{\Omega,\Gamma}^{s,r}:=\mathbb{V}^{s}\times\mathcal{M}_{\Omega,\Gamma}^{r}\text{
for }s,r\geq 1.$
For convenience, we also record the inner product in
$\mathcal{M}_{\Omega,\Gamma}^{1}$ explicitly:
$\displaystyle\left\langle\binom{\eta_{1}}{\xi_{1}},\binom{\eta_{2}}{\xi_{2}}\right\rangle_{L_{\theta_{\Omega}\oplus\theta_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)}$
$\displaystyle=\omega\int_{0}^{\infty}\theta_{\Omega}(s)\left(\left\langle\nabla\eta_{1}\left(s\right),\nabla\eta_{2}\left(s\right)\right\rangle_{L^{2}\left(\Omega\right)}+\alpha\left\langle\eta_{1}\left(s\right),\eta_{2}\left(s\right)\right\rangle_{L^{2}\left(\Omega\right)}\right)ds$
$\displaystyle+\nu\int_{0}^{\infty}\theta_{\Gamma}(s)\left(\left\langle\nabla_{\Gamma}\xi_{1}\left(s\right),\nabla_{\Gamma}\xi_{2}\left(s\right)\right\rangle_{L^{2}\left(\Gamma\right)}+\beta\left\langle\xi_{1}\left(s\right),\xi_{2}\left(s\right)\right\rangle_{L^{2}\left(\Gamma\right)}\right)ds.$
The following basic elliptic estimate is taken from [13, Lemma 2.2].
###### Lemma 3.1.
Consider the linear boundary value problem,
$\left\\{\begin{array}[]{rl}-\Delta u&=p_{1}~{}~{}\text{in}~{}\Omega,\\\
-\Delta_{\Gamma}u+\partial_{n}u+\beta
u&=p_{2}~{}~{}\text{on}~{}\Gamma.\end{array}\right.$ (3.12)
If $(p_{1},p_{2})^{{\mathrm{tr}}}\in H^{s}(\Omega)\times H^{s}(\Gamma)$, for
$s\geq 0$ and $s+\frac{1}{2}\not\in\mathbb{N}$, then the following estimate
holds for some constant $C>0$,
$\|u\|_{H^{s+2}}+\|u\|_{H^{s+2}(\Gamma)}\leq
C\left(\|p_{1}\|_{H^{s}}+\|p_{2}\|_{H^{s}(\Gamma)}\right).$ (3.13)
We also recall the following basic inequality from [11, Lemma A.2].
###### Lemma 3.2.
Let $s>1$ and $u\in H^{1}(\Omega)$. Then, for every $\varepsilon>0$, there
exists a positive constant $C_{\varepsilon}\sim\varepsilon^{-1}$ such that,
$\|u\|_{L^{s}(\Gamma)}^{s}\leq\varepsilon\|\nabla
u\|_{L^{2}(\Omega)}^{2}+C_{\varepsilon}\left(\|u\|_{L^{\gamma}(\Omega)}^{\gamma}+1\right),$
(3.14)
where $\gamma=\max\\{s,2(s-1)\\}$.
Next, we consider the linear (self-adjoint, positive) operator
$\mathrm{C}\psi:=\mathrm{C}_{\beta}\psi=-\Delta_{\Gamma}\psi+\beta\psi$ acting
on $D\left(\mathrm{C}\right)=H^{2}\left(\Gamma\right)$. The basic (linear)
operator, associated with problem (3.3)-(3.5), is the so-called “Wentzell”
Laplace operator. Recall that $\omega\in\left(0,1\right)$. We let
$\displaystyle\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}\binom{u_{1}}{u_{2}}$
$\displaystyle:=\left(\begin{array}[]{cc}-\omega\Delta+\alpha\omega I&0\\\
\omega\partial_{n}\left(\cdot\right)&\nu\mathrm{C}\end{array}\right)\left(\begin{array}[]{c}u_{1}\\\
u_{2}\end{array}\right)$ (3.19)
$\displaystyle=\mathrm{A_{W}^{\alpha,0,0,\omega}}\binom{u_{1}}{u_{2}}+\binom{0}{\nu\mathrm{C}u_{2}},$
with
$D\left(\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}\right):=\left\\{U=\binom{u_{1}}{u_{2}}\in\mathbb{Y}:-\Delta
u_{1}\in L^{2}\left(\Omega\right),\text{
}\omega\partial_{n}u_{1}-\nu\mathrm{C}u_{2}\in
L^{2}\left(\Gamma\right)\right\\},$ (3.20)
where $\mathbb{Y}:=\mathbb{V}_{0}^{1}$ if $\nu=0,$ and
$\mathbb{Y}:=\mathbb{V}^{1}$ if $\nu>0.$ It is well-known that
$(\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}},D(\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}))$
is a self-adjoint and nonnegative operator on $\mathbb{X}^{2}$ whenever
$\alpha,\beta,\nu\geq 0,$ and $\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}>0$ if
either $\alpha>0$ or $\beta>0$. Moreover, the resolvent operator
$(I+\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}})^{-1}\in\mathcal{L}\left(\mathbb{X}^{2}\right)$
is compact. In addition, since $\Gamma$ is of class $\mathcal{C}^{2},$ we have
$D(\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}})=\mathbb{V}^{2}$ if $\nu>0$.
Indeed, for any $\alpha,\beta\geq 0$ with
$\left(\alpha,\beta\right)\neq\left(0,0\right),$ the map
$\Psi:U\mapsto\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}U,$ when viewed as a map
from $\mathbb{V}^{2}$ into $\mathbb{X}^{2}=L^{2}\left(\Omega\right)\times
L^{2}\left(\Gamma\right),$ is an isomorphism, and there exists a positive
constant $C_{\ast}$, independent of $U=\left(u,\psi\right)^{\mathrm{tr}}$,
such that
$C_{\ast}^{-1}\left\|U\right\|_{\mathbb{V}^{2}}\leq\left\|\Psi\left(U\right)\right\|_{\mathbb{X}^{2}}\leq
C_{\ast}\left\|U\right\|_{\mathbb{V}^{2}},$ (3.21)
for all $U\in\mathbb{V}^{2}$ (cf. Lemma 3.1). Whenever $\nu=0$, elliptic
regularity theory yields, for $U\in D(\mathrm{A_{W}^{\alpha,\beta,0,\omega}})$, that one has
$u\in H^{3/2}\left(\Omega\right)$ and $\psi={\mathrm{tr_{D}}}\left(u\right)\in
H^{1}\left(\Gamma\right)$, since the Dirichlet-to-Neumann map is bounded from
$H^{1}\left(\Gamma\right)$ to $L^{2}\left(\Gamma\right)$; hence
$D(\mathrm{A_{W}^{\alpha,\beta,0,\omega}})=\mathbb{W}$, where $\mathbb{W}$ is
the Hilbert space equipped with the following (equivalent) norm
$\left\|U\right\|_{\mathbb{W}}^{2}:=\left\|U\right\|_{\mathbb{V}_{0}^{3/2}}^{2}+\left\|\Delta
u\right\|_{L^{2}\left(\Omega\right)}^{2}+\left\|\partial_{n}u\right\|_{L^{2}\left(\Gamma\right)}^{2}.$
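In this connection, and to motivate the weighted norms introduced above, let us record the (formal) Green-type identity behind these claims: for sufficiently smooth $U=\left(u,\psi\right)^{\mathrm{tr}}$ with $\psi={\mathrm{tr_{D}}}\left(u\right)$,
$\left\langle\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}U,U\right\rangle_{\mathbb{X}^{2}}=\omega\left\|\nabla u\right\|_{L^{2}\left(\Omega\right)}^{2}+\alpha\omega\left\|u\right\|_{L^{2}\left(\Omega\right)}^{2}+\nu\left\|\nabla_{\Gamma}\psi\right\|_{L^{2}\left(\Gamma\right)}^{2}+\beta\nu\left\|\psi\right\|_{L^{2}\left(\Gamma\right)}^{2},$
since the boundary term $-\omega\int_{\Gamma}\left(\partial_{n}u\right)u\,d\sigma$ produced by integrating $-\omega\Delta u$ against $u$ over $\Omega$ cancels with the term $\omega\int_{\Gamma}\left(\partial_{n}u\right)\psi\,d\sigma$ coming from the boundary component. This identity makes the nonnegativity (and, for $\alpha>0$ or $\beta>0$, the positivity) of $\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}$ evident.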
We refer the reader for more details to, e.g., [1], [2], [15] and the references
therein. We now have all the necessary ingredients to introduce a rigorous
formulation of problem P in the next section.
## 4\. Variational formulation and well-posedness
We need the following hypotheses for problem P. For the function $\mu_{S},$
$S\in\left\\{\Omega,\Gamma\right\\}$, given by (3.1), we consider the
following assumptions (cf., e.g. [8], [16] and [17]). Assume
$\displaystyle\mu_{S}\in C^{1}(\mathbb{R}_{+})\cap L^{1}(\mathbb{R}_{+}),$
(4.1) $\displaystyle\mu_{S}(s)\geq 0~{}~{}\text{for all}~{}~{}s\geq 0,$ (4.2)
$\displaystyle\mu_{S}^{\prime}(s)\leq 0~{}~{}\text{for all}~{}~{}s\geq 0.$
(4.3)
These assumptions are equivalent to assuming that $m_{S}(s),$
$S\in\left\\{\Omega,\Gamma\right\\}$, is a bounded, positive, nonincreasing,
convex function of class $C^{2}$. These conditions are commonly used in the
literature (see, for example, [8], [16] and [19]) to establish existence and
uniqueness of continuous global weak solutions for Coleman-Gurtin type
equations subject to Dirichlet boundary conditions.
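For the assumptions above, a standard illustrative example is the exponential kernel: taking, for some (illustrative) parameters $\lambda_{S}>0$,
$m_{S}\left(s\right)=e^{-\lambda_{S}s},\text{ }S\in\left\\{\Omega,\Gamma\right\\},$
formula (3.1) gives $\mu_{\Omega}\left(s\right)=\omega^{-1}\left(1-\omega\right)\lambda_{\Omega}e^{-\lambda_{\Omega}s}$ and $\mu_{\Gamma}\left(s\right)=\nu^{-1}\left(1-\nu\right)\lambda_{\Gamma}e^{-\lambda_{\Gamma}s}$, which are smooth, integrable, nonnegative and nonincreasing, so that (4.1)-(4.3) hold; correspondingly, $m_{S}$ is indeed bounded, positive, nonincreasing and convex.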
As far as natural conditions for the nonlinear terms are concerned, we assume
$f$, $g\in C^{1}(\mathbb{R})$ satisfy the sign conditions
$f^{\prime}(s)\geq-M_{f},\text{ }g^{\prime}(s)\geq-M_{g},\text{ for all
}s\in\mathbb{R}\text{,}$ (4.4)
for some $M_{f},M_{g}>0$ and the growth assumptions, for all $s\in\mathbb{R}$,
$|f(s)|\leq\ell_{1}(1+|s|^{r_{1}-1}),\text{
}|g(s)|\leq\ell_{2}(1+|s|^{r_{2}-1}),$ (4.5)
for some positive constants $\ell_{1}$ and $\ell_{2}$, and where
$r_{1},r_{2}\geq 2$. Let now
$\widetilde{g}\left(s\right):=g\left(s\right)-\nu\beta s,\text{ for
}s\in\mathbb{R}\text{.}$ (4.6)
In addition, we assume there exists $\varepsilon\in(0,\omega)$ so that the
following balance condition
$\liminf_{|s|\rightarrow\infty}\frac{f(s)s+\frac{|\Gamma|}{|\Omega|}\widetilde{g}(s)s-\frac{C_{\Omega}^{2}|\Gamma|^{2}}{4\varepsilon|\Omega|^{2}}|\widetilde{g}^{\prime}(s)s+\widetilde{g}(s)|^{2}}{\left|s\right|^{r_{1}}}>0$
(4.7)
holds for $r_{1}\geq\max\\{r_{2},2(r_{2}-1)\\}$. The number $C_{\Omega}>0$ is
the best Sobolev constant in the following Sobolev-Poincaré inequality
$\left\|u-\left\langle
u\right\rangle_{\Gamma}\right\|_{L^{2}\left(\Omega\right)}\leq
C_{\Omega}\left\|\nabla u\right\|_{L^{2}(\Omega)},\text{ }\left\langle
u\right\rangle_{\Gamma}:=\frac{1}{\left|\Gamma\right|}\int\limits_{\Gamma}\mathrm{tr_{D}}\left(u\right)d\sigma,$
(4.8)
for all $u\in H^{1}\left(\Omega\right)$, see [25, Lemma 3.1].
The assumption (4.7) deserves some additional comments. Suppose that, as
$\left|y\right|\rightarrow\infty,$ both the internal and boundary functions
behave according to the following laws:
$\lim_{\left|y\right|\rightarrow\infty}\frac{f^{{}^{\prime}}\left(y\right)}{\left|y\right|^{r_{1}-2}}=\left(r_{1}-1\right)c_{f}\text{,
}\lim_{\left|y\right|\rightarrow\infty}\frac{\widetilde{g}^{{}^{\prime}}\left(y\right)}{\left|y\right|^{r_{2}-2}}=\left(r_{2}-1\right)c_{\widetilde{g}},$
(4.9)
for some $c_{f},c_{\widetilde{g}}\in\mathbb{R}\backslash\left\\{0\right\\}$.
In particular, we have
$f\left(y\right)y\sim c_{f}\left|y\right|^{r_{1}},\text{
}\widetilde{g}\left(y\right)y\sim
c_{\widetilde{g}}\left|y\right|^{r_{2}}\text{ as
}\left|y\right|\rightarrow\infty.$
For the case of bulk dissipation (i.e., $c_{f}>0$) and anti-dissipative
behavior at the boundary $\Gamma$ (i.e., $c_{\widetilde{g}}<0$), assumption
(4.7) is automatically satisfied provided that
$r_{1}>\max\\{r_{2},2(r_{2}-1)\\}$. Furthermore, if
$2<r_{2}<2\left(r_{2}-1\right)=r_{1}$ and
$c_{f}>\frac{1}{4\varepsilon}\left(\frac{C_{\Omega}\left|\Gamma\right|c_{\widetilde{g}}r_{2}}{\left|\Omega\right|}\right)^{2},$
(4.10)
for some $\varepsilon\in(0,\omega)$, then once again (4.7) is satisfied. In
the case when $f$ and $g$ have at most linear growth (i.e., $r_{1}=r_{2}=2$ in
(4.5)), the condition (4.7) is also automatically satisfied provided that
$\left(c_{f}+\frac{|\Gamma|}{|\Omega|}c_{\widetilde{g}}\right)>\frac{1}{\varepsilon}\left(\frac{C_{\Omega}\left|\Gamma\right|c_{\widetilde{g}}}{\left|\Omega\right|}\right)^{2}$
(4.11)
for some $\varepsilon\in\left(0,\omega\right)$. Of course, when both the bulk
and boundary nonlinearities are dissipative, i.e., there exist two constants
$C_{f}>0,C_{g}>0$ such that, additionally to (4.5),
$\left\\{\begin{array}[]{l}f\left(s\right)s\geq
C_{f}\left|s\right|^{r_{1}},\\\ \widetilde{g}\left(s\right)s\geq
C_{g}\left|s\right|^{r_{2}},\end{array}\right.$ (4.12)
for all $\left|s\right|\geq s_{0}$, for some sufficiently large $s_{0}>0,$
condition (4.7) is no longer required (see [11]).
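To fix ideas, a simple illustrative choice matching the first scenario above is $f\left(s\right)=s^{3}$ and $\widetilde{g}\left(s\right)=-s$, corresponding to $r_{1}=4$, $c_{f}=1$ and $r_{2}=2$, $c_{\widetilde{g}}=-1$ in (4.9): here
$r_{1}=4>\max\\{r_{2},2(r_{2}-1)\\}=2,$
so the bulk dissipation dominates the anti-dissipative boundary behavior and (4.7) is satisfied, even though $\widetilde{g}$ pumps energy in at the boundary.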
In order to introduce a rigorous formulation for problem P, we define
$D(\mathrm{T}):=\left\\{\Phi=\binom{\eta^{t}}{\xi^{t}}\in\mathcal{M}_{\Omega,\Gamma}^{1}:\partial_{s}\Phi\in\mathcal{M}_{\Omega,\Gamma}^{1},\text{
}\Phi(0)=0\right\\}$ (4.13)
and consider the linear (unbounded) operator
$\mathrm{T}:D(\mathrm{T})\rightarrow\mathcal{M}_{\Omega,\Gamma}^{1}$ by
$\mathrm{T}\Phi=-\binom{\frac{d\eta}{ds}}{\frac{d\xi}{ds}},\text{
}\Phi=\binom{\eta^{t}}{\xi^{t}}\in D(\mathrm{T}).$
The following result can be proven arguing as in [19, Theorem 3.1].
###### Proposition 4.1.
The operator $\mathrm{T}$ with domain $D(\mathrm{T})$ is an infinitesimal
generator of a strongly continuous semigroup of contractions on
$\mathcal{M}_{\Omega,\Gamma}^{1}$, denoted $e^{\mathrm{T}t}$.
As a consequence, we also have (cf., e.g. [23, Corollary IV.2.2]).
###### Corollary 4.2.
Let $T>0$ and assume $U=\binom{u}{v}\in L^{1}(0,T;\mathbb{V}^{1})$. Then, for
every $\Phi_{0}\in\mathcal{M}_{\Omega,\Gamma}^{1}$, the Cauchy problem for
$\Phi^{t}=\binom{\eta^{t}}{\xi^{t}},$
$\left\\{\begin{array}[]{ll}\partial_{t}\Phi^{t}=\mathrm{T}\Phi^{t}+U(t),&\text{for}~{}~{}t>0,\\\
\Phi^{0}=\Phi_{0},&\end{array}\right.$ (4.14)
has a unique (mild) solution $\Phi\in
C([0,T];\mathcal{M}_{\Omega,\Gamma}^{1})$ which can be explicitly given as
$\Phi^{t}(s)=\left\\{\begin{array}[]{ll}\displaystyle\int_{0}^{s}U(t-y)dy,&\text{for}~{}~{}0<s\leq
t,\\\ \displaystyle\Phi_{0}(s-t)+\int_{0}^{t}U(t-y)dy,&\text{for
}~{}~{}s>t,\end{array}\right.$ (4.15)
cf. also [8, Section 3.2] and [19, Section 3].
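As a quick formal check of (4.15): for $0<s\leq t$ we have $\partial_{t}\Phi^{t}(s)=U(t)-U(t-s)$ and $\mathrm{T}\Phi^{t}(s)=-\partial_{s}\Phi^{t}(s)=-U(t-s)$, while for $s>t$ we have $\partial_{t}\Phi^{t}(s)=-\partial_{s}\Phi_{0}(s-t)+U(t)$ and $\mathrm{T}\Phi^{t}(s)=-\partial_{s}\Phi_{0}(s-t)$; in both cases
$\partial_{t}\Phi^{t}=\mathrm{T}\Phi^{t}+U(t),$
and moreover $\Phi^{t}(0)=0$ and $\Phi^{0}=\Phi_{0}$, as required.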
###### Remark 4.3.
(i) Note that, from assumption (4.3), the following inequality
$\left\langle\mathrm{T}\Phi,\Phi\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}\leq
0$ (4.16)
holds for all $\Phi\in D(\mathrm{T})$.
(ii) If $\Phi_{0}\in D(\mathrm{T})$ and $\partial_{t}U\in
L^{1}\left(0,T;\mathbb{V}^{1}\right)$, the function $\Phi^{t}$ given by (4.15)
satisfies (4.14) in the strong sense a.e. on $\left(0,T\right),$ for any
$T>0.$
We are now ready to introduce the rigorous (variational) formulation of
problem P.
###### Definition 4.4.
Let $\alpha,\beta>0$, $\omega,\nu\in(0,1)$ and $T>0$. Given
$\binom{u_{0}}{v_{0}}\in\mathbb{X}^{2}$,
$\binom{\eta_{0}}{\xi_{0}}\in\mathcal{M}_{\Omega,\Gamma}^{1}$, we seek to find
functions $U\left(t\right)=\binom{u\left(t\right)}{v\left(t\right)},$
$\Phi^{t}=\binom{\eta^{t}}{\xi^{t}}$ with the following properties:
$\displaystyle U$ $\displaystyle\in
L^{\infty}\left(0,T;\mathbb{X}^{2}\right)\cap L^{2}(0,T;\mathbb{V}^{1}),\text{
}\Phi\in L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{1}\right),$ (4.17)
$\displaystyle u$ $\displaystyle\in
L^{r_{1}}(\Omega\times\left(0,T\right)),\text{ }v\in
L^{r_{2}}(\Gamma\times(0,T)),$ (4.18) $\displaystyle\partial_{t}U$
$\displaystyle\in
L^{2}\left(0,T;(\mathbb{V}^{1})^{\ast}\right)\oplus\left(L^{r_{1}^{\prime}}(\Omega\times(0,T))\times
L^{r_{2}^{\prime}}(\Gamma\times(0,T))\right),$ (4.19)
$\displaystyle\partial_{t}\Phi$ $\displaystyle\in
L^{2}\left(0,T;W_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{-1,2}(\mathbb{R}_{+};\mathbb{V}^{1})\right).$
(4.20)
$\left(U,\Phi^{t}\right)$ is said to be a weak solution to problem P if
$v\left(t\right)={\mathrm{tr_{D}}}\left(u\left(t\right)\right)$ and
$\xi^{t}={\mathrm{tr_{D}}}\left(\eta^{t}\right)$ for almost all $t\in(0,T]$,
and $\left(U\left(t\right),\Phi^{t}\right)$ satisfies, for almost all
$t\in(0,T]$,
$\begin{array}[]{ll}\left\langle\partial_{t}U(t),\Xi\right\rangle_{\mathbb{X}^{2}}+\left\langle\mathrm{A_{W}^{0,\beta,\nu,\omega}}U(t),\Xi\right\rangle_{\mathbb{X}^{2}}+\int_{0}^{\infty}\mu_{\Omega}(s)\left\langle\mathrm{A_{W}^{\alpha,0,0,\omega}}\Phi^{t}\left(s\right),\Xi\right\rangle_{\mathbb{X}^{2}}ds&\\\
+\nu\int_{0}^{\infty}\mu_{\Gamma}(s)\left\langle\mathrm{C}\xi^{t}\left(s\right),\varsigma_{\mid\Gamma}\right\rangle_{L^{2}\left(\Gamma\right)}ds+\left\langle
F\left(U(t)\right),\Xi\right\rangle_{\mathbb{X}^{2}}=0,&\\\
\left\langle\partial_{t}\eta^{t},\rho\right\rangle_{\mathcal{M}_{\Omega}^{1}}=\left\langle-\frac{d}{ds}\eta^{t},\rho\right\rangle_{\mathcal{M}_{\Omega}^{1}}+\left\langle
u(t),\rho\right\rangle_{\mathcal{M}_{\Omega}^{1}},&\\\
\left\langle\partial_{t}\xi^{t},\rho_{\mid\Gamma}\right\rangle_{\mathcal{M}_{\Gamma}^{1}}=\left\langle-\frac{d}{ds}\xi^{t},\rho_{\mid\Gamma}\right\rangle_{\mathcal{M}_{\Gamma}^{1}}+\left\langle
v(t),\rho_{\mid\Gamma}\right\rangle_{\mathcal{M}_{\Gamma}^{1}},&\end{array}$
(4.21)
for all
$\Xi=\binom{\varsigma}{\varsigma_{\mid\Gamma}}\in\mathbb{V}^{1}\cap\left(L^{r_{1}}(\Omega)\times
L^{r_{2}}(\Gamma)\right)$, all
$\Pi=\binom{\rho}{\rho_{\mid\Gamma}}\in\mathcal{M}_{\Omega,\Gamma}^{1}$ and
$U\left(0\right)=U_{0}=\left(u_{0},v_{0}\right)^{{\mathrm{tr}}},\text{
}\Phi^{0}=\Phi_{0}=\left(\eta_{0},\xi_{0}\right)^{{\mathrm{tr}}}.$ (4.22)
Above, we have set $F:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2},$
$F\left(U\right):=\binom{f\left(u\right)}{\widetilde{g}\left(v\right)},$
with $\widetilde{g}$ defined as in (4.6). The function $[0,T]\ni
t\mapsto(U(t),\Phi^{t})$ is called a global weak solution if it is a weak
solution for every $T>0$.
In the sequel, if the initial datum $\left(U_{0},\Phi_{0}\right)$ is more
smooth, the following notion of strong solution will also become important.
###### Definition 4.5.
Let $\alpha,\beta>0$, $\omega,\nu\in(0,1)$ and $T>0$. Given
$\binom{u_{0}}{v_{0}}\in\mathbb{V}^{1}$,
$\binom{\eta_{0}}{\xi_{0}}\in\mathcal{M}_{\Omega,\Gamma}^{2}$, the pair of
functions $U\left(t\right)=\binom{u\left(t\right)}{v\left(t\right)},$
$\Phi^{t}=\binom{\eta^{t}}{\xi^{t}}$ satisfying
$\displaystyle U$ $\displaystyle\in
L^{\infty}\left(0,T;\mathbb{V}^{1}\right)\cap L^{2}(0,T;\mathbb{V}^{2}),\text{
}$ (4.23) $\displaystyle\Phi$ $\displaystyle\in
L^{\infty}(0,T;\mathcal{M}_{\Omega,\Gamma}^{2}),$ $\displaystyle\partial_{t}U$
$\displaystyle\in L^{\infty}\left(0,T;(\mathbb{V}^{1})^{\ast}\right)\cap
L^{2}\left(0,T;\mathbb{X}^{2}\right),$ $\displaystyle\partial_{t}\Phi$
$\displaystyle\in
L^{2}\left(0,T;L_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{X}^{2}\right)\right),$
is called a strong solution to problem P if
$v\left(t\right)={\mathrm{tr_{D}}}\left(u\left(t\right)\right)$ and
$\xi^{t}={\mathrm{tr_{D}}}\left(\eta^{t}\right)$ for almost all $t\in(0,T]$,
and additionally, $\left(U\left(t\right),\Phi^{t}\right)$ satisfies (4.21),
a.e. for $t\in(0,T]$, for all $\Xi\in\mathbb{V}^{1}$,
$\Pi\in\mathcal{M}_{\Omega,\Gamma}^{1}$, and
$U\left(0\right)=U_{0}=\left(u_{0},v_{0}\right)^{{\mathrm{tr}}},\text{
}\Phi^{0}=\Phi_{0}=\left(\eta_{0},\xi_{0}\right)^{{\mathrm{tr}}}.$ (4.24)
The function $[0,T]\ni t\mapsto(U(t),\Phi^{t})$ is called a global strong
solution if it is a strong solution for every $T>0$.
###### Remark 4.6.
Note that a strong solution is, in particular, smoother than a weak solution in
the sense of Definition 4.4. Moreover, on account of standard embedding
theorems the regularity $U\in L^{\infty}\left(0,T;\mathbb{V}^{1}\right)\cap
L^{2}(0,T;\mathbb{V}^{2})$ implies that
$u\in L^{\infty}\left(0,T;L^{6}\left(\Omega\right)\right)\cap
L^{q}\left(0,T;L^{p}\left(\Omega\right)\right)$
for any $p\in\left(6,\infty\right)$, $1\leq q\leq 4p/\left(p-6\right)$, and
${\mathrm{tr_{D}}}\left(u\right)\in
L^{\infty}\left(0,T;L^{s}\left(\Gamma\right)\right)$, for any
$s\in\left(1,\infty\right)$.
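The statement for $u$ follows, e.g., from the interpolation inequality $\|u\|_{H^{1+\theta}}\leq\|u\|_{H^{1}}^{1-\theta}\|u\|_{H^{2}}^{\theta}$ together with the Sobolev embedding $H^{3/2-3/p}\left(\Omega\right)\hookrightarrow L^{p}\left(\Omega\right)$ in three dimensions, i.e., with $\theta=\frac{1}{2}-\frac{3}{p}$: a sketch of the computation gives
$\int_{0}^{T}\|u(t)\|_{L^{p}}^{q}dt\leq C\|u\|_{L^{\infty}\left(0,T;H^{1}\right)}^{\left(1-\theta\right)q}\int_{0}^{T}\|u(t)\|_{H^{2}}^{\theta q}dt<\infty\text{ whenever }\theta q\leq 2,\text{ i.e., }q\leq\frac{4p}{p-6},$
after an application of Hölder's inequality in time. The statement for the trace follows since $\Gamma$ is a compact two-dimensional manifold, so that $H^{1}\left(\Gamma\right)\hookrightarrow L^{s}\left(\Gamma\right)$ for every $s\in\left(1,\infty\right)$.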
Another notion of strong solution to problem P, although weaker than the
notion in Definition 4.5, can be introduced as follows.
###### Definition 4.7.
The pair $U=\binom{u}{v}$ and $\Phi=\binom{\eta}{\xi}$ is called a quasi-strong
solution of problem P on $[0,T)$ if $(U(t),\Phi^{t})$ satisfies the
equations (4.21)-(4.22) for all $\Xi\in\mathbb{V}^{1}$,
$\Pi\in\mathcal{M}_{\Omega,\Gamma}^{1}$, almost everywhere on
$\left(0,T\right)$ and if it has the regularity properties:
$\displaystyle U$ $\displaystyle\in$ $\displaystyle
L^{\infty}(0,T;\mathbb{V}^{1})\cap W^{1,2}(0,T;\mathbb{V}^{1}),$ (4.25)
$\displaystyle\Phi$ $\displaystyle\in$ $\displaystyle
L^{\infty}(0,T;D\left(\mathrm{T}\right)),$ (4.26) $\displaystyle\partial_{t}U$
$\displaystyle\in$ $\displaystyle L^{\infty}\left(0,T;\mathbb{X}^{2}\right),$
(4.27) $\displaystyle\partial_{t}\Phi$ $\displaystyle\in$ $\displaystyle
L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{1}\right).$ (4.28)
As before, the function $[0,T]\ni t\mapsto(U(t),\Phi^{t})$ is called a global
quasi-strong solution if it is a quasi-strong solution for every $T>0$.
Our first result in this section is contained in the following theorem. It
allows us to obtain generalized solutions in the sense of Definition 4.4.
###### Theorem 4.8.
Assume (4.1)-(4.3) and (4.5)-(4.7) hold. For each $\alpha,\beta>0$,
$\omega,\nu\in(0,1)$ and $T>0$, and for any
$U_{0}=(u_{0},v_{0})^{{\mathrm{tr}}}\in\mathbb{X}^{2}$,
$\Phi_{0}=(\eta_{0},\xi_{0})^{{\mathrm{tr}}}\in\mathcal{M}_{\Omega,\Gamma}^{1},$
there exists at least one (global) weak solution $\left(U,\Phi\right)\in
C(\left[0,T\right];\mathcal{H}_{\Omega,\Gamma}^{0,1})$ to problem P.
###### Proof.
The proof is divided into several steps. Much of the motivation for the above
theorem comes from [11]. Indeed, the dissipativity induced by the balance
condition (4.7) will be exploited to obtain an _a priori_ bound. Of course,
several modifications need to be made in order to incorporate the dynamic
boundary conditions with memory into the framework.
Step 1. (An _a priori_ bound) To begin, we derive an _a priori_ energy estimate
for any (sufficiently) smooth solution $(U,\Phi)$ of problem P. Under the
assumptions of the theorem, we claim that the following estimate holds:
$\displaystyle\|U(t)\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi^{t}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}-2\left\langle\mathrm{T}\Phi^{t},\Phi^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+2\int_{0}^{t}\left(\|U(\tau)\|_{\mathbb{V}^{1}}^{2}+\|u(\tau)\|_{L^{r_{1}}(\Omega)}^{r_{1}}\right)d\tau$
(4.29) $\displaystyle\leq
C_{T}\left(1+\|U(0)\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi^{0}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}\right),$
for all $t\in[0,T]$, for some constant $C_{T}>0$, independent of $(U,\Phi)$ and
$t$ (but possibly depending on $T$).
We now show (4.29). In Definition 4.4 we are allowed to take, for almost all
$t\in[0,T]$,
$\Xi=U(t)=\left(u(t),u(t)_{\mid\Gamma}\right)^{{\mathrm{tr}}}\in\mathbb{V}^{1}\cap\left(L^{r_{1}}(\Omega)\times
L^{r_{2}}(\Gamma)\right)$
and
$\Pi=\Phi^{t}=\left(\eta^{t},\xi^{t}\right)^{{\mathrm{tr}}}\in\mathcal{M}_{\Omega,\Gamma}^{1}.$
Then we obtain the differential identities
$\frac{1}{2}\frac{d}{dt}\|U\|_{\mathbb{X}^{2}}^{2}+\left\langle\mathrm{A_{W}^{0,\beta,\nu,\omega}}U,U\right\rangle_{\mathbb{X}^{2}}+\left\langle\Phi^{t},U\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\langle
F(U),U\right\rangle_{\mathbb{X}^{2}}=0,$ (4.30)
where
$\displaystyle\left\langle\Phi^{t},U\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
$\displaystyle=\omega\int_{0}^{\infty}\mu_{\Omega}(s)\left(\left\langle\nabla\eta^{t}\left(s\right),\nabla
u\right\rangle_{L^{2}\left(\Omega\right)}+\alpha\left\langle\eta^{t}\left(s\right),u\right\rangle_{L^{2}\left(\Omega\right)}\right)ds$
(4.31)
$\displaystyle+\nu\int_{0}^{\infty}\mu_{\Gamma}(s)\left(\left\langle\nabla_{\Gamma}\xi^{t}\left(s\right),\nabla_{\Gamma}u\right\rangle_{L^{2}\left(\Gamma\right)}+\beta\left\langle\xi^{t}\left(s\right),u\right\rangle_{L^{2}\left(\Gamma\right)}\right)ds$
$\displaystyle=\int_{0}^{\infty}\mu_{\Omega}(s)\left\langle\mathrm{A_{W}^{\alpha,0,0,\omega}}\Phi^{t}\left(s\right),U\right\rangle_{\mathbb{X}^{2}}ds+\nu\int_{0}^{\infty}\mu_{\Gamma}(s)\left\langle\mathrm{C}\xi^{t}\left(s\right),u\right\rangle_{L^{2}\left(\Gamma\right)}ds,$
and
$\frac{1}{2}\frac{d}{dt}\|\Phi^{t}\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}=\left\langle\mathrm{T}\Phi^{t},\Phi^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\langle U,\Phi^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}},$ (4.32)
which hold for almost all $t\in[0,T]$. Adding these identities together and
recalling (4.16), we obtain
$\displaystyle\frac{1}{2}\frac{d}{dt}\left(\|U\|_{\mathbb{X}^{2}}^{2}+\|\Phi^{t}\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}\right)-\left\langle\mathrm{T}\Phi^{t},\Phi^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left(\omega\|\nabla u\|_{L^{2}\left(\Omega\right)}^{2}+\nu\|\nabla_{\Gamma}u\|_{L^{2}(\Gamma)}^{2}+\beta\nu\left\|u\right\|_{L^{2}\left(\Gamma\right)}^{2}\right)$
(4.33) $\displaystyle\leq-\left\langle
f(u),u\right\rangle_{L^{2}\left(\Omega\right)}-\left\langle\widetilde{g}\left(u\right),u\right\rangle_{L^{2}\left(\Gamma\right)}.$
Following [11, (2.22)] and [25, (3.11)], we estimate the product with $F$ on
the right-hand side of (4.33), as follows:
$\displaystyle\left\langle F(U),U\right\rangle_{\mathbb{X}^{2}}$
$\displaystyle=\left\langle
f(u),u\right\rangle_{L^{2}\left(\Omega\right)}+\left\langle\widetilde{g}\left(u\right),u\right\rangle_{L^{2}\left(\Gamma\right)}$
(4.34)
$\displaystyle=\int_{\Omega}\left(f(u)u+\frac{|\Gamma|}{|\Omega|}\widetilde{g}(u)u\right)dx-\frac{|\Gamma|}{|\Omega|}\int_{\Omega}\left(\widetilde{g}(u)u-\frac{1}{|\Gamma|}\int_{\Gamma}\widetilde{g}(u)u\mathrm{d}\sigma\right)dx.$
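For clarity, the identity (4.34) uses only that the average $\frac{1}{|\Gamma|}\int_{\Gamma}\widetilde{g}(u)u\,d\sigma$ is constant in $x$, so that
$\frac{|\Gamma|}{|\Omega|}\int_{\Omega}\left(\frac{1}{|\Gamma|}\int_{\Gamma}\widetilde{g}(u)u\,d\sigma\right)dx=\int_{\Gamma}\widetilde{g}(u)u\,d\sigma=\left\langle\widetilde{g}\left(u\right),u\right\rangle_{L^{2}\left(\Gamma\right)}.$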
Exploiting the Sobolev–Poincaré inequality (4.8) and Young’s inequality, we
see that for all $\varepsilon\in(0,\omega)$,
$\displaystyle\frac{|\Gamma|}{|\Omega|}\int_{\Omega}\left(\widetilde{g}(u)u-\frac{1}{|\Gamma|}\int_{\Gamma}\widetilde{g}(u)ud\sigma\right)dx$
$\displaystyle\leq
C_{\Omega}\frac{|\Gamma|}{|\Omega|}\int_{\Omega}|\nabla(\widetilde{g}(u)u)|dx$
(4.35) $\displaystyle=C_{\Omega}\frac{|\Gamma|}{|\Omega|}\int_{\Omega}|\nabla
u(\widetilde{g}^{\prime}(u)u+\widetilde{g}(u))|dx$
$\displaystyle\leq\varepsilon\|\nabla
u\|_{L^{2}(\Omega)}^{2}+\frac{C_{\Omega}^{2}|\Gamma|^{2}}{4\varepsilon|\Omega|^{2}}\int_{\Omega}|\widetilde{g}^{\prime}(u)u+\widetilde{g}(u)|^{2}dx.$
Combining (4.34)-(4.35) and applying assumption (4.7) yields
$\left\langle
F(U),U\right\rangle_{\mathbb{X}^{2}}\geq\delta\|u\|_{L^{r_{1}}(\Omega)}^{r_{1}}-\varepsilon\|\nabla
u\|_{L^{2}(\Omega)}^{2}-C_{\delta},$ (4.36)
for some positive constants $\delta$ and $C_{\delta}$ that are independent of
$U$, $t$ and $\varepsilon$. Plugging (4.36) into (4.33) gives, for almost all
$t\in[0,T]$,
$\displaystyle\frac{1}{2}\frac{d}{dt}\left(\|U\|_{\mathbb{X}^{2}}^{2}+\|\Phi^{t}\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}\right)-\left\langle\mathrm{T}\Phi^{t},\Phi^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left(\omega-\varepsilon\right)\|\nabla
u\|_{L^{2}\left(\Omega\right)}^{2}$ (4.37)
$\displaystyle+\left(\nu\|\nabla_{\Gamma}u\|_{L^{2}(\Gamma)}^{2}+\beta\nu\left\|u\right\|_{L^{2}\left(\Gamma\right)}^{2}\right)+\delta\|u\|_{L^{r_{1}}(\Omega)}^{r_{1}}$
$\displaystyle\leq C.$
Integrating (4.37) over the interval $\left(0,t\right)$ yields the desired
estimate (4.29). Additionally, from the above _a priori_ estimate (4.29), we
immediately see that
$\displaystyle U$ $\displaystyle\in
L^{\infty}\left(0,T;\mathbb{X}^{2}\right)\cap
L^{2}\left(0,T;\mathbb{V}^{1}\right),$ (4.38) $\displaystyle\Phi$
$\displaystyle\in L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{1}\right),$
(4.39) $\displaystyle u$ $\displaystyle\in
L^{r_{1}}\left(\Omega\times(0,T)\right).$ (4.40)
Applying Lemma 3.2, in view of (4.38) and (4.40), we also get
$u\in L^{r_{2}}(\Gamma\times(0,T)).$ (4.41)
Thus, we indeed recover the bounds (4.17)-(4.18) through estimate (4.29).
Moreover, we have from (4.40) and (4.41) that $f\left(u\right)\in
L^{r_{1}^{\prime}}(\Omega\times\left(0,T\right))$,
$\widetilde{g}\left(v\right)\in
L^{r_{2}^{\prime}}(\Gamma\times\left(0,T\right))$; hence,
$F(U)\in L^{r_{1}^{\prime}}(\Omega\times\left(0,T\right))\times
L^{r_{2}^{\prime}}(\Gamma\times\left(0,T\right)).$ (4.42)
Clearly, since $U\in L^{2}(0,T;\mathbb{V}^{1})$ and $\Phi\in
L^{\infty}(0,T;\mathcal{M}_{\Omega,\Gamma}^{1})$, we also have
$\mathrm{A_{W}^{\alpha,0,0,\omega}}\Phi(s)\in
L^{2}(0,T;(\mathbb{V}^{1})^{\ast})$ for almost all $s\in\mathbb{R}_{+},$ and
$\mathrm{A_{W}^{0,\beta,\nu,\omega}}U\in L^{2}(0,T;(\mathbb{V}^{1})^{\ast}),$
respectively. Therefore, after comparing terms in the first equation of
(4.21), we see that
$\partial_{t}U\in
L^{2}\left(0,T;(\mathbb{V}^{1})^{\ast}\right)\oplus\left(L^{r_{1}^{\prime}}(\Omega\times\left(0,T\right))\times
L^{r_{2}^{\prime}}(\Gamma\times\left(0,T\right))\right).$ (4.43)
Hence, this justifies our choice of test function for the first equation of (4.21).
Concerning the second equation of (4.21), in view of (4.38) and the
representation formula (4.15) we have
$\mathrm{T}\Phi^{t}(s)=-\partial_{s}\Phi^{t}(s)=\left\\{\begin{array}[]{ll}-U(t-s)&\text{for}~{}0<s\leq
t,\\\ -\partial_{s}\Phi_{0}(s-t)&\text{for}~{}s>t.\end{array}\right.$
Then, since the given $\Phi_{0}\in\mathcal{M}_{\Omega,\Gamma}^{1}$ satisfies
$\partial_{s}\Phi_{0}(\cdot)\in
W_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{-1,2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right),$
we deduce
$\partial_{t}\Phi\in
L^{2}\left(0,T;W_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{-1,2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)\right).$
(4.44)
This concludes Step 1.
Step 2. (A Galerkin basis) First, for any $\alpha,\beta\geq 0$ we recall that
$(\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}})^{-1}\in\mathcal{L}\left(\mathbb{X}^{2}\right)$
is compact provided that either $\beta>0$ or $\alpha>0$. This means that, for
$i\in\mathbb{N}$, there is a complete system of eigenfunctions
$i\in\mathbb{N}$, there is a complete system of eigenfunctions
$\Psi_{i}^{\alpha,\beta,\nu,\omega}=(\vartheta_{i}^{\alpha,\beta,\nu,\omega},\vartheta_{i\mid\Gamma}^{\alpha,\beta,\nu,\omega})^{\mathrm{tr}}$
of the eigenvalue problem
$\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}\Psi_{i}^{\alpha,\beta,\nu,\omega}=\lambda_{i}\Psi_{i}^{\alpha,\beta,\nu,\omega}\text{
in }\mathbb{X}^{2}$
with
$\Psi_{i}^{\alpha,\beta,\nu,\omega}\in
D\left(\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}\right)\cap\left(C^{2}({\overline{\Omega}})\times
C^{2}\left(\Gamma\right)\right),$
see [12, Appendix]. The eigenvalues
$\lambda_{i}=\lambda_{i}^{\alpha,\beta,\nu,\omega}\in(0,\infty)$ may be
arranged in increasing order and counted according to their multiplicity,
forming a sequence that diverges to infinity. In addition, by standard
spectral theory, the related eigenfunctions form an orthogonal basis in
$\mathbb{V}^{1}$ that is orthonormal in $\mathbb{X}^{2}$. Note that for each
$i\in\mathbb{N}$, the pair
$\left(\lambda_{i},\vartheta_{i}\right)\in\mathbb{R}_{+}\times
C^{2}\left(\overline{\Omega}\right),$
$\vartheta_{i}=\vartheta_{i}^{\alpha,\beta,\nu,\omega},$ is a classical
solution of the elliptic problem
$\left\\{\begin{array}[]{ll}-\omega\Delta\vartheta_{i}+\alpha\omega\vartheta_{i}=\lambda_{i}\vartheta_{i},&\text{in
}\Omega,\\\
-\nu\Delta_{\Gamma}\left(\vartheta_{i\mid\Gamma}\right)+\omega\partial_{n}\vartheta_{i}+\beta\nu\vartheta_{i\mid\Gamma}=\lambda_{i}\vartheta_{i\mid\Gamma},&\text{on
}\Gamma.\end{array}\right.$ (4.45)
It remains to select an orthonormal basis $\\{\zeta_{i}\\}_{i=1}^{\infty}$ of
$\mathcal{M}_{\Omega,\Gamma}^{1}=L_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{2}(\mathbb{R}_{+};\mathbb{V}^{1})$
that also belongs to $D(\mathrm{T})\cap
W_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{1,2}(\mathbb{R}_{+};\mathbb{V}^{1})$. We
can choose vectors
$\zeta_{i}=\varkappa_{i}\Psi_{i}^{\alpha,\beta,\nu,\omega}$, with eigenvectors
$\Psi_{i}^{\alpha,\beta,\nu,\omega}\in
D(\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}})$ satisfying (4.45) above, where
$\\{\varkappa_{i}\\}_{i=1}^{\infty}\subset C_{c}^{\infty}(\mathbb{R}_{+})$ is
an orthonormal basis for
$L_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{2}(\mathbb{R}_{+})$. This choice will be
crucial for the derivation of strong solutions in a later section.
Let $T>0$ be fixed. For $n\in\mathbb{N}$, set the spaces
$X_{n}=\mathrm{span}\left\\{\Psi_{1}^{\alpha,\beta,\nu,\omega},\dots,\Psi_{n}^{\alpha,\beta,\nu,\omega}\right\\}\subset\mathbb{X}^{2},~{}~{}X_{\infty}=\bigcup_{n=1}^{\infty}X_{n},$
and
$M_{n}=\mathrm{span}\left\\{\zeta_{1},\zeta_{2},\dots,\zeta_{n}\right\\}\subset\mathcal{M}_{\Omega,\Gamma}^{1},~{}~{}M_{\infty}=\bigcup_{n=1}^{\infty}M_{n}.$
Obviously, $X_{\infty}$ is a dense subspace of $\mathbb{V}^{1}$. For each
$n\in\mathbb{N}$, let $P_{n}:\mathbb{X}^{2}\rightarrow X_{n}$ denote the
orthogonal projection of $\mathbb{X}^{2}$ onto $X_{n}$ and let
$Q_{n}:\mathcal{M}_{\Omega,\Gamma}^{1}\rightarrow M_{n}$ denote the orthogonal
projection of $\mathcal{M}_{\Omega,\Gamma}^{1}$ onto $M_{n}$. Thus, we seek
functions of the form
$U_{n}(t)=\sum_{i=1}^{n}a_{i}(t)\Psi_{i}^{\alpha,\beta,\nu,\omega}~{}~{}\text{and}~{}~{}\Phi_{n}^{t}(s)=\sum_{i=1}^{n}b_{i}(t)\zeta_{i}(s)=\sum_{i=1}^{n}b_{i}(t)\varkappa_{i}\left(s\right)\Psi_{i}^{\alpha,\beta,\nu,\omega}$
(4.46)
that will satisfy the associated discretized problem Pn described below. The
functions $a_{i}$ and $b_{i}$ are assumed to be (at least) $C^{2}(0,T)$ for
$i=1,\dots,n$. By definition, note that
$u_{n}(t)=\sum_{i=1}^{n}a_{i}(t)\vartheta_{i}^{\alpha,\beta,\nu,\omega}~{}~{}\text{and}~{}~{}u_{n}(t)_{\mid\Gamma}=\sum_{i=1}^{n}a_{i}(t)\vartheta_{i\mid\Gamma}^{\alpha,\beta,\nu,\omega},$
(4.47)
also
$\eta_{n}^{t}(s)=\sum_{i=1}^{n}b_{i}(t)\zeta_{i}(s)~{}~{}\text{and}~{}~{}\xi_{n}^{t}(s)=\sum_{i=1}^{n}b_{i}(t)\zeta_{i}(s)_{\mid\Gamma}.$
(4.48)
As usual, to approximate the given initial data $U_{0}\in\mathbb{X}^{2}$ and
$\Phi_{0}\in\mathcal{M}_{\Omega,\Gamma}^{1}$, we take
$U_{n0}\in\mathbb{V}^{1}$ such that $U_{n0}\rightarrow U_{0}~{}~{}$(in
$\mathbb{X}^{2}$), since $\mathbb{V}^{1}$ is dense in $\mathbb{X}^{2}$, and
$\Phi_{n0}\rightarrow\Phi_{0}~{}~{}$(in $\mathcal{M}_{\Omega,\Gamma}^{1}$).
For $T>0$ and for each integer $n\geq 1$, the weak formulation of the
approximate problem Pn is the following: find $(U_{n},\Phi_{n})$, given by
(4.46) such that, for all ${\overline{U}}=(\bar{u},\bar{v})^{\mathrm{tr}}\in
X_{n}$ and ${\overline{\Phi}}=(\bar{\eta},\bar{\xi})^{\mathrm{tr}}\in M_{n}$,
the equations
$\left\langle\partial_{t}U_{n},{\overline{U}}\right\rangle_{\mathbb{X}^{2}}+\left\langle\mathrm{A_{W}^{0,\beta,\nu,\omega}}U_{n},{\overline{U}}\right\rangle_{\mathbb{X}^{2}}+\left\langle\Phi_{n}^{t},{\overline{U}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\langle
P_{n}F(U_{n}),{\overline{U}}\right\rangle_{\mathbb{X}^{2}}=0$ (4.49)
and
$\left\langle\partial_{t}\Phi_{n}^{t},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}=\left\langle\mathrm{T}\Phi_{n}^{t},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\langle
U_{n},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$ (4.50)
hold for almost all $t\in\left(0,T\right)$, subject to the initial conditions
$\left\langle
U_{n}(0),{\overline{U}}\right\rangle_{\mathbb{X}^{2}}=\left\langle
U_{n0},{\overline{U}}\right\rangle_{\mathbb{X}^{2}}~{}~{}\text{and}~{}~{}\left\langle\Phi_{n}^{0},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}=\left\langle\Phi_{n0},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}.$
(4.51)
To show the existence of at least one solution to (4.49)-(4.51), we now
suppose that $n$ is fixed and we take ${\overline{U}}=\Psi_{k}$ and
${\overline{\Phi}}=\zeta_{k}$ for some $1\leq k\leq n$. Then substituting the
discretized functions (4.46) into (4.49)-(4.51), we easily arrive at a system
of ordinary differential equations in the unknowns $a_{k}=a_{k}(t)$ and
$b_{k}=b_{k}(t)$ on $X_{n}$ and $M_{n},$ respectively. We need to recall that
$\langle P_{n}F(U_{n}),\Psi_{k}\rangle=\langle F(U_{n}),P_{n}\Psi_{k}\rangle=\langle
F(U_{n}),\Psi_{k}\rangle.$
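Schematically (a sketch; the precise values of the coefficients are not needed below), using the orthonormality of $\\{\Psi_{i}^{\alpha,\beta,\nu,\omega}\\}$ in $\mathbb{X}^{2}$ and of $\\{\zeta_{i}\\}$ in $\mathcal{M}_{\Omega,\Gamma}^{1}$, the system reads
$a_{k}^{\prime}(t)=-\sum_{i=1}^{n}\left(\mathcal{A}_{ki}a_{i}(t)+\mathcal{B}_{ki}b_{i}(t)\right)-\left\langle F(U_{n}(t)),\Psi_{k}^{\alpha,\beta,\nu,\omega}\right\rangle_{\mathbb{X}^{2}},\quad b_{k}^{\prime}(t)=\sum_{i=1}^{n}\left(\mathcal{T}_{ki}b_{i}(t)+\mathcal{U}_{ki}a_{i}(t)\right),\quad 1\leq k\leq n,$
where $\mathcal{A}_{ki}:=\langle\mathrm{A_{W}^{0,\beta,\nu,\omega}}\Psi_{i}^{\alpha,\beta,\nu,\omega},\Psi_{k}^{\alpha,\beta,\nu,\omega}\rangle_{\mathbb{X}^{2}}$, $\mathcal{B}_{ki}$ denotes the memory coupling obtained from (4.31) with $\Phi^{t}$ replaced by $\zeta_{i}$ and $U$ by $\Psi_{k}^{\alpha,\beta,\nu,\omega}$, $\mathcal{T}_{ki}:=\langle\mathrm{T}\zeta_{i},\zeta_{k}\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$ and $\mathcal{U}_{ki}:=\langle\Psi_{i}^{\alpha,\beta,\nu,\omega},\zeta_{k}\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$.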
Since $f,$ $g\in C^{1}(\mathbb{R})$, we may apply the Cauchy–Lipschitz theorem
for ODEs to find that there is $T_{n}\in(0,T)$ such that $a_{k},b_{k}\in C^{2}(0,T_{n})$,
for $1\leq k\leq n$ and both (4.49) and (4.50) hold in the classical sense for
all $t\in[0,T_{n}]$. This argument shows the existence of at least one local
solution to problem Pn and ends Step 2.
Step 3. (Boundedness and continuation of approximate maximal solutions) Now we
apply the (uniform) _a priori_ estimate (4.29), which also holds for any
approximate solution $(U_{n},\Phi_{n})$ of problem Pn on the interval
$[0,T_{n})$, where $T_{n}<T$. Owing to the boundedness of the projectors
$P_{n}$ and $Q_{n}$ on the corresponding spaces, we infer
$\displaystyle\|U_{n}(t)\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi_{n}^{t}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}-2\left\langle\mathrm{T}\Phi_{n}^{t},\Phi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+2\int_{0}^{t}\left(\|U_{n}(\tau)\|_{\mathbb{V}^{1}}^{2}+\|u_{n}(\tau)\|_{L^{r_{1}}(\Omega)}^{r_{1}}\right)d\tau$
(4.52) $\displaystyle\leq
C_{T}\left(1+\|U(0)\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi^{0}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}\right),$
for some constant $C_{T}>0$ independent of $n$ and $t$. Hence, every
approximate solution may be extended to the whole interval $[0,T]$, and
because $T>0$ is arbitrary, any approximate solution is a global one. As in
Step 1, we also obtain the uniform bounds (4.38)-(4.44) for each approximate
solution $(U_{n},\Phi_{n})$. Thus,
$\displaystyle U_{n}$ $\displaystyle~{}\text{is uniformly bounded
in}~{}L^{\infty}\left(0,T;\mathbb{X}^{2}\right),$ (4.53) $\displaystyle U_{n}$
$\displaystyle~{}\text{is uniformly bounded
in}~{}L^{2}\left(0,T;\mathbb{V}^{1}\right),$ (4.54) $\displaystyle u_{n}$
$\displaystyle~{}\text{is uniformly bounded
in}~{}L^{r_{1}}(\Omega\times\left(0,T\right)),$ (4.55) $\displaystyle u_{n}$
$\displaystyle~{}\text{is uniformly bounded
in}~{}L^{r_{2}}(\Gamma\times\left(0,T\right)),$ (4.56) $\displaystyle\Phi_{n}$
$\displaystyle~{}\text{is uniformly bounded
in}~{}L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{1}\right),$ (4.57)
$\displaystyle F(U_{n})$ $\displaystyle~{}\text{is uniformly bounded
in}~{}L^{r_{1}^{\prime}}(\Omega\times\left(0,T\right))\times
L^{r_{2}^{\prime}}(\Gamma\times\left(0,T\right)),$ (4.58)
$\displaystyle\partial_{t}U_{n}$ $\displaystyle~{}\text{is uniformly bounded
in}~{}L^{2}\left(0,T;\left(\mathbb{V}^{1}\right)^{\ast}\right)\oplus\left(L^{r_{1}^{\prime}}(\Omega\times\left(0,T\right))\times
L^{r_{2}^{\prime}}(\Gamma\times\left(0,T\right))\right),$ (4.59)
$\displaystyle\partial_{t}\Phi_{n}$ $\displaystyle~{}\text{is uniformly
bounded
in}~{}L^{2}\left(0,T;W_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{-1,2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)\right).$
(4.60)
This concludes Step 3.
Step 4. (Convergence of approximate solutions) By Alaoglu’s theorem (cf. e.g.
[24, Theorem 6.64]) and the uniform bounds (4.53)-(4.58), there is a
subsequence of $(U_{n},\Phi_{n})$, not relabelled, and functions $U$
and $\Phi$, obeying (4.38)-(4.44), such that as $n\rightarrow\infty$,
$\begin{array}[]{ll}U_{n}\rightharpoonup U&\text{weakly-* in
}L^{\infty}\left(0,T;\mathbb{X}^{2}\right),\\\ U_{n}\rightharpoonup
U&\text{weakly in }L^{2}\left(0,T;\mathbb{V}^{1}\right),\\\
u_{n}\rightharpoonup u&\text{weakly in
}L^{r_{1}}(\Omega\times\left(0,T\right)),\\\ u_{n}\rightharpoonup
u&\text{weakly in }L^{r_{2}}(\Gamma\times\left(0,T\right)),\\\
\Phi_{n}\rightharpoonup\Phi&\text{weakly-* in
}L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{1}\right).\end{array}$
(4.61)
Moreover, setting $k_{S}:=(-\mu_{S}^{\prime})^{1/2}\geq 0$,
$S\in\left\\{\Omega,\Gamma\right\\}$, we have
$\partial_{t}U_{n}\rightharpoonup\partial_{t}U~{}\text{weakly in
}L^{2}\left(0,T;\left(\mathbb{V}^{1}\right)^{\ast}\right)\oplus\left(L^{r_{1}^{\prime}}(\Omega\times\left(0,T\right))\times
L^{r_{2}^{\prime}}(\Gamma\times\left(0,T\right))\right),$ (4.62)
$\Phi_{n}\rightharpoonup\Phi\text{ weakly in
}L^{2}\left(0,T;L_{k_{\Omega}\oplus
k_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)\right),$ (4.63)
owing to the bound on
$\left\langle\mathrm{T}\Phi_{n},\Phi_{n}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
from (4.52) and
$\partial_{t}\Phi_{n}\rightharpoonup\partial_{t}\Phi~{}\text{weakly
in}~{}L^{2}\left(0,T;W_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{-1,2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)\right).$
(4.64)
Indeed, we observe that the last of (4.61) and an integration by parts yield,
for any $\zeta\in
C_{0}^{\infty}\left(\left(0,T\right);C_{0}^{\infty}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)\right),$
$\int_{0}^{T}\left\langle\partial_{t}\Phi_{n}^{y},\zeta\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}dy=-\int_{0}^{T}\left\langle\Phi_{n}^{y},\partial_{t}\zeta\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}dy\;\rightarrow\;-\int_{0}^{T}\left\langle\Phi^{y},\partial_{t}\zeta\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}dy,$
and that $\Phi^{t}\in
C(0,T;W_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{-1,2}(\mathbb{R}_{+};\mathbb{V}^{1}))$.
We can exploit the second of (4.61) and (4.62) to deduce
$U_{n}\rightarrow U~{}\text{strongly
in}~{}L^{2}\left(0,T;\mathbb{X}^{2}\right),$ (4.65)
by application of the Aubin–Lions compactness criterion, since $\mathbb{V}^{1}$
is compactly embedded in $\mathbb{X}^{2}$. This last strong convergence
property is enough to pass to the limit in the nonlinear terms since $f$,
$g\in C^{1}$ (see, e.g., [11, 15]). Indeed, on account of standard arguments
(cf. also [1]) we have
$P_{n}F(U_{n})\rightharpoonup F\left(U\right)~{}\text{weakly
in}~{}L^{2}\left(0,T;\mathbb{X}^{2}\right).$ (4.66)
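Indeed, (4.65) implies, up to a further subsequence, that $U_{n}\rightarrow U$ almost everywhere in $\Omega\times\left(0,T\right)$ and on $\Gamma\times\left(0,T\right)$, whence $F(U_{n})\rightarrow F(U)$ almost everywhere by the continuity of $f$ and $\widetilde{g}$; this pointwise convergence, combined with the uniform bound (4.58), yields the weak convergence (4.66) by a classical lemma of Lions on weak limits of bounded sequences converging almost everywhere.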
The convergence properties (4.61)-(4.65) allow us to pass to the limit as
$n\rightarrow\infty$ in equation (4.49) in order to recover (4.21), using
standard density arguments. Indeed, in order to pass to the limit in the
equations for memory, we use (4.63) and the following distributional equality
$\displaystyle-\int_{0}^{T}\left\langle\Phi^{y},\partial_{t}\zeta\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}dy-\int_{0}^{T}\mu_{\Omega}^{\prime}\left(s\right)\left\langle\eta^{y},\zeta\right\rangle_{\mathcal{M}_{\Omega}^{1}}dy-\int_{0}^{T}\mu_{\Gamma}^{\prime}\left(s\right)\left\langle\xi^{y},\zeta\right\rangle_{\mathcal{M}_{\Gamma}^{1}}dy$
$\displaystyle=\int_{0}^{T}\left\langle\partial_{t}\Phi^{y}-\mathrm{T}\Phi^{y},\zeta\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}dy.$
Thus, we also get the last two equations of (4.21) by virtue of the last of
(4.61).
Step 5. (Continuity of the solution) According to the regularity available for
problem P, see (4.21), we have
$\begin{array}[]{ll}\partial_{t}U\in
L^{2}\left(0,T;\left(\mathbb{V}^{1}\right)^{\ast}\right)\oplus\left(L^{r_{1}^{\prime}}(\Omega\times\left(0,T\right))\times
L^{r_{2}^{\prime}}(\Gamma\times\left(0,T\right))\right),&\\\
\partial_{t}\Phi\in
L^{2}\left(0,T;W_{\mu_{\Omega}\oplus\mu_{\Gamma}}^{-1,2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)\right).&\end{array}$
(4.67)
Since the spaces $L^{2}\left(0,T;(\mathbb{V}^{1})^{\ast}\right)$ and
$L^{r_{1}^{\prime}}(\Omega\times\left(0,T\right))\times
L^{r_{2}^{\prime}}(\Gamma\times\left(0,T\right))$ are the duals of
$L^{2}\left(0,T;\mathbb{V}^{1}\right)$ and
$L^{r_{1}}(\Omega\times\left(0,T\right))\times
L^{r_{2}}(\Gamma\times\left(0,T\right))$, respectively, recalling (4.61), we
can argue exactly as in the proof of [11, Proposition 2.5] to deduce that
$U\in C\left(\left[0,T\right];\mathbb{X}^{2}\right)$. Finally, owing to $U\in$
$L^{2}(0,T;\mathbb{V}^{1})$ and Corollary 4.2, it follows that $\Phi\in
C\left(\left[0,T\right];\mathcal{M}_{\Omega,\Gamma}^{1}\right)$. Thus, both
$U\left(0\right)$ and $\Phi\left(0\right)$ make sense and the equalities
$U\left(0\right)=U_{0}$ and $\Phi^{0}=\Phi_{0}$ hold in the usual sense due to
the strong convergence of $U_{n0}\rightarrow U_{0}$ in $\mathbb{X}^{2}$ and of
$\Phi_{n0}\rightarrow\Phi_{0}$ in $\mathcal{M}_{\Omega,\Gamma}^{1}$,
respectively. The proof of the theorem is finished. ∎
When both the bulk and boundary nonlinearities are dissipative (i.e., (4.12)
holds in place of the balance (4.7)), we also have the following.
###### Theorem 4.9.
Assume (4.1)-(4.3) and (4.5), (4.12) hold. For each $\alpha,\beta>0$,
$\omega,\nu\in(0,1)$ and $T>0$, and for any
$U_{0}=(u_{0},v_{0})^{{\mathrm{tr}}}\in\mathbb{X}^{2}$,
$\Phi_{0}=(\eta_{0},\xi_{0})^{{\mathrm{tr}}}\in\mathcal{M}_{\Omega,\Gamma}^{1},$
there exists at least one (global) weak solution $\left(U,\Phi\right)\in
C(\left[0,T\right];\mathcal{H}_{\Omega,\Gamma}^{0,1})$ to problem P in the
sense of Definition 4.4.
###### Proof.
The proof is essentially the same as the proof of Theorem 4.8, with the
exception that one employs the estimate
$f\left(s\right)s\geq C_{f}\left|s\right|^{r_{1}}-C_{1},\text{ }\widetilde{g}\left(s\right)s\geq C_{g}\left|s\right|^{r_{2}}-C_{2},\text{ }\forall s\in\mathbb{R},$
in place of (4.36), owing to (4.12); indeed, (4.12) gives these bounds for
$\left|s\right|\geq s_{0}$, while for $\left|s\right|\leq s_{0}$ they hold
upon choosing $C_{1},C_{2}$ large enough, by continuity. This implies the same
_a priori_ estimate (4.29). ∎
Finally, we also have uniqueness of the weak solution in some cases.
###### Proposition 4.10.
Let $\left(U_{i},\Phi_{i}\right)$ be any two weak solutions of problem P in
the sense of Definition 4.4, for $i=1,2.$ Assume (4.4). Then the following
estimate holds:
$\left\|U_{1}(t)-U_{2}\left(t\right)\right\|_{\mathbb{X}^{2}}+\left\|\Phi_{1}^{t}-\Phi_{2}^{t}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}\leq\left(\left\|U_{1}(0)-U_{2}\left(0\right)\right\|_{\mathbb{X}^{2}}+\left\|\Phi_{1}^{0}-\Phi_{2}^{0}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}\right)e^{Ct},$
(4.68)
for some constant $C>0$ independent of time, $U_{i}$ and $\Phi_{i}.$
###### Proof.
Set ${\widetilde{U}}=U_{1}-U_{2}$, ${\widetilde{\Phi}}=\Phi_{1}-\Phi_{2}$. The
function $({\widetilde{U}},{\widetilde{\Phi}})$ satisfies the equations:
$\displaystyle\left\langle\partial_{t}\widetilde{U}(t),V\right\rangle_{\mathbb{X}^{2}}+\left\langle\mathrm{A_{W}^{0,\beta,\nu,\omega}}\widetilde{U}(t),V\right\rangle_{\mathbb{X}^{2}}+\left\langle
F(U_{1})-F(U_{2}),V\right\rangle_{\mathbb{X}^{2}}$ (4.69)
$\displaystyle+\int_{0}^{\infty}\mu_{\Omega}\left(s\right)\left\langle\mathrm{A_{W}^{\alpha,0,0,\omega}}\widetilde{\Phi}^{t}\left(s\right),V\right\rangle_{\mathbb{X}^{2}}ds+\nu\int_{0}^{\infty}\mu_{\Gamma}(s)\left\langle\mathrm{C}\widetilde{\xi}^{t}\left(s\right),v\right\rangle_{L^{2}\left(\Gamma\right)}ds$
$\displaystyle=0$
and
$\left\langle\partial_{t}\widetilde{\Phi}^{t}\left(s\right)-\mathrm{T}\widetilde{\Phi}^{t}(s)-\widetilde{U}\left(t\right),\Pi\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}=0,$
(4.70)
for all
$\left(V,\Pi\right)\in\left(\mathbb{V}^{1}\oplus\left(L^{r_{1}}(\Omega)\times
L^{r_{2}}(\Gamma)\right)\right)\times\mathcal{M}_{\Omega,\Gamma}^{1}$, subject
to the associated initial conditions
$\widetilde{U}(0)=U_{1}\left(0\right)-U_{2}\left(0\right)~{}\text{and}~{}\widetilde{\Phi}^{0}=\Phi_{1}^{0}-\Phi_{2}^{0}.$
Testing (4.69) with $V=\widetilde{U}(t)$ in $\mathbb{X}^{2}$ and (4.70) with
$\Pi=\widetilde{\Phi}^{t}$ in $\mathcal{M}_{\Omega,\Gamma}^{1}$, and summing
the resulting identities, we arrive at the differential inequality
$\displaystyle\frac{d}{dt}\left(\left\|U_{1}-U_{2}\right\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi_{1}-\Phi_{2}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}\right)$
(4.71) $\displaystyle\leq-2\left\langle
F(U_{1})-F(U_{2}),\widetilde{U}\right\rangle_{\mathbb{X}^{2}}$
$\displaystyle=-2\left\langle
f(u_{1})-f(u_{2}),u_{1}-u_{2}\right\rangle_{L^{2}\left(\Omega\right)}-2\left\langle\widetilde{g}(u_{1})-\widetilde{g}(u_{2}),u_{1}-u_{2}\right\rangle_{L^{2}\left(\Gamma\right)}.$
Employing assumption (4.4) on the nonlinear terms, we easily find that
$\frac{d}{dt}\left(\left\|U_{1}-U_{2}\right\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi_{1}-\Phi_{2}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}\right)\leq
C\left\|U_{1}-U_{2}\right\|_{\mathbb{X}^{2}}^{2},$ (4.72)
for some $C=C\left(M_{f},M_{g},\beta\right)>0$. Application of the standard
Gronwall lemma to (4.72) yields the desired claim (4.68), as spelled out below.
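In detail (a routine computation, with multiplicative constants immaterial for the claim): setting
$y(t):=\left\|U_{1}(t)-U_{2}(t)\right\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi_{1}^{t}-\Phi_{2}^{t}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2},$
inequality (4.72) states that $y^{\prime}(t)\leq Cy(t)$, whence $y(t)\leq y(0)e^{Ct}$; taking square roots and using $\sqrt{a^{2}+b^{2}}\leq a+b\leq\sqrt{2}\,\sqrt{a^{2}+b^{2}}$ for $a,b\geq 0$ yields (4.68), up to a harmless factor $\sqrt{2}$ and after renaming $C$. ∎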
In the final part of this section, we turn our attention to the existence of
global strong solutions for problem P. First, assuming that the interior and
boundary share the same memory kernel, we can derive the existence of strong
solutions in the case when the bulk and boundary nonlinearities have
supercritical polynomial growth of order at most $7/2$. Let $\overline{f}$,
$\overline{g}$ denote the primitives of $f$ and $\widetilde{g}$, respectively,
such that $\overline{f}\left(0\right)=\overline{g}\left(0\right)=0.$
###### Theorem 4.11.
Let (4.1)-(4.3) be satisfied for $\mu_{\Omega}\equiv\mu_{\Gamma}$, and assume
that $f,$ $g\in C^{1}\left(\mathbb{R}\right)$ satisfy the following
assumptions:
(i) $|f^{\prime}\left(s\right)|\leq\ell_{1}\left(1+\left|s\right|^{r_{1}}\right),$ for all $s\in\mathbb{R}$,
for some (arbitrary) $1\leq r_{1}<\frac{5}{2}.$
(ii) $|g^{\prime}\left(s\right)|\leq\ell_{2}(1+|s|^{r_{2}}),$ for all
$s\in\mathbb{R}$, for some (arbitrary) $1\leq r_{2}<\frac{5}{2}.$
(iii) (4.4) holds and there exist constants $C_{i}>0,$ $i=1,...,4,$ such that
$f\left(s\right)s\geq-C_{1}\left|s\right|^{2}-C_{2},\text{
}g\left(s\right)s\geq-C_{3}\left|s\right|^{2}-C_{4},\text{ }\forall
s\in\mathbb{R}\text{.}$ (4.73)
Given $\alpha,\beta>0$, $\omega,\nu\in(0,1)$,
$\left(U_{0},\Phi_{0}\right)\in\mathcal{H}_{\Omega,\Gamma}^{1,2}$, there
exists a unique global strong solution $\left(U,\Phi\right)$ to problem P in
the sense of Definition 4.5.
###### Proof.
Step 1 (The existence argument). By Remark 4.6 it suffices to deduce
additional regularity for $\left(U,\Phi\right)$. In order to derive the crucial
estimate, we rely once again on various dissipative estimates. First, we notice
that condition (4.73) yields
$\left\langle F\left(U_{n}\right),U_{n}\right\rangle_{\mathbb{X}^{2}}\geq-
C_{F}\left(\left\|U_{n}\right\|_{\mathbb{X}^{2}}^{2}+1\right),$
for some $C_{F}>0$. Thus, arguing as in the derivation of (4.33) and applying
Gronwall’s lemma, we obtain
$\displaystyle\|U_{n}(t)\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi_{n}^{t}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}-2\left\langle\mathrm{T}\Phi_{n}^{t},\Phi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+C\int_{0}^{t}\|U_{n}(\tau)\|_{\mathbb{V}^{1}}^{2}d\tau$
(4.74) $\displaystyle\leq
C_{T}\left(1+\|U(0)\|_{\mathbb{X}^{2}}^{2}+\left\|\Phi^{0}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}\right),$
where $C_{T}\sim e^{CT},$ for some $C>0$ which is independent of $T,$ $n,$
$t.$
Next, we show that $U_{n}$ is bounded in
$L^{\infty}(0,T;\mathbb{V}^{1})$ and $\Phi_{n}$ in
$L^{\infty}(0,T;\mathcal{M}_{\Omega,\Gamma}^{2})$, uniformly in $n$. We use again
the scheme (4.49)-(4.51) in which we test equation (4.49) with the function
$\overline{U}=Z_{n}:=\binom{z_{n}}{z_{n\mid\Gamma}},\text{
}z_{n}:=\sum_{i=1}^{n}a_{i}(t)\lambda_{i}\theta_{i}^{\alpha,\beta,\nu,\omega}\in
C^{2}\left(\left(0,T\right)\times\overline{\Omega}\right).$
We get
$\left\langle\partial_{t}U_{n},{Z}_{n}\right\rangle_{\mathbb{X}^{2}}+\left\langle\mathrm{A_{W}^{0,\beta,\nu,\omega}}U_{n},{Z}_{n}\right\rangle_{\mathbb{X}^{2}}+\left\langle\Phi_{n}^{t}\left(s\right),Z_{n}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\langle
F(U_{n}),{Z}_{n}\right\rangle_{\mathbb{X}^{2}}=0.$ (4.75)
Moreover, testing (4.50) with
$\overline{\Phi}=\Xi_{n}^{t}:=\binom{\varphi_{n}^{t}}{\varphi_{n\mid\Gamma}^{t}},\text{
}\varphi_{n}^{t}:=\sum_{i=1}^{n}b_{i}(t)\varkappa_{i}\left(s\right)\lambda_{i}\theta_{i}^{\alpha,\beta,\nu,\omega}=\sum_{i=1}^{n}b_{i}(t)\lambda_{i}\zeta_{i}\left(s\right)$
we find
$\left\langle\partial_{t}\Phi_{n}^{t},\Xi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}=\left\langle\mathrm{T}\Phi_{n}^{t},\Xi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\langle
U_{n},\Xi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}.$ (4.76)
Indeed, $\left(Z_{n},\Xi_{n}^{t}\right)\in X_{n}\times M_{n}$ is admissible as
a test function in (4.49)-(4.50). Recalling (4.46), we further notice that
$Z_{n}=\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}U_{n}$ and
$\Xi_{n}^{t}=\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}\Phi_{n}^{t},$
respectively, due to the fact that the eigenpair
$(\lambda_{i},\theta_{i}^{\alpha,\beta,\nu,\omega})$ solves (4.45). Owing to
these identities and (4.31), we have
$\displaystyle\left\langle\Phi_{n}^{t}\left(s\right),Z_{n}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
$\displaystyle=\int_{0}^{\infty}\mu_{\Omega}(s)\left\langle\mathrm{A_{W}^{\alpha,0,0,\omega}}\Phi_{n}^{t}\left(s\right),Z_{n}\right\rangle_{\mathbb{X}^{2}}ds+\nu\int_{0}^{\infty}\mu_{\Gamma}(s)\left\langle\mathrm{C}\xi_{n}^{t}\left(s\right),z_{n}\right\rangle_{L^{2}\left(\Gamma\right)}ds$
(4.77)
$\displaystyle\overset{\mu_{\Omega}\equiv\mu_{\Gamma}}{=}\int_{0}^{\infty}\mu_{\Omega}(s)\left\langle\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}\Phi_{n}^{t}\left(s\right),\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}}U_{n}\right\rangle_{\mathbb{X}^{2}}ds$
$\displaystyle=\left\langle
U_{n},\Xi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}.$
Adding relations (4.75)-(4.76) together, and using (4.77) we further deduce
$\displaystyle\frac{1}{2}\frac{d}{dt}\left(\left\|U_{n}\right\|_{\mathbb{V}^{1}}^{2}+\left\|\Xi_{n}^{t}\right\|_{L_{\mu_{\Omega}}^{2}(\mathbb{R}_{+};\mathbb{X}^{2})}^{2}\right)-\left\langle\mathrm{T}\Phi_{n}^{t},\Xi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\|Z_{n}\right\|_{\mathbb{X}^{2}}^{2}$
(4.78) $\displaystyle=\alpha\omega\left\langle
u_{n},z_{n}\right\rangle_{L^{2}\left(\Omega\right)}-\left\langle
F(U_{n}),{Z}_{n}\right\rangle_{\mathbb{X}^{2}},$
and
$\left\langle\mathrm{T}\Phi_{n}^{t},\Xi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}=\int_{0}^{\infty}\mu_{\Omega}(s)\left\langle\mathrm{A_{W}^{\alpha,\beta,\nu,\omega}T}\Phi_{n}^{t},\Xi_{n}^{t}\right\rangle_{\mathbb{X}^{2}}ds=\frac{1}{2}\int_{0}^{\infty}\mu_{\Omega}^{{}^{\prime}}(s)\left\|\Xi_{n}^{t}\left(s\right)\right\|_{\mathbb{X}^{2}}^{2}ds,$
(4.79)
thanks to the fact that $\mu_{\Omega}\equiv\mu_{\Gamma}$. We begin estimating
both terms on the right-hand side of (4.78). The first one is easy,
$\alpha\omega\left\langle
u_{n},z_{n}\right\rangle_{L^{2}\left(\Omega\right)}\leq\delta\left\|z_{n}\right\|_{L^{2}\left(\Omega\right)}^{2}+C_{\delta}\left\|u_{n}\right\|_{L^{2}\left(\Omega\right)}^{2},$
(4.80)
for any $\delta\in(0,1]$. To bound the last term we integrate by parts in the
following way:
$\displaystyle\left\langle F(U_{n}),{Z}_{n}\right\rangle_{\mathbb{X}^{2}}$
$\displaystyle=\int_{\Omega}f\left(u_{n}\right)\left(-\omega\Delta
u_{n}+\alpha\omega
u_{n}\right)dx+\int_{\Gamma}\widetilde{g}\left(u_{n}\right)\left(-\nu\Delta_{\Gamma}u_{n}+\omega\partial_{n}u_{n}+\nu\beta
u_{n}\right)d\sigma$ (4.81)
$\displaystyle=\omega\int_{\Omega}f^{{}^{\prime}}\left(u_{n}\right)\left|\nabla
u_{n}\right|^{2}dx+\nu\int_{\Gamma}\widetilde{g}^{{}^{\prime}}\left(u_{n}\right)\left|\nabla_{\Gamma}u_{n}\right|^{2}d\sigma$
$\displaystyle+\alpha\omega\int_{\Omega}f\left(u_{n}\right)u_{n}dx+\nu\beta\int_{\Gamma}\widetilde{g}\left(u_{n}\right)u_{n}d\sigma$
$\displaystyle+\omega\int_{\Gamma}\left(\widetilde{g}\left(u_{n}\right)-f\left(u_{n}\right)\right)\partial_{n}u_{n}d\sigma.$
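The integration by parts leading to (4.81) rests on the Green-type identity (a formal computation, legitimate for the smooth Galerkin functions considered here, together with the chain rule $\nabla f(u)=f^{\prime}(u)\nabla u$):
$\int_{\Omega}f\left(u\right)\left(-\Delta u\right)dx=\int_{\Omega}f^{\prime}\left(u\right)\left|\nabla u\right|^{2}dx-\int_{\Gamma}f\left(u\right)\partial_{n}u\,d\sigma,$
while on the closed surface $\Gamma$ one has $\int_{\Gamma}\widetilde{g}\left(u\right)\left(-\Delta_{\Gamma}u\right)d\sigma=\int_{\Gamma}\widetilde{g}^{\prime}\left(u\right)\left|\nabla_{\Gamma}u\right|^{2}d\sigma$ with no boundary contribution; combining the two and collecting the boundary terms produces the five terms displayed in (4.81).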
By assumptions (4.4) and (4.73), we can easily find a positive constant $C$
independent of $t,T$ and $n$ such that
$\omega\int_{\Omega}f^{{}^{\prime}}\left(u_{n}\right)\left|\nabla
u_{n}\right|^{2}dx+\nu\int_{\Gamma}\widetilde{g}^{{}^{\prime}}\left(u_{n}\right)\left|\nabla_{\Gamma}u_{n}\right|^{2}d\sigma\geq-
M_{f}\omega\left\|\nabla
u_{n}\right\|_{L^{2}\left(\Omega\right)}^{2}-M_{g}\nu\left\|\nabla_{\Gamma}u_{n}\right\|_{L^{2}\left(\Gamma\right)}^{2}$
(4.82)
and
$\alpha\omega\int_{\Omega}f\left(u_{n}\right)u_{n}dx+\nu\beta\int_{\Gamma}\widetilde{g}\left(u_{n}\right)u_{n}d\sigma\geq-C\left(\left\|U_{n}\right\|_{\mathbb{X}^{2}}^{2}+1\right).$
(4.83)
In order to estimate the last boundary integral on the right-hand side of
(4.81), we observe that due to assumptions (i)-(ii) it suffices to estimate
boundary integrals of the form
$I:=\int_{\Gamma}u_{n}^{r+1}\partial_{n}u_{n}d\sigma,\text{ for some }r<5/2.$
Indeed, due to classical trace regularity and embedding results, for every
$\delta\in(0,1]$ we have
$I\leq\left\|\partial_{n}u_{n}\right\|_{H^{1/2}\left(\Gamma\right)}\left\|u_{n}^{r+1}\right\|_{H^{-1/2}\left(\Gamma\right)}\leq\delta\left\|u_{n}\right\|_{H^{2}\left(\Omega\right)}^{2}+C_{\delta}\left\|u_{n}^{r+1}\right\|_{H^{-1/2}\left(\Gamma\right)}^{2}.$
(4.84)
It remains to estimate the last term in (4.84). To this end, we employ the
basic Sobolev embeddings $H^{1/2}\left(\Gamma\right)\subset
L^{4}\left(\Gamma\right)$ and $H^{1}\left(\Gamma\right)\subset
L^{s}\left(\Gamma\right),$ for any $s\in(\frac{4}{3},\infty)$, respectively.
Owing to elementary Hölder inequalities, we deduce that
$\displaystyle\left\|u_{n}^{r+1}\right\|_{H^{-1/2}\left(\Gamma\right)}^{2}$
$\displaystyle=\sup_{\psi\in
H^{1/2}\left(\Gamma\right):\left\|\psi\right\|_{H^{1/2}\left(\Gamma\right)}=1}\left|\left\langle
u_{n}^{r+1},\psi\right\rangle\right|^{2}$ (4.85)
$\displaystyle\leq\left\|u_{n}\right\|_{L^{s}\left(\Gamma\right)}^{2}\left\|u_{n}\right\|_{L^{\overline{s}r}\left(\Gamma\right)}^{2r}$
$\displaystyle\leq
C\left\|u_{n}\right\|_{H^{1}\left(\Gamma\right)}^{2}\left\|u_{n}\right\|_{L^{\overline{s}r}\left(\Gamma\right)}^{2r},$
for some positive constant $C$ independent of $u,n,t,T$, for sufficiently
large $s\in(\frac{4}{3},\infty)$, where
$\overline{s}:=4s/\left(3s-4\right)>4/3$. Exploiting now the interpolation
inequality
$\left\|u\right\|_{L^{\overline{s}r}\left(\Gamma\right)}\leq
C\left\|u\right\|_{H^{2}\left(\Gamma\right)}^{1/\left(2r\right)}\left\|u\right\|_{L^{2}\left(\Gamma\right)}^{1-1/\left(2r\right)},$
provided that $r=1+2/\overline{s}<5/2$, we further infer from (4.85) that
$\displaystyle\left\|u_{n}^{r+1}\right\|_{H^{-1/2}\left(\Gamma\right)}^{2}$
$\displaystyle\leq
C\left\|u_{n}\right\|_{H^{1}\left(\Gamma\right)}^{2}\left\|u_{n}\right\|_{H^{2}\left(\Gamma\right)}\left\|u_{n}\right\|_{L^{2}\left(\Gamma\right)}^{2r-1}$
(4.86)
$\displaystyle\leq\eta\left\|u_{n}\right\|_{H^{2}\left(\Gamma\right)}^{2}+C_{\eta}\left\|u_{n}\right\|_{H^{1}\left(\Gamma\right)}^{2}\left(\left\|u_{n}\right\|_{H^{1}\left(\Gamma\right)}^{2}\left\|u_{n}\right\|_{L^{2}\left(\Gamma\right)}^{2\left(2r-1\right)}\right),$
for any $\eta\in(0,1]$. Inserting (4.86) into (4.84) and choosing a
sufficiently small $\eta=\delta/C_{\delta}$, by virtue of (3.21), we easily
deduce
$I\leq\delta\left\|Z_{n}\right\|_{\mathbb{X}^{2}}^{2}+C_{\delta}\left\|u_{n}\right\|_{H^{1}\left(\Gamma\right)}^{2}\left(\left\|u_{n}\right\|_{H^{1}\left(\Gamma\right)}^{2}\left\|u_{n}\right\|_{L^{2}\left(\Gamma\right)}^{2\left(2r-1\right)}\right).$
(4.87)
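Before proceeding, we record for completeness the exponent bookkeeping behind the Hölder step in (4.85) (a routine verification): for $\psi\in H^{1/2}(\Gamma)$ with $\left\|\psi\right\|_{H^{1/2}\left(\Gamma\right)}=1$,
$\left|\left\langle u_{n}^{r+1},\psi\right\rangle\right|\leq\left\|u_{n}\right\|_{L^{s}\left(\Gamma\right)}\left\|u_{n}^{r}\right\|_{L^{\overline{s}}\left(\Gamma\right)}\left\|\psi\right\|_{L^{4}\left(\Gamma\right)},\qquad\frac{1}{s}+\frac{1}{\overline{s}}+\frac{1}{4}=1,$
which forces $\overline{s}=4s/\left(3s-4\right)$; since $\left\|u_{n}^{r}\right\|_{L^{\overline{s}}\left(\Gamma\right)}=\left\|u_{n}\right\|_{L^{\overline{s}r}\left(\Gamma\right)}^{r}$ and $\left\|\psi\right\|_{L^{4}\left(\Gamma\right)}\leq C$ by the embedding $H^{1/2}\left(\Gamma\right)\subset L^{4}\left(\Gamma\right)$, squaring gives (4.85) up to a constant.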
Thus, setting
$\displaystyle\Xi\left(t\right)$
$\displaystyle:=\left\|U_{n}\left(t\right)\right\|_{\mathbb{V}^{1}}^{2}+\left\|\Xi_{n}^{t}\right\|_{L_{\mu_{\Omega}}^{2}(\mathbb{R}_{+};\mathbb{X}^{2})}^{2},\text{
}$ $\displaystyle\Lambda\left(t\right)$
$\displaystyle:=C_{\delta}\left(1+\left\|u_{n}\right\|_{H^{1}\left(\Gamma\right)}^{2}\left\|u_{n}\right\|_{L^{2}\left(\Gamma\right)}^{2\left(2r-1\right)}\right),$
it follows from (4.78), (4.80)-(4.83) and (4.87) that
$\frac{d}{dt}\Xi\left(t\right)-2\left\langle\mathrm{T}\Phi_{n}^{t},\Xi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left(2-\delta\right)\left\|Z_{n}\right\|_{\mathbb{X}^{2}}^{2}\leq\Xi\left(t\right)\Lambda\left(t\right),$
(4.88)
for a sufficiently small $\delta\in(0,1]$. Gronwall’s inequality together with
(4.74) yields
$\displaystyle\left\|U_{n}\left(t\right)\right\|_{\mathbb{V}^{1}}^{2}+\left\|\Xi_{n}^{t}\right\|_{L_{\mu_{\Omega}}^{2}(\mathbb{R}_{+};\mathbb{X}^{2})}^{2}+\int_{0}^{t}\left(\|Z_{n}(\tau)\|_{\mathbb{X}^{2}}^{2}-2\left\langle\mathrm{T}\Phi_{n}^{\tau},\Xi_{n}^{\tau}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}\right)d\tau$
(4.89) $\displaystyle\leq
C_{T}\left(\left\|U\left(0\right)\right\|_{\mathbb{V}^{1}}^{2}+\left\|\Xi^{0}\right\|_{L_{\mu_{\Omega}}^{2}(\mathbb{R}_{+};\mathbb{X}^{2})}^{2}\right),$
owing to the boundedness of the (orthogonal) projectors
$P_{n}:\mathbb{X}^{2}\rightarrow X_{n}$ and
$Q_{n}:\mathcal{M}_{\Omega,\Gamma}^{1}\rightarrow M_{n}$, and the fact that
$\Lambda\in L^{1}\left(0,T\right),$ for any $T>0.$
From (4.89), recalling (3.21) we obtain the following uniform (in $n$) bounds
for each approximate solution $(U_{n},\Phi_{n})$:
$U_{n}$ is uniformly bounded in $L^{\infty}\left(0,T;\mathbb{V}^{1}\right)$, (4.90)
$U_{n}$ is uniformly bounded in $L^{2}\left(0,T;\mathbb{V}^{2}\right)$, (4.91)
$\Phi_{n}$ is uniformly bounded in $L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{2}\right)$, (4.92)
$\Phi_{n}$ is uniformly bounded in $L^{2}\left(0,T;L_{k_{\Omega}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{2}\right)\right)$. (4.93)
Observe now that by (4.49)-(4.50), we also have
$\displaystyle\left\langle\partial_{t}U_{n},{\overline{U}}\right\rangle_{\mathbb{X}^{2}}$
$\displaystyle=\left\langle\partial_{t}U_{n},P_{n}{\overline{U}}\right\rangle_{\mathbb{X}^{2}}$
(4.94)
$\displaystyle=-\left\langle\mathrm{A_{W}^{0,\beta,\nu,\omega}}U_{n},P_{n}{\overline{U}}\right\rangle_{\mathbb{X}^{2}}-\left\langle\Phi_{n}^{t},P_{n}{\overline{U}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}-\left\langle
F(U_{n}),P_{n}{\overline{U}}\right\rangle_{\mathbb{X}^{2}}$
and
$\displaystyle\left\langle\partial_{t}\Phi_{n}^{t},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
$\displaystyle=\left\langle\partial_{t}\Phi_{n}^{t},Q_{n}{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
(4.95)
$\displaystyle=\left\langle\mathrm{T}\Phi_{n}^{t},Q_{n}{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\langle
U_{n},Q_{n}{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}},$
respectively. Thus, from the uniform bounds (4.90)-(4.93), we deduce by
comparison in equations (4.94)-(4.95) that
$\displaystyle\partial_{t}U_{n}\text{ is uniformly bounded in
}L^{\infty}\left(0,T;\left(\mathbb{V}^{1}\right)^{\ast}\right)\cap
L^{2}\left(0,T;\mathbb{X}^{2}\right),$ (4.96)
$\displaystyle\partial_{t}\Phi_{n}^{t}\text{ is uniformly bounded in
}L^{2}\left(0,T;L_{\mu_{\Omega}}^{2}\left(\mathbb{R}_{+};\mathbb{X}^{2}\right)\right)\cap
L^{\infty}\left(0,T;L_{\mu_{\Omega}}^{2}\left(\mathbb{R}_{+};\left(\mathbb{V}^{1}\right)^{\ast}\right)\right).$
(4.97)
We are now ready to pass to the limit as $n$ goes to infinity. On account of
the above uniform bounds, we can find $U$ and $\Phi$ such that, up to
subsequences,
$U_{n}\rightarrow U$ weakly-$\ast$ in $L^{\infty}\left(0,T;\mathbb{V}^{1}\right)$, (4.98)
$U_{n}\rightarrow U$ weakly in $L^{2}\left(0,T;\mathbb{V}^{2}\right)$, (4.99)
$\Phi_{n}\rightarrow\Phi$ weakly-$\ast$ in $L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{2}\right)$, (4.100)
$\Phi_{n}\rightarrow\Phi$ weakly in $L^{2}\left(0,T;L_{k_{\Omega}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{2}\right)\right)$, (4.101)
$\partial_{t}U_{n}\rightarrow\partial_{t}U$ in $L_{w^{\ast}}^{\infty}\left(0,T;\left(\mathbb{V}^{1}\right)^{\ast}\right)\cap L_{w}^{2}\left(0,T;\mathbb{X}^{2}\right)$, (4.102)
$\partial_{t}\Phi_{n}^{t}\rightarrow\partial_{t}\Phi^{t}$ in $L_{w}^{2}\left(0,T;L_{\mu_{\Omega}}^{2}\left(\mathbb{R}_{+};\mathbb{X}^{2}\right)\right)$. (4.103)
Due to (4.98) and (4.102) and the classical Aubin–Lions compactness theorem,
we also have
$U_{n}\rightarrow U~{}\text{strongly in }C(\left[0,T\right];\mathbb{X}^{2}).$
(4.104)
Thanks to (4.98)-(4.103) and (4.104), we can easily control the nonlinear
terms in (4.49)-(4.50). By means of the above convergence properties, we can
pass to the limit in these equations and show that $\left(U,\Phi\right)$
solves (4.21) in the sense of Definition 4.5.
Finally, uniqueness follows from Proposition 4.10 owing to assumption (4.4).
The proof of the theorem is finished. ∎
###### Remark 4.12.
Observe that the assumption $\mu_{\Omega}\equiv\mu_{\Gamma}$ in Theorem 4.11
is crucial for the identity (4.77) to hold. Without it, cancellation in (4.78)
does not generally occur and (4.79) does not hold.
We now let
$h_{f}\left(s\right)=\int_{0}^{s}f^{\prime}\left(\tau\right)\tau\,d\tau\text{ and }h_{g}\left(s\right)=\int_{0}^{s}\widetilde{g}^{\prime}\left(\tau\right)\tau\,d\tau.$
The next result states that there exist strong solutions, albeit in a much
weaker sense than in Theorem 4.11, even when the interior and boundary memory
kernels $\mu_{S}\left(\cdot\right):\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ do
_not_ coincide but both decay exponentially fast as $s$ goes to infinity.
###### Theorem 4.13.
Let (4.1)-(4.3) be satisfied and assume that $f,$ $g\in
C^{1}\left(\mathbb{R}\right)$ satisfy the following conditions:
(i) $|f^{\prime}\left(s\right)|\leq\ell_{1}\left(1+\left|s\right|^{2}\right),$ for all $s\in\mathbb{R}$.
(ii) $|g^{\prime}\left(s\right)|\leq\ell_{2}(1+|s|^{r_{2}}),$ for all
$s\in\mathbb{R}$, for some (arbitrary) $r_{2}>2.$
(iii) (4.4) holds and there exist $C_{i}>0,$ $i=1,\dots,8,$ such that
$\left\\{\begin{array}[]{ll}f\left(s\right)s\geq-
C_{1}\left|s\right|^{2}-C_{2},\text{ }g\left(s\right)s\geq-
C_{3}\left|s\right|^{2}-C_{4},&\forall s\in\mathbb{R}\\\
h_{f}\left(s\right)\geq-C_{5}\left|s\right|^{2}-C_{6},\text{
}h_{g}\left(s\right)\geq-C_{7}\left|s\right|^{2}-C_{8},&\forall
s\in\mathbb{R}\text{.}\end{array}\right.$ (4.105)
In addition, assume there exist constants $\delta_{S}>0$ such that
$\mu_{S}^{\prime}\left(s\right)+\delta_{S}\mu_{S}\left(s\right)\leq
0\text{, for all }s\in\mathbb{R}_{+}\text{,
}S\in\left\\{\Omega,\Gamma\right\\}.$ (4.106)
Given $\alpha,\beta>0$, $\omega,\nu\in(0,1)$,
$\left(U_{0},\Phi_{0}\right)\in\mathbb{V}^{2}\times\left(\mathcal{M}_{\Omega,\Gamma}^{2}\cap
D\left(\mathrm{T}\right)\right)$, there exists a unique global quasi-strong
solution $\left(U,\Phi\right)$ to problem P in the sense of Definition 4.7.
###### Proof.
It suffices to provide bounds for $(U,\Phi^{t})$ in the (more regular) spaces
in (4.25)-(4.28). With reference to problem P${}_{n},$ we consider the
approximate problem of finding $\left(U_{n},\Phi_{n}\right)$ of the form
(4.46) such that, $\left(U_{n},\Phi_{n}\right)$ already satisfies
(4.49)-(4.50), and
$\displaystyle\left\langle\partial_{tt}U_{n},{\overline{U}}\right\rangle_{\mathbb{X}^{2}}+\left\langle\mathrm{A_{W}^{0,\beta,\nu,\omega}}\partial_{t}U_{n},{\overline{U}}\right\rangle_{\mathbb{X}^{2}}+\left\langle\partial_{t}\Phi_{n}^{t},{\overline{U}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
(4.107) $\displaystyle=-\left\langle
f^{{}^{\prime}}\left(u_{n}\right)\partial_{t}u_{n},\bar{u}\right\rangle_{L^{2}\left(\Omega\right)}-\left\langle\widetilde{g}^{{}^{\prime}}\left(u_{n}\right)\partial_{t}u_{n},\bar{v}\right\rangle_{L^{2}\left(\Gamma\right)}$
and
$\left\langle\partial_{tt}\Phi_{n}^{t},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}=\left\langle\mathrm{T}\partial_{t}\Phi_{n}^{t},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}+\left\langle\partial_{t}U_{n},{\overline{\Phi}}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
(4.108)
hold for almost all $t\in\left(0,T\right)$, for all
${\overline{U}}=(\bar{u},\bar{v})^{\mathrm{tr}}\in X_{n}$ and
${\overline{\Phi}}=(\bar{\eta},\bar{\xi})^{\mathrm{tr}}\in M_{n}$; moreover,
the function $\left(U_{n},\Phi_{n}\right)$ fulfils the conditions
$U_{n}\left(0\right)=P_{n}U_{0},$ $\Phi_{n}^{0}=Q_{n}\Phi^{0}$ and
$\partial_{t}U_{n}\left(0\right)=P_{n}\widehat{U}_{0},\text{
}\partial_{t}\Phi_{n}^{0}=Q_{n}\widehat{\Phi}^{0},$ (4.109)
where we have set
$\displaystyle\widehat{U}_{0}$
$\displaystyle:=-\mathrm{A_{W}^{0,\beta,\nu,\omega}}U_{0}-\int_{0}^{\infty}\mu_{\Omega}(s)\mathrm{A_{W}^{\alpha,0,0,\omega}}\Phi_{0}(s)ds-\nu\int_{0}^{\infty}\mu_{\Gamma}(s)\binom{0}{\mathrm{C}\xi_{0}(s)}ds-F(U_{0})\text{,
}$ $\displaystyle\widehat{\Phi}^{0}$
$\displaystyle:=\mathrm{T}\Phi_{0}(s)+U_{0}.$
Note that, if $U_{0}\in\mathbb{V}^{2}$ and $\Phi^{0}\in
D\left(\mathrm{T}\right)\cap\mathcal{M}_{\Omega,\Gamma}^{2}$, then
$(\widehat{U}_{0},\widehat{\Phi}^{0})\in\mathbb{X}^{2}\times\mathcal{M}_{\Omega,\Gamma}^{1}=\mathcal{H}_{\Omega,\Gamma}^{0,1}$,
owing to the continuous embeddings $H^{2}\left(\Omega\right)\subset
L^{\infty}\left(\Omega\right),$ $H^{2}\left(\Gamma\right)\subset
L^{\infty}\left(\Gamma\right)$. In particular, owing to the boundedness of the
projectors $P_{n}$ and $Q_{n}$ on the corresponding subspaces, we have
$\left\|\left(\partial_{t}U_{n}\left(0\right),\partial_{t}\Phi_{n}^{0}\right)\right\|_{\mathcal{H}_{\Omega,\Gamma}^{0,1}}\leq
K\left(R\right),$ (4.110)
for all
$\left(U_{0},\Phi^{0}\right)\in\mathbb{V}^{2}\times\left(D\left(\mathrm{T}\right)\cap\mathcal{M}_{\Omega,\Gamma}^{2}\right)$
such that
$\left\|\left(U_{0},\Phi^{0}\right)\right\|_{\mathcal{H}_{\Omega,\Gamma}^{2,2}}\leq
R,$ for some positive monotone nondecreasing function $K$. Indeed, according
to assumptions (4.1)-(4.3), we can infer that
$0\leq\int_{0}^{\infty}\mu_{S}(s)ds=\mu_{S}^{0}<\infty,\text{ for each
}S\in\left\\{\Omega,\Gamma\right\\},$ (4.111)
so that repeated application of Jensen’s inequality yields
$\displaystyle\left\|\int_{0}^{\infty}\mu_{\Omega}(s)\mathrm{A_{W}^{\alpha,0,0,\omega}}\Phi_{0}(s)ds\right\|_{\mathbb{X}^{2}}^{2}$
$\displaystyle\leq\mu_{\Omega}^{0}\int_{0}^{\infty}\mu_{\Omega}(s)\left\|\mathrm{A_{W}^{\alpha,0,0,\omega}}\Phi_{0}(s)\right\|_{\mathbb{X}^{2}}^{2}ds$
$\displaystyle\leq
C\mu_{\Omega}^{0}\int_{0}^{\infty}\mu_{\Omega}(s)\left\|\Phi_{0}(s)\right\|_{H^{2}}^{2}ds$
and
$\displaystyle\left\|\int_{0}^{\infty}\mu_{\Gamma}(s)\mathrm{C}\xi_{0}(s){\mathrm{d}}s\right\|_{L^{2}\left(\Gamma\right)}^{2}$
$\displaystyle\leq\mu_{\Gamma}^{0}\int_{0}^{\infty}\mu_{\Gamma}(s)\left\|\mathrm{C}\xi_{0}(s)\right\|_{L^{2}\left(\Gamma\right)}^{2}ds$
$\displaystyle\leq
C\mu_{\Gamma}^{0}\int_{0}^{\infty}\mu_{\Gamma}(s)\left\|\Phi_{0}(s)\right\|_{H^{2}\left(\Gamma\right)}^{2}ds.$
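Both bounds above are instances of the elementary weighted Cauchy–Schwarz (equivalently, Jensen) inequality, recorded here for convenience and valid in any Hilbert space norm $\left\|\cdot\right\|$:
$\left\|\int_{0}^{\infty}\mu_{S}(s)X(s)\,ds\right\|^{2}\leq\left(\int_{0}^{\infty}\mu_{S}(s)\,ds\right)\int_{0}^{\infty}\mu_{S}(s)\left\|X(s)\right\|^{2}ds=\mu_{S}^{0}\int_{0}^{\infty}\mu_{S}(s)\left\|X(s)\right\|^{2}ds.$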
Our starting point is the validity of the energy estimate (4.74) which holds
on account of the first assumption of (4.105). Next we proceed to take
$\overline{U}=\partial_{t}U_{n}(t)$ in (4.107) and
$\overline{\Phi}=\partial_{t}\Phi_{n}^{t}\left(s\right)$ in (4.108),
respectively, by noting that this choice
$\left(\overline{U},\overline{\Phi}\right)$ is an admissible test function.
Summing the resulting identities and using (4.4), we obtain
$\displaystyle\frac{1}{2}\frac{d}{dt}\left\\{\left\|\partial_{t}U_{n}\right\|_{\mathbb{X}^{2}}^{2}+\left\|\partial_{t}\Phi_{n}^{t}\right\|_{\mathcal{M}_{\Omega,\Gamma}^{1}}^{2}\right\\}-\left\langle\mathrm{T}\partial_{t}\Phi_{n}^{t},\partial_{t}\Phi_{n}^{t}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
(4.112)
$\displaystyle+\left(\omega\left\|\nabla\partial_{t}u_{n}\right\|_{L^{2}(\Omega)}^{2}+\nu\left\|\nabla_{\Gamma}\partial_{t}u_{n}\right\|_{L^{2}(\Gamma)}^{2}+\beta\left\|\partial_{t}u_{n}\right\|_{L^{2}(\Gamma)}^{2}\right)$
$\displaystyle=-\left\langle
f^{{}^{\prime}}\left(u_{n}\right)\partial_{t}u_{n},\partial_{t}u_{n}\right\rangle_{L^{2}\left(\Omega\right)}-\left\langle\widetilde{g}^{{}^{\prime}}\left(u_{n}\right)\partial_{t}u_{n},\partial_{t}u_{n}\right\rangle_{L^{2}\left(\Gamma\right)}$
$\displaystyle\leq\max\left(M_{f},M_{g}\right)\left\|\partial_{t}U_{n}\right\|_{\mathbb{X}^{2}}^{2}.$
Thus, integrating (4.112) with respect to $\tau\in(0,t)$ and applying
Gronwall’s inequality, we obtain the estimate
$\left\|\left(\partial_{t}U_{n}\left(t\right),\partial_{t}\Phi_{n}^{t}\right)\right\|_{\mathcal{H}_{\Omega,\Gamma}^{0,1}}^{2}+\int_{0}^{t}\left(2\left\|\partial_{t}U_{n}(\tau)\right\|_{\mathbb{V}^{1}}^{2}+\left\|\partial_{t}\Phi_{n}^{\tau}\right\|_{L_{k_{\Omega}\oplus
k_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)}^{2}\right)d\tau\leq
K_{T}\left(R\right),$ (4.113)
for all $t\geq 0$ and all $R>0$ such that
$\left\|\left(U_{0},\Phi^{0}\right)\right\|_{\mathcal{H}_{\Omega,\Gamma}^{2,2}}\leq
R$. Thanks to (4.113), we deduce the uniform bounds
$\displaystyle\partial_{t}U_{n}$ $\displaystyle\in
L^{\infty}\left(0,T;\mathbb{X}^{2}\right)\cap
L^{2}\left(0,T;\mathbb{V}^{1}\right),$ (4.114)
$\displaystyle\partial_{t}\Phi_{n}$ $\displaystyle\in
L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{1}\right)\cap
L^{2}\left(0,T;L_{k_{\Omega}\oplus
k_{\Gamma}}^{2}\left(\mathbb{R}_{+};\mathbb{V}^{1}\right)\right),$ (4.115)
which establishes (4.27)-(4.28) for the approximate solution
$\left(U_{n},\Phi_{n}\right)$.
We now establish a bound for $U_{n}$ in
$L^{\infty}\left(0,T;\mathbb{V}^{1}\right)$ in a different way from the proof
of Theorem 4.11. For this estimate, the uniform regularity in (4.114)-(4.115)
is crucial. To this end, we proceed to take $\overline{U}=U_{n}(t)$ in (4.107)
in order to derive
$\displaystyle\frac{d}{dt}\left(\left\|U_{n}\right\|_{\mathbb{V}^{1}}^{2}+\left\langle\partial_{t}U_{n},U_{n}\right\rangle_{\mathbb{X}^{2}}+2\int_{\Omega}h_{f}\left(u_{n}\right)dx+2\int_{\Gamma}h_{g}\left(u_{n}\right)d\sigma\right)$
(4.116)
$\displaystyle=2\left\|\partial_{t}{U}_{n}\right\|_{\mathbb{X}^{2}}^{2}-2\left\langle\partial_{t}\Phi_{n}^{t},{U}_{n}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}.$
Moreover, using (4.114) and owing to the Cauchy-Schwarz and Young inequalities
and the second of (4.105), the following basic inequality holds:
$\displaystyle
C_{\ast}\left\|U_{n}\right\|_{\mathbb{V}^{1}}^{2}-K_{T}\left(R\right)$ (4.117)
$\displaystyle\leq\left\|U_{n}\right\|_{\mathbb{V}^{1}}^{2}+\left\langle\partial_{t}U_{n},U_{n}\right\rangle_{\mathbb{X}^{2}}+2\int_{\Omega}h_{f}\left(u_{n}\right)dx+2\int_{\Gamma}h_{g}\left(u_{n}\right)d\sigma$
$\displaystyle\leq
C\left\|U_{n}\right\|_{\mathbb{V}^{1}}^{2}+K_{T}\left(R\right),$
for some constants $C_{\ast},C>0$ and some function $K_{T}>0,$ all independent
of $n$ and $t$. Finally, for any $\eta>0$ we estimate
$\displaystyle-\left\langle\partial_{t}\Phi_{n}^{t},{U}_{n}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}$
$\displaystyle\leq\eta\left\|U_{n}\right\|_{\mathbb{V}^{1}}^{2}+C_{\eta}\int_{0}^{\infty}\mu_{\Omega}(s)\left\|\partial_{t}\eta_{n}(s)\right\|_{H^{1}}^{2}ds+C_{\eta}\int_{0}^{\infty}\mu_{\Gamma}(s)\left\|\partial_{t}\xi_{n}(s)\right\|_{H^{1}\left(\Gamma\right)}^{2}ds$
(4.118)
$\displaystyle\leq\eta\left\|U_{n}\right\|_{\mathbb{V}^{1}}^{2}-C_{\eta}\delta_{\Omega}^{-1}\int_{0}^{\infty}\mu_{\Omega}^{{}^{\prime}}(s)\left\|\partial_{t}\eta_{n}(s)\right\|_{H^{1}}^{2}ds-
C_{\eta}\delta_{\Gamma}^{-1}\int_{0}^{\infty}\mu_{\Gamma}^{{}^{\prime}}(s)\left\|\partial_{t}\xi_{n}(s)\right\|_{H^{1}\left(\Gamma\right)}^{2}ds,$
where in the last line we have employed assumption (4.106). Thus, from (4.116)
we obtain the inequality
$\displaystyle\frac{d}{dt}\left(\left\|U_{n}\right\|_{\mathbb{V}^{1}}^{2}+\left\langle\partial_{t}U_{n},U_{n}\right\rangle_{\mathbb{X}^{2}}+2\int_{\Omega}h_{f}\left(u_{n}\right)dx+2\int_{\Gamma}h_{g}\left(u_{n}\right)d\sigma\right)$
(4.119) $\displaystyle\leq
C_{\eta}\left\|U_{n}\left(t\right)\right\|_{\mathbb{V}^{1}}^{2}+\Lambda_{2}\left(t\right),$
for a.e. $t\in\left(0,T\right),$ where we have set
$\Lambda_{2}\left(t\right):=2\left\|\partial_{t}{U}_{n}\right\|_{\mathbb{X}^{2}}^{2}-2\left\langle\partial_{t}\Phi_{n}^{t},{U}_{n}\right\rangle_{\mathcal{M}_{\Omega,\Gamma}^{1}}.$
We now observe that $\Lambda_{2}\in L^{1}\left(0,T\right)$ on account of
(4.74), (4.114)-(4.115) and (4.117)-(4.118), because
$\partial_{t}U_{n}\left(0\right)\in\mathbb{X}^{2}$ by (4.109). Thus, observing
(4.117), the application of Gronwall’s inequality to (4.119) yields the
desired uniform bound
$U_{n}\in L^{\infty}\left(0,T;\mathbb{V}^{1}\right).$ (4.120)
Next, by comparison in equation (4.95) and by virtue of the uniform bounds
(4.120) and (4.115), we also deduce
$\mathrm{T}\Phi_{n}^{t}\in
L^{\infty}\left(0,T;\mathcal{M}_{\Omega,\Gamma}^{1}\right)$ (4.121)
uniformly with respect to all $n\geq 1$. In particular,
$\Phi_{n}^{t}\in L^{\infty}\left(0,T;D\left(\mathrm{T}\right)\right)$ holds
uniformly in $n$. Finally, by (4.120) and assumptions (i)-(ii), we also have
$F\left(U_{n}\right)\in L^{\infty}\left(0,T;\mathbb{X}^{2}\right).$
We can pass to the limit as $n\rightarrow\infty$ in (4.114)-(4.115), (4.120)
and (4.121) to find a limit point $\left(U,\Phi\right)$ with the properties
stated in (4.25)-(4.28). Passage to the limit in equations (4.49)-(4.50) and
in particular, in the nonlinear terms is done in the same fashion as in the
proof of Theorem 4.11. Indeed, exploiting (4.120) and (4.114) we still have
the validity of (4.104) and, hence, the limit solution $\left(U,\Phi\right)$
solves (4.21) in the sense of Definition 4.7.
Uniqueness follows from Proposition 4.10 owing to assumption (4.4). The proof
of the theorem is now complete. ∎
Finally, we may conclude with the following.
###### Theorem 4.14.
Let the assumptions of Theorem 4.11 be satisfied. Let $\left(U,\Phi\right)$ be
the unique strong solution corresponding to a given initial datum
$\left(U_{0},\Phi_{0}\right)\in\mathcal{H}_{\Omega,\Gamma}^{2,2}.$ Then, this
solution also satisfies
# On the Thermodynamics of “Continuous Spin” photons
Philip Schuster, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USA
Gowri Sundaresan, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USA; Department of Physics, Stanford University, Stanford, CA 94305, USA
Natalia Toro, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USA
(August 28, 2024)
###### Abstract
Special relativity allows massless particles to have states of different
integer (or half-integer) helicities that mix under boosts, much like the
spin-states of a massive particle. Such massless particles are known as
“continuous spin” particles (CSPs), a term coined by Wigner, and they are
notable for their infinite tower of spin polarizations. The mixing under
boosts is controlled by a spin-scale $\rho$ with units of momentum. Normally,
we assume $\rho=0$. The interactions of CSPs are known to satisfy certain
simple properties, one of which is that the $\rho\rightarrow 0$ limit
_generically_ recovers familiar interactions of massless scalars, photons, or
gravitons, with all other polarizations decoupling in this limit. Thus, one
can ask if the photon of the Standard Model is a CSP at small but non-zero
$\rho$. One concern about this possibility – originally raised by Wigner – is
that the infinite tower of polarizations could pose problems for
thermodynamics. To address this question, we study the thermal evolution of a
CSP photon gas coupled to isothermal matter, across CSP helicity modes and
phase space. We find that the structure of the interactions dictated by
Lorentz symmetry implies well-behaved thermodynamics. When the CSP photon’s
interactions with charged matter are turned on, the primary $h=\pm 1$ helicity
modes thermalize quickly, while the other modes require increasingly long
time-scales to thermalize, set by powers of $T/\rho$. In familiar thermal
systems, the CSP photon behaves like the QED photon with small $\rho$- and
time-dependent corrections to its effective relativistic degrees of freedom.
Sizable departures from familiar thermal behavior arise at energy scales
comparable to $\rho$ and could have testable experimental consequences.
## I Introduction
At a purely kinematic level, Lorentz symmetry allows massless particles to
have states of different integer (or half-integer) helicities that mix under
boosts much like the spin-states of a massive particle [1]. Such “continuous
spin” particles (CSPs) possess a spin-scale $\rho$ (with units of momentum)
that characterizes the extent of this mixing; in the limit $\rho\rightarrow
0$, the helicity states are exactly Lorentz-invariant. The possibility that
$\rho\neq 0$ has never been experimentally studied.
Until the work of [2, 3], it was usually assumed that theories of CSPs — if
they exist at all — are unrelated to more familiar theories and are physically
irrelevant. However, the soft factor analysis of interacting CSPs in [2, 3]
and the field theory works in [4, 5, 6] provide strong evidence that
consistent interacting CSP theories exist, and their $\rho\rightarrow 0$ limit
_generically_ recovers familiar theories of massless scalars, photons, or
gravitons. In the recent work of [6], a formalism for computing scattering
amplitudes in QED for nonzero $\rho$ was given, allowing the computation and
study of finite $\rho$ corrections to QED. (Also see [7] for a lower
dimensional generalization, [8] for fermionic and [9, 10] for supersymmetric
cases using the “vector superspace” formalism. Other formalisms, including
constrained metric-like, frame-like, and BRST [11, 12] formulations, have also
been used to construct actions for fermionic [13, 14] and supersymmetric [15,
16, 17, 18] continuous spin fields, as well as those in higher-dimensional
[19, 20, 21] and (A)dS [22, 23, 24, 25, 26, 27] spaces. Relations between
these formalisms are discussed in Refs. [28, 20, 9]. For reviews and
discussion, see Refs. [28, 29, 30, 31].)
At least for the case of a photon with finite $\rho$, it is now possible to
more precisely ask the questions: _How much_ does a photon’s helicity
transform under boosts? What constraints can be derived on the spin-scale
$\rho$ of the photon, and how might a non-zero $\rho$ be observed?
To set the stage, we recall a few basic facts about CSP kinematics and
interactions. CSP states of four-momentum $k^{\mu}$ can always be decomposed
into eigenstates of the helicity operator $\hat{k}\cdot\vec{\bf J}$:
$\hat{k}\cdot\vec{\bf J}|k,h\rangle=h|k,h\rangle,$ (1)
with integer or half-integer eigenvalue $h$. The sole change from the familiar
case $\rho=0$ to non-zero $\rho$ is that Lorentz transformations $\Lambda$
transform $|k,h\rangle$ into a superposition of states $|\Lambda
k,h^{\prime}\rangle$ with integer $h^{\prime}-h$. In the limit
$\rho\rightarrow 0$, this superposition becomes simply $|\Lambda k,h\rangle$
times a phase $e^{ih\theta(\Lambda)}$ for a suitable $\theta$. While a theory
of $\rho=0$ particles can consistently include _only_ a single helicity $h$
(or two states with $\pm h$, accounting for CPT symmetry), particles of non-
zero $\rho$ must possess an infinite tower of states of _all_ integer
helicities (or all half-integer helicities, a case we will not consider
further here). This fact suggests, at first glance, that CSP physics must be
quite different from that of the familiar theories of helicity 0, 1, and 2
particles.
However, the application of Weinberg’s famous soft limit of scattering
amplitudes revealed that CSP interactions are necessarily well approximated by
familiar gauge theories in the limit of small $\rho$ [3]. Only three types of
consistent scattering amplitudes can exist in the soft limit, and each
displays a “helicity correspondence” at CSP energies large compared to $\rho$.
In the first, scalar-like amplitude, the helicity-0 mode is emitted most
strongly, with an amplitude that is well approximated by ordinary massless
scalar interactions, while all other helicity $\pm h$ modes have emission
amplitudes suppressed by $(\rho/E)^{h}/h!$. In the other two types of
amplitude, photon-like and graviton-like interactions, respectively, the
helicity $\pm 1$ ($\pm 2$) modes are emitted most strongly, with amplitudes
well approximated by ordinary vector gauge theories (perturbative GR), while
other helicity amplitudes are suppressed by $(\rho/E)^{|h-1|}$
($(\rho/E)^{|h-2|}$). At energies much larger than $\rho$, the other
helicities must be present in the theory, but induce only small effects. In
the more complete formalism of [6], general scattering amplitudes in QED at
finite $\rho$ retain this behavior, and the sum over all polarization states
is finite, causing no new divergences in cross sections.
The mere presence of the tower of modes raises a thermodynamic puzzle first
mentioned (in a brief remark) by Wigner [32]: in a theory with CSPs, which
have infinitely many polarization states, the vacuum’s heat capacity per unit
volume is formally infinite. This feature, which appears discontinuous between
$\rho=0$ and arbitrarily small non-zero $\rho$, is sometimes taken to preclude
the relevance of CSPs to the physical world (and is sometimes conflated with
possible inconsistencies with black hole thermodynamics in GR). In this paper,
we decisively address Wigner’s original concerns regarding CSP thermodynamics.
The key insight is that any physical system has had only finite time to
(approximately) thermalize. Helicity correspondence implies that at high-
enough energies, only one CSP mode interacts appreciably with matter, while
the others decouple. Hot matter quickly thermalizes the principally
interacting mode of a CSP (helicity $\pm 1$ in the photon-like case), but
other modes take parametrically longer to thermalize. The formal infinity of
polarization modes is never physically relevant, because only a finite number
of modes will reach equilibrium in any finite time. As such, late-time
corrections to $\rho=0$ thermodynamics are calculable and the $\rho\to 0$
limit is smooth. Indeed, for temperatures $T\gg\rho$, the effects of non-zero
$\rho$ on thermodynamics are parametrically small over exponentially long
times.
### I.1 Summary of results
This paper clarifies and elaborates on the physical conclusion summarized
above, with a focus on QED at finite $\rho_{\gamma}$, as defined in [6], i.e.
a CSP “photon” with vector-like couplings to matter. The qualitative features
of our results would apply equally to scalar- or tensor-like CSPs, but we
choose the CSP photon as our primary example because it is the case of
greatest physical interest.
We will use the terms ‘ordinary photon’ and ‘familiar photon’ to refer to the
QED photon at $\rho_{\gamma}=0$. We will consider basic reactions that produce
and absorb on-shell photons in a bath of charged particles, which is precisely
what the formalism of [6] is appropriate for. However, for our analysis of
elementary thermodynamics, we will use soft CSP momentum limits of the
scattering amplitudes. We do this because (a) it is simpler than using full
amplitudes, and (b) it captures the parametric $\rho$- and energy-scaling
for arbitrary processes, up to ${\cal O}(1)$ factors.
In this paper, we explore in detail the effects of non-zero spin scale
$\rho_{\gamma}$ on the phase-space evolution of a CSP photon gas coupled to a
thermal gas of charged particles. We show that for energies much greater than
the spin scale $\rho_{\gamma}$ (“correspondence domain” of the CSP gas):
* •
If brought into contact with a thermal bath at finite temperature, a CSP
photon can be modeled as the familiar photon with small, time- and
$\rho_{\gamma}$-dependent corrections to its effective number of degrees of
freedom.
* •
There is a strong hierarchy in mode thermalization, where all but the $h=\pm
1$ helicity modes take parametrically long to thermalize in the
$\rho_{\gamma}\rightarrow 0$ limit. The thermalization of the CSP proceeds
increasingly slowly with time.
At energies much smaller than the spin scale (“deviation domain” of the CSP
gas), we find that:
* •
There is a weak hierarchy in mode thermalization, where a large (but finite,
${\cal O}(\rho_{\gamma}/E)$) number of modes thermalize comparably to the
$h=\pm 1$ modes, but take parametrically longer to do so at progressively
lower energies.
* •
The radiation power spectrum at ultra-low frequencies is significantly
enhanced relative to the familiar black body spectrum.
At temperatures $T\gg\rho_{\gamma}$, these effects collectively generate
parametrically small corrections to the total energy density in radiation,
even at times exponentially larger than the photon thermalization time.
The remainder of this paper is presented as follows: We will begin our
discussion in section II with the thermodynamic setup. Subsequently, we
discuss CSP behavior in the “correspondence domain” and “deviation domain” in
turn, introducing the key ideas step-by-step in sections III and IV
respectively. Section V will serve as an extended synthesis of results, and we
will highlight a collection of open questions in Section VI.
## II Setup
We start by considering an idealized thermal system: a gas of charged
particles held at a constant temperature $T$ and constant number density. The
gas can be relativistic or non-relativistic; in either case, we assume it is non-degenerate.
At times $t<0$, the CSP photon phase space density is taken to be $f_{h}=0$
for all $h$.
At time $t=0$, we ‘turn on’ the coupling of the charged particles to CSP
photons. The interactions of charged matter produce CSP photons, which begin
to thermalize. We assume, in keeping with helicity correspondence, that CSP
modes do not have appreciable self-interactions and so thermalize
predominantly via interactions with the charged scatterers. With this
assumption, the Boltzmann equation can be solved exactly to study the
thermodynamic evolution of the photon gas (see Appendix A). The phase space
density for mode $h$ evolves as:
$f_{h}(E,t)=f_{h}^{(eq)}(E)[1-\exp{(-t/\tau_{h}(E))}]$ (2)
Here, $\tau_{h}(E)$ is the ‘characteristic thermalization time’ of mode $h$ at
energy $E$, given by:
$\begin{split}\tau_{h}(E)=f_{h}^{(eq)}(E)\bigg{[}&\int
d\Pi_{in}f_{in}d\Pi_{out}(2\pi)^{4}\\\ &\times\delta^{4}\big(\Sigma p_{in}-\Sigma
p_{out}-p_{\gamma_{h}}\big)|\mathcal{M}|^{2}\frac{1}{E}\bigg{]}^{-1}\end{split}$ (3)
Here, ‘in’ denotes the incoming scatterers, $f_{in}$ are their phase space
distributions (which can follow any equilibrium statistics), ‘out’ denotes all
outgoing particles except CSP mode $\gamma_{h}$,
$d\Pi\equiv\frac{g}{(2\pi)^{3}}\frac{d^{3}p}{2E}$, and there is an implied
product over all species. Any averages over initial and final spins of
scattering states (besides the photon) as well as symmetry factors are
implicit, and we ignore any effects of Bose enhancement or Fermi suppression
from the outgoing states. $f_{h}^{(eq)}(E)$ is the equilibrium distribution of
mode h, which will follow Bose-Einstein statistics.
$f_{h}^{(eq)}(E)=[\exp(E/T)-1]^{-1}$ (4)
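As a quick illustration, the relaxation law (2) together with the equilibrium distribution (4) can be evaluated in a few lines of Python. This is a sketch for orientation only; the helper names (`f_eq`, `f_h`, `tau_star`) are ours, and `tau_star` is a placeholder carrying the $\tau_{*}(E)\propto E^{2}$ scaling that will appear in (9), not a quantity computed in this paper:

```python
import numpy as np

def f_eq(E, T):
    """Bose-Einstein equilibrium phase space density, Eq. (4)."""
    return 1.0 / np.expm1(E / T)

def f_h(E, t, T, tau_h):
    """Phase space density of a single CSP mode at energy E and time t, Eq. (2).
    tau_h is a callable returning the mode's characteristic thermalization time."""
    return f_eq(E, T) * (1.0 - np.exp(-t / tau_h(E)))

# Placeholder thermalization time with the tau_*(E) = tau_*(T) (E/T)^2 scaling
# of Eq. (9), in units where T = 1 and tau_*(T) = 1 (an illustrative choice).
tau_star = lambda E: E ** 2

E = np.array([0.1, 1.0, 10.0])   # energies in units of T
for t in (0.1, 1.0, 100.0):      # times in units of tau_*(T)
    print(f"t = {t:5.1f}:", np.round(f_h(E, t, 1.0, tau_star) / f_eq(E, 1.0), 4))
```

The output shows the low-energy phase space filling first, a pattern that recurs throughout what follows.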
Equation (3) is completely general, but going forward, we will work in the
soft limit, with
$|\mathcal{M}|=|\mathcal{M}|_{0}\;|\Sigma_{i}q_{i}z_{i}F_{h}(\rho_{\gamma}z_{i})|$
(5)
where the subscript ‘0’ denotes the underlying scattering process to which we
attach soft photons in the charged legs denoted by ‘i’, with momenta $p_{i}$
and carrying charges $q_{i}$. The soft factor in (5) follows [2] and uses:
$\displaystyle z_{i}$ $\displaystyle\equiv\frac{\epsilon(k)\cdot p_{i}}{k\cdot p_{i}}$ (6)
$\displaystyle F_{h}(w)$ $\displaystyle\equiv\frac{\left(J_{h}(|w|)-c\,\delta_{h0}\right)e^{ih\,\mathrm{arg}(w)}}{|w|}$ (7)
where $\epsilon(k)$ is the polarization vector, $J_{h}(w)$ is a Bessel
function of the first kind, and $c$ is an arbitrary constant; due to
charge conservation, the contribution from the $c\delta_{h0}$ terms cancels
when the sum over charged legs is taken. To study the thermodynamics from a
general scattering process, it will be convenient to work with a single
charged leg and leave the sum implicit, and for such calculations it is
convenient to set $c=1$ at higher energies ($E\gg\rho_{\gamma}$) and $c=0$ at
low energies ($E\ll\rho_{\gamma}$). We will use this scheme for calculations
in this paper. Note that in the $\rho_{\gamma}\rightarrow 0$ limit,
$F_{h}(\rho_{\gamma}z_{i})\rightarrow 1$ for $h=\pm 1$,
$F_{h}(\rho_{\gamma}z_{i})\rightarrow 0$ for all other $h$, and we recover the
familiar soft limit factorization of the QED photon amplitude [33] in (5).
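This hierarchy is easy to check numerically from (7) as written; a minimal sketch follows (assuming `scipy` is available; the helper name `F_h` is ours, and we display magnitudes relative to the $h=1$ mode so that overall normalization conventions drop out):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_h

def F_h(h, w, c=1.0):
    """Soft factor of Eq. (7) for integer helicity h and (complex) argument w."""
    absw = np.abs(w)
    return (jv(h, absw) - c * (h == 0)) * np.exp(1j * h * np.angle(w)) / absw

# At small |w| = rho_gamma |z|, |F_h| scales like |w|^{|h|-1}: the h = +-1 modes
# dominate and each extra unit of helicity costs another power of |w|.
for w in (1e-1, 1e-2, 1e-3):
    ratios = [abs(F_h(h, w)) / abs(F_h(1, w)) for h in (0, 2, 3)]
    print(f"|w| = {w:.0e}:", ["%.2e" % r for r in ratios])
```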
We will use the term ‘primary modes’ to refer to $h=\pm 1$ modes of the CSP
photon, ‘partner modes’ to refer to $h=0$ and $\pm 2$ modes, and ‘sub-partner
modes’ to refer to all other helicities. For the discussion going forward, we
allow the charged scatterers to be non-relativistic, with a velocity
$v_{\perp}$ transverse to the soft photon (which was set to unity in I.1). In
the non-relativistic limit,
$\rho_{\gamma}z\approx\frac{\rho_{\gamma}v_{\perp}}{E}$. All numerical
simulations in this paper use a full thermal distribution of $v_{\perp}$;
however, we use its average value $\langle v_{\perp}\rangle$ in equations. We
also note here that all numerical simulations in the paper use the full Bessel
function soft factors without any simplifications, but we will present several
equations that use approximate forms of $J_{h}(x)$ in the applicable regimes
as an aid to understanding thermodynamic behavior. Allowing for a non-
relativistic scatterer, the “correspondence domain” will refer to
$E\gg\rho_{\gamma}\langle v_{\perp}\rangle$ and the “deviation domain” will
refer to $E\lesssim\rho_{\gamma}\langle v_{\perp}\rangle$.
From equations (5), (6) and (7), it can be seen that when
$\rho_{\gamma}\langle v_{\perp}\rangle/E\ll 1$, the primary modes of the CSP
photon behave similarly as the QED photon (with corrections discussed in III).
When $\rho_{\gamma}\langle v_{\perp}\rangle/E\gg 1$, the primary modes couple
more weakly to charged matter than the QED photon, and much more
democratically as other helicites. Intuitively, the difference in behavior in
these two energy regimes can be understood as a consequence of $\rho_{\gamma}$
having mass dimension 1 - we can expect deviations from familiar behavior at
lower energies.
In the following two sections, we study these two energy regimes in detail. We
will proceed slowly, introducing key concepts as we proceed, and bring the big
picture together in Section V.
## III Behavior at energies above spin scale
$\bigl{(}E\gg\rho_{\gamma}\langle v_{\perp}\rangle\bigr{)}$:
“Correspondence Domain”
In this section, we study the thermal evolution and behavior of the CSP gas at
energies greater than the photon’s spin scale. We demonstrate that in this
energy regime, a CSP photon gas behaves like a thermal gas of the familiar
photon, with sub-dominant, finite, calculable corrections from all other
helicities. Since we expect the spin scale to be small in nature, the
discussion in this section applies to most energy regimes probed in familiar
physical systems (that have $T\gg\rho_{\gamma}$). Even when the thermal system
under study is ultra-cold ($T\ll\rho_{\gamma}$), its dominant behavior could
be in this energy regime if the charged scatterers are highly non-relativistic
($\langle v_{\perp}\rangle\ll 1$).
We will begin by reviewing the characteristic thermalization times in III.1,
and show that mode thermalization in the correspondence domain follows a
strong hierarchy. This sets the stage for our discussion on how the phase
space of the CSP photon is populated in III.2, with most partner and sub-
partner modes only partially thermal, and contributing only small corrections
to the relativistic degrees of freedom from the primary modes. We will
subsequently discuss the energy density of CSP gas overall in III.3. (This
discussion is possible at this juncture because all deviation domain
contributions to CSP number density and energy density are sub-dominant in
familiar thermal systems, and we will justify this in Section IV.) We
calculate the time dependence of evolution in CSP energy density explicitly,
and show that the CSP photon gas increases its internal energy progressively
more slowly as time progresses; thus, we demonstrate explicitly that despite
having an infinite helicity tower, the CSP photon gas has finite energy at all
finite times. We defer a discussion on the full CSP number density and degrees
of freedom to Section V.
### III.1 Hierarchy in characteristic thermalization times
Using the equations for characteristic thermalization time introduced in II
((3), (5), (6), (7)), and using the Taylor expansion of Bessel functions of
the first kind [34, (10.2.2)] (which is valid for all helicities throughout
the correspondence domain), we find that the characteristic thermalization
time of mode $h$ of the CSP photon to leading order follows:
$\frac{\tau_{h}(E)}{\tau_{*}(E)}\sim\frac{\bigl{\langle}|z|^{2}\bigr{\rangle}}{\bigl{\langle}|zF_{h}(\rho_{\gamma}z)|^{2}\bigr{\rangle}}\;\approx\;2^{\tilde{h}}(\tilde{h}+1)!^{2}\biggl{(}\frac{E}{\rho_{\gamma}\langle
v_{\perp}\rangle}\biggr{)}^{2\tilde{h}}$ (8)
where we define $\tilde{h}\equiv||h|-1|$ to capture the mode ‘distance from
primary modes’, and we introduce a benchmark thermalization time $\tau_{*}(E)$,
defined as the characteristic thermalization time of an ordinary photon
($\rho_{\gamma}=0$) at the energy $E$ of interest. (Footnote 1: a proper
calculation of the thermal averages in (8) in the non-relativistic limit can
reduce the factorial scaling from $(\tilde{h}+1)!^{2}$ to $(\tilde{h}+1)!$ for
some helicities. Such non-relativistic effects are fully included in all
numerical simulations and elaborated in Appendix B, but omitted from the main
text of the paper.)
$\tau_{*}(E)\sim\tau_{*}(T)\biggl{(}\frac{E}{T}\biggr{)}^{2}$ (9)
For primary modes, $\tilde{h}=0$, and the ratio in (8) reduces to 1 - that is,
the primary modes of the CSP photon behave like the familiar photon at leading
order in this energy range, with corrections only at
$\mathcal{O}(\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{E})^{2}$. The other
modes, all of which have lower production cross sections, have longer
thermalization times. The thermalization of partner modes is suppressed by
$\mathcal{O}(\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{E})^{2}$ relative to
the primary mode. Sub-partner modes thermalize parametrically slower, not only
due to higher order suppressions by $(\frac{\rho_{\gamma}\langle
v_{\perp}\rangle}{E})$, but also due to suppressions via
$2^{(\tilde{h}+1)}(\tilde{h}+1)!^{2}$, which grows super-exponentially in $h$.
Figure 1 demonstrates the hierarchy in characteristic thermalization times,
and the exponentially longer timescales needed for populating the phase space
of the partner and sub-partner modes.
Figure 1: These figures illustrate that CSP photon modes thermalize with a
strong “double hierarchy” in the correspondence domain
($E\gg\rho_{\gamma}\langle v_{\perp}\rangle$). For these illustrations, we
choose $T=10^{4}\rho_{\gamma}$ and $\langle v_{\perp}\rangle=0.1$.
Left: Shows the characteristic thermalization times $\tau_{h}$ of the CSP
photon modes at 3 chosen energies $E$, per equation (8). The x-axis is the
helicity $|h|$. The y-axis is logarithmic in time, and shown in units of
characteristic thermalization time $\tau_{*}(T)$ of an ordinary photon
undergoing the same thermodynamic process at temperature T.
Right: Shows the rate at which primary, partner and $h=\pm 3$ modes are
populating their phase space at a given energy (chosen to be E = T) as time
evolves, per equations (2) and (8). The x-axis is logarithmic in time, and
shown in units of benchmark thermalization time $\tau_{*}(T)$. The y-axis is
the phase space density $f_{h}$ of the mode at the chosen energy, normalized
to the equilibrium Bose-Einstein distribution.
It is also instructive to review how $\tau_{h}$ of a single mode $h$ depends
on energy. It follows from (8) and (9) that $\tau_{h}(E)\propto
E^{2(\tilde{h}+1)}$ for all $E\gg\rho_{\gamma}\langle v_{\perp}\rangle$.
Therefore the low energy phase space close to $E\sim\rho_{\gamma}\langle
v_{\perp}\rangle$ thermalizes first, whereas the higher energies take longer
to thermalize, with a greater suppression at higher helicities.
Thus, the thermalization of a CSP gas at energies above the spin scale is
“doubly hierarchical” (hierarchical in helicity and energy), with higher
energy phase space of sub-partner modes getting super-exponentially
suppressed. We next present a method to study the effects of this “double
hierarchy”.
### III.2 Partial thermalization of modes
Due to the strong “doubly hierarchical” thermalization behavior, a CSP gas as
a whole is always partially thermal at any finite time. The phase space
density in a mode $h$ at any time $t$ can be obtained by integrating (2), and
the number density in the mode follows from it [35]:
$f_{h}(t)=\int_{0}^{\infty}dE\,f_{h}^{(eq)}(E)[1-\exp{(-t/\tau_{h}(E))}]$ (10)
$n_{h}(t)=\int_{0}^{\infty}dE\,n_{h}^{(eq)}(E)[1-\exp{(-t/\tau_{h}(E))}]$ (11)
where $n_{h}^{(eq)}(E)\equiv\frac{1}{2\pi^{2}}f_{h}^{(eq)}(E)E^{2}$. (12)
Note that each mode of the CSP carries a single internal degree of freedom.
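These integrals are straightforward to evaluate numerically. A minimal sketch (the helper name `n_h` is ours), using the leading-order $\tau_{h}(E)$ of (8)-(9) and working in units where $T=1$ and $\tau_{*}(T)=1$; this is illustrative only, since the paper's simulations use the full Bessel-function soft factors:

```python
import numpy as np
from math import factorial

def n_h(t, h, rho_v_over_T):
    """Mode-h number density at time t from Eq. (11), in units where T = 1 and
    tau_*(T) = 1, with tau_h(E) modeled by the leading order of Eqs. (8)-(9)."""
    E = np.logspace(-4, 1.5, 4000)                    # energies in units of T
    ht = abs(abs(h) - 1)                              # h-tilde of Eq. (8)
    tau = E**2 * 2.0**ht * factorial(ht + 1)**2 * (E / rho_v_over_T)**(2 * ht)
    n_eq = E**2 / (2 * np.pi**2 * np.expm1(E))        # Eq. (12) with Eq. (4)
    y = n_eq * -np.expm1(-t / tau)                    # integrand of Eq. (11)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))  # trapezoid rule

n_eq_tot = n_h(np.inf, 1, 1e-5)  # t -> infinity recovers the equilibrium value
for h in (1, 2, 3):              # cf. the partial thermalization of Figure 2
    print(f"h = {h}: n_h / n_h^(eq) ~ {n_h(1e10, h, 1e-5) / n_eq_tot:.3e}")
```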
At a given time $t$, the low energy phase space for which $\tau_{h}(E)\ll t$
is approximately thermalized. This behavior continues until a maximum energy
$E_{h\wedge}(t)$ that satisfies the condition $\tau_{h}(E)=t$, above which
density is well below the thermal value. Thus, the differential phase space
density per unit energy at time $t$ (integrand of (10)) can be expressed in
terms of its equilibrium value, and the differential number density per unit
energy at time $t$ (integrand of (11)) has the same form. The latter is
(Footnote 2: note that per (11), at $E=E_{h\wedge}(t)$, $n_{h}$ has already
fallen well below its equilibrium value; (13) thus provides a conservative
estimate.):
$n_{h}(E,t)\lesssim\begin{cases}n_{h}^{(eq)}(E)&\rho_{\gamma}\langle
v_{\perp}\rangle\leq E\leq E_{h\wedge}(t)\\\
n_{h}^{(eq)}(E)(\frac{E_{h\wedge}(t)}{E})^{2(\tilde{h}+1)}&E\geq
E_{h\wedge}(t)\end{cases}$ (13)
where all variables and parameters are defined as before. Note that a more
complete expression for number density will be discussed in Section V. We can
use (8) and (9) to express $E_{h\wedge}(t)$ as:
$E_{h\wedge}(t)\sim\biggl{[}\,\frac{t}{\tau_{*}(T)}\frac{(\rho_{\gamma}\langle
v_{\perp}\rangle)^{2\tilde{h}}}{(\tilde{h}+1)!^{2}}T^{2}\biggr{]}\,^{\frac{1}{2(\tilde{h}+1)}}\leavevmode\nobreak\
\leavevmode\nobreak\ \text{for all}\leavevmode\nobreak\ {h}$ (14)
For the equilibrium phase space of a mode $h$ to be fully populated by time
$t$, its $E_{h\wedge}(t)$ has to be $\gg T$. Since $E_{h\wedge}(t)\propto
t^{\frac{1}{2(\tilde{h}+1)}}$, it increases progressively more slowly for higher
helicities, and remains close to $\frac{\rho_{\gamma}\langle
v_{\perp}\rangle}{|h|}$ for many orders of time after $\tau_{*}(T)$ in the
high helicity limit. Differential number density per unit energy at a given
time $t$ is illustrated in Figure 2. As we expected, most modes are only
partially thermal at any given time, and sub-partner modes occupy
progressively smaller fractions of their total available phase space.
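For concreteness, (14) can be evaluated directly; a small sketch (the helper name `E_hat` is ours; same parameters as Figure 2, with all quantities in units of $T$ and $\tau_{*}(T)$):

```python
from math import factorial

def E_hat(h, t, rho_v_over_T):
    """Thermalization frontier E_{h wedge}(t) of Eq. (14), in units of T;
    t is in units of tau_*(T) and rho_v_over_T = rho_gamma <v_perp> / T."""
    ht = abs(abs(h) - 1)
    inner = t * rho_v_over_T ** (2 * ht) / factorial(ht + 1) ** 2
    return inner ** (1.0 / (2 * (ht + 1)))

# T = 1e4 rho_gamma, <v_perp> = 0.1, snapshot at t = 1e10 tau_*(T) (cf. Figure 2)
for h in (1, 2, 3):
    print(f"h = {h}: E_hat/T ~ {E_hat(h, 1e10, 1e-4 * 0.1):.2e}")
```

The output is consistent with the qualitative picture of Figure 2: the primary modes are fully thermal, the partner modes are thermal up to $E\sim T$, and the $h=\pm 3$ modes only up to $E\sim 10^{-2}T$.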
Figure 2: This figure illustrates that at any finite time, the CSP photon gas
is only partially thermalized. We show the differential number density per
unit energy of CSP modes at a snapshot in time. The x-axis is the energy $E$,
shown in units of temperature $T$. The y-axis is the mode differential number
density per unit energy, normalized to the equilibrium value at temperature T.
As shown, the helicity $\pm 1$ modes have fully equilibrated, the partner
modes have only thermalized their phase space up to $E\sim T$, and the $h=\pm
3$ modes have thermalized a vanishingly small fraction of their phase space
(up to $E<10^{-2}T$). The parameter choices for this illustration are:
Temperature $T=10^{4}\rho_{\gamma}$, $\langle v_{\perp}\rangle=0.1$, and time of
snapshot $t=10^{10}\tau_{*}(T)$.
As mentioned in the section introduction, we now turn to the overall CSP energy
density, and will return to the CSP number density in Section V.
### III.3 CSP energy density
We will demonstrate two critical aspects of the CSP energy density:
1. 1.
The total energy density is finite at all finite times
2. 2.
The rate of increase in energy density is inverse in time (once the primary
modes are thermal; see the note below (III.3)). That is, the energy density
increases more and more slowly as time evolves
The full derivation, including the simplifying assumptions made, is provided in
Appendix C; here we present the main results. We will use $\mathcal{E}$ to
denote energy density instead of the more commonly used $\rho$, since we
reserve the latter for the spin scale.
The energy density in the CSP gas is:
$\mathcal{E}_{CSP}(t)=\sum_{h}\int_{0}^{\infty}dEn_{h}(E,t)E$ (15)
We start with its time rate of change.
$\frac{d}{dt}\mathcal{E}_{CSP}(t)\lesssim\sum_{h}\int_{E_{h\wedge}(t)}^{\infty}dE\,n_{h}^{(eq)}(E)\,E\,\frac{1}{\tau_{h}(E)}$ (16)
$\approx\sum\limits_{h\,:\,\tau_{h}(T)\geq t,\ h\neq\pm 1}\frac{T^{4}}{2\pi^{2}[2\tilde{h}-1]}\,t^{\left(-1+\frac{3}{2(\tilde{h}+1)}\right)}\left(\frac{\rho_{\gamma}^{2}}{\langle p_{\perp}^{2}\rangle}\right)^{\frac{3\tilde{h}}{2(\tilde{h}+1)}}\left[\frac{1}{\tau_{*}(T)(\tilde{h}+1)!^{2}}\right]^{\frac{3}{2(\tilde{h}+1)}}$ (III.3)
All variables and parameters in (III.3) are as defined before. The conditional
sum singles out all the helicities that are still populating their phase space
up to $E=T$, since at $E>T$ the increase in energy density is exponentially
suppressed. We assume the primary modes are thermal and use $\tau_{*}(T)$ as
the benchmark time. (While it is thermalizing, the energy density in the
primary mode does not change per the form in (III.3) but instead follows
$\frac{d}{dt}\mathcal{E}_{*}(t)\propto t^{\frac{1}{2}}$; when a mode is
thermalizing, we can generally expect its energy density to change roughly in
line with $TE_{h\wedge}^{3}(t)$. Refer to Appendix C for more details.)
Even though (III.3) looks forbidding, its key features are straightforward.
First, the rate of change of the energy density in all modes is inverse in time,
since $\frac{d\mathcal{E}_{h}}{dt}\propto
t^{\bigl(-1+\frac{3}{2(\tilde{h}+1)}\bigr)}$ for all $\tilde{h}\neq 0$.
This means that even though the mode energy density is increasing with time,
it does so more and more slowly. Second, the sum in (III.3) is convergent,
ensured by the effective $\tilde{h}^{4}$ suppression of its terms. Lastly,
$\rho_{\gamma}$ hierarchically controls which helicities participate in the
conditional sum, via its control on $\tau_{h}(T)$.
Due to these factors, the energy density of the CSP as a whole increases very
slowly and in a well-controlled manner. At any time, the sum over modes in
(III.3) is dominated by the nearest non-thermal mode (smallest $\tilde{h}$
participating in the convergent sum), with thermalized modes dropping out of
the conditional sum. (15) and (III.3) point to a simplified model for the CSP
energy density:
$\mathcal{E}_{CSP}(t)<\sum\limits_{h\,:\,\tau_{h}(T)<t}\frac{\pi^{2}}{30}T^{4}+\sum\limits_{h\,:\,\tau_{h}(T)\gtrsim t}\int_{0}^{t}dt'\,\frac{d}{dt'}\mathcal{E}_{CSP}(t')$ (18)
In (18), both terms are conditional sums: the first sum is taken over modes
that have thermalized by time $t$, whereas the second sum accounts for energy
density in modes that are still thermalizing at time $t$. We note here that
(18) overestimates the energy density at any time, since we assume that all
the available energy density in a mode, i.e., $\frac{\pi^{2}}{30}T^{4}$ [35], is
unlocked when the mode is thermal up to $E=T$. This is a very conservative
assumption, since only $\approx 3.5\%$ of the total energy density of a Bose
gas resides at energies below $T$, i.e., has been unlocked by the time the gas
has fully thermalized its phase space up to temperature $T$. Additionally,
specifically for CSP photon modes, unlocking the additional energy density in
the region of phase space above $T$ is parametrically harder, since
characteristic thermalization times scale as $E^{2\tilde{h}}$ per (8).
Nevertheless, we adopt this toy model for its tractability and the aid it
provides in building intuition.
From (III.3) and (18), it is easy to see that when a mode is thermalizing,
that is, at times $t\lesssim\tau_{h}(T)$, its energy density is increasing as:
$\mathcal{E}_{h}(t)<\frac{3}{(\tilde{h}+1)^{3}}\,t^{\frac{3}{2(\tilde{h}+1)}}\quad\text{for}\ h\,:\,\tau_{h}(T)\geq t,\ \tilde{h}\neq 0$ (19)
Table 1 lists how $\mathcal{E}_{h}(t)$ varies for select modes, and
compares it with the behavior of the primary modes, given by
$\mathcal{E}_{*}(t)$. (19) implies that the sum over thermalizing modes in
(18) is convergent at any time. Since there are a finite number of thermal
modes at any finite time, the sum over thermalized modes in (18) is also
convergent at any time. Thus, the CSP photon gas carries finite energy
at all finite times.
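The counting behind this conclusion can be made concrete with a minimal sketch of the toy model (18): treating a mode as contributing one full Bose degree of freedom once $\tau_{h}(T)<t$, with $\tau_{h}(T)$ taken from (8). The implementation below is our own illustrative reading of the first sum in (18), evaluated in $\log_{10}$ to avoid overflow.

```python
# Minimal sketch of the first (thermalized-mode) sum in (18): count the degrees
# of freedom unlocked by time t, using tau_h(T)/tau_*(T) from (8) at E = T.
from math import factorial, log10

rho_v = 1e-5  # rho_gamma <v_perp>/T, as in the Figure 3 parameters

def log10_tau_h_T(h_tilde):
    """log10 of tau_h(T)/tau_*(T) per (8)."""
    return (h_tilde * log10(2) + 2 * log10(factorial(h_tilde + 1))
            + 2 * h_tilde * log10(1.0 / rho_v))

def g_thermal(log10_t):
    """Modes with tau_h(T) < t; the primary pair (h = +-1) is always counted."""
    g, h_tilde = 2, 1
    while log10_tau_h_T(h_tilde) < log10_t:
        g += 3 if h_tilde == 1 else 2  # h = 0, +-2 share h_tilde = 1
        h_tilde += 1
    return g

print(g_thermal(100))  # ~19 dof by t = 1e100 tau_*(T), cf. ~18.7 in Figure 3
```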
Table 1: Upper bounds on the increase in mode energy density with time during thermalization (when $\tau_{h}(T)\gtrsim t$) for select modes. $\mathcal{E}_{*}(t)$ is shown in the first row for comparison.
Mode | Time dependence of $\mathcal{E}_{h}(t)$ (slower than)
---|---
$\mathcal{E}_{*}(t)=\mathcal{E}_{\pm 1}(t)$ | $t^{3/2}$
$\mathcal{E}_{0,\pm 2}(t)$ | $t^{3/4}$
$\mathcal{E}_{\pm 3}(t)$ | $t^{1/2}$
$\mathcal{E}_{h}(t)\ \textrm{as}\ h\rightarrow\infty$ | Frozen/constant
This is a remarkable result: even when supplied with an isothermal bath that
makes infinite energy available to it, the CSP gas takes infinite time to
access that energy. This is the crux of why Wigner’s infinite heat capacity
argument is not physically relevant. Figure 3 illustrates these aspects of CSP
energy density.
Figure 3: This figure illustrates that the CSP photon gas has finite energy
and increases its energy density progressively slowly with time, as indicated
by (III.3) and (19). The x-axis is logarithmic in time, and shown in units of
the benchmark thermalization time $\tau_{*}(T)$, the time taken for a QED photon
undergoing the same thermodynamic process to populate its phase space up to
$E=T$. The y-axis is the energy density, normalized to the total energy
density in a fully thermalized Bose-Einstein distribution
$\frac{\pi^{2}}{30}T^{4}$ [35]. We show a log-log plot to make the slowing
growth rate in energy density manifest. It can be seen that in
$10^{100}\tau_{*}(T)$, the CSP photon gas has unlocked $\approx 18.7$ of the
infinitely many degrees of freedom it has in principle, with a
decelerating rate of growth. The parameter choices for this illustration are:
Temperature $T=10^{4}\rho_{\gamma}$, and $\langle v_{\perp}\rangle=0.1$.
Simulations account for mode thermalization behaviors over the full energy
range $E=[0,\infty)$, and the dashed lines indicate analytical extensions to
simulations.
## IV Behavior at energies below the spin scale $\bigl(E\lesssim\rho_{\gamma}\langle v_{\perp}\rangle\bigr)$: “Deviation Domain”
As summarized in Section I.1, the CSP photon gas has a fundamentally
different, but still well-controlled behavior at energies
$\lesssim\rho_{\gamma}\langle v_{\perp}\rangle$. Since we expect the spin
scale $\rho_{\gamma}$ to be small, the discussion in this section applies to a
very small volume of the overall phase space of the CSP gas in familiar
thermodynamic systems (with $T\gg\rho_{\gamma}$). Nevertheless, we devote some
attention to this since thermal CSP photons in this energy regime exhibit
interesting deviations from thermal QED photons. In ultra-cold thermal systems
(those that have $T\sim\rho_{\gamma}\langle v_{\perp}\rangle$), most of the
phase space will be in the deviation domain, and this could have interesting
phenomenological implications. Additionally, the well-controlled nature of CSP
thermal behavior in the deviation domain is not readily apparent and requires
different arguments than in the correspondence domain. As before, we begin by
studying the characteristic thermalization times.
### IV.1 Weaker hierarchy in characteristic thermalization times
In this subsection, we show that there is a very weak hierarchy in mode
thermalization at energies much smaller than $\rho_{\gamma}\langle
v_{\perp}\rangle$. We will see that this weak hierarchy is balanced by the
parametrically longer thermalization times as we move down the energy scale
and $E\rightarrow 0$.
Using the equations for characteristic thermalization time introduced in II
((3), (5), (6), (7)), we find that the characteristic thermalization time of
mode $h$ at energy $E\ll\rho_{\gamma}\langle v_{\perp}\rangle$ is given by
(20) below. (For $\bigl(\frac{\rho_{\gamma}\langle
v_{\perp}\rangle}{E}\bigr)^{1+\varepsilon}<|h|<\frac{\rho_{\gamma}}{E}$, the
Bessel functions cannot be expressed in any simpler form for the entire range
of the thermal integration in (3); nevertheless, it can be numerically verified
that the Bessel amplitudes fall off rapidly as mode numbers increase. For a
relativistic scatterer, the second case in (20) applies for
$|h|\gg(\frac{\rho_{\gamma}}{E})^{1+\varepsilon}$.)
$\frac{\tau_{h}(E)}{\tau_{*}(E)}\sim\begin{cases}\bigl(\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{E}\bigr)^{3}&|h|\lessapprox\bigl(\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{E}\bigr)^{1+\varepsilon}\\[2pt] 2^{\tilde{h}}(\tilde{h}+1)!^{2}\bigl(\frac{E}{\rho_{\gamma}\langle v_{\perp}\rangle}\bigr)^{2\tilde{h}}&|h|>\frac{\rho_{\gamma}}{E}\ \text{and}\ |h|>\frac{1}{2\langle v_{\perp}\rangle^{2}}\end{cases}$ (20)
Here, $0<\varepsilon\ll 1$; $\varepsilon$ indicates that additional modes with
$|h|\sim\mathcal{O}\bigl(\frac{\rho_{\gamma}\langle
v_{\perp}\rangle}{E}\bigr)$ follow behavior similar to that of the
$|h|\lessapprox\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{E}$
modes. The benchmark thermalization time $\tau_{*}(E)$ still follows (9). The
low helicity case above uses the asymptotic form [34, (10.17.3)] of Bessel
functions of the first kind. For the high helicity case, the Taylor expansion
[34, (10.2.2)] is valid, just as in the correspondence domain. Equation (20)
implies that in the deviation domain, CSP modes follow a fundamentally
different thermalization behavior from that in the correspondence domain. We
highlight three key aspects of this behavior.
First, whereas ordinary photons thermalize _more rapidly_ at lower energies
since $\tau_{*}(E)$ scales as $E^{2}$, CSP photon modes with
$|h|\lessapprox\bigl{(}\frac{\rho_{\gamma}\langle
v_{\perp}\rangle}{E}\bigr{)}^{1+\varepsilon}$ thermalize _less rapidly_ at low
energies since $\tau_{h}(E)$ scales as $E^{-1}$. So, at any time $t$, we can
define the lowest energy $E_{h\vee}(t)$ down to which a CSP mode is thermal.
We return to this in IV.2 and IV.3.
Second, in the deviation domain the primary mode is no longer ‘special’.
Instead, as we move to lower energies, an increasing number of modes (those
with helicity $\lesssim\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{E}$)
thermalize on nearly identical timescales, but this timescale also increases
as a positive power of $\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{E}$. This
behavior might seem surprising at first, but it is simply a consequence of
lower scattering amplitudes at lower energies, and of these scattering amplitudes
having a weaker dependence on helicity (up to a limit). As we will see later,
this beautiful balance between the number of modes thermalizing and the
increasing thermalization time keeps the low energy phase space of the CSP
photon well-controlled at all finite times.
Third, modes with high helicity ($|h|>\frac{\rho_{\gamma}}{E}$) still take
parametrically longer to thermalize: as discussed above, these modes behave
exactly as they do in the correspondence domain, with their thermalization
times following (8). We provide additional perspectives on this in IV.3.
Thus, we still have a hierarchical thermalization behavior, albeit a much
weaker one when compared to that in the correspondence domain. The hierarchy
in characteristic thermalization times in the deviation domain is illustrated
in Figure 4.
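As a rough numerical illustration of (20), the sketch below evaluates both branches in $\log_{10}$ (to avoid overflow at high helicity); the parameter values are illustrative and not tied to the simulation behind Figure 4.

```python
# Minimal sketch: the two branches of (20) for tau_h(E)/tau_*(E) deep in the
# deviation domain; E_over_rhov = E/(rho_gamma <v_perp>).
from math import lgamma, log, log10

def log10_tau_ratio(E_over_rhov, h_tilde, low_helicity):
    """log10 of tau_h(E)/tau_*(E) per (20)."""
    if low_helicity:  # |h| <~ (rho_gamma<v_perp>/E)^(1+eps): helicity independent
        return 3 * log10(1.0 / E_over_rhov)
    # high helicity branch: 2^h~ (h~+1)!^2 (E/(rho_gamma<v_perp>))^(2h~)
    return (h_tilde * log10(2) + 2 * lgamma(h_tilde + 2) / log(10)
            + 2 * h_tilde * log10(E_over_rhov))

print(log10_tau_ratio(1e-3, 0, True))      # ~9: shared by all low-|h| modes
print(log10_tau_ratio(1e-3, 5000, False))  # enormous: strong hierarchy at high |h|
```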
Figure 4: This figure illustrates the controlled thermalization behavior of
the CSP photon modes in the deviation domain. It can be seen that the
thermalization behavior follows a weak hierarchy, as given by (20), with up to
$\mathcal{O}\bigl(\frac{\rho_{\gamma}\langle
v_{\perp}\rangle}{E}\bigr)^{1+\varepsilon}$ modes thermalizing identically
to the primary modes. For the three energies illustrated, it can be seen that
$\varepsilon\ll 1$. A strong hierarchy in thermalization behavior, akin to
that in the correspondence domain, sets in at higher helicities. We illustrate
these behaviors at three energies, each separated by an order of magnitude.
The x-axis is the helicity $|h|$. The y-axis is logarithmic in time, and shown
in units of benchmark thermalization time $\tau_{*}(T)$. The parameter choices
for this illustration are: Temperature $T=10^{4}\rho_{\gamma}$ and $\langle
v_{\perp}\rangle=0.1$.
We studied the characteristic thermalization times at
$E\gg\rho_{\gamma}\langle v_{\perp}\rangle$ using (8) and at
$E\ll\rho_{\gamma}\langle v_{\perp}\rangle$ using (20). We now describe an
intuitive way to understand mode thermalization behavior over the full energy
range, including intermediate energies $E\sim\rho_{\gamma}\langle
v_{\perp}\rangle$.
### IV.2 Characteristic energy of a mode
We have seen that the thermalization time of a given mode increases both at very
high energies and at very low energies, with the former controlled by the
first term in the Taylor expansion of the Bessel function in (7) and the
latter by its large-argument asymptotic scaling. In between these two regimes,
CSP emission amplitudes are not readily approximated, but their behavior can be
studied analytically with Bessel functions, and thus can be easily bounded
[34, (10.14.1), (10.14.2)]. An important energy scale in the problem is the
_characteristic energy_ $E_{\mathrm{ch}}(h)$ of a given mode, at which that mode
thermalizes most rapidly.
$E_{\mathrm{ch}}(h)\approx\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{f^{\prime}_{h,1}}=\begin{cases}\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{j^{\prime}_{h,1}}\sim\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{|h|}&h\neq 0\\[2pt] \frac{\rho_{\gamma}\langle v_{\perp}\rangle}{1.52}&h=0\end{cases}$ (21)
In (21), $f^{\prime}_{h,1}$ denotes the first positive zero of the
derivative of $F_{h}(x)$ as defined in (7), $j^{\prime}_{h,1}$ denotes
the first positive zero of the derivative of $J_{h}(x)$ [36] (the error in
approximating $j^{\prime}_{h,1}$ by $|h|$ decreases as $|h|$ increases; see
[36]), and $f^{\prime}_{0,1}$ follows from using $c=0.5$ in (7). Note that
$E_{\mathrm{ch}}(h)\leq\rho_{\gamma}\langle v_{\perp}\rangle$ for all $h$.
At energies below its characteristic energy, the mode thermalization time
increases inversely with energy, and at $E\ll E_{\mathrm{ch}}(h)$ it follows the
equation for the low helicity case in (20). At energies greater than its
characteristic energy, the mode thermalization time increases with energy, and
at $E\gg E_{\mathrm{ch}}(h)$ it follows (8). At $E=E_{\mathrm{ch}}(h)$, the mode
thermalization time has a global minimum:
$\frac{\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)}{\tau_{*}(T)}\sim\begin{cases}\bigl(\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{T}\bigr)^{2}|h|&h\neq 0\\[2pt] \bigl(\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{T}\bigr)^{2}\,1.52&h=0\end{cases}$ (22)
Since a mode thermalizes fastest at its characteristic energy,
$\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$ gives a lower bound on $\tau_{h}(E)$. Since this
lower bound is monotonic in $|h|$, it is straightforward to see that
$|h|\rightarrow\infty$ modes need infinite time even to populate the phase
space close to their characteristic energy, which itself approaches zero.
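The characteristic energy and the bound (22) are easy to evaluate; a minimal sketch follows, using SciPy's zeros of $J^{\prime}_{h}$ for $j^{\prime}_{h,1}$ and the value $1.52$ quoted in (21) for $h=0$ (the parameter values are illustrative).

```python
# Minimal sketch: characteristic energy (21) and the global-minimum bound (22).
import scipy.special as sp

rho_v = 1e-5  # rho_gamma <v_perp>/T (illustrative)

def E_ch(h):
    """E_ch(h)/T per (21); j'_{h,1} from scipy, 1.52 for h = 0 per (21)."""
    zero = 1.52 if h == 0 else sp.jnp_zeros(abs(h), 1)[0]
    return rho_v / zero

def tau_min(h):
    """tau_h(E_ch(h))/tau_*(T) per (22)."""
    return rho_v**2 * (1.52 if h == 0 else abs(h))

for h in [0, 1, 2, 10, 100]:
    print(h, E_ch(h), tau_min(h))  # E_ch falls as ~1/|h|; tau_min grows as |h|
```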
### IV.3 Mode thermalization behavior
The notion of characteristic energy (21), and the lower bound on
thermalization time it provides (22), allow us to understand mode
thermalization behavior throughout the deviation domain, including at
intermediate energies $E\sim\rho_{\gamma}\langle v_{\perp}\rangle$. At times
$t\ll\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$, no region of mode phase space has populated
appreciably. At these times,
$n_{h}(E,t)\approx n_{h}^{(eq)}(E)\frac{t}{\tau_{h}(E)}\quad\text{for all}\ E\ \text{at}\ t\ll\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$ (23)
At $t\sim\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$, the region of phase space close to
$E_{\mathrm{ch}}(h)$ is mostly populated, but the rest of the mode phase space is
non-thermal. As time evolves, phase space at energies higher and lower than the
characteristic energy thermalizes.
At energies greater than $E_{\mathrm{ch}}(h)$, we can define the maximum energy up
to which the mode phase space is thermal at a given time: $E_{h\wedge}(t)$,
introduced and discussed in Section III.2. To track how the region of phase
space at $E\gg E_{\mathrm{ch}}(h)$ is populated as time evolves, (13) and the
discussion associated with its evolution remain valid. It can be verified that
at $t=\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$, (14) reduces to $E_{\mathrm{ch}}(h)$
in the high helicity limit.
At energies less than $E_{\mathrm{ch}}(h)$, we can define an analogous quantity
$E_{h\vee}(t)$: the minimum energy down to which the mode phase space is thermal
at a given time. $E_{h\vee}(t)$ tracks how the phase space below
$E_{\mathrm{ch}}(h)$ is populated as time evolves.
$E_{h\vee}(t)\sim\frac{(\rho_{\gamma}\langle v_{\perp}\rangle)^{3}}{T^{2}}\frac{\tau_{*}(T)}{t}\quad\text{for}\ t\gtrsim\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$ (24)
So at energies below the characteristic energy, the differential number density
per unit energy at time $t$ can be expressed as:
$n_{h}(E,t)\lesssim\begin{cases}n_{h}^{(eq)}(E)&E_{h\vee}(t)\leq E\leq E_{\mathrm{ch}}(h)\\[2pt] n_{h}^{(eq)}(E)\frac{E}{E_{h\vee}(t)}&E\leq E_{h\vee}(t)\end{cases}$ (25)
We can now gain some intuition for the mode thermalization hierarchies in the
deviation domain (20).
At any given energy $E<\rho_{\gamma}\langle v_{\perp}\rangle$, the modes that
near-identically thermalize are those that have characteristic energies
greater than $E$, so they are all evolving in the phase space region below
their respective characteristic energies in accordance with (25), with
their $E_{h\vee}(t)$ changing as in (24); the helicity independence of (24)
explains why their evolution is near-identical. As we go down the energy
scale, we pass the characteristic energies of more modes per (21), and these
new modes also start evolving in accordance with (25) and (24).
At any given energy $E$, the modes that thermalize with parametrically longer
times in (20) are those that have characteristic energies lower than $E$, so
they are evolving in the phase space region higher than their respective
characteristic energies, with thermalization behavior approaching (8) at $E\gg
E_{\mathrm{ch}}(h)$. When we get to the correspondence domain, all modes have
characteristic energies smaller than the energy we are interested in, and all
modes thermalize per (8).
### IV.4 CSP number density, energy density and spectrum of thermal radiation
We now discuss three key aspects of the CSP number density and energy density
in the deviation domain:
1. 1.
The total number density and energy density are both finite at all finite
times
2. 2.
The total energy density in the deviation domain is sub-dominant to the total
energy density in the correspondence domain (for systems with
$T\gg\rho_{\gamma}$)
3. 3.
The thermal radiation from a CSP photon gas is expected to be stronger than the
standard black body spectrum of a thermal QED gas
First, the CSP number density and energy density in the deviation domain are
finite, since only a finite number of modes contribute to them at all finite
times. Those modes for which
$\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)>t$ would not have populated any portion of their phase
space to equilibrium densities, and since
$\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$ varies as a positive power of $|h|$, infinite time is
needed for the deviation domain of the CSP to fully thermalize. Additionally,
the non-equilibrium contributions summed over all CSP modes are also finite,
since the total number of contributing modes is almost exactly offset by the
fractional thermalization at that energy (per (23) and (20)); the low
volume of phase space at these energies fully regulates the number density and
energy density in the deviation domain at all finite times.
Second, the total energy density in the region of phase space below
$\rho_{\gamma}\langle v_{\perp}\rangle$ remains sub-dominant (assuming
$T\gg\rho_{\gamma}\langle v_{\perp}\rangle$), despite many more modes
thermalizing simultaneously. It can be shown that we can get an order of
magnitude estimate of the total energy in the deviation domain of the CSP gas
using only the energy in the ‘boundary region’ $E\sim\rho_{\gamma}\langle
v_{\perp}\rangle$. (The total energy density in any mode is most sensitive to
its $E_{h\wedge}(t)$ and relatively insensitive to its $E_{h\vee}(t)$, so we
can get an order of magnitude estimate of the total energy in any mode by
tracking only the behavior of its $E_{h\wedge}(t)$. This simplifying
assumption can be further justified by noting that the total energy
contribution to the CSP from thermalization of every mode’s phase space region
$E_{h\vee}(t)\leq E\leq E_{\mathrm{ch}}(h)$ is at all times bounded by
$\mathcal{O}((\rho_{\gamma}\langle v_{\perp}\rangle)^{3}T)$, while the total
contribution to the CSP deviation domain energy from every mode’s
$E_{\mathrm{ch}}(h)\leq E\leq E_{h\wedge}(t)$ is dominated by the contribution
from the highest $E_{h\wedge}(t)$ at any time $t$, i.e., the energy at the
‘boundary region’ $E\sim\rho_{\gamma}\langle v_{\perp}\rangle$.) This
boundary-region energy can be expressed in the suggestive form:
$\mathcal{E}_{\mathrm{deviation\ domain}}\sim\mathcal{O}\biggl(\Bigl(\frac{\rho_{\gamma}\langle v_{\perp}\rangle}{T}\Bigr)^{3}h_{max}\,T^{4}\biggr)$ (26)
where $h_{max}$ is the highest helicity that has populated its phase space at
$E=\rho_{\gamma}\langle v_{\perp}\rangle$, given by the mode that saturates
the condition $E_{h\wedge}(t)=\rho_{\gamma}\langle v_{\perp}\rangle$ in (14):
$\frac{t}{\tau_{*}(T)}\biggl(\frac{T}{\rho_{\gamma}\langle v_{\perp}\rangle}\biggr)^{2}\approx 2^{h_{max}}\,h_{max}!^{2}$ (27)
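Because the right-hand side of (27) grows super-exponentially, $h_{max}$ can be found by a simple search; the sketch below is an illustrative solver for (27), with the left-hand side supplied in $\log_{10}$.

```python
# Minimal sketch: solve (27) for h_max, i.e. the largest h with
# 2^h h!^2 <= (t/tau_*(T)) (T/(rho_gamma <v_perp>))^2.
from math import factorial, log10

def h_max(log10_lhs):
    h = 1
    while h * log10(2) + 2 * log10(factorial(h)) < log10_lhs:
        h += 1
    return h - 1

# e.g. t = 1e10 tau_*(T) and T = 1e5 rho_gamma<v_perp>  ->  lhs = 1e20
print(h_max(20.0))  # ~11
```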
From (26), it is clear that for systems with $\rho_{\gamma}\langle
v_{\perp}\rangle\ll T$, the deviation domain of the CSP gas contains a
negligible fraction of the total energy in even a single fully thermalized
mode of a Bose gas, which is $\mathcal{O}(T^{4})$. The only way this could be
circumvented is if, at a given time, $h_{max}$ could be
$>\mathcal{O}\biggl(\frac{T}{\rho_{\gamma}\langle
v_{\perp}\rangle}\biggr)^{3}$ _and_ those modes which have
$E_{h\wedge}(t)>\rho_{\gamma}\langle v_{\perp}\rangle$ at that time (which
necessarily have smaller helicities than $h_{max}$) have not thermalized much of
their correspondence domain. This condition can never be met, since the energy
scaling of the characteristic thermalization time at $E\gg
E_{\mathrm{ch}}(h)$ is $\tau_{h}\propto E^{2(\tilde{h}+1)}$ whereas the helicity
scaling is the much stronger $\tau_{h}\propto(\tilde{h}+1)^{2(\tilde{h}+1)}$.
Simply put, it is _much_ easier for smaller helicities to populate their
higher energy phase space than it is for larger helicities to even populate
their phase space at energies close to their characteristic energies; at any
time, it is impossible for $h_{max}$ to grow large enough to compensate for
the energy contained in the correspondence domain of the smaller helicities.
Third, despite the sub-dominance of the total deviation domain energy of the
thermal CSP gas, its black body spectrum shows substantial deviations from
that of the familiar thermal QED gas, both in radiated power and in spectrum
shape. Due to the near-identical thermalization of up to
$\mathcal{O}\bigl(\frac{\rho_{\gamma}\langle
v_{\perp}\rangle}{E}\bigr)^{1+\varepsilon}$ modes at energies
$E_{h\vee}(t)\leq E\leq\rho_{\gamma}\langle v_{\perp}\rangle$, the CSP
differential energy density stays nearly linear down to a time-dependent low
energy cutoff. (There will be deviations from linearity because: a)
$0<\varepsilon\ll 1$, and b) if we wait long enough, we can expect
contributions from the modes that are populating the region of their phase
space above their characteristic energies.) Below this low energy cutoff, no
mode has populated its phase space to the equilibrium value, but
non-equilibrium contributions of up to
$\mathcal{O}\bigl(\frac{\rho_{\gamma}\langle
v_{\perp}\rangle}{E}\bigr)^{1+\varepsilon}$ modes can still add up per (23),
resulting in a quadratic spectrum shape similar to the QED gas in the deep IR,
albeit with stronger radiated power. (Small deviations from the quadratic form
can exist in principle because $0<\varepsilon\ll 1$; however, as can be seen
in Figure 4, $\varepsilon$ decreases as we move to lower energies, so these
deviations are likely to be negligible.) Thus the CSP gas radiates more
strongly than the QED gas at all frequencies in the deviation domain, with the
exact spectrum shape dependent on the allowed thermalization time, the spin
scale and the temperature. This is in contrast to the QED photon thermal
spectrum, which remains quadratic in frequency at all $\omega\ll T$. Since the
power radiated by the CSP photon at these very low frequencies is spread
across modes with suppressed matter interactions, further study is needed to
evaluate the detectability of this excess.
Figure 5 illustrates all three aspects of CSP deviation domain behavior
discussed above.
Figure 5: This figure illustrates the differences in the spectrum of thermal
radiation of a CSP photon gas vs. a QED photon gas, at time
$t=10^{10}\tau_{*}(T)$, with discernible deviations from the familiar photon
blackbody spectrum prominent at all deviation domain frequencies. We show a
log-log plot to make the behavior across frequencies manifest. The x-axis is
the radiated frequency $\omega$ in units of temperature $T$. The y-axis shows
the power radiated at that frequency, in units of power radiated by a fully
thermalized QED photon gas at the frequency $\omega=T$. The lowest frequency
that has been populated to equilibrium density by any mode at this time is
$\omega\approx 10^{-25}T$. The dashed portion of the green CSP line indicates
an analytical extension to lower frequencies, where all energy radiated is
from non-equilibrium contributions summed over all modes. Weaker deviations
from the standard spectrum are also apparent at frequencies close to $T$ since
the partner modes have populated some of the correspondence domain phase space
at this time. Despite these strong deviations from the standard spectrum in the
deviation domain, the total energy radiated by the CSP photon gas is strongly
dominated by frequencies higher than $\rho_{\gamma}\langle v_{\perp}\rangle$:
in this figure, the total energy radiated at frequencies above
$\rho_{\gamma}\langle v_{\perp}\rangle$ is $\approx 10^{15}\times$ the total
energy radiated at frequencies below $\rho_{\gamma}\langle v_{\perp}\rangle$,
a fact which might be obscured by the log-log scale. The parameter choices for
this illustration are: Temperature $T=10^{4}\rho_{\gamma}$, $\langle
v_{\perp}\rangle=0.1$ and time of snapshot $t=10^{10}\tau_{*}(T)$.
## V Synthesis: The effect of a non-zero spin scale on thermodynamic behavior
So far, we have discussed the thermalization behavior of the CSP photon in the
two energy regimes ($E\gg\rho_{\gamma}\langle v_{\perp}\rangle$ and
$E\leq\rho_{\gamma}\langle v_{\perp}\rangle$), examining the key aspects of
the behavior and building intuition step by step. In this section, we bring
together a synthesis of the key results already discussed and present the
complete picture of CSP thermalization behavior across all energies and times.
We supplement this synthesis with some additional salient aspects of CSP
thermodynamics whose discussion required the complete picture.
### V.1 Overall mode thermalization behavior
This subsection first brings together the key aspects of thermalization
behavior of a single mode across phase space. Subsequently, we synthesize all
the helicity-based thermalization hierarchies.
#### V.1.1 The complete picture of mode thermalization
Working in the soft limit, we saw that the thermalization behavior of all CSP
modes follows the same pattern: each mode has a characteristic energy, given
by (21), at which its thermalization time is shortest. As we move along
the energy scale in either direction away from the characteristic energy,
the characteristic thermalization time of the mode increases, with
$\tau_{h}\rightarrow\infty$ as $E\rightarrow 0$ and as $E\rightarrow\infty$. At
energies much greater than its characteristic energy, a mode’s characteristic
thermalization time grows with energy, following (8). At energies much less
than its characteristic energy, the mode’s characteristic thermalization time
is inverse in energy, following the low helicity case in (20). Notably, this
behavior is also followed by the primary modes, whose characteristic energy is
$\sim\rho_{\gamma}\langle v_{\perp}\rangle$.
Figure 6 illustrates the behavior of the characteristic thermalization time
across phase space for select modes.
Figure 6: This log-log plot illustrates that for every thermalizing CSP photon
mode, the characteristic thermalization times diverge in the UV and IR (but
approach these divergences at different rates), keeping the total energy in
the CSP photon gas well controlled at all times. The x-axis is shown in units
of $\rho_{\gamma}\langle v_{\perp}\rangle$, and the y-axis is shown in units
of benchmark thermalization time $\tau_{*}(T)$. We illustrate the behavior for
primary modes, partner modes, and select high helicities chosen for
illustrative purposes. It can be seen that: a) all modes have a characteristic
energy $E_{\mathrm{ch}}(h)$ given by (21), at which they thermalize fastest
(given by (22)); these can be read off at the minima for each mode in the
figure; b) higher helicities have lower characteristic energies, and take
increasingly longer to populate their phase space even at these characteristic
energies.
$h=\pm 10,000$ modes have already increased their $\tau_{h}(E)/\tau_{*}(T)$ to
$\mathcal{O}(10^{104})$ (beyond the y-axis cutoff in figure) at less than one
order of magnitude in energy above their characteristic energy. The dashed
lines denote the specific energies we used to illustrate mode thermalization
hierarchies in Figure 1 and Figure 4. In this figure, we include the hierarchy
across energies to complete the picture of “double hierarchy” in UV
thermalization and “weak hierarchy” in IR thermalization. The parameter
choices for this illustration are: Temperature $T=10^{4}\rho_{\gamma}$, and
$\langle v_{\perp}\rangle=0.1$.
This thermalization behavior means that each mode achieves equilibrium number
density at its characteristic energy first. At any given time $t$, we can
identify the lowest energy $E_{h\vee}(t)$, and the highest energy
$E_{h\wedge}(t)$, between which a mode has populated its equilibrium phase
space. At times
$t\lesssim\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$, $E_{h\vee}(t)\approx E_{h\wedge}(t)\approx
E_{\mathrm{ch}}(h)$, which is as yet not fully populated. As time evolves beyond
$\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)$, $E_{h\wedge}(t)$ is given by (14) and $E_{h\vee}(t)$ by (24).
Putting these together, the complete equation for the differential mode number
density at any time is given by:
$n_{h}(E,t)\lessapprox\begin{cases}\text{At time}\ t\lesssim\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr):\\ \quad n_{h}^{(eq)}(E)\frac{t}{\tau_{h}(E)}&\text{for all}\ E\\[4pt] \text{At time}\ t\gtrsim\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr):\\ \quad n_{h}^{(eq)}(E)\frac{E}{E_{h\vee}(t)}&0\leq E\leq E_{h\vee}(t)\\ \quad n_{h}^{(eq)}(E)&E_{h\vee}(t)\leq E\leq E_{h\wedge}(t)\\ \quad n_{h}^{(eq)}(E)\bigl(\frac{E_{h\wedge}(t)}{E}\bigr)^{2(\tilde{h}+1)}&E_{h\wedge}(t)\leq E<\infty\end{cases}$ (28)
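For reference, the late-time branch of (28) is simple to implement; the sketch below combines it with (14) and (24), in units of $T$ and $\tau_{*}(T)$, and is an illustration of the formulas rather than of the full simulation.

```python
# Minimal sketch: n_h(E,t)/n_h^(eq)(E) per the late-time branch of (28),
# with E_wedge from (14) and E_vee from (24). Units: E in T, t in tau_*(T).
from math import factorial

rho_v = 1e-5  # rho_gamma <v_perp>/T (illustrative)

def E_wedge(t, h_tilde):
    return (t * rho_v**(2 * h_tilde) / factorial(h_tilde + 1)**2) ** (1.0 / (2 * (h_tilde + 1)))

def E_vee(t):
    return rho_v**3 / t  # per (24): helicity independent

def n_over_eq(E, t, h_tilde):
    lo, hi = E_vee(t), E_wedge(t, h_tilde)
    if E < lo:
        return E / lo                           # under-populated IR tail
    if E > hi:
        return (hi / E) ** (2 * (h_tilde + 1))  # under-populated UV tail
    return 1.0                                  # thermal window

print(n_over_eq(1.0, 1e10, 1))  # partner mode at E = T, t = 1e10 tau_*(T)
```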
#### V.1.2 The complete picture of mode thermalization hierarchies
While each mode follows this behavior, there are multiple hierarchies in the
thermalization across modes that govern how the CSP gas as a whole behaves.
The following hierarchies were discussed in III and IV:
1. (i)
The mode characteristic energy, given by (21), is monotonic in $\tilde{h}$,
with $E_{\mathrm{ch}}(h)\rightarrow 0$ as $\tilde{h}\rightarrow\infty$ and
$E_{\mathrm{ch}}(h)\rightarrow\rho_{\gamma}\langle v_{\perp}\rangle$ as
$\tilde{h}\rightarrow 0$. This means modes with higher helicities first
populate their phase space at smaller energies.
2. (ii)
The mode thermalization time at its characteristic energy, given by (22),
which gives the shortest time taken by a mode to achieve equilibrium density
at any region in its phase space, is also monotonic in $\tilde{h}$, with
$\tau_{h}\bigl(E_{\mathrm{ch}}(h)\bigr)\rightarrow\infty$ as $\tilde{h}\rightarrow\infty$. This
means modes with higher helicities take parametrically longer even to populate
the region of phase space close to their (already smaller) characteristic
energies.
3. (iii)
In the correspondence domain, all modes have a “double hierarchy” in their
characteristic thermalization times relative to the primary mode per (8), due
to cross sections suppressed via $\biggl{(}\frac{E}{\rho_{\gamma}\langle
v_{\perp}\rangle}\biggr{)}^{2\tilde{h}}$ as well as
$2^{\tilde{h}}(\tilde{h}+1)!^{2}$. That is, higher helicities find it super-
exponentially difficult to populate any region of their phase space, with the
difficulty also increasing polynomially with energy. This hierarchy is
reflected in the behavior of $E_{h\wedge}(t)$, which grows as
$t^{\frac{1}{2(\tilde{h}+1)}}$ per (14).
4. (iv)
In the deviation domain, characteristic thermalization times follow a much
weaker hierarchy, given by (20).
The mode thermalization hierarchies at energies much greater than and much
less than the spin scale ((iii) and (iv) above) are illustrated in Figures 1
and 4 respectively. The hierarchies in mode characteristic energies and in the
mode thermalization times at characteristic energy ((i) and (ii) above) are
evident in Figure 6.
### V.2 Overall CSP behavior
We first present analyses estimating the effective relativistic degrees of
freedom $g_{CSP}(t)$. We then synthesize the salient aspects of CSP
thermalization behavior, energy density and spectrum of thermal radiation
discussed previously across Sections III.2, III.3 and IV.4.
#### V.2.1 Effective relativistic degrees of freedom
The total number density of the CSP gas is given by the sum over mode
densities:
$n_{CSP}(t)=\sum_{h}\int_{0}^{\infty}dE\,n_{h}(E,t)$ (29)
$\equiv g_{CSP}(t)\,\frac{\zeta(3)}{\pi^{2}}T^{3}$ (30)
where we separate out the familiar form for a thermalized Bose gas [35] to define
the effective internal degrees of freedom for the CSP gas. $g_{CSP}(t)$
accounts for the fractional thermalization of every CSP mode and is a useful
state variable of this system. Since the CSP gas is always partially thermal,
it should be immediately apparent that, at any finite time, $g_{CSP}(t)$ falls
far short of the naive guess of infinity. Since $g_{CSP}(t)$ depends on how much
of its phase space each mode has populated by the time $t$, it implicitly
depends on $E_{h\wedge}(t)$ and $E_{h\vee}(t)$ of every CSP mode.
Given the hierarchical, controlled thermalization behavior of the CSP at all
energies, we can model the CSP as the familiar photon, but receiving time-
dependent corrections to its relativistic degrees of freedom:
$g_{CSP}(t)\approx[2-\Delta(t)]+\delta g(t)$ (31)
where the $2-\Delta(t)$ degrees of freedom correspond to the primary modes,
and $\delta g(t)$ accounts for the correction from partial thermalization of
all other modes. We estimate these effects using (28):
$\begin{split}g_{CSP}(t)\approx\sum_{h}\frac{1}{\zeta(3)}\biggl[&\mathrm{Li}_{3}\bigl(e^{-\frac{E_{h\vee}(t)}{T}}\bigr)-\mathrm{Li}_{3}\bigl(e^{-\frac{E_{h\wedge}(t)}{T}}\bigr)\\ &+\frac{E_{h\vee}(t)}{T}\mathrm{Li}_{2}\bigl(e^{-\frac{E_{h\vee}(t)}{T}}\bigr)-\frac{E_{h\wedge}(t)}{T}\mathrm{Li}_{2}\bigl(e^{-\frac{E_{h\wedge}(t)}{T}}\bigr)\\ &+\frac{E_{h\vee}(t)^{2}}{2T^{2}}\mathrm{Li}_{1}\bigl(e^{-\frac{E_{h\vee}(t)}{T}}\bigr)-\frac{E_{h\wedge}(t)^{2}}{2T^{2}}\mathrm{Li}_{1}\bigl(e^{-\frac{E_{h\wedge}(t)}{T}}\bigr)\biggr]\end{split}$ (32)
Comparing with (31) and noting that the primary modes thermalize fastest,
we recognize that at times $t\gg\tau_{*}(T)$, the two primary modes
contribute nearly 2 full degrees of freedom, with very small deviations due to
their $E_{h\vee}(t)>0$ at all finite times:
$g_{\pm 1}(t)\equiv[2-\Delta(t)]\approx\frac{2}{\zeta(3)}\biggl[\mathrm{Li}_{3}\bigl(e^{-\frac{E_{h\vee}(t)}{T}}\bigr)+\frac{E_{h\vee}(t)}{T}\mathrm{Li}_{2}\bigl(e^{-\frac{E_{h\vee}(t)}{T}}\bigr)+\frac{E_{h\vee}(t)^{2}}{2T^{2}}\mathrm{Li}_{1}\bigl(e^{-\frac{E_{h\vee}(t)}{T}}\bigr)\biggr]$ (33)
Note that the time and $\rho_{\gamma}$ dependence in (32) and (33) can be made
explicit by using the equations for $E_{h\vee}(t)$ in (24) and
$E_{h\wedge}(t)$ in (14). As $E_{h\vee}(t)\rightarrow 0$ and
$E_{h\wedge}(t)\rightarrow\infty$, a full degree of freedom is unlocked from a
mode. We will consider these conditions as met when $E_{h\vee}(t)\ll T$ and
$E_{h\wedge}(t)\gg T$.
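The bracketed combination in (32) is easy to evaluate with a polylogarithm library; a minimal sketch follows (using mpmath, an implementation choice of ours). It relies on the fact that the bracket equals $\frac{1}{2}\int_{E_{h\vee}/T}^{E_{h\wedge}/T}x^{2}/(e^{x}-1)\,dx$, so a fully unlocked mode contributes exactly one degree of freedom.

```python
# Minimal sketch: one mode's contribution to g_CSP(t) per (32),
# given E_vee(t)/T and E_wedge(t)/T, using mpmath's polylog.
from mpmath import exp, polylog, zeta

def F(x):
    """Li3(e^-x) + x Li2(e^-x) + (x^2/2) Li1(e^-x), the antiderivative in (32)."""
    return polylog(3, exp(-x)) + x * polylog(2, exp(-x)) + 0.5 * x**2 * polylog(1, exp(-x))

def g_mode(x_vee, x_wedge):
    """Bracketed term of (32) for a single mode, normalized by zeta(3)."""
    return (F(x_vee) - F(x_wedge)) / zeta(3)

print(g_mode(1e-12, 1e3))  # -> ~1: a fully unlocked mode
print(g_mode(1e-12, 0.7))  # partner-like mode thermal only up to ~0.7 T
```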
For familiar thermodynamic systems that have $T\gg\rho_{\gamma}$,
$E_{h\vee}(t)$ is always $\ll T$, so the behavior of the relativistic degrees
of freedom tracks the evolution of $E_{h\wedge}(t)$ only. Specifically, it is
driven by the strong mode thermalization hierarchies at energies much greater
than the spin scale. For such thermal systems, the primary modes will be
contributing 2 full degrees of freedom at all $t\gg\tau_{*}(T)$, and any
deviations from this are negligible until the partner modes start appreciably
thermalizing, several orders of magnitude in time after $\tau_{*}(T)$. Ultra-cold
thermal systems (those that have $T\sim\rho_{\gamma}\langle v_{\perp}\rangle$) are
likely to see detectable deviations in $g_{CSP}(t)$ and energy density in
short times, and require further investigation.
#### V.2.2 Synopsis of CSP thermalization behavior, energy density and spectrum of thermal radiation
The essence of the effects of a small non-zero $\rho_{\gamma}$ on photon
thermalization is two-fold:
1. (i)
At energies above the spin scale (“correspondence domain”), we recover
familiar thermodynamic behavior with $\rho_{\gamma}$-dependent corrections to
all thermodynamic quantities that become manifest as the system is allowed to
thermalize over long periods of time. Specifically, in an isothermal system,
we get corrections to internal energy, thermal power spectrum and relativistic
degrees of freedom that are tell-tale signs of a CSP, but such deviations are
observable only with exponentially long thermalization time scales. All
thermodynamically relevant quantities remain finite at all finite times, and
the rate of increase in CSP energy density is inverse in time.
2. (ii)
We unlock an entirely new region of phase space at energies less than the spin
scale (“deviation domain”), with novel behavior from all helicities. The CSP
gas is populated nearly identically by an increasing number of modes at
progressively lower energies, but with thermalization timescales that also
increase in tandem, keeping the IR phase space of the CSP gas well-behaved at
all finite times. The total energy density in the deviation domain remains
sub-dominant, but the power radiated at these very low frequencies shows large
fractional deviations from QED, with calculable deviations in spectrum shape
as well.
Since the spin scale has mass dimension, a thermodynamic system with non-zero
$\rho_{\gamma}$ has a new natural scale (in addition to the temperature). The
interplay of these two energy scales sets the overall thermal behavior of the
CSP gas. Specifically, for a system with $T\gg\rho_{\gamma}$, the dominant
behavior of the CSP gas follows (i) above, whereas for a system with
$T\sim\rho_{\gamma}\langle v_{\perp}\rangle$, we get entirely novel behavior
overall, dominated by (ii) above.
Since the spin scale is expected to be small in nature, the thermalization
behavior of the CSP photon is identical to that of the ordinary photon for
most familiar thermodynamic systems (which will have $T\gg\rho_{\gamma}$). In
such systems, any deviations in thermodynamic behavior due to the familiar
photon potentially having a non-zero spin scale will be manifest only with
very long thermalization timescales (assuming $\rho_{\gamma}$ non-zero but not
so small that it evades detectability in the age of the universe) and/or in
the deviation domain behavior (assuming $\rho_{\gamma}$ large enough to be
detectable with available low energy technologies).
## VI Open Questions
In this section, we briefly discuss the open questions that arise from the
thermodynamic study of CSP photons presented in this paper. Resolving these is
beyond the scope of this work, but could inspire aspects of our future study.
We expect the spin scale of the CSP photon $\rho_{\gamma}$ to be small in
nature. If this weren’t the case, the CSP photon gas would rapidly thermalize
its partner and sub-partner modes, with rapid increase in its internal energy
and degrees of freedom, essentially acting as a system with a high heat
capacity that grows discernibly with time. In an isothermal bath, such as the
one in the early universe shortly before recombination, a photon gas with
$\rho_{\gamma}$ much larger than the $\mathrm{meV}$ scale would have exhibited a
fundamentally different behavior from what has been observed. For
$\rho_{\gamma}$ smaller than the $\mathrm{meV}$ scale, we expect only small
departures from standard thermal behavior, but it would be interesting to study how best to
use early universe thermal signatures to constrain the spin-scale of the
photon.
In this paper, we investigated an isothermal system, with unbounded energy
available to thermalize the CSP gas. Even with such a set up, we saw that the
CSP gas takes infinite time to access that energy. Despite possessing infinite
internal degrees of freedom in principle, a thermodynamic system with a CSP
photon gas cannot access those degrees of freedom in any finite time, not
even when supplied with infinite energy to do so. Now, let us consider a more
physically realistic thermodynamic system: for instance, one in which we relax
the isothermal assumption and supply a fixed amount of energy to the CSP gas.
The primary modes of the CSP photon gas will still thermalize rapidly. The other modes
still thermalize on exponentially longer time scales, but we can expect that
some of the energy increase in modes that thermalize later will come from
leakage of energy from the already thermalized modes. In such thermodynamic
systems, the total energy in the CSP gas will be finite and bounded by the
available energy, even with infinite time. We can expect the gas to slowly
lower its overall temperature and increase its entropy as more modes
thermalize. CSP thermalization behavior in this and other thermodynamic
situations requires additional investigation.
Finally, ultra-cold thermal systems (those that have
$T\lesssim\rho_{\gamma}\langle v_{\perp}\rangle$) would provide a potentially
interesting regime in which to study signals of non-zero $\rho_{\gamma}$. To
study such systems completely, we need to include Bose condensation effects
and work with full scattering amplitudes, not just soft limits. This motivates
further study of CSP physics (in QED) at low temperatures in future work.
###### Acknowledgements.
We thank Javier Acevedo, Lance Dixon, Saarik Kalia, Aidan Reilly and Kevin
Zhou for useful discussions over the course of this work. The authors were
supported by the U.S. Department of Energy under contract number DE-
AC02-76SF00515 at SLAC.
## Appendix A Phase space evolution of the thermalizing CSP photon gas
The microscopic evolution of the phase space distribution of every mode $h$ of
the photon gas is governed by the Boltzmann equation [35]:
$\hat{\textbf{L}}[f_{h}]=\textbf{C}[f_{h}]$ (34)
where C is the collision operator and $\hat{\textbf{L}}$ is the Liouville
operator. The covariant, relativistic Liouville operator is [35]:
$\hat{\textbf{L}}=p^{\mu}\frac{\partial}{\partial
x^{\mu}}-\Gamma^{\mu}_{\nu\sigma}p^{\nu}p^{\sigma}\frac{\partial}{\partial
p^{\mu}}$ (35)
In a Minkowski background, the Liouville operator simplifies to:
$\hat{\textbf{L}}[f_{h}(E,t)]=E\frac{\partial}{\partial t}f_{h}(E,t)$ (36)
The collision term for the process $a+b+...\longleftrightarrow
i+j+\gamma_{h}+...$ is given by [35]:
$\begin{split}\hat{\textbf{C}}[f_{h}(E,t)]=\int&d\Pi_{a}d\Pi_{b}\ldots\,d\Pi_{i}d\Pi_{j}\ldots\\ &\times(2\pi)^{4}\delta^{4}(\Sigma p_{in}-\Sigma p_{out+\gamma_{h}})\\ &\times\Bigl[|\mathcal{M}|^{2}_{a+b+..\rightarrow i+j+\gamma_{h}+..}\,f_{a}f_{b}\ldots(1\pm f_{i})(1\pm f_{j})(1\pm f_{h})\ldots\\ &\quad-|\mathcal{M}|^{2}_{i+j+\gamma_{h}+..\rightarrow a+b+..}\,f_{i}f_{j}f_{h}\ldots(1\pm f_{a})(1\pm f_{b})\ldots\Bigr]\end{split}$ (37)
where ‘in’ denotes the incoming scatterers, ‘out’ denotes all outgoing
particles except the CSP mode $\gamma_{h}$ that we are interested in,
$f_{\psi}$ refers to the phase space distribution function of particle $\psi$,
$d\Pi\equiv\frac{g}{(2\pi)^{3}}\frac{d^{3}p}{2E}$, and $(1\pm f_{\psi})$
factors capture Bose enhancement/Fermi suppression respectively. We invoke
time-reversal invariance to set $|\mathcal{M}|^{2}_{a+b+..\rightarrow
i+j+\gamma_{h}+..}=|\mathcal{M}|^{2}_{i+j+\gamma_{h}+..\rightarrow
a+b+..}\equiv|\mathcal{M}|^{2}$. Using the energy conservation part of the
delta function, and assuming chemical and kinetic equilibrium of all other
species, (A) simplifies to:
$\begin{split}\hat{\textbf{C}}[f_{h}(E,t)]=\int&d\Pi_{a}d\Pi_{b}\ldots\,d\Pi_{i}d\Pi_{j}\ldots\\ &\times(2\pi)^{4}\delta^{4}(\Sigma p_{in}-\Sigma p_{out+\gamma_{h}})\,|\mathcal{M}|^{2}\\ &\times f_{a}f_{b}\ldots(1\pm f_{i})(1\pm f_{j})\ldots\Bigl[1+f_{h}\bigl(1-\exp{\tfrac{E}{T}}\bigr)\Bigr]\end{split}$ (38)
where $T$ is the temperature. Note that while all equations up to (A) are
explicitly Lorentz covariant, (38) is not, since we single out the frame of
reference in which the temperature $T$ is a monopole in order to do
thermodynamics. This is the frame in which we will specify all particle
distribution functions.
Recognizing that in equilibrium, $f_{h}$ will follow Bose-Einstein statistics,
we can use $f_{h}^{(eq)}(E)=\frac{1}{\exp{\frac{E}{T}}-1}$ to rewrite the
$\bigl[1+f_{h}\bigl(1-\exp{\frac{E}{T}}\bigr)\bigr]$ factor in (38) as
$\bigl[1-\frac{f_{h}(E,t)}{f_{h}^{(eq)}(E)}\bigr]$. Using (38) and (36),
the Boltzmann equation (34) reduces to the relaxation equation
$\frac{\partial f_{h}(E,t)}{\partial t}=\frac{1}{\tau_{h}(E)}\bigl[f_{h}^{(eq)}(E)-f_{h}(E,t)\bigr]$,
which, with the initial condition $f_{h}(E,0)=0$ and under the assumptions
already stated, is solved by:
$f_{h}(E,t)=f_{h}^{(eq)}(E)[1-\exp{(-t/\tau_{h}(E))}]$ (39)
where $\tau_{h}(E)$ can be recognized as:
$\begin{split}\tau_{h}(E)=&f_{h}^{(eq)}(E)\bigg{[}\int d\Pi_{in}f_{in}(1\pm
f_{i})(1\pm f_{j})...d\Pi_{out}\\\ &\times(2\pi)^{4}\delta^{4}(\Sigma
p_{in}-\Sigma
p_{out+\gamma_{h}})|\mathcal{M}|^{2}\frac{1}{E}\bigg{]}^{-1}\end{split}$ (40)
Ignoring the Bose enhancement/ Fermi suppression factors from the outgoing
states, i.e., taking $(1\pm f_{\psi})\approx 1$, we get (3). When we consider
a multi-photon scattering process, this assumption means that we ignore any
Bose enhancement to the production of partner and sub-partner modes from the
faster thermalization of primary modes. We make the reasonable assumption that
multi-photon CSP processes are sub-dominant to single photon scattering
processes, as in familiar QED. Additionally, familiar QED has the same IR
divergence from Bose-Einstein statistics, so this aspect is not unique to CSP
photons. Hence, we find it best to set the Bose enhancement aside and focus
only on aspects of thermal behavior arising from a non-zero spin scale.
## Appendix B Modifications for a non-relativistic thermal scatterer
In the main paper, we used the average velocity $\langle v_{\perp}\rangle$ in
equations, and used Taylor expansion and/or asymptotic forms of
$J_{h}(\frac{\rho_{\gamma}v_{\perp}}{E})$ where it was valid to do so. These
simplifications were made mainly to aid intuitive understanding of CSP
behavior. This appendix provides some details on modifications needed to some
of these simplified equations when the scatterer is non-relativistic. Note
that all numerical simulations presented in the paper used the full thermal
distribution of $v_{\perp}$ as well as the full Bessel function form of the
soft limit scattering amplitudes (without any simplifications).
Since scattering cross sections have a velocity dependence in the soft limit
per (5), (6) and (7), the velocity distribution of a non-relativistic
scatterer needs to be accounted for when calculating the mode thermalization
times. Equation (3) includes calculations with the following form, using a
Boltzmann distribution for the non-relativistic scatterer:
$\tau_{h}(E)\supset\int_{0}^{1}dv_{\perp}\,v_{\perp}\exp\biggl(-\frac{v_{\perp}^{2}}{2\langle v_{\perp}\rangle^{2}}\biggr)\,\biggl|J_{h}\Bigl(\frac{\rho_{\gamma}v_{\perp}}{E}\Bigr)\biggr|^{2}$ (41)
We can approximate $J_{\alpha}(x)$ by the first term in its Taylor expansion
[34, (10.2.2)]:
$J_{\alpha}(x)=\sum_{m=0}^{\infty}\frac{(-1)^{m}(\frac{x}{2})^{2m+\alpha}}{m!\,\Gamma(m+\alpha+1)}$ (42)
$\approx\frac{(\frac{x}{2})^{\alpha}}{\Gamma(\alpha+1)}\quad\text{when}\ x\ll 2\sqrt{\alpha+1}$ (43)
When the simplifying condition in (43) is valid for the entire range of the
integration in (41), the thermal averaging will give a lower incomplete gamma
function $\gamma(\tilde{h}+1,\frac{1}{2\langle v_{\perp}\rangle^{2}})$, which
is $\approx(\tilde{h}+1)!$ only if $\tilde{h}+1<\frac{1}{2\langle
v_{\perp}\rangle^{2}}$. This means that when working with the simplified forms
of the Bessel scattering cross sections, we need to be careful about the range
of validity of the simplifications. The rest of this appendix discusses the
modifications that need to be made for certain equations in the main paper to
account for the effect of thermal distribution of non-relativistic scatterer
velocities.
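The incomplete-gamma statement above can be checked directly; the following minimal sketch compares the thermal average in the regime of (43) against $\gamma(\tilde{h}+1,\frac{1}{2\langle v_{\perp}\rangle^{2}})$. The prefactors shown are those produced by the substitution $u=v_{\perp}^{2}/2\langle v_{\perp}\rangle^{2}$, an intermediate step we supply for illustration.

```python
# Minimal sketch: in the regime of (43), the thermal average (41) reduces to a
# lower incomplete gamma function. Substituting u = v^2/(2<v>^2) gives
#   int_0^1 dv v exp(-v^2/(2<v>^2)) u^h~  =  <v>^2 * gamma_lower(h~+1, 1/(2<v>^2)).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc  # gammainc = regularized lower gamma

v_avg, h_tilde = 0.1, 3
s2 = v_avg**2

lhs = quad(lambda v: v * np.exp(-v**2 / (2 * s2)) * (v**2 / (2 * s2))**h_tilde, 0, 1)[0]
rhs = s2 * gammainc(h_tilde + 1, 1 / (2 * s2)) * gamma(h_tilde + 1)
print(lhs, rhs)  # agree to numerical precision
```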
Equation (8) gets modified as:
$\frac{\tau_{h}(E)}{\tau_{*}(E)}\sim\frac{\bigl{\langle}|z|^{2}\bigr{\rangle}}{\bigl{\langle}|zF_{h}(\rho_{\gamma}z)|^{2}\bigr{\rangle}}\;\approx\;2^{\tilde{h}}(\tilde{h}+1)!^{(2-\delta)}\biggl{(}\frac{E}{\rho_{\gamma}\langle
v_{\perp}\rangle}\biggr{)}^{2\tilde{h}}$ (44)
where the continuous parameter $\delta$ accounts for the weaker suppression of
smaller helicities due to the thermal distribution of scatterer velocities
when we consider a non-relativistic scatterer. $\delta=0$ for all helicities
when the scatterer is relativistic. When we consider non-relativistic
scatterers, $0\leq\delta\leq 1$: smaller helicities see a shorter time
than that obtained using $\langle v_{\perp}\rangle$ in lieu of the full thermal
distribution of $v_{\perp}$, i.e., $\delta\rightarrow 1$ only for those modes
with $\tilde{h}<\frac{1}{2\langle v_{\perp}\rangle^{2}}$.
Equation (III.3) requires a minor modification, with $(\tilde{h}+1)!^{2}$ in
the denominator replaced by $(\tilde{h}+1)!^{(2-\delta)}$. This manifests as
a change in the helicity scaling of $\frac{d}{dt}\mathcal{E}_{CSP}(t)$, which
we discussed in the main paper as following $\tilde{h}^{-4}$, where three powers
of $\tilde{h}$ came from $(\tilde{h}+1)!^{\frac{3}{(\tilde{h}+1)}}$ and one came
directly from the $2\tilde{h}$ in the denominator. Including the $\delta$
parameter, the helicity scaling varies between $\tilde{h}^{2.5}$ and
$\tilde{h}^{4}$ for a non-relativistic scatterer. Low helicities in the
non-relativistic limit have $\delta\approx 1$ as above, and
$(\tilde{h}+1)!^{\frac{1.5}{(\tilde{h}+1)}}$ grows as $(\tilde{h}+1)^{1.5}$.
High helicities have $\delta=0$ as explained above, and
$(\tilde{h}+1)!^{\frac{3}{(\tilde{h}+1)}}$ grows as $(\tilde{h}+1)^{3}$. In the
relativistic limit, all modes get suppressed with $(\tilde{h}+1)^{3}$. These
modifications do not change anything significant about $\mathcal{E}_{CSP}(t)$,
since its behavior is controlled by other aspects of (III.3). The only key
property of $\mathcal{E}_{CSP}(t)$ that depended on the helicity scaling is the
convergence of the sum, and this continues to hold with the modified scaling
for low helicities.
Equation (19), which directly followed from (III.3), gets the same
modifications discussed in the previous paragraph. For a relativistic
scatterer, the power law dependence for all modes has $(\tilde{h}+1)^{3}$ in
the denominator. For a non-relativistic scatterer, this gets modified to be
$(\tilde{h}+1)$ for low helicities with $\tilde{h}<\frac{1}{2\langle
v_{\perp}\rangle^{2}}$ and $(\tilde{h}+1)^{3}$ for higher helicities.
Note that equation (20), which is valid for $E\ll\rho_{\gamma}\langle
v_{\perp}\rangle$, already implicitly accounted for the ranges in $\delta$.
## Appendix C Time evolution of CSP energy density
The energy density in the CSP gas can be obtained using [35]:
$\mathcal{E}_{CSP}(t)=\sum_{h}\int_{0}^{\infty}dE\,n_{h}(E,t)\,E$ (45)
Using (28), we can write (45) as:
$\mathcal{E}_{CSP}(t)\lessapprox\sum_{h}\biggl[\int_{0}^{E_{h\wedge}(t)}dE\,n_{h}^{(eq)}(E)\,E+\int_{E_{h\wedge}(t)}^{\infty}dE\,n_{h}^{(eq)}(E)\,E\,\frac{t}{\tau_{h}(E)}\biggr]$ (46)
In writing (46), we have used the following: i) the overall energy density in
any mode is most sensitive to the highest occupied energy, and comparatively
insensitive to the lowest occupied energy, allowing us to take
$E_{h\vee}(t)\rightarrow 0$; ii) $E_{h\wedge}(t)$ tracks (14), with the
characteristic thermalization time tracking (8), since $E_{h\wedge}(t)\geq
E_{\mathrm{ch}}(h)$ always.
Taking the time derivative of (46), and noting that $n_{h}(E)$ is continuous
at all energies including $E_{h\wedge}(t)$, we get:
$\frac{d}{dt}\mathcal{E}_{CSP}(t)\lessapprox\sum_{h}\int_{E_{h\wedge}(t)}^{\infty}dEn_{h}^{(eq)}(E)\frac{E}{\tau_{h}(E)}$
(47)
To make (47) tractable for further study, we approximate $f_{h}^{(eq)}(E)$ as
$\frac{T}{E}$ for all $E\leq T$ and as $\exp(-\frac{E}{T})$ for all $E>T$. We
study the two regimes separately, and write:
$\begin{split}\frac{d}{dt}\mathcal{E}_{CSP}(t)\lesssim\frac{1}{2\pi^{2}}\biggl[&\sum\limits_{h\,:\,\tau_{h}(T)\geq t,\ h\neq\pm 1}\int_{E_{h\wedge}(t)}^{\infty}dE\,\frac{TE^{2}}{\tau_{h}(E)}\\ +&\sum\limits_{h\,:\,\tau_{h}(T)<t,\ h\neq\pm 1}\int_{E_{h\wedge}(t)}^{\infty}dE\,e^{-\frac{E}{T}}\frac{E^{3}}{\tau_{h}(E)}\biggr]\end{split}$ (48)
In (48), the former sum is taken over modes that are still thermalizing, given
by the condition $\tau_{h}(T)\geq t$. The latter sum is taken over modes that
have already thermalized their phase space up to energies $E>T$, given by the
criterion $\tau_{h}(T)<t$. We exclude the primary modes from these sums, since
we assume them to be thermal in this analysis, and will use $\tau_{*}(T)$ as
the benchmark time in the next step. Substituting (8) in (48), the integration
in both sums can be done exactly.
$\begin{split}\frac{d}{dt}\mathcal{E}_{CSP}(t)\lesssim&\sum_{\substack{h:\,\tau_{h}(T)\geq t\\ h\neq\pm 1}}\frac{T^{4}}{2\pi^{2}[2\tilde{h}-1]}\,t^{-1}\left(\frac{E_{h\wedge}(t)}{T}\right)^{3}\\ +&\sum_{\substack{h:\,\tau_{h}(T)<t\\ h\neq\pm 1}}\frac{T^{4}}{2\pi^{2}}\,t^{-1}\left(\frac{E_{h\wedge}(t)}{T}\right)^{2(\tilde{h}+1)}\Gamma\Big(2-2\tilde{h},\frac{E_{h\wedge}(t)}{T}\Big)\end{split}$ (49)
In (49), $\Gamma(x,y)$ is the upper incomplete gamma function. Since the second sum is taken over modes that have $E_{h\wedge}(t)>T$, the gamma function falls exponentially or faster with time for $\tilde{h}\geq 1$. So, as long as there are at least some modes with $\tau_{h}(T)\geq t$, i.e., at all finite times, the second sum in (49) is dominated by the sum over thermalizing modes. The time dependence in (49) can be made explicit with (14). Doing so, and dropping the sub-dominant sum over thermalized modes, gives the expression in (III.3).
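As a quick numerical sanity check of the integration step behind (49): once $\tau_{h}(E)$ is taken to scale as $E^{2(\tilde{h}+1)}$ (as the exponents in (49) suggest), the exponential-tail integral in (48) reduces, after substituting $u=E/T$, to an upper incomplete gamma function. A minimal sketch of that reduction, with illustrative values of $\tilde{h}$ and $E_{h\wedge}/T$:

```python
# Check: int_{E_w}^{inf} e^{-E/T} E^(a-1) dE = T^a * Gamma(a, E_w/T), a = 2 - 2*htilde.
# This is the E-integral behind the second sum in (49); htilde and E_w/T are illustrative.
from mpmath import mp, quad, gammainc, exp, mpf, inf

mp.dps = 30                                   # working precision
T = mpf(1)                                    # units with T = 1
for htilde in [1, 2, 3]:
    a = 2 - 2 * htilde
    for x in [mpf('1.5'), mpf(3), mpf(10)]:   # x = E_w / T
        lhs = quad(lambda E: exp(-E / T) * E**(a - 1), [x * T, inf])
        rhs = T**a * gammainc(a, x)           # upper incomplete gamma Gamma(a, x)
        assert abs(lhs - rhs) < mpf('1e-18')
print("incomplete-gamma step verified")
```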
## References
* Wigner [1939] E. P. Wigner, On Unitary Representations of the Inhomogeneous Lorentz Group, Annals of Mathematics 40, 149 (1939).
* Schuster and Toro [2013a] P. Schuster and N. Toro, On the theory of continuous-spin particles: wavefunctions and soft-factor scattering amplitudes, JHEP 09, 104, arXiv:1302.1198 [hep-th] .
* Schuster and Toro [2013b] P. Schuster and N. Toro, On the theory of continuous-spin particles: helicity correspondence in radiation and forces, JHEP 09, 105, arXiv:1302.1577 [hep-th] .
* Schuster and Toro [2015a] P. Schuster and N. Toro, Continuous-spin particle field theory with helicity correspondence, Phys. Rev. D 91, 025023 (2015a), arXiv:1404.0675 [hep-th] .
* Schuster _et al._ [2023] P. Schuster, N. Toro, and K. Zhou, Interactions of Particles with “Continuous Spin” Fields, JHEP 04, 010, arXiv:2303.04816 [hep-th] .
* Schuster and Toro [2024] P. Schuster and N. Toro, Quantum Electrodynamics Mediated by a Photon with Generalized (Continuous) Spin, Accepted to Phys. Rev. D (2024), arXiv:2308.16218 [hep-th] .
* Schuster and Toro [2015b] P. Schuster and N. Toro, A new class of particle in 2 + 1 dimensions, Phys. Lett. B 743, 224 (2015b), arXiv:1404.1076 [hep-th] .
* Bekaert _et al._ [2016] X. Bekaert, M. Najafizadeh, and M. R. Setare, A gauge field theory of fermionic continuous-spin particles, Phys. Lett. B 760, 320 (2016), arXiv:1506.00973 [hep-th] .
* Najafizadeh [2020] M. Najafizadeh, Supersymmetric continuous spin gauge theory, JHEP 03, 027, arXiv:1912.12310 [hep-th] .
* Najafizadeh [2022] M. Najafizadeh, Off-shell supersymmetric continuous spin gauge theory, JHEP 02, 038, arXiv:2112.10178 [hep-th] .
* Metsaev [2018a] R. R. Metsaev, BRST-BV approach to continuous-spin field, Phys. Lett. B 781, 568 (2018a), arXiv:1803.08421 [hep-th] .
* Buchbinder _et al._ [2018] I. L. Buchbinder, V. A. Krykhtin, and H. Takata, BRST approach to Lagrangian construction for bosonic continuous spin field, Phys. Lett. B 785, 315 (2018), arXiv:1806.01640 [hep-th] .
* Alkalaev _et al._ [2018] K. Alkalaev, A. Chekmenev, and M. Grigoriev, Unified formulation for helicity and continuous spin fermionic fields, JHEP 11, 050, arXiv:1808.09385 [hep-th] .
* Buchbinder _et al._ [2020a] I. L. Buchbinder, S. Fedoruk, A. P. Isaev, and V. A. Krykhtin, Towards Lagrangian construction for infinite half-integer spin field, Nucl. Phys. B 958, 115114 (2020a), arXiv:2005.07085 [hep-th] .
* Buchbinder _et al._ [2019a] I. L. Buchbinder, S. J. Gates, and K. Koutrolikos, Superfield continuous spin equations of motion, Phys. Lett. B 793, 445 (2019a), arXiv:1903.08631 [hep-th] .
* Buchbinder _et al._ [2019b] I. L. Buchbinder, M. V. Khabarov, T. V. Snegirev, and Y. M. Zinoviev, Lagrangian formulation for the infinite spin $N$=1 supermultiplets in $d$=4, Nucl. Phys. B 946, 114717 (2019b), arXiv:1904.05580 [hep-th] .
* Buchbinder _et al._ [2020b] I. L. Buchbinder, S. Fedoruk, and A. P. Isaev, Massless infinite spin (super)particles and fields, Proc. Steklov Inst. Math. 309, 46 (2020b), arXiv:1911.00362 [hep-th] .
* Buchbinder _et al._ [2022] I. L. Buchbinder, S. A. Fedoruk, A. P. Isaev, and V. A. Krykhtin, On the off-shell superfield Lagrangian formulation of 4$D$, $N$=1 supersymmetric infinite spin theory, Phys. Lett. B 829, 137139 (2022), arXiv:2203.12904 [hep-th] .
* Zinoviev [2017] Y. M. Zinoviev, Infinite spin fields in $d$ = 3 and beyond, Universe 3, 63 (2017), arXiv:1707.08832 [hep-th] .
* Alkalaev and Grigoriev [2018] K. B. Alkalaev and M. A. Grigoriev, Continuous spin fields of mixed-symmetry type, JHEP 03, 030, arXiv:1712.02317 [hep-th] .
* Burdík _et al._ [2020] C. Burdík, V. K. Pandey, and A. Reshetnyak, BRST–BFV and BRST–BV descriptions for bosonic fields with continuous spin on $R^{1,d-1}$, Int. J. Mod. Phys. A 35, 2050154 (2020), arXiv:1906.02585 [hep-th] .
* Metsaev [2017a] R. R. Metsaev, Continuous spin gauge field in (A)dS space, Phys. Lett. B 767, 458 (2017a), arXiv:1610.00657 [hep-th] .
* Metsaev [2017b] R. R. Metsaev, Fermionic continuous spin gauge field in (A)dS space, Phys. Lett. B 773, 135 (2017b), arXiv:1703.05780 [hep-th] .
* Metsaev [2018b] R. R. Metsaev, Continuous-spin mixed-symmetry fields in AdS(5), J. Phys. A 51, 215401 (2018b), arXiv:1711.11007 [hep-th] .
* Khabarov and Zinoviev [2018] M. V. Khabarov and Y. M. Zinoviev, Infinite (continuous) spin fields in the frame-like formalism, Nucl. Phys. B 928, 182 (2018), arXiv:1711.08223 [hep-th] .
* Metsaev [2019] R. R. Metsaev, Light-cone continuous-spin field in AdS space, Phys. Lett. B 793, 134 (2019), arXiv:1903.10495 [hep-th] .
* Metsaev [2021] R. R. Metsaev, Mixed-symmetry continuous-spin fields in flat and AdS spaces, Phys. Lett. B 820, 136497 (2021), arXiv:2105.11281 [hep-th] .
* Rivelles [2015] V. O. Rivelles, Gauge theory formulations for continuous and higher spin fields, Phys. Rev. D 91, 125035 (2015), arXiv:1408.3576 [hep-th] .
* Rivelles [2017] V. O. Rivelles, Remarks on a gauge theory for continuous spin particles, Eur. Phys. J. C 77, 433 (2017), arXiv:1607.01316 [hep-th] .
* Bekaert and Skvortsov [2017] X. Bekaert and E. D. Skvortsov, Elementary particles with continuous spin, Int. J. Mod. Phys. A 32, 1730019 (2017), arXiv:1708.01030 [hep-th] .
* Najafizadeh [2018] M. Najafizadeh, Modified Wigner equations and continuous spin gauge field, Phys. Rev. D 97, 065009 (2018), arXiv:1708.00827 [hep-th] .
* Wigner [1963] E. P. Wigner, Invariant Quantum Mechanical Equations of Motion, Theoretical Physics, International Atomic Energy Agency, Vienna (1963).
* Weinberg [1964] S. Weinberg, Photons and Gravitons in $S$-Matrix Theory: Derivation of Charge Conservation and Equality of Gravitational and Inertial Mass, Phys. Rev. 135, B1049 (1964).
* [34] DLMF, NIST Digital Library of Mathematical Functions, https://dlmf.nist.gov/, Release 1.2.0 of 2024-03-15, F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
* Kolb and Turner [1994] E. Kolb and M. Turner, _The Early Universe_ (Westview Press, 1994).
* Mecholsky _et al._ [2021] N. A. Mecholsky, S. Akhbarifar, W. Lutze, M. Brandys, and I. L. Pegg, Bessel function $j_{n}$ maxima and minima, Data in Brief 39, 107508 (2021), https://doi.org/10.1016/j.dib.2021.107508.
###### Abstract
Let $P$ be a set of $n$ points in real projective $d$-space, not all contained
in a hyperplane, such that any $d$ points span a hyperplane. An ordinary
hyperplane of $P$ is a hyperplane containing exactly $d$ points of $P$. We
show that if $d\geqslant 4$, the number of ordinary hyperplanes of $P$ is at
least $\binom{n-1}{d-1}-O_{d}(n^{\lfloor(d-1)/2\rfloor})$ if $n$ is
sufficiently large depending on $d$. This bound is tight, and given $d$, we
can calculate the exact minimum number for sufficiently large $n$. This is a
consequence of a structure theorem for sets with few ordinary hyperplanes: For
any $d\geqslant 4$ and $K>0$, if $n\geqslant C_{d}K^{8}$ for some constant
$C_{d}>0$ depending on $d$, and $P$ spans at most $K\binom{n-1}{d-1}$ ordinary
hyperplanes, then all but at most $O_{d}(K)$ points of $P$ lie on a
hyperplane, an elliptic normal curve, or a rational acnodal curve. We also
find the maximum number of $(d+1)$-point hyperplanes, solving a
$d$-dimensional analogue of the orchard problem. Our proofs rely on Green and
Tao’s results on ordinary lines, our earlier work on the $3$-dimensional case,
as well as results from classical algebraic geometry.
On sets defining few ordinary hyperplanes
Aaron Lin and Konrad Swanepoel
Discrete Analysis 2020:4. Received 26 April 2019; revised 17 January 2020; published 24 April 2020. doi:10.19086/da.11949
## 1 Introduction
An _ordinary line_ of a set of points in the plane is a line passing through
exactly two points of the set. The classical Sylvester–Gallai theorem states
that every finite non-collinear point set in the plane spans at least one
ordinary line. In fact, for sufficiently large $n$, an $n$-point non-collinear
set in the plane spans at least $n/2$ ordinary lines, and this bound is tight
if $n$ is even. This was shown by Green and Tao [GT13] via a structure theorem
characterising all finite point sets with few ordinary lines.
It is then natural to consider higher-dimensional analogues. Motzkin [M51] noted that there are finite non-coplanar point sets in $3$-space that span no plane containing exactly three points of the set. He proposed considering instead hyperplanes $\Pi$ in $d$-space such that all but one of the points of the set contained in $\Pi$ lie in a $(d-2)$-dimensional flat of $\Pi$. The existence of such hyperplanes was shown by Motzkin [M51] for $3$-space and by Hansen [H65] in higher dimensions.
Purdy and Smith [PS10] considered instead finite non-coplanar point sets in
$3$-space with no three points collinear, and provided a lower bound on the
number of planes containing exactly three points of the set. Referring to such
a plane as an _ordinary plane_, Ball [B18] proved a $3$-dimensional analogue
of Green and Tao’s [GT13] structure theorem, and found the exact minimum
number of ordinary planes spanned by sufficiently large non-coplanar point
sets in real projective $3$-space with no three points collinear. Using an
alternative method, we [LS18] were able to prove a more detailed structure
theorem but with a stronger condition; see Theorem 4.1 in Section 4.
Ball and Monserrat [BM17] made the following definition, generalising ordinary
planes to higher dimensions.
###### Definition.
An _ordinary hyperplane_ of a set of points in real projective $d$-space,
where every $d$ points span a hyperplane, is a hyperplane passing through
exactly $d$ points of the set.
They [BM17] also proved bounds on the minimum number of ordinary hyperplanes
spanned by such sets (see also [M15]). Our first main result is a structure
theorem for sets with few ordinary hyperplanes. The elliptic normal curves and
rational acnodal curves mentioned in the theorem and their group structure
will be described in Section 3. Our methods extend those in our earlier paper
[LS18], and we detail them in Section 2.
###### Theorem 1.1.
Let $d\geqslant 4$, $K>0$, and suppose $n\geqslant
C\max\\{(dK)^{8},d^{3}2^{d}K\\}$ for some sufficiently large absolute constant
$C>0$. Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^{d}$ where
every $d$ points span a hyperplane. If $P$ spans at most $K\binom{n-1}{d-1}$
ordinary hyperplanes, then $P$ differs in at most $O(d2^{d}K)$ points from a
configuration of one of the following types:
1. (i) A subset of a hyperplane;
2. (ii) A coset $H\oplus x$ of a subgroup $H$ of an elliptic normal curve or the smooth points of a rational acnodal curve of degree $d+1$, for some $x$ such that $(d+1)x\in H$.
It is easy to show that, conversely, a set of $n$ points where every $d$ span a hyperplane and which differs from (i) or (ii) of Theorem 1.1 by $O(K)$ points spans $O(K\binom{n-1}{d-1})$ ordinary hyperplanes. By [BM17]*Theorem 2.4, if a set
of $n$ points where every $d$ points span a hyperplane itself spans
$K\binom{n-1}{d-1}$ ordinary hyperplanes, and is not contained in a
hyperplane, then $K=\Omega(1/d)$. Theorem 1.2 below implies that $K\geqslant
1$ for sufficiently large $n$ depending on $d$.
For a similar structure theorem in dimension $4$ but with $K=o(n^{1/7})$, see
Ball and Jimenez [BJ18], who show that $P$ lies on the intersection of five
quadrics. Theorem 1.1 proves [BJ18]*Conjecture 12, noting that elliptic normal
curves and rational acnodal curves lie on $\binom{d}{2}-1$ linearly
independent quadrics [Klein]*p. 365, [Fi08]*Proposition 5.3. We also mention
that Monserrat [M15]*Theorem 2.10 proved a structure theorem stating that
almost all points of the set lie on the intersection of $d-1$ hypersurfaces of
degree at most $3$.
Our second main result is a tight bound on the minimum number of ordinary
hyperplanes, proving [BM17]*Conjecture 3. Note that our result holds only for
sufficiently large $n$; see [BM17], [M15], [J18] for estimates when $d$ is small
or $n$ is not much larger than $d$.
###### Theorem 1.2.
Let $d\geqslant 4$ and let $n\geqslant Cd^{3}2^{d}$ for some sufficiently
large absolute constant $C>0$. The minimum number of ordinary hyperplanes
spanned by a set of $n$ points in $\mathbb{R}\mathbb{P}^{d}$, not contained in
a hyperplane and where every $d$ points span a hyperplane, is
$\binom{n-1}{d-1}-O\left(d2^{-d/2}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right).$
This minimum is attained by a coset of a subgroup of an elliptic normal curve
or the smooth points of a rational acnodal curve of degree $d+1$, and when
$d+1$ and $n$ are coprime, by $n-1$ points in a hyperplane together with a
point not in the hyperplane.
Green and Tao [GT13] also used their structure theorem to solve the classical
orchard problem of finding the maximum number of $3$-point lines spanned by a
set of $n$ points in the plane, for $n$ sufficiently large. We solved the
$3$-dimensional analogue in [LS18]. Our third main result is the
$d$-dimensional analogue. We define a _$(d+1)$-point hyperplane_ to be a hyperplane through exactly $d+1$ points of a given set.
###### Theorem 1.3.
Let $d\geqslant 4$ and let $n\geqslant Cd^{3}2^{d}$ for some sufficiently
large absolute constant $C>0$. The maximum number of $(d+1)$-point hyperplanes
spanned by a set of $n$ points in $\mathbb{R}\mathbb{P}^{d}$ where every $d$
points span a hyperplane is
$\frac{1}{d+1}\binom{n-1}{d}+O\left(2^{-d/2}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right).$
This maximum is attained by a coset of a subgroup of an elliptic normal curve
or the smooth points of a rational acnodal curve of degree $d+1$.
While the bounds in Theorems 1.2 and 1.3 are asymptotic, we provide a recursive method (as part of our proofs) to calculate the exact extremal values for a given $d$ and sufficiently large $n$ in Section 5. In principle, the exact values can be calculated for any given $d$, and they turn out to be quasi-polynomials in $n$ with period $d+1$. We present the values for $d=4,5,6$ at the end of Section 5.
### Relation to previous work
The main idea in our proof of Theorem 1.1 is to induct on the dimension $d$,
with the base case $d=3$ being our earlier structure theorem for sets defining
few ordinary planes [LS18], which in turn is based on Green and Tao’s
Intermediate Structure Theorem for sets defining few ordinary lines
[GT13]*Proposition 5.3.
Roughly, the structure theorem in $3$-space states that if a finite set of
points is in general position (no three points collinear) and spans few
ordinary planes, then most of the points must lie on a plane, two disjoint
conics, or an elliptic or acnodal space quartic curve. In fact, we can define
a group structure on these curves encoding when four points are coplanar, in
which case our point set must be very close to a coset of the curve. (See
Theorem 4.1 for a more precise statement.)
As originally observed by Ball [B18] in $3$-space, the general position
condition allows the use of projection to leverage Green and Tao’s
Intermediate Structure Theorem [GT13]*Proposition 5.3. This avoids having to
apply their Full Structure Theorem [GT13]*Theorem 1.5, which has a much worse
lower bound on $n$, as it avoids the technical Section 6 of [GT13], dealing
with the case in the plane when most of the points lie on a large, though
bounded, number of lines. On the other hand, to get to the precise coset
structure, we used additive-combinatorial results from [GT13]*Section 7,
specifically [GT13]*Propositions A.5, Lemmas 7.2, 7.4, 7.7, and Corollary 7.6.
In this paper, the only result of Green and Tao [GT13] we explicitly use is
[GT13]*Proposition A.5, which we extend in Proposition 4.3, while all other
results are subsumed in the structure theorem in $3$-space. In dimensions
$d>3$, the general position condition also allows the use of projections from
a point to a hyperplane (see also Ball and Monserrat [BM17]). In Section 2.2
we detail various technical results about the behaviour of curves under such
projections, which are extensions of $3$-dimensional results in [LS18].
While the group structure on elliptic or singular space quartic curves is well studied (see for instance [Muntingh]), we could not find references to the group structure on singular rational curves in higher dimensions. This is
our main focus in Section 3, which in a way extends [LS18]*Section 3. In
particular, we look at Sylvester’s theorem on when a binary form can be
written as a sum of perfect powers, which has its roots in classical invariant
theory. In extending the results of [LS18]*Section 3, we have to consider how
to generalise the catalecticant (of a binary quartic form), which leads us to
the secant variety of the rational normal curve as a determinantal variety.
Green and Tao’s Intermediate Structure Theorem in $2$-space has a slightly
different flavour to their Full Structure Theorem, the structure theorem in
$3$-space, and Theorem 1.1. However, this is not the only reason why we start
our induction at $d=3$. A more substantial reason is that there are no smooth
rational cubic curves in $2$-space; as is well known, all rational planar
cubic curves are singular. Thus, both smooth and singular rational quartics in
$3$-space project onto rational cubics, and we need some way to tell them
apart. In higher dimensions, we have Lemma 3.7 to help us, but since this is
false when $d=3$, the induction from the plane to $3$-space [LS18] is more
technical. This is despite the superficial similarity between the $2$- and $3$-dimensional situations, where there are two almost-extremal cases, while there is essentially only one case when $d>3$.
Proving Theorem 1.1, which covers the $d>3$ cases, is thus in some sense less
complicated, since not only are we leveraging a more detailed structure
theorem (Theorems 1.1 and 4.1 as opposed to [GT13]*Proposition 5.3), we also
lose a case. However, there are complications that arise in how to generalise
and extend results from $2$- and $3$-space to higher dimensions.
## 2 Notation and tools
By $A=O(B)$, we mean there exists an absolute constant $C>0$ such that
$0\leqslant A\leqslant CB$. Thus, $A=-O(B)$ means there exists an absolute
constant $C>0$ such that $-CB\leqslant A\leqslant 0$. We also write
$A=\Omega(B)$ for $B=O(A)$. None of the $O(\cdot)$ and $\Omega(\cdot)$
statements in this paper have implicit dependence on the dimension $d$.
We write $A\mathbin{\triangle}B$ for the symmetric difference of the sets $A$
and $B$.
Let $\mathbb{F}$ denote the field of real or complex numbers, let
$\mathbb{F}^{*}=\mathbb{F}\setminus{\\{0\\}}$, and let
$\mathbb{F}\mathbb{P}^{d}$ denote the $d$-dimensional projective space over
$\mathbb{F}$. We denote the homogeneous coordinates of a point in
$d$-dimensional projective space by a $(d+1)$-dimensional vector
$[x_{0},x_{1},\dots,x_{d}]$. We call a linear subspace of dimension $k$ in $\mathbb{F}\mathbb{P}^{d}$ a _$k$-flat_; thus a point is a $0$-flat, a line is a $1$-flat, a plane is a $2$-flat, and a hyperplane is a $(d-1)$-flat. We
denote by $Z_{\mathbb{F}}(f)$ the set of $\mathbb{F}$-points of the algebraic
hypersurface defined by the vanishing of a homogeneous polynomial
$f\in\mathbb{F}[x_{0},x_{1},\dots,x_{d}]$. More generally, we consider a
(closed, projective) _variety_ to be any intersection of algebraic
hypersurfaces. We say that a variety is pure-dimensional if each of its
irreducible components has the same dimension. We consider a _curve_ of degree
$e$ in $\mathbb{C}\mathbb{P}^{d}$ to be a variety $\delta$ of pure dimension
$1$ such that a generic hyperplane in $\mathbb{C}\mathbb{P}^{d}$ intersects
$\delta$ in $e$ distinct points. More generally, the degree of a variety
$X\subset\mathbb{C}\mathbb{P}^{d}$ of dimension $r$ is
$\deg(X):=\max\left\\{|\Pi\cap X|:\text{$\Pi$ is a $(d-r)$-flat such that
$\Pi\cap X$ is finite}\right\\}.$
We say that a curve is _non-degenerate_ if it is not contained in a
hyperplane, and _non-planar_ if it is not contained in a $2$-flat. We call a
curve _real_ if each of its irreducible components contains infinitely many
points of $\mathbb{R}\mathbb{P}^{d}$. Whenever we consider a curve in
$\mathbb{R}\mathbb{P}^{d}$, we implicitly assume that its Zariski closure is a
real curve.
We denote the Zariski closure of a set $S\subseteq\mathbb{C}\mathbb{P}^{d}$ by
$\overline{S}$. We will use the _secant variety
$\operatorname{Sec}_{\mathbb{C}}(\delta)$_ of a curve $\delta$, which is the
Zariski closure of the set of points in $\mathbb{C}\mathbb{P}^{d}$ that lie on
a line through some two points of $\delta$.
### 2.1 Bézout’s theorem
Bézout’s theorem gives the degree of an intersection of varieties. While it is
often formulated as an equality, in this paper we only need the weaker form
that ignores multiplicity and gives an upper bound. The (set-theoretical)
intersection $X\cap Y$ of two varieties is just the variety defined by
$P_{X}\cup P_{Y}$, where $X$ and $Y$ are defined by the collections of
homogeneous polynomials $P_{X}$ and $P_{Y}$ respectively.
###### Theorem 2.1 (Bézout [Fu84]*Section 2.3).
Let $X$ and $Y$ be varieties in $\mathbb{C}\mathbb{P}^{d}$ with no common
irreducible component. Then $\deg(X\cap Y)\leqslant\deg(X)\deg(Y)$.
### 2.2 Projections
Given $p\in\mathbb{F}\mathbb{P}^{d}$, the _projection from $p$_,
$\pi_{p}\colon\mathbb{F}\mathbb{P}^{d}\setminus\\{p\\}\to\mathbb{F}\mathbb{P}^{d-1}$,
is defined by identifying $\mathbb{F}\mathbb{P}^{d-1}$ with any hyperplane
$\Pi$ of $\mathbb{F}\mathbb{P}^{d}$ not passing through $p$, and then letting
$\pi_{p}(x)$ be the point where the line $px$ intersects $\Pi$ [H92]*Example
3.4. Equivalently, $\pi_{p}$ is induced by a surjective linear transformation
$\mathbb{F}^{d+1}\to\mathbb{F}^{d}$ where the kernel is spanned by the vector
$p$.
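As a concrete illustration (a minimal numerical sketch, not taken from the paper; the sample point and the use of scipy's null-space routine are my own choices), one can realise $\pi_{p}$ as exactly such a linear map:

```python
# Sketch of pi_p : FP^d \ {p} -> FP^{d-1} as a surjective linear map whose
# kernel is spanned by p (illustrative coordinates, d = 3).
import numpy as np
from scipy.linalg import null_space

d = 3
p = np.array([1.0, 2.0, -1.0, 3.0])    # homogeneous coordinates of p in RP^3

# Rows of A span the hyperplane orthogonal to p, so A has rank d and ker(A) = span{p}.
A = null_space(p.reshape(1, -1)).T     # shape (d, d+1)
assert np.allclose(A @ p, 0.0)

x = np.array([0.0, 1.0, 1.0, 1.0])     # homogeneous coordinates of a point != p
print(A @ x)                           # homogeneous coordinates of pi_p(x)

# Every point of the line px has the same image, as a projection should:
assert np.allclose(A @ (x + 2.0 * p), A @ x)
```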
As in our previous paper [LS18], we have to consider projections of curves
where we do not have complete freedom in choosing a generic projection point
$p$.
Let $\delta\subset\mathbb{C}\mathbb{P}^{d}$ be an irreducible non-planar curve
of degree $e$, and let $p$ be a point in $\mathbb{C}\mathbb{P}^{d}$. We call
$\pi_{p}$ _generically one-to-one on $\delta$_ if there is a finite subset $S$
of $\delta$ such that $\pi_{p}$ restricted to $\delta\setminus S$ is one-to-
one. (This is equivalent to the birationality of $\pi_{p}$ restricted to
$\delta\setminus\\{p\\}$ [H92]*p. 77.) If $\pi_{p}$ is generically one-to-one,
the degree of the curve $\overline{\pi_{p}(\delta\setminus\\{p\\})}$ is $e-1$
if $p$ is a smooth point on $\delta$, and is $e$ if $p$ does not lie on
$\delta$; if $\pi_{p}$ is not generically one-to-one, then the degree of
$\overline{\pi_{p}(\delta\setminus\\{p\\})}$ is at most $(e-1)/2$ if $p$ lies
on $\delta$, and is at most $e/2$ if $p$ does not lie on $\delta$
[H92]*Example 18.16, [Kollar]*Section 1.15.
The following three lemmas on projections are proved in [LS18] in the case
$d=3$. They all state that most projections behave well and can be considered
to be quantitative versions of the trisecant lemma [KKT08]. The proofs of
Lemmas 2.3 and 2.4 are almost word-for-word the same as the proofs of the
$3$-dimensional cases in [LS18]. All three lemmas can also be proved by
induction on the dimension $d\geqslant 3$ from the $3$-dimensional case. We
illustrate this by proving Lemma 2.2.
###### Lemma 2.2.
Let $\delta$ be an irreducible non-planar curve of degree $e$ in
$\mathbb{C}\mathbb{P}^{d}$, $d\geqslant 3$. Then there are at most $O(e^{4})$
points $p$ on $\delta$ such that $\pi_{p}$ restricted to
$\delta\setminus\\{p\\}$ is not generically one-to-one.
###### Proof.
The case $d=3$ was shown in [LS18], based on the work of Furukawa [Fu11]. We
next assume that $d\geqslant 4$ and that the lemma holds in dimension $d-1$.
Since $d>3$ and the dimension of $\operatorname{Sec}_{\mathbb{C}}(\delta)$ is
at most $3$ [H92]*Proposition 11.24, there exists a point
$p\in\mathbb{C}\mathbb{P}^{d}$ such that all lines through $p$ have
intersection multiplicity at most $1$ with $\delta$. It follows that the
projection $\delta^{\prime}:=\overline{\pi_{p}(\delta)}$ of $\delta$ is a non-
planar curve of degree $e$ in $\mathbb{C}\mathbb{P}^{d-1}$. Consider any line
$\ell$ not through $p$ that intersects $\delta$ in at least three distinct
points $p_{1},p_{2},p_{3}$. Then $\pi_{p}(\ell)$ is a line in
$\mathbb{C}\mathbb{P}^{d-1}$ that intersects $\delta^{\prime}$ in three points
$\pi_{p}(p_{1}),\pi_{p}(p_{2}),\pi_{p}(p_{3})$. It follows that if
$x\in\delta$ is a point such that for all but finitely many points
$y\in\delta$, the line $xy$ intersects $\delta$ in a point other than $x$ or
$y$, then $x^{\prime}:=\pi_{p}(x)$ is a point such that for all but finitely
many points $y^{\prime}:=\pi_{p}(y)\in\delta^{\prime}$, the line
$x^{\prime}y^{\prime}$ intersects $\delta^{\prime}$ in a third point. That is,
if $\pi_{x}$ restricted to $\delta$ is not generically one-to-one, then the
projection map $\pi_{x^{\prime}}$ in $\mathbb{C}\mathbb{P}^{d-1}$ restricted
to $\delta^{\prime}$ is not generically one-to-one. By the induction
hypothesis, there are at most $O(e^{4})$ such points and we are done. ∎
###### Lemma 2.3.
Let $\delta$ be an irreducible non-planar curve of degree $e$ in
$\mathbb{C}\mathbb{P}^{d}$, $d\geqslant 3$. Then there are at most $O(e^{3})$
points $x\in\mathbb{C}\mathbb{P}^{d}\setminus\delta$ such that $\pi_{x}$
restricted to $\delta$ is not generically one-to-one.
###### Lemma 2.4.
Let $\delta_{1}$ and $\delta_{2}$ be two irreducible non-planar curves in
$\mathbb{C}\mathbb{P}^{d}$, $d\geqslant 3$, of degree $e_{1}$ and $e_{2}$
respectively. Then there are at most $O(e_{1}e_{2})$ points $p$ on
$\delta_{1}$ such that $\overline{\pi_{p}(\delta_{1}\setminus\\{p\\})}$ and
$\overline{\pi_{p}(\delta_{2}\setminus\\{p\\})}$ coincide.
## 3 Curves of degree $d+1$
In this paper, irreducible non-degenerate curves of degree $d+1$ in
$\mathbb{C}\mathbb{P}^{d}$ play a fundamental role. Indeed, the elliptic
normal curve and rational acnodal curve mentioned in Theorem 1.1 are both such
curves. In this section, we describe their properties that we need. These
properties are all classical, but we did not find a reference for the group
structure on singular rational curves of degree $d+1$, and therefore consider
this in detail.
It is well known that in the plane there is a group structure on any smooth cubic curve, and on the set of smooth points of a singular cubic. This group has
the property that three points sum to the identity if and only if they are
collinear. Over the complex numbers, the group on a smooth cubic is isomorphic
to the torus $(\mathbb{R}/\mathbb{Z})^{2}$, and the group on the smooth points
of a singular cubic is isomorphic to $(\mathbb{C},+)$ or
$(\mathbb{C}^{*},\cdot)$ depending on whether the singularity is a cusp or a
node. Over the real numbers, the group on a smooth cubic is isomorphic to
$\mathbb{R}/\mathbb{Z}$ or $\mathbb{R}/\mathbb{Z}\times\mathbb{Z}_{2}$
depending on whether the real curve has one or two semi-algebraically
connected components, and the group on the smooth points of a singular cubic
is isomorphic to $(\mathbb{R},+)$, $(\mathbb{R},+)\times\mathbb{Z}_{2}$, or
$\mathbb{R}/\mathbb{Z}$ depending on whether the singularity is a cusp, a
crunode, or an acnode. See for instance [GT13] for a more detailed
description.
In higher dimensions, it turns out that an irreducible non-degenerate curve of
degree $d+1$ does not necessarily have a natural group structure, but if it
has, the behaviour is similar to the planar case. For instance, in
$\mathbb{C}\mathbb{P}^{3}$, an irreducible non-degenerate quartic curve is
either an elliptic quartic, with a group isomorphic to an elliptic curve such
that four points on the curve are coplanar if and only if they sum to the
identity, or a rational curve. There are two types, or species, of rational
quartics. The rational quartic curves of the first species are intersections
of two quadrics (as are elliptic quartics), they are always singular, and
there is a group on the smooth points such that four points on the curve are
coplanar if and only if they sum to the identity. Those of the second species
lie on a unique quadric, are smooth, and there is no natural group structure
analogous to the other cases. See [LS18] for a more detailed account. The
picture is similar in higher dimensions.
###### Definition (Clifford [Clifford], Klein [Klein]).
An _elliptic normal curve_ is an irreducible non-degenerate smooth curve of
degree $d+1$ in $\mathbb{C}\mathbb{P}^{d}$ isomorphic to an elliptic curve in
the plane.
###### Proposition 3.1 ([S09]*Exercise 3.11 and Corollary 5.1.1,
[S94]*Corollary 2.3.1).
An elliptic normal curve $\delta$ in $\mathbb{C}\mathbb{P}^{d}$, $d\geqslant
2$, has a natural group structure such that $d+1$ points in $\delta$ lie on a
hyperplane if and only if they sum to the identity. This group is isomorphic
to $(\mathbb{R}/\mathbb{Z})^{2}$.
If the curve is real, then the group is isomorphic to $\mathbb{R}/\mathbb{Z}$
or $\mathbb{R}/\mathbb{Z}\times\mathbb{Z}_{2}$ depending on whether the real
curve has one or two semi-algebraically connected components.
A similar result holds for singular rational curves of degree $d+1$. Since we
need to work with such curves and a description of their group structure is
not easily found in the literature, we give a detailed discussion of their
properties in the remainder of this section.
A _rational curve_ $\delta$ in $\mathbb{F}\mathbb{P}^{d}$ of degree $e$ is a
curve that can be parametrised by the projective line,
$\delta\colon\mathbb{F}\mathbb{P}^{1}\to\mathbb{F}\mathbb{P}^{d},\quad[x,y]\mapsto[q_{0}(x,y),\dots,q_{d}(x,y)],$
where each $q_{i}$ is a homogeneous polynomial of degree $e$ in the variables
$x$ and $y$. The following proposition is well known (see for example [SR85]*p. 38,
Theorem VIII), and can be proved by induction from the planar case using
projection.
###### Proposition 3.2.
An irreducible non-degenerate curve of degree $d+1$ in
$\mathbb{C}\mathbb{P}^{d}$, $d\geqslant 2$, is either an elliptic normal curve
or rational.
We next describe when an irreducible non-degenerate rational curve of degree
$d+1$ in $\mathbb{C}\mathbb{P}^{d}$ has a natural group structure. It turns
out that this happens if and only if the curve is singular.
We write $\nu_{d+1}$ for the _rational normal curve_ in
$\mathbb{C}\mathbb{P}^{d+1}$ [H92]*Example 1.14, which we parametrise as
$\nu_{d+1}:[x,y]\mapsto[y^{d+1},-xy^{d},x^{2}y^{d-1},\dotsc,(-x)^{d-1}y^{2},(-x)^{d}y,(-x)^{d+1}].$
Any irreducible non-degenerate rational curve $\delta$ of degree $d+1$ in
$\mathbb{C}\mathbb{P}^{d}$ is the projection of the rational normal curve, and
we have
$\delta[x,y]=[y^{d+1},-xy^{d},x^{2}y^{d-1},\dotsc,(-x)^{d-1}y^{2},(-x)^{d}y,(-x)^{d+1}]A,$
where $A$ is a $(d+2)\times(d+1)$ matrix of rank $d+1$ (since $\delta$ is non-
degenerate) with entries derived from the coefficients of the polynomials
$q_{i}$ of degree $d+1$ in the parametrisation of the curve (with suitable
alternating signs). Thus $\delta\subset\mathbb{C}\mathbb{P}^{d}$ is the image
of $\nu_{d+1}$ under the projection map $\pi_{p}$ defined by $A$. In
particular, the point of projection
$p=[p_{0},p_{1},\dots,p_{d+1}]\in\mathbb{C}\mathbb{P}^{d+1}$ is the
($1$-dimensional) kernel of $A$. If we project $\nu_{d+1}$ from a point
$p\in\nu_{d+1}$, then we obtain a rational normal curve in
$\mathbb{C}\mathbb{P}^{d}$. However, since $\delta$ is of degree $d+1$,
necessarily $p\notin\nu_{d+1}$. Conversely, it can easily be checked that for
any $p\notin\nu_{d+1}$, the projection of $\nu_{d+1}$ from $p$ is a rational
curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^{d}$. We will use the notation
$\delta_{p}$ for this curve. We summarise the above discussion in the
following proposition that will be implicitly used in the remainder of the
paper.
###### Proposition 3.3.
An irreducible non-degenerate rational curve of degree $d+1$ in
$\mathbb{C}\mathbb{P}^{d}$ is projectively equivalent to $\delta_{p}$ for some
$p\in\mathbb{C}\mathbb{P}^{d+1}\setminus\nu_{d+1}$.
We use the projection point $p$ to define a binary form and a multilinear form
associated to $\delta_{p}$. The _fundamental binary form_ associated to
$\delta_{p}$ is the homogeneous polynomial of degree $d+1$ in two variables
$f_{p}(x,y):=\sum_{i=0}^{d+1}p_{i}\binom{d+1}{i}x^{d+1-i}y^{i}$. Its
_polarisation_ is the multilinear form
$F_{p}\colon(\mathbb{F}^{2})^{d+1}\to\mathbb{F}$ [D03]*Section 1.2 defined by
$F_{p}(x_{0},y_{0},x_{1},y_{1},\dots,x_{d},y_{d}):=\frac{1}{(d+1)!}\sum_{I\subseteq\\{0,1,\dots,d\\}}(-1)^{d+1-|I|}f_{p}\left(\sum_{i\in
I}x_{i},\sum_{i\in I}y_{i}\right).$
Consider the multilinear form
$G_{p}(x_{0},y_{0},\dots,x_{d},y_{d})=\sum_{i=0}^{d+1}p_{i}P_{i}$, where
$P_{i}(x_{0},y_{0},x_{1},y_{1},\dots,x_{d},y_{d}):=\sum_{I\in\binom{\\{0,1,\dots,d\\}}{i}}\prod_{j\in\overline{I}}x_{j}\prod_{j\in
I}y_{j}$ (1)
for each $i=0,\dots,d+1$. Here the sum is taken over all subsets $I$ of
$\\{0,1,\dots,d\\}$ of size $i$, and $\overline{I}$ denotes the complement of
$I$ in $\\{0,1,\dots,d\\}$. It is easy to see that the binary form $f_{p}$ is
the _restitution_ of $G_{p}$, namely [D03]*Section 1.2
$f_{p}(x,y)=G_{p}(x,y,x,y,\dots,x,y).$
Since the polarisation of the restitution of a multilinear form is itself
[D03]*Section 1.2, we must thus have $F_{p}=G_{p}$. (This can also be checked
directly.)
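A small symbolic sketch of that direct check, for $d=2$ (the value of $d$ and the use of sympy are illustrative choices; the formulas are $f_{p}$, its polarisation, and (1)):

```python
# Check symbolically that the polarisation of f_p equals G_p from (1), for d = 2.
from itertools import combinations
from sympy import symbols, binomial, factorial, expand

d = 2
ps = symbols(f'p0:{d + 2}')            # p_0, ..., p_{d+1}
xs = symbols(f'x0:{d + 1}')
ys = symbols(f'y0:{d + 1}')

def f_p(x, y):                         # the fundamental binary form
    return sum(ps[i] * binomial(d + 1, i) * x**(d + 1 - i) * y**i
               for i in range(d + 2))

# The polarisation F_p of f_p, as defined above.
F = 0
for k in range(d + 2):
    for I in combinations(range(d + 1), k):
        F += (-1)**(d + 1 - k) * f_p(sum(xs[i] for i in I),
                                     sum(ys[i] for i in I))
F = expand(F / factorial(d + 1))

def P(i):                              # the multilinear forms P_i from (1)
    total = 0
    for I in combinations(range(d + 1), i):
        term = 1
        for j in range(d + 1):
            term = term * (ys[j] if j in I else xs[j])
        total += term
    return total

G = expand(sum(ps[i] * P(i) for i in range(d + 2)))
assert expand(F - G) == 0              # F_p = G_p
print("polarisation check passed for d =", d)
```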
###### Lemma 3.4.
Let $\delta_{p}$ be an irreducible non-degenerate rational curve of degree
$d+1$ in $\mathbb{C}\mathbb{P}^{d}$, $d\geqslant 2$, where
$p\in\mathbb{C}\mathbb{P}^{d+1}\setminus\nu_{d+1}$. A hyperplane intersects
$\delta_{p}$ in $d+1$ points $\delta_{p}[x_{i},y_{i}]$, $i=0,\dots,d$,
counting multiplicity, if and only if
$F_{p}(x_{0},y_{0},x_{1},y_{1},\dots,x_{d},y_{d})=0$.
###### Proof.
We first prove the statement for distinct points
$[x_{i},y_{i}]\in\mathbb{C}\mathbb{P}^{1}$. Then the points
$\delta_{p}[x_{i},y_{i}]$ are all on a hyperplane if and only if the
hyperplane in $\mathbb{C}\mathbb{P}^{d+1}$ through the points
$\nu_{d+1}[x_{i},y_{i}]$ passes through $p$. It will be sufficient to prove
the identity
$D:=\det\begin{pmatrix}\nu_{d+1}[x_{0},y_{0}]\\\ \vdots\\\
\nu_{d+1}[x_{d},y_{d}]\\\
p\end{pmatrix}=F_{p}(x_{0},y_{0},x_{1},y_{1},\dots,x_{d},y_{d})\prod_{0\leqslant
j<k\leqslant d}\begin{vmatrix}x_{j}&x_{k}\\\ y_{j}&y_{k}\end{vmatrix},$ (2)
since the second factor on the right-hand side does not vanish because the
points $[x_{i},y_{i}]$ are distinct. We first note that
$\displaystyle D$
$\displaystyle=\begin{vmatrix}y_{0}^{d+1}&-x_{0}y_{0}^{d}&x_{0}^{2}y_{0}^{d-1}&\dotsc&(-x_{0})^{d}y_{0}&(-x_{0})^{d+1}\\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\\
y_{d}^{d+1}&-x_{d}y_{d}^{d}&x_{d}^{2}y_{d}^{d-1}&\dotsc&(-x_{d})^{d}y_{d}&(-x_{d})^{d+1}\\\
p_{0}&p_{1}&p_{2}&\dotsc&p_{d}&p_{d+1}\end{vmatrix}$
$\displaystyle=(-1)^{\left\lfloor\frac{d+2}{2}\right\rfloor}\begin{vmatrix}y_{0}^{d+1}&x_{0}y_{0}^{d}&x_{0}^{2}y_{0}^{d-1}&\dotsc&x_{0}^{d}y_{0}&x_{0}^{d+1}\\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\\
y_{d}^{d+1}&x_{d}y_{d}^{d}&x_{d}^{2}y_{d}^{d-1}&\dotsc&x_{d}^{d}y_{d}&x_{d}^{d+1}\\\\[3.0pt]
p_{0}&-p_{1}&p_{2}&\dotsc&(-1)^{d}p_{d}&(-1)^{d+1}p_{d+1}\end{vmatrix}.$ (3)
We next replace $(-1)^{i}p_{i}$ by $x^{i}y^{d+1-i}$ for each $i=0,\dots,d+1$
in the last row of the determinant in (3) and obtain the Vandermonde
determinant
$\displaystyle\mathrel{\phantom{=}}(-1)^{\left\lfloor\frac{d+2}{2}\right\rfloor}\begin{vmatrix}y_{0}^{d+1}&x_{0}y_{0}^{d}&x_{0}^{2}y_{0}^{d-1}&\dotsc&x_{0}^{d}y_{0}&x_{0}^{d+1}\\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\\
y_{d}^{d+1}&x_{d}y_{d}^{d}&x_{d}^{2}y_{d}^{d-1}&\dotsc&x_{d}^{d}y_{d}&x_{d}^{d+1}\\\\[3.0pt]
y^{d+1}&xy^{d}&x^{2}y^{d-1}&\dotsc&x^{d}y&x^{d+1}\end{vmatrix}$
$\displaystyle=(-1)^{\left\lfloor\frac{d+2}{2}\right\rfloor}\prod_{0\leqslant
j<k\leqslant d}\begin{vmatrix}y_{j}&y_{k}\\\
x_{j}&x_{k}\end{vmatrix}\prod_{0\leqslant j\leqslant
d}\begin{vmatrix}y_{j}&y\\\ x_{j}&x\end{vmatrix}$
$\displaystyle=(-1)^{\left\lfloor\frac{d+2}{2}\right\rfloor}(-1)^{\binom{d+2}{2}}\prod_{0\leqslant
j<k\leqslant d}\begin{vmatrix}x_{j}&x_{k}\\\
y_{j}&y_{k}\end{vmatrix}\prod_{0\leqslant j\leqslant
d}\begin{vmatrix}x_{j}&x\\\ y_{j}&y\end{vmatrix}.$
Finally, note that $(-1)^{\lfloor(d+2)/2\rfloor}(-1)^{\binom{d+2}{2}}=1$ and
that the coefficient of $x^{i}y^{d+1-i}$ in $\prod_{0\leqslant j\leqslant
d}\begin{vmatrix}x_{j}&x\\\ y_{j}&y\end{vmatrix}$ is
$\sum_{I\in\binom{\\{0,\dots,d\\}}{i}}\prod_{j\in I}(-y_{j})\prod_{j\in\overline{I}}x_{j}=(-1)^{i}P_{i},$
where $P_{i}$ is as defined in (1). It follows that the coefficient of $p_{i}$
in (3) is $P_{i}$, and (2) follows.
We next complete the argument for the case when the points $[x_{i},y_{i}]$ are
not all distinct. First suppose that a hyperplane $\Pi$ intersects
$\delta_{p}$ in $\delta_{p}[x_{i},y_{i}]$, $i=0,\dots,d$. By Bertini’s theorem
[H77]*Theorem II.8.18 and Remark II.8.18.1, there is an arbitrarily close
perturbation $\Pi^{\prime}$ of $\Pi$ that intersects $\delta_{p}$ in distinct
points $\delta_{p}[x_{i}^{\prime},y_{i}^{\prime}]$. By what has already been
proved,
$F_{p}(x_{0}^{\prime},y_{0}^{\prime},\dots,x_{d}^{\prime},y_{d}^{\prime})=0$.
Since $\Pi^{\prime}$ is arbitrarily close and $F_{p}$ is continuous,
$F_{p}(x_{0},y_{0},\dots,x_{d},y_{d})=0$.
Conversely, suppose that $F_{p}(x_{0},y_{0},\dots,x_{d},y_{d})=0$ where the
$[x_{i},y_{i}]$ are not all distinct. Perturb the points
$[x_{0},y_{0}],\dots,[x_{d-1},y_{d-1}]$ by an arbitrarily small amount to
$[x_{0}^{\prime},y_{0}^{\prime}],\dots,[x_{d-1}^{\prime},y_{d-1}^{\prime}]$
respectively, so as to make
$\delta_{p}[x_{0}^{\prime},y_{0}^{\prime}],\dots,\delta_{p}[x_{d-1}^{\prime},y_{d-1}^{\prime}]$
span a hyperplane $\Pi^{\prime}$ that intersects $\delta_{p}$ again in
$\delta_{p}[x_{d}^{\prime},y_{d}^{\prime}]$, say, and so that
$[x_{0}^{\prime},y_{0}^{\prime}],\dots,[x_{d}^{\prime},y_{d}^{\prime}]$ are
all distinct. If we take the limit as
$[x_{i}^{\prime},y_{i}^{\prime}]\to[x_{i},y_{i}]$ for each $i=0,\dots,d-1$, we
obtain a hyperplane $\Pi$ intersecting $\delta_{p}$ in
$\delta_{p}[x_{0},y_{0}],\dots,\delta_{p}[x_{d-1},y_{d-1}],\delta_{p}[x_{d}^{\prime\prime},y_{d}^{\prime\prime}]$,
say. Then
$F_{p}(x_{0},y_{0},\dots,x_{d-1},y_{d-1},x_{d}^{\prime\prime},y_{d}^{\prime\prime})=0$.
Since the multilinear form $F_{p}$ is non-trivial, it follows that
$[x_{d},y_{d}]=[x_{d}^{\prime\prime},y_{d}^{\prime\prime}]$. Therefore, $\Pi$
is a hyperplane that intersects $\delta_{p}$ in $\delta_{p}[x_{i},y_{i}]$,
$i=0,\dots,d$. ∎
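Identity (2) can also be verified symbolically for small $d$; here is a minimal sketch for $d=2$ (the choice of $d$ and the use of sympy are illustrative):

```python
# Symbolic check of identity (2) for d = 2: the determinant D equals
# F_p times the product of the 2x2 minors |x_j x_k; y_j y_k|.
from itertools import combinations
from sympy import symbols, Matrix, expand

d = 2
ps = symbols(f'p0:{d + 2}')
xs = symbols(f'x0:{d + 1}')
ys = symbols(f'y0:{d + 1}')

def nu(x, y):                          # the rational normal curve nu_{d+1}
    return [(-x)**i * y**(d + 1 - i) for i in range(d + 2)]

def P(i):                              # the multilinear forms P_i from (1)
    total = 0
    for I in combinations(range(d + 1), i):
        term = 1
        for j in range(d + 1):
            term = term * (ys[j] if j in I else xs[j])
        total += term
    return total

F = sum(ps[i] * P(i) for i in range(d + 2))          # F_p = G_p

D = Matrix([nu(xs[i], ys[i]) for i in range(d + 1)] + [list(ps)]).det()

product = 1
for j, k in combinations(range(d + 1), 2):
    product *= xs[j] * ys[k] - xs[k] * ys[j]

assert expand(D - F * product) == 0
print("identity (2) verified for d =", d)
```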
The secant variety $\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})$ of the
rational normal curve $\nu_{d+1}$ in $\mathbb{C}\mathbb{P}^{d+1}$ is equal to
the set of points that lie on a proper secant or tangent line of $\nu_{d+1}$,
that is, on a line with intersection multiplicity at least $2$ with
$\nu_{d+1}$. We also define the real secant variety of $\nu_{d+1}$ to be the
set $\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})$ of points in
$\mathbb{R}\mathbb{P}^{d+1}$ that lie on a line that either intersects
$\nu_{d+1}$ in two distinct real points or is a tangent line of $\nu_{d+1}$.
The _tangent variety_ $\operatorname{Tan}_{\mathbb{F}}(\nu_{d+1})$ of
$\nu_{d+1}$ is defined to be the set of points in $\mathbb{F}\mathbb{P}^{d+1}$
that lie on a tangent line of $\nu_{d+1}$. We note that although
$\operatorname{Tan}_{\mathbb{R}}(\nu_{d+1})=\operatorname{Tan}_{\mathbb{C}}(\nu_{d+1})\cap\mathbb{R}\mathbb{P}^{d+1}$,
we only have a proper inclusion
$\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})\subset\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})\cap\mathbb{R}\mathbb{P}^{d+1}$
for $d\geqslant 2$.
We will need a concrete description of
$\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})$ and its relation to the
smoothness of the curves $\delta_{p}$. For any
$p\in\mathbb{F}\mathbb{P}^{d+1}$ and $k=2,\dots,d-1$, define the
$(k+1)\times(d-k+2)$ matrix
$M_{k}(p):=\begin{pmatrix}p_{0}&p_{1}&p_{2}&\dots&p_{d-k+1}\\\
p_{1}&p_{2}&p_{3}&\dots&p_{d-k+2}\\\ \vdots&\vdots&\vdots&\ddots&\vdots\\\
p_{k}&p_{k+1}&p_{k+2}&\dots&p_{d+1}\end{pmatrix}.$
Suppose that $\delta_{p}$ has a double point, say
$\delta_{p}[x_{0},y_{0}]=\delta_{p}[x_{1},y_{1}]$. This is equivalent to $p$,
$\nu_{d+1}[x_{0},y_{0}]$, and $\nu_{d+1}[x_{1},y_{1}]$ being collinear, which
is equivalent to $p$ being on the secant variety of $\nu_{d+1}$. (In the
degenerate case where $[x_{0},y_{0}]=[x_{1},y_{1}]$, we have that
$p\in\operatorname{Tan}_{\mathbb{F}}(\nu_{d+1})$.) Then
$\delta_{p}[x_{0},y_{0}]$, $\delta_{p}[x_{1},y_{1}]$,
$\delta_{p}[x_{2},y_{2}]$,…, $\delta_{p}[x_{d},y_{d}]$ are on a hyperplane in
$\mathbb{F}\mathbb{P}^{d}$ for all
$[x_{2},y_{2}],\dots,[x_{d},y_{d}]\in\mathbb{F}\mathbb{P}^{1}$. It follows
that the coefficients of
$F_{p}(x_{0},y_{0},x_{1},y_{1},x_{2},y_{2},\dots,x_{d},y_{d})$ as a polynomial
in $x_{2},y_{2},\dots,x_{d},y_{d}$ all vanish, that is,
$p_{i}x_{0}x_{1}+p_{i+1}(x_{0}y_{1}+y_{0}x_{1})+p_{i+2}y_{0}y_{1}=0$
for all $i=0,\dots,d-1$. This can be written as
$[x_{0}x_{1},x_{0}y_{1}+y_{0}x_{1},y_{0}y_{1}]M_{2}(p)=0$. Conversely, if
$M_{2}(p)$ has rank $2$ with say $[c_{0},2c_{1},c_{2}]M_{2}(p)=0$, then there
is a non-trivial solution to the linear system with $c_{0}=x_{0}x_{1}$, $2c_{1}=x_{0}y_{1}+y_{0}x_{1}$, $c_{2}=y_{0}y_{1}$, and we have $c_{0}x^{2}+2c_{1}xy+c_{2}y^{2}=(x_{0}x+y_{0}y)(x_{1}x+y_{1}y)$. In the
degenerate case where $[x_{0},y_{0}]=[x_{1},y_{1}]$, we have that the
quadratic form has repeated roots.
It follows that $M_{2}(p)$ has rank at most $2$ if and only if
$p\in\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})$ (also note that $M_{2}(p)$
has rank $1$ if and only if $p\in\nu_{d+1}$). We note for later use that since
the null space of $M_{2}(p)$ is $1$-dimensional if it has rank $2$, it follows
that each $p\in\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})$ lies on a unique
secant (which might degenerate to a tangent). This implies that $\delta_{p}$
has a unique singularity when
$p\in\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})\setminus{\nu_{d+1}}$, which is
a node if
$p\in\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})\setminus\operatorname{Tan}_{\mathbb{C}}(\nu_{d+1})$
and a cusp if
$p\in\operatorname{Tan}_{\mathbb{C}}(\nu_{d+1})\setminus{\nu_{d+1}}$. In the
real case there are two types of nodes. If
$p\in\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})\setminus\nu_{d+1}$, then the
roots $[x_{0},y_{0}],[x_{1},y_{1}]$ are real, and $\delta_{p}$ has either a
cusp when $p\in\operatorname{Tan}_{\mathbb{R}}(\nu_{d+1})\setminus\nu_{d+1}$
and $[x_{0},y_{0}]=[x_{1},y_{1}]$, or a crunode when
$p\in\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})\setminus\operatorname{Tan}_{\mathbb{R}}(\nu_{d+1})$
and $[x_{0},y_{0}]$ and $[x_{1},y_{1}]$ are distinct roots of the real binary
quadratic form $c_{0}x^{2}+2c_{1}xy+c_{2}y^{2}$. If
$p\in(\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})\cap\mathbb{R}\mathbb{P}^{d+1})\setminus\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})$,
then the quadratic form has conjugate roots
$[x_{0},y_{0}]=[\overline{x_{1}},\overline{y_{1}}]$ and $\delta_{p}$ has an
acnode.
If $p\notin\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})$, then $\delta_{p}$ is a smooth curve of degree $d+1$. It follows that $\delta_{p}$ is singular if and only if $p\in\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})\setminus{\nu_{d+1}}$. For the purposes of
this paper, we make the following definitions.
###### Definition.
A _rational singular curve_ is an irreducible non-degenerate singular rational
curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^{d}$. In the real case, a
_rational cuspidal curve_, _rational crunodal curve_, or _rational acnodal
curve_ is a rational singular curve isomorphic to a singular planar cubic with
a cusp, crunode, or acnode respectively.
In particular, we have shown the case $k=2$ of the following well-known
result.
###### Proposition 3.5 ([H92]*Proposition 9.7).
Let $d\geqslant 3$. For any $k=2,\dots,d-1$, the secant variety of $\nu_{d+1}$
is equal to the locus of all $[p_{0},p_{1},\dots,p_{d+1}]$ such that
$M_{k}(p)$ has rank at most $2$.
###### Corollary 3.6.
Let $d\geqslant 3$. For any $k=2,\dots,d-1$ and
$p\in\mathbb{C}\mathbb{P}^{d+1}\setminus\nu_{d+1}$, the curve $\delta_{p}$ of
degree $d+1$ in $\mathbb{C}\mathbb{P}^{d}$ is singular if and only if
$\operatorname{rank}M_{k}(p)\leqslant 2$.
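A quick numerical illustration of this rank criterion for $k=2$ and $d=4$ (the sample points below are illustrative choices):

```python
# Corollary 3.6 for k = 2, d = 4: M_2(p) has rank <= 2 exactly when delta_p is
# singular, i.e. when p lies on the secant variety of nu_{d+1}.
import numpy as np

d = 4

def nu(x, y):
    # nu_{d+1}[x, y] = [y^{d+1}, -x y^d, ..., (-x)^{d+1}]
    return np.array([(-x)**i * y**(d + 1 - i) for i in range(d + 2)])

def M2(p):
    # the 3 x d Hankel matrix M_2(p) with rows (p_i, ..., p_{i+d-1})
    return np.array([p[i:i + d] for i in range(3)])

rng = np.random.default_rng(0)
p_secant = 0.7 * nu(1.0, 2.0) + 1.3 * nu(-1.0, 1.0)   # a point on a secant line
p_generic = rng.standard_normal(d + 2)                # almost surely off Sec(nu)

print(np.linalg.matrix_rank(M2(p_secant)))   # 2: delta_p is singular
print(np.linalg.matrix_rank(M2(p_generic)))  # 3: delta_p is smooth
```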
We next use Corollary 3.6 to show that the projection of a smooth rational
curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^{d}$ from a generic point on
the curve is again smooth when $d\geqslant 4$. This is not true for $d=3$, as
there is a trisecant through each point of a quartic curve of the second
species in $3$-space. (The union of the trisecants forms the unique quadric on which the curve lies [H92]*Exercise 8.13.)
###### Lemma 3.7.
Let $\delta_{p}$ be a smooth rational curve of degree $d+1$ in
$\mathbb{C}\mathbb{P}^{d}$, $d\geqslant 4$. Then for all but at most three
points $q\in\delta_{p}$, the projection
$\overline{\pi_{q}(\delta_{p}\setminus\\{q\\})}$ is a smooth rational curve of
degree $d$ in $\mathbb{C}\mathbb{P}^{d-1}$.
###### Proof.
Let $q=\delta_{p}[x_{0},y_{0}]$. Suppose that
$\overline{\pi_{q}(\delta_{p}\setminus\\{q\\})}$ is singular. Then there exist
$[x_{1},y_{1}]$ and $[x_{2},y_{2}]$ such that
$\pi_{q}(\delta_{p}[x_{1},y_{1}])=\pi_{q}(\delta_{p}[x_{2},y_{2}])$ and the
points $\delta_{p}[x_{0},y_{0}]$, $\delta_{p}[x_{1},y_{1}]$, and
$\delta_{p}[x_{2},y_{2}]$ are collinear. Then for arbitrary
$[x_{3},y_{3}],\dots,[x_{d},y_{d}]\in\mathbb{C}\mathbb{P}^{1}$, the points
$\delta_{p}[x_{i},y_{i}]$, $i=0,\dots,d$ are on a hyperplane, so by Lemma 3.4,
$F_{p}(x_{0},y_{0},\dots,x_{d},y_{d})$ is identically $0$ as a polynomial in
$x_{3},y_{3},\dots,x_{d},y_{d}$. The coefficients of this polynomial are of
the form
$p_{i}x_{0}x_{1}x_{2}+p_{i+1}(x_{0}x_{1}y_{2}+x_{0}y_{1}x_{2}+y_{0}x_{1}x_{2})+p_{i+2}(x_{0}y_{1}y_{2}+y_{0}x_{1}y_{2}+y_{0}y_{1}x_{2})+p_{i+3}y_{0}y_{1}y_{2}$
for $i=0,\dots,d-2$. This means that the linear system
$[c_{0},3c_{1},3c_{2},c_{3}]M_{3}(p)=0$ has a non-trivial solution
$c_{0}=x_{0}x_{1}x_{2}$,
$3c_{1}=x_{0}x_{1}y_{2}+x_{0}y_{1}x_{2}+y_{0}x_{1}x_{2}$,
$3c_{2}=x_{0}y_{1}y_{2}+y_{0}x_{1}y_{2}+y_{0}y_{1}x_{2}$,
$c_{3}=y_{0}y_{1}y_{2}$. The binary cubic form
$c_{0}x^{3}+3c_{1}x^{2}y+3c_{2}xy^{2}+c_{3}y^{3}$ then has the factorisation
$(x_{0}x+y_{0}y)(x_{1}x+y_{1}y)(x_{2}x+y_{2}y)$, hence its roots give the
collinear points on $\delta_{p}$. Since $\delta_{p}$ is smooth, $M_{3}(p)$ has
rank at least $3$ by Corollary 3.6, and so the cubic form is unique up to
scalar multiples. It follows that there are at most three points $q$ such that
the projection $\overline{\pi_{q}(\delta_{p}\setminus\\{q\\})}$ is not smooth.
∎
To determine the natural group structure on rational singular curves, we need the following theorem on the fundamental binary form $f_{p}$, which is essentially due to Sylvester [S51]. Reznick [Rez2013] gives an elementary proof of the
generic case where $p$ does not lie on the tangent variety. (See also Kanev
[K99]*Lemma 3.1 and Iarrobino and Kanev [IK99]*Section 1.3.) We provide a very
elementary proof that includes the non-generic case.
###### Theorem 3.8 (Sylvester [S51]).
Let $d\geqslant 2$.
1. (i)
If $p\in\operatorname{Tan}_{\mathbb{C}}(\nu_{d+1})$, then there exist binary
linear forms $L_{1},L_{2}$ such that $f_{p}(x,y)=L_{1}(x,y)^{d}L_{2}(x,y)$.
Moreover, if $p\notin\nu_{d+1}$ then $L_{1}$ and $L_{2}$ are linearly
independent, and if $p\in\mathbb{R}\mathbb{P}^{d+1}$ then $L_{1}$ and $L_{2}$
are both real.
2. (ii)
If
$p\in\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})\setminus\operatorname{Tan}_{\mathbb{C}}(\nu_{d+1})$,
then there exist linearly independent binary linear forms $L_{1},L_{2}$ such
that $f_{p}(x,y)=L_{1}(x,y)^{d+1}-L_{2}(x,y)^{d+1}$. Moreover, if
$p\in\mathbb{R}\mathbb{P}^{d+1}\setminus\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})$
then $L_{1}$ and $L_{2}$ are complex conjugates, while if
$p\in\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})$ then there exist linearly
independent real binary linear forms $L_{1},L_{2}$ such that
$f_{p}(x,y)=L_{1}(x,y)^{d+1}\pm L_{2}(x,y)^{d+1}$, where we can always choose the lower sign when $d$ is even, while for odd $d$ the sign depends on $p$.
###### Proof.
(i): We work over $\mathbb{F}\in\\{\mathbb{R},\mathbb{C}\\}$. Let
$p=[p_{0},p_{1},\dots,p_{d+1}]\in\operatorname{Tan}_{\mathbb{F}}(\nu_{d+1})$.
Let $p_{*}=\nu_{d+1}[\alpha_{1},\alpha_{2}]$ be the point on $\nu_{d+1}$ such
that the line $pp_{*}$ is tangent to $\nu_{d+1}$ (if $p\in\nu_{d+1}$, we let
$p_{*}=p$). We will show that
$f_{p}(x,y)=\sum_{i=0}^{d+1}p_{i}\binom{d+1}{i}x^{d+1-i}y^{i}=(\alpha_{2}x-\alpha_{1}y)^{d}(\beta_{2}x-\beta_{1}y)$
(4)
for some $[\beta_{1},\beta_{2}]\in\mathbb{F}\mathbb{P}^{1}$.
First consider the special case $\alpha_{1}=0$. Then $p_{*}=[1,0,\dots,0]$ and
the tangent to $\nu_{d+1}$ at $p_{*}$ is the line
$x_{2}=x_{3}=\dots=x_{d+1}=0$. It follows that
$f_{p}(x,y)=p_{0}x^{d+1}+p_{1}(d+1)x^{d}y=(1x-0y)^{d}(p_{0}x+p_{1}(d+1)y)$. If
$p_{1}=0$, then $p=p_{*}\in\nu_{d+1}$. Thus, if $p\notin\nu_{d+1}$, then
$p_{1}\neq 0$, and $x$ and $p_{0}x+p_{1}(d+1)y$ are linearly independent.
We next consider the general case $\alpha_{1}\neq 0$. Equating coefficients in
(4), we see that we need to find $[\beta_{1},\beta_{2}]$ such that
$p_{i}\binom{d+1}{i}=\binom{d}{i}\alpha_{2}^{d-i}(-\alpha_{1})^{i}\beta_{2}-\binom{d}{i-1}\alpha_{2}^{d-i+1}(-\alpha_{1})^{i-1}\beta_{1}$
for each $i=0,\dots,d+1$, where we use the convention
$\binom{d}{-1}=\binom{d}{d+1}=0$. This can be simplified to
$p_{i}=\left(1-\frac{i}{d+1}\right)\alpha_{2}^{d-i}(-\alpha_{1})^{i}\beta_{2}-\frac{i}{d+1}\alpha_{2}^{d-i+1}(-\alpha_{1})^{i-1}\beta_{1}.$
(5)
Since we are working projectively, we can fix the value of $\beta_{1}$ from
the instance $i=d+1$ of (5) to get
$p_{d+1}=-(-\alpha_{1})^{d}\beta_{1}.$ (6)
If $p_{d+1}\neq 0$, we can divide (5) by (6). After setting
$\alpha=\alpha_{2}/\alpha_{1}$, $\beta=\beta_{2}/\beta_{1}$, and
$a_{i}=p_{i}/p_{d+1}$, we then have to show that for some
$\beta\in\mathbb{F}$,
$a_{i}=-\left(1-\frac{i}{d+1}\right)(-\alpha)^{d-i}\beta+\frac{i}{d+1}(-\alpha)^{d-i+1}$
(7)
for each $i=0,\dots,d$. We next calculate in the affine chart $x_{d+1}=1$
where the rational normal curve becomes
$\nu_{d+1}(t)=((-t)^{d+1},(-t)^{d},\dots,-t)$, $p=(a_{0},\dots,a_{d})$, and
$p_{*}=\nu_{d+1}(\alpha)$. The tangency condition means that $p_{*}-p$ is a
scalar multiple of
$\nu_{d+1}^{\prime}(\alpha)=((d+1)(-\alpha)^{d},d(-\alpha)^{d-1},\dots,2\alpha,-1),$
that is, we have for some $\lambda\in\mathbb{F}$ that
$(-\alpha)^{d+1-i}-a_{i}=\lambda(d+1-i)(-\alpha)^{d-i}$ for all $i=0,\dots,d$.
Set $\beta=\alpha+\lambda(d+1)$. Then
$(-\alpha)^{d+1-i}-a_{i}=(\beta-\alpha)(1-\frac{i}{d+1})(-\alpha)^{d-i}$, and
we have
$\displaystyle a_{i}$
$\displaystyle=(-\alpha)^{d+1-i}-(\beta-\alpha)\left(1-\frac{i}{d+1}\right)(-\alpha)^{d-i}$
$\displaystyle=-\left(1-\frac{i}{d+1}\right)(-\alpha)^{d-i}\beta+\frac{i}{d+1}(-\alpha)^{d-i+1},$
giving (7) as required. If $\alpha=\beta$, then $\lambda=0$ and
$p=p_{*}\in\nu_{d+1}$. Thus, if $p\notin\nu_{d+1}$, then $\alpha\neq\beta$,
and $\alpha_{2}x-\alpha_{1}y$ and $\beta_{2}x-\beta_{1}y$ are linearly
independent.
We still have to consider the case $p_{d+1}=0$. Then $\beta_{1}=0$ and we need
to find $\beta_{2}$ such that
$p_{i}=\left(1-\frac{i}{d+1}\right)\alpha_{2}^{d-i}(-\alpha_{1})^{i}\beta_{2}$
(8)
for all $i=0,\dots,d$. Since $p_{d+1}=0$, we have that
$\nu_{d+1}^{\prime}(\alpha)$ is parallel to $(p_{0},\dots,p_{d})$, that is,
$p_{i}=\lambda(d+1-i)(-\alpha)^{d-i}$
for some $\lambda\in\mathbb{F}^{*}$. Set
$\beta_{2}=\lambda(d+1)/(-\alpha_{1})^{d}$. Then
$p_{i}=\frac{(-\alpha_{1})^{d}\beta_{2}}{d+1}(d+1-i)\left(\frac{\alpha_{2}}{-\alpha_{1}}\right)^{d-i}=\left(1-\frac{i}{d+1}\right)\alpha_{2}^{d-i}(-\alpha_{1})^{i}\beta_{2},$
again giving (8) as required. Note that since $\alpha_{1}\neq 0$ but
$\beta_{1}=0$, $\alpha_{2}x-\alpha_{1}y$ and $\beta_{2}x-\beta_{1}y$ are
linearly independent. Note also that since $\lambda\neq 0$, we have
$\beta_{2}\neq 0$ and $p\neq[1,0,\dotsc,0]$, hence $p\notin\nu_{d+1}$.
(ii): Let $p=[p_{0},\dots,p_{d+1}]\in\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})\setminus\operatorname{Tan}_{\mathbb{C}}(\nu_{d+1})$, and suppose that $p$ lies on the secant line through the distinct points $q_{1}:=\nu_{d+1}[\alpha_{1},\alpha_{2}]$ and $q_{2}:=\nu_{d+1}[\beta_{1},\beta_{2}]$ (written $q_{1},q_{2}$ to avoid confusion with the coordinates $p_{i}$ of $p$). Since $p,q_{1},q_{2}$ are distinct and collinear, there exist $\mu_{1},\mu_{2}\in\mathbb{C}^{*}$ such that $p=\mu_{1}q_{1}+\mu_{2}q_{2}$. This means that for $i=0,\dotsc,d+1$, we have
$p_{i}=\mu_{1}(-\alpha_{1})^{i}\alpha_{2}^{d+1-i}+\mu_{2}(-\beta_{1})^{i}\beta_{2}^{d+1-i}.$
Then
$\displaystyle f_{p}(x,y)$
$\displaystyle=\sum_{i=0}^{d+1}p_{i}\binom{d+1}{i}x^{d+1-i}y^{i}$
$\displaystyle=\mu_{1}\sum_{i=0}^{d+1}\binom{d+1}{i}(\alpha_{2}x)^{d+1-i}(-\alpha_{1}y)^{i}+\mu_{2}\sum_{i=0}^{d+1}\binom{d+1}{i}(\beta_{2}x)^{d+1-i}(-\beta_{1}y)^{i}$
$\displaystyle=\mu_{1}(\alpha_{2}x-\alpha_{1}y)^{d+1}+\mu_{2}(\beta_{2}x-\beta_{1}y)^{d+1}$
$\displaystyle=L_{1}(x,y)^{d+1}-L_{2}(x,y)^{d+1}$
where the linear forms $L_{1},L_{2}$ are linearly independent.
If
$p\in\mathbb{R}\mathbb{P}^{d+1}\setminus{\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})}$,
then $f_{p}$ is real and $q_{1}$ and $q_{2}$ are non-real points. Taking
conjugates, we have
$p=\overline{\mu_{1}}\nu_{d+1}[\overline{\alpha_{1}},\overline{\alpha_{2}}]+\overline{\mu_{2}}\nu_{d+1}[\overline{\beta_{1}},\overline{\beta_{2}}]$
as vectors, and because of the uniqueness of secants of the rational normal
curve through a given point, we obtain $\overline{\mu_{1}}=\mu_{2}$ and
$\nu_{d+1}[\overline{\alpha_{1}},\overline{\alpha_{2}}]=\nu_{d+1}[\beta_{1},\beta_{2}]$,
hence $\overline{\alpha_{1}}=\beta_{1}$ and $\overline{\alpha_{2}}=\beta_{2}$.
It follows that $\overline{L_{1}(x,y)}=L_{2}(\overline{x},\overline{y})$.
If $p\in\operatorname{Sec}_{\mathbb{R}}(\nu_{d+1})$, then $q_{1}$ and $q_{2}$
are real, so
$[\mu_{1},\mu_{2}],[\alpha_{1},\alpha_{2}],[\beta_{1},\beta_{2}]\in\mathbb{R}\mathbb{P}^{1}$,
and we obtain $f_{p}(x,y)=L_{1}^{d+1}\pm L_{2}^{d+1}$ for some linearly
independent $L_{1},L_{2}$ over $\mathbb{R}$, where the choice of sign depends
on $p$. ∎
We are now in a position to describe the group laws on rational singular
curves. We first note the effect of a change of coordinates on the
parametrisation of $\delta_{p}$. Let
$\varphi\colon\mathbb{F}\mathbb{P}^{1}\to\mathbb{F}\mathbb{P}^{1}$ be a
projective transformation. Then $\nu_{d+1}\circ\varphi$ is a reparametrisation
of the rational normal curve. It is not difficult to see that there exists a
projective transformation
$\psi\colon\mathbb{F}\mathbb{P}^{d+1}\to\mathbb{F}\mathbb{P}^{d+1}$ such that
$\nu_{d+1}\circ\varphi=\psi\circ\nu_{d+1}$. It follows that if we
reparametrise $\delta_{p}$ using $\varphi$, we obtain
$\delta_{p}\circ\varphi=\pi_{p}\circ\nu_{d+1}\circ\varphi=\pi_{p}\circ\psi\circ\nu_{d+1}=\psi^{\prime}\circ\pi_{\psi^{-1}(p)}\circ\nu_{d+1}\cong\delta_{\psi^{-1}(p)},$
where $\psi^{\prime}\colon\mathbb{F}\mathbb{P}^{d}\to\mathbb{F}\mathbb{P}^{d}$
is an appropriate projective transformation such that first transforming
$\mathbb{F}\mathbb{P}^{d+1}$ with $\psi$ and then projecting from $p$ is the
same as projecting from $\psi^{-1}(p)$ and then transforming
$\mathbb{F}\mathbb{P}^{d}$ with $\psi^{\prime}$. So by reparametrising
$\delta_{p}$, we obtain $\delta_{p^{\prime}}$ for some other point
$p^{\prime}$ that is in the orbit of $p$ under the action of projective
transformations that fix $\nu_{d+1}$.
Since
$\delta_{p}\circ\varphi[x_{0},y_{0}],\dots,\delta_{p}\circ\varphi[x_{d},y_{d}]$
lie on a hyperplane if and only if the $\delta_{\psi^{-1}(p)}[x_{i},y_{i}]$’s
are on a hyperplane, it follows from Lemma 3.4 that
$F_{p}(\varphi(x_{0},y_{0}),\dots,\varphi(x_{d},y_{d}))$ is a scalar multiple
of $F_{\psi^{-1}(p)}(x_{0},y_{0},\dots,x_{d},y_{d})$, and hence
$f_{p}\circ\varphi=f_{\psi^{-1}(p)}$ up to a scalar multiple. Thus, we obtain
the same reparametrisation of the fundamental binary form $f_{p}$.
###### Proposition 3.9.
A rational singular curve $\delta_{p}$ in $\mathbb{C}\mathbb{P}^{d}$ has a
natural group structure on its subset of smooth points $\delta_{p}^{*}$ such
that $d+1$ points in $\delta_{p}^{*}$ lie on a hyperplane if and only if they
sum to the identity. This group is isomorphic to $(\mathbb{C},+)$ if the
singularity of $\delta_{p}$ is a cusp and isomorphic to
$(\mathbb{C}^{*},\cdot)$ if the singularity is a node.
If the curve is real and cuspidal or acnodal, then it has a group isomorphic
to $(\mathbb{R},+)$ or $\mathbb{R}/\mathbb{Z}$ depending on whether the
singularity is a cusp or an acnode, such that $d+1$ points in $\delta_{p}^{*}$
lie on a hyperplane if and only if they sum to the identity. If the curve is
real and the singularity is a crunode, then the group is isomorphic to
$(\mathbb{R},+)\times\mathbb{Z}_{2}$, but $d+1$ points in $\delta_{p}^{*}$ lie
on a hyperplane if and only if they sum to $(0,0)$ or $(0,1)$, depending on
$p$.
###### Proof.
First suppose $\delta_{p}$ is cuspidal and
$\mathbb{F}\in\\{\mathbb{R},\mathbb{C}\\}$, so that
$p\in\operatorname{Tan}_{\mathbb{F}}(\nu_{d+1})\setminus{\nu_{d+1}}$. By
Theorem 3.8, $f_{p}=L_{1}^{d}L_{2}$ for some linearly independent linear forms
$L_{1}$ and $L_{2}$. By choosing $\varphi$ appropriately, we may assume
without loss of generality that $L_{1}(x,y)=x$ and $L_{2}(x,y)=(d+1)y$, so
that $f_{p}(x,y)=(d+1)x^{d}y$ and $p=[0,1,0,\dots,0]$, with the cusp of
$\delta_{p}$ at $\delta_{p}[0,1]$. It follows that the polarisation of $f_{p}$
is $F_{p}(x_{0},y_{0},\dotsc,x_{d},y_{d})=P_{1}=x_{0}x_{1}\dotsb
x_{d}\sum_{i=0}^{d}y_{i}/x_{i}$. For $[x_{i},y_{i}]\neq[0,1]$, $i=0,\dots,d$,
the points $\delta_{p}[x_{i},y_{i}]$ are on a hyperplane if and only if
$\sum_{i=0}^{d}y_{i}/x_{i}=0$. Thus we identify
$\delta_{p}[x,y]\in\delta_{p}^{*}$ with $y/x\in\mathbb{F}$, and the group is
$(\mathbb{F},+)$.
Next suppose $\delta_{p}$ is nodal, so that
$p\in\operatorname{Sec}_{\mathbb{C}}(\nu_{d+1})\setminus\operatorname{Tan}_{\mathbb{C}}(\nu_{d+1})$.
By Theorem 3.8, $f_{p}=L_{1}^{d+1}-L_{2}^{d+1}$ for some linearly independent
linear forms $L_{1}$ and $L_{2}$. Again by choosing $\varphi$ appropriately,
we may assume without loss of generality that $L_{1}(x,y)=x$ and
$L_{2}(x,y)=y$, so that $f_{p}(x,y)=x^{d+1}-y^{d+1}$ and $p=[1,0,\dots,0,-1]$,
with the node of $\delta_{p}$ at $\delta_{p}[0,1]=\delta_{p}[1,0]$. The
polarisation of $f_{p}$ is
$F_{p}(x_{0},y_{0},\dots,x_{d},y_{d})=P_{0}-P_{d+1}=x_{0}x_{1}\dotsb
x_{d}-y_{0}y_{1}\dotsb y_{d}$. Therefore, $\delta_{p}[x_{i},y_{i}]$,
$i=0,\dotsc,d$, are on a hyperplane if and only if
$\prod_{i=0}^{d}y_{i}/x_{i}=1$. Thus we identify
$\delta_{p}[x,y]\in\delta_{p}^{*}$ with $y/x\in\mathbb{C}^{*}$, and the group
is $(\mathbb{C}^{*},\cdot)$.
Now suppose $\delta_{p}$ is real and the node is an acnode. Then the linearly
independent linear forms $L_{1}$ and $L_{2}$ given by Theorem 3.8 are
$L_{1}(x,y)=\alpha x+\beta y$ and
$L_{2}(x,y)=\overline{\alpha}x+\overline{\beta}y$ for some
$\alpha,\beta\in\mathbb{C}\setminus\mathbb{R}$. There exists
$\varphi\colon\mathbb{R}\mathbb{P}^{1}\to\mathbb{R}\mathbb{P}^{1}$ such that
$L_{1}\circ\varphi=x+iy$ and $L_{2}\circ\varphi=x-iy$, hence we may assume
after such a reparametrisation that $f_{p}(x,y)=(x+iy)^{d+1}-(x-iy)^{d+1}$ and
that the node is at $\delta_{p}[i,1]=\delta_{p}[-i,1]$. The polarisation of
$f_{p}$ is
$F_{p}(x_{0},y_{0},\dots,x_{d},y_{d})=\prod_{j=0}^{d}(x_{j}+iy_{j})-\prod_{j=0}^{d}(x_{j}-iy_{j})$,
and it follows that $\delta_{p}[x_{0},y_{0}],\dotsc,\delta_{p}[x_{d},y_{d}]$
are collinear if and only if
$\prod_{j=0}^{d}\frac{x_{j}+iy_{j}}{x_{j}-iy_{j}}=1$. We now identify
$\mathbb{R}\mathbb{P}^{1}$ with the circle
$\mathbb{R}/\mathbb{Z}\cong\left\\{z\in\mathbb{C}:|z|=1\right\\}$ using the
Möbius transformation $[x,y]\to\frac{x+iy}{x-iy}$.
It remains to consider the crunodal case. Then, similar to the complex nodal
case, we obtain after a reparametrisation that $\delta_{p}[x_{i},y_{i}]$,
$i=0,\dotsc,d$, are on a hyperplane if and only if
$\prod_{i=0}^{d}y_{i}/x_{i}=\pm 1$, where the sign depends on $p$. Thus we
identify $\delta_{p}[x,y]\in\delta_{p}^{*}$ with $y/x\in\mathbb{R}^{*}$, and
the group is $(\mathbb{R}^{*},\cdot)\cong\mathbb{R}\times\mathbb{Z}_{2}$,
where $\pm 1\in\mathbb{R}^{*}$ corresponds to
$(0,0),(0,1)\in\mathbb{R}\times\mathbb{Z}_{2}$ respectively. ∎
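The cuspidal case of this proof admits a quick numerical check in a small instance. In the coordinate convention of the proof of Theorem 3.8, for $d=3$ and $p=[0,1,0,\dots,0]$, a smooth point with parameter $t=y/x$ projects to a point whose coordinates are, up to reordering and sign changes of columns (which do not affect the vanishing of determinants), $(1,t,t^{2},t^{4})$; four such points lie on a hyperplane exactly when $t_{0}+t_{1}+t_{2}+t_{3}=0$. The following sketch (ours, illustrative) confirms this on sample values.

```python
# Cuspidal group law for d = 3: four smooth points of delta_p lie on a
# hyperplane iff their parameters t_i = y_i/x_i sum to zero (illustrative).
import sympy as sp

def row(t):
    # coordinates of delta_p[1, t] with the coordinate supporting p dropped;
    # columns are reordered/sign-flipped, which does not affect det vanishing
    return [1, t, t**2, t**4]

ts = [1, 2, 3, -6]                                  # distinct, sum = 0
assert sp.Matrix([row(t) for t in ts]).det() == 0   # on a hyperplane
ts = [1, 2, 3, -5]                                  # distinct, sum != 0
assert sp.Matrix([row(t) for t in ts]).det() != 0   # not on a hyperplane
```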
The group on an elliptic normal curve or a rational singular curve of degree
$d+1$ as described in Propositions 3.1 and 3.9 is not uniquely determined by
the property that $d+1$ points lie on a hyperplane if and only if they sum to
some fixed element $c$. Indeed, for any $t\in(\delta^{*},\oplus)$, $x\boxplus
y:=x\oplus y\oplus t$ defines another abelian group on $\delta^{*}$ with the
property that $d+1$ points lie on a hyperplane if and only if they sum to
$c\oplus dt$. However, these two groups are isomorphic in a natural way with
an isomorphism given by the translation map $x\mapsto x\ominus t$. The next
proposition shows that we always get uniqueness up to some translation. It will
be used in Section 5.
###### Proposition 3.10.
Let $(G,\oplus,0)$ and $(G,\boxplus,0^{\prime})$ be abelian groups on the same
ground set, such that for some $d\geqslant 2$ and some $c,c^{\prime}\in G$,
$x_{1}\oplus\dotsb\oplus x_{d+1}=c\iff x_{1}\boxplus\dotsb\boxplus
x_{d+1}=c^{\prime}\quad\text{for all }x_{1},\dots,x_{d+1}\in G.$
Then $(G,\oplus,0)\to(G,\boxplus,0^{\prime}),x\mapsto x\boxminus 0=x\oplus
0^{\prime}$ is an isomorphism, and
$c^{\prime}=c\boxplus\underbrace{0\boxplus\dotsb\boxplus 0}_{\text{$d$
times}}=c\ominus(\underbrace{0^{\prime}\oplus\dotsb\oplus
0^{\prime}}_{\text{$d$ times}}).$
###### Proof.
It is clear that the cases $d\geqslant 3$ follow from the case $d=2$, which we
now show. First note that for any $x,y\in G$, $x\boxplus y\boxplus(c\ominus
x\ominus y)=c^{\prime}$ and $(x\oplus y)\boxplus 0\boxplus(c\ominus x\ominus
y)=c^{\prime}$, since $x\oplus y\oplus(c\ominus x\ominus y)=(x\oplus y)\oplus
0\oplus(c\ominus x\ominus y)=c$. Thus we have $x\boxplus y=(x\oplus y)\boxplus
0$, hence $(x\oplus y)\boxminus 0=x\boxplus y\boxminus 0\boxminus
0=(x\boxminus 0)\boxplus(y\boxminus 0)$. Similarly we have $x\oplus
y=(x\boxplus y)\oplus 0^{\prime}$, hence $x\boxplus y=x\oplus y\ominus
0^{\prime}$, so in particular $0^{\prime}=0\boxminus 0=0\oplus(\boxminus
0)\ominus 0^{\prime}$, and $\boxminus 0=0^{\prime}\oplus 0^{\prime}$. So we
also have $x\boxminus 0=x\oplus(\boxminus 0)\ominus 0^{\prime}=x\oplus
0^{\prime}$, and $(G,\oplus,0)\to(G,\boxplus,0^{\prime}),x\mapsto x\boxminus
0=x\oplus 0^{\prime}$ is an isomorphism. ∎
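As a concrete illustration of Proposition 3.10 (ours, with $d=2$ and sample values), take $G=\mathbb{Z}_{n}$ with $\oplus$ the usual addition and $x\boxplus y:=x+y-t$, whose identity is $0^{\prime}=t$; then $x_{1}\oplus x_{2}\oplus x_{3}=c$ if and only if $x_{1}\boxplus x_{2}\boxplus x_{3}=c-2t$, and $x\mapsto x\oplus 0^{\prime}$ is the predicted isomorphism. The following brute-force sketch checks this.

```python
# Brute-force check of Proposition 3.10 on Z_n with d = 2 (illustrative;
# n, t, c are sample values).
n, t, c = 12, 5, 7
G = range(n)

def boxplus(x, y):
    return (x + y - t) % n          # identity of boxplus is 0' = t

c_prime = (c - 2 * t) % n           # = c boxplus 0 boxplus 0
for x1 in G:
    for x2 in G:
        for x3 in G:
            lhs = (x1 + x2 + x3) % n == c
            rhs = boxplus(boxplus(x1, x2), x3) == c_prime
            assert lhs == rhs       # the two sum conditions coincide

phi = lambda x: (x + t) % n         # x -> x (+) 0'
assert phi(0) == t                  # identity maps to identity
assert all(phi((x + y) % n) == boxplus(phi(x), phi(y))
           for x in G for y in G)   # phi is a homomorphism (and a bijection)
```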
## 4 Structure theorem
We prove Theorem 1.1 in this section. The main idea is to induct on the
dimension $d$ via projection. We start with the following statement of the
slightly different case $d=3$, which is [LS18, Theorem 1.1]. Note that it
contains one more type that does not occur when $d\geqslant 4$.
###### Theorem 4.1.
Let $K>0$ and suppose $n\geqslant C\max\\{K^{8},1\\}$ for some sufficiently
large absolute constant $C>0$. Let $P$ be a set of $n$ points in
$\mathbb{R}\mathbb{P}^{3}$ with no $3$ points collinear. If $P$ spans at most
$Kn^{2}$ ordinary planes, then up to projective transformations, $P$ differs
in at most $O(K)$ points from a configuration of one of the following types:
1. A subset of a plane;
2. A subset of two disjoint conics lying on the same quadric with $\frac{n}{2}\pm O(K)$ points of $P$ on each of the two conics;
3. A coset of a subgroup of the smooth points of an elliptic or acnodal space quartic curve.
We first prove the following weaker lemma using results from Section 2.
###### Lemma 4.2.
Let $d\geqslant 4$, $K>0$, and suppose $n\geqslant
C\max\\{d^{3}2^{d}K,(dK)^{8}\\}$ for some sufficiently large absolute constant
$C>0$. Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^{d}$ where
every $d$ points span a hyperplane. If $P$ spans at most $K\binom{n-1}{d-1}$
ordinary hyperplanes, then all but at most $O(d2^{d}K)$ points of $P$ are
contained in a hyperplane or an irreducible non-degenerate curve of degree
$d+1$ that is either elliptic or rational and singular.
###### Proof.
We use induction on $d\geqslant 4$ to show that for all $K>0$ and all
$n\geqslant f(d,K)$, for all sets $P$ of $n$ points in
$\mathbb{R}\mathbb{P}^{d}$ with any $d$ points spanning a hyperplane, if $P$
has at most $K\binom{n-1}{d-1}$ ordinary hyperplanes, then all but at most
$g(d,K)$ points of $P$ are contained in a hyperplane or an irreducible non-
degenerate curve of degree $d+1$, and that if the curve is rational then it
has to be singular, where
$g(d,K):=\sum_{k=0}^{d}k^{3}2^{d-k}+C_{1}2^{d}(d-1)K$
and
$f(d,K):=d^{2}(g(d,K)+C_{2}d^{10})+C(d-1)^{8}K^{8}$
for appropriate $C_{1},C_{2}>0$ to be determined later and $C$ from Theorem
4.1. We assume that this holds in $\mathbb{R}\mathbb{P}^{d-1}$ if $d\geqslant
5$, while Theorem 4.1 takes the place of the induction hypothesis when $d=4$.
Let $P^{\prime}$ denote the set of points $p\in P$ such that there are at most
$\frac{d-1}{d-2}K\binom{n-2}{d-2}$ ordinary hyperplanes through $p$. By
counting incident point-ordinary-hyperplane pairs, we obtain
$dK\binom{n-1}{d-1}>(n-|P^{\prime}|)\frac{d-1}{d-2}K\binom{n-2}{d-2},$
which gives $|P^{\prime}|>n/(d-1)^{2}$. For any $p\in P^{\prime}$, the
projected set $\pi_{p}(P\setminus\\{p\\})$ has $n-1$ points and spans at most
$\frac{d-1}{d-2}K\binom{n-2}{d-2}$ ordinary $(d-2)$-flats in
$\mathbb{R}\mathbb{P}^{d-1}$, and any $d-1$ points of
$\pi_{p}(P\setminus\\{p\\})$ span a $(d-2)$-flat. To apply the induction
hypothesis, we need
$f(d,K)\geqslant 1+f(d-1,\tfrac{d-1}{d-2}K),$
as well as $f(3,K)\geqslant C\max\\{K^{8},1\\}$, both of which easily follow
from the definition of $f(d,K)$. Then all except $g(d-1,\frac{d-1}{d-2}K)$
points of $\pi_{p}(P\setminus\\{p\\})$ are contained in a $(d-2)$-flat or a
non-degenerate curve $\gamma_{p}$ of degree $d$ in
$\mathbb{R}\mathbb{P}^{d-1}$, which is either irreducible or possibly two
conics with $\frac{n}{2}\pm O(K)$ points on each when $d=4$.
If there exists a $p\in P^{\prime}$ such that all but at most
$g(d-1,\frac{d-1}{d-2}K)$ points of $\pi_{p}(P\setminus\\{p\\})$ are contained
in a $(d-2)$-flat, then we are done, since $g(d,K)>g(d-1,\frac{d-1}{d-2}K)$.
Thus we may assume without loss of generality that for all $p\in P^{\prime}$
we obtain a curve $\gamma_{p}$.
Let $p$ and $p^{\prime}$ be two distinct points of $P^{\prime}$. Then all but
at most $2g(d-1,\frac{d-1}{d-2}K)$ points of $P$ lie on the intersection
$\delta$ of the two cones $\overline{\pi^{-1}_{p}(\gamma_{p})}$ and
$\overline{\pi^{-1}_{p^{\prime}}(\gamma_{p^{\prime}})}$. Since the curves
$\gamma_{p}$ and $\gamma_{p^{\prime}}$ are $1$-dimensional, the two cones are
$2$-dimensional. Since their vertices $p$ and $p^{\prime}$ are distinct, the
cones do not have a common irreducible component, so their intersection is a
variety of dimension at most $1$. By Bézout’s theorem (Theorem 2.1), $\delta$
has total degree at most $d^{2}$, so has to have at least one $1$-dimensional
irreducible component. Let $\delta_{1},\dotsc,\delta_{k}$ be the
$1$-dimensional components of $\delta$, where $1\leqslant k\leqslant d^{2}$.
Let $\delta_{1}$ be the component with the most points of $P^{\prime}$ amongst
all the $\delta_{i}$, so that
$|P^{\prime}\cap\delta_{1}|\geqslant\frac{|P^{\prime}|-2g(d-1,\frac{d-1}{d-2}K)}{d^{2}}.$
Choose a $q\in P^{\prime}\cap\delta_{1}$ such that $\pi_{q}$ is generically
one-to-one on $\delta_{1}$. By Lemma 2.2 there are at most
$O(\deg(\delta_{1})^{4})=O(d^{8})$ exceptional points, so we need
$|P^{\prime}\cap\delta_{1}|>C_{2}d^{8}.$ (9)
Since $|P^{\prime}|>n/(d-1)^{2}$, we need
$\frac{\frac{n}{(d-1)^{2}}-2g(d-1,\frac{d-1}{d-2}K)}{d^{2}}>C_{2}d^{8},$
or equivalently, $n>(d-1)^{2}(2g(d-1,\frac{d-1}{d-2}K)+C_{2}d^{10})$. However,
this follows from the definition of $f(d,K)$. If $\pi_{q}$ does not map
$\delta_{1}\setminus\\{q\\}$ into $\gamma_{q}$, then by Bézout’s theorem
(Theorem 2.1), $n-1-g(d-1,\frac{d-1}{d-2}K)\leqslant d^{3}$. However, this
does not occur since $f(d,K)>g(d-1,\frac{d-1}{d-2}K)+d^{3}+1$. Thus,
$\pi_{q}$ maps $\delta_{1}\setminus\\{q\\}$ into $\gamma_{q}$, hence
$\delta_{1}$ is an irreducible curve of degree $d+1$ (or, when $d=4$, possibly
a twisted cubic containing at most $n/2+O(K)$ points of $P$).
We first consider the case where $\delta_{1}$ has degree $d+1$. We apply Lemma
2.4 to $\delta_{1}$ and each $\delta_{i}$, $i=2,\dots,k$, and for this we need
$|P^{\prime}\cap\delta_{1}|>C^{\prime\prime}d^{4}$, since
$\deg(\delta_{1})\leqslant d^{2}$ and $\sum_{i=2}^{d}\deg(\delta_{i})\leqslant
d^{2}$. However, this condition is implied by (9). Thus we find a
$q^{\prime}\in P^{\prime}\cap\delta_{1}$ such that
$\overline{\pi_{q^{\prime}}(\delta_{1}\setminus\\{q^{\prime}\\})}=\gamma_{q^{\prime}}$
as before, and in addition, the cone
$\overline{\pi_{q^{\prime}}^{-1}(\gamma_{q^{\prime}})}$ does not contain any
other $\delta_{i}$, $i=2,\dots,k$. Since all points of $P$ except
$2g(d-1,\frac{d-1}{d-2}K)+d^{2}$ lie on $\delta_{1}\cup\dots\cup\delta_{k}$,
we obtain by Bézout’s theorem (Theorem 2.1) that
$|P\setminus{\delta_{1}}|\leqslant
d(d^{2}-d-1)+d^{2}+2g(d-1,\tfrac{d-1}{d-2}K)<g(d,K).$
We next dismiss the case where $d=4$ and $\delta_{1}$ is a twisted cubic. We
redefine $P^{\prime}$ to be the set of points $p\in P$ such that there are at
most $12Kn^{2}$ ordinary hyperplanes through $p$. Then $|P^{\prime}|\geqslant
2n/3$. Since we have $|P\cap\delta_{1}|\leqslant n/2+O(K)$, by Lemma 2.3 there
exists $q^{\prime}\in P^{\prime}\setminus\delta_{1}$ such that the projection
from $q^{\prime}$ will map $\delta_{1}$ onto a twisted cubic in
$\mathbb{R}\mathbb{P}^{3}$. However, by Bézout’s theorem (Theorem 2.1) and
Theorem 4.1, $\pi_{q^{\prime}}(\delta_{1}\setminus\\{q^{\prime}\\})$ has to be
mapped onto a conic, which gives a contradiction.
Note that $g(d,K)=O(d2^{d}K)$ since $K=\Omega(1/d)$ by [BM17, Theorem 2.4]. We
have shown that all but $O(d2^{d}K)$ points of $P$ are contained in a
hyperplane or an irreducible non-degenerate curve $\delta$ of degree $d+1$. By
Proposition 3.2, this curve is either elliptic or rational. It remains to show
that if $\delta$ is rational, then it has to be singular. Similar to what was
shown above, we can find more than $3$ points $p\in\delta$ for which the
projection $\overline{\pi_{p}(\delta\setminus\\{p\\})}$ is a rational curve of
degree $d$ that is singular by the induction hypothesis. Lemma 3.7 now implies
that $\delta$ is singular. ∎
To get the coset structure on the curves as stated in Theorem 1.1, we use a
simple generalisation of an additive combinatorial result used by Green and
Tao [GT13, Proposition A.5]. This captures the principle that if a finite
subset of a group is almost closed, then it is close to a subgroup. The case
$d=3$ was shown in [LMMSSZ18].
###### Lemma 4.3.
Let $d\geqslant 2$. Let $A_{1},A_{2},\dotsc,A_{d+1}$ be $d+1$ subsets of some
abelian group $(G,\oplus)$, all of size within $K$ of $n$, where $K\leqslant
cn/d^{2}$ for some sufficiently small absolute constant $c>0$. Suppose there
are at most $Kn^{d-1}$ $d$-tuples $(a_{1},a_{2},\dotsc,a_{d})\in A_{1}\times
A_{2}\times\dotsb\times A_{d}$ for which $a_{1}\oplus a_{2}\oplus\dotsb\oplus
a_{d}\notin A_{d+1}$. Then there is a subgroup $H$ of $G$ and cosets $H\oplus
x_{i}$ for $i=1,\dotsc,d$ such that
$|A_{i}\mathbin{\triangle}(H\oplus
x_{i})|,\left|A_{d+1}\mathbin{\triangle}\left(H\oplus\bigoplus_{i=1}^{d}x_{i}\right)\right|=O(K).$
###### Proof.
We use induction on $d\geqslant 2$ to show that the symmetric differences in
the conclusion of the lemma have size at most
$C\prod_{i=1}^{d}(1+\frac{1}{i^{2}})K$ for some sufficiently large absolute
constant $C>0$. The base case $d=2$ is [GT13, Proposition A.5].
Fix a $d\geqslant 3$. By the pigeonhole principle, there exists $b_{1}\in
A_{1}$ such that there are at most
$\frac{1}{n-K}Kn^{d-1}\leqslant\frac{1}{1-\frac{c}{d^{2}}}Kn^{d-2}$
$(d-1)$-tuples $(a_{2},\dotsc,a_{d})\in A_{2}\times\dotsb\times A_{d}$ for
which $b_{1}\oplus a_{2}\oplus\dotsb\oplus a_{d}\notin A_{d+1}$, or
equivalently $a_{2}\oplus\dotsb\oplus a_{d}\notin A_{d+1}\ominus b_{1}$. Since
$\frac{1}{1-\frac{c}{d^{2}}}K\leqslant\frac{c}{d^{2}-c}n\leqslant\frac{c}{(d-1)^{2}}n,$
we can use induction to get a subgroup $H$ of $G$ and $x_{2},\dotsc,x_{d}\in
G$ such that for $j=2,\dotsc,d$ we have
$|A_{j}\mathbin{\triangle}(H\oplus x_{j})|,\left|(A_{d+1}\ominus
b_{1})\mathbin{\triangle}\left(H\oplus\bigoplus_{j=2}^{d}x_{j}\right)\right|\leqslant
C\prod_{i=1}^{d-1}\left(1+\frac{1}{i^{2}}\right)\frac{1}{1-\frac{c}{d^{2}}}K.$
Since $|A_{d}\cap(H\oplus x_{d})|\geqslant
n-K-C\prod_{i=1}^{d-1}(1+\frac{1}{i^{2}})\frac{1}{1-\frac{c}{d^{2}}}K$, we
repeat the same pigeonhole argument on $A_{d}\cap(H\oplus x_{d})$ to find a
$b_{d}\in A_{d}\cap(H\oplus x_{d})$ such that there are at most
$\displaystyle\frac{1}{n-K-C\prod_{i=1}^{d-1}\left(1+\frac{1}{i^{2}}\right)\frac{1}{1-\frac{c}{d^{2}}}K}Kn^{d-1}$
$\displaystyle\leqslant\frac{1}{1-\frac{c}{d^{2}}-C\prod_{i=1}^{d-1}\left(1+\frac{1}{i^{2}}\right)\frac{c}{d^{2}-c}}Kn^{d-2}$
$\displaystyle\leqslant\frac{1}{1-C_{1}\frac{c}{d^{2}-c}}Kn^{d-2}$
$\displaystyle\leqslant\left(1+\frac{C_{2}c}{d^{2}-c}\right)Kn^{d-2}$
$\displaystyle\leqslant\left(1+\frac{1}{d^{2}}\right)Kn^{d-2}$
$(d-1)$-tuples $(a_{1},\dotsc,a_{d-1})\in A_{1}\times\dotsb\times A_{d-1}$ with
$a_{1}\oplus\dotsb\oplus a_{d-1}\oplus b_{d}\notin A_{d+1}$, for some absolute
constants $C_{1},C_{2}>0$ depending on $C$, by making $c$ sufficiently small.
Now $(1+\frac{1}{d^{2}})K\leqslant cn/(d-1)^{2}$, so by induction again, there
exist a subgroup $H^{\prime}$ of $G$ and elements
$x_{1},x_{2}^{\prime},\dotsc,x_{d-1}^{\prime}\in G$ such that for
$k=2,\dotsc,d-1$ we have
$|A_{1}\mathbin{\triangle}(H^{\prime}\oplus
x_{1})|,|A_{k}\mathbin{\triangle}(H^{\prime}\oplus
x_{k}^{\prime})|,\left|(A_{d+1}\ominus
b_{d})\mathbin{\triangle}\left(H^{\prime}\oplus
x_{1}\oplus\bigoplus_{k=2}^{d-1}x_{k}^{\prime}\right)\right|\leqslant
C\prod_{i=1}^{d-1}\left(1+\frac{1}{i^{2}}\right)\left(1+\frac{1}{d^{2}}\right)K.$
From this, it follows that $|(H\oplus x_{k})\cap(H^{\prime}\oplus
x_{k}^{\prime})|\geqslant n-K-2C\prod_{i=1}^{d}(1+\frac{1}{i^{2}})K=n-O(K)$.
Since $(H\oplus x_{k})\cap(H^{\prime}\oplus x_{k}^{\prime})$ is non-empty, it
has to be a coset of $H^{\prime}\cap H$. If $H^{\prime}\neq H$, then
$|H^{\prime}\cap H|\leqslant n/2+O(K)$, a contradiction since $c$ is
sufficiently small. Therefore, $H=H^{\prime}$, and $H\oplus
x_{k}=H^{\prime}\oplus x_{k}^{\prime}$. So we have
$|A_{i}\mathbin{\triangle}(H\oplus
x_{i})|,\left|A_{d+1}\mathbin{\triangle}\left(H\oplus\bigoplus_{\ell=1}^{d-1}x_{\ell}\oplus
b_{d}\right)\right|\leqslant C\prod_{i=1}^{d}\left(1+\frac{1}{i^{2}}\right)K.$
Since $b_{d}\in H\oplus x_{d}$, we also obtain
$\left|A_{d+1}\mathbin{\triangle}\left(H\oplus\bigoplus_{i=1}^{d}x_{i}\right)\right|\leqslant
C\prod_{i=1}^{d}\left(1+\frac{1}{i^{2}}\right)K.\qed$
To apply Lemma 4.3, we first need to know that removing $K$ points from a set
does not change the number of ordinary hyperplanes it spans by too much.
###### Lemma 4.4.
Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^{d}$, $d\geqslant 2$,
where every $d$ points span a hyperplane. Let $P^{\prime}$ be a subset that is
obtained from $P$ by removing at most $K$ points. If $P$ spans $m$ ordinary
hyperplanes, then $P^{\prime}$ spans at most $m+\frac{1}{d}K\binom{n-1}{d-1}$
ordinary hyperplanes.
###### Proof.
Fix a point $p\in P$. Since every $d$ points span a hyperplane, there are at
most $\binom{n-1}{d-1}$ sets of $d$ points from $P$ containing $p$ that span a
hyperplane through $p$. Thus, the number of $(d+1)$-point hyperplanes through
$p$ is at most $\frac{1}{d}\binom{n-1}{d-1}$, since a set of $d+1$ points that
contains $p$ has $d$ subsets of size $d$ that contain $p$. If we remove points
of $P$ one-by-one to obtain $P^{\prime}$, we thus create at most
$\frac{1}{d}K\binom{n-1}{d-1}$ ordinary hyperplanes. ∎
The following lemma then translates the additive combinatorial Lemma 4.3 to
our geometric setting.
###### Lemma 4.5.
Let $d\geqslant 4$, $K>0$, and suppose $n\geqslant C(d^{3}K+d^{4})$ for some
sufficiently large absolute constant $C>0$. Let $P$ be a set of $n$ points in
$\mathbb{R}\mathbb{P}^{d}$ where every $d$ points span a hyperplane. Suppose
$P$ spans at most $K\binom{n-1}{d-1}$ ordinary hyperplanes, and all but at
most $dK$ points of $P$ lie on an elliptic normal curve or a rational singular
curve $\delta$. Then $P$ differs in at most $O(dK+d^{2})$ points from a coset
$H\oplus x$ of a subgroup $H$ of $\delta^{*}$, the smooth points of $\delta$,
for some $x$ such that $(d+1)x\in H$. In particular, $\delta$ is either an
elliptic normal curve or a rational acnodal curve.
###### Proof.
Let $P^{\prime}=P\cap\delta^{*}$. Then by Lemma 4.4, $P^{\prime}$ spans at
most $K\binom{n-1}{d-1}+d\frac{1}{d}K\binom{n-1}{d-1}=2K\binom{n-1}{d-1}$
ordinary hyperplanes.
First suppose $\delta$ is an elliptic normal curve or a rational cuspidal or
acnodal curve. If $a_{1},\dotsc,a_{d}\in\delta^{*}$ are distinct, then by
Propositions 3.1 and 3.9, the hyperplane through $a_{1},\dotsc,a_{d}$ meets
$\delta$ again in the unique point $a_{d+1}=\ominus(a_{1}\oplus\dotsb\oplus
a_{d})$. This implies that $a_{d+1}\in P^{\prime}$ for all but at most
$d!O(K\binom{n-1}{d-1})$ $d$-tuples $(a_{1},\dotsc,a_{d})\in(P^{\prime})^{d}$
with all $a_{i}$ distinct. There are also at most $\binom{d}{2}n^{d-1}$
$d$-tuples $(a_{1},\dotsc,a_{d})\in(P^{\prime})^{d}$ for which the $a_{i}$ are
not all distinct. Thus, $a_{1}\oplus\dotsb\oplus a_{d}\in\ominus P^{\prime}$
for all but at most $O((dK+d^{2})n^{d-1})$ $d$-tuples
$(a_{1},\dotsc,a_{d})\in(P^{\prime})^{d}$. Applying Lemma 4.3 with
$A_{1}=\dotsb=A_{d}=P^{\prime}$ and $A_{d+1}=\ominus P^{\prime}$, we obtain a
finite subgroup $H$ of $\delta^{*}$ and a coset $H\oplus x$ such that
$|P^{\prime}\mathbin{\triangle}(H\oplus x)|=O(dK+d^{2})$ and $|\ominus
P^{\prime}\mathbin{\triangle}(H\oplus dx)|=O(dK+d^{2})$, the latter being
equivalent to $|P^{\prime}\mathbin{\triangle}(H\ominus dx)|=O(dK+d^{2})$. Thus
we have $|(H\oplus x)\mathbin{\triangle}(H\ominus dx)|=O(dK+d^{2})$, which
implies $(d+1)x\in H$. Also, $\delta$ cannot be cuspidal, otherwise by
Proposition 3.9 we have $\delta^{*}\cong(\mathbb{R},+)$, which has no finite
subgroup of order greater than $1$.
Now suppose $\delta$ is a rational crunodal curve. By Proposition 3.9, there
is a bijective map
$\varphi:(\mathbb{R},+)\times\mathbb{Z}_{2}\rightarrow\delta^{*}$ such that
$d+1$ points in $\delta^{*}$ lie in a hyperplane if and only if they sum to
$h$, where $h=\varphi(0,0)$ or $\varphi(0,1)$ depending on the curve $\delta$.
If $h=\varphi(0,0)$, then the above argument goes through, and we obtain a
contradiction as we have by Proposition 3.9 that
$\delta^{*}\cong(\mathbb{R},+)\times\mathbb{Z}_{2}$, which has no finite
subgroup of order greater than $2$. Otherwise, the hyperplane through distinct
$a_{1},\dotsc,a_{d}\in\delta^{*}$ meets $\delta$ again in the unique point
$a_{d+1}=\varphi(0,1)\ominus(a_{1}\oplus\dotsb\oplus a_{d})$. As before, this
implies that $a_{d+1}\in P^{\prime}$ for all but at most
$O((dK+d^{2})n^{d-1})$ $d$-tuples $(a_{1},\dotsc,a_{d})\in(P^{\prime})^{d}$,
or equivalently $a_{1}\oplus\dotsb\oplus a_{d}\in\varphi(0,1)\ominus
P^{\prime}$. Applying Lemma 4.3 with $A_{1}=\dotsb=A_{d}=P^{\prime}$ and
$A_{d+1}=\varphi(0,1)\ominus P^{\prime}$, we obtain a finite subgroup $H$ of
$\delta^{*}$, giving a contradiction as before. ∎
We can now prove Theorem 1.1.
###### Proof of Theorem 1.1.
By Lemma 4.2, all but at most $O(d2^{d}K)$ points of $P$ are contained in a
hyperplane or an irreducible curve $\delta$ of degree $d+1$ that is either
elliptic or rational and singular. In the former case, we obtain the
hyperplane case of Theorem 1.1, so suppose we are in the latter case. We then
apply Lemma 4.5 to obtain the coset case of Theorem 1.1, completing the proof. ∎
## 5 Extremal configurations
We prove Theorems 1.2 and 1.3 in this section. It will turn out that
minimising the number of ordinary hyperplanes spanned by a set is equivalent
to maximising the number of $(d+1)$-point hyperplanes, so we can apply Theorem
1.1 in both theorems. Then we only have two cases to consider, where most of
our point set is contained either in a hyperplane or a coset of a subgroup of
an elliptic normal curve or the smooth points of a rational acnodal curve.
The first case is easy, and we get the following lower bound.
###### Lemma 5.1.
Let $d\geqslant 4$, $K\geqslant 1$, and let $n\geqslant 2dK$. Let $P$ be a set
of $n$ points in $\mathbb{R}\mathbb{P}^{d}$ where every $d$ points span a
hyperplane. If all but $K$ points of $P$ lie on a hyperplane, then $P$ spans
at least $\binom{n-1}{d-1}$ ordinary hyperplanes, with equality if and only if
$K=1$.
###### Proof.
Let $\Pi$ be a hyperplane with $|P\cap\Pi|=n-K$. Since $n-K>d$, any ordinary
hyperplane spanned by $P$ must contain at least one point not in $\Pi$. Let
$m_{i}$ be the number of hyperplanes containing exactly $d-1$ points of
$P\cap\Pi$ and exactly $i$ points of $P\setminus\Pi$, $i=1,\dots,K$. Then the
number of unordered $d$-tuples of elements from $P$ with exactly $d-1$
elements in $\Pi$ is
$K\binom{n-K}{d-1}=m_{1}+2m_{2}+3m_{3}+\dots+Km_{K}.$
Now consider the number of unordered $d$-tuples of elements from $P$ with
exactly $d-2$ elements in $\Pi$, which equals $\binom{K}{2}\binom{n-K}{d-2}$.
One way to generate such a $d$-tuple is to take one of the $m_{i}$ hyperplanes
containing $i$ points of $P\setminus\Pi$ and $d-1$ points of $P\cap\Pi$,
choose two of the $i$ points, and remove one of the $d-1$ points. Since any
$d$ points span a hyperplane, there is no overcounting. This gives
$\displaystyle\binom{K}{2}\binom{n-K}{d-2}$
$\displaystyle\geqslant(d-1)\left(\binom{2}{2}m_{2}+\binom{3}{2}m_{3}+\binom{4}{2}m_{4}+\dotsb\right)$
$\displaystyle\geqslant\frac{d-1}{2}(2m_{2}+3m_{3}+4m_{4}+\dotsb).$
Hence the number of ordinary hyperplanes is at least
$m_{1}\geqslant
K\binom{n-K}{d-1}-\frac{K(K-1)}{d-1}\binom{n-K}{d-2}=K\binom{n-K}{d-1}\frac{n-2K-d+3}{n-K-d+2}.$
We next show that for all $K\geqslant 2$, if $n\geqslant 2dK$ then
$K\binom{n-K}{d-1}\frac{n-2K-d+3}{n-K-d+2}>\binom{n-1}{d-1}.$
This is equivalent to
$K>\frac{n-K+1}{n-2K-d+3}\prod_{i=1}^{K-2}\frac{n-i}{n-d-i+1}.$ (10)
Note that
$\frac{n-K+1}{n-2K-d+3}<2$ (11)
if $n>3K+2d-5$ and
$\frac{n-i}{n-d-i+1}<\frac{i+2}{i+1}$ (12)
if $n\geqslant(i+2)d$ for each $i=1,\dots,K-2$. However, since $2dK>(i+2)d$
and also $2dK>4K+2d-5$, the inequality (10) now follows from (11) and (12). ∎
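The inequality chain (10)–(12) can also be spot-checked by machine. The following sketch (ours; the ranges of $d$, $K$, and $n$ are sample values) verifies, in exact cross-multiplied integer arithmetic, that $K\binom{n-K}{d-1}\frac{n-2K-d+3}{n-K-d+2}>\binom{n-1}{d-1}$ for $K\geqslant 2$ and $n\geqslant 2dK$.

```python
# Spot check of the inequality concluding Lemma 5.1 (illustrative ranges).
from math import comb

for d in range(4, 8):
    for K in range(2, 6):
        for n in range(2 * d * K, 2 * d * K + 40):
            # cross-multiplied form avoids fractions
            lhs = K * comb(n - K, d - 1) * (n - 2*K - d + 3)
            rhs = comb(n - 1, d - 1) * (n - K - d + 2)
            assert lhs > rhs
```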
The second case needs more work. We first consider the number of ordinary
hyperplanes spanned by a coset of a subgroup of the smooth points $\delta^{*}$
of an elliptic normal curve or a rational acnodal curve. By Propositions 3.1
and 3.9, we can consider $\delta^{*}$ as a group isomorphic to either
$\mathbb{R}/\mathbb{Z}$ or $\mathbb{R}/\mathbb{Z}\times\mathbb{Z}_{2}$. Let
$H\oplus x$ be a coset of a subgroup $H$ of $\delta^{*}$ of order $n$ where
$(d+1)x=\ominus c\in H$. Since $H$ is a subgroup of order $n$ of
$\mathbb{R}/\mathbb{Z}$ or $\mathbb{R}/\mathbb{Z}\times\mathbb{Z}_{2}$, we
have that either $H$ is cyclic, or $H\cong\mathbb{Z}_{n/2}\times\mathbb{Z}_{2}$ when
$n$ is divisible by $4$. The exact group will matter only when we make exact
calculations.
Note that it follows from the group property that any $d$ points on
$\delta^{*}$ span a hyperplane. Also, since any hyperplane intersects
$\delta^{*}$ in $d+1$ points, counting multiplicity, it follows that an
ordinary hyperplane of $H\oplus x$ intersects $\delta^{*}$ in $d$ points,
exactly one of which has multiplicity $2$ and the others multiplicity
$1$. Denote the number of ordered $k$-tuples $(a_{1},\dotsc,a_{k})$ with
distinct $a_{i}\in H$ that satisfy $m_{1}a_{1}\oplus\dotsb\oplus m_{k}a_{k}=c$
by $[m_{1},\dotsc,m_{k};c]$. Then the number of ordinary hyperplanes spanned
by $H\oplus x$ is
$\frac{1}{(d-1)!}[2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-1$ times}}\\!;c].$
(13)
We show that we can always find a value of $c$ for which (13) is at most
$\binom{n-1}{d-1}$.
###### Lemma 5.2.
Let $\delta^{*}$ be an elliptic normal curve or the smooth points of a
rational acnodal curve in $\mathbb{R}\mathbb{P}^{d}$, $d\geqslant 2$. Then any
finite subgroup $H$ of $\delta^{*}$ of order $n$ has a coset $H\oplus x$ with
$(d+1)x\in H$, that spans at most $\binom{n-1}{d-1}$ ordinary hyperplanes.
Furthermore, if $d+1$ and $n$ are coprime, then any such coset spans exactly
$\binom{n-1}{d-1}$ ordinary hyperplanes.
###### Proof.
It suffices to show that there exists $c\in H$ such that the number of
solutions $(a_{1},\dotsc,a_{d})\in H^{d}$ of the equation $2a_{1}\oplus
a_{2}\oplus\dotsb\oplus a_{d}=c$, where $c=\ominus(d+1)x$, is at most
$(d-1)!\binom{n-1}{d-1}$.
Fix $a_{1}$ and consider the substitution $b_{i}=a_{i}-a_{1}$ for
$i=2,\dotsc,d$. Note that $2a_{1}\oplus\dotsb\oplus a_{d}=c$ and
$a_{1},\dots,a_{d}$ are distinct if and only if $b_{2}\oplus\dotsb\oplus
b_{d}=c\ominus(d+1)a_{1}$ and $b_{2},\dots,b_{d}$ are distinct and non-zero.
Let
$A_{c,j}=\left\\{(j,a_{2},\dotsc,a_{d}):2j\oplus a_{2}\oplus\dotsb\oplus
a_{d}=c,\text{$a_{2},\dotsc,a_{d}\in H\setminus\\{j\\}$ distinct}\right\\},$
and let
$B_{k}=\left\\{(b_{2},\dotsc,b_{d}):b_{2}\oplus\dotsb\oplus
b_{d}=k,\text{$b_{2},\dotsc,b_{d}\in H\setminus\\{0\\}$ distinct}\right\\}.$
Then $|A_{c,j}|=|B_{c\ominus(d+1)j}|$, and the number of ordinary hyperplanes
spanned by $H\oplus x$ is
$\frac{1}{(d-1)!}\sum_{j\in H}|A_{c,j}|.$
If $d+1$ is coprime to $n$, then $c\ominus(d+1)j$ runs through all elements of
$H$ as $j$ varies. So we have
$\sum_{j}|B_{c\ominus(d+1)j}|=(n-1)\dotsb(n-d+1)$, hence for all $c$,
$\frac{1}{(d-1)!}\sum_{j\in H}|A_{c,j}|=\binom{n-1}{d-1}.$
If $d+1$ is not coprime to $n$, then $c\ominus(d+1)j$ runs through a coset of
a subgroup of $H$ of size $n/\gcd(d+1,n)$ as $j$ varies. We now have
$\sum_{j\in H}|B_{c\ominus(d+1)j}|=\gcd(d+1,n)\sum_{k\in
c\ominus(d+1)H}|B_{k}|.$
Summing over $c$ gives
$\displaystyle\sum_{c\in H}\sum_{j\in H}|A_{c,j}|$
$\displaystyle=\gcd(d+1,n)\sum_{c\in H}\sum_{k\in c\ominus(d+1)H}|B_{k}|$
$\displaystyle=\gcd(d+1,n)\frac{n}{\gcd(d+1,n)}(n-1)\dotsb(n-d+1)$
$\displaystyle=n(n-1)\dotsb(n-d+1).$
By the pigeonhole principle, there must then exist a $c$ such that
$\frac{1}{(d-1)!}\sum_{j\in H}|A_{c,j}|\leqslant\binom{n-1}{d-1}.\qed$
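For a cyclic group this pigeonhole argument can be confirmed by brute force. The sketch below (ours; $d=3$ and $H=\mathbb{Z}_{7}$ are sample choices with $\gcd(d+1,n)=1$) checks that the number of solutions of $2a_{1}\oplus a_{2}\oplus a_{3}=c$ with distinct $a_{i}$ equals $(d-1)!\binom{n-1}{d-1}$ for every $c$, as the "furthermore" part of Lemma 5.2 asserts.

```python
# Brute-force check of Lemma 5.2 in the coprime case (illustrative).
from itertools import permutations
from math import comb, factorial

d, n = 3, 7                          # gcd(d+1, n) = gcd(4, 7) = 1
counts = [0] * n
for a in permutations(range(n), d):  # ordered tuples of distinct elements
    counts[(2 * a[0] + sum(a[1:])) % n] += 1
# every c gives exactly (d-1)! * binom(n-1, d-1) = 30 solutions
assert all(k == factorial(d - 1) * comb(n - 1, d - 1) for k in counts)
```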
We next want to show that $[2,\\!\\!\overbrace{1,\dotsc,1}^{\text{$d-1$
times}}\\!\\!;c]$ is always very close to $(d-1)!\binom{n-1}{d-1}$,
independent of $c$ or the group $H$. Before that, we prove two simple
properties of $[m_{1},\dotsc,m_{k};c]$.
###### Lemma 5.3.
$[m_{1},\dots,m_{k};c]\leqslant 2m_{k}(k-1)!\binom{n}{k-1}$.
###### Proof.
Consider a solution $(a_{1},\dotsc,a_{k})$ of $m_{1}a_{1}\oplus\dotsb\oplus
m_{k}a_{k}=c$ where all the $a_{i}$ are distinct. We can choose
$a_{1},\dotsc,a_{k-1}$ arbitrarily in $(k-1)!\binom{n}{k-1}$ ways, and $a_{k}$
satisfies the equation $m_{k}a_{k}=c\ominus m_{1}a_{1}\ominus\dotsb\ominus
m_{k-1}a_{k-1}$, which has at most $m_{k}$ solutions if $H=\mathbb{Z}_{n}$ and
at most $2m_{k}$ solutions if $H=\mathbb{Z}_{2}\times\mathbb{Z}_{n/2}$. ∎
###### Lemma 5.4.
We have the recurrence relation
$\displaystyle[m_{1},\dots,m_{k-1},1;c]=(k-1)!\binom{n}{k-1}$
$\displaystyle-[m_{1}+1,m_{2},\dots,m_{k-1};c]$
$\displaystyle-[m_{1},m_{2}+1,m_{3},\dots,m_{k-1};c]$ $\displaystyle-\dotsb$
$\displaystyle-[m_{1},\dots,m_{k-2},m_{k-1}+1;c].$
###### Proof.
We can arbitrarily choose distinct values from $H$ for $a_{1},\dots,a_{k-1}$,
which determines $a_{k}$, and then we have to subtract the number of
$k$-tuples where $a_{k}$ is equal to one of the other $a_{i}$,
$i=1,\dots,k-1$. ∎
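Both lemmas are easy to exercise computationally. The following sketch (ours; $H=\mathbb{Z}_{11}$, $c=4$, and the weights are sample values) computes $[m_{1},\dotsc,m_{k};c]$ by brute force and confirms the recurrence of Lemma 5.4 for $[2,1,1,1;c]$.

```python
# Brute-force check of the recurrence in Lemma 5.4 over Z_n (illustrative).
from itertools import permutations
from math import comb, factorial

def bracket(ms, c, n):
    # [m1,...,mk; c]: ordered tuples of distinct a_i in Z_n with sum m_i*a_i = c
    return sum(1 for a in permutations(range(n), len(ms))
               if sum(m * x for m, x in zip(ms, a)) % n == c)

n, c, ms = 11, 4, [2, 1, 1]          # check [2,1,1,1; c], i.e. k = 4
lhs = bracket(ms + [1], c, n)
rhs = factorial(len(ms)) * comb(n, len(ms)) - sum(
    bracket(ms[:i] + [ms[i] + 1] + ms[i + 1:], c, n) for i in range(len(ms)))
assert lhs == rhs
```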
###### Lemma 5.5.
$[2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-1$
times}}\\!\\!;c]=(d-1)!\left(\binom{n-1}{d-1}+\varepsilon(d,n)\right),$
where
$|\varepsilon(d,n)|=\begin{cases}O\left(2^{-d/2}\binom{n}{(d-1)/2}+\binom{n}{(d-3)/2}\right)&\text{if
$d$ is odd,}\\\
O\left(d2^{-d/2}\binom{n}{d/2-1}+\binom{n}{d/2-2}\right)&\text{if $d$ is
even.}\end{cases}$
###### Proof.
Applying Lemma 5.4 once, we obtain
$[2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-1$
times}}\\!\\!;c]=(d-1)!\binom{n}{d-1}-[3,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-2$
times}}\\!\\!;c]-(d-2)[2,2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-3$
times}}\\!\\!;c].$
Note that at each stage of the recurrence in Lemma 5.4 (as long as it
applies), there are $(d-1)(d-2)\dotsb(d-k)$ terms of length $d-k$, where we
define the _length_ of $[m_{1},\dotsc,m_{k};c]$ to be $k$.
If $d$ is odd, we can continue this recurrence until we reach
$\displaystyle[2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-1$ times}}\\!\\!;c]$
$\displaystyle=(d-1)!\left(\binom{n}{d-1}-\binom{n}{d-2}+\dotsb+(-1)^{(d+1)/2}\binom{n}{(d+1)/2}\right)$
$\displaystyle\qquad+(-1)^{(d-1)/2}R,$
where $R$ is the sum of $(d-1)(d-2)\dotsb(d-(d-1)/2)$ terms of length
$(d+1)/2$. Among these there are
$\frac{\binom{d-1}{2}\binom{d-3}{2}\dotsb\binom{2}{2}}{(\frac{d-1}{2})!}=(d-2)(d-4)\dotsb
3\cdot 1$
terms of the form $[2,\dotsc,2;c]$. We now write $R=A+B$, where $A$ is the
same sum as $R$, except that we replace each occurrence of $[2,\dots,2;c]$ by
$[1,\dots,1;c]$, and
$B:=(d-2)(d-4)\dotsb 3\cdot 1([\underbrace{2,\dotsc,2}_{\text{$\frac{d+1}{2}$
times}};c]-[\\!\underbrace{1,\dotsc,1}_{\text{$\frac{d+1}{2}$ times}}\\!;c]).$
We next bound $A$ and $B$. We apply Lemma 5.4 to each term in $A$, after which
we obtain $(d-1)(d-2)\dotsb(d-(d+1)/2)$ terms of length $(d-1)/2$. Then using
the bound in Lemma 5.3, we obtain
$\displaystyle A$
$\displaystyle=(d-1)!\binom{n}{(d-1)/2}-O\left((d-1)(d-2)\dotsb(d-(d+1)/2)\left(\tfrac{d-3}{2}\right)!\binom{n}{(d-3)/2}\right)$
$\displaystyle=(d-1)!\left(\binom{n}{(d-1)/2}-O\left(\binom{n}{(d-3)/2}\right)\right).$
For $B$, we again use Lemma 5.3 to get
$\displaystyle|B|$ $\displaystyle=O\left((d-2)(d-4)\dotsb 3\cdot
1\left(\frac{d-1}{2}\right)!\binom{n}{(d-1)/2}\right)$
$\displaystyle=O\left((d-2)(d-4)\dotsb 3\cdot 1\cdot
2^{-\frac{d-1}{2}}(d-1)(d-3)\dotsb 4\cdot 2\binom{n}{(d-1)/2}\right)$
$\displaystyle=O\left((d-1)!2^{-\frac{d-1}{2}}\binom{n}{(d-1)/2}\right).$
Thus we obtain
$[2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-1$
times}}\\!\\!;c]=(d-1)!\left(\binom{n}{d-1}-\binom{n}{d-2}+\dotsb+(-1)^{\frac{d+1}{2}}\binom{n}{(d+1)/2}\right)\\\
+(-1)^{\frac{d-1}{2}}(d-1)!\left(\binom{n}{(d-1)/2}-O\left(\binom{n}{(d-3)/2}\right)\right)+(-1)^{\frac{d-1}{2}}B\\\
=(d-1)!\left(\binom{n-1}{d-1}+(-1)^{\frac{d+1}{2}}O\left(\binom{n}{(d-3)/2}\right)\pm
O\left(2^{-\frac{d-1}{2}}\binom{n}{(d-1)/2}\right)\right),$
which finishes the proof for odd $d$.
If $d$ is even, we obtain
$[2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-1$
times}}\\!\\!;c]=(d-1)!\left(\binom{n}{d-1}-\binom{n}{d-2}+\dotsb+(-1)^{\frac{d}{2}+1}\binom{n}{d/2}\right)+(-1)^{d/2}R,$
where $R$ now is the sum of $(d-1)(d-2)\dotsb(d-d/2)$ terms of length $d/2$.
Among these there are
$\frac{(d-1)\binom{d-2}{2}\binom{d-4}{2}\dotsb\binom{2}{2}}{(\frac{d-2}{2})!}+\frac{2\binom{d-1}{3}\binom{d-4}{2}\dotsb\binom{2}{2}}{(\frac{d-4}{2})!}=(d+1)(d-1)\dotsb
7\cdot 5$
terms of the form $[3,2,\dots,2;c]$. Again we write $R=A+B$, where $A$ is the
same sum as $R$, except that each occurrence of $[3,2,\dots,2;c]$ is replaced
by $[1,\dots,1;c]$, and
$B:=(d+1)(d-1)\dotsb 7\cdot
5([3,\\!\\!\underbrace{2,\dotsc,2}_{\text{$\frac{d}{2}-1$
times}}\\!\\!;c]-[\underbrace{1,\dotsc,1}_{\text{$\frac{d}{2}$ times}};c]).$
Similar to the previous case, we obtain
$A=(d-1)!\left(\binom{n}{d/2-1}-O\left(\binom{n}{d/2-2}\right)\right)$
and
$|B|=O\left((d+1)(d-1)\dotsb 7\cdot
5(\tfrac{d}{2}-1)!\binom{n}{d/2-1}\right)=O\left(2^{-d/2}d!\binom{n}{d/2-1}\right),$
which finishes the proof for even $d$. ∎
Computing $[2,\dotsc,2;c]$ and $[3,2,\dotsc,2;c]$ exactly is more subtle and
depends on $c$ and the group $H$. We do not need this for the asymptotic
Theorems 1.2 and 1.3, and will only need to do so when computing exact
extremal values.
To show that a coset is indeed extremal, we first consider the effect of
adding a single point. The case where the point is on the curve is done in
Lemma 5.6, while Lemma 5.7 covers the case where the point is off the curve.
We then obtain a more general lower bound in Lemma 5.8.
###### Lemma 5.6.
Let $\delta^{*}$ be an elliptic normal curve or the smooth points of a
rational acnodal curve in $\mathbb{R}\mathbb{P}^{d}$, $d\geqslant 2$. Suppose
$H\oplus x$ is a coset of a finite subgroup $H$ of $\delta^{*}$ of order $n$,
with $(d+1)x\in H$. Let $p\in\delta^{*}\setminus(H\oplus x)$. Then there are
at least $\binom{n}{d-1}$ hyperplanes through $p$ that meet $H\oplus x$ in
exactly $d-1$ points.
###### Proof.
Take any $d-1$ points $p_{1},\dotsc,p_{d-1}\in H\oplus x$. Suppose that the
(unique) hyperplane through $p,p_{1},\dots,p_{d-1}$ contains another point
$p^{\prime}\in H\oplus x$. Since $p\oplus p_{1}\oplus\dots\oplus p_{d-1}\oplus
p^{\prime}=0$ by Propositions 3.1 and 3.9, we obtain that $p\in H\ominus dx$.
Since $(d+1)x\in H$, we obtain $p\in H\oplus x$, a contradiction. Therefore,
the hyperplane through $p,p_{1},\dots,p_{d-1}$ does not contain any other
point of $H\oplus x$.
It remains to show that if
$\\{p_{1},\dots,p_{d-1}\\}\neq\\{p_{1}^{\prime},\dots,p_{d-1}^{\prime}\\}$
where also $p_{1}^{\prime},\dots,p_{d-1}^{\prime}\in H\oplus x$, then the two
sets span different hyperplanes with $p$. Suppose they span the same
hyperplane. Then $\ominus(p\oplus p_{1}\oplus\dotsb\oplus p_{d-1})$ also lies
on this hyperplane, but not in $H\oplus x$, as shown above. Also,
$p_{i}^{\prime}\notin\\{p_{1},\dots,p_{d-1}\\}$ for some $i$, and then
$p_{1},\dots,p_{d-1},p_{i}^{\prime}$, and $\ominus(p\oplus
p_{1}\oplus\dotsb\oplus p_{d-1})$ are $d+1$ distinct points on a hyperplane,
so their sum is $0$, which implies $p=p_{i}^{\prime}$, a contradiction.
So there are $\binom{n}{d-1}$ hyperplanes through $p$ meeting $H\oplus x$ in
exactly $d-1$ points. ∎
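The group-theoretic core of this argument is easy to test in a model: the identity $H\ominus dx=H\oplus x$ (valid since $(d+1)x\in H$) is what prevents the extra intersection point from landing in the coset. The sketch below (ours; the group of $\delta^{*}$ is modelled by $\mathbb{Z}_{30}$ with sample parameters) checks that for every $p\notin H\oplus x$ the point $\ominus(p\oplus p_{1}\oplus\dotsb\oplus p_{d-1})$ avoids $H\oplus x$.

```python
# Group-theoretic core of Lemma 5.6, modelled in Z_N (illustrative values).
from itertools import combinations

d, n, N = 4, 6, 30
H = {i * (N // n) for i in range(n)}    # subgroup of order n in Z_N
x = 1
coset = {(h + x) % N for h in H}
assert ((d + 1) * x) % N in H           # hypothesis: (d+1)x in H

for p in set(range(N)) - coset:
    for pts in combinations(coset, d - 1):
        q = (-(p + sum(pts))) % N       # the remaining intersection point
        assert q not in coset           # it never lands back in H (+) x
```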
The following lemma generalises [GT13, Lemma 7.7], which states that if
$\delta^{*}$ is an elliptic curve or the smooth points of an acnodal cubic
curve in the plane, $H\oplus x$ is a coset of a finite subgroup of order
$n>10^{4}$, and if $p\notin\delta^{*}$, then there are at least $n/1000$ lines
through $p$ that pass through exactly one element of $H\oplus x$. A naive
generalisation to dimension $3$ would state that if $\delta^{*}$ is an
elliptic or acnodal space quartic curve with a finite subgroup $H$ of
sufficiently large order $n$, and $x\in\delta^{*}$ and $p\notin\delta^{*}$,
then there are $\Omega(n^{2})$ planes through $p$ and exactly two elements of
$H\oplus x$. This statement is false, even if we assume that $4x\in H$ (the
analogous assumption $3x\in H$ is not made in [GT13]), as can be seen from the
following example.
Let $\delta$ be an elliptic quartic curve obtained from the intersection of a
circular cylinder in $\mathbb{R}^{3}$ with a sphere which has centre $c$ on
the axis $\ell$ of the cylinder. Then $\delta$ is symmetric in the plane
through $c$ perpendicular to $\ell$, and we can find a finite subgroup $H$ of
any even order $n$ such that the line through any element of $H$ parallel to
$\ell$ intersects $H$ in two points. If we now choose $p$ to be the point at
infinity on $\ell$, then we obtain that any plane spanned by $p$ and two
points of $H$ not collinear with $p$, intersects $H$ in two more points. Note
that the projection $\pi_{p}$ maps $\delta$ to a conic, so is not generically
one-to-one. The number of such $p$ is bounded by the trisecant lemma (Lemma
2.3). However, as the next lemma shows, a generalisation of [GT13, Lemma 7.7]
holds except that in dimension 3 we have to exclude such points $p$.
###### Lemma 5.7.
Let $\delta$ be an elliptic normal curve or a rational acnodal curve in
$\mathbb{R}\mathbb{P}^{d}$, $d\geqslant 2$, and let $\delta^{*}$ be its set of
smooth points. Let $H$ be a finite subgroup of $\delta^{*}$ of order $n$,
where $n\geqslant Cd^{4}$ for some sufficiently large absolute constant $C>0$.
Let $x\in\delta^{*}$ satisfy $(d+1)x\in H$. Let
$p\in\mathbb{R}\mathbb{P}^{d}\setminus\delta^{*}$. If $d=3$, assume
furthermore that $\delta$ is not contained in a quadric cone with vertex $p$.
Then there are at least $c\binom{n}{d-1}$ hyperplanes through $p$ that meet
the coset $H\oplus x$ in exactly $d-1$ points, for some sufficiently small
absolute constant $c>0$.
###### Proof.
We prove by induction on $d$ that under the given hypotheses there are at
least $c^{\prime}\prod_{i=2}^{d}(1-\frac{1}{i^{2}})\binom{n}{d-1}$ such
hyperplanes for some sufficiently small absolute constant $c^{\prime}>0$. The
base case $d=2$ is given by [GT13, Lemma 7.7].
Next assume that $d\geqslant 3$, and that the statement holds for $d-1$. Fix a
$q\in H\oplus x$, and consider the projection $\pi_{q}$. Since $q$ is a smooth
point of $\delta$, $\overline{\pi_{q}(\delta\setminus\\{q\\})}$ is a non-
degenerate curve of degree $d$ in $\mathbb{R}\mathbb{P}^{d-1}$ (otherwise its
degree would be at most $d/2$, but a non-degenerate curve has degree at least
$d-1$). The projection $\pi_{q}$ can be naturally extended to have a value at
$q$, by setting $\pi_{q}(q)$ to be the point where the tangent line of
$\delta$ at $q$ intersects the hyperplane onto which $\delta$ is projected.
(This point is the single point in
$\overline{\pi_{q}(\delta\setminus\\{q\\})}\setminus\pi_{q}(\delta\setminus\\{q\\})$.)
The curve $\pi_{q}(\delta)$ has degree $d$ and is either elliptic or rational
and acnodal, hence it has a group operation $\boxplus$ such that $d$ points
are on a hyperplane in $\mathbb{R}\mathbb{P}^{d-1}$ if and only if they sum to
the identity.
Observe that any $d$ points
$\pi_{q}(p_{1}),\dots,\pi_{q}(p_{d})\in\pi_{q}(\delta^{*})$ lie on a
hyperplane in $\mathbb{R}\mathbb{P}^{d-1}$ if and only if
$p_{1}\oplus\dots\oplus p_{d}\oplus q=0$. By Proposition 3.10 it follows that
the group on $\pi_{q}(\delta^{*})$ obtained by transferring the group
$(\delta^{*},\oplus)$ by $\pi_{q}$ is a translation of
$(\pi_{q}(\delta^{*}),\boxplus)$. In particular, $\pi_{q}(H\oplus
x)=H^{\prime}\boxplus x^{\prime}$ for some subgroup $H^{\prime}$ of
$(\pi_{q}(\delta^{*}),\boxplus)$ of order $n$, and $(d+1)x^{\prime}\in
H^{\prime}$.
We would like to apply the induction hypothesis, but we can only do that if
$\pi_{q}(p)\notin\pi_{q}(\delta^{*})$, and when $d=4$, if $\pi_{q}(p)$ is not
the vertex of a quadric cone containing $\pi_{q}(\delta)$. We next show that
there are only $O(d^{2})$ exceptional points $q$ to which we cannot apply
induction.
Note that $\pi_{q}(p)\in\pi_{q}(\delta^{*})$ if and only if the line $pq$
intersects $\delta$ with multiplicity $2$, which means we have to bound the
number of these lines through $p$. To this end, we consider the projection of
$\delta$ from the point $p$. Suppose that $\pi_{p}$ does not project $\delta$
generically one-to-one to a degree $d+1$ curve in
$\mathbb{R}\mathbb{P}^{d-1}$. Then $\pi_{p}(\delta)$ has degree at most
$(d+1)/2$. However, its degree is at least $d-1$ because it is non-degenerate.
It follows that $d=3$, and that $\pi_{p}(\delta)$ has degree $2$ and is
irreducible, so $\delta$ is contained in a quadric cone with vertex $p$, which
we ruled out by assumption.
Therefore, $\pi_{p}$ projects $\delta$ generically one-to-one onto the curve
$\pi_{p}(\delta)$, which has degree $d+1$ and has at most $\binom{d}{2}$
double points (this follows from the Plücker formulas after projecting to the
plane [W78, Chapter III, Theorem 4.4]). We thus have that an arbitrary point
$p\in\mathbb{R}\mathbb{P}^{d}\setminus\delta$ lies on at most $O(d^{2})$
secants or tangents of $\delta$ (or lines through two points of $\delta^{*}$
if $p$ is the acnode of $\delta$).
If $d=4$, we also have to avoid $q$ such that $\pi_{q}(p)$ is the vertex of a
cone on which $\pi_{q}(\delta)$ lies. Such $q$ have the property that if we
first project $\delta$ from $q$ and then $\pi_{q}(\delta)$ from $\pi_{q}(p)$,
then the composition of these two projections is not generically one-to-one.
Another way to do these two successive projections is to first project $\delta$
from $p$ and then $\pi_{p}(\delta)$ from $\pi_{p}(q)$. Thus, we have that
$\pi_{p}(q)$ is a point on the quintic $\pi_{p}(\delta)$ in
$\mathbb{R}\mathbb{P}^{3}$ such that the projection of $\pi_{p}(\delta)$ from
$\pi_{p}(q)$ onto $\mathbb{R}\mathbb{P}^{2}$ is not generically one-to-one.
However, there are only $O(1)$ such points by Lemma 2.3. Thus there are at
most $Cd^{2}$ points $q\in H\oplus x$ to which we cannot apply the induction
hypothesis.
For all remaining $q\in H\oplus x$, we obtain by the induction hypothesis that
there are at least
$c^{\prime}\prod_{i=2}^{d-1}(1-\frac{1}{i^{2}})\binom{n}{d-2}$ hyperplanes
$\Pi$ in $\mathbb{R}\mathbb{P}^{d-1}$ through $\pi_{q}(p)$ and exactly $d-2$
points of $H^{\prime}\boxplus x^{\prime}$. If none of these $d-2$ points equal
$\pi_{q}(q)$, then $\pi_{q}^{-1}(\Pi)$ is a hyperplane in
$\mathbb{R}\mathbb{P}^{d}$ through $p$ and $d-1$ points of $H\oplus x$, one of
which is $q$. There are at most $\binom{n-1}{d-3}$ such hyperplanes in
$\mathbb{R}\mathbb{P}^{d-1}$ through $\pi_{q}(q)$. Therefore, there are at
least
$c^{\prime}\prod_{i=2}^{d-1}(1-\frac{1}{i^{2}})\binom{n}{d-2}-\binom{n-1}{d-3}$
hyperplanes in $\mathbb{R}\mathbb{P}^{d}$ that pass through $p$ and exactly
$d-1$ points of $H\oplus x$, one of them being $q$. If we sum over all
$n-Cd^{2}$ points $q$, we count each hyperplane $d-1$ times, and we obtain
that the total number of such hyperplanes is at least
$\frac{n-Cd^{2}}{d-1}\left(c^{\prime}\prod_{i=2}^{d-1}\left(1-\frac{1}{i^{2}}\right)\binom{n}{d-2}-\binom{n-1}{d-3}\right).$
(14)
It can easily be checked that
$\frac{n-Cd^{2}}{d-1}\binom{n}{d-2}\geqslant\left(1-\frac{1}{2d^{2}}\right)\binom{n}{d-1}$
(15)
if $n>2Cd^{4}$, and that
$c^{\prime}\prod_{i=2}^{d-1}\left(1-\frac{1}{i^{2}}\right)\frac{1}{2d^{2}}\binom{n}{d-1}\geqslant\frac{n-Cd^{2}}{d-1}\binom{n-1}{d-3}$
(16)
if $n>4d^{3}/c^{\prime}$. It now follows from (15) and (16) that the
expression (14) is at least
$c^{\prime}\prod_{i=2}^{d}\left(1-\frac{1}{i^{2}}\right)\binom{n}{d-1},$
which finishes the induction. ∎
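Inequality (15) is indeed routine to check; for instance, fixing the unspecified absolute constant at the sample value $C=1$ (an assumption of ours purely for illustration), the following sketch verifies it in exact arithmetic for small $d$ and $n>2Cd^{4}$.

```python
# Spot check of inequality (15) with the assumed sample choice C = 1.
from fractions import Fraction
from math import comb

C = 1
for d in range(3, 9):
    for n in range(2 * C * d**4 + 1, 2 * C * d**4 + 50):
        lhs = Fraction(n - C * d * d, d - 1) * comb(n, d - 2)
        rhs = (1 - Fraction(1, 2 * d * d)) * comb(n, d - 1)
        assert lhs >= rhs
```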
###### Lemma 5.8.
Let $\delta^{*}$ be an elliptic normal curve or the smooth points of a
rational acnodal curve in $\mathbb{R}\mathbb{P}^{d}$, $d\geqslant 4$, and let
$H\oplus x$ be a coset of a finite subgroup $H$ of $\delta^{*}$, with
$(d+1)x\in H$. Let $A\subseteq H\oplus x$ and
$B\subset\mathbb{R}\mathbb{P}^{d}\setminus(H\oplus x)$ with $|A|=a$ and
$|B|=b$. Let $P=(H\oplus x\setminus A)\cup B$ with $|P|=n$ be such that every
$d$ points of $P$ span a hyperplane. If $A$ and $B$ are not both empty and
$n\geqslant C(a+b+d^{2})d$ for some sufficiently large absolute constant
$C>0$, then $P$ spans at least $(1+c)\binom{n-1}{d-1}$ ordinary hyperplanes
for some sufficiently small absolute constant $c>0$.
###### Proof.
We first bound from below the number of ordinary hyperplanes of $(H\oplus
x)\setminus A$ that do not pass through a point of $B$.
The number of ordinary hyperplanes of $(H\oplus x)\setminus A$ that are
disjoint from $A$ is
$\frac{1}{(d-1)!}\left|\left\\{(a_{1},\dots,a_{d})\in(H\setminus(A\ominus
x))^{d}:\begin{array}[]{c}2a_{1}\oplus a_{2}\oplus\dotsb\oplus
a_{d}=\ominus(d+1)x,\\\ \text{$a_{1},\dots,a_{d}$ are
distinct}\end{array}\right\\}\right|.$
If we denote by $[m_{1},\dotsc,m_{k}]^{\prime}$ the number of ordered
$k$-tuples $(a_{1},\dotsc,a_{k})$ with distinct $a_{i}\in H\setminus(A\ominus
x)$ that satisfy $m_{1}a_{1}\oplus\dotsb\oplus m_{k}a_{k}=\ominus(d+1)x$, then
we obtain, similar to the proofs of Lemmas 5.3 and 5.4, that
$\displaystyle[2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-1$
times}}\\!]^{\prime}$
$\displaystyle=(d-1)!\binom{n-b}{d-1}-[3,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-2$
times}}\\!]^{\prime}-(d-2)[2,2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-3$
times}}\\!]^{\prime}$
$\displaystyle\geqslant(d-1)!\binom{n-b}{d-1}-2(d-2)!\binom{n-b}{d-2}-2(d-2)(d-2)!\binom{n-b}{d-2}$
$\displaystyle=(d-1)!\binom{n-b}{d-1}-2(d-1)!\binom{n-b}{d-2},$
and it follows that the number of ordinary hyperplanes of $(H\oplus
x)\setminus A$ disjoint from $A$ is at least
$\binom{n-b}{d-1}-2\binom{n-b}{d-2}$.
Next, we obtain an upper bound on the number of these hyperplanes that pass
through a point $q\in B$. Let the ordinary hyperplane $\Pi$ pass through
$p_{1},p_{2},\dots,p_{d}\in(H\oplus x)\setminus A$, with $p_{1}$ being the
double point. Since $q\in\Pi$ and any $d$ points determine a hyperplane, $\Pi$
is still spanned by $q,p_{1},\dots,p_{d-1}$, after a relabelling of
$p_{2},\dots,p_{d}$. Let $S$ be a minimal subset of
$\\{p_{2},\dots,p_{d-1}\\}$ such that the tangent line $\ell$ of $\delta$ at
$p_{1}$ lies in the flat spanned by $S\cup\\{q,p_{1}\\}$.
If $S$ is empty, then $\ell$ is a tangent from $q$ to $\delta$, of which there
are at most $d(d+1)$ (this follows again from projection and the Plücker
formulas [W78, Chapter IV, p. 117] and [NZ, Corollary 2.5]). Therefore, the number of
ordinary hyperplanes through $p_{1},p_{2},\dots,p_{d}\in(H\oplus x)\setminus
A$ with the tangent of $\delta$ at $p_{1}$ passing through $q$ is at most
$d(d+1)\binom{n-b}{d-2}$.
If on the other hand $S$ is non-empty, then there is some $p_{i}$, say
$p_{d-1}$, such that $q,p_{1},\dots,p_{d-2}$ together with $\ell$ generate
$\Pi$. Therefore, $\Pi$ is determined by $p_{1}$, the tangent through $p_{1}$,
and some $d-3$ more points $p_{i}$. There are at most
$(n-b)\binom{n-b-1}{d-3}=(d-2)\binom{n-b}{d-2}$ ordinary hyperplanes through
$q$ in this case.
The number of ordinary hyperplanes of $(H\oplus x)\setminus A$ that contain a
point from $A$ is at least
$a\left(\binom{n-b}{d-1}-a\binom{n-b}{d-2}-(n-b)\binom{n-b-1}{d-3}\right)=a\binom{n-b}{d-1}-(a^{2}+a(d-2))\binom{n-b}{d-2},$
since we can find such a hyperplane by choosing a point $p\in A$ and $d-1$
points $p_{1},\dots,p_{d-1}\in(H\oplus x)\setminus A$, and then the remaining
point $\ominus(p\oplus p_{1}\oplus\dots\oplus p_{d-1})$ might not be a new
point in $(H\oplus x)\setminus A$ by either being in $A$ (possibly equal to
$p$) or being equal to one of the $p_{i}$. The number of these hyperplanes
that also pass through some point of $B$ is at most $ab\binom{n-b}{d-2}$.
Therefore, the number of ordinary hyperplanes of $(H\oplus x)\setminus A$ that
miss $B$ is at least
$(1+a)\binom{n-b}{d-1}-\left(2+b(d(d+1)+d-2)+a^{2}+a(d-2)+ab\right)\binom{n-b}{d-2}.$
(17)
Next, assuming that $B\neq\emptyset$, we find a lower bound to the number of
ordinary hyperplanes through exactly one point of $B$ and exactly $d-1$ points
of $(H\oplus x)\setminus A$. The number of hyperplanes through at least one
point of $B$ and exactly $d-1$ points of $(H\oplus x)\setminus A$ is at least
$bc^{\prime}\binom{n-b}{d-1}-ab\binom{n-b}{d-2}$ by Lemmas 5.6 and 5.7 for
some sufficiently small absolute constant $c^{\prime}>0$. The number of
hyperplanes through at least two points of $B$ and exactly $d-1$ points of
$(H\oplus x)\setminus A$ is at most $\binom{b}{2}\binom{n-b}{d-2}$. It follows
that there are at least
$bc^{\prime}\binom{n-b}{d-1}-\bigl{(}ab+\binom{b}{2}\bigr{)}\binom{n-b}{d-2}$
ordinary hyperplanes passing through a point of $B$.
Combining this with (17), $P$ spans at least
$(1+a+bc^{\prime})\binom{n-b}{d-1}-\left(2+b(d(d+1)+d-2)+a^{2}+a(d-2)+2ab+\binom{b}{2}\right)\binom{n-b}{d-2}=:f(a,b)$
ordinary hyperplanes. Since
$f(a+1,b)-f(a,b)=\binom{n-b}{d-1}-(2a+2b+d-1)\binom{n-b}{d-2}$
is easily seen to be positive for all $a\geqslant 0$ as long as
$n>(2a+2b+d-1)(d-1)+b+d-2$, we have without loss of generality that $a=0$ in
the case that $b\geqslant 1$. Then $f(0,b+1)-f(0,b)$ is easily seen to be at
least
$c^{\prime}\binom{n-b-1}{d-1}-(d^{2}+d-2+b)\binom{n-b-1}{d-2},$
which is positive for all $b\geqslant 1$ if $n\geqslant C(b+d^{2})d$ for $C$
sufficiently large. Also,
$f(0,1)=(1+c^{\prime})\binom{n-1}{d-1}-(d^{2}+2d)\binom{n-1}{d-2}\geqslant(1+c)\binom{n-1}{d-1}$
if $n\geqslant Cd^{3}$. This completes the proof in the case where $B$ is non-
empty.
If $B$ is empty, then we can bound the number of ordinary hyperplanes from
below by setting $b=0$ in (17), and checking that the resulting expression
$(1+a)\binom{n}{d-1}-\left(d+a^{2}+a(d-2)\right)\binom{n}{d-2}$
is increasing in $a$ if $n>(2a+d-1)(d-1)+d-2$, and larger than
$\frac{3}{2}\binom{n-1}{d-1}$ if $n>Cd^{3}$. ∎
We are now ready to prove Theorems 1.2 and 1.3.
###### Proof of Theorem 1.2.
Let $P$ be the set of $n$ points. By Lemma 5.2, we may assume that $P$ has at
most $\binom{n-1}{d-1}$ ordinary hyperplanes. Since $n\geqslant Cd^{3}2^{d}$,
we may apply Theorem 1.1 to obtain that up to $O(d2^{d})$ points, $P$ lies in
a hyperplane or is a coset of a subgroup of an elliptic normal curve or the
smooth points of a rational acnodal curve.
In the first case, by Lemma 5.1, since $n\geqslant Cd^{3}2^{d}$, the minimum
number of ordinary hyperplanes is attained when all but one point is contained
in a hyperplane and we get exactly $\binom{n-1}{d-1}$ ordinary hyperplanes.
In the second case, by Lemma 5.8, again since $n\geqslant Cd^{3}2^{d}$, the
minimum number of ordinary hyperplanes is attained by a coset of an elliptic
normal curve or the smooth points of a rational acnodal curve. Lemmas 5.2 and
5.5 then complete the proof. Note that the second term in the error term of
Lemma 5.5 is dominated by the first term because of the lower bound on $n$,
and that the error term here is negative by Lemma 5.2. ∎
Note that if we want to find the exact minimum number of ordinary hyperplanes
spanned by a set of $n$ points in $\mathbb{R}\mathbb{P}^{d}$, $d\geqslant 4$,
not contained in a hyperplane and where every $d$ points span a hyperplane, we
can continue with the calculation of $[2,1,\dotsc,1;c]$ in the proof of Lemma
5.5. As seen in the proof of Lemma 5.2, this depends on $\gcd(d+1,n)$. We also
have to minimise over different values of $c\in H$, and if $n\equiv
0\pmod{4}$, consider both cases $H\cong\mathbb{Z}_{n}$ and
$H\cong\mathbb{Z}_{n/2}\times\mathbb{Z}_{2}$.
For example, it can be shown that if $d=4$, the minimum number is
$\begin{cases}\binom{n-1}{3}-4&\text{if }n\equiv 0\pmod{5},\\\
\binom{n-1}{3}&\text{otherwise},\end{cases}$
if $d=5$, the minimum number is
$\begin{cases}\binom{n-1}{4}-\frac{1}{8}n^{2}+\frac{1}{12}n-1&\text{if
}n\equiv 0\pmod{6},\\\ \binom{n-1}{4}&\text{if }n\equiv 1,5\pmod{6},\\\
\binom{n-1}{4}-\frac{1}{8}n^{2}+\frac{3}{4}n-1&\text{if }n\equiv
2,4\pmod{6},\\\ \binom{n-1}{4}-\frac{2}{3}n+2&\text{if }n\equiv
3\pmod{6},\end{cases}$
and if $d=6$, the minimum number is
$\begin{cases}\binom{n-1}{5}-6&\text{if }n\equiv 0\pmod{7},\\\
\binom{n-1}{5}&\text{otherwise.}\end{cases}$
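These values can be cross-checked against the coset count (13). For instance, the following sketch (ours; $d=4$ and $H=\mathbb{Z}_{10}$, so $5\mid n$) computes $[2,1,1,1;c]/3!$ for every $c$ by brute force and confirms that the minimum is $\binom{n-1}{3}-4$.

```python
# Brute-force cross-check of the d = 4 minimum via the coset count (13)
# over H = Z_10 (illustrative).
from itertools import permutations
from math import comb, factorial

d, n = 4, 10
counts = [0] * n
for a in permutations(range(n), d):   # ordered distinct 4-tuples
    counts[(2 * a[0] + sum(a[1:])) % n] += 1
ordinary = [k // factorial(d - 1) for k in counts]   # (13) for each c
assert min(ordinary) == comb(n - 1, 3) - 4           # 84 - 4 = 80
```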
###### Proof of Theorem 1.3.
We first show that there exist sets of $n$ points, with every $d$ points
spanning a hyperplane, spanning at least
$\frac{1}{d+1}\binom{n-1}{d}+O\left(2^{-d/2}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right)$
$(d+1)$-point hyperplanes. Let $\delta^{*}$ be an elliptic normal curve or the
smooth points of a rational acnodal curve. By Propositions 3.1 and 3.9, the
number of $(d+1)$-point hyperplanes spanned by a coset $H\oplus x$ of
$\delta^{*}$ is
$\frac{1}{(d+1)!}[\\!\underbrace{1,\dotsc,1}_{\text{$d+1$ times}}\\!;c]$
for some $c\in\delta^{*}$. Note that
$[\\!\underbrace{1,\dotsc,1}_{\text{$d+1$
times}}\\!;c]=d!\binom{n}{d}-d[2,\\!\\!\underbrace{1,\dotsc,1}_{\text{$d-1$
times}}\\!\\!;c],$
so if we take $H\oplus x$ to be a coset minimising the number of ordinary
hyperplanes, then by Theorem 1.2, there are
$\displaystyle\mathrel{\phantom{=}}\frac{1}{d+1}\left(\binom{n}{d}-\binom{n-1}{d-1}\right)+O\left(2^{-\frac{d}{2}}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right)$
$\displaystyle=\frac{1}{d+1}\binom{n-1}{d}+O\left(2^{-\frac{d}{2}}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right)$
(18)
$(d+1)$-point hyperplanes.
Next let $P$ be an arbitrary set of $n$ points in $\mathbb{R}\mathbb{P}^{d}$,
$d\geqslant 4$, where every $d$ points span a hyperplane. Suppose $P$ spans
the maximum number of $(d+1)$-point hyperplanes. Without loss of generality,
we can thus assume $P$ spans at least
$\frac{1}{d+1}\binom{n-1}{d}+O\left(2^{-d/2}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right)$
$(d+1)$-point hyperplanes.
Let $m_{i}$ denote the number of $i$-point hyperplanes spanned by $P$.
Counting the number of unordered $d$-tuples, we get
$\binom{n}{d}=\sum_{i\geqslant d}\binom{i}{d}m_{i}\geqslant
m_{d}+(d+1)m_{d+1},$
hence we have
$m_{d}\leqslant\binom{n}{d}-\binom{n-1}{d}-O\left(d2^{-\frac{d}{2}}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right)=O\left(\binom{n-1}{d-1}\right),$
and we can apply Theorem 1.1.
In the case where all but $O(d2^{d})$ points of $P$ are contained in a
hyperplane, it is easy to see that $P$ spans $O(d2^{d}\binom{n}{d-1})$
$(d+1)$-point hyperplanes, contradicting the assumption.
So all but $O(d2^{d})$ points of $P$ are contained in a coset $H\oplus x$ of a
subgroup $H$ of $\delta^{*}$. Consider the identity
$(d+1)m_{d+1}=\binom{n}{d}-m_{d}-\sum_{i\geqslant d+2}\binom{i}{d}m_{i}.$
By Theorem 1.2 and Lemma 5.8, we know that
$m_{d}\geqslant\binom{n-1}{d-1}-O\left(d2^{-d/2}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right)$
and any deviation of $P$ from the coset $H\oplus x$ adds at least
$c\binom{n-1}{d-1}$ ordinary hyperplanes for some sufficiently small absolute
constant $c>0$. Since we also have
$\displaystyle\sum_{i\geqslant d+2}\binom{i}{d}m_{i}$
$\displaystyle=\binom{n}{d}-m_{d}-(d+1)m_{d+1}$
$\displaystyle=\binom{n}{d}-\binom{n-1}{d-1}-\binom{n-1}{d}+O\left(d2^{-\frac{d}{2}}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right)$
$\displaystyle=O\left(d2^{-\frac{d}{2}}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right),$
we can conclude that $m_{d+1}$ is maximised when $P$ is exactly a coset of a
subgroup of $\delta^{*}$, in which case (18) completes the proof. ∎
Knowing the exact minimum number of ordinary hyperplanes spanned by a set of
$n$ points in $\mathbb{R}\mathbb{P}^{d}$, $d\geqslant 4$, not contained in a
hyperplane and where every $d$ points span a hyperplane then also gives the
exact maximum number of $(d+1)$-point hyperplanes.
Continuing the above examples, for $d=4$, the maximum number is
$\begin{cases}\frac{1}{5}\binom{n-1}{4}+\frac{4}{5}&\text{if }n\equiv
0\pmod{5},\\\ \frac{1}{5}\binom{n-1}{4}&\text{otherwise},\end{cases}$
for $d=5$, the maximum number is
$\begin{cases}\frac{1}{6}\binom{n-1}{5}+\frac{1}{48}n^{2}-\frac{1}{72}n+\frac{1}{6}&\text{if
}n\equiv 0\pmod{6},\\\ \frac{1}{6}\binom{n-1}{5}&\text{if }n\equiv
1,5\pmod{6},\\\
\frac{1}{6}\binom{n-1}{5}+\frac{1}{48}n^{2}-\frac{1}{8}n+\frac{1}{6}&\text{if
}n\equiv 2,4\pmod{6},\\\
\frac{1}{6}\binom{n-1}{5}+\frac{1}{9}n-\frac{1}{3}&\text{if }n\equiv
3\pmod{6},\end{cases}$
and for $d=6$, the maximum number is
$\begin{cases}\frac{1}{7}\binom{n-1}{6}+\frac{6}{7}&\text{if }n\equiv
0\pmod{7},\\\ \frac{1}{7}\binom{n-1}{6}&\text{otherwise}.\end{cases}$
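The maximum counts can be checked in the same way; the sketch below (again ours, with arbitrarily chosen test values of $n$) confirms that each case yields an integer.

```python
from fractions import Fraction
from math import comb

def max_d_plus_one_point_hyperplanes(d, n):
    """Maximum number of (d+1)-point hyperplanes for d = 4, 5, 6,
    per the piecewise formulas above (valid for sufficiently large n)."""
    total = Fraction(comb(n - 1, d), d + 1)
    if d == 4 and n % 5 == 0:
        total += Fraction(4, 5)
    elif d == 5:
        r = n % 6
        if r == 0:
            total += Fraction(n * n, 48) - Fraction(n, 72) + Fraction(1, 6)
        elif r in (2, 4):
            total += Fraction(n * n, 48) - Fraction(n, 8) + Fraction(1, 6)
        elif r == 3:
            total += Fraction(n, 9) - Fraction(1, 3)
    elif d == 6 and n % 7 == 0:
        total += Fraction(6, 7)
    assert total.denominator == 1  # each case is an integer
    return int(total)

for n in range(30, 37):
    print(n, [max_d_plus_one_point_hyperplanes(d, n) for d in (4, 5, 6)])
```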
## Acknowledgments
We thank Peter Allen, Alex Fink, Misha Rudnev, and an anonymous referee for
helpful remarks and for pointing out errors in a previous version.
Aaron Lin
Department of Mathematics
London School of Economics and Political Science
United Kingdom
aaronlinhk@gmail.com

Konrad Swanepoel
Department of Mathematics
London School of Economics and Political Science
United Kingdom
kswanepoel@lse.ac.uk
http://personal.lse.ac.uk/swanepoe/
# Constraining Mass Transfer Models with Galactic Neutron Star$-$White Dwarf
Binaries as Gravitational Wave Sources
Jian-Guo He1,2, Yong Shao1,2, Xiao-Jie Xu1,2, Xiang-Dong Li1,2
1Department of Astronomy, Nanjing University, Nanjing 210023, People’s
Republic of China
2Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University,
Ministry of Education, Nanjing 210023, People’s Republic of China E-mail:
<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Neutron star$-$white dwarf (NSWD) binaries are one of the most abundant
sources of gravitational waves (GW) in the Milky Way. These GW sources are the
evolutionary products of primordial binaries that experienced many processes
of binary interaction. We employ a binary population synthesis method to
investigate the properties of Galactic NSWD binaries detectable by the Laser
Interferometer Space Antenna (LISA). In this paper, only the NSWD systems with
a COWD or ONeWD component are included. We consider various models related to
mass transfer efficiencies during primordial binary evolution, supernova
explosion mechanisms at NS formation, common envelope ejection efficiencies,
and critical WD masses that determine the stability of mass transfer between
WDs and NSs. Based on our calculations, we estimate that tens to hundreds of
LISA NSWD binaries exist in the Milky Way. We find that the detection of LISA
NSWD binaries is able to provide profound insights into mass transfer
efficiencies during the evolution of primordial binaries and critical WD
masses during mass transfer from a WD to an NS.
###### keywords:
gravitational waves – binaries: general – stars: neutron – stars: white dwarf
– stars: evolution
## 1 Introduction
In recent years, ground-based gravitational wave (GW) detectors such as LIGO
and Virgo have identified nearly one hundred mergers of double compact objects
(Abbott et al., 2023), primarily composed of black holes (BHs), since the
groundbreaking detection of GW150914 (Abbott et al., 2016). Among these
mergers, two (GW170817 and GW190425) originate from double neutron star (NS)
systems. To date, no event involving the merger of an NS and a white dwarf
(WD) has been confirmed. The detection of GW signals from NSWD binaries is
able to help resolve some astrophysical problems, including the stability of
mass transfer between WDs and NSs (e.g., Verbunt & Rappaport, 1988; Bobrick et
al., 2017), the equation of state of NS matter (Tauris, 2018), the possible
origins of ultra-compact X-ray binaries (UCXBs, van Haaften et al., 2013; Wang
et al., 2021), repeating fast radio bursts (Gu et al., 2016), faint type Iax
supernovae (Bobrick et al., 2022), and peculiar gamma-ray bursts (e.g., Yang
et al., 2022; Kaltenborn et al., 2023). Future space-based GW detectors such
as LISA (Amaro-Seoane et al., 2017) and TianQin (Luo et al., 2016) are
promising to detect these GW signals in the mHz band.
It is expected that the Milky Way hosts hundreds of NSWD systems observable by
LISA (Amaro-Seoane et al., 2023). An early work by Nelemans et al. (2001b)
indicated that LISA may detect several hundred Galactic NSWD binaries.
Based on the observations of close binary radio millisecond pulsars, Tauris
(2018) inferred that at least a hundred NSWD binaries with helium WDs (HeWDs)
could be detected as LISA sources in the Milky Way. In addition, Chen et al.
(2020) estimated the existence of approximately 60$-$80 interacting LISA
NS$-$HeWD systems in the Galactic field. More recently, Korol et al. (2023)
suggested that around 80$-$350 detached Galactic NSWDs are detectable by LISA
over its 4-year duration, depending on models related to the kick velocities
of natal NSs and the treatments of common envelope (CE) evolution. Outside the
Milky Way, LISA is likely to detect about $1-6$ NSWD systems in M31 for a
10-year mission (He et al., 2023). In galaxies beyond the local group, GW
signals from merging NSWD binaries are challenging to observe due to their
large distances and limited chirp masses (Amaro-Seoane et al., 2023), unless
future next-generation detectors such as DO-OPT and DEC come into operation
(Kang et al., 2024).
Based on the canonical channels with isolated binary evolution, close NSWD
systems are expected to be the descendants of low-mass X-ray binaries that
experienced stable Roche lobe overflow (RLOF) or intermediate-mass X-ray
binaries that experienced CE evolution (Tauris & van den Heuvel, 2023). The
former channel always results in the formation of HeWDs, while the latter
tends to produce more massive WDs, i.e., carbon-oxygen WDs (COWDs) or oxygen-
neon WDs (ONeWDs). In some cases, it is possible that mass transfer during the
progenitor evolution of NSWD binaries can effect a reversal of the end states
of the two components, resulting in a WD that forms before an NS (e.g.,
Nelemans et al., 2001b). Observations of detached NSWD systems with orbital
periods of $<0.4$ days and HeWD masses of $<0.2M_{\odot}$ pose a challenge to
the RLOF channel since the formation of these binaries is very sensitive to
various factors such as initial NS masses, NS’s accretion efficiencies, and
magnetic braking mechanisms (Istrate et al., 2014; Chen et al., 2020). As the
orbits of these detached NSWD binaries significantly shrink due to GW
radiation, they are likely to become detectable GW sources. Subsequently,
these systems evolve to be UCXBs when the WD companion fills its Roche lobe
and transfers material to the NS. It has been shown that the duration
approximately one million years before and after the onset of UCXBs represents
a GW detection window (Tauris, 2018), indicating an overlap in the
evolutionary pathways of NSWD binaries as LISA sources and UCXBs. In recent
investigations on Galactic NSWD binaries as GW sources (Tauris, 2018; Chen et
al., 2020; Korol et al., 2023), UCXBs with HeWDs and detached systems with
HeWDs/COWDs/ONeWDs have been considered. It is possible that UCXBs with
COWDs/ONeWDs also contribute to the population of GW sources if a massive WD
can stably transfer matter to an NS.
The stability of mass transfer between WDs and NSs has been extensively
studied but remains uncertain. The traditional jet-only model (Verbunt &
Rappaport, 1988) suggests a critical WD mass of approximately $0.5M_{\odot}$
for stable mass transfer. Later, the isotropic re-emission mass-transfer
assumption gives a limit of around $0.4M_{\odot}$ for WD masses (Yungelson et
al., 2002; van Haaften et al., 2012), once the inability to eject a
sufficient amount of the transferred matter from the system is taken into
account, which further reduces the stability of mass transfer.
with WDs of masses $>0.38M_{\odot}$ lead to unstable mass transfer and merge,
van Haaften et al. (2013) pointed out that most UCXBs consist of an HeWD
component. Based on hydrodynamic simulations, Bobrick et al. (2017) revealed a
lower critical WD mass of $0.2M_{\odot}$ when considering the effect of disc
winds that developed at highly super-Eddington mass-transfer rates. In this
case, only NSWD systems containing an HeWD can evolve into UCXBs. Using the
same method, Church et al. (2017) obtained a critical value of approximately
$0.15-0.25M_{\odot}$ for WD masses, depending on the assumptions of initial
input parameters and specific physical models. However, in contrast to these
results, Chen et al. (2022) suggested that all NS$-$HeWD binaries with WD
masses of $0.17-0.45M_{\odot}$ are expected to undergo stable mass transfer
when using the detailed stellar evolution code MESA, which takes into account
the realistic structure of WDs. Chen et al. (2022) also demonstrated that the
stability of mass transfer from an HeWD to an NS is independent of NS’s mass
and its mass-accretion efficiency that characterizes the fraction of
transferred matter accreted by the NS. In a different approach, Yu et al.
(2021) developed a code to investigate the orbital evolution of mass-
transferring NSWD binaries and showed that the majority of these systems
experience stable mass transfer. They found that the maximum WD mass can reach
approximately $1.25-1.28M_{\odot}$ for stable mass transfer, beyond which
NSWDs directly merge when mass transfer begins.
In this paper, we provide a comprehensive study on the characteristic
distribution of Galactic NSWD binaries as GW sources. Using a binary
population synthesis (BPS, see a review by Han et al., 2020) method, we
consider the effect from various aspects such as the treatments of mass
transfer between binary components, the efficiencies of CE ejection, and the
mechanisms of supernova explosion. The structure of this paper is organized as
follows. In Section 2, we present the methodology employed, which includes the
adoption of different models. Subsequently, in Section 3, we present the
results derived from our calculations. In Section 4, we make some discussions
based on these results. Finally, we conclude in Section 5.
## 2 Method
We employ the BSE code developed by Hurley et al. (2002) to investigate the
population of Galactic GW sources of NSWD binaries across various models.
These models involve different supernova recipes, mass-transfer efficiencies
and its stability, as well as physical parameters related to CE evolution.
Additionally, we account for the star formation history of the Milky Way and
perform the integration of the spatial motion of NSWDs under the influence of
Galactic gravitational potential. Taking into account the spatial locations of
Galactic NSWD systems, we calculate the signal-to-noise ratio (S/N) for every
GW binary in the LISA band and obtain the population of detectable sources
accordingly.
### 2.1 Supernova Mechanisms
Regarding the types and the masses of stellar remnants following core-collapse
supernova (CCSN) explosions, we utilize the rapid mechanism (Fryer et al.,
2012), which correlates the remnant masses with the masses of the CO cores
prior to explosions. This mechanism can account for the existence of the mass
gap between NSs and BHs (Shao, 2022). Under this mechanism, we adopt a Maxwell
distribution with a standard deviation of
$\sigma=265\mathrm{~{}km}\mathrm{~{}s}^{-1}$ (Hobbs et al., 2005) for the kick
velocities of natal NSs. For NSs formed through electron capture supernovae
(ECSN) or accretion-induced collapse (AIC), we use smaller kick velocities
with $\sigma=30\mathrm{~{}km}\mathrm{~{}s}^{-1}$ (Vigna-Gómez et al., 2018;
Shao & Li, 2018).
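For illustration, kick speeds from these Maxwellian distributions can be drawn directly with scipy (a sketch of ours, not code from BSE):

```python
import numpy as np
from scipy.stats import maxwell

rng = np.random.default_rng(0)

# scale=sigma gives a Maxwellian whose Cartesian velocity components
# are Gaussian with standard deviation sigma
v_ccsn = maxwell.rvs(scale=265.0, size=100_000, random_state=rng)  # km/s
v_ecsn = maxwell.rvs(scale=30.0, size=100_000, random_state=rng)   # km/s

# the mean of a Maxwellian is 2*sigma*sqrt(2/pi) ~ 1.6*sigma
print(v_ccsn.mean())  # ~423 km/s for CCSN NSs
print(v_ecsn.mean())  # ~48 km/s for ECSN/AIC NSs
```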
In addition, we consider an alternative supernova explosion mechanism with the
stochastic recipe proposed by Mandel & Müller (2020). Unlike the rapid
mechanism, this recipe introduces randomness in compact remnant masses and
kick velocities. Both the rapid and the stochastic recipes have been
incorporated into the BSE code (Shao & Li, 2021).
### 2.2 Mass Transfer
In a binary, the stability of mass transfer is determined by considering the
adiabatic hydrostatic response of the donor star to mass loss, denoted as
$\zeta_{\mathrm{ad}}$,
$\zeta_{\mathrm{ad}}=\left.\frac{\partial\ln R_{2}}{\partial\ln
M_{2}}\right|_{\mathrm{ad}},$ (1)
as well as the response of the Roche lobe to mass loss, denoted as
$\zeta_{\mathrm{RL}}$,
$\zeta_{\mathrm{RL}}=\left.\frac{\partial\ln R_{\mathrm{L,2}}}{\partial\ln
M_{2}}\right|_{\mathrm{bin}},$ (2)
where $R_{2}$ and $R_{\mathrm{L,2}}$ are the radii of the donor star and its
Roche lobe, respectively, and $M_{2}$ is the mass of the donor star (see e.g.,
Soberman et al., 1997). When $\zeta_{\mathrm{RL}}<\zeta_{\mathrm{ad}}$, mass
transfer proceeds in a stable manner. Otherwise, dynamically unstable mass
transfer occurs and CE evolution is triggered.
#### 2.2.1 Efficiency of mass transfer
During the evolution of the primordial binaries initially containing two zero-
age main-sequence stars, mass-transfer efficiency ($\eta_{\mathrm{MT}}$)
characterizes the fraction of the transferred matter that is accreted by the
secondary star, which plays a crucial role in determining whether the binaries
undergo stable mass transfer or evolve into a contact/CE phase (e.g., de Mink
et al., 2007). It has been demonstrated that a lower mass-transfer efficiency
tends to prevent significant expansion of the secondary star due to accretion,
allowing a larger parameter space of primordial binaries for stable mass
transfer. Based on the work of Shao & Li (2014), we employ three mass
accretion models to deal with the process of mass transfer during primordial
binary evolution. By default, we utilize the rotation-dependent mass accretion
model, referred to as MA1. In this model, the mass accretion rate of the
secondary star is assumed to be dependent on its rotational velocity, so the
mass-transfer process could be highly non-conservative with
$\eta_{\mathrm{MT}}\lesssim 20\%$. Alternatively, we consider two other
models: half mass accretion and thermal equilibrium-limited mass accretion,
referred to as MA2 and MA3, respectively, corresponding to
$\eta_{\mathrm{MT}}=50\%$ and $\eta_{\mathrm{MT}}\sim 100\%$. Each accretion
model is associated with a specific criterion to decide the stability of mass
transfer between binary components (Shao & Li, 2014). Previous investigations
have shown that the rotation-dependent model can better match the observations
of Galactic OBe-star binaries with a BH or a Wolf-Rayet star companion, while
the observations of Galactic Be-star binaries with an NS or a subdwarf
companion seem to favor the models with $\eta_{\mathrm{MT}}\gtrsim 0.5$ (Shao,
2022, and references therein).
When the accretor is an NS, we assume an accretion efficiency of 0.5 (Chen et
al., 2020). In addition, the accretion rate of the NS is constrained by the
Eddington limit. In our calculations, we adopt the isotropic re-emission
mechanism for non-conservative mass transfer. It is assumed that the material
lost from a binary system carries away the specific angular momentum of the
accretor.
#### 2.2.2 Mass transfer from a nondegenerate star to an NS
In the case of non-conservative mass transfer with an isotropic re-emission
way, previous works (e.g., Soberman et al., 1997) have suggested a positive
correlation between $\zeta_{\mathrm{RL}}$ and $q=M_{\rm d}/M_{\rm NS}$ (mass
ratio of the donor to the NS). As a consequence, there are critical mass
ratios used to determine mass-transfer stability. Tauris et al. (2000) pointed
out that the NS binaries with $q\gtrsim 3-3.5$ always evolve into CE phases
while the systems with $q\lesssim 1.5-2$ are expected to undergo stable mass
transfer (see also Shao & Li, 2012; Misra et al., 2020). Based on detailed
evolutionary simulations of the BH binaries with nondegenerate donors, Shao &
Li (2021) obtained easy-to-use criteria for mass-transfer stability. In this
paper, we apply these criteria to the binaries with an NS accretor. It is
assumed that mass transfer is always stable if $q<2$ or always unstable if
$q>2.1+0.8M_{\rm NS}$. For systems with mass ratios between these two
limits, the stability of mass transfer depends on the properties of the donor stars.
For the binaries with a naked He star and an NS, we adopt the criteria
calculated by Tauris et al. (2015) to deal with mass-transfer stability.
According to their work, CE phases are expected to occur when the masses of
the He stars exceed $2.7M_{\odot}$ and the orbital periods are below $0.06$
days. So we assume that only the systems with He-star masses of
$>2.7M_{\odot}$ and orbital periods of $<0.06$ days evolve into CE phases.
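The thresholds adopted above amount to simple decision rules; a minimal sketch (function names and return labels are ours):

```python
def ns_mass_transfer_outcome(q, m_ns):
    """Stability of mass transfer from a nondegenerate donor onto an NS,
    following the simplified thresholds quoted above (after Shao & Li 2021).
    q is the donor-to-NS mass ratio; m_ns is the NS mass in solar masses."""
    if q < 2.0:
        return "stable"
    if q > 2.1 + 0.8 * m_ns:
        return "unstable: CE evolution"
    return "depends on donor properties"  # intermediate regime

def he_star_triggers_ce(m_he, p_orb_days):
    """CE criterion for naked He star + NS binaries (after Tauris et al. 2015):
    CE only for massive He stars in very tight orbits."""
    return m_he > 2.7 and p_orb_days < 0.06

print(ns_mass_transfer_outcome(1.5, 1.4))  # stable
print(ns_mass_transfer_outcome(4.0, 1.4))  # unstable: CE evolution
print(he_star_triggers_ce(3.0, 0.05))      # True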
#### 2.2.3 Mass transfer from a WD to an NS
Considering that UCXBs with a WD donor and an NS accretor can last $\sim
1\,\rm Myr$ as GW sources, it is speculated that the stability of mass
transfer between WDs and NSs significantly impacts the population properties
of LISA NSWD binaries. Until now, however, the stability of mass transfer from
a WD to an NS still remains uncertain. By default, we adopt a threshold of
$0.2M_{\odot}$ for WD masses (Bobrick et al., 2017) to investigate the
properties of detached NSWD systems observable by LISA. This threshold does
not allow the NS binaries with COWDs/ONeWDs to evolve into long-standing
UCXBs. Also, we vary this threshold mass $M_{\mathrm{WD,crit}}$ from
$0.4M_{\odot}$ (van Haaften et al., 2012) to $1.25M_{\odot}$ (Yu et al., 2021)
and explore its influence on the number of interacting NSWD systems (UCXBs) in
the Milky Way.
For other types of binary systems not mentioned above, we use the default
criteria given by Hurley et al. (2002) to deal with the stability of mass
transfer.
#### 2.2.4 CE Evolution
When CE evolution is triggered, we employ the energy conservation prescription
to determine the outcome of CE phases. The related formulae can be found in
Hurley et al. (2002) and Shao & Li (2014). In our study, we utilize the
binding energy parameter $\lambda$ fitted by Xu & Li (2010). By default, we
assume the efficiency of CE ejection to be unity, i.e.,
$\alpha_{\mathrm{CE}}=1.0$. This parameter determines the proportion of
the released orbital energy that is used to eject the donor's envelope. To assess the impact
of $\alpha_{\mathrm{CE}}$ on the population of LISA NSWD systems, we consider
two additional efficiencies, by choosing $\alpha_{\mathrm{CE}}=0.3$ as
inferred from the parameter distribution of Galactic WDWD systems (Scherbak &
Fuller, 2023) and $\alpha_{\mathrm{CE}}=3.0$ as required by the formation of
the post-CE system IK Pegasi (Davis et al., 2010).
### 2.3 Primordial Binaries
In our study, we simulate the evolution of approximately $10^{6}$ primordial
binaries for each model. The initial parameters of primordial binaries are set
as follows. The primary masses range from $5$ to $100M_{\odot}$, and the
secondary masses range from $0.5$ to $100M_{\odot}$. All primordial binaries
are assumed to have circular orbits, with separations varying from $3$ to
$10000R_{\odot}$. The binary fraction among all stars is assumed to be unity.
We follow the method of Shao & Li (2021) to calculate the Galactic population
of LISA NSWD systems that evolved from primordial binaries.
### 2.4 Star Formation History and Orbital Integration
For the Milky Way, we assume a constant star formation rate of
$3M_{\odot}\mathrm{~{}yr}^{-1}$ and a constant metallicity of $Z_{\odot}=0.02$
throughout its entire lifespan of 10 $\mathrm{Gyr}$.
To account for the spatial motions of NSWD binaries, we utilize the galpy
package (with the MWPotential2014 model, Bovy, 2015) to numerically integrate
their tracks from the formation of NS until either the binaries merge or the
evolutionary time exceeds 10 $\mathrm{Gyr}$.
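A minimal galpy sketch of this integration step; the initial phase-space coordinates below are placeholder values, not drawn from our synthesized population:

```python
import numpy as np
import astropy.units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# placeholder post-supernova coordinates of one NSWD binary:
# [R, vR, vT, z, vz, phi] in cylindrical Galactocentric coordinates
o = Orbit([8.5 * u.kpc, 10.0 * u.km / u.s, 220.0 * u.km / u.s,
           0.1 * u.kpc, 30.0 * u.km / u.s, 0.0 * u.rad])

ts = np.linspace(0.0, 10.0, 10001) * u.Gyr
o.integrate(ts, MWPotential2014)

# position at the end of the 10 Gyr integration
print(o.R(ts[-1]), o.z(ts[-1]))
```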
Regarding the initial locations of primordial binaries in the Milky Way, the
star number density distribution can be described as a function of radial
distance from the Galactic center $r$ and vertical distance from the Galactic
plane $z$, using the equation proposed by Bahcall & Soneira (1980) as
$\rho(r,z)=\rho_{\odot}\exp\left[-\frac{r-r_{\odot}}{h_{r}}-\frac{z}{h_{z}}\right],$
(3)
where $r_{\odot}=8.5\mathrm{~{}kpc}$ represents the radial distance of the Sun
from the Galactic center, and $\rho_{\odot}$ denotes the star number density
at the location of the Sun. Here, $h_{r}=3.5\mathrm{~{}kpc}$ and
$h_{z}=0.25\mathrm{~{}kpc}$ represent the scale lengths parallel and
perpendicular to the Galactic plane, respectively. In the Galactic reference frame, the
velocities of the mass centers of pre-SN systems are assumed to be consistent
with rotation curves, where the circular velocity of the Sun is 220
$\mathrm{km}\mathrm{~{}s}^{-1}$.
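Initial positions following equation (3) can be drawn in closed form (our sketch; we read the vertical profile as $\exp(-|z|/h_{z})$): in cylindrical coordinates the radial weight $r\exp(-r/h_{r})$ is a Gamma distribution with shape 2, and the vertical profile is a Laplace distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
h_r, h_z = 3.5, 0.25  # kpc, scale lengths from equation (3)

def sample_birth_positions(n):
    # p(r) dr ∝ r exp(-r/h_r) dr  ->  Gamma(shape=2, scale=h_r)
    r = rng.gamma(shape=2.0, scale=h_r, size=n)
    # p(z) ∝ exp(-|z|/h_z)        ->  Laplace(0, h_z)
    z = rng.laplace(loc=0.0, scale=h_z, size=n)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return r, phi, z

r, phi, z = sample_birth_positions(100_000)
print(r.mean(), np.abs(z).mean())  # ~2*h_r = 7 kpc and ~h_z = 0.25 kpc
```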
Following the approach of Hurley et al. (2002), the velocities of the new mass
centers of post-SN binaries are subject to changes caused by mass losses and
NS kicks during supernova explosions.
### 2.5 Signal-to-noise Ratio (S/N) of GW
We utilize the python package LEGWORK (Wagg et al., 2022) to calculate the S/N
of Galactic NSWD binaries and identify those with an S/N greater than 5 as
detectable LISA sources. Among optional spacecraft parameters, we select the
robson19 model (Robson et al., 2019) for the sub-mHz confusion noise
contributed by unresolved WDWD binaries in the Galaxy (Cornish & Robson, 2017;
Babak et al., 2021). By default, the observation time is set to 4 years, which
is the standard duration used in LISA mission simulations.
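A minimal LEGWORK sketch of this S/N step; the binary parameters below are placeholders, and we keep the package's default instrument settings:

```python
import numpy as np
import astropy.units as u
from legwork.source import Source

# one placeholder detached NSWD binary
sources = Source(m_1=np.array([1.3]) * u.Msun,   # NS mass
                 m_2=np.array([0.8]) * u.Msun,   # WD mass
                 ecc=np.array([0.1]),
                 dist=np.array([8.0]) * u.kpc,
                 f_orb=np.array([1e-3]) * u.Hz)  # orbital frequency

snr = sources.get_snr(t_obs=4 * u.yr)  # 4-yr LISA observation
print(snr, snr > 5)  # detectable if S/N > 5
```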
## 3 Results
It is challenging for rapid population synthesis to model the formation of
close NSWD systems with low-mass HeWDs, which involve the RLOF channel and
require severe fine-tuning of input parameters (see e.g., Istrate et al.,
2014; Chen et al., 2020; Deng et al., 2021). Consequently, the NS$-$HeWD
systems as GW sources are absent in our results111Despite the absence of
NS$-$HeWD systems in our results, we can evaluate their impact on LISA
detection. Previous studies have demonstrated that LISA may detect more than
100 NS$-$HeWD binaries in the Milky Way (Tauris, 2018; Chen et al., 2020).
Magnetic braking mechanisms are thought to play a crucial role in forming
close NS$-$HeWD systems and alleviating the fine-tuning problem (Deng et al.,
2021; Chen et al., 2021). For LISA NSWD sources, one may differentiate HeWDs
from COWDs/ONeWDs with measured chirp masses in detached systems or observed
spectral lines in interacting systems. Possible detection of LISA NS$-$HeWD
systems can be used to constrain the mechanisms of magnetic braking. We also
consider LISA NSWD systems with NSs originating from CCSN and ECSN, while
those with NSs originating from AIC are disregarded here and discussed in
Section 4.3.
Our calculations reveal that the total number of NSWD systems in the
Milky Way is about $2\times 10^{6}$, which is consistent with the estimation
of Nelemans et al. (2001b). Expected numbers of NSWD systems detectable by
LISA vary across different models, as presented in Table 1. In this table, we
separately list the numbers of detached and interacting LISA sources, as well
as the merger rates of NSWD systems.
In Section 3.1, we discuss the evolutionary pathways and initial binary
parameter spaces to form LISA NSWD binaries. Subsequently, in Section 3.2, we
evaluate the influence of different models related to the options of
$\eta_{\mathrm{MT}}$, $\alpha_{\mathrm{CE}}$, and supernova recipes. We
analyze the distributions of various parameters of LISA NSWD binaries
including their orbital parameters, component masses, and Galactic locations.
In Section 3.3, we investigate the impact of $M_{\mathrm{WD,crit}}$ on
interacting LISA NSWDs. In Section 3.4, we estimate the merger rates of NSWD
binaries in the Milky Way and the local Universe.
Table 1: Expected numbers and merger rates ($R_{\mathrm{merger}}$) of LISA NSWD binaries in the Milky Way. Our models include different treatments of mass-transfer efficiencies during primordial binary evolution ($\eta_{\mathrm{MT}}$), critical WD masses for the stability of mass transfer between WDs and NSs ($M_{\mathrm{WD,cri}}$), CE ejection efficiencies ($\alpha_{\mathrm{CE}}$), and supernova recipes. MA1, MA2, and MA3 represent rotation-dependent mass accretion, half mass accretion ($\eta_{\mathrm{MT}}=50\%$), and near-conservative mass accretion ($\eta_{\mathrm{MT}}\sim 100\%$), respectively. The superscripts D and I refer to detached and interacting LISA NSWD binaries; the two rightmost $N^{\mathrm{D}}$ ($N^{\mathrm{I}}$) columns split these numbers by formation order. $\mathcal{R}_{\mathrm{merger}}$ denotes estimated merger rate densities of NSWD systems in the local Universe.
$\eta_{\mathrm{MT}}$ | $M_{\mathrm{WD,cri}}$ $(M_{\odot})$ | $\alpha_{\mathrm{CE}}$ | SN recipe | $N^{\mathrm{D}}$ | $N^{\mathrm{I}}$ | $N^{\mathrm{D}}$ ($N^{\mathrm{I}}$), NS forms first | $N^{\mathrm{D}}$ ($N^{\mathrm{I}}$), WD forms first | $R_{\mathrm{merger}}$ $(\mathrm{Myr}^{-1})$ | $\mathcal{R}_{\mathrm{merger}}$ $(\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1})$
---|---|---|---|---|---|---|---|---|---
MA1 | 0.2 | 0.3 | rapid | 45 | $-$ | 34 | 11 | 17.4 | 174
MA1 | 0.2 | 1.0 | rapid | 105 | $-$ | 79 | 26 | 43.3 | 433
MA1 | 0.2 | 1.0 | stochastic | 162 | $-$ | 120 | 42 | 66.9 | 669
MA1 | 0.2 | 3.0 | rapid | 197 | $-$ | 149 | 48 | 83.3 | 833
MA2 | 0.2 | 0.3 | rapid | 17 | $-$ | 14 | 3 | 7.2 | 72
MA2 | 0.2 | 1.0 | rapid | 78 | $-$ | 55 | 23 | 31.2 | 312
MA2 | 0.2 | 1.0 | stochastic | 103 | $-$ | 72 | 31 | 42.3 | 423
MA2 | 0.2 | 3.0 | rapid | 153 | $-$ | 86 | 67 | 64.3 | 643
MA3 | 0.2 | 0.3 | rapid | 49 | $-$ | 11 | 38 | 18.1 | 181
MA3 | 0.2 | 1.0 | rapid | 145 | $-$ | 47 | 98 | 55.9 | 559
MA3 | 0.2 | 1.0 | stochastic | 211 | $-$ | 63 | 148 | 82.6 | 826
MA3 | 0.2 | 3.0 | rapid | 234 | $-$ | 64 | 170 | 88.9 | 889
MA1 | 1.25 | 1.0 | rapid | 105 | 194 | 79 (174) | 26 (20) | 5.1 | 51
MA2 | 1.25 | 1.0 | rapid | 78 | 116 | 55 (100) | 23 (16) | 8.7 | 87
MA3 | 1.25 | 1.0 | rapid | 145 | 120 | 47 (99) | 98 (21) | 31.8 | 318
### 3.1 Formation Scenarios and Binary Parameter Spaces
NSWD systems can be categorized based on the formation order of their binary
components, specifically whether the NS or the WD forms first. Therefore, we
classify the formation scenarios for LISA NSWD sources as follows:
Scenario 1: NS forms first.
Scenario 2: WD forms first.
In Scenario 1, starting from a primordial binary, the primary star first
evolves into an NS and the secondary star subsequently becomes a WD. A
common feature is that the NSWD binaries formed via Scenario 1 have circular
orbits. The masses of the primary stars and the secondary stars fall within
the ranges of $6-20M_{\odot}$ and $2-10M_{\odot}$, respectively. The orbital
periods of the primordial binaries cover two separated ranges, i.e., a few
days to tens of days and hundreds to thousands of days. Generally, the
primordial binaries with narrow orbits undergo stable mass transfer during the
first mass-transfer stage, while those with wide orbits are expected to
experience CE evolution since the primary stars have developed a deep
convective envelope prior to mass transfer.
In Scenario 2, the primary stars typically have initial masses of $\gtrsim
5M_{\odot}$ and the secondary stars have similar masses. As the progenitors of
LISA NSWD sources, the corresponding primordial binaries have relatively
narrow orbits with periods of a few days to tens of days. During the
evolution, mass transfer occurs through stable RLOF. The primary stars first
turn into WDs. Since the secondary stars have accreted sufficient matter
during previous mass-transfer processes, they can evolve into He stars with
masses of $\gtrsim 2M_{\odot}$ after their hydrogen envelopes are stripped via CE
phases, and eventually become NSs. Additionally, it is possible for Scenario 2
that a small fraction of the primordial binaries with long orbital periods
evolve to be double He-star systems before WD and NS formation (see also the
Pathway 3 described by Toonen et al., 2018). In contrast to Scenario 1, NSWD
binaries where WDs form first tend to exhibit large orbital eccentricities due
to the lack of an efficient mechanism of orbital circularization after NS
formation.
### 3.2 Detached NSWD Binaries
#### 3.2.1 The impact of mass-transfer efficiency
Figure 1: Calculated number distributions of detached LISA NSWD systems in the
Milky Way, as a function of NS mass, WD mass, orbital period, and
eccentricity. The left, middle, and right panels correspond to the models MA1,
MA2 and MA3, respectively. Here, we adopt $\alpha_{\mathrm{CE}}=1$ and the
rapid mechanism of supernova explosions. In each panel, the blue contours
represent systems where NS forms first, while the orange contours represent
systems where WD forms first. Notably, all NSWD binaries where NS forms first
exhibit circular orbits, so the corresponding blue contours do not appear in
the plane of orbital period versus eccentricity.
Fig. 1 presents calculated number distributions of Galactic LISA sources of
detached NSWD binaries in the planes of NS mass versus WD mass (upper panels)
and orbital period versus eccentricity (lower panels). The left, middle, and
right panels correspond to the mass accretion models MA1, MA2, and MA3,
respectively. In this analysis, we adopt the default rapid supernova explosion
mechanism and set $\alpha_{\mathrm{CE}}=1$. The value of
$M_{\mathrm{WD,crit}}$ is fixed at $0.2M_{\odot}$, resulting in all LISA NSWD
systems being detached. The reason is that when NSs form first, WD masses are
typically above $0.4M_{\odot}$, whereas when WDs form first, WD masses are
generally larger than $0.8M_{\odot}$. This also indicates the absence of HeWDs
in all cases. Since the critical mass ratio of nondegenerate donors to NSs for
avoiding CE evolution is $\sim 2$ (see e.g., Misra et al., 2020), LISA NSWD
binaries formed via Scenario 1, as the evolutionary products of intermediate-
mass X-ray binaries, are expected to host COWDs or ONeWDs. On the other hand,
NSWD systems formed via Scenario 2 require the masses of both components of
the primordial binaries to exceed $5M_{\odot}$, leading to the production of massive
WDs. The masses of NSs are distributed with two peaks at $\sim 1.1M_{\odot}$
and $\sim 1.3M_{\odot}$, corresponding to the NSs formed from CCSN and ECSN,
respectively. For CCSN NSs, we adopt the rapid mechanism (Fryer et al., 2012)
which predicts that stars with masses of $\sim 8-12M_{\odot}$ finally collapse
into $\sim 1.1M_{\odot}$ NSs. For ECSN NSs, we simply assume that they are
born with mass of $1.3M_{\odot}$ (see also He et al., 2023).
It is noteworthy that distinguishing the evolutionary origins of detached NSWD
systems from the RLOF and the CE channels is relatively straightforward. The
RLOF channel is expected to produce LISA NS$-$HeWD binaries with chirp masses
of $\lesssim 0.44M_{\odot}$, corresponding to the systems containing a
$\lesssim 2M_{\odot}$ NS and a $\sim 0.167M_{\odot}$ WD (Tauris, 2018). For
the CE channel, our calculations indicate a minimum chirp mass of $\sim
0.56M_{\odot}$ for LISA NSWD systems which contain a $\gtrsim 1.1M_{\odot}$ NS
and a $\gtrsim 0.4M_{\odot}$ WD. Consequently, we can confidently discern the
evolutionary channels of LISA NSWD systems. Next, we focus on the systems with
COWD/ONeWD components and perform quantitative analyses of binary parameters
under different models.
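The quoted boundaries follow directly from the chirp mass $M_{\mathrm{chirp}}=(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}$; a quick numerical check (ours):

```python
def chirp_mass(m1, m2):
    """Chirp mass in the same units as m1 and m2 (solar masses here)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# RLOF-channel upper bound: a ~2 Msun NS with a ~0.167 Msun HeWD
print(chirp_mass(2.0, 0.167))  # ~0.44
# CE-channel lower bound: a ~1.1 Msun NS with a ~0.4 Msun WD
print(chirp_mass(1.1, 0.4))    # ~0.56
```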
Among three mass accretion models, we see that MA3, which corresponds to
nearly conservative mass transfer, yields the highest number of LISA NSWD
systems where WDs form first. Additionally, the majority of these systems are
expected to host an ONeWD. As mass-transfer efficiency increases, the
component masses of primordial binaries in Scenario 2 shift towards lower
values, resulting in the formation of more systems where WDs form first due to
initial mass function. Specifically, the model MA3 predicts the existence of
approximately 50 detached NSWD systems with eccentricities exceeding 0.1. In
this case, the fraction of these eccentric binaries among all detached LISA
systems with COWDs/ONeWDs (the number is 145 from Table 1) can reach as high
as $\sim 0.34$. And, we estimate that $\sim 8$ systems are likely to have
relatively wide orbits with periods ranging from 0.1 to 1 day. In contrast,
the models MA1 and MA2 predict that almost no LISA NSWD binaries have large
eccentricities of $>0.5$, and only around 10 systems are expected to have
eccentricities exceeding 0.1, corresponding to their fraction of $\sim
0.1-0.14$ among all $78-105$ LISA binaries (see Table 1). Based on the
distributions of orbital parameters combined with the numbers of detached NSWD
systems detectable by LISA, it becomes feasible to provide constraints on
possible mass accretion models from future GW observations.
Figure 2: Pie charts illustrating the relative fractions of detached LISA NSWD
systems where NSs (blue) or WDs (red) form first. The left, middle, and right
panels represent the models MA1, MA2 and MA3, respectively. The panels from
top to bottom correspond to $\alpha_{\mathrm{CE}}=0.3$, 1.0, and 3.0,
respectively. Below each pie chart, we also give the corresponding numbers for
all detached binaries and the systems with eccentricities larger than 0.1.
#### 3.2.2 The impact of $\alpha_{\mathrm{CE}}$
Fig. 2 shows the relative fractions of detached LISA NSWD binaries formed via
Scenario 1 or 2. The left, middle, and right panels represent the models MA1,
MA2 and MA3, respectively. The panels from top to bottom correspond to
$\alpha_{\mathrm{CE}}=0.3,$ 1.0, and 3.0, respectively. In each panel, we also
give the numbers of all detached LISA binaries from our calculations and the
systems with eccentricities larger than 0.1.
It is obvious that mass-transfer efficiencies during primordial binary
evolution are the dominant factor influencing the relative fractions of
systems where NSs or WDs form first. However, we note that the influence of
$\alpha_{\mathrm{CE}}$ cannot be disregarded. On the one hand, a lower value
of $\alpha_{\mathrm{CE}}$ makes it more challenging to eject the CE, resulting in a
significant reduction in the number of systems. Expected numbers of LISA NSWD
binaries ($N^{\rm D}$) decrease from 153$-$234, to 78$-$145, and to 17$-$49
when decreasing $\alpha_{\mathrm{CE}}$ from 3.0, to 1.0, and to 0.3,
respectively. On the other hand, the numbers of the systems with
eccentricities above 0.1 ($N^{\rm D}_{\rm e>0.1}$) are sensitive to the
options of $\alpha_{\mathrm{CE}}$ in the models MA1 and MA2. Overall, the
ratios of $N^{\rm D}_{\rm e>0.1}$ to $N^{\rm D}$ are always $\lesssim 0.2$ in
these two models. In contrast, for all adopted values of
$\alpha_{\mathrm{CE}}$, the model MA3 predicts that $\sim 0.3-0.4$ of
detectable NSWD binaries have eccentricities larger than 0.1. This is because
the primordial binaries in Scenario 2 have orbital periods of 8$-$27 days in
the model MA3, but of 2$-$20 days in the models MA1 and MA2; the latter are
more susceptible to the influence of $\alpha_{\mathrm{CE}}$ when CE
evolution occurs in the subsequent binaries with a giant star and a WD.
In the model MA3, a lower value of $\alpha_{\mathrm{CE}}=0.3$ leads to an
increase of the relative fraction of detached LISA NSWD binaries where WDs
form first, compared to the cases of $\alpha_{\mathrm{CE}}=1.0$ and
$\alpha_{\mathrm{CE}}=3.0$, which can potentially shift the chirp-mass
distribution of the whole population of NSWD GW sources to have a higher mass
peak. A similar conclusion was drawn by Korol et al. (2023), who assumed the
mass-accretion rate during primordial binary evolution to be limited by the
thermal timescale of the accreting secondary, resembling our model MA3.
#### 3.2.3 The impact of supernova recipes
Figure 3: Probability density functions (PDFs) of detached LISA NSWD systems
in the Milky Way, as a function of chirp mass and vertical distance from the
Galactic plane. The left, middle, and right panels correspond to the models
MA1, MA2, and MA3, respectively. The top and bottom panels represent the rapid
and stochastic supernova mechanisms, respectively. Here we adopt
$\alpha_{\mathrm{CE}}=1.0$. In each panel, the blue dots denote systems where
NSs form first, while the orange dots denote systems where WDs form first. The
size of each dot reflects its offset distance from the Galactic center.
Fig. 3 shows the probability density functions of detached LISA NSWDs in the
Milky Way, as a function of chirp mass $M_{\rm chirp}$ and vertical distance
$z$ from the Galactic plane. Here, we adopt $\alpha_{\mathrm{CE}}=1.0$. The
scattered blue and orange dots correspond to the NSWD systems formed
via Scenarios 1 and 2, respectively. Different sizes of these dots represent
their offset distances from the Galactic center. The left, middle, and right
panels represent the models MA1, MA2, and MA3, respectively. The top and
bottom panels represent the rapid and stochastic supernova mechanisms,
respectively. The vast majority of the binaries where NSs form first are
distributed within $z<1\,\rm kpc$, while the systems where WDs form first are
more likely to be located at relatively large $z$, with a tail extending up to
$\sim 2-5\,\rm kpc$. The latter systems are also expected to have
significantly larger offset distances from the Galactic center. The main reason
for these discrepancies is that the systems where WDs form first are more
susceptible to the kick velocities of natal NSs. Also, we should note that a
significant fraction of LISA NSWD binaries where WDs form first have eccentric
orbits.
For the supernova mechanisms changing from the rapid to the stochastic
recipes, we observe a slight shift in the overall distribution of chirp masses
of LISA NSWD sources towards higher values, especially in the model MA3. This
shift is because the stochastic mechanism can produce more massive NSs with
masses of $\sim 1.2M_{\odot}-1.6M_{\odot}$ than the rapid mechanism that forms
$\sim 1.1M_{\odot}$ NSs. Furthermore, the stochastic mechanism allows a fraction of NSs to
have small kick velocities, compared to the rapid mechanism, therefore
preventing the disruption of more binaries where WDs form first during
supernova explosions. As a result, we can see for the stochastic mechanism
that some LISA sources are located at relatively large $z$. However, this
difference from the spatial locations of LISA NSWD binaries cannot provide
strong constraints on our adopted supernova mechanisms. As pointed out by
Korol et al. (2023), the majority of LISA NSWDs are located within
$z=5\mathrm{~{}kpc}$ from the Galactic disk. Another avenue of exploration
lies in decoupling the component masses of observable binaries, which may
offer improved constraints on the remnant masses after supernova explosions.
### 3.3 Interacting NSWD Binaries
Figure 4: The influence of $M_{\mathrm{WD,crit}}$ on calculated numbers of
interacting LISA NSWD binaries (top panel), as well as number ratios of
interacting systems to detached systems (bottom panel). The blue, orange, and
green curves correspond to the models MA1, MA2, and MA3, respectively. Here
$\alpha_{\mathrm{CE}}=1.0$ and the rapid supernova mechanism are adopted.
For close detached NSWDs with orbital periods less than $\sim 0.4$ days, in
particular eccentric systems, the emission of GW is able to significantly
shrink their orbits. Within a Hubble time, RLOF occurs in these binaries,
leading to the formation of interacting systems. Depending on the stability of
mass transfer via RLOF, NSWD binaries may either merge or evolve into stable
UCXBs. Notably, a large value of $M_{\mathrm{WD,crit}}$ can increase the
number of interacting NSWD binaries.
Fig. 4 shows the expected number of interacting LISA NSWDs, as well as the
number ratio of interacting systems to detached systems, as a function of
$M_{\mathrm{WD,crit}}$. In this analysis, we choose $\alpha_{\mathrm{CE}}=1.0$
and the rapid supernova mechanism. Three mass accretion models (i.e. MA1, MA2,
and MA3) are taken into account for comparison. Across these models, a
pronounced increase in the number of observable sources occurs near
$0.8M_{\odot}$, as this is where the mass distribution of WDs has a peak (see
also Fig. 1). In each model, there is a plateau at the high-mass end of $\sim
1.1-1.2M_{\odot}$. This plateau arises because the orbital shrinkage of the
systems with $\gtrsim 1.1M_{\odot}$ ONeWDs caused by GW radiation dominates
and these systems always merge, although stable mass transfer is allowed to
happen. Overall, when $M_{\mathrm{WD,crit}}$ is below $0.6M_{\odot}$, the three models
exhibit a similar trend. For $M_{\mathrm{WD,crit}}$ exceeding $0.6M_{\odot}$, the model
MA1 is able to produce more interacting NSWD systems, since it allows the
formation of more close binaries with less-massive COWDs compared to the
models MA2 and MA3.
In total, the number ratios of interacting to detached NSWD binaries are less
than about 1.9/1.5/0.9, corresponding to the models MA1/MA2/MA3, respectively.
Importantly, these number ratios are sensitive to the options of
$M_{\mathrm{WD,crit}}$. In principle, we can provide some constraints on
$M_{\mathrm{WD,crit}}$ if a number of interacting and detached LISA NSWD
binaries with COWD/ONeWD components are identified in the future. However, it
is essential to consider the formation of interacting LISA sources via other
pathways such as the RLOF and the AIC channels. Distinguishing between these
channels becomes challenging. On the one hand, all detectable interacting
NSWDs exhibit eccentricities of $<0.001$, which are below the measurement
threshold of approximately 0.1 for LISA (Korol et al., 2023). On the other
hand, the RLOF channel can produce NS$-$HeWD systems with
$z>1\mathrm{~{}kpc}$, such as J0348+0432 (Antoniadis et al., 2013). It is
possible to distinguish the RLOF and the CE channels if the observed spectra
can reveal whether the donor material originates from HeWDs or from COWDs/ONeWDs.
Furthermore, the AIC channel can lead to the formation of both NS$-$HeWD and
NS$-$COWD systems (see Section 4.3 for more details).
### 3.4 Merger rates
The merger rates ($R_{\mathrm{merger}}$) of Galactic NSWD binaries under
various models are presented in Table 1, varying in the range of about
$5-90\,\rm Myr^{-1}$. It is worth noting that the merger rates are relatively
low in the models with $\alpha_{\mathrm{CE}}=0.3$, which are consistent with
the small numbers of corresponding LISA binaries predicted in these models. In
the models MA1 and MA2, increasing $M_{\mathrm{WD,crit}}$ from $0.2M_{\odot}$
to $1.25M_{\odot}$ can greatly decrease the merger rates by a factor of $\sim
4-8$. However, the model MA3 shows an exception, as it allows the formation of
a substantial number of NS$-$ONeWD binaries, which are bound to merge despite
a large value of $M_{\mathrm{WD,crit}}$.
It is believed that NSWD mergers are relevant to some observable transients
outside the Milky Way (e.g., Bobrick et al., 2022; Kaltenborn et al., 2023).
One can calculate the merger rate density ($\mathcal{R}_{\mathrm{merger}}$) of
NSWD binaries in the local Universe, if simultaneously modeling the evolution
of star formation and metallicity as a function of redshift. Here we present a
rough estimation of $\mathcal{R}_{\mathrm{merger}}$, according to our obtained
$R_{\mathrm{merger}}$. We assume that the number density of Milky Way
equivalent galaxies within the local Universe is $0.01\rm\,Mpc^{-3}$ (e.g.,
Abadie et al., 2010). This produces a conversion factor between
$R_{\mathrm{merger}}=1\rm\,Myr^{-1}$ and
$\mathcal{R}_{\mathrm{merger}}=10\rm\,Gpc^{-3}\,yr^{-1}$. So we estimate the
merger rate density of NSWD binaries in the local Universe of $\sim
50-900\rm\,Gpc^{-3}\,yr^{-1}$, which is in agreement with the theoretical
prediction of $390\rm\,Gpc^{-3}\,yr^{-1}$ made by Zhao et al. (2021).
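Explicitly, since $1\rm\,Gpc^{3}=10^{9}\,Mpc^{3}$, the conversion reads
$0.01\rm\,Mpc^{-3}\times 1\,Myr^{-1}=10^{7}\,Gpc^{-3}\,Myr^{-1}=10\rm\,Gpc^{-3}\,yr^{-1}.$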
## 4 Discussion
### 4.1 Identification of NSWDs
Although the detection of Galactic NSWD binaries as GW sources can provide
valuable information to constrain the origin of these sources and the physics
of binary interaction, not all NSWDs can be discerned among numerous GW
sources in the Milky Way.
The identification of LISA NSWD systems typically relies on the measurement of
chirp masses, which usually range from approximately $0.35$ to $1.2M_{\odot}$
(e.g., Korol et al., 2023). For comparison, the chirp mass distribution of
WDWD and NSNS systems exhibits a peak around $\sim 0.25-0.4\mathrm{M}_{\odot}$
(Korol et al., 2022) and $1.1M_{\odot}$ (Korol & Safarzadeh, 2021),
respectively. The determination of chirp masses can often be inferred through
the measurement of GW frequency derivative $\dot{f}_{\mathrm{GW}}$ for nearby
GW sources with exceptionally high S/N. Additionally, for other systems,
Tauris (2018) indicated that combining measurements of optical distance and GW
strain amplitude can effectively constrain $\dot{f}_{\mathrm{GW}}$.
Furthermore, eccentricities can serve as an alternative feature to distinguish
NSWD systems from WDWD systems in cases where chirp masses cannot be directly
measured. Korol et al. (2023) demonstrated that the minimum detectable
eccentricity, derived from full Bayesian parameter estimation (Moore et al.,
2023), can reach as low as $\sim 0.03$ when searching for realistic eccentric
NSWD systems. Moreover, as mentioned by Tauris (2018), a direct method to
identify NSWD systems is the search for both optical WD and radio pulsar counterparts.
### 4.2 The degeneracy of multiple parameters
In our study, we include multiple parameters related to binary evolution,
which can potentially cross-influence the population statistics of LISA NSWD
systems and lead to possible degeneracy of these parameters.
The mass accretion models (MA1, MA2 and MA3) play a vital role in determining
the formation order of NS/WD, thereby affecting the eccentricity distribution
of detached LISA NSWD binaries. The formation of the systems with
eccentricities larger than 0.1 is sensitive to the option of these models.
Additionally, the mass accretion models can also influence the chirp mass
distributions, as the NSWD binaries where WDs form first tend to contain more
massive WDs than the systems where NSs form first. Combining the distributions
of orbital eccentricities and chirp masses for detached LISA NSWD binaries is
able to provide constraints on the mass accretion models.
Varying CE ejection efficiencies $\alpha_{\mathrm{CE}}$ can lead to different
relative fractions of detached LISA NSWD binaries where NSs or WDs form first.
The impact of $\alpha_{\mathrm{CE}}$ on the ratios of $N^{\rm D}_{\rm e>0.1}$
to $N^{\rm D}$ is considerably smaller than that of mass accretion models (see
Fig. 2). Furthermore, disentangling the impact of $\alpha_{\mathrm{CE}}$ from
that of supernova mechanisms is challenging, as both of them can significantly
influence the calculated numbers and the obtained parameter distributions of
detached LISA NSWD binaries (see Fig. 1 and Figs. 6$-$8 in Appendix A).
### 4.3 AIC Mechanism
Figure 5: The bar chart illustrates calculated numbers of LISA NSWD binaries
under various models. The blue bars represent the systems with CCSN/ECSN NSs,
while the orange bars represent the systems with AIC NSs. Since the criterion
of $M_{\mathrm{WD,crit}}=0.2M_{\odot}$ is used, all LISA binaries with
CCSN/ECSN NSs from our calculations are detached systems. Here, the LISA
binaries with AIC NSs include interacting systems.
Based on our calculations, we observe a significant contribution from the NSWD
binaries with AIC NSs to the whole population of Galactic LISA NSWD sources.
This contribution was not considered in previous analyses due to considerable
uncertainties of the AIC mechanism itself.
Starting from a primordial binary, the primary star first
leaves an ONeWD. When the secondary star evolves to the giant branch and fills
its Roche lobe, a CE phase is triggered. This phase leads to the formation of
the ONeWD binary with a helium star or another WD companion. Subsequently, the
ONeWD collapses into an NS when its mass exceeds the critical mass of
$1.38M_{\odot}$ due to accretion.
In Fig. 5, we provide an estimate of the number of LISA NSWD binaries with AIC
NSs in the Milky Way. In contrast to the systems with CCSN/ECSN NSs, a lower
value of $\alpha_{\mathrm{CE}}$ tends to produce more LISA sources with AIC
NSs. Overall, our models predict that the Milky Way may host about $100-300$
LISA NSWD binaries with AIC NSs.
The AIC channel is not yet fully understood, including uncertainties from the
treatments of mass-transfer stability in binaries with WD accretors and mass-
retention efficiency of accreting WDs. For the stability of mass transfer
between giant-like donor stars and WDs, we adopt the default critical mass
ratio in the BSE code, as Equation (56) of Hurley et al. (2002). This
criterion of mass-transfer stability is derived by Hjellming & Webbink (1987),
assuming the transfer of mass and orbital angular momentum is conservative
during the evolution. However, the realistic value of $\zeta_{\mathrm{RL}}$
depends on specific mass-loss mechanisms when mass transfer is non-
conservative (Soberman et al., 1997). The default criterion involving a low
critical mass ratio implies that almost all of the systems with an ONeWD and a
giant donor experience a CE phase and evolve to be close binaries, thereby
significantly contributing to the population of LISA NSWD binaries during
subsequent evolution. However, Pavlovskii & Ivanova (2015) proposed that the
critical mass ratio for stable mass transfer in binaries with a giant donor
and a compact object ranges from 1.5 to 2.2. More recently, Ge et al. (2020)
showed that mass transfer in binaries with giant donor stars is more stable
than previously believed.
Furthermore, for stable mass transfer between two WDs, we adopt the critical
mass ratio of 0.628 (Hurley et al., 2002), which is similar to the value
proposed by Nelemans et al. (2001a). This critical mass ratio allows most
ONeWDs to effectively accrete mass from HeWDs or COWDs, eventually resulting
in the formation of NSs via AIC. Additionally, we assume that steady nuclear
burning occurs on the surface of accreting WDs if mass-transfer rate falls
within the range of $(1.03-2.71)\times
10^{-7}\mathrm{M}_{\odot}\mathrm{yr}^{-1}$. However, this range greatly
depends on the masses and the temperatures of the WDs involved, as well as the
components of accreted material. Therefore, there remains uncertainty
regarding whether the AIC mechanism significantly contributes to the population
of LISA NSWD binaries.
## 5 Conclusions
In this study, we have performed binary population synthesis calculations to
investigate the origins of LISA NSWD binaries in the Milky Way and examine the
influences of different assumptions related to binary evolution on their
characteristic distribution. Our results reveal that approximately 17$-$234
detached NSWD binaries and less than 200 interacting systems can serve as
detectable LISA sources, excluding the NSWDs originating from the RLOF and the
AIC channels.
The model MA3, with near-conservative mass transfer during primordial binary
evolution, predicts that most detached LISA NSWD binaries are systems where WDs
form first (see Fig. 2). Among all detached LISA binaries from our
calculations, the fraction for the systems with eccentricities larger than 0.1
can reach as high as $\sim 0.3-0.4$. For the models MA1 and MA2, in contrast,
we obtain that $\lesssim 0.2$ of detached LISA NSWD systems have eccentricities
larger than 0.1. These eccentric systems are more likely to be located at large
vertical distances from the Galactic plane. Furthermore, detached LISA NSWD
binaries are more likely to host a massive ONeWD in the model MA3, compared to
the models MA1 and MA2. By studying the distributions of the binary parameters
and the spatial locations of detached LISA NSWD sources, it is possible to
constrain mass accretion models (i.e., MA1, MA2, and MA3).
Also, we have demonstrated that CE ejection efficiencies
$\alpha_{\mathrm{CE}}$ can significantly influence the expected numbers of
LISA NSWD sources in the Milky Way. A smaller value of $\alpha_{\mathrm{CE}}$
tends to suppress the formation of the systems with eccentricities of $>0.1$,
particularly in the models MA1 and MA2. For supernova mechanisms, the adoption
of the stochastic recipe tends to generate more binaries with large chirp
masses, in contrast to the rapid recipe.
Finally, we propose that the stability of mass transfer between WDs and NSs
could be constrained by observations of interacting LISA NSWD binaries that
also appear as UCXBs.
## Acknowledgements
We thank the anonymous referee for helpful suggestions that improved this
paper. This work was supported by the National Key Research and Development
Program of China (Grant Nos. 2023YFA1607902, 2021YFA0718500), the Natural
Science Foundation of China (Nos. 12041301, 12121003, 12373034), the Strategic
Priority Research Program of the Chinese Academy of Sciences (Grant No.
XDB0550300), and the Project U1838201 supported by NSFC and CAS.
## Data Availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
## Appendix A Parameter distributions of detached LISA NSWD binaries
Figs. 6$-$8 show calculated number distributions of Galactic LISA sources of
detached NSWD systems as a function of binary parameters, similar to Fig. 1,
by assuming different CE efficiencies and supernova recipes.
Figure 6: Similar to Fig. 1, but assuming $\alpha_{\mathrm{CE}}=0.3$.

Figure 7: Similar to Fig. 1, but assuming $\alpha_{\mathrm{CE}}=3.0$.

Figure 8: Similar to Fig. 1, but assuming the stochastic mechanism of supernova explosions.
# Finsler geometry modeling and Monte Carlo study of skyrmion shape
deformation by uniaxial stress
Sahbi El Hog1, Fumitake Kato2, Hiroshi Koibuchi3, Hung T. Diep4

1Laboratoire de la Matière Condensée et des Nanosciences (LMCN), Université de Monastir, Département de Physique, Faculté des Sciences de Monastir, Avenue de l'Environnement, 5019 Monastir, Tunisia
2Department of Industrial Engineering, National Institute of Technology (KOSEN), Ibaraki College, Nakane 866, Hitachinaka, Ibaraki 312-8508, Japan
3National Institute of Technology (KOSEN), Sendai College, 8 Nodayama, Medeshima-Shiote, Natori-shi, Miyagi 981-1239, Japan
4Laboratoire de Physique Théorique et Modélisation, University of Cergy-Pontoise, CNRS, UMR 8089, 2 Avenue Adolphe Chauvin, 95302 Cergy-Pontoise Cedex, France
###### Abstract
Skyrmions in chiral magnetic materials are topologically stable and
energetically balanced spin configurations appearing in the presence of
ferromagnetic interaction (FMI) and Dzyaloshinskii-Moriya interaction (DMI).
Much of the current interest has focused on the effects of magneto-elastic
coupling on these interactions under mechanical stimuli, such as uniaxial
stresses, for future applications in spintronics devices. Recent studies
suggest that skyrmion shape deformations in thin films are attributed to an
anisotropy in the coefficient of DMI, such that $D_{x}\\!\not=\\!D_{y}$, which
makes the ratio $\lambda/D$ anisotropic, where the coefficient $\lambda$ of FMI
is isotropic. It is also possible that $\lambda_{x}\\!\not=\\!\lambda_{y}$
while $D$ is isotropic for $\lambda/D$ to be anisotropic. In this paper, we
study this problem using a new modeling technique based on Finsler geometry
(FG). Two possible FG models are examined: in the first (second) model, the FG
modeling prescription is applied to the FMI (DMI) Hamiltonian. We find that the
results of these two different FG models are consistent with the reported
experimental data for skyrmion deformation. We also study responses of helical
spin orders under lattice deformations corresponding to uniaxial
extension/compression and find a clear difference between the two models in the
stripe phase, elucidating which of the FMI and DMI interactions is deformed to
be anisotropic by uniaxial stresses.
## I Introduction
Skyrmions are topologically stable spin configurations Skyrme-1961 ;
Moriya-1960 ; Dzyalo-1964 ; Bogdanov-Nat2006 ; Bogdanov-PHYSB2005 ; Bogdanov-
SovJETP1989 observed in chiral magnetic materials such as FeGe, MnSi, etc.
Uchida-etal-SCI2006 ; Yu-etal-Nature2010 ; Mohlbauer-etal-Science2009 ;
Munzer-etal-PRB2010 ; Yu-etal-PNAS2012 , and are considered to be applicable
for future spintronics devices Fert-etal-NatReview2017 . For this purpose,
many experimental and theoretical studies have been conducted Buhrandt-PRB2013
; Zhou-Ezawa-NatCom2014 ; Iwasaki-etal-NatCom2013 ; Romming-etal-Science2013
specifically on responses to external stimuli such as mechanical stresses
Bogdanov-PRL2001 ; Butenko-etal-PRB2010 ; Chacon-etal-PRL2015 ; Levatic-etal-
SCRep2016 ; Seki-etal-PRB2017 ; Yu-etal-PRB2015 ; Banerjee-etal-PRX2014 ;
Gungordu-etal-PRB2016 . It has been demonstrated that mechanical stresses
stabilize/destabilize or deform the skyrmion configuration Ritz-etal-PRB2013 ;
Shi-Wang-PRB2018 ; Nii-etal-PRL2014 ; Nii-etal-NatCom2015 ; Chen-etal-
SCRep2017 .
The effects of magnetostriction in chiral magnets have been analytically
studied using a spin-density-wave description within a Landau-type free-energy
model, in which magneto-elastic coupling (MEC) is assumed Plumer-Walker-JPC1982
; Plumer-etal-JPC1984 ; Kataoka-JPSJ1987 . In a micromagnetic theory based on
chiral symmetry breaking, anisotropy in the exchange coupling is assumed in
addition to a magnetostriction term to implement non-trivial effects on helical
states and stabilize skyrmions Bogdanov-PRL2001 ; Butenko-etal-PRB2010 . Using
such a model implementing MEC in a Ginzburg-Landau free energy, Wang et al.
reported simulation data for the response of spins under uniaxial stresses
JWang-etal-PRB2018 , and their results accurately explain both the skyrmion
deformation and the alignment of helical stripes.
Among these studies, Shibata et al. reported an experimental result of large
deformation of skyrmions by uniaxial mechanical stress, and they concluded
that the origin of this shape deformation is an anisotropy in the coefficient
$D$ of Dzyaloshinskii-Moriya interaction (DMI), such that
$D_{x}\\!\not=\\!D_{y}$ Shibata-etal-Natnanotech2015 . Such an anisotropic DMI
can be caused by uniaxial mechanical stresses, because the origin of the DMI
is spin-orbit coupling Fert-etal-NatReview2017 . It was reported
in Ref. Koretsune-etal-SCRep2015 that this anisotropy in $D$ comes from a
quantum mechanical effect of interactions between electrons and atoms
resulting from small strains. Moreover, skyrmion deformation can also be
explained by a DMI anisotropy in combination with antiferromagnetic exchange
coupling Osorio-etal-PRB2017 ; Gao-etal-Nature2019 .
However, there is another possible scenario for skyrmion deformation: an
anisotropy in the FMI coefficient $\lambda$ such that
$\lambda_{x}\\!\not=\\!\lambda_{y}$. This direction-dependent $\lambda$ causes
an anisotropy in $\lambda/D$ even for isotropic $D$, as discussed in Ref.
Shibata-etal-Natnanotech2015 , although the authors concluded that the
anisotropy of $\lambda/D$ comes from anisotropy in $D$. Such an anisotropy in
$\lambda$, the direction-dependent coupling of FMI, also plays an important
role in the domain wall orientation Vedmedenko-PRL2004 .
Therefore, it is interesting to study which of the FMI and DMI coefficients
should be anisotropic for the skyrmion deformation and stripe alignment, using
a new geometric modeling technique. Concerning the stripe alignment, Dho et al.
experimentally studied the magnetic microstructure of an ${\rm
La_{0.7}Sr_{0.3}MnO_{3}}$ (LSMO) thin film and reported magnetic-force
microscope images under tensile/compressive external forces JDho-etal-APL2003 .
In this paper, using Finsler geometry (FG) modeling, which is a mathematical
framework for describing anisotropic phenomena Takano-PRE2017 ; Proutorov-
etal-JPC2018 ; Egorov-etal-PLA2021 , we study two possible models for the
deformation of skyrmions and the alignment of magnetic stripes Koibuchi-etal-
JPCS2019 . In one of the models, the FMI coefficient is deformed to be
$\lambda_{x}\\!\not=\\!\lambda_{y}$ while DMI is isotropic, and in the other
model, the DMI coefficient is deformed to be $D_{x}\\!\not=\\!D_{y}$ while FMI
is isotropic. Both model 1 and model 2 effectively render the ratio
$\lambda/D$ anisotropic for modulated states, implying that a characteristic
length scale also becomes anisotropic Butenko-etal-PRB2010 . Note also that the
present FG prescription cannot directly describe an anisotropic magnetization
expected from MEC. In this sense, the FG models in this paper differ from both
the standard Landau-type model of MEC and the micromagnetic theory for thin
films studied in Refs. Plumer-Walker-JPC1982 ; Plumer-etal-JPC1984 ;
Kataoka-JPSJ1987 ; Butenko-etal-PRB2010 , although these standard models
implement MEC by an extended anisotropy of FMI, in the sense that a
magnetization anisotropy or a higher-order term of magnetization is included in
addition to the exchange anisotropy.
## II Models
### II.1 Triangular lattices
Figure 1: A regular triangular lattice of size $N\\!=\\!L^{2}\\!=\\!100$,
where the number of vertices along each edge is $L\\!=\\!10$. This number,
$L\\!=\\!10$, is chosen very small only to visualize the lattice structure.
Simulations are performed on a lattice of size $N\\!=\\!10^{4}$.
Periodic boundary condition (PBC) is assumed in both directions. The lattice
spacing $a$ is fixed to $a\\!=\\!1$ in the simulations.
We use a triangular lattice composed of regular triangles of side length $a$,
called the lattice spacing Creutz-txt (Fig. 1). Triangular lattices are used
for simulating skyrmions on thin films Okubo-etal-PRL2012 ;
Rosales-etal-PRB2015 , where a frustrated system or antiferromagnetic
interaction is assumed for studying possible mechanisms of skyrmion formation
in chiral magnetic materials. However, the purpose of this paper is not the
same as in Okubo-etal-PRL2012 ; Rosales-etal-PRB2015 . On the other hand,
skyrmions are known to be stabilized on thin films Yu-etal-PRB2015 . On thin
films of MnSi, a hexagonal skyrmion crystal is observed, which can be realized
on the triangular lattice. This is one of the reasons why we use a triangular
lattice, though the results in this paper are expected to remain unchanged on a
regular square lattice, because ferromagnetic interaction is assumed, or in
other words, the system is not frustrated.
The lattice size $N$, which is the total number of vertices, is given by
$N\\!=\\!L^{2}$, where $L\\!-\\!1$ is the number of lattice spacings in each of
the horizontal and vertical directions. The side length of the lattice is
$(L-1)a$ along the vertical direction and $(\sqrt{3}/2)(L-1)a$ along the
horizontal direction. Boundary conditions for the dynamical variables are
assumed to be periodic in both directions, as in the 3D simulations of Ref.
Buhrandt-PRB2013 . Skyrmions are topological solitons and depend on the
boundary condition. The boundary condition also strongly influences skyrmions
in motion, such as those in transport. However, in our simulations, every
skyrmion is only allowed to thermally fluctuate around a fixed position. For
this reason, to avoid unexpected boundary effects, we assume periodic boundary
conditions.
The lattice spacing is fixed to $a\\!=\\!1$ for simplicity, and the lattice
size is fixed to $N\\!=\\!10^{4}$ for all simulations. As described below, the
numerical results at the boundary region between the skyrmion and
ferromagnetic phases are independent of the lattice size up to
$400\\!\times\\!400$, and therefore all simulations are performed on the
lattice of size $100\\!\times\\!100$.
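To make the lattice construction concrete, the following Python sketch (our own code and naming, not taken from the original work) builds the vertex positions and the bond list of a periodic triangular lattice in a sheared-coordinate representation; it assumes that only the connectivity and the bond directions, rather than a particular embedding, enter the energies defined below.

```python
import numpy as np

def triangular_lattice(L, a=1.0):
    """Vertices and bonds of an L x L triangular lattice with periodic
    boundary conditions (PBC); a sketch, not the authors' code."""
    idx = lambda x, y: (x % L) + L * (y % L)   # PBC in index space
    pos = np.zeros((L * L, 2))
    for y in range(L):
        for x in range(L):
            # Sheared coordinates: each row is shifted by a/2 so that all
            # bonds have length a and all triangles are regular; positions
            # are used only for plotting and for bond directions (via the
            # nearest image), the energy uses only the connectivity.
            pos[idx(x, y)] = ((x + 0.5 * y) * a, (np.sqrt(3.0) / 2.0) * y * a)
    # Three bonds per vertex (right, upper-right, upper-left): N_B = 3N.
    bonds = []
    for y in range(L):
        for x in range(L):
            i = idx(x, y)
            bonds += [(i, idx(x + 1, y)),
                      (i, idx(x, y + 1)),
                      (i, idx(x - 1, y + 1))]
    return pos, bonds

pos, bonds = triangular_lattice(10)
print(len(pos), len(bonds))   # 100 vertices, 300 bonds
```

The unit tangential vectors ${\vec{e}}_{ij}$ appearing below can then be obtained from the nearest-image differences of these positions.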
### II.2 The Hamiltonian and a new variable for mechanical strains
The discrete Hamiltonian is given by the linear combination of five terms such
that
$\displaystyle S=\lambda S_{{\rm FM}}-S_{B}+DS_{{\rm DM}}+\gamma
S_{\tau}-\alpha S_{f},\quad(\alpha=1),$ (1)
where FMI and DMI energies $S_{{\rm FM}}$ and $S_{{\rm DM}}$ are given in two
different combinations denoted by model 1 and model 2 Koibuchi-etal-JPCS2019
(see Appendix A)
$\displaystyle\begin{split}&S_{{\rm
FM}}=\sum_{\Delta}\left[\lambda_{ij}\left(1-\sigma_{i}\cdot\sigma_{j}\right)+\lambda_{jk}\left(1-\sigma_{j}\cdot\sigma_{k}\right)+\lambda_{ki}\left(1-\sigma_{k}\cdot\sigma_{i}\right)\right],\\\
&\lambda_{ij}=\frac{1}{3}\left(\frac{v_{ij}}{v_{ik}}+\frac{v_{ji}}{v_{jk}}\right),\quad
v_{ij}=|\tau_{i}\cdot{\vec{e}}_{ij}|+v_{0},\quad({\rm model\;1}),\\\ &S_{{\rm
DM}}=\sum_{ij}{\vec{e}}_{ij}\cdot\sigma_{i}\times\sigma_{j},\end{split}$ (2)
and
$\displaystyle\begin{split}&S_{{\rm
FM}}=\sum_{ij}\left(1-\sigma_{i}\cdot\sigma_{j}\right),\\\ &S_{{\rm
DM}}=\sum_{\Delta}\left[\lambda_{ij}\left({\vec{e}}_{ij}\cdot\sigma_{i}\times\sigma_{j}\right)+\lambda_{jk}\left({\vec{e}}_{jk}\cdot\sigma_{j}\times\sigma_{k}\right)+\lambda_{ki}\left({\vec{e}}_{ki}\cdot\sigma_{k}\times\sigma_{i}\right)\right],\\\
&\lambda_{ij}=\frac{1}{3}\left(\frac{v_{ij}}{v_{ik}}+\frac{v_{ji}}{v_{jk}}\right),\quad
v_{ij}=\sqrt{1-\left(\tau_{i}\cdot{\vec{e}}_{ij}\right)^{2}}+v_{0},\quad({\rm
model\;2}),\end{split}$ (3)
where FG modeling prescription is only applied to $S_{{\rm FM}}$ ($S_{{\rm
DM}}$) in model 1 (model 2). Note that $S_{{\rm FM}}$ in model 1 and $S_{{\rm
DM}}$ in model 2 are defined by the sum over triangles $\sum_{\Delta}$. The
coefficients $\lambda$ and $D$ of $S_{{\rm FM}}$ and $S_{{\rm DM}}$ represent
the strength of FMI and DMI. The coefficients $\lambda_{ij}$ inside the sum
$\sum_{\Delta}$ of $S_{{\rm FM}}$ and $S_{{\rm DM}}$ are obtained by
discretization of the corresponding continuous Hamiltonians with Finsler
metric (see Appendix A). $i,j,k$ of $v_{ij}$ in $\lambda_{ij}$ denote the
three vertices of triangle the $\Delta$ (Fig. 2).
The symbol $\sigma_{i}(\in S^{2}:{\rm unit\;sphere})$ denotes the spin
variable at lattice site $i$, which is a vertex of the triangle. The symbol
$\tau_{i}(\in S^{1}:{\rm unit\;circle})$ in $v_{ij}$ denotes a direction of
strain. Microscopically, strains are understood to be connected to a
displacement of atoms, which also thermally fluctuate or vibrate without
external forces. Thus, an internal variable can be introduced to represent the
direction of movement or position deformation of atom $i$. For this reason,
$\tau_{i}$ is introduced in model 1 and model 2. A random or isotropic state
of $\tau_{i}$ effectively corresponds to a zero-stress or zero-strain
configuration, while an aligned state corresponds to a uniaxially stressed or
strained configuration. The zero-strain configuration includes a random and
inhomogeneous strain configuration caused by a random stress, because the mean
value of a random stress is effectively identical to zero stress from the
microscopic perspective. We should note that the variable $\tau_{i}$ is
expected to be effective only in a small-stress region, representing strain
configurations ranging from the random state to the aligned state. In fact,
once the variables align along the direction of a sufficiently large external
force, no further change of the configuration is expected. Therefore, the
strain representation by $\tau_{i}$ is effective only in a small stress or
strain region.
One more point to note is that the variable $\tau$ is assumed to be non-polar
in the sense that only its direction matters, independent of the
positive/negative orientation. Indeed, the direction of $\tau$ is intuitively
considered to be related to whether the external mechanical force is tension
or compression. However, to express an external tensile force, we generally
need two opposite directions. This assumption ($\Leftrightarrow$ $\tau$ is
non-polar) is considered sufficient because the interaction, implemented via
$v_{ij}$ in Eqs. (2) and (3) for $\lambda_{ij}$, depends only on
$|\tau_{i}\cdot{\vec{e}}_{ij}|$ and
$\left(\tau_{i}\cdot{\vec{e}}_{ij}\right)^{2}$, respectively, where
${\vec{e}}_{ij}$ is the unit tangential vector from vertex $i$ to vertex $j$;
hence, the interaction depends only on strain directions and is independent of
whether $\tau$ is polar or non-polar.
We should note that $\lambda\lambda_{ij}$ and $D\lambda_{ij}$ in the FMI and
DMI are considered to be microscopic interaction coefficients, which are both
position ($\Leftrightarrow i$) and direction ($\Leftrightarrow ij$) dependent.
The expression of $\lambda_{ij}$ of model 1 is the same as that of model 2,
and the relation $\lambda_{ij}\\!=\\!\lambda_{ji}$ is automatically satisfied.
However, the definitions of $v_{ij}$ are different from each other. Hence, the
value of $\lambda_{ij}$ of model 1 is not always identical to that of model 2.
Indeed, if $\tau_{i}$ is almost parallel to the $x$ axis (see Fig. 2),
$v_{ij}$ is relatively larger (smaller) than $v_{ik}$ and $v_{jk}$ in model 1
(model 2), and as a consequence, $\lambda_{ij}$ also becomes relatively large
(small) compared with the case where $\tau_{i}$ is perpendicular to the $x$
axis.
To discuss this point further, we introduce effective coupling constants of
DMI such that
$\displaystyle\begin{split}&\langle
D_{x}\rangle=(1/N_{B})\sum_{ij}\lambda_{ij}|\vec{e}_{ij}^{\;x}|,\\\ &\langle
D_{y}\rangle=(1/N_{B})\sum_{ij}\lambda_{ij}|\vec{e}_{ij}^{\;y}|,\end{split}$
(4)
where $\vec{e}_{ij}^{\;x}$ and $\vec{e}_{ij}^{\;y}$ are components of
$\vec{e}_{ij}=(\vec{e}_{ij}^{\;x},\vec{e}_{ij}^{\;y}){\in{\bf R}^{2}}$, which
is the unit tangential vector from vertex $i$ to vertex $j$ as mentioned
above, and $N_{B}\\!=\\!\sum_{ij}1(=\\!3N)$ is the total number of links or
bonds. Expressions of $\langle\lambda_{x}\rangle$ and
$\langle\lambda_{y}\rangle$ for FMI are exactly the same as those of $\langle
D_{x}\rangle$ and $\langle D_{y}\rangle$ in Eq. (4). The symbol
$\langle\cdot\rangle$ for the mean value is removed henceforth for simplicity.
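As a concrete illustration of Eq. (4), the following Python sketch (the array layout and names are our assumptions) evaluates the bond averages together with the anisotropy measure $\eta_{D}\\!=\\!1-D_{x}/D_{y}$ that is introduced in Eq. (12) below.

```python
import numpy as np

def effective_couplings(lam, e):
    """Effective couplings of Eq. (4): lam holds lambda_ij for each of the
    N_B bonds, and e holds the unit tangential vectors e_ij, shape (N_B, 2).
    A sketch; the same formula gives <lambda_x>, <lambda_y> for the FMI."""
    n_b = len(lam)
    d_x = np.sum(lam * np.abs(e[:, 0])) / n_b
    d_y = np.sum(lam * np.abs(e[:, 1])) / n_b
    return d_x, d_y

# Toy example: the three bond directions of a triangular lattice, with
# lambda_ij enhanced on the horizontal bonds.
angles = np.radians([0.0, 60.0, 120.0])
e = np.stack([np.cos(angles), np.sin(angles)], axis=1)
d_x, d_y = effective_couplings(np.array([1.3, 1.0, 1.0]), e)
print(d_x, d_y, 1.0 - d_x / d_y)   # eta_D = 1 - D_x/D_y, cf. Eq. (12)
```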
Suppose the effective coupling constants $\lambda_{x}$ and $\lambda_{y}$ for
$S_{{\rm FM}}$ in model 1 satisfy $\lambda_{x}>\lambda_{y}$. In this case, the
resulting spin configurations are expected to be the same as those in model 2
under $D_{x}<D_{y}$ for $S_{{\rm DM}}$, because a skyrmion crystal (skx)
configuration emerges as a result of competition between $S_{{\rm FM}}$ and
$S_{{\rm DM}}$. This is an intuitive argument for why both models are expected
to have the same configuration in the skx phase. If $\lambda_{ij}$ is isotropic
or locally distributed at random, almost independent of the direction $ij$,
then the corresponding microscopic coupling constants
$\lambda_{ij}|\vec{e}_{ij}^{\;x}|$ and $\lambda_{ij}|\vec{e}_{ij}^{\;y}|$ in
$D_{x}$ and $D_{y}$ of Eq. (4) also become isotropic, and consequently,
$D_{x}\\!=\\!D_{y}$ is expected. In contrast, if the variable $\tau$ is aligned
by the external force $\vec{f}$, then $\lambda_{ij}$ becomes anisotropic or
globally direction-dependent, and as a consequence, $D_{x}$ and $D_{y}$ become
anisotropic such that $D_{x}\\!\not=\\!D_{y}$.
Figure 2: A regular triangle of vertices $i,j,k$, and a strain direction
$\tau_{i}$ at vertex $i$. The unit Finsler length $v_{ij}$ from vertices $i$
to $j$ is defined by using the tangential component
$\tau_{i}\cdot{\vec{e}}_{ij}$ of $\tau_{i}$ along the direction
${\vec{e}}_{ij}$, which is the unit tangential vector from $i$ to $j$.
We should comment that our models include a shear component of stress-effect
on the coefficient $\lambda_{ij}$ in Eqs. (2), (3). To simplify arguments, we
tentatively assume $v_{0}\\!=\\!0$ in model 1. Let $\vec{f}$ be
$\vec{f}\\!=\\!(f,0)$ or parallel to ${\vec{e}}_{ij}$, which represents the
first local coordinates axis (Fig. 2), implying that $\tau_{i}$ is almost
parallel to ${\vec{e}}_{ij}$ for sufficiently large $f$. Then, we have
$v_{ij}\\!\simeq\\!|\tau_{i}|\\!=\\!1$, which represents an effect of the
tensile stress $\vec{f}$ along ${\vec{e}}_{ij}$. For the same $\tau_{i}$, we
have $v_{ik}\\!\simeq\\!0.5|\tau_{i}|\\!=\\!0.5$ along ${\vec{e}}_{ik}$, which
represents the second local coordinates axis. Thus, we obtain the ratio
$v_{ij}/v_{ik}\\!\simeq\\!2$, and by moving the local coordinate origin to
vertex $j$, the same calculation gives $v_{ji}/v_{jk}\\!\simeq\\!2$; therefore,
$\lambda_{ij}\\!=\\!4/3$. Since
the variables $\tau$ at all other vertices are naturally considered to be
parallel to ${\vec{e}}_{ij}$, we have $\lambda_{ik}\\!=\\!1/3$ from the same
argument. The fact that $\lambda_{ik}$ is non-zero under $\vec{f}\\!=\\!(f,0)$
is considered to be an effect of shear stress.
The other terms $S_{B}$, $S_{\tau}$ and $S_{f}$ in $S$ of Eq. (1) are common
to both models and are given by
$\displaystyle\begin{split}&S_{B}=\sum_{i}\sigma_{i}\cdot\vec{B},\quad\vec{B}=(0,0,B),\\\
&S_{\tau}=\frac{1}{2}\sum_{ij}\left(1-3(\tau_{i}\cdot\tau_{j})^{2}\right),\quad
S_{f}=\sum_{i}\left(\tau_{i}\cdot\vec{f}\right)^{2},\quad{\vec{f}}=(f_{x},f_{y}),\end{split}$
(5)
where $S_{B}$ is the Zeeman energy with magnetic field $\vec{B}$, and
$S_{\tau}$ is a Lebwohl-Lasher type potential Leb-Lash-PRA1972 , which is
commonly assumed in models of liquid crystals Proutorov-etal-JPC2018 .
In $S_{f}$, $\vec{f}\\!=\\!(f_{x},f_{y})$ represents an external mechanical
force, which aligns the strain direction $\tau$ along the direction of
$\vec{f}$. The reason why $S_{f}$ is not linear in $\vec{f}$ (or $\tau$) is
that the force $\vec{f}$ couples to the non-polar variable governed by
$S_{\tau}$; it is therefore natural to assume a square-type potential. In
liquid crystals, such a square-type potential is also assumed for external
electric fields Proutorov-etal-JPC2018 . The coefficient $\alpha$ of $S_{f}$
in Eq. (1) is fixed to $\alpha=1$ for simplicity; this is always possible by
re-scaling $f$ to $\sqrt{\alpha}f$.
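A minimal Python sketch of the three common terms in Eq. (5) follows (our own function and array names; `sigma` is an $(N,3)$ array of unit spins, `tau` an $(N,2)$ array of unit directors, and `bonds` a list of vertex pairs $ij$):

```python
import numpy as np

def common_energies(sigma, tau, bonds, B, f):
    """Zeeman, Lebwohl-Lasher, and external-force terms of Eq. (5);
    a sketch under the stated array conventions, not the authors' code."""
    i, j = np.array(bonds).T
    s_b = B * np.sum(sigma[:, 2])                 # sum_i sigma_i . (0,0,B)
    dots = np.einsum('bk,bk->b', tau[i], tau[j])  # tau_i . tau_j per bond
    s_tau = 0.5 * np.sum(1.0 - 3.0 * dots ** 2)   # Lebwohl-Lasher type
    s_f = np.sum((tau @ np.asarray(f)) ** 2)      # non-polar coupling to f
    return s_b, s_tau, s_f

# Toy check: two vertices joined by one bond, tau aligned with f = (1, 0).
sigma = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
tau = np.array([[1.0, 0.0], [1.0, 0.0]])
print(common_energies(sigma, tau, [(0, 1)], B=-0.5, f=(1.0, 0.0)))
```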
Alignment of the direction of $\tau$ is essential for modeling stress-effect
in model 1 and model 2. In this paper, we assume the following two different
sources for this alignment:
1. (i)
Uniaxial stresses by ${\vec{f}}=(f,0)$ and ${\vec{f}}=(0,f)$ with
$\gamma\\!=\\!0\quad$ (for skyrmion deformation),
2. (ii)
Uniaxial strains by lattice deformation by $\xi$ with $\gamma\\!>\\!0\quad$
(for stripe deformation),
where $\xi$ in (ii) is defined by the deformations of side lengths such that
(Fig. 3)
$\displaystyle L_{x}\to\xi^{-1}L_{x},\quad L_{y}\to\xi L_{y},$ (6)
where $f\\!>\\!0$ is assumed, implying that $\vec{f}$ is tensile, and $L_{x}$
and $L_{y}$ are actually given by $L_{x}\\!=\\!(L\\!-\\!1)a$ and
$L_{y}\\!=\\!(\sqrt{3}/2)(L\\!-\\!1)a$ as shown in Fig. 1. In both cases (i)
and (ii), the variable $\tau$ is expected to be aligned, and this alignment
causes deformations in the interactions of $S_{{\rm FM}}$ and $S_{{\rm DM}}$
to be direction dependent like in the forms $\lambda\lambda_{ij}$ and
$D\lambda_{ij}$ as mentioned above. In the case of (i), the lattice is
undeformed, implying that $\xi$ is fixed to $\xi\\!=\\!1$. In this case (i),
uniaxial stresses by the external force are only applied to check the skyrmion
shape deformation, and the coupling constant $\gamma$ of $S_{\tau}$ is assumed
to be $\gamma\\!=\\!0$. On the contrary, in the case of (ii), the external
force $\vec{f}$ is assumed to be ineffective and fixed to
${\vec{f}}\\!=\\!(0,0)$, while the parameter $\gamma$ for $S_{\tau}$ is fixed
to a positive constant $\gamma\\!>\\!0$ so that $\tau$ can spontaneously
align to a certain direction associated with the lattice deformation by $\xi$.
In this case (ii), $S_{{\rm DM}}$ is expected to play a non-trivial role in
both model 1 and model 2, because lattice deformations originally influence
DMI. This will be a check on whether or not a coupling of strain and spins (or
magnetization) is effectively implemented in DMI. It is clear that $S_{{\rm
FM}}$ of model 2 in Eq. (3) is completely independent of the lattice
deformation by $\xi$.
Figure 3: Lattice deformations represented by (a) $\xi\\!<\\!1$ and (b)
$\xi\\!>\\!1$ in Eq. (6). In (a) for $\xi\\!<\\!1$ and (b) for $\xi\\!>\\!1$,
the corresponding external tensile forces’ direction is horizontal and
vertical, respectively. The dashed arrows represent the direction of forces,
implying that the force is assumed compressive, and the shaded thick lines
denote the stripe directions experimentally observed and reported in Ref.
JDho-etal-APL2003 .
The partition function is defined by
$\displaystyle Z=\sum_{\sigma}\sum_{\tau}\exp\left[-S(\sigma,\tau)/T\right],$
(7)
where $\sum_{\sigma}$ and $\sum_{\tau}$ denote the sum over all possible
configurations of $\sigma$ and $\tau$, and $T$ is the temperature. Note that
the Boltzmann constant $k_{B}$ is assumed to be $k_{B}\\!=\\!1$.
Here, we show the input parameters for the simulations in Table 1.

Table 1: List of symbols and descriptions of the input parameters.

Symbol | Description
---|---
$T$ | Temperature
$\lambda$ | Ferromagnetic interaction coefficient
$D$ | Dzyaloshinskii-Moriya interaction coefficient
$B$ | Magnetic field
$\gamma$ | Interaction coefficient of $S_{\tau}$
$f$ | Strength of the mechanical force ${\vec{f}}=(f,0)$ or ${\vec{f}}=(0,f)$ with $f\\!>\\!0$
$v_{0}$ | Strength of anisotropy
$\xi$ | Deformation parameter for the side lengths of the lattice: $\xi\\!=\\!1\Leftrightarrow$ non-deformed
### II.3 Monte Carlo technique and snapshots
The standard Metropolis Monte Carlo (MC) technique is used to update the
variables $\sigma$ and $\tau$ Metropolis-JCP-1953 ; Landau-PRB1976 . For the
update of $\sigma$, a new variable $\sigma_{i}^{\prime}$ at vertex $i$ is
randomly generated on the unit sphere $S^{2}$, independently of the old
$\sigma_{i}$, and therefore the acceptance rate is not controllable. The
variable $\tau$ is updated on the unit circle $S^{1}$ by almost the same
procedure as that for $\sigma$.
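The update procedure can be sketched in Python as follows (a simplified illustration under our own naming; `energy` stands for the total Hamiltonian $S$ of Eq. (1), and an efficient implementation would evaluate only the local energy difference instead of the full sum):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unit(dim):
    # Uniform random point on S^2 (dim=3) or S^1 (dim=2).
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def metropolis_sweep(sigma, energy, T):
    """One MC sweep of the spins: N trial moves, each proposing a new spin
    on S^2 independent of the old one, accepted with the Metropolis rate."""
    e_old = energy(sigma)
    for _ in range(len(sigma)):
        i = rng.integers(len(sigma))
        backup = sigma[i].copy()
        sigma[i] = random_unit(3)
        e_new = energy(sigma)
        d_e = e_new - e_old
        if d_e <= 0.0 or rng.random() < np.exp(-d_e / T):
            e_old = e_new          # accept the trial spin
        else:
            sigma[i] = backup      # reject and restore the old spin
    return sigma
```

The director $\tau$ is updated analogously with `random_unit(2)`.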
The initial configuration of spins is generated by searching the ground state
(see Ref. Hog-etal-JMagMat2018 ). One MC sweep (MCS) consists of $N$
consecutive updates of $\sigma$ and that of $\tau$. In almost all simulations,
$2\times 10^{8}$ MCSs are performed. At the phase boundary between the
skyrmion and ferromagnetic phases, the convergence is relatively slow, and
therefore $5\times 10^{8}$ MCSs or more, up to $1.6\times 10^{9}$ MCSs, are
performed. In contrast, a relatively small number of MCSs are performed in the
ferromagnetic phase at large $|B|$ or high $T$ region.
Figure 4: Snapshots of skyrmions for (a) $f\\!=\\!0$ and (b)
$f\\!\not=\\!0(=\\!1.7)$ with $T\\!=\\!0.2$, $D\\!=\\!0.45$, $\gamma\\!=\\!0$,
$v_{0}\\!=\\!0.7$, and $\xi\\!=\\!1$. These snapshots are obtained by model 2
and are the same as those obtained by model 1. This vortex-like skyrmion is
called Bloch type and is the type studied in this paper.
Here, we show snapshots of skyrmion configurations obtained by model 2 for
$f\\!=\\!0$ and $f\\!\not=\\!0(=\\!1.7)$ in Figs. 4(a),(b). The assumed
parameters other than $f$ are $T\\!=\\!0.2$, $D\\!=\\!0.45$, $\gamma\\!=\\!0$,
$v_{0}\\!=\\!0.7$, and $\xi\\!=\\!1$ for both (a) and (b). The cones represent
the spins $\sigma_{i}$, and the colors of the cones correspond to the
$z$-component $\sigma_{i}^{z}$. We find from both snapshots that the direction
of the cones in the central region of the skyrmions is $-z$, while it is $+z$
outside. The skyrmion configurations of model 1 are the same as these
snapshots. This vortex-like configuration (Fig. 4(a)) is called Bloch type and
is symmetric under rotation about the $z$ axis Leonov-etal-NJO2016 . In this
paper, we study skyrmions of Bloch type.
## III Simulation results
### III.1 Responses to uniaxial stress
#### III.1.1 Magnetic field vs. temperature diagram
Figure 5: (a) Phase diagram of magnetic field $B$ and temperature $T$ of model
1, (b) snapshot obtained at $(B,T)\\!=\\!(-0.6,0.1)$, (c) corresponding
snapshot to measure the shape anisotropy $\delta$ of the skyrmions, (d)
histograms of $\delta$, where a reported histogram (Exp) for the experimental
result in Ref.
Shibata-etal-Natnanotech2015 is also plotted, (e) snapshot at
$(B,T)\\!=\\!(-0.6,0.85)$, and (f) snapshot in the stripe phase at
$(B,T)\\!=\\!(0,0.85)$. The symbols (skx), (str), and (ferro) denote skyrmion,
stripe, and ferromagnetic phases, respectively. The symbols (sk-fe) and (sk-
st) denote intermediate phases of skyrmion ferromagnetic and skyrmion stripe,
respectively. On the dashed horizontal line, physical quantities are
calculated in the following subsection.
A phase diagram of model 1 is shown in Fig. 5(a), where the temperature $T$
and magnetic field $B$ are varied. The symbols (skx), (str), and (ferro)
denote the skyrmion, stripe, and ferromagnetic phases, respectively. The
stripe phase is the same as the so-called helical phase, in which the spins
rotate about the axis perpendicular to the stripe direction. Between these
two different phases, intermediate phases appear, denoted by the skyrmion
ferromagnetic (sk-fe) and skyrmion stripe (sk-st) phases. The parameters
$\lambda,D,\gamma,f,v_{0}$ are fixed to
$(\lambda,D,\gamma,f,v_{0})\\!=\\!(0.8,0.9,0,0.5,0.15)$ in Fig. 5(a). The
applied mechanical stress is given by $\vec{f}\\!=\\!(0.5,0)$, which implies
that a thin film is expanded in $x$ direction by a tensile force
$f\\!=\\!0.5$.
The phase diagram in Fig. 5(a) provides only rough estimates for identifying
the regions of the different states. The boundaries are determined by viewing
snapshots. For example, if a skyrmion is observed in the final ferromagnetic
configuration of a simulation at the boundary region between the skx and ferro
phases, this state is labeled sk-fe. If two skyrmion states are connected into
an oblong shape while all others are isolated in a snapshot, then this state is
labeled sk-st. Thus, the phase boundaries in these diagrams are not determined
by standard techniques such as finite-size scaling analyses
Janoschek-etal-PRB2013 ; Hog-etal-JMagMat2018 , and therefore the order of the
transition between two different states is not specified.
Figure 5(b) shows a snapshot of deformed skyrmions of model 1 at a relatively
low temperature $T\\!=\\!0.1$. To measure the shape anisotropy, we draw
rectangles enclosing skyrmions, as shown in Fig. 5(c), where the edge lines
are drawn parallel to the $x$ and $y$ directions. The details of how the edge
lines are drawn can be found in Appendix B. This technique can also be used to
count the total number of skyrmions, at least in the skx phase, which will be
presented below. Figure 5(d) shows the distribution of shape anisotropy
$\delta$ defined by
$\displaystyle\delta=\left(1-w_{y}/w_{x}\right)/\left(1+w_{y}/w_{x}\right),$
(8)
where $w_{x}$ and $w_{y}$ are the edge lengths of the rectangle Shibata-etal-
Natnanotech2015 . The solid histogram is the experimental data (Exp) reported
in Ref. Shibata-etal-Natnanotech2015 . In that reference, simulations were
also performed by assuming that the DMI coefficients are direction-dependent
such that $D_{x}/D_{y}\\!=\\!0.8$, and almost the same result as Exp was
obtained. The result of model 1 in this paper, shown in the shaded histogram,
is almost identical to that of Exp. In these histograms, the height is
normalized such that the total height remains the same. Another snapshot
obtained at the higher temperature
$T\\!=\\!0.8$ is shown in Fig. 5(e), where the shape of the skyrmion is not
smooth and almost randomly fluctuating around the circular shape. Therefore,
this configuration is grouped into the sk-fe phase, even though such
fluctuating skyrmions are numerically stable, implying that the total number
of skyrmions remains constant for long-term simulations. Figure 5(f) shows a
snapshot obtained in the stripe phase. The direction of the stripes is
parallel to the direction of the tensile force $\vec{f}\\!=\\!(f,0)$.
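The anisotropy of Eq. (8) is straightforward to evaluate; a short Python sketch (ours) is:

```python
def shape_anisotropy(w_x, w_y):
    # delta of Eq. (8): 0 for an isotropic skyrmion, positive when the
    # enclosing rectangle is elongated along x, negative when along y.
    ratio = w_y / w_x
    return (1.0 - ratio) / (1.0 + ratio)

print(shape_anisotropy(1.2, 1.0))   # ~0.091 for a 20% elongation along x
```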
Figure 6: (a) Phase diagram of magnetic field $B$ and temperature $T$ of model
2, (b) snapshot obtained at $(B,T)\\!=\\!(-0.5,0.1)$, (c) corresponding
snapshot to measure the shape anisotropy of the skyrmion, (d) the
corresponding histogram of $\delta$, where a reported histogram (Exp) for the
experimental result in Ref. Shibata-etal-Natnanotech2015 is also plotted, (e)
snapshot at $(B,T)\\!=\\!(-0.5,0.35)$, and (f) snapshot in the stripe phase at
$(B,T)\\!=\\!(-0.3,0.6)$. On the dashed horizontal line, physical quantities
are calculated in the following subsection.
The results of model 2 in Figs. 6(a)–6(f) are almost identical to those in
Fig. 5. The parameters $\lambda,D,\gamma,f,v_{0}$ are fixed to
$(\lambda,D,\gamma,f,v_{0})\\!=\\!(1.2,0.9,0,1.7,0.7)$ in Fig. 6(a) for model
2. The unit of $T$ depends on the ratio of $T$ to the coefficients of the
Hamiltonian terms $S_{{\rm FM}}$, $S_{B}$, $S_{{\rm DM}}$, $S_{\tau}$ and
$S_{f}$. However, these ratios cannot be directly compared between the models,
because the parameters $(\lambda,D)$ of model 1 are not proportional to those
of model 2. In fact, $D$ in model 2 is effectively deformed to be
direction-dependent, such that $DD_{x}$ and $DD_{y}$ with $D_{x}$ and $D_{y}$
in Eq. (4), while $D$ in model 1 remains unchanged. Therefore, the unit of the
horizontal $T$ axis in Fig. 5(a) for model 1 is not exactly identical, but
almost comparable, to that of model 2 in Fig. 6(a).
The parameter $v_{0}\\!=\\!0.7$ assumed in $v_{ij}$ of Eq. (3) for model 2 is
relatively larger than $v_{0}\\!=\\!0.15$ in $v_{ij}$ of Eq. (2) for model 1.
If $v_{0}$ in model 2 is fixed to be much smaller such as $v_{0}\\!=\\!0.15$
just like in model 1, then the shape of the skyrmions becomes unstable. This
fact implies that the anisotropy of DMI caused by the FG model prescription is
too strong for such a small $v_{0}$ in model 2. Conversely, if $v_{0}$ in
model 1 is fixed to be much larger, such as $v_{0}\\!=\\!0.7$, then the
skyrmion shape deformation is too small, implying that anisotropy of FMI
caused by the FG model prescription is too weak for $v_{0}\\!=\\!0.7$.
Here, we should note that the skx region in the $BT$ diagrams of Figs. 5 and 6
changes with varying $B$ in the relatively low $T$ region. Indeed, if $|B|$ is
increased from $B\\!=\\!0$ at $T\\!=\\!0.1$ in Fig. 6, for example, the
connected stripes like those in Fig. 6(f) start to break, the stripe phase
changes to sk-st at $|B|\\!=\\!0.4$, and the skx emerges at $|B|\\!=\\!0.5$ as
shown in Fig. 6(b). The skyrmion shape in the skx phase is oblong in the
$(1,0)$ direction, which is the same as the stripe direction in the smaller $B$
region. This shape anisotropy of skyrmions, as well as the size itself, becomes
smaller and smaller with increasing $|B|$, and for sufficiently large $|B|$,
such as $|B|\\!=\\!0.8$, the skx phase turns ferromagnetic.
#### III.1.2 Temperature dependence of physical quantities
Figure 7: (a) Spin variables $\sigma_{i}(i\\!=\\!1,2,3)$ at the three vertices
of a triangle, and (b) a small triangle area defined by
$\sigma_{i}(i\\!=\\!1,2,3)$ on the unit sphere. This small area can be used to
calculate the total number of skyrmions.
The total number of skyrmions $N_{{\rm sk}}$ is defined by
$\displaystyle N_{{\rm sk}}=({1}/{4\pi})\int
d^{2}x\;\sigma\cdot\frac{\partial\sigma}{\partial
x_{1}}\times\frac{\partial\sigma}{\partial x_{2}},\quad({\rm top})$ (9)
which can be calculated by replacing differentials with differences Hog-etal-
JMMM2020 ; Diep-Koibuchi-Frustrated2020 . This $N_{{\rm sk}}$ is denoted by
“top” and plotted in the figures below. Another numerical technique for
calculating $N_{{\rm sk}}$ is to measure the solid angle of the triangle cone
formed by $\sigma_{1}$, $\sigma_{2}$ and $\sigma_{3}$ (Fig. 7(a)). Let
$a_{\Delta}$ be the area of the shaded region in Fig. 7(b), and $N_{{\rm sk}}$
can then be calculated by
$\displaystyle N_{{\rm sk}}=\frac{1}{4\pi}\sum_{\Delta}a_{\Delta},\quad({\rm
are})$ (10)
and this is denoted by “are” below. One more technique to count $N_{{\rm sk}}$
is denoted by “gra”, which is a graphical measurement technique (see Appendix
B).
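Both estimates can be sketched in Python as follows (our own naming; for the solid angle $a_{\Delta}$ we use the standard triangle solid-angle formula, which we assume matches the construction of Fig. 7(b)):

```python
import numpy as np

def nsk_top(s, sx, sy):
    """'top' of Eq. (9): differentials replaced by finite differences; s is
    an (M,3) array of spins, sx and sy the difference fields along x1, x2."""
    return np.einsum('nk,nk->n', s, np.cross(sx, sy)).sum() / (4.0 * np.pi)

def nsk_area(spin_triples):
    """'are' of Eq. (10): sum over triangles of the signed solid angle
    a_Delta subtended by (s1, s2, s3) on the unit sphere, over 4 pi."""
    total = 0.0
    for s1, s2, s3 in spin_triples:
        num = np.dot(s1, np.cross(s2, s3))
        den = 1.0 + np.dot(s1, s2) + np.dot(s2, s3) + np.dot(s3, s1)
        total += 2.0 * np.arctan2(num, den)   # signed solid angle
    return total / (4.0 * np.pi)

# Sanity check: eight octant triangles tile the unit sphere, so eight
# copies of one octant give the full solid angle 4 pi, i.e. the value 1.
ex, ey, ez = np.eye(3)
print(8 * nsk_area([(ex, ey, ez)]))   # -> 1.0
```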
Figure 8: Total number of skyrmions $|N_{{\rm sk}}|$ of (a) model 1 and (b)
model 2, where the texts “gra”, “are”, and “top” correspond to three different
calculation techniques for $|N_{{\rm sk}}|$; “gra” denotes the graphical
measurement technique presented in Appendix B, while “top” and “are” denote the
techniques using the formulas in Eqs. (9) and (10), respectively. The
corresponding order parameter $M_{\tau}$ of (c) model 1 and (d) model 2 is also
plotted.
Figure 8(a) shows the dependence of $|N_{{\rm sk}}|$ of model 1 on the
temperature variation at $B\\!=\\!-0.6$, where the absolute values of $N_{{\rm
sk}}$ are plotted. These curves in Fig. 8(a) are obtained along the horizontal
dashed line in Fig. 5(a). We find that $|N_{{\rm sk}}|$ discontinuously
reduces at $T\\!\simeq\\!0.45$, and that the reduced $|N_{{\rm sk}}|$ of “top”
and “are” in the region $T\\!>\\!0.45$ remains finite up to $T\\!\simeq\\!1$.
Because of this discontinuous change of $|N_{{\rm sk}}|$, the skx phase of
model 1 is divided into two regions at $T\\!\simeq\\!0.45$. This skx phase at
higher temperatures is numerically stable. However, $N_{{\rm sk}}$ evaluated
graphically, denoted by “gra”, increases at $T\\!\simeq\\!0.6$. This behavior
of $N_{{\rm sk}}$ implies that the skx configuration is collapsed or multiply
counted. Therefore, the skx configuration should be grouped into the sk-fe
phase in this region, and we plot a dashed line as the phase boundary between
the skx and sk-fe phases. The curves $|N_{{\rm sk}}|$ of model 2 in Fig. 8(b)
are obtained along the horizontal dashed line in Fig. 6(a) at $B\\!=\\!-0.5$,
and we find that $N_{{\rm sk}}$ discontinuously reduces to $N_{{\rm
sk}}\\!\simeq\\!0$. This reduction implies that the skx phase changes to sk-fe
or ferro phase at $T\\!\simeq\\!0.3$ in model 2.
To see the internal configuration of the 2D non-polar variable $\tau$, we
calculate the order parameter by
$\displaystyle M_{\tau}=2\left(\langle\tau_{x}^{2}\rangle-1/2\right).$ (11)
This $M_{\tau}$ continuously changes with respect to $T$ (Fig. 8(c) for model
1), and no discontinuous change is observed. However, it is clear that $\tau$
is anisotropic (isotropic) in the temperature region $T\\!<\\!0.2$
($0.5\\!<\\!T$). The $M_{\tau}$ plotted in Fig. 8(d) for model 2 is very large
compared with that in Fig. 8(c). This behavior of $M_{\tau}$ implies that
$\tau$ is parallel to the direction of $\vec{f}$ in the whole region of $T$
plotted, resulting from the considerably large value of $f(=\\!1.7)$ assumed
in model 2 for Fig. 6.
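In code, Eq. (11) reads as follows (a short sketch, ours, using the non-polar convention):

```python
import numpy as np

def m_tau(tau):
    # Eq. (11): 0 for an isotropic director field, 1 for full alignment
    # of the non-polar variable tau along the x axis.
    return 2.0 * (np.mean(tau[:, 0] ** 2) - 0.5)

print(m_tau(np.array([[1.0, 0.0]] * 4)))          # aligned: 1.0
print(m_tau(np.array([[1.0, 0.0], [0.0, 1.0]])))  # mixed: 0.0
```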
We should note that the variations of $|N_{{\rm sk}}|$ with respect to $T$ in
Figs. 8(a) and 8(b) are identical to those (not plotted) obtained with
$\vec{f}\\!=\\!(0,0)$ and the same other parameters. In this case, $\gamma$
for $S_{\tau}$ is fixed to $\gamma\\!=\\!0$, and therefore $\tau$ becomes
isotropic. This result, obtained under $\vec{f}\\!=\\!(0,0)$ and
$\gamma\\!=\\!0$, implies that the skyrmion deformation is caused by the
alignment of $\tau$, and that the only effect of $\vec{f}\\!\not=\\!(0,0)$ is
to deform the skyrmion shape anisotropically in the skx phase.
Figure 9: (a) $S_{{\rm DM}}/N$ vs. $T$ of model 1 and model 2, (b) $S_{{\rm
FM}}/N$ vs. $T$ of model 1 and model 2, and the anisotropies of the effective
interaction coefficients $\eta_{\lambda}$ and $\eta_{D}$ vs. $T$ of (c) model 1
and (d) model 2. The vertical dashed lines in (a) and (b) roughly indicate the
positions where $S_{{\rm DM}}/N$ and $S_{{\rm FM}}/N$ discontinuously change
in model 1 and model 2. The horizontal dashed line in (d) is drawn at
$\eta_{D}\\!=\\!0.2$, which is the value assumed in Ref. Shibata-etal-
Natnanotech2015 to simulate the skyrmion deformation.
The DMI and FMI energies $S_{{\rm DM}}/N$ and $S_{{\rm FM}}/N$ are shown to
have discontinuous changes at $T\\!\simeq\\!0.4$ in both models (Figs.
9(a),(b)), where $N$ is the total number of vertices. The gaps of these
discontinuities in $S_{{\rm FM}}/N$ are very small.
Anisotropy $\eta_{\lambda}$ and $\eta_{D}$ of effective FMI and DMI
coefficients can be evaluated such that
$\displaystyle\begin{split}&\eta_{\lambda}=1-\lambda_{y}/\lambda_{x}\quad({\rm
model\;1}),\\\ &\eta_{D}=1-D_{x}/D_{y}\quad({\rm model\;2}),\end{split}$ (12)
where the expressions for $D_{x}$, $D_{y}$ and $\lambda_{x}$, $\lambda_{y}$
are given in Eq. (4). The direction dependence of the definition
$\eta_{\lambda}$ of model 1 is different from $\eta_{D}$ of model 2, and this
difference comes from the fact that the definition of $v_{ij}$ in Eq. (2) for
model 1 is different from that in Eq. (3) of model 2. We find from the
anisotropy $\eta_{\lambda}$ of model 1 in Fig. 9(c) that $\eta_{\lambda}$ is
decreasing with increasing $T$, and this tendency is the same for $\eta_{D}$
of model 2 in Fig. 9(d). It is interesting to note that $\eta_{D}$ of model 2
is $\eta_{D}\\!\simeq\\!0.2$ in the skx phase at $T\\!<\\!0.4$. This value
$\eta_{D}\\!=\\!0.2$ corresponds to $D_{x}/D_{y}\\!=\\!0.8$ explicitly assumed
in Ref. Shibata-etal-Natnanotech2015 to simulate the skyrmion deformation.
This $\eta_{D}$ is slightly larger than 0.2 at $T\\!\simeq 0.1$, where the
shape anisotropy is comparable to the experimentally observed one, as
demonstrated in Fig. 6(d). It must be emphasized that $\eta_{D}$, or
equivalently $D_{x}$ and $D_{y}$, of model 2 are not input parameters of the
simulations; the input is $\vec{f}$, and the output is a skyrmion deformation,
as in the experiments.
Finally in this subsection, we show how the simulations converge by plotting
$|N_{\rm sk}|$ (top) of Eq. (9) vs. MCS, and discuss how the stress influences
the skx phase. The data for $|N_{\rm sk}|$ of model 1 plotted in Figs.
10(a)–(c), obtained on the dashed line in Fig. 5 at the transition region
$T\\!\simeq\\!0.5$, indicate that the skyrmion number is independent of whether
the stress is applied or not. This implies that the distortion of the FMI
coefficient by uniaxial stress does not influence the skx and sk-fe phases. In
contrast, we find from the remaining plots in Figs. 10(d)–(f), obtained on the
dashed line in Fig. 6, that $|N_{\rm sk}|$ of model 2 depends on the stress.
Indeed, $|N_{\rm sk}|$ remains unchanged under the stressed condition in the
skx phase (Fig. 10(d)), while $|N_{\rm sk}|$ is considerably increased from a
finite value in the sk-fe phase (Fig. 10(e)) and also from $|N_{\rm
sk}|\\!=\\!0$ in the ferro phase (Fig. 10(f)). It is interesting to note that
such skyrmion proliferation under uniaxial stress control is experimentally
observed not only in the low-temperature region Chacon-etal-PRL2015 ;
Nii-etal-PRL2014 ; Nii-etal-NatCom2015 but also in the high-temperature region
close to the boundary with the ferro phase Levatic-etal-SCRep2016 . Thus, the
effects of uniaxial stress on skyrmion proliferation are considered to be
implemented in model 2.
Figure 10: $|N_{\rm sk}|$ vs. MCS obtained on the dashed lines in Figs. 5 and
6 at the boundary between skx and sk-fe phases in (a),(b),(c) model 1 and
(d),(e),(f) model 2. $|N_{\rm sk}|$ is independent of whether the stress is
applied or not in model 1, while it clearly depends on the stress in model 2.
The other parameters $\lambda,D,\gamma,v_{0}$ are the same as those shown in
Figs. 5 and 6.
#### III.1.3 Stress vs. magnetic field diagram
Figure 11: (a) $fB$ phase diagram of model 1, where $f$ and $B$ are the
external force and magnetic field, (b) snapshot of skyrmions at
$(f,B)\\!=\\!(1.1,-0.7)$, (c) snapshot obtained at $(f,B)\\!=\\!(0.7,-0.7)$,
(d) histogram of $\delta$ corresponding to (c), which is close to Exp data in
Ref. Shibata-etal-Natnanotech2015 , and (e), (f) snapshots obtained at
$(f,B)\\!=\\!(0,-0.7)$ and $(f,B)\\!=\\!(-0.5,-0.7)$, where the negative $f$
implies ${\vec{f}}\\!=\\!(0,f)$, and the skyrmion shape deforms vertically.
The assumed parameter values are shown in the figure.

Figure 12: (a) $fB$
phase diagram of model 2, where $f$ and $B$ are the external force and
magnetic field, (b) snapshot of skyrmions at $(f,B)\\!=\\!(1.1,-0.45)$, (c)
snapshot obtained at $(f,B)\\!=\\!(0.9,-0.45)$, (d) histogram of $\delta$
corresponding to (c), which is close to Exp data in Ref. Shibata-etal-
Natnanotech2015 , and (e), (f) snapshots obtained at $(f,B)\\!=\\!(0,-0.45)$
and $(f,B)\\!=\\!(-0.5,-0.45)$, where the negative $f$ implies
${\vec{f}}\\!=\\!(0,f)$ and the skyrmion shape deforms vertically. The assumed
parameter values are shown in the figure.
The external force $f$ and magnetic field $B$ are varied, and $fB$ phase
diagrams of model 1 and model 2 are obtained (Figs. 11 and 12). The parameters
are fixed to $(T,\lambda,D,\gamma,v_{0})\\!=\\!(0.1,0.8,0.9,0,0.15)$ in Fig.
11 for model 1 and $(T,\lambda,D,\gamma,v_{0})\\!=\\!(0.1,1.2,0.45,0,0.7)$ in
Fig. 12 for model 2. The parameters $(\lambda,D,\gamma,v_{0})$ for model 1 and
model 2 are the same as those assumed for the $BT$ phase diagrams in Figs. 5
and 6. The symbol (skx) for skyrmion and those for other phases are also
exactly the same as those used in Figs. 5 and 6.
For the external force $\vec{f}\\!=\\!(f,0)$ in the positive $x$ direction, we
assign positive $f$ on the vertical axis of the diagrams. In the case of
$\vec{f}\\!=\\!(f,0)$ for positive $f$, the internal variable $\tau$ is
expected to align along $\vec{f}$ in the direction $(1,0)$ or $x$ direction.
In contrast, the negative $f$ in the diagrams means that
$\vec{f}\\!=\\!(0,f)$. In this case, $\tau$ aligns along the direction $(0,1)$
or $y$ direction. Such an aligned configuration of $\tau$ along the $y$ axis
is also expected for $\alpha\\!=\\!-1$ with $\vec{f}\\!=\\!(f,0)$, because the
energy $\alpha S_{f}$ for $\alpha\\!=\\!1$ with $\vec{f}\\!=\\!(0,f)$ is
identical to $\alpha S_{f}$ for $\alpha\\!=\\!-1$ with $\vec{f}\\!=\\!(f,0)$
up to a constant energy.
Figures 11(b) and 12(b) are snapshots of deformed skyrmions, where the shape
anisotropy is slightly larger than the experimental one in Ref. Shibata-etal-
Natnanotech2015 . In contrast, the snapshots in Figs. 11(c) and 12(c) are
almost comparable in their anisotropy $\delta$, as shown in Figs. 11(d) and
12(d) with the experimentally reported one denoted by Exp. The label
“thinner”, corresponding to the solid circle enclosed by a blue-colored
square, indicates that the shape deformation is thinner than that of Exp,
while the label “compa”, corresponding to the circle enclosed by a
pink-colored diamond, indicates that the shape deformation is comparable to
that of Exp. For $f\\!=\\!0$, the skyrmion
shape is isotropic, as we see in Figs. 11(e) and 12(e), and the shape
vertically deforms for the negative $f$ region, which implies positive $f$ in
$\vec{f}\\!=\\!(0,f)$, in Figs. 11(f) and 12(f). Thus, we can confirm from the
snapshots that the shape becomes oblong along the applied tensile force
direction. Moreover, the deformation is almost the same as Exp for a certain
range of $f$ in both model 1 and model 2.
We should note that the skx phase changes to the sk-st phase with increasing
$f$ in the relatively small $B$ region; however, it does not change to the
ferro phase in the intermediate $B$ region even if $f$ becomes sufficiently
large, where $\tau$ saturates in the sense that no further change is expected.
This saturation arises because the role of $\vec{f}$ is only to rotate the
direction of $\tau$. Hence, the modeling of stress by $\vec{f}$ and $\tau$ is
considered effective only in the small-stress region, as mentioned in Section
II. This point differs from the reported numerical results in Ref.
JWang-etal-PRB2018 , where the skx phase terminates and the stripe or ferro
phase appears for sufficiently large strain in the strain vs. magnetic field
diagram.
Finally in this subsection, we show snapshots of the variable $\tau$ in Figs.
13(a), (b), and (c), which correspond to the configurations shown in Fig. 5(c)
of model 1, Fig. 6(c) of model 2, and Fig. 12(e) of model 2, respectively. To
clarify the directions of $\tau$, we show a quarter of the $\tau$ variables
($\Leftrightarrow$ the total number of $\tau$ shown is 2500) in the snapshots.
We find that $\tau$ is almost parallel to $\vec{f}$, denoted by the arrows, in
(a) and (b), and almost random in (c), where $f$ is assumed to be $f\\!=\\!0$.
The reason why $\tau$ in (b) is more uniform than in (a) is that $f(=\\!1.7)$
in (b) is larger than $f(=\\!0.5)$ in (a).
Figure 13: Snapshots of $\tau$ corresponding to (a) Fig. 5(c) of model 1, (b)
Fig. 6(c) of model 2, and (c) Fig. 12(e) of model 2. The small cylinders
correspond to $\tau$. The total number of cylinders is reduced to 2500, which
is a quarter of $N(=\\!10000)$, to clarify the directions. The arrows
($\leftrightarrow$) in (a) and (b) denote the direction of tensile force
$\vec{f}$.
### III.2 Responses to uniaxial strains
To summarize the results in Section III.1, both model 1 and model 2
successfully describe the shape deformation of skyrmions under external
mechanical forces $\vec{f}$. The skyrmion deformation comes from the fact that
the skx phase is sensitive to the direction $\tau$ of the strain field
influenced by $f$ in $\vec{f}\\!=\\!(f,0)$, which is assumed to be positive or
equivalently tensile, as mentioned in Section II. This successful result
implies that the interaction between spins and the mechanical force is
adequately implemented in both models, at least in the skx phase.
In addition, the response of spins in the stripe phase in both models, or more
explicitly the stripe direction as a response to $\vec{f}$, is also consistent
with the experimental result reported in Ref. JDho-etal-APL2003 . In that
reference, as mentioned in the Introduction, Dho et al. experimentally studied
magnetic microstructures of an LSMO thin film at room temperature and zero
magnetic field. The reported results indicate that the direction of the
strain-induced magnetic stripes depends on whether the force is compressive or
tensile.
On the other hand, the definition of $S_{{\rm DM}}$ in Eqs. (2), (3) depends
explicitly on the shape of the lattice, and therefore we perform another check
of the response of spins in the stripe phase by deforming the lattice itself,
as described in Fig. 3. To remove the effect of $\vec{f}$, we fix $f\\!=\\!0$
in $S_{f}$, and instead $\gamma$ in $\gamma S_{\tau}$ is changed from
$\gamma\\!=\\!0$ to $\gamma\\!=\\!0.5$ for model 1 and $\gamma\\!=\\!0.65$ for
model 2. As a consequence of these non-zero $\gamma$, the variable $\tau$ is
expected to align along some spontaneous directions. If the lattice deformation
non-trivially influences $\tau$, this spontaneously and locally oriented
configuration of $\tau$ is expected to strongly influence the spin
configurations in the stripe phase. As a consequence, the stripe direction
becomes anisotropic on deformed lattices ($\Leftrightarrow\xi\\!\not=\\!1$),
while the stripes remain isotropic on the undeformed lattice
($\Leftrightarrow\xi\\!=\\!1$).
To check these expectations by the lattice deformations shown in Fig. 3, we
modify the unit tangential vector ${\vec{e}}_{ij}$, which originally comes
from $\partial{\vec{r}}_{i}/\partial x_{j}$ (Appendix A). Indeed,
$\partial{\vec{r}}_{i}/\partial x_{j}$ is understood to be the edge vector
${\vec{\ell}}_{ij}(=\\!\vec{r}_{j}\\!-\\!\vec{r}_{i})$ from vertex $i$ to
vertex $j$ in the discrete model, and therefore, both the direction and the
length of ${\vec{\ell}}_{ij}$ are changed by the lattice deformations in Fig.
3. Thus, the unit tangential vector
${\vec{e}}_{ij}\\!=\\!(e_{ij}^{x},e_{ij}^{y})$ in $S_{{\rm DM}}$ in Eqs. (2)
and (3) is replaced by
$\displaystyle{\vec{e}}_{ij}^{\;\prime}=(e_{ij}^{\prime x},e_{ij}^{\prime
y})=(\xi^{-1}e_{ij}^{x},\xi e_{ij}^{y}).$ (13)
This generalized vector ${\vec{e}}_{ij}^{\;\prime}$ is identical to the
original unit vector ${\vec{e}}_{ij}$ for $\xi\\!=\\!1$. Note also that
${\vec{e}}_{ij}$ in $v_{ij}$ in Eqs. (2) and (3) is replaced by
${\vec{e}}_{ij}^{\;\prime}$ as follows:
$\displaystyle\begin{split}&S_{{\rm
FM}}=\sum_{\Delta}\left[\lambda_{ij}\left(1-\sigma_{i}\cdot\sigma_{j}\right)+\lambda_{jk}\left(1-\sigma_{j}\cdot\sigma_{k}\right)+\lambda_{ki}\left(1-\sigma_{k}\cdot\sigma_{i}\right)\right],\\\
&S_{{\rm
DM}}=\sum_{ij}{\vec{e}}_{ij}^{\;\prime}\cdot\sigma_{i}\times\sigma_{j},\\\
&\lambda_{ij}=\frac{1}{3}\left(\frac{v_{ij}}{v_{ik}}+\frac{v_{ji}}{v_{jk}}\right),\quad
v_{ij}=\left\\{\begin{array}[]{@{\,}ll}|\tau_{i}\cdot{\vec{e}}_{ij}^{\;\prime}|+v_{0}&(|\tau_{i}\cdot{\vec{e}}_{ij}^{\;\prime}|<1)\\\
1+v_{0}&(|\tau_{i}\cdot{\vec{e}}_{ij}^{\;\prime}|\geq
1)\end{array}\right.,\quad({\rm model\;1}),\end{split}$ (14)
and
$\displaystyle\begin{split}&S_{{\rm
FM}}=\sum_{ij}\left(1-\sigma_{i}\cdot\sigma_{j}\right),\\\ &S_{{\rm
DM}}=\sum_{\Delta}\left[\lambda_{ij}\left({\vec{e}}_{ij}^{\;\prime}\cdot\sigma_{i}\times\sigma_{j}\right)+\lambda_{jk}\left({\vec{e}}_{jk}^{\;\prime}\cdot\sigma_{j}\times\sigma_{k}\right)+\lambda_{ki}\left({\vec{e}}_{ki}^{\;\prime}\cdot\sigma_{k}\times\sigma_{i}\right)\right],\\\
&\lambda_{ij}=\frac{1}{3}\left(\frac{v_{ij}}{v_{ik}}+\frac{v_{ji}}{v_{jk}}\right),\quad
v_{ij}=\left\\{\begin{array}[]{@{\,}ll}\sqrt{1-\left(\tau_{i}\cdot{\vec{e}}_{ij}^{\;\prime}\right)^{2}}+v_{0}&(|\tau_{i}\cdot{\vec{e}}_{ij}^{\;\prime}|<1)\\\
v_{0}&(|\tau_{i}\cdot{\vec{e}}_{ij}^{\;\prime}|\geq
1)\end{array}\right.,\quad({\rm model\;2}),\end{split}$ (15)
and the corresponding models are also denoted by model 1 and model 2. The
difference between models in Eqs. (14), (15) and Eqs. (2), (3) comes from the
definition of $v_{ij}$. However, the variables $v_{ij}$ in Eqs. (14), (15) are
identical with $v_{ij}$ in Eqs. (2), (3) on the non-deformed lattice
corresponding to $\xi\\!=\\!1$, and therefore both models in Eqs. (14), (15)
are simple and straightforward extensions of the models in Eqs. (2), (3). With
the definitions of $v_{ij}$ in Eqs. (14) and (15), $v_{ij}$ no longer has the
meaning of a component of $\tau_{i}$ along or perpendicular to the direction
from vertex $i$ to vertex $j$. It would also be possible to start with model 1
and model 2 in Eqs. (14) and (15) from the beginning; however, model 1 and
model 2 in Eqs. (2), (3) are relatively simple and are used to study responses
to the external stress $\vec{f}$ in Section III.1.
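As a concrete illustration, the following minimal Python sketch evaluates the deformed bond vector of Eq. (13) and the Finsler lengths $v_{ij}$ of Eqs. (14) and (15) for a single bond; the function names and parameter values are ours, chosen only for illustration (with $v_{0}$ as in Fig. 19).

```python
import numpy as np

def e_prime(e, xi):
    """Deformed tangential vector of Eq. (13): (e_x, e_y) -> (e_x/xi, xi*e_y)."""
    return np.array([e[0] / xi, xi * e[1]])

def v_model1(tau, e, v0):
    """Finsler length v_ij of model 1, Eq. (14)."""
    c = abs(np.dot(tau, e))
    return (c if c < 1 else 1.0) + v0

def v_model2(tau, e, v0):
    """Finsler length v_ij of model 2, Eq. (15)."""
    c = np.dot(tau, e)
    return (np.sqrt(1.0 - c**2) if abs(c) < 1 else 0.0) + v0

xi = 0.9
e = e_prime(np.array([1.0, 0.0]), xi)   # horizontal bond: |e'| = 1/xi > 1
tau = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])  # strain field at vertex i
print(np.linalg.norm(e))                                # bond length a > 1
print(v_model1(tau, e, v0=0.15), v_model2(tau, e, v0=0.7))
```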
Since the definition of $v_{ij}$ in Eqs. (14) and (15) depends on the bond
vector ${\vec{e}}_{ij}^{\;\prime}$, we first show the lattices corresponding
to $\xi\\!=\\!1$, $\xi\\!=\\!0.9$, and $\xi\\!=\\!1.1$ in Figs. 14(a)–(c). Let
the bond length or lattice spacing $a(=\\!|{\vec{e}}_{ij}|)$ be $a\\!=\\!1$ on
the regular lattice; then $a(=\\!|{\vec{e}}_{ij}^{\;\prime}|)$ becomes $a>1$ or
$a<1$ depending on the bond direction on the deformed lattices. For
$\xi\\!=\\!0.9$, all bonds in the horizontal direction, such as bond $ij$ in
Fig. 14(b), satisfy $a>1$, and all other bonds, such as bond $ik$, satisfy
$a<1$. In contrast, for $\xi\\!=\\!1.1$, all bonds in the horizontal direction
satisfy $a<1$, and all other bonds satisfy $a>1$, as shown in Fig. 14(c).
Figure 14: (a) Regular triangular lattice corresponding to $\xi\\!=\\!1$, and
deformed lattices corresponding to (b) $\xi\\!=\\!0.9$ and (c)
$\xi\\!=\\!1.1$. The bond length $a$ in (a) is $a\\!=\\!1$, while in (b) and
(c), $a$ changes to $a>1$ or $a<1$ depending on the direction of bonds. The
symbol $\theta$ in (a) is the angle between $\tau_{i}$ and the direction of
bond $ij$, and the arrows ($\leftrightarrow$) and ($\updownarrow$) in (b) and
(c) indicate the elongation direction. Figure 15: (a) $T$-$\xi$ diagram in the
stripe phase of model 1, where $T$ and $\xi$ are the temperature and
deformation parameter in Eq. (6). The arrows ($\leftrightarrow$) and
($\updownarrow$) denote the lattice elongation direction, whereas the symbols
($\bigtriangleup$), ($\bigcirc$) and ($\square$) denote alignments of the
stripe direction. (b), (c) and (d) are snapshots obtained at $\xi\\!=\\!0.88$,
and (e), (f) and (g) are those obtained at $\xi\\!=\\!1$ and $\xi\\!=\\!1.12$.
The parameters $\lambda$ and $D$ are the same as those used in Figs. 5 and 11,
and $(B,\gamma,f)$ are fixed to $(B,\gamma,f)\\!=\\!(0,0.5,0)$. Fluctuations
of spins increase with increasing temperature. Figure 16: (a) $T$-$\xi$ diagram
in the stripe phase of model 2, where $T$ and $\xi$ are the temperature and
deformation parameter in Eq. (6). The arrows ($\leftrightarrow$) and
($\updownarrow$) denote the lattice elongation direction, whereas the symbols
($\bigtriangleup$), ($\bigcirc$), and ($\square$) denote alignments of the
stripe direction. (b), (c) and (d) are snapshots obtained at $\xi\\!=\\!0.88$,
and (e), (f) and (g) are those obtained at $\xi\\!=\\!1$ and $\xi\\!=\\!1.12$.
The parameters $\lambda$ and $D$ are the same as those used in Figs. 6 and 12,
and $(B,\gamma,f)$ are fixed to $(B,\gamma,f)\\!=\\!(0,0.65,0)$. Fluctuations
of spins increase with increasing temperature.
We now comment in detail on how the lattice deformation described in Eq. (6)
influences $S_{{\rm FM}}$ and $S_{{\rm DM}}$ in model 1 and model 2. First,
the definition of $S_{{\rm DM}}$ depends on the lattice shape from the outset.
Moreover, in $S_{{\rm DM}}$ of model 2, the influences of lattice
deformation come from both ${\vec{e}}_{ij}^{\;\prime}$ and $\lambda_{ij}$,
which depends on $v_{ij}$. $S_{{\rm FM}}$ in model 1 is also dependent on the
lattice shape due to this $\lambda_{ij}$. In contrast, $S_{{\rm FM}}$ in model
2 depends only on the connectivity of the lattice and is independent of the
lattice shape. To summarize, the lattice deformation by $\xi$ in Eq. (6)
influences both $S_{{\rm FM}}$ and $S_{{\rm DM}}$ in model 1, and it
influences only $S_{{\rm DM}}$ in model 2.
Figures 15 and 16 show phase diagrams for the stripe phase in model 1 and
model 2 under variations of $\xi$ and $T$. The symbols ($\bigtriangleup$),
($\bigcirc$), and ($\square$) denote horizontal, isotropic, and vertical
alignments of the stripe direction. In Fig. 16(g), the alignment direction is
not exactly perpendicular to the horizontal direction but is parallel to the
triangle’s edge directions (see Fig. 1). This deviation in the alignment
direction is in contrast to the case of model 1 in Figs. 15(b), (c) and (d),
and also to the case where $\vec{f}\\!=\\!(0,f)$ is applied, in which the
stripe direction is precisely perpendicular to the horizontal direction (not
shown). For $\xi\\!=\\!1$, the lattice is not deformed, so neither uniaxial
strains nor, consequently, aligned stripes are expected. Indeed, we find from
the snapshots in Figs. 15 and 16 that the stripe direction is not always
uniformly aligned, except at relatively low temperatures such as
$T\\!=\\!0.9$. From
this, it is reasonable to consider $\tau$ to be a strain direction in a
microscopic sense. If $\gamma$ is fixed to a larger value, such as
$\gamma\\!=\\!1$ in both models, then the stripe pattern, or equivalently the
direction of $\tau$, becomes anisotropic even at $\xi\\!=\\!1$, as in the case
of $\xi\\!\not=\\!1$.
We find that the results of model 2 in Fig. 16 are consistent with the
reported experimental data in Ref. JDho-etal-APL2003 (see Fig. 3), implying
that $\tau$ in model 2 correctly represents the direction of strains expected
under the lattice deformations by $\xi$. On the contrary, the results of model
1 in Fig. 15 are inconsistent with the experimental data. This difference in
the stripe direction comes from the fact that, in model 1, the lattice
deformation incorrectly influences the alignment of $\tau$; in other words,
$\tau$ in model 1 cannot be regarded as the strain direction corresponding to
the lattice deformation.
Thus, at $T\\!\simeq\\!1$ and $B\\!=\\!0$, the strains caused by lattice
deformations in model 2 (model 1) are consistent (inconsistent) with their
stress type, compression or tension, which determines the direction of the
stripe pattern.
In the low-temperature region, the responses to lattice deformation in model
2 and model 1 are partly inconsistent with the experimental result in Ref.
JDho-etal-APL2003 . To summarize, the numerical results in this paper support
the view that the origin of the skyrmion shape deformation described in Ref.
Shibata-etal-Natnanotech2015 is an anisotropy in the DMI coefficient.
Figure 17: Snapshots of $\tau$ of model 2 obtained at (a)
$(T,\xi)\\!=\\!(0.9,0.88)$, (b) $(T,\xi)\\!=\\!(1.05,1)$, and (c)
$(T,\xi)\\!=\\!(0.9,1.12)$, which correspond to Figs. 16(d), 16(f), and 16(g),
respectively. The small cylinders represent $\tau$. The total number of
cylinders is reduced to 2500, a quarter of $N(=\\!10000)$, to clarify
the directions. The arrows in (a) ($\leftrightarrow$) and (c) ($\updownarrow$)
denote the lattice elongation directions.
Here we show snapshots of $\tau$ in Figs. 17(a), (b) and (c) corresponding to
Figs. 16(d), 16(f), and 16(g), respectively. We find that almost all $\tau$
align along the horizontal direction in (a), the direction locally aligns and
is globally isotropic in (b), and almost all $\tau$ align along the vertical
direction or the triangle edge direction in (c). The random state of $\tau$ in
Fig. 17(b) implies that the direction of the $D$-vector is globally random and
is considered to correspond to a non-coplanar distribution of $D$-vectors in
the bulk system with inhomogeneous distortion, as expected from the effective
magnetic model Plumer-Walker-JPC1982 ; Plumer-etal-JPC1984 . Thermal
fluctuations in
such a random state may grow on larger lattices, and if such an instability
occurred, the deformed skyrmion shape would change with increasing lattice
size. However, no difference is found between the simulation results on the
lattice of size $100\\!\times\\!100$ and those on the lattices of
$200\\!\times\\!200$ and $400\\!\times\\!400$ along the dashed lines in Figs. 5
and 6. Due to the competing interactions in our model, the spin configuration
is non-uniform with topological textures. However, skyrmion structures cannot
be generated by random anisotropies.
Figure 18: Responses of the original model, in which the FG prescription is
not applied, to the lattice deformations (a) $\xi\\!=\\!0.98$, (b)
$\xi\\!=\\!1$ and (c) $\xi\\!=\\!1.02$ in the stripe phase for
$(T,\lambda,D,B)\\!=\\!(1,1.6,0.9,0)$. The direction of stripes for
$\xi\\!\not=\\!1$ is inconsistent with the experimental result in Ref. JDho-
etal-APL2003 . The arrows inside the snapshots of (a) ($\leftrightarrow$) and
(c) ($\updownarrow$) denote the lattice elongation direction.
To further check the response of spins to the lattice deformation, we examine
the original model defined in Refs. Hog-etal-JMMM2020 ; Diep-Koibuchi-Frustrated2020 :
$\displaystyle\begin{split}&S=\lambda S_{{\rm FM}}+DS_{{\rm DM}}-S_{B},\\\
&S_{{\rm FM}}=\sum_{ij}\left(1-\sigma_{i}\cdot\sigma_{j}\right),\quad S_{{\rm
DM}}=\sum_{ij}{\vec{e}}_{ij}^{\;\prime}\cdot\sigma_{i}\times\sigma_{j},\end{split}$
(16)
where neither $S_{{\rm FM}}$ nor $S_{{\rm DM}}$ is deformed by the FG modeling
prescription, and $S_{B}$ is the same as in Eq. (5). $S_{{\rm DM}}$ is defined
using the generalized ${\vec{e}}_{ij}^{\;\prime}$ in Eq. (13). The parameters
are set to $(T,\lambda,D,B)\\!=\\!(1,1.6,0.9,0)$. The snapshots
are shown in Figs. 18(a), (b) and (c) for $\xi\\!=\\!0.98$, $\xi\\!=\\!1$, and
$\xi\\!=\\!1.02$, respectively. We find that the result is inconsistent with
the reported experimental data in Ref. JDho-etal-APL2003 . This inconsistency
implies that the effective coupling constants, such as $D_{x}$ and $D_{y}$ in
Eq. (4), play a non-trivial role in the skyrmion deformation and the stripe
direction. It must also be emphasized that the stress effect implemented in
model 2 via the alignment of $\tau$ correctly influences the helical spin
configurations underlying the skyrmion shape deformation and the stripe
direction.
For smaller (larger) $\xi$, such as $\xi\\!=\\!0.94$ ($\xi\\!=\\!1.06$), the
vertical (horizontal) direction of stripes becomes more apparent in Fig. 18.
Interestingly, the vertical direction of stripes is parallel to the $y$
direction and not parallel to the triangle edge directions. This result
indicates that the vertical direction shown in Fig. 16(g) comes from non-
trivial effects of $\lambda_{ij}$ of $S_{{\rm FM}}$ and $S_{{\rm DM}}$ in Eqs.
(14) and (15). We note that the parameters are not always limited to those
used in Fig. 18. It is possible to use a wide range of $(T,\lambda,D)$ for
which isotropic stripe configurations like that in Fig. 18(b) are expected at
$\xi\\!=\\!1$.
Figure 19: The variation of $v_{ij}$ vs. $\theta$ of model 1 for (a)
$\xi\\!=\\!0.9$ and (b) $\xi\\!=\\!1.1$, and $v_{ij}$ vs. $\theta$ of model 2
for (c) $\xi\\!=\\!0.9$ and (d) $\xi\\!=\\!1.1$, where $\theta$ is the angle
between $\tau_{i}$ and ${\vec{e}}_{ij}^{\;\prime}$ (see Fig. 14(a)). All the
curves of $v_{ij}$ (dashed lines) continuously reduce to the curve of $v_{ij}$
(solid line) in the limit of $\xi\\!\to\\!1$.
In Fig. 19(a), the Finsler length $v_{ij}$ defined by Eq. (14) for
$\xi\\!=\\!0.9$ is plotted, where the horizontal axis $\theta$ is the angle
between $\tau_{i}$ and ${\vec{e}}_{ij}^{\;\prime}$ (see Fig. 14(a)). For
$\xi\\!=\\!1$, $v_{ij}$ coincides with the original $v_{ij}$ in Eq. (2), which
is also plotted (solid line) and is shifted from $[0,1]$ to
$[v_{0},1\\!+\\!v_{0}]$ by the constant $v_{0}(=\\!0.15)$ in Figs. 19(a),(b)
and $v_{0}(=\\!0.7)$ in Figs. 19(c),(d). We find that $v_{ij}$ (dashed line)
deviates from $v_{ij}$ (solid line) only slightly in the region
$\theta\\!\to\\!0$ or, equivalently, $\theta\\!\to\\!\pi$, while at
$\theta\\!\to\\!\pi/2$ the dashed curve coincides with the solid one for any $\xi$.
On the lattice of $\xi\\!=\\!1.1$ in Fig. 19(b), the behavior of $v_{ij}$
(dashed line) is almost comparable to the case of $\xi\\!=\\!0.9$ in Fig.
19(a). In addition, the curve of $v_{ij}$ (dashed line) of model 2 on the long
bond $a\\!>\\!1$ also includes a constant part ($=\\!v_{0}$) at
$\theta\\!\to\\!0$, as in the case of model 1. This constant part disappears
in the limit of $\xi\\!\to\\!1$, and hence model 2 in Eq. (15), as well as
model 1 in Eq. (14), is understood to be an extension of those in Eqs. (2) and
(3), as mentioned above.
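The $\theta$-dependence discussed here can be reproduced qualitatively with the short plotting sketch below; we assume, following Fig. 19, that $\theta$ is the angle between $\tau_{i}$ and ${\vec{e}}_{ij}^{\;\prime}$, so that $\tau_{i}\cdot{\vec{e}}_{ij}^{\;\prime}=\cos\theta/\xi$ for a horizontal bond.

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, np.pi, 400)

def v1(c, v0):  # model 1, Eq. (14); c = tau . e'
    return np.where(np.abs(c) < 1, np.abs(c), 1.0) + v0

def v2(c, v0):  # model 2, Eq. (15)
    return np.where(np.abs(c) < 1, np.sqrt(np.clip(1 - c**2, 0, None)), 0.0) + v0

for xi, ls in [(1.0, "-"), (0.9, "--")]:
    c = np.cos(theta) / xi          # horizontal bond stretched by 1/xi
    plt.plot(theta, v1(c, 0.15), "b" + ls, label=f"model 1, xi={xi}")
    plt.plot(theta, v2(c, 0.70), "r" + ls, label=f"model 2, xi={xi}")
plt.xlabel(r"$\theta$"); plt.ylabel(r"$v_{ij}$"); plt.legend(); plt.show()
```

For $\xi\\!=\\!0.9$ the dashed model-2 curve drops to the constant $v_{0}$ near $\theta\\!\to\\!0$, where $|\tau_{i}\cdot{\vec{e}}_{ij}^{\;\prime}|\geq 1$, reproducing the constant part described above.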
Figure 20: (a) The effective coupling constants $\lambda_{x},\lambda_{y}$ and
the anisotropy $\eta_{\lambda}$ vs. $\xi$ of model 1, and (b) $D_{x},D_{y}$
and $\eta_{D}$ of model 2. The behaviors of $\lambda_{\mu}$ and
$\eta_{\lambda}$ of model 1 are almost identical to those of model 2, except
for the jumps at $\xi\\!=\\!1$ in model 2. These data of model 1 (model 2) are
obtained from the simulations in Fig. 15 (Fig. 16) at $T\\!=\\!1$.
Now, we discuss why the results of model 2 are considered to be more
realistic. We show the variation of effective coupling constants
$\lambda_{\mu}$ and $D_{\mu}$ ($\mu\\!=\\!x,y$), defined by Eq. (4), with
respect to $\xi$ (Figs. 20(a),(b)), where the anisotropies $\eta_{\lambda}$
and $\eta_{D}$, defined by Eq. (12), are also plotted. We find that in the
region $\xi\\!>\\!1$, both $\eta_{\lambda}$ and $\eta_{D}$ are decreasing and
smaller than those in $\xi\\!<\\!1$ in Figs. 20(a),(b). Remarkably, the
variations of $D_{x}$, $D_{y}$ and $\eta_{D}$ vs. $\xi$ in model 2 change
almost discontinuously at $\xi\\!=\\!1$, in sharp contrast to those of
model 1. In model 2, if $\eta_{D}$ is positive (negative), which implies
$D_{x}\\!<\\!D_{y}$ ($D_{x}\\!>\\!D_{y}$), then the stripe direction is
horizontal (vertical). Thus, we find that model 2 on the deformed lattices for
$\xi\\!<\\!1$ (Fig. 14(b)) shares the same property as that on the non-
deformed lattice with a tensile stress ${\vec{f}}\\!=\\!(f,0)$. Indeed, the
stripe direction of model 2 is horizontal (Fig. 6(f)) and $\eta_{D}$ is
positive (Fig. 9(d)) under the horizontal tensile stress
${\vec{f}}\\!=\\!(f,0)$. In other words, the response of model 2 on the non-
deformed lattice with uniaxial stress ${\vec{f}}\\!=\\!(f,0)$ is the same as
that on the deformed lattice in Fig. 14(b) corresponding to $\xi\\!<\\!1$.
This is considered to be the reason why model 2 provides a stripe direction
consistent with the experimental data.
From these observations, we find that the small-value region of $v_{ij}$ plays
an important role in the model’s response. The small-value region in model 1
is $\theta\\!\simeq\\!\pi/2$ (Figs. 19(a),(b)), where $\tau_{i}$ is almost
perpendicular to ${\vec{e}}_{ij}^{\;\prime}$, and $v_{ij}$ for
$\xi\\!\not=\\!1$ is almost the same as $v_{ij}$ for $\xi\\!=\\!1$; therefore,
no new result is expected in model 1. In contrast, the small-value region in
model 2 is
$\theta\\!\simeq\\!0$ (Figs. 19(c),(d)), where $\tau_{i}$ is almost parallel
to ${\vec{e}}_{ij}^{\;\prime}$, and even a small deviation of $v_{ij}$ (dashed
line) from $v_{ij}$ (solid line) is relevant. Such non-trivial behavior of
model 2, emerging from the small-$v_{ij}$ region, is understood from the fact
that the effective coupling constant $\lambda_{ij}$ is given by a rational
function of $v_{ij}$.
We should emphasize that the conclusion that model 2 is consistent with both
the skyrmion deformation and the stripe direction is obtained by comparing
model 1 and model 2, and that the result of model 2 is consistent with that in
Ref. JWang-etal-PRB2018 , where an additional energy term for MEC is included
in a Landau-Ginzburg free energy. In this additional interaction term, strains
and magnetization are directly coupled. In our models, the strain field $\tau$
is introduced in $S_{f}$, and $\tau$ represents the strain direction, though
$S_{f}$ includes no direct interaction between $\tau$ and the magnetization or
spin variable $\sigma$. Thus, we consider that model 2 supports the model in
Ref. Shibata-etal-Natnanotech2015 , where an anisotropy in the DMI coefficient
is explicitly assumed, implying that uniaxial stress makes the DMI anisotropic.
Another option is to modify both FMI and DMI by the FG modeling prescription.
Such a model is certainly expected to reproduce the experimentally observed
shape deformation of skyrmions. However, it is not suitable for reproducing
the stripe-direction alignment under lattice deformation, because model 1
contradicts this purpose, as demonstrated above. Therefore, we eliminate this
option from the suitable models and arrive at the conclusion stated above.
## IV Summary and conclusion
Using a Finsler geometry (FG) model on a 2D triangular lattice with periodic
boundary conditions, we numerically study skyrmion deformation under uniaxial
stress and lattice deformation. Two different models, model 1 and model 2,
are examined: the ferromagnetic energy $S_{\rm FM}$ and the
Dzyaloshinskii-Moriya energy $S_{\rm DM}$ are deformed by the FG modeling
prescription in model 1 and model 2, respectively. In these FG models, the
coupling constants $\lambda$
and $D$ of $S_{\rm FM}$ and $S_{\rm DM}$ are dynamically deformed to be
direction-dependent such that $\lambda_{x}\\!\not=\\!\lambda_{y}$ and
$D_{x}\\!\not=\\!D_{y}$. In both models, the ratio $\lambda/D$ is dynamically
distorted to be direction dependent with a newly introduced internal degree of
freedom $\tau$ for strains and a mechanical force or stress $\vec{f}$.
We find that the results of both models for skyrmion deformation under
uniaxial stress are consistent with the reported experimental data. For the
direction of stripes as a response to the stresses, the numerical data of both
models are also consistent with the reported experimental result observed at
room temperature with zero magnetic field. However, we show that the responses
of the two models to lattice deformations are different from each other in the
stripe phase. In this case, only the data obtained by model 2 are shown to be
consistent with the experimental result. We conclude that, in real systems,
only the modification of the DMI by lattice deformations is relevant. Note
that the original
model, in which both FMI and DMI energies are not deformed by FG modeling
prescription, is also examined under the lattice deformations, and the
produced stripe directions are found to be different from those of the
experimental data. This shows that the lattice deformations naturally
introduced into the system by the FG modeling are necessary to explain the
experimental results.
Combining the obtained results for responses to both uniaxial stresses and
lattice deformations, we conclude that the anisotropy of the DMI coefficient
is the origin of the experimentally observed and reported skyrmion
deformations under uniaxial mechanical stresses. Thus, FG modeling can provide
a successful model to describe modulated chiral magnetic excitations on thin
films caused by the anisotropy in the ratio $\lambda/D$.
###### Acknowledgements.
This study was initiated during a two-month stay of S. E. H. at Ibaraki KOSEN
in 2017, and this stay was financially supported in part by Techno AP Co.
Ltd., Genesis Co. Ltd., Kadowaki Sangyo Co. Ltd, and also by JSPS KAKENHI
Grant Number JP17K05149. The author H.K. acknowledges V. Egorov for simulation
tasks in the early stage of this work during a four-month stay from 2019 to
2020 at Sendai KOSEN. The simulations and data analyses were performed with S.
Tamanoe, S. Sakurai, and Y. Tanaka’s assistance. This work is supported in
part by JSPS Grant-in-Aid for Scientific Research on Innovative Areas
“Discrete Geometric Analysis for Materials Design”: Grant Number 20H04647.
## Appendix A Finsler geometry modeling of ferromagnetic and Dzyaloshinskii-
Moriya interactions
In this appendix, we describe in detail how the discrete forms of
$S_{{\rm FM}}$ and $S_{{\rm DM}}$ in Eqs. (2) and (3) are obtained. To
simplify the description, we focus on the models on non-deformed lattices in
Eqs. (2) and (3); the descriptions of the models on deformed lattices in Eqs.
(14) and (15) remain unchanged except for the definition of $v_{ij}$. Let us
start
with the continuous form of $S_{{\rm FM}}$. Since the variable $\sigma(\in
S^{2}:{\rm unit\;sphere})$ is defined on a two-dimensional surface, the
continuous $S_{{\rm FM}}$ and $S_{{\rm DM}}$ are given by
$\displaystyle\begin{split}&S_{{\rm
FM}}=\frac{1}{2}\int\sqrt{g}d^{2}xg^{ab}\frac{\partial\sigma}{\partial
x^{a}}\cdot\frac{\partial\sigma}{\partial x^{b}},\\\ &S_{{\rm
DM}}=\int\sqrt{g}d^{2}xg^{ab}\frac{\partial{\vec{r}}}{\partial
x^{a}}\cdot\sigma\times\frac{\partial\sigma}{\partial x^{b}},\end{split}$ (17)
where $g^{ab}$ is the inverse of the metric $g_{ab}$, and $g$ is its
determinant (see also Ref. Diep-Koibuchi-Frustrated2020 ). Note that the unit
tangential vector ${\vec{e}}_{a}$ can be used in place of
$\partial{\vec{r}}/\partial x^{a}$, which is not always a unit vector. Indeed,
the difference between
${\vec{e}}_{a}$ and $\partial{\vec{r}}/\partial x^{a}$ is a constant
multiplicative factor on the regular triangular lattice, and therefore, we use
${\vec{e}}_{a}$ for $\partial{\vec{r}}/\partial x^{a}$ for simplicity. For
simulations on deformed lattices, this unit vector ${\vec{e}}_{a}$ is replaced
by a more general one ${\vec{e}}_{a}^{\;\prime}$ in Eq. (13).
Figure 21: (a) A triangle of vertices 123 and a strain field $\tau_{1}$ at
vertex 1, and its tangential components $\tau_{1}\cdot{\vec{e}}_{12}$ and
$\tau_{1}\cdot{\vec{e}}_{13}$ along the directions ${\vec{e}}_{12}$ and
${\vec{e}}_{13}$, which are the unit tangential vectors from vertices 1 to 2
and 1 to 3. (b) Three possible local coordinates on the triangle 123, (c) two
neighboring triangles $ijk$ and $jil$.
Here, $g_{ab}$ is not limited to the induced metric
$(\partial{\vec{r}}/\partial x^{a})\cdot(\partial{\vec{r}}/\partial x^{b})$
but is assumed to be of the form
$\displaystyle g_{ab}=\begin{pmatrix}v_{12}^{-2}&0\\\
0&v_{13}^{-2}\end{pmatrix}$ (18)
on the triangle of vertices 123 (see Fig. 21(a)), where $v_{ij}$ is defined by
using the strain field $\tau_{i}(\in S^{1}:{\rm unit\;circle})$ such that
$\displaystyle\begin{split}&v_{ij}=|\tau_{i}\cdot{\vec{e}}_{ij}|+v_{0},\quad({\rm
for}\;S_{{\rm FM}};\;{\rm model\;1}),\\\
&v_{ij}=\sqrt{1-\left(\tau_{i}\cdot{\vec{e}}_{ij}\right)^{2}}+v_{0},\quad({\rm
for}\;S_{{\rm DM}};\;{\rm model\;2}).\end{split}$ (19)
Note that the definition of $v_{ij}$ in $S_{{\rm FM}}$ in model 1 is different
from that in $S_{{\rm DM}}$ in model 2.
We should comment that the usage of Finsler geometry in this paper for chiral
magnetism is not the standard one of non-Euclidean geometry such as in Ref.
Gaididei-etal-PRL2014 . In the case of Ref. Gaididei-etal-PRL2014 , a non-flat
geometry is assumed to describe real curved thin films in ${\bf R}^{3}$ and to
extract curvature effect on a magnetic system. In contrast, the film in this
paper is flat and follows Euclidean geometry; however, an additional distance
called Finsler length is introduced to describe Hamiltonian $S_{{\rm FM}}$ or
$S_{{\rm DM}}$. Even when the surface is curved, so that the surface geometry
follows the induced metric of Euclidean geometry in ${\bf R}^{3}$ as in Ref.
Gaididei-etal-PRL2014 , a Finsler length can still be introduced in addition
to the surface geometry. Such a non-Euclidean length scale can always be
introduced on the tangent space, where the length of a vector or the distance
between two different points is defined by the newly introduced metric tensor,
such as $g_{ab}$ in Eq. (18). Therefore, in the FG modeling prescription, we
have two different length scales: one is the Euclidean length for thin films
in ${\bf R}^{3}$, and the other is the dynamically changeable Finsler length
for the Hamiltonian.
The Finsler length scale is used to effectively deform the coefficient
$\lambda_{ij}$ in Eqs. (2), (3), as described below in detail. This
$\lambda_{ij}$ varies depending on the internal strain variable $\tau$, which
is integrated out in the partition function, and therefore, all physical
quantities are effectively integrated over different length scales
characterized by the ratio $\lambda/D$ of interaction coefficients for FMI and
DMI. Here, this ratio is fluctuating and its mean value can be observed and
expressed by using the effective coupling constant in Eq. (4). Thus,
“dynamically deformed $D$” means that all important length scales are
effectively integrated out with the Boltzmann weight when calculating
observable quantities. Note that this is possible if $g_{ab}$ is treated as
dynamically changeable. For this reason, this FG modeling is effective,
especially for anisotropic phenomena, because we can start with isotropic
models such as the isotropic FMI and DMI. Therefore, the FG model is in sharp
contrast to those models with explicit anisotropic interaction terms such as
Landau-type theory for MEC. This FG modeling is a coarse-grained one, like the
linear chain model, whose connection to monomers is mathematically established
Doi-Edwards-1986 . In such coarse-grained modeling, the detailed information
on electrons and atoms is lost from the beginning, as in the case of FMI. In
other words, no specific information at the scale of the atomic
level is necessary to calculate physical quantities even in such complex
anisotropic phenomena.
To obtain the discrete expressions of $S_{{\rm FM}}$, we replace
$\int\sqrt{g}d^{2}x\to\sum_{\Delta}(1/v_{12}v_{13})$ and
$g^{11}\partial\sigma/\partial x^{1}\cdot\partial\sigma/\partial x^{1}\to
v_{12}^{2}(\sigma_{2}-\sigma_{1})^{2}$, $g^{22}\partial\sigma/\partial
x^{2}\cdot\partial\sigma/\partial x^{2}\to v_{13}^{2}(\sigma_{3}-\sigma_{1})^{2}$ on
the triangle of vertices 123 (Fig. 21(a)), where the local coordinate origin
is at vertex 1, and $\sum_{\Delta}$ denotes the sum over triangles. The
discrete form of $S_{{\rm DM}}$ is also obtained by the replacements
$g^{11}\partial{\vec{r}}/\partial
x^{1}\cdot(\sigma\times{\partial\sigma}/{\partial x^{1}})\to
v_{12}^{2}{\vec{e}}_{12}\cdot(\sigma_{1}\times\sigma_{2})$,
$g^{22}\partial{\vec{r}}/\partial
x^{2}\cdot(\sigma\times{\partial\sigma}/{\partial x^{2}})\to
v_{13}^{2}{\vec{e}}_{13}\cdot(\sigma_{1}\times\sigma_{3})$. Then, we have
$\displaystyle\begin{split}S_{{\rm
FM}}&=\frac{1}{2}\int\sqrt{g}d^{2}x\left(g^{11}\frac{\partial\sigma}{\partial
x^{1}}\cdot\frac{\partial\sigma}{\partial
x^{1}}+g^{22}\frac{\partial\sigma}{\partial
x^{2}}\cdot\frac{\partial\sigma}{\partial x^{2}}\right)\\\
&\to\sum_{\Delta}\left[\frac{v_{12}}{v_{13}}\left(1-\sigma_{1}\cdot\sigma_{2}\right)+\frac{v_{13}}{v_{12}}\left(1-\sigma_{1}\cdot\sigma_{3}\right)\right],\end{split}$
(20)
and
$\displaystyle\begin{split}S_{{\rm
DM}}&=\int\sqrt{g}d^{2}x\left(g^{11}\frac{\partial{\vec{r}}}{\partial
x^{1}}\cdot\sigma\times\frac{\partial\sigma}{\partial
x^{1}}+g^{22}\frac{\partial{\vec{r}}}{\partial
x^{2}}\cdot\sigma\times\frac{\partial\sigma}{\partial x^{2}}\right)\\\
&\to\sum_{\Delta}\left[\frac{v_{12}}{v_{13}}\left({\vec{e}}_{12}\cdot\sigma_{1}\times\sigma_{2}\right)+\frac{v_{13}}{v_{12}}\left({\vec{e}}_{13}\cdot\sigma_{1}\times\sigma_{3}\right)\right].\end{split}$
(21)
The local coordinate origin can also be assumed at vertices 2 and 3 on the
triangle 123 (Fig. 21(b)). Therefore, summing the discrete expressions of
$S_{{\rm FM}}$ and $S_{{\rm DM}}$ over the three possible local coordinates,
which are obtained by the cyclic replacement of indices $1\to 2$, $2\to 3$,
$3\to 1$, and including the factor $1/3$, we have
$\displaystyle\begin{split}S_{{\rm
FM}}=\frac{1}{3}\sum_{\Delta}&\left[\left(\frac{v_{12}}{v_{13}}+\frac{v_{21}}{v_{23}}\right)\left(1-\sigma_{1}\cdot\sigma_{2}\right)+\left(\frac{v_{23}}{v_{21}}+\frac{v_{32}}{v_{31}}\right)\left(1-\sigma_{2}\cdot\sigma_{3}\right)\right.\\\
&+\left.\left(\frac{v_{13}}{v_{12}}+\frac{v_{31}}{v_{32}}\right)\left(1-\sigma_{3}\cdot\sigma_{1}\right)\right],\end{split}$
(22)
and
$\displaystyle\begin{split}S_{{\rm
DM}}=\frac{1}{3}\sum_{\Delta}&\left[\left(\frac{v_{12}}{v_{13}}+\frac{v_{21}}{v_{23}}\right)\left({\vec{e}}_{12}\cdot\sigma_{1}\times\sigma_{2}\right)+\left(\frac{v_{23}}{v_{21}}+\frac{v_{32}}{v_{31}}\right)\left({\vec{e}}_{23}\cdot\sigma_{2}\times\sigma_{3}\right)\right.\\\
&+\left.\left(\frac{v_{13}}{v_{12}}+\frac{v_{31}}{v_{32}}\right)\left({\vec{e}}_{31}\cdot\sigma_{3}\times\sigma_{1}\right)\right].\end{split}$
(23)
Replacing the vertices 1,2,3 with $i,j,k$, we have the following expressions
for $S_{{\rm FM}}$ and $S_{{\rm DM}}$ such that
$\displaystyle\begin{split}&S_{{\rm
FM}}=\sum_{\Delta}\left[\lambda_{ij}\left(1-\sigma_{i}\cdot\sigma_{j}\right)+\lambda_{jk}\left(1-\sigma_{j}\cdot\sigma_{k}\right)+\lambda_{ki}\left(1-\sigma_{k}\cdot\sigma_{i}\right)\right],\\\
&S_{{\rm
DM}}=\sum_{\Delta}\left[\lambda_{ij}\left({\vec{e}}_{ij}\cdot\sigma_{i}\times\sigma_{j}\right)+\lambda_{jk}\left({\vec{e}}_{jk}\cdot\sigma_{j}\times\sigma_{k}\right)+\lambda_{ki}\left({\vec{e}}_{ki}\cdot\sigma_{k}\times\sigma_{i}\right)\right],\\\
&\lambda_{ij}=\frac{1}{3}\left(\frac{v_{ij}}{v_{ik}}+\frac{v_{ji}}{v_{jk}}\right),\end{split}$
(24)
where $k$ in $\lambda_{ij}$ is the third vertex number other than $i$ and $j$.
Note that $\lambda_{ij}\\!=\\!\lambda_{ji}$ is satisfied.
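This symmetry follows directly from Eq. (24); a minimal numerical check with random Finsler lengths (the array layout below is our own convention) is:

```python
import numpy as np

def lam(v, i, j, k):
    """lambda_ij of Eq. (24) on triangle (i, j, k); k is the third vertex."""
    return (v[i, j] / v[i, k] + v[j, i] / v[j, k]) / 3.0

rng = np.random.default_rng(0)
v = rng.uniform(0.5, 1.5, size=(3, 3))   # random v_ij (v_ij != v_ji in general)
print(lam(v, 0, 1, 2), lam(v, 1, 0, 2))  # equal: lambda_ij == lambda_ji
```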
The sum over triangles $\sum_{\Delta}$ in these expressions can also be
replaced by the sum over bonds $\sum_{ij}$, and we also have
$\displaystyle S_{{\rm
FM}}=\sum_{ij}\bar{\lambda}_{ij}\left(1-\sigma_{i}\cdot\sigma_{j}\right),\quad
S_{{\rm DM}}=\sum_{ij}\bar{\lambda}_{ij}\left({\vec{e}}_{ij}\cdot\sigma_{i}\times\sigma_{j}\right),$ (25)
where the coefficients $\bar{\lambda}_{ij}$ on the triangles are given by
$\displaystyle\bar{\lambda}_{ij}=\frac{1}{3}\left(\frac{v_{ij}}{v_{ik}}+\frac{v_{ji}}{v_{jk}}+\frac{v_{ij}}{v_{il}}+\frac{v_{ji}}{v_{jl}}\right).$
(26)
In this expression, the vertices $k$ and $l$ are those connected with $i$ and
$j$ (see Fig. 21(c)). The coefficient $\bar{\lambda}_{ij}$ is also symmetric;
$\bar{\lambda}_{ij}\\!=\\!\bar{\lambda}_{ji}$, where $k$ and $l$ should also
be replaced by each other if $i$ is replaced by $j$. For numerical
implementation, the expressions written as sums over triangles are easier to
handle than those written as sums over bonds, and we use the sums over
triangles in the simulations in this paper.
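The equivalence of the two sums can be checked on the two triangles $ijk$ and $jil$ of Fig. 21(c): the bond coefficient $\bar{\lambda}_{ij}$ of Eq. (26) is just the sum of the two triangle contributions of Eq. (24). A short sketch with random Finsler lengths:

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.uniform(0.5, 1.5, size=(4, 4))   # vertices i=0, j=1, k=2, l=3

def lam(i, j, k):   # Eq. (24): contribution of triangle (i, j, k) to bond ij
    return (v[i, j] / v[i, k] + v[j, i] / v[j, k]) / 3.0

i, j, k, l = 0, 1, 2, 3
lam_bar = (v[i, j] / v[i, k] + v[j, i] / v[j, k]
           + v[i, j] / v[i, l] + v[j, i] / v[j, l]) / 3.0   # Eq. (26)
print(np.isclose(lam_bar, lam(i, j, k) + lam(i, j, l)))    # True
```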
Figure 22: (a) A curve $C$ parameterized by $t$ on a two-dimensional
continuous surface, where a point $x(t)=(x^{1},x^{2})$ on $C$ and its
derivative $y(t)=(\dot{x}^{1},\dot{x}^{2})$ are represented by a local
coordinate. (b) A regular square lattice with local coordinate axes $x^{1}$
and $x^{2}$ at vertex 1 and strain fields $\tau_{i}(i\\!=\\!1,2)$ at vertices
1 and 2. Note that $v_{12}\\!\not=\\!v_{21}$, implying that the velocity from
1 to 2 is different from the velocity from 2 to 1, while $v_{21}\\!=\\!v_{24}$.
Now, the origin of the form of $g_{ab}$ in Eq. (18) is briefly explained SS-
Chern-AMS1996 ; Matsumoto-SKB1975 ; Bao-Chern-Shen-GTM200 ; Koibuchi-PhysA2014
. Let $L(x(t),y(t))$ be a Finsler function on a two-dimensional surface
defined by
$\displaystyle\begin{split}&L(x(t),y(t))=\sqrt{(y^{1})^{2}+(y^{2})^{2}}/|{\vec{v}}|=\sqrt{\left(\frac{dx^{1}}{dt}\right)^{2}+\left(\frac{dx^{2}}{dt}\right)^{2}}/|{\vec{v}}|,\\\
&|{\vec{v}}|=\sqrt{\left(\frac{dx^{1}}{ds}\right)^{2}+\left(\frac{dx^{2}}{ds}\right)^{2}},\quad{\vec{v}}=\left(\frac{dx^{1}}{ds},\frac{dx^{2}}{ds}\right),\end{split}$
(27)
where ${\vec{v}}$ is a velocity along $C$ other than
$y(t)\\!=\\!\left({dx^{1}}/{dt},{dx^{2}}/{dt}\right)$, and ${\vec{v}}$ is
assumed to be identical to the derivative of $(x^{1},x^{2})$ with respect to
the parameter $s$ (Fig. 22(a)). It is easy to check that
$\displaystyle
s=\int_{t_{0}}^{t}L(x(t),y(t))dt\quad\left(\Leftrightarrow\frac{ds}{dt}=L(x(t),y(t))\right),$
(28)
and this $s$ is called the Finsler length along the positive direction of $C$.
The Finsler metric $g_{ab}\;(a,b=1,2)$, which is a $2\times 2$ matrix, is
given in terms of the Finsler function by
$\displaystyle g_{ab}=\frac{1}{2}\frac{\partial^{2}L^{2}}{\partial y^{a}\partial
y^{b}}.$ (29)
Now, let us consider the Finsler function $L(x,y)$ on the square lattice (for
simplicity). Note that $L$ is defined only on the local coordinate axes on the
lattice, and therefore we have
$\displaystyle L(x(t),y(t))=y^{1}/v_{12}$ (30)
on the $x^{1}$ axis from vertex 1 to vertex 2 (Fig. 22(b)), where $v_{12}$ is the
velocity from vertex 1 to vertex 2 defined in Eq. (19). From this expression
and Eq. (29), we have $g_{11}\\!=\\!v_{12}^{-2}$. We also have
$g_{22}\\!=\\!v_{13}^{-2}$ from the Finsler function $L\\!=\\!y^{2}/v_{13}$
defined on the $x^{2}$ axis from vertex 1 to vertex 3. Thus, we obtain the
discrete, local-coordinate expression of the Finsler metric in Eq. (18) on the
square lattice shown in Fig. 22(b), while the expression of $g_{ab}$ in Eq.
(18) is used for triangular lattices. Indeed, on triangular lattices the
expression of $g_{ab}$ is the same as that on square lattices; the only
difference is that there are three possible local coordinates on a triangle,
while there are four on a square. Due to this difference, the coefficient
$\lambda_{ij}$ in Eq. (24) becomes slightly different from that on square
lattices; however, the expression of $g_{ab}$ itself does not depend on the
lattice structure.
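As a worked check of the steps above, inserting Eq. (30) into Eq. (29) gives
$\displaystyle g_{11}=\frac{1}{2}\frac{\partial^{2}L^{2}}{\partial(y^{1})^{2}}=\frac{1}{2}\frac{\partial^{2}}{\partial(y^{1})^{2}}\left(\frac{y^{1}}{v_{12}}\right)^{2}=\frac{1}{v_{12}^{2}},$
and, analogously, $L\\!=\\!y^{2}/v_{13}$ gives $g_{22}=v_{13}^{-2}$, reproducing the diagonal form of Eq. (18).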
## Appendix B Graphical measurement of skyrmion shape anisotropy
Figure 23: (a) The definition of shape anisotropy $\delta$ with a snapshot of
skyrmion enclosed by a rectangle for the graphical measurement of $w_{x}$ and
$w_{y}$, and (b) two lines $\ell_{X}$ and $\ell_{Y}$, passing through the
local minimum of $\sigma_{z}$, are used to find the four points $A$, $B$, $C$
and $D$ for the rectangle.
Here we describe how to measure the side lengths $w_{x}$ and $w_{y}$ of a
skyrmion for the shape anisotropy $\delta$ (Fig. 23(a)), where a snapshot of
the skyrmion is shown simply by a two-color gradation in blue and red using
$\sigma_{z}(\in[-1,1])$. Two lines $\ell_{X}$ and $\ell_{Y}$ in Fig. 23(b) are
drawn parallel to the $x$ and $y$ directions, and the point where the two
lines cross is a vertex at which $\sigma_{z}$ attains a local minimum (or
maximum, depending on the direction of $\vec{B}$). This local minimum of
$\sigma_{z}$ is numerically determined as a value smaller than those at the
four nearest-neighbor vertices. The point $A$ is the first vertex at which the
sign of $\sigma_{z}$ changes from minus to plus when moving along $\ell_{X}$
from the crossing point. The other vertex $B$ is also uniquely
determined in the same way. Note that the crossing point is not always located
at the center of $A$ and $B$ on $\ell_{X}$. The vertices $C$ and $D$ on
$\ell_{Y}$ are also uniquely determined. The values of $\sigma_{z}$ at these
four points $A$, $B$, $C$ and $D$ are not always exactly $\sigma_{z}\\!=\\!0$
but are small positive values close to zero.
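The measurement procedure can be summarized in a short Python sketch. The function below operates on a 2D array of $\sigma_{z}$ values containing a single skyrmion; the anisotropy formula $\delta=(w_{x}-w_{y})/(w_{x}+w_{y})$ is our assumption for illustration, since $\delta$ is defined graphically in Fig. 23(a).

```python
import numpy as np

def skyrmion_anisotropy(sz, dx=1.0):
    """Graphical measurement of w_x and w_y on a 2D array sz[iy, ix] of
    sigma_z values containing a single skyrmion (the global minimum is used
    as the local minimum described in the text).  The anisotropy
    delta = (w_x - w_y)/(w_x + w_y) is an assumed definition."""
    iy, ix = np.unravel_index(np.argmin(sz), sz.shape)
    row, col = sz[iy, :], sz[:, ix]           # lines l_X and l_Y
    right = ix + np.argmax(row[ix:] > 0)      # point A: first sign change
    left = ix - np.argmax(row[ix::-1] > 0)    # point B
    up = iy + np.argmax(col[iy:] > 0)         # point C
    down = iy - np.argmax(col[iy::-1] > 0)    # point D
    wx, wy = (right - left) * dx, (up - down) * dx
    return wx, wy, (wx - wy) / (wx + wy)
```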
## References
* (1) T.H. Skyrme, Proc. Royal Soc. London, Ser A 260, 127-138 (1961).
* (2) T. Moriya, Phys. Rev. 120, 91-98 (1960).
* (3) I.E. Dzyaloshinskii, Sov. Phys. JETP 19, 960-971 (1964).
* (4) U.K. R${\rm\ddot{o}}$ssler, A.N. Bogdanov and C. Pfleiderer, Nature 442, 797-801 (2006).
* (5) A.N. Bogdanov, U.K. R$\ddot{{\rm o}}$ssler and C. Pfleiderer, Phys. B 359, 1162-1164 (2005).
* (6) A.N. Bogdanov and D.A. Yablonskii, Sov. Phys. JETP 68, 101-103 (1989).
* (7) M. Uchida, Y. Onose, Y. Matsui and Y. Tokura, Science 311, 359-361 (2006).
* (8) X. Yu, Y. Onose, N. Kanazawa, J.H. Park, J.H. Han, Y. Matsui, N. Nagaosa, and Y. Tokura, Nature 465, 901-904 (2010).
* (9) S. M${\rm\ddot{u}}$hlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii1, and P. B${\rm\ddot{o}}$ni, Science 323, 915-919 (2009).
* (10) W. Munzer, A. Neubauer, T. Adams, S. Muhlbauer, C. Franz, F. Jonietz, R. Georgii, P. Boni, B. Pedersen, M. Schmidt, A. Rosch, and C. Pfleiderer, Phys. Rev. B 81, 041203(R) (2010).
* (11) X. Yu, M. Mostovoy, Y. Tokunaga, W. Zhang, K. Kimoto, Y. Matsui, Y. Kaneko, N. Nagaosa, and Y. Tokura, PNAS 109 8856 (2012).
* (12) A. Fert, N. Reyren and V. Cros, Nature Reviews 2, 17031 (2017).
* (13) N. Romming, C. Hanneken, M. Menzel, J. E. Bickel, B. Wolter, K. von Bergmann, A. Kubetzka and R. Wiesendanger, Science 341 (6146), 636-639 (2013).
* (14) S. Buhrandt and L. Fritz, Phys. Rev. B 88, 195137 (2013).
* (15) Y. Zhou and M. Ezawa, Nature Comm. 5, 4652 (2014).
* (16) J. Iwasaki, M. Mochizuki and N. Nagaosa, Nature Comm. 4, 1463 (2013).
* (17) S. Banerjee, J. Rowland, O. Erten, and M. Randeria, Phys. Rev. X 4, 031045 (2014).
* (18) U. G${\rm\ddot{u}}$ng${\rm\ddot{o}}$rd${\rm\ddot{u}}$, R. Nepal, O.A. Tretiakov, K. Belashchenko, and A.A. Kovalev, Phys. Rev. B 93, 064428 (2016).
* (19) A. Chacon, A. Bauer, T. Adams, F. Rucker, G. Brandl, R. Georgii, M. Garst, and C. Pfleiderer, Phys. Rev. Lett. 115 267202 (2015).
* (20) I. Levati${\rm\acute{c}}$, P. Pop${\rm\check{c}}$evi${\rm\acute{c}}$, V. ${\rm\check{S}}$urija, A. Kruchkov, H. Berger, A. Magrez, J.S. White, H.M. Ronnow and I. ${\rm\check{Z}}$ivkovi${\rm\acute{c}}$, Scientific Rep. 6, 21347 (2016).
* (21) A. N. Bogdanov, and U. K. R${\ddot{\rm o}}{\rm\beta}$ler, Phys. Rev. Lett., 87, 037203 (2001).
* (22) A. B. Butenko, A. A. Leonov, U. K. Rossler, and A. N. Bogdanov, Phys. Rev. B 82, 052403 (2010).
* (23) S. Seki, Y. Okamura, K. Shibata, R. Takagi, N.D. Khanh, F. Kagawa, T. Arima, and Y. Tokura, Phys. Rev. B 96 220404(R) (2017).
* (24) X. Yu, A. Kikkawa, D. Morikawa, K. Shibata, Y. Tokunaga, Y. Taguchi, and Y. Tokura, Phys. Rev. B 91 054411 (2015).
* (25) R. Ritz, M. Halder, C. Franz, A. Bauer, M. Wagner, R. Bamler, A. Rosch, and C. Pfleiderer, Phys. Rev. B 87, 134424 (2013).
* (26) Y. Shi and J. Wang, Phys.Rev. B 97, 224428 (2018).
* (27) Y. Nii, A. Kikkawa, Y. Taguchi, Y. Tokura, and Y. Iwasa, Phys. Rev. Lett. 113, 267203 (2014).
* (28) Y. Nii, T. Nakajima, A. Kikkawa, Y. Yamasaki, K. Ohishi, J. Suzuki, Y. Taguchi, T. Arima, Y. Tokura, and Y. Iwasa, Nature Comm. 6, 8539 (2015).
* (29) J. Chen, W.P. Cai, M.H. Qin, S. Dong, X.B. Lu, X.S. Gao and J.-M. Liu, Scientific Reports 7, 7392 (2017).
* (30) M. L. Plumer and M. B. Walker, J. Phys. C: Solid State Phys., 15, 7181-7191 (1982).
* (31) E. Franus-Muir, M. L. Plumer and E. Fawcett, J. Phys. C: Solid State Phys., 17, 1107-1141 (1984).
* (32) M. Kataoka, J. Phys. Soc. Japan, 56, 3635-3647 (1987).
* (33) J. Wang, Y. Shi, and M. Kamlah, Phys. Rev. B. 97, 024429(1-7) (2018).
* (34) K. Shibata, J. Iwasaki, N. Kanazawa, S. Aizawa, T. Tanigaki, M. Shirai, T. Nakajima, M. Kubota, M. Kawasaki, H.S. Park, D. Shindo, N. Nagaosa, and Y. Tokura, Nature Nanotech. 10, 589 (2015).
* (35) T. Koretsune, N. Nagaosa, and R. Arita, Scientific Reports 75, 13302 (2015).
* (36) S. A. Osorio, M. B. Sturla, H. D. Rosales, and D. C. Cabra, Phys. Rev. B 100, 220404(R) (2019).
* (37) S. Gao, H. D. Rosales, F. A. G. Albarrac${\acute{\rm i}}$n, V. Tsurkan, G. Kaur, T. Fennell, P. Steffens, M. Boehm, P. ${\check{\rm C}}$ermak, A. Schneidewind, E. Ressouche, D. C. Cabra, C. R${\ddot{\rm u}}$egg and O. Zaharko, Nature, doi.org/10.1038/s41586-020-2716-8, (2020).
* (38) E.Y. Vedmedenko, A. Kubetzka, K. von Bergmann, O. Pietzsch, M. Bode, J. Kirschner, H. P. Oepen, and R. Wiesendanger, Phys. Rev. Lett. 92, 077207 (2004).
* (39) J. Dho, Y. N. Kim, Y. S. Hwang, J. C. Kim, and N. H. Hur, Appl. Phys. Lett. 82, 1434-1436 (2003).
* (40) Y. Takano and H. Koibuchi, Phys. Rev. E , 95, 042411(1-11) (2017).
* (41) E. Proutorov, N. Matsuyama and H. Koibuchi, J. Phys. C 30, 405101(1-13) (2018).
* (42) V. Egorov, O. Maksimova, H. Koibuchi, C. Bernard, J-M. Chenal, O. Lame, G. Diguet, G. Sebald, J-Y. Cavaille and T. Takagi, Phys. Lett. A 396, 127230 (1-5) (2021).
* (43) H. Koibuchi, S. El Hog, V. Egorov, F. Kato and H. T. Diep, J. Phys. Conf. Ser. 1391, 012013 (2019).
* (44) M. Creutz, Quarks, gluons and lattices, (Cambridge University Press, Cambridge, 1983).
* (45) T. Okubo, S. Chung, and H. Kawamura, Phys. Rev. Lett. 108, 017206 (2012).
* (46) H.D. Rosales, D.C. Cabra, and P. Pujol, Phys. Rev. B 92, 214439 (2015).
* (47) P.A. Lebwohl and G. Lasher, Phys. Rev. A 6, 426 (1972).
* (48) N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, and A.H. Teller, J. Chem. Phys. 21, 1087 (1953).
* (49) D.P. Landau, Phys. Rev. B 13, 2997 (1976).
* (50) S. El Hog, A. Bailly-Reyre, and H.T. Diep, J. Mag. Mat. 445 32-38 (2018).
* (51) A.O. Leonov, T. L. Monchesky, N. Romming, A. Kubetzka, A.N. Bogdanov and R. Wiesendanger, New J. Phys. 18, 065003 (2016).
* (52) M. Janoschek, M. Garst, A. Bauer, P. Krautscheid, R. Georgii, P. B${\ddot{\rm o}}$ni, and C. Pfleiderer, Phys. Rev. B 87, 134407 (2013).
* (53) S. El Hog, F. Kato, H. Koibuchi, H.T. Diep, J. Mag. Mag. Mat. 498, 166095(1-14) (2020).
* (54) H.T. Diep and H. Koibuchi, Frustrated Magnetic Thin Films: Spin Waves and Skyrmion in Frustrated Spin Systems, 3rd Edition, Ed. H.T. Diep, (World Scientific, 2020).
* (55) S.-S. Chern, Finsler Geometry Is Just Riemannian Geometry without the Quadratic Restriction, In Notices of the AMS, pp. 959-963 (1996).
* (56) M. Matsumoto, Keiryou Bibun Kikagaku (in Japanese), (Shokabo, Tokyo 1975).
* (57) D. Bao, S. -S. Chern, Z. Shen, An Introduction to Riemann-Finsler Geometry, GTM 200, (Springer, New York, 2000).
* (58) H. Koibuchi and H. Sekino, Physica A, 393, 37-50 (2014).
* (59) Y. Gaididei, V. P. Kravchuk, and D. D. Sheka, Phys. Rev. Lett., 112, 257203 (2014).
* (60) M. Doi and S.F. Edwards, The Theory of Polymer Dynamics. Oxford University Press: Oxford, United Kingdom, 1986.
# Transverse Shifts and Time Delays of Spatiotemporal Vortex Pulses
Reflected and Refracted at a Planar Interface
Maxim Mazanov V. N. Karazin Kharkiv National University, Kharkiv, 61022,
Ukraine Danica Sugic Theoretical Quantum Physics Laboratory, RIKEN Cluster
for Pioneering Research, Wako-shi, Saitama 351-0198, Japan Miguel A. Alonso
CNRS, Centrale Marseille, Institut Fresnel, Aix Marseille University, UMR
7249, 13397 Marseille CEDEX 20, France The Institute of Optics, University of
Rochester, Rochester, NY 14627, USA Franco Nori RIKEN Center for Quantum
Computing, Wako-shi, Saitama 351-0198, Japan Theoretical Quantum Physics
Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198,
Japan Physics Department, University of Michigan, Ann Arbor, Michigan
48109-1040, USA Konstantin Y. Bliokh Theoretical Quantum Physics Laboratory,
RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan
###### Abstract
Transverse (Hall-effect) and Goos–Hänchen shifts of light beams
reflected/refracted at planar interfaces are important wave phenomena, which
can be significantly modified and enhanced by the presence of intrinsic
orbital angular momentum (OAM) in the beam. Recently, optical spatiotemporal
vortex pulses (STVPs) carrying a purely transverse intrinsic OAM were
predicted theoretically and generated experimentally. Here we consider the
reflection and refraction of such pulses at a planar isotropic interface. We
find theoretically and confirm numerically novel types of the OAM-dependent
transverse and longitudinal pulse shifts. Remarkably, the longitudinal shifts
can be regarded as time delays, which appear, in contrast to the well-known
Wigner time delay, without temporal dispersion of the reflection/refraction
coefficients. Such time delays allow one to realize OAM-controlled slow
(subluminal) and fast (superluminal) pulse propagation without medium
dispersion. These results can have important implications in various problems
involving scattering of localized vortex states carrying transverse OAM.
## I Introduction
Small wavepacket shifts and time delays are currently attracting considerable
attention due to their noticeable roles in nanoscience. The first example of
such effects is the Goos–Hänchen shift of the beam reflected/refracted at a
planar interface Goos and Hänchen (1947); Artmann (1948); Merano _et al._
(2009); Jayaswal _et al._ (2013); Bliokh and Aiello (2013). This shift is
proportional to the wavevector-gradient of the logarithm of the reflection
coefficient.
The temporal counterpart of this spatial shift is the Wigner time delay of a
wavepacket scattered by a frequency-dependent potential Wigner (1954); Chiao
and Steinberg (1997); de Carvalho and Nussenzveig (2002); Winful (2006); Asano
_et al._ (2016). Correspondingly, this delay is given by the frequency
gradient of the logarithm of the scattering coefficient.
Another example of beam shifts is the transverse Imbert–Fedorov shift
associated with the spin-Hall effect (i.e., a transverse circular-
polarization-induced shift of the reflected/refracted beam) Imbert (1972);
Schilling (1965); Fedoseyev (1988); Onoda _et al._ (2004); Bliokh and Bliokh
(2006); Hosten and Kwiat (2008); Bliokh and Aiello (2013); Götte and Dennis
(2012); Töppel _et al._ (2013); Bliokh _et al._ (2015, 2016); Ling _et al._
(2017). This shift has a more complicated origin associated with the spin
angular momentum carried by the wave, spin-orbit interaction, and conservation
of the total angular momentum component normal to the interface.
All these shifts and time delays have been studied mostly for Gaussian-like
wavepackets and beams, and all have a typical scale of the wavelength or wave
period, which can be enhanced up to the beam-width or pulse-length scale using
the weak-measurement technique Hosten and Kwiat (2008); Götte and Dennis
(2012); Töppel _et al._ (2013); Jayaswal _et al._ (2013); Bliokh _et al._
(2016); Asano _et al._ (2016).
It has also been shown that the beam shifts can be modified significantly by
the presence of the intrinsic orbital angular momentum (OAM) in optical vortex
beams Fedoseyev (2001); Dasgupta and Gupta (2006); Fedoseyev (2008); Okuda and
Sasada (2008); Bliokh _et al._ (2009); Bekshaev (2009); Merano _et al._
(2010); Dennis and Götte (2012); Bliokh and Nori (2012a); Bliokh and Aiello
(2013). This enhances the Gaussian-beam shifts by the factor of the OAM
quantum number $\ell$ and also produces new types of shifts.
To the best of our knowledge, the role of the intrinsic OAM and vortices in
time delays has not been studied so far. This is because optical vortex beams
are usually monochromatic states unbounded in the longitudinal direction,
while time delays make sense only for finite-length wavepackets.
Recently, a novel type of localized pulses carrying transverse intrinsic OAM —
spatiotemporal vortex pulses (STVPs) — was described theoretically Sukhorukov
and Yangirova (2005); Dror and Malomed (2011); Bliokh and Nori (2012b); Bliokh
(2021, in press) and generated experimentally Jhajj _et al._ (2016); Hancock
_et al._ (2019); Chong _et al._ (2020); Hancock _et al._ (2021); Wan _et
al._ (2021); Wang _et al._ (2021) (see also Ref. Dallaire _et al._ (2009)
for the zeroth-order Bessel STVP without OAM). Such STVPs have geometrical and
OAM properties different from monochromatic vortex beams. (Note that STVPs
should not be confused with the fundamentally different space-time wavepackets
considered in Refs. Kondakci and Abouraddy (2017, 2019); Turunena and Friberg
(2010).) Therefore, it is natural to expect that these qualitatively new
objects behave differently in problems involving beam shifts and time delays.
In this work, we consider reflection and refraction of an optical STVP at a
planar isotropic interface. We predict theoretically and confirm numerically a
number of novel spatial shifts and time delays that are controlled by the
value and orientation of the intrinsic OAM of the pulse. Remarkably, time
delays appear in this system without any frequency dependence of the
reflection/refraction coefficients, thereby allowing one to realize slow
(subluminal) and fast (superluminal) pulse propagation without medium
dispersion. This is in sharp contrast to Wigner time delays and is produced by
the coupling of spatial and temporal degrees of freedom in spatiotemporal
vortices. Our results can have important implications in various problems
involving scattering of localized vortex states with transverse OAM, both
classical and quantum.
## II Laguerre-Gaussian STVPs
We first introduce a STVP propagating along the $z$-axis and carrying
transverse OAM along the $y$-axis. For this, akin to monochromatic Laguerre-
Gaussian (LG) beams Allen _et al._ (2003); Andrews and Babiker (2012), we
consider a LG-type plane-wave spectrum in the $(z,x)$ plane with the central
wavevector ${\bf k}_{0}=k_{0}\bar{\bf z}$ (the overbar denotes the unit vector
of the corresponding axis) and zero radial quantum number (Fig. 1):
$\tilde{\psi}\left({\bf{k}}\right)\propto{\left[{\gamma\left({{k_{z}}-{k_{0}}}\right)+i\,{\rm
sgn}(\ell){k_{x}}}\right]^{\left|\ell\right|}}e^{-\tfrac{\Delta^{2}}{4}\left[{{\gamma^{2}}{{\left({{k_{z}}-{k_{0}}}\right)}^{2}}+k_{x}^{2}}\right]}.$
(1)
Here, $\ell$ is the integer vortex charge, $\gamma$ is the factor determining
the ellipticity of the STVP profile in the $(z,x)$ plane, and $\Delta$ is the
$x$-width of the pulse ($\gamma\Delta$ being its $z$-length). Note that we do
not include a distribution over $k_{y}$, because for our goals it is
sufficient to consider pulses unbounded along the OAM direction. If needed, an
additional Gaussian distribution over $k_{y}$ can provide localization along
the $y$-axis.
The real-space form of the STVP (1) is given by the Fourier integral
$\psi\left({{\bf{r}},t}\right)\propto\iint{\tilde{\psi}\left({\bf{k}}\right){e^{i{\bf{k}}\cdot{\bf{r}}-i\omega\left({\bf{k}}\right)t}}}d{k_{x}}d{k_{z}}$,
where $\omega\left({\bf{k}}\right)=kc$. For the purpose of this work it is
sufficient to use a paraxial approximation, $k_{0}\Delta\gg 1$, in which only
linear deviations in the transverse wavevector components are considered. This
leads to the following expression for a paraxial LG-type STVP where
diffraction is ignored (Fig. 1):
$\psi\propto{\left[{{\gamma^{-1}}\zeta+i\,{\rm
sgn}(\ell)x}\right]^{\left|\ell\right|}}\\!\exp\\!\left[{-\frac{\left({{\gamma^{-2}}{\zeta^{2}}+{x^{2}}}\right)}{{{\Delta^{2}}}}+i{k_{0}}\zeta}\right]\\!,$
(2)
where $\zeta=z-ct$ is the pulse-accompanying coordinate. Closed-form real-
space expressions that incorporate diffraction both in the paraxial and
nonparaxial regimes will be described in a separate work.
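For illustration, a minimal Python sketch of the pulse (2) at $t=0$ (diffraction ignored), which reproduces the phase-intensity map of Fig. 1; the parameter values mimic the figure ($\ell=1$, $k_{0}\Delta=0.7$, $\gamma=1.5$), and the plotting recipe is our own.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def stvp(zeta, x, ell=1, gamma=1.5, Delta=1.0, k0=0.7):
    """Paraxial LG-type STVP of Eq. (2); k0*Delta = 0.7 as in Fig. 1."""
    core = (zeta / gamma + 1j * np.sign(ell) * x) ** abs(ell)
    return core * np.exp(-(zeta**2 / gamma**2 + x**2) / Delta**2 + 1j * k0 * zeta)

zeta, x = np.meshgrid(np.linspace(-3, 3, 400), np.linspace(-3, 3, 400))
psi = stvp(zeta, x)
hue = (np.angle(psi) + np.pi) / (2 * np.pi)   # color: phase Arg(psi)
val = (np.abs(psi) / np.abs(psi).max()) ** 2  # brightness: intensity |psi|^2
rgb = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), val]))
plt.imshow(rgb, origin="lower", extent=[-3, 3, -3, 3])
plt.xlabel(r"$\zeta = z - ct$"); plt.ylabel(r"$x$"); plt.show()
```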
Figure 1: The phase-intensity distributions of the momentum-space (left) and
real-space (right) wavefunctions of the STVP (1) and (2) with $\ell=1$,
$k_{0}\Delta=0.7$ and $\gamma=1.5$. The brightness is proportional to the
intensity $|\psi|^{2}$, while the color indicates the phase ${\rm Arg}(\psi)$
Thaller (2000).
For our purposes, the key features of such STVPs are: (i) their spatiotemporal
vortex structure near the center:
$\psi\propto{\left[{{\gamma^{-1}}\zeta+i\,{{\rm
sgn}}\\!\left(\ell\right)x}\right]^{\left|\ell\right|}}e^{ik_{0}\zeta}$, and
(ii) their normalized integral intrinsic OAM Bliokh and Nori (2012b); Bliokh
(2021, in press):
$\left\langle{{\bf L}}\right\rangle=\frac{\iint{{\rm Im}\left[{{\psi^{*}}({\bf
r}\times\\!{\bm{\nabla}})\psi}\right]}\,dxdz}{\iint{{\psi^{*}}\psi}\,dxdz}=\frac{{\gamma+{\gamma^{-1}}}}{2}\,\ell\,\bar{\bf
y}\,.$ (3)
The above equations are written for a scalar wavefunction $\psi$. To consider
a polarized optical STVP, one has to multiply each plane wave in the spectrum
(1) by the corresponding electric-field polarization vector ${\bf e}({\bf
k})\bot{\bf k}$. In the paraxial regime this does not affect the shape of the
pulse and its OAM considerably, but polarization plays a crucial role in the
Goos-Hänchen and spin-Hall effects Bliokh and Aiello (2013); Götte and Dennis
(2012); Töppel _et al._ (2013).
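Before proceeding, Eq. (3) can be verified numerically by discretizing the integrals; the following sketch (our own, with illustrative parameters) reproduces $(\gamma+\gamma^{-1})\,\ell/2$ to a few digits on a sufficiently fine grid.

```python
import numpy as np

ell, gamma, Delta, k0 = 2, 1.5, 1.0, 5.0
z = np.linspace(-6, 6, 600)
x = np.linspace(-6, 6, 600)
Z, X = np.meshgrid(z, x, indexing="ij")
psi = (Z / gamma + 1j * np.sign(ell) * X) ** abs(ell) * np.exp(
    -(Z**2 / gamma**2 + X**2) / Delta**2 + 1j * k0 * Z)   # Eq. (2) at t = 0

dpsi_dz = np.gradient(psi, z, axis=0)
dpsi_dx = np.gradient(psi, x, axis=1)
Ly = np.sum(np.imag(np.conj(psi) * (Z * dpsi_dx - X * dpsi_dz)))  # (r x grad)_y
Ly /= np.sum(np.abs(psi) ** 2)
print(Ly, (gamma + 1 / gamma) / 2 * ell)   # both ~ 2.1667
```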
## III Reflection/refraction of a STVP at an interface
We now consider reflection/refraction of a paraxial STVP at a planar isotropic
(e.g., dielectric) interface. The geometry of the problem is shown in Fig. 2.
The interface is $Z=0$, with the $Z$-axis being directed towards the second
medium. The propagation direction of the incident pulse is determined by the
central wavevector ${\bf k}_{0}=k_{0}(\bar{\bf Z}\cos\theta+\bar{\bf
X}\sin\theta)\equiv k_{0}\bar{\bf z}$. According to Snell’s law, the reflected
and transmitted pulses have the central wavevectors ${\bf
k}^{r}_{0}=k_{0}(-\bar{\bf Z}\cos\theta+\bar{\bf X}\sin\theta)\equiv
k_{0}\bar{\bf z}^{r}$ and ${\bf k}^{t}_{0}=k_{0}^{\prime}(\bar{\bf
Z}\cos\theta^{\prime}+\bar{\bf X}\sin\theta^{\prime})\equiv
k_{0}^{\prime}\bar{\bf z}^{t}$ ($\sin\theta^{\prime}=n^{-1}\sin\theta$,
$k_{0}^{\prime}=nk_{0}$, where $n$ is the relative refractive index of the
second medium), respectively. Here, as usual in beam-shift problems, we use
the accompanying coordinate frames $(x,y,z)$ and $(x^{r,t},y,z^{r,t})$ for the
incident and reflected/transmitted pulses, Fig. 2.
Figure 2: Schematics of the reflection and refraction of a STVP at a planar
interface. The incident, reflected, and transmitted pulses, together with
their accompanying coordinate frames and intrinsic OAM are shown schematically
for the two geometries (A) and (B) (see details in the text). The longitudinal
shift (time delay) $\langle\zeta\rangle$ and angular shift $\langle
k_{x}\rangle$ are shown for the reflected and transmitted pulses in (a),
whereas the transverse shift $\langle y\rangle$ and angular shift $\langle
k_{y}\rangle$ are shown for the transmitted and reflected pulses in (b).
In contrast to the monochromatic-beam-shift problems, where the orientation of
the OAM is fixed by the beam propagation direction, in our problem the
transverse OAM can have different orientations with respect to the $(x,z)$
plane of incidence. We will consider two basic cases shown in Fig. 2:
(A) The incident STVP is localized in the $(x,z)$ plane, and the intrinsic OAM
$\left\langle{{\bf L}}\right\rangle\parallel\bar{\bf y}$.
(B) The incident STVP is localized in the $(y,z)$ plane and $\left\langle{{\bf
L}}\right\rangle\parallel\bar{\bf x}$.
To describe the main transformations of the reflected and refracted STVPs,
note that the $y$-components of the wavevectors in their spectra are
conserved, $k_{y}^{r,t}=k_{y}$, while the $x$-components in the corresponding
accompanying frames are related as $k_{x}^{r}=-k_{x}$ and
$k_{x}^{t}=(\cos\theta/\cos\theta^{\prime})k_{x}$ Bliokh and Aiello (2013). In
addition, the $z$-components of the wavevectors of the transmitted pulse are
$k_{z}^{t}\simeq nk_{z}$. From this, one can find that the vortex is inverted
in the reflected pulse in the case (A) but not (B), and its intrinsic OAM
becomes (see Fig. 2):
$\displaystyle\left\langle{{{\bf{L}}^{r}}}\right\rangle_{A}=-\left\langle{\bf{L}}\right\rangle=-\frac{{\gamma+{\gamma^{-1}}}}{2}\,\ell\,{\bf{\bar{y}}}\,,\qquad\left\langle{{{\bf{L}}^{r}}}\right\rangle_{B}=\frac{{\gamma+{\gamma^{-1}}}}{2}\,\ell\,{\bf{\bar{x}}}^{r}.$ (4)
Here and hereafter, the subscripts $A$ and $B$ mark the quantities related to
the cases (A) and (B), respectively.
For the transmitted STVP, the above transformations of the wavevector
components stretch the $x^{t}$-width of the pulse by a factor of
$\cos\theta^{\prime}/\cos\theta$ and squeeze its longitudinal length by a
factor of $1/n$. Therefore, the intrinsic OAM of the refracted pulses becomes
$\displaystyle\left\langle{{{\bf{L}}^{t}}}\right\rangle_{A}=\frac{{\gamma^{\prime}_{A}+{\gamma^{\prime-1}_{A}}}}{2}\,\ell\,{\bf{\bar{y}}},\quad\gamma^{\prime}_{A}=\frac{\cos\theta}{n\cos\theta^{\prime}}\gamma,\qquad\left\langle{{{\bf{L}}^{t}}}\right\rangle_{B}=\frac{{\gamma^{\prime}_{B}+{\gamma^{\prime-1}_{B}}}}{2}\,\ell\,{\bf{\bar{x}}}^{t},\quad\gamma^{\prime}_{B}=\frac{\gamma}{n}.$ (5)
Equations (4) and (5) show that the transformations of the transverse
intrinsic OAM in the case (A) are similar to those of the longitudinal OAM of
monochromatic vortex beams Bliokh _et al._ (2009); Bliokh and Aiello (2013),
only with the additional $n^{-1}$ factor in $\gamma^{\prime}_{A}$. In turn,
the case (B) differs considerably because the intrinsic OAM and vortex do not
flip in the reflected pulse.
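The OAM bookkeeping of Eqs. (4) and (5) can be summarized in a short sketch (the function is our own wrapper around Snell's law; the parameter values are illustrative):

```python
import numpy as np

def transmitted_oam(ell, gamma, theta, n, case="A"):
    """Magnitude of the intrinsic OAM of the transmitted STVP, Eq. (5)."""
    theta_t = np.arcsin(np.sin(theta) / n)   # Snell's law
    if case == "A":
        g = gamma * np.cos(theta) / (n * np.cos(theta_t))
    else:                                    # case (B)
        g = gamma / n
    return (g + 1.0 / g) / 2.0 * ell

ell, gamma, n, theta = 1, 1.5, 1.5, np.deg2rad(45)
print((gamma + 1 / gamma) / 2 * ell,                 # incident OAM, Eq. (3)
      transmitted_oam(ell, gamma, theta, n, "A"),
      transmitted_oam(ell, gamma, theta, n, "B"))
```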
## IV Transverse shifts and time delays
We are now in a position to calculate the small shifts of reflected/refracted
STVPs. Rigorous calculations can be performed by applying the standard
Fresnel-Snell formulas to each plane wave in the incident pulse spectrum; this
is realized numerically in the next section. Here, akin to Ref. Bliokh _et
al._ (2009), we derive all the OAM-dependent shifts using general
considerations.
First of all, we assume that paraxial polarized optical STVPs experience all
the polarization-dependent shifts known for Gaussian wave beams or packets,
i.e., angular and spatial Goos-Hänchen and spin-Hall shifts $\left\langle
k^{t,r}_{x}\right\rangle_{0}$, $\left\langle x^{t,r}\right\rangle_{0}$,
$\left\langle k^{t,r}_{y}\right\rangle_{0}$, and $\left\langle
y^{t,r}\right\rangle_{0}$ Bliokh and Aiello (2013); Götte and Dennis (2012);
Töppel _et al._ (2013), where the subscript “0” indicates that the shifts are
calculated for Gaussian states with $\ell=0$. In addition to these shifts, we
will determine the $\ell$-dependent shifts induced by the transverse intrinsic
OAM. There are three types of such shifts.
The first type is related to the conservation of the $Z$-component of the
total angular momentum in the problem and can be associated with the orbital-
Hall effect of light Bliokh and Aiello (2013); Bliokh (2006). In the case (A)
the intrinsic OAM has only the $y$-component, and the conservation law is
satisfied trivially. In the case (B), the incident and reflected pulses have
the same $Z$-components of the normalized intrinsic OAM, $\left\langle
L_{Z}\right\rangle=\left\langle L^{r}_{Z}\right\rangle$, Eqs. (3) and (4),
while the transmitted pulse has a different OAM component: $\left\langle
L_{Z}\right\rangle\neq\left\langle L^{t}_{Z}\right\rangle$, Eqs. (3) and
(5). Similarly to the refraction of monochromatic vortex beams Bliokh and
Aiello (2013); Bliokh _et al._ (2009); Fedoseyev (2008); Merano _et al._
(2010), this imbalance between the intrinsic OAM of the incident and
transmitted pulses should be compensated by the transverse $y$-shift of the
refracted pulse producing an extrinsic OAM $\langle L_{Z}^{t}\rangle^{\rm
ext}=\langle y^{t}\rangle\langle k_{X}^{t}\rangle\simeq\langle
y^{t}\rangle\,nk_{0}\sin\theta^{\prime}$. From here, the refracted STVP in the
case (B) should undergo an additional transverse shift (see Fig. 2)
$\left\langle{{y^{t}}}\right\rangle_{B}=\frac{{\left\langle{L_{Z}^{t}}\right\rangle-\left\langle{{L_{Z}}}\right\rangle}}{{n{k_{0}}\sin\theta^{\prime}}}=\frac{{\gamma\ell}}{{2{k_{0}}}}\left({{n^{-2}}-1}\right).$
(6)
In contrast to the analogous shift for refracted monochromatic vortex beams,
the shift (6) is independent of the angle $\theta$ (apart from the small
vicinity of the normal incidence $\theta=0$, which is singular for the
transverse-shift problem). The typical scale of this shift is the wavelength,
although it can be enhanced by high vortex charges $\ell$ or ellipticity
$\gamma\gg 1$ (narrow long pulses).
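Since Eq. (6) is fully explicit, its magnitude is easy to estimate. The sketch below (our illustration with arbitrary parameters) confirms the wavelength scale of the shift and its enhancement at large $\gamma$:

```python
import numpy as np

def y_shift_B(ell, gamma, n, k0):
    """Orbital-Hall transverse shift of the refracted STVP in case (B), Eq. (6)."""
    return gamma * ell / (2.0 * k0) * (n**-2 - 1.0)

lam = 800e-9                                  # vacuum wavelength, arbitrary choice
k0 = 2.0 * np.pi / lam
for gamma in (0.4, 1.0, 10.0):
    dy = y_shift_B(ell=1, gamma=gamma, n=1.5, k0=k0)
    print(f"gamma = {gamma:5.1f}:  <y^t>_B = {dy * 1e9:9.2f} nm")
# The shift is independent of the incidence angle and remains subwavelength
# unless enhanced by a large ell or by gamma >> 1.
```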
The second type of $\ell$-dependent shift is related to the angular Goos-
Hänchen and spin-Hall shifts $\langle k_{x,y}\rangle$, see Fig. 2. As has been
shown for monochromatic vortex beams, in the presence of a vortex these shifts
acquire an additional factor of $\left(1+|\ell|\right)$ Bliokh _et al._
(2009); Bliokh and Aiello (2013); Merano _et al._ (2010), so that the
additional shifts are:
$\left\langle k^{t,r}_{x}\right\rangle_{A}=|\ell|\left\langle
k^{t,r}_{x}\right\rangle_{0},\quad\left\langle
k^{t,r}_{y}\right\rangle_{B}=|\ell|\left\langle
k^{t,r}_{y}\right\rangle_{0}\,.$ (7)
The typical scale of these angular shifts is the inverse Rayleigh range $\sim
1/(k_{0}\Delta^{2})$, and these shifts are independent of the ellipticity
$\gamma$.
Finally, the third type of $\ell$-dependent shifts is related to the cross-
coupling between different Cartesian degrees of freedom in a vortex. Below we
use reasoning similar to that for vortex beams in Refs. Bliokh and Aiello
(2013); Bliokh _et al._ (2009). In the case (A), the spatiotemporal vortices
in the reflected and transmitted pulses have the forms
$\propto\left[-\gamma^{-1}\zeta^{r}+i\,\mathrm{sgn}(\ell)\,x^{r}\right]^{|\ell|}$ and
$\propto\left[\gamma^{\prime-1}_{A}\zeta^{t}+i\,\mathrm{sgn}(\ell)\,x^{t}\right]^{|\ell|}$, respectively,
where $\zeta^{r,t}=z^{r,t}-ct$ and $c$ is the speed of light in the
corresponding medium. Among other polarization-dependent shifts, these pulses
experience shifts in momentum space due to the angular Goos-Hänchen effect,
which can be regarded as imaginary shifts in real space Aiello and Woerdman
(2008); Bliokh _et al._ (2009); Bliokh and Aiello (2013):
${\left\langle{k_{x}^{r}}\right\rangle_{0}}\to\delta{x^{r}}=-i\dfrac{\Delta^{2}}{2}{\left\langle{k_{x}^{r}}\right\rangle_{0}}$
and
${\left\langle{k_{x}^{t}}\right\rangle_{0}}\to\delta{x^{t}}=-i\dfrac{\Delta^{2}}{2}{\left({\dfrac{{\cos\theta^{\prime}}}{{\cos\theta}}}\right)^{2}}{\left\langle{k_{x}^{t}}\right\rangle_{0}}$.
Substituting these shifts into the vortex forms of the reflected and transmitted
pulses, we find that the imaginary $x$-shifts produce real $\ell$-dependent
$\zeta$-shifts as follows (see Fig. 2):
$\left\langle\zeta^{r}\right\rangle_{A}=-\ell\,\gamma\,\frac{\Delta^{2}}{2}\left\langle k_{x}^{r}\right\rangle_{0},\qquad\left\langle\zeta^{t}\right\rangle_{A}=\ell\,\frac{\gamma}{n}\,\frac{\Delta^{2}}{2}\,\frac{\cos\theta^{\prime}}{\cos\theta}\left\langle k_{x}^{t}\right\rangle_{0}.$ (8)
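Before turning to case (B), this mechanism can be verified numerically. In the toy check below (ours, $\ell=1$, arbitrary units; the amplitude envelope is $\exp(-x^{2}/2w^{2})$, so the paper's width convention corresponds to $\Delta^{2}=2w^{2}$), the angular Goos-Hänchen effect is modeled by tilting the $k_{x}$-spectrum with a real amplitude factor $e^{ck_{x}}$, as produced by a $k_{x}$-dependent reflection coefficient; this is precisely what acts as an imaginary shift in real space. The output reproduces both the $(1+|\ell|)$ enhancement of Eq. (7) and the magnitude of the $\zeta$-shift of Eq. (8) (the overall sign depends on the reflected-frame conventions):

```python
import numpy as np

# Toy check of the imaginary-shift mechanism for ell = 1, case (A), arbitrary units.
w, gamma, ell = 1.0, 2.0, 1
x = np.linspace(-12, 12, 512)
z = np.linspace(-24, 24, 512)                       # zeta axis
X, Z = np.meshgrid(x, z, indexing="ij")
env = np.exp(-X**2 / (2 * w**2) - Z**2 / (2 * (gamma * w) ** 2))
psi = (-Z / gamma + 1j * np.sign(ell) * X) ** abs(ell) * env

# A kx-dependent reflection amplitude ~ exp(c*kx) tilts the spectrum (the origin
# of the angular Goos-Hanchen shift) and is equivalent to an imaginary x-shift.
c = 0.02
kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
tilt = np.exp(c * kx)[:, None]
spec_v = np.fft.fft(psi, axis=0) * tilt             # vortex pulse
spec_0 = np.fft.fft(env, axis=0) * tilt             # ell = 0 reference
psi_v = np.fft.ifft(spec_v, axis=0)

def k_centroid(spec):
    p = np.abs(spec) ** 2
    return np.sum(kx[:, None] * p) / np.sum(p)

I = np.abs(psi_v) ** 2
z_mean = np.sum(Z * I) / np.sum(I)
kx0 = k_centroid(spec_0)                            # <kx>_0 of the Gaussian reference
print(f"<kx>/<kx>_0     = {k_centroid(spec_v) / kx0:.3f}   (Eq. (7): 1+|l| = {1 + abs(ell)})")
print(f"<zeta>          = {z_mean:+.5f}")
print(f"-l g w^2 <kx>_0 = {-ell * gamma * w**2 * kx0:+.5f}   (Eq. (8) with Delta^2 = 2 w^2)")
```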
Applying analogous considerations to the case (B), with the reflected and
transmitted vortices $\propto\left[y+i\gamma^{-1}\,\mathrm{sgn}(\ell)\,\zeta^{r}\right]^{|\ell|}$ and
$\propto\left[y+i\gamma_{B}^{\prime-1}\,\mathrm{sgn}(\ell)\,\zeta^{t}\right]^{|\ell|}$, and angular
Hall-effect shifts ${\left\langle{k_{y}^{r,t}}\right\rangle_{0}}\to\delta
y^{r,t}=-i\dfrac{{{\Delta^{2}}}}{2}{\left\langle{k_{y}^{r,t}}\right\rangle_{0}}$,
where $\Delta$ is the pulse width in the $y$-direction, we obtain
$\left\langle\zeta^{r}\right\rangle_{B}=-\ell\,\gamma\,\frac{\Delta^{2}}{2}\left\langle k_{y}^{r}\right\rangle_{0},\qquad\left\langle\zeta^{t}\right\rangle_{B}=\ell\,\frac{\gamma}{n}\,\frac{\Delta^{2}}{2}\left\langle k_{y}^{t}\right\rangle_{0}.$ (9)
Equations (8) and (9) describe a remarkable, qualitatively novel phenomenon:
longitudinal shifts of STVPs reflected/refracted by a planar interface. These
$\zeta$-shifts are equivalent to time delays $\left\langle\delta
t\right\rangle=-\left\langle{{\zeta}}\right\rangle/c$. In contrast to the
Wigner time delays, produced by the temporal dispersion (frequency dependence)
of the scattering potential Wigner (1954); Chiao and Steinberg (1997); de
Carvalho and Nussenzveig (2002); Winful (2006); Asano _et al._ (2016), here
the time delays appear without any temporal dispersion. The angular Goos-
Hänchen effect ${\left\langle{k_{x}^{r,t}}\right\rangle_{0}}$ originates from
the spatial dispersion (wavevector dependence) of the Fresnel
reflection/transmission coefficients, while the angular spin-Hall shift
${\left\langle{k_{y}^{r,t}}\right\rangle_{0}}$ is a purely geometric
phenomenon which does not require any dispersion Bliokh _et al._ (2015).
Importantly, such time delays allow one to realize slow (subluminal,
$\left\langle{{\zeta}}\right\rangle<0$) and fast (superluminal,
$\left\langle{{\zeta}}\right\rangle>0$) pulse propagation without any
dispersion in optical media. Unlike previous approaches controlling slow/fast
light via properties of the medium, we can control propagation time via
internal spatiotemporal properties of the pulse. Note, however, that, in
contrast to the wave packets in Ref. Kondakci and Abouraddy (2019), the sub-
or superluminal propagation of the pulses studied here is induced by the
Fresnel-Snell reflection/refraction at an interface rather than by tailoring
the pulse to have a free-space group velocity differing from $c$.
Equations (8) and (9) show that these novel OAM-dependent time delays are a
rather universal phenomenon: they appear in both reflected and transmitted
pulses in both cases (A) and (B). It is natural to expect that such time
delays will appear in a variety of systems, both classical and quantum,
involving scattering of a spatiotemporal vortex with the transverse OAM.
The typical magnitude of the longitudinal shifts (8) and (9) is the
wavelength. However, angular shifts $\left\langle
k^{r}_{x,y}\right\rangle_{0}$ of the reflected pulses, and hence the
corresponding new shifts (7)–(9), are enhanced resonantly for near-$p$
polarization in the vicinity of the Brewster angle of incidence
$\theta_{B}=\tan^{-1}(n)$ Merano _et al._ (2009); Bliokh and Bliokh (2006);
Dasgupta and Gupta (2006); Qin _et al._ (2009) (see Fig. 4 below). This is a
general phenomenon of the weak-measurement amplification of shifts for
wavepackets scattered with a near-zero amplitude Asano _et al._ (2016);
Gorodetski _et al._ (2012); Götte and Dennis (2013). The maximum weak-
measurement-amplified shift is comparable with the pulse size in the
corresponding dimension, which corresponds to the amplification factor $\sim
k_{0}\Delta\gg 1$.
Figure 3: Theoretically calculated (curves) and numerically calculated
(symbols) shifts of the reflected ($r$) and transmitted ($t$) STVPs in the
cases (A) and (B) from Fig. 2 as functions of the angle of incidence $\theta$.
The shifts are given by the sums of previously known polarization-induced
contributions at $\ell=0$ (shown by pale dashed curves) and OAM-induced
contributions Eqs. (6)–(9). Parameters are: $\ell=1$, $n=1.5$, $\gamma=0.4$,
$k_{0}\Delta=500$, and ${\bf
e}_{0}\equiv(e_{x},e_{y})=\left(1/\sqrt{3},(1-i)/\sqrt{3}\right)$. The density
plot shows an example of the deformation of the transmitted pulse in the case
(B) with $k_{0}\Delta=1$ and $\gamma=2.5$ (to enhance the shifts with respect
to the pulse size) and its centroid right after the refraction.
## V Numerical calculations
To verify the above theoretical derivations, we performed numerical
calculations of the reflection/refraction of polarized STVPs at a dielectric
interface by applying the exact Fresnel-Snell formulas to each plane wave in the
incident pulse spectrum
$\tilde{\bf{E}}({\bf{k}})={\bf{e}}({\bf{k}})\tilde{\psi}({\bf{k}})$. In the
paraxial approximation, this is equivalent to applying an effective
wavevector-dependent Jones matrix ${{\hat{T}}^{r,t}}({\bf{k}})$ to the
polarization of the central plane wave ${\bf e}_{0}={\bf e}({\bf k}_{0})$
Bliokh and Aiello (2013); Götte and Dennis (2012); Töppel _et al._ (2013), so
that the reflected and transmitted pulse spectra become
$\tilde{{{\bf{E}}}}^{r,t}({\bf{k}})\simeq{{\hat{T}}^{r,t}}({\bf{k}})\,{\bf{e}}_{0}\,\tilde{\psi}({\bf{k}})$.
After that, the spatial and angular shifts are calculated as expectation
values of the corresponding position and momentum operators in the momentum
representation:
$\left\langle y^{r,t}\right\rangle=\dfrac{\left\langle\tilde{\bf E}^{r,t}\right|i\,\partial/\partial k_{y}^{r,t}\left|\tilde{\bf E}^{r,t}\right\rangle}{\left\langle\tilde{\bf E}^{r,t}\big|\tilde{\bf E}^{r,t}\right\rangle},\qquad\left\langle\zeta^{r,t}\right\rangle=\dfrac{\left\langle\tilde{\bf E}^{r,t}\right|i\,\partial/\partial k_{z}^{r,t}\left|\tilde{\bf E}^{r,t}\right\rangle}{\left\langle\tilde{\bf E}^{r,t}\big|\tilde{\bf E}^{r,t}\right\rangle},\qquad\left\langle k_{x,y}^{r,t}\right\rangle=\dfrac{\left\langle\tilde{\bf E}^{r,t}\right|k_{x,y}^{r,t}\left|\tilde{\bf E}^{r,t}\right\rangle}{\left\langle\tilde{\bf E}^{r,t}\big|\tilde{\bf E}^{r,t}\right\rangle},$
where the inner product involves integration over the corresponding wavevector
components: $\left(k_{x}^{r,t},k_{z}^{r,t}\right)$ and
$\left(k_{y},k_{z}^{r,t}\right)$ in the cases (A) and (B), respectively.
In doing so, it is sufficient to use the approximation linear in the
transverse wavevector components for all shifts apart from
$\left\langle{{y^{t}}}\right\rangle$, Eq. (6). For this shift it is necessary
to consider the second-order correction from Snell’s transformation of the
wavevectors, see Eqs. (15) and (19) in Ref. Fedoseyev (2001) for a similar
peculiarity in monochromatic vortex beams. In our case, this second-order
correction is given by ${k_{z}}-{k_{0}}\simeq
k_{z}^{t}-n{k_{0}}-\dfrac{{k_{y}^{2}}}{{2{k_{0}}}}\left({{n^{-2}}-1}\right)$.
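A minimal implementation of these expectation values on a discrete spectrum grid might look as follows (our sketch; a scalar array stands in for the Jones-transformed spectrum ${\hat T}^{r,t}({\bf k})\,{\bf e}_{0}\,\tilde{\psi}({\bf k})$, and the $k$-derivatives are taken by finite differences). The self-test shifts a Gaussian spectrum by a known displacement $y_{0}$ and recovers it:

```python
import numpy as np

def centroids(E, ky, kz):
    """Spatial and angular centroids of a pulse, computed from its spectrum
    E[i_ky, i_kz] as <y> = Re <E| i d/dky |E> / <E|E>, etc."""
    norm = np.sum(np.abs(E) ** 2)
    y_mean = np.real(np.sum(np.conj(E) * 1j * np.gradient(E, ky, axis=0))) / norm
    z_mean = np.real(np.sum(np.conj(E) * 1j * np.gradient(E, kz, axis=1))) / norm
    ky_mean = np.sum(ky[:, None] * np.abs(E) ** 2) / norm
    return y_mean, z_mean, ky_mean

# Self-test: a spectrum carrying the phase exp(-i*ky*y0) must yield <y> = y0.
ky = np.linspace(-5, 5, 201)
kz = np.linspace(-5, 5, 201)
KY, KZ = np.meshgrid(ky, kz, indexing="ij")
y0 = 0.3
E = np.exp(-(KY**2 + KZ**2) / 2 - 1j * KY * y0)
print(centroids(E, ky, kz))   # approximately (0.3, 0.0, 0.0)
```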
Figures 3 and 4 display results of numerical calculations of the shifts
(6)–(9) for an air-glass interface with $n=1.5$, a generic incident STVP, and
different angles of incidence $\theta$. These calculations demonstrate perfect
agreement with the theoretical predictions. Furthermore, Fig. 3 also shows a
typical real-space intensity profile of the transmitted STVP, which exhibits
deformations characteristic of shifts of vortex beams Bliokh and Aiello
(2013); Dennis and Götte (2012). Figure 4 demonstrates the weak-measurement
enhancement Asano _et al._ (2016); Gorodetski _et al._ (2012); Götte and
Dennis (2013), by two orders of magnitude, of the longitudinal shifts (time
delays) $\langle\zeta^{r}\rangle$ of reflected pulses for a near-$p$ input
polarization in the vicinity of the Brewster angle of incidence,
$\theta\simeq\theta_{B}$.
Figure 4: Resonant weak-measurement enhancement of the longitudinal shifts
(time delays) of the reflected STVPs for near-$p$ input polarization ${\bf
e}_{0}=\left(1,0.01\right)$ in the vicinity of the Brewster angle of incidence
in the cases (A) and (B). Parameters are: $\ell=1$, $n=1.5$, $\gamma=0.4$,
$k_{0}\Delta=500$.
## VI Discussion
We have described the reflection and refraction of an STVP at a planar
isotropic interface. The problem was considered by adopting methods previously
developed for monochromatic vortex beams. Compared with monochromatic beams,
however, spatiotemporal vortices have a
more complicated geometry with a transverse intrinsic OAM, which requires
consideration of two basic cases: (A) the OAM is orthogonal to the plane of
incidence and (B) the OAM lies within this plane. We have described
transformations of the reflected and transmitted pulses in both of these
cases. Notably, reflection in the case (A) can be used to flip the intrinsic
OAM of the pulse, while refraction can be employed for changing the
ellipticity of the pulse.
Most importantly, we have derived analytically and checked numerically all
OAM-dependent spatial and angular shifts of the reflected and transmitted
pulses in the paraxial approximation. These shifts can be divided into three
types: (i) the spatial orbital-Hall-effect shift $\langle y\rangle$ appearing
for the transmitted pulse in the case (B); (ii) the OAM-amplified angular
Goos-Hänchen and Hall-effect shifts $\langle k_{x}\rangle$ and $\langle
k_{y}\rangle$; and (iii) the longitudinal shifts $\langle\zeta\rangle$ which
appear for both reflected and transmitted pulses in both cases (A) and (B).
The latter is the most remarkable phenomenon, which is equivalent to time
delays $\left\langle\delta
t\right\rangle=-\left\langle{{\zeta}}\right\rangle/c$ of the scattered pulses.
In contrast to the well-known Wigner time delay, this effect occurs without
any temporal dispersion of the scattering coefficients, but rather stems from the coupling of
spatial and temporal degrees of freedom in spatiotemporal vortices. Such time
delays allow one to realize OAM-controlled slow (subluminal) and fast
(superluminal) pulse propagation without any medium dispersion.
Given the remarkable success of experimental studies of subwavelength shifts of
monochromatic optical beams and Wigner time delays of optical pulses, it is
natural to expect that the new shifts and time delays predicted in this work
could be measured in the near future. Furthermore, our work can stimulate a
number of follow-up studies. In particular, scattering of
quantum spatiotemporal vortices in the geometry (A) can appear in 2D
condensed-matter systems. Furthermore, we have considered only the basic case
of an STVP with a purely transverse intrinsic OAM and two basic geometries (A)
and (B) with respect to the interface. In general, one can examine STVPs with
intrinsic OAM arbitrarily oriented with respect to the propagation direction
Bliokh and Nori (2012b); Wan _et al._ (2021) and interface. One can expect
that in this general case, the pulse shifts could be expressed via suitably
weighted sums of previously considered basic shifts. Finally, taking the
temporal dispersion of the media and the interface into consideration should
add Wigner time-delay effects, which could be coupled with spatial degrees of
freedom and produce new spatial pulse shifts.
###### Acknowledgements.
We are grateful to V. G. Fedoseyev, A. Y. Bekshaev, and O. Yermakov for
helpful discussions. This work was partially supported by the National
Research Foundation of Ukraine (Project No. 2020.02/0149 “Quantum phenomena in
the interaction of electromagnetic waves with solid-state nanostructures”) and
the Excellence Initiative of Aix Marseille University—A*MIDEX, a French
‘Investissements d’Avenir’ programme. F.N. was supported by Nippon Telegraph
and Telephone Corporation (NTT) Research; the Japan Science and Technology
Agency (JST) via the Quantum Leap Flagship Program (Q-LEAP), the Moonshot R&D
Grant No. JPMJMS2061, and the Centers of Research Excellence in Science and
Technology (CREST) Grant No. JPMJCR1676; the Japan Society for the Promotion
of Science (JSPS) via the Grants-in-Aid for Scientific Research (KAKENHI)
Grant No. JP20H00134, and the JSPS–RFBR Grant No. JPJSBP120194828; the Army
Research Office (ARO) (Grant No. W911NF-18-1-0358), the Asian Office of
Aerospace Research and Development (AOARD) (Grant No. FA2386-20-1-4069); and
the Foundational Questions Institute Fund (FQXi) (Grant No. FQXi-IAF19-06).
## References
* Goos and Hänchen (1947) F. Goos and H. Hänchen, “Ein neuer und fundamentaler Versuch zur Totalreflexion,” Ann. Phys. 1, 333–346 (1947).
* Artmann (1948) K. Artmann, “Berechnung der Seitenversetzung des totalreflektierten Strahles,” Ann. Phys. 2, 87–102 (1948).
* Merano _et al._ (2009) M. Merano, A. Aiello, M. P. van Exter, and J. P. Woerdman, “Observing angular deviations in the specular reflection of a light beam,” Nat. Photon. 3, 337–340 (2009).
* Jayaswal _et al._ (2013) G. Jayaswal, G. Mistura, and M. Merano, “Weak measurement of the Goos–Hänchen shift,” Opt. Lett. 38, 1232–1234 (2013).
* Bliokh and Aiello (2013) K. Y. Bliokh and A. Aiello, “Goos–Hänchen and Imbert–Fedorov beam shifts: an overview,” J. Opt. 15, 014001 (2013).
* Wigner (1954) E. P. Wigner, “Lower limit for the energy derivative of the scattering phase shift,” Phys. Rev. 98, 145–147 (1954).
* Chiao and Steinberg (1997) R. Y. Chiao and A. M. Steinberg, “Tunneling times and superluminality,” Prog. Opt. 37, 345–405 (1997).
* de Carvalho and Nussenzveig (2002) C. A. A. de Carvalho and H. M. Nussenzveig, “Time delay,” Phys. Rep. 364, 83–174 (2002).
* Winful (2006) H. G. Winful, “Tunneling time, the Hartman effect, and superluminality: A proposed resolution of an old paradox,” Phys. Rep. 436, 1–69 (2006).
* Asano _et al._ (2016) M. Asano, K. Y. Bliokh, Y. P. Bliokh, A. G. Kofman, R. Ikuta, T. Yamamoto, Y. S. Kivshar, L. Yang, N. Imoto, S. K. Özdemir, and F. Nori, “Anomalous time delays and quantum weak measurements in optical micro-resonators,” Nat. Commun. 7, 13488 (2016).
* Imbert (1972) C. Imbert, “Calculation and experimental proof of the transverse shift induced by total internal reflection of a circularly polarized light beam,” Phys. Rev. D 5, 787–796 (1972).
* Schilling (1965) H. Schilling, “Die Strahlversetzung bei der Reflexion linear oder elliptisch polarisierter ebener Wellen an der Trennebene zwischen absorbierenden Medien,” Ann. Phys. 16, 122–134 (1965).
* Fedoseyev (1988) V. G. Fedoseyev, “Conservation laws and transverse motion of energy on reflection and transmission of electromagnetic waves,” J. Phys. A: Math. Gen. 21, 2045–2059 (1988).
* Onoda _et al._ (2004) M. Onoda, S. Murakami, and N. Nagaosa, “Hall effect of light,” Phys. Rev. Lett. 93, 083901 (2004).
* Bliokh and Bliokh (2006) K. Y. Bliokh and Y. P. Bliokh, “Conservation of Angular Momentum, Transverse Shift, and Spin Hall Effect in Reflection and Refraction of an Electromagnetic Wave Packet,” Phys. Rev. Lett. 96, 073903 (2006).
* Hosten and Kwiat (2008) O. Hosten and P. Kwiat, “Observation of the Spin Hall Effect of Light via Weak Measurements,” Science 319, 787–790 (2008).
* Götte and Dennis (2012) J. B. Götte and M. R. Dennis, “Generalized shifts and weak values for polarization components of reflected light beams,” New J. Phys. 14, 073016 (2012).
* Töppel _et al._ (2013) F. Töppel, M. Ornigotti, and A. Aiello, “Goos–Hänchen and Imbert–Fedorov shifts from a quantum-mechanical perspective,” New J. Phys. 15, 113059 (2013).
* Bliokh _et al._ (2015) K. Y. Bliokh, F. J. Rodríguez-Fortuño, F. Nori, and A. V. Zayats, “Spin-orbit interactions of light,” Nat. Photon. 9, 796–808 (2015).
* Bliokh _et al._ (2016) K. Y. Bliokh, C. T. Samlan, C. Prajapati, G. Puentes, N. K. Viswanathan, and F. Nori, “Spin-Hall effect and circular birefringence of a uniaxial crystal plate,” Optica 3, 1039–1047 (2016).
* Ling _et al._ (2017) X. Ling, X. Zhou, K. Huang, Y. Liu, C.-W. Qiu, H. Luo, and S. Wen, “Recent advances in the spin Hall effect of light,” Rep. Prog. Phys. 80, 066401 (2017).
* Fedoseyev (2001) V. G. Fedoseyev, “Spin-independent transverse shift of the centre of gravity of a reflected and of a refracted light beam,” Opt. Commun. 193, 9–18 (2001).
* Dasgupta and Gupta (2006) R. Dasgupta and P. K. Gupta, “Experimental observation of spin-independent transverse shift of the centre of gravity of a reflected Laguerre–Gaussian light beam,” Opt. Commun. 257, 91–96 (2006).
* Fedoseyev (2008) V. G. Fedoseyev, “Transformation of the orbital angular momentum at the reflection and transmission of a light beam on a plane interface,” J. Phys. A: Math. Theor. 41, 505202 (2008).
* Okuda and Sasada (2008) H. Okuda and H. Sasada, “Significant deformations and propagation variations of Laguerre-Gaussian beams reflected and transmitted at a dielectric interface,” J. Opt. Soc. Am. B 25, 881–890 (2008).
* Bliokh _et al._ (2009) K. Y. Bliokh, I. V. Shadrivov, and Y. S. Kivshar, “Goos–Hänchen and Imbert–Fedorov shifts of polarized vortex beams,” Opt. Lett. 34, 389–391 (2009).
* Bekshaev (2009) A. Y. Bekshaev, “Oblique section of a paraxial light beam: criteria for azimuthal energy flow and orbital angular momentum,” J. Opt. A: Pure Appl. Opt. 11, 094003 (2009).
* Merano _et al._ (2010) M. Merano, N. Hermosa, J. P. Woerdman, and A. Aiello, “How orbital angular momentum affects beam shifts in optical reflection,” Phys. Rev. A 82, 023817 (2010).
* Dennis and Götte (2012) M. R. Dennis and J. B. Götte, “Topological aberration of optical vortex beams: Determining dielectric interfaces by optical singularity shifts,” Phys. Rev. Lett. 109, 183903 (2012).
* Bliokh and Nori (2012a) K. Y. Bliokh and F. Nori, “Relativistic Hall Effect,” Phys. Rev. Lett. 108, 120403 (2012a).
* Sukhorukov and Yangirova (2005) A. P. Sukhorukov and V. V. Yangirova, “Spatio-temporal vortices: properties, generation and recording,” Proc. SPIE 5949, 594906 (2005).
* Dror and Malomed (2011) N. Dror and B. A. Malomed, “Symmetric and asymmetric solitons and vortices in linearly coupled two-dimensional waveguides with the cubic-quintic nonlinearity,” Physica D 240, 526–541 (2011).
* Bliokh and Nori (2012b) K. Y. Bliokh and F. Nori, “Spatiotemporal vortex beams and angular momentum,” Phys. Rev. A 86, 033824 (2012b).
* Bliokh (2021, in press) K. Y. Bliokh, “Spatiotemporal vortex pulses: Angular momenta and spin-orbit interaction,” Phys. Rev. Lett. (2021, in press).
* Jhajj _et al._ (2016) N. Jhajj, I. Larkin, E. W. Rosenthal, S. Zahedpour, J. K. Wahlstrand, and H. M. Milchberg, “Spatiotemporal optical vortices,” Phys. Rev. X 6, 031037 (2016).
* Hancock _et al._ (2019) S. W. Hancock, S. Zahedpour, A. Goffin, and H. M. Milchberg, “Free-space propagation of spatiotemporal optical vortices,” Optica 6, 1547 (2019).
* Chong _et al._ (2020) A. Chong, C. Wan, J. Chen, and Q. Zhan, “Generation of spatiotemporal optical vortices with controllable transverse orbital angular momentum,” Nat. Photon. 14, 350 (2020).
* Hancock _et al._ (2021) S. W. Hancock, S. Zahedpour, and H. M. Milchberg, “Second harmonic generation of spatiotemporal optical vortices and conservation of orbital angular momentum,” Optica 8, 594–597 (2021).
* Wan _et al._ (2021) C. Wan, J. Chen, A. Chong, and Q. Zhan, “Experimental demonstration of ultrafast wavepacket containing orbital angular momentum with controllable orientation,” arXiv:2101.04949 (2021).
* Wang _et al._ (2021) H. Wang, C. Guo, W. Jin, A. Y. Song, and S. Fan, “Engineering arbitrarily oriented spatiotemporal optical vortices using transmission nodal lines,” Optica 8, 966–971 (2021).
* Dallaire _et al._ (2009) M. Dallaire, N. McCarthy, and M. Piché, “Spatiotemporal Bessel beams: theory and experiments,” Opt. Express 17, 18148–18164 (2009).
* Kondakci and Abouraddy (2017) H. E. Kondakci and A. F. Abouraddy, “Diffraction-free space–time light sheets,” Nat. Photon. 11, 733–740 (2017).
* Kondakci and Abouraddy (2019) H. E. Kondakci and A. F. Abouraddy, “Optical space-time wave packets having arbitrary group velocities in free space,” Nat. Commun. 10, 929 (2019).
* Turunena and Friberg (2010) J. Turunena and A. T. Friberg, “Propagation-invariant optical fields,” Prog. Opt. 54, 1–88 (2010).
* Allen _et al._ (2003) L. Allen, S. M. Barnett, and M. J. Padgett, eds., _Optical Angular Momentum_ (Taylor and Francis, 2003).
* Andrews and Babiker (2012) D. L. Andrews and M. Babiker, eds., _The Angular Momentum of Light_ (Cambridge University Press, 2012).
* Thaller (2000) B. Thaller, _Visual Quantum Mechanics_ (Springer, Berlin, 2000).
* Bliokh (2006) K. Y. Bliokh, “Geometrical optics of beams with vortices: Berry phase and orbital angular momentum Hall effect,” Phys. Rev. Lett. 97, 043901 (2006).
* Aiello and Woerdman (2008) A. Aiello and J. P. Woerdman, “Role of beam propagation in Goos–Hänchen and Imbert–Fedorov shifts,” Opt. Lett. 33, 1437–1439 (2008).
* Qin _et al._ (2009) Y. Qin, Y. Li, H. He, and Q. Gong, “Measurement of spin Hall effect of reflected light,” Opt. Lett. 34, 2551–2553 (2009).
* Gorodetski _et al._ (2012) Y. Gorodetski, K. Y. Bliokh, B. Stein, C. Genet, N. Shitrit, V. Kleiner, E. Hasman, and T. W. Ebbesen, “Weak measurements of light chirality with a plasmonic slit,” Phys. Rev. Lett. 109, 013901 (2012).
* Götte and Dennis (2013) J. B. Götte and M. R. Dennis, “Limits to superweak amplification of beam shifts,” Opt. Lett. 38, 2295–2297 (2013).
# Local properties of the $t$-$J$ model in a two-pole approximation within COM
Amir Eskandari-asl1 Adolfo Avella1,2,3 1Dipartimento di Fisica “E.R.
Caianiello”, Università degli Studi di Salerno, I-84084 Fisciano (SA), Italy
2CNR-SPIN, UoS di Salerno, I-84084 Fisciano (SA), Italy 3Unità CNISM di
Salerno, Università degli Studi di Salerno, I-84084 Fisciano (SA), Italy
###### Abstract
In this work, we study the $t$-$J$ model using a two-pole approximation within
the composite operator method. We choose a basis of two composite operators –
the constrained electrons and their spin-fluctuation dressing – and
approximate their currents in order to compute the corresponding Green’s
functions. We exploit the algebraic constraints obeyed by the basis operators
to close a set of self-consistent equations that is numerically solved. This
allows us to determine the physical parameters of the system, such as the
spin-spin correlation function and the kinetic energy. Our results are compared
to those of an exact numerical method on a finite system to assess their
reliability. Indeed, a very good agreement is achieved through a far less
numerically demanding and more versatile procedure. We show that by
increasing the hole doping, anti-ferromagnetic correlations are replaced by
ferromagnetic ones. The behavior upon changing the temperature and the exchange
integral is also studied and reported.
###### keywords:
$t$-$J$ model , composite operator method (COM) , spin fluctuations
††journal: Journal of Magnetism and Magnetic Materials
## 1 Introduction
Strongly correlated systems have undergone extensive study for several decades
[1, 2, 3]. For lattice systems, the onsite interaction is the most important
one, a feature which is well reflected in the Hubbard model [4, 5, 6]. Despite
its seemingly simple structure, the Hubbard model and its extensions and
derivations have been very successful in studying strongly correlated systems
[7, 8, 9, 10]. In the strong electron-electron repulsion regime, it is
possible to derive an effective model from the Hubbard model in which double
occupancy is discarded owing to its extreme energy cost: the so-called
$t$-$J$ model [11, 12, 13, 14, 15, 16]. The Hubbard and the $t$-$J$
models have been used to theoretically understand and describe many
interesting phenomena, such as the Mott-Hubbard metal-insulator transition [17],
non-Fermi-liquid normal phases, and high-temperature superconductivity [18, 19,
20, 21, 22, 23, 24, 25, 26, 27]. One interesting feature which
still needs to be clarified is the occurrence of a ferro-antiferromagnetic
crossover in the $t$-$J$ model [28, 29]. It is quite well known that, at half
filling, the system is in the anti-ferromagnetic (AF) Néel state. Yet,
Nagaoka proved that in the infinite-$U$ regime of the Hubbard model, which
corresponds to a vanishing exchange integral in the $t$-$J$ model, introducing
one hole into the system makes the ground state ferromagnetic (FM)
[30, 31, 32]. This idea was generalized to the $t$-$J$ model by many
subsequent studies, which showed a transition to the FM phase at finite hole
dopings [33, 34, 35, 36, 37, 38, 39, 40, 41].
In studying strongly correlated systems, quasi-particles play a crucial role.
In the composite operator method (COM), the equations of motion of the
operators corresponding to the most relevant quasi-particles, those associated
with the emergent elementary excitations in the system, are investigated.
Within this method, using the properties of the generalized Green’s functions
(GFs) of the quasi-particle operators and their algebraic constraints, a set
of self-consistent equations is obtained, from which the physical properties
can be computed [21, 42, 43, 44, 45, 46]. It is worth noting that COM belongs
to the large class of operatorial approaches: the Hubbard approximations [4],
an early high-order GF approach [47], the projection operator method [48, 49],
the works of Mori [50], Rowe [51], and Roth [52], the spectral density
approach [53], the works of Barabanov [54], Val’kov [55], and Plakida [56, 57,
58, 59, 60], and the cluster perturbation theory in the Hubbard-operator
representation [61].
In this work, we consider a two-pole approximation for the $t$-$J$ model on a
two-dimensional lattice and focus on the quasi-particles describing the
constrained electrons and their spin-fluctuation dressing. After computing the
currents of our basis operators, we apply a generalized mean-field
approximation to project them back on the basis. These currents, together with
the integrated spectral weights of the basis, can be exploited to get a set of
self-consistent equations that allow us to compute the relevant GFs. The
remaining unknowns can be related to the GFs using the algebraic constraints
obeyed by the composite operators. The solutions of these equations reveal the
physical properties of the system in different parametric regimes. The quality
of the approximation is assessed by comparing our results to those of an exact
numerical study. We find a very good agreement while our method is, on one
hand, numerically less demanding and, on the other hand, more versatile as it
can be generalized to study several different systems and parameter regimes.
We show that the system features AF correlations of the Néel type near half
filling, while by increasing the hole doping it develops FM ones. At higher
temperatures, both AF and FM correlations are suppressed and the paramagnetic
phase is the favored one. Moreover, as expected, higher and higher values of
the exchange integral favor AF correlations and higher values of doping are
needed for the emergence of FM fluctuations.
The article is organized as follows. In Sec. 2, we introduce the model and the
basis operators we have chosen and describe our method. In Sec. 3, we present
our numerical results, assess them by comparison to exact numerical ones, and
discuss their relevant features. Finally, in Sec. 4, we give our conclusions.
## 2 Model and Method
The $t$-$J$ Hamiltonian is derived in the strongly correlated regime of the
Hubbard model ($t\ll U$) where an exchange integral $J=4t^{2}/U$ emerges [11,
12, 16]. Its explicit form for a two-dimensional lattice is given by
$\displaystyle\mathcal{H}$
$\displaystyle=-4t\sum_{i}\xi^{\dagger}\left(i\right)\cdot\xi^{\alpha}\left(i\right)-\mu\sum_{i}\nu\left(i\right)$
$\displaystyle+\frac{J}{2}\sum_{i}\left[\nu_{k}\left(i\right)\nu_{k}^{\alpha}\left(i\right)-\nu\left(i\right)\nu^{\alpha}\left(i\right)\right],$
(1)
where $t$, $J$, and $\mu$ are the nearest-neighbor hopping integral, the
exchange integral and the chemical potential, respectively. We set $t$ as
energy unit. In this model, double occupancy of sites is prohibited;
accordingly, one has to use the operator
$\xi_{\sigma}\left(i\right)=\left(1-n_{\bar{\sigma}}\left(i\right)\right)c_{\sigma}\left(i\right)$,
which describes the transition between empty and singly-occupied sites, with
$c_{\sigma}\left(i\right)$ being the annihilation operator of an electron with
spin $\sigma$ on the site $i$, and
$n_{\sigma}\left(i\right)=c_{\sigma}^{\dagger}\left(i\right)c_{\sigma}\left(i\right)$.
We use the spinorial notation
$\xi^{\dagger}\left(i\right)=\left(\xi_{\uparrow}^{\dagger}\left(i\right),\xi_{\downarrow}^{\dagger}\left(i\right)\right)$
and define the (spin) inner product between operators:
$\nu\left(i\right)=\xi^{\dagger}\left(i\right)\cdot\xi\left(i\right)=\sum_{\sigma}\xi_{\sigma}^{\dagger}\left(i\right)\xi_{\sigma}\left(i\right)$
and
$\nu_{k}\left(i\right)=\xi^{\dagger}\left(i\right)\cdot\sigma_{k}\cdot\xi\left(i\right)$
are the charge and spin density operators on site $i$, respectively, with
$\sigma_{k}$ being the Pauli matrices for $k=1,2,3$. Moreover, for every
operator $\phi\left(i\right)$, its projection on the nearest neighbor sites
($\delta_{\left\langle ij\right\rangle}$) is given by
$\phi^{\alpha}\left(i\right)=\sum_{j}\alpha_{ij}\phi\left(j\right)=\frac{1}{4}\left[\phi\left(i_{x}+1,i_{y}\right)+\phi\left(i_{x}-1,i_{y}\right)+\phi\left(i_{x},i_{y}+1\right)+\phi\left(i_{x},i_{y}-1\right)\right]$,
where for a two-dimensional square lattice
$\alpha_{ij}=\frac{1}{4}\delta_{\left\langle ij\right\rangle}$. With some
straightforward calculations, one can rewrite the Hamiltonian as [62]
$\mathcal{H}=\sum_{i}\xi^{\dagger}\left(i\right)\cdot\left[-4t\xi^{\alpha}\left(i\right)+J\left(\widetilde{\xi}_{0}\left(i\right)+\widetilde{\xi}_{s}\left(i\right)\right)-\left(\mu+\frac{J}{2}\right)\xi\left(i\right)\right],$
(2)
where the operators
$\widetilde{\xi}_{0}\left(i\right)=\frac{1}{2}\left(1-\nu^{\alpha}\left(i\right)\right)\xi\left(i\right),$
(3)
$\widetilde{\xi}_{s}\left(i\right)=\frac{1}{2}\nu_{k}^{\alpha}\left(i\right)\sigma_{k}\cdot\xi\left(i\right),$
(4)
describe constrained electronic transitions dressed by nearest-neighbor charge
and spin fluctuations, respectively. As our basis operators we choose the
following set of operators
$\boldsymbol{\psi}\left(i\right)=\left(\begin{array}[]{c}\xi\left(i\right)\\\
\widetilde{\xi}_{s}\left(i\right)\end{array}\right),$ (5)
which reflects the fact that near half filling, the spin fluctuations play the
most important role, as the system has a clear tendency towards the AF Néel
state.
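Before proceeding to the equations of motion, note that the nearest-neighbor projection defined above is straightforward to implement and to Fourier-transform. The sketch below (ours; a periodic square lattice with unit lattice constant is assumed) gives both the real-space projector and its transform $\alpha(\boldsymbol{k})=\frac{1}{2}(\cos k_{x}+\cos k_{y})$, which follows directly from $\alpha_{ij}=\frac{1}{4}\delta_{\left\langle ij\right\rangle}$:

```python
import numpy as np

def alpha_project(phi):
    """Nearest-neighbour projection phi^alpha(i): the average of the four
    neighbouring values of a field phi[ix, iy] on a periodic square lattice."""
    return 0.25 * (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0)
                   + np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1))

def alpha_k(kx, ky):
    """Fourier transform of alpha_ij (unit lattice constant)."""
    return 0.5 * (np.cos(kx) + np.cos(ky))
```

Since $\alpha^{n}_{ij}$ is the $n$-fold convolution of $\alpha_{ij}$ with itself, its Fourier transform is simply $\alpha(\boldsymbol{k})^{n}$; this is where the $\alpha^{2}(\boldsymbol{k})$ terms in the matrices derived below come from.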
In the Heisenberg picture, the current of the basis operators is
$\boldsymbol{J}\left(i\right)=i\frac{\partial}{\partial
t_{i}}\boldsymbol{\psi}\left(i\right)=\left[\boldsymbol{\psi}\left(i\right),\mathcal{H}\right],$
(6)
where we set $\hbar=1$. One can show that
$\displaystyle J_{\xi}=$
$\displaystyle-2t\left(\xi^{\alpha}\left(i\right)+2\xi_{0}\left(i\right)+2\xi_{s}\left(i\right)\right)$
$\displaystyle+2J\left(\widetilde{\xi}_{0}\left(i\right)+\widetilde{\xi}_{s}\left(i\right)\right)-\left(\mu+J\right)\xi\left(i\right),$
(7) $\displaystyle J_{\widetilde{\xi}_{s}}=$
$\displaystyle-t\nu_{k}^{\alpha}\left(i\right)\sigma_{k}\cdot\left(\xi^{\alpha}\left(i\right)+2\xi_{0}\left(i\right)+2\xi_{s}\left(i\right)\right)$
$\displaystyle+\left(-2t\left[\xi^{\dagger}\left(i\right)\cdot\sigma_{k}\cdot\xi^{\alpha}\left(i\right)\right]^{\alpha}+2t\left[\xi^{\dagger\alpha}\left(i\right)\cdot\sigma_{k}\cdot\xi\left(i\right)\right]^{\alpha}\right)\sigma_{k}\cdot\xi\left(i\right)$
$\displaystyle-\mu\widetilde{\xi}_{s}\left(i\right)-\frac{J}{2}\nu_{k}^{\alpha}\left(i\right)\nu^{\alpha}\left(i\right)\sigma_{k}\cdot\xi\left(i\right)$
$\displaystyle+\frac{J}{2}\nu_{k}^{\alpha}\left(i\right)\nu_{g}^{\alpha}\left(i\right)\sigma_{k}\cdot\sigma_{g}\cdot\xi\left(i\right)+Ji\epsilon_{kgh}\left[\nu_{g}^{\alpha}\left(i\right)\nu_{h}\left(i\right)\right]^{\alpha}\sigma_{k}\cdot\xi\left(i\right),$
(8)
where
$\xi_{0}\left(i\right)=\frac{1}{2}\left(1-\nu\left(i\right)\right)\xi^{\alpha}\left(i\right)$
and
$\xi_{s}\left(i\right)=\frac{1}{2}\nu_{k}\left(i\right)\sigma_{k}\cdot\xi^{\alpha}\left(i\right)$
can be considered as counterparts of $\widetilde{\xi}_{0}\left(i\right)$ and
$\widetilde{\xi}_{s}\left(i\right)$, respectively. Taking into account only
nearest neighbor contributions, these latter higher-order operators can be
approximated as
$\displaystyle\xi_{0}\left(i\right)\simeq$ $\displaystyle
4\widetilde{\xi}_{0}^{\alpha}\left(i\right)-\frac{3}{2}\left(1-\nu\right)\xi^{\alpha}\left(i\right),$
(9) $\displaystyle\xi_{s}\left(i\right)\simeq$ $\displaystyle
4\widetilde{\xi}_{s}^{\alpha}\left(i\right).$ (10)
Moreover, one can approximate $\widetilde{\xi}_{0}\left(i\right)$ by
projecting it on $\xi\left(i\right)$ as [62]
$\widetilde{\xi}_{0}\left(i\right)\approx\frac{2-3\nu+\chi_{c}^{\alpha}}{4\left(1-\frac{\nu}{2}\right)}\xi\left(i\right)-\frac{C_{11}^{\alpha}}{2\left(1-\frac{\nu}{2}\right)}\xi^{\alpha}\left(i\right).$
(11)
Finally, considering a paramagnetic and homogeneous phase and using a
mean-field-like approximation, we can write the currents in the form [42, 43, 44,
21]
$J_{a}\left(i\right)=\sum_{j}\sum_{b}\varepsilon_{ab}\left(i,j\right)\psi_{b}\left(j\right).$
(12)
The Fourier transform of the $\boldsymbol{\varepsilon}$ matrix reads as
$\displaystyle\varepsilon_{11}\left(\boldsymbol{k}\right)=$ $\displaystyle
16t\frac{C_{11}^{\alpha}}{2-\nu}\alpha^{2}\left(\boldsymbol{k}\right)$
$\displaystyle+\left(6t\left(\frac{2}{3}-\nu\right)-8t\frac{2-3\nu+\chi_{c}^{\alpha}}{2-\nu}-2J\frac{C_{11}^{\alpha}}{2-\nu}\right)\alpha\left(\boldsymbol{k}\right)$
$\displaystyle+J\frac{2-3\nu+\chi_{c}^{\alpha}}{2-\nu}-\mu-J,$ (13)
$\displaystyle\varepsilon_{12}\left(\boldsymbol{k}\right)=$
$\displaystyle-16t\alpha\left(\boldsymbol{k}\right)+2J,$ (14)
$\displaystyle\varepsilon_{21}\left(\boldsymbol{k}\right)=$
$\displaystyle-6tC_{11}^{\alpha}\left(1+\frac{1}{2-\nu}\right)\alpha^{2}\left(\boldsymbol{k}\right)$
$\displaystyle+\biggr{(}-\frac{3}{4}t-t\frac{9}{4}\chi_{s}^{\alpha}+6tC_{11}^{\alpha^{2}}-\frac{15}{4}t\left(1-\nu\right)$
$\displaystyle+6t\frac{2-3\nu+\chi_{c}^{\alpha}}{4-2\nu}+3J\frac{C_{11}^{\alpha}}{8-4\nu}\biggr{)}\alpha\left(\boldsymbol{k}\right)$
$\displaystyle+\frac{3J}{8}+\frac{3}{2}tC_{11}^{\alpha}-\frac{3J}{4}\frac{2-3\nu+\chi_{c}^{\alpha}}{4-2\nu},$
(15) $\displaystyle\varepsilon_{22}\left(\boldsymbol{k}\right)=$
$\displaystyle
2t\alpha\left(\boldsymbol{k}\right)-\left(\mu+\frac{3J}{4}+\frac{3J}{4}\nu\right),$
(16)
where $\nu=\left\langle\nu\left(i\right)\right\rangle$ is the average electron
number per site, which can vary between 0 and 1 (half filling) depending on the
doping.
$C_{ab}^{\alpha^{n}}=\left\langle\psi_{a}^{\alpha^{n}}\left(i\right)\psi_{b}^{\dagger}\left(i\right)\right\rangle$
is the generalized correlation matrix
[$\phi^{\alpha^{n}}\left(i\right)=\sum_{j}\alpha_{ij}^{n}\phi\left(j\right)$]
with $n$ being a non-negative integer: $\alpha_{ij}^{0}=\delta_{ij}$,
$\alpha_{ij}^{1}=\alpha_{ij}$, and for $n>1$,
$\alpha_{ij}^{n}=\sum_{l_{1},..,l_{n-1}}\alpha_{il_{1}}\alpha_{l_{1}l_{2}}...\alpha_{l_{n-1}j}$.
$\chi_{c}^{\alpha}=\left\langle\nu\left(i\right)\nu^{\alpha}\left(i\right)\right\rangle$
and
$\chi_{s}^{\alpha}=\frac{1}{3}\sum_{k=1}^{3}\left\langle\nu_{k}\left(i\right)\nu_{k}^{\alpha}\left(i\right)\right\rangle$
are the charge-charge and spin-spin correlation functions, respectively.
The normalization matrix of the basis operators is defined as
$I_{ab}\left(i,j\right)=\left\langle\left\\{\psi_{a}\left(i\right),\psi_{b}^{\dagger}\left(j\right)\right\\}\right\rangle.$
(17)
Once again, we use mean-field-like approximations and perform Fourier
transformation to obtain
$\displaystyle I_{11}\left(\boldsymbol{k}\right)=$ $\displaystyle
1-\frac{1}{2}\nu,$ (18) $\displaystyle I_{12}\left(\boldsymbol{k}\right)=$
$\displaystyle\frac{3}{4}\chi_{s}^{\alpha}+\frac{3}{2}\alpha\left(\boldsymbol{k}\right)C_{11}^{\alpha},$
(19) $\displaystyle I_{22}\left(\boldsymbol{k}\right)=$
$\displaystyle\frac{3}{16}\left(-\frac{1}{2}\chi_{c}^{\alpha}-\chi_{s}^{\alpha}+\nu\right)-\alpha\left(\boldsymbol{k}\right)C_{12}^{\alpha}$
$\displaystyle+\frac{3\alpha\left(\boldsymbol{k}\right)}{16}C_{11}^{\alpha}+\left(2\alpha^{2}\left(\boldsymbol{k}\right)-\frac{1}{2}\right)\frac{4}{3}C_{12}^{\alpha^{2}},$
(20)
where in the last line we used the so-called spherical approximation [63, 42].
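For concreteness, Eqs. (13)-(16) and (18)-(20) can be transcribed directly into code. The following sketch (ours; a literal transcription, with all correlators passed in as parameters to be fixed by the self-consistency, and with $I_{21}=I_{12}$ by the symmetry of the anticommutator) is reused in the solver skeleton at the end of this section:

```python
import numpy as np

def alpha_k(kx, ky):
    return 0.5 * (np.cos(kx) + np.cos(ky))

def build_matrices(kx, ky, t, J, mu, nu, C11a, C11a2, C12a, C12a2, chi_c, chi_s):
    """eps(k), Eqs. (13)-(16), and I(k), Eqs. (18)-(20).
    C11a = C_11^alpha, C11a2 = C_11^{alpha^2}, etc."""
    a = alpha_k(kx, ky)
    g = (2.0 - 3.0 * nu + chi_c) / (2.0 - nu)          # recurring combination
    eps = np.empty((2, 2))
    eps[0, 0] = (16 * t * C11a / (2 - nu) * a**2
                 + (6 * t * (2.0 / 3 - nu) - 8 * t * g - 2 * J * C11a / (2 - nu)) * a
                 + J * g - mu - J)
    eps[0, 1] = -16 * t * a + 2 * J
    eps[1, 0] = (-6 * t * C11a * (1 + 1 / (2 - nu)) * a**2
                 + (-0.75 * t - 2.25 * t * chi_s + 6 * t * C11a2 - 3.75 * t * (1 - nu)
                    + 3 * t * g + 0.75 * J * C11a / (2 - nu)) * a
                 + 3 * J / 8 + 1.5 * t * C11a - 3 * J / 8 * g)
    eps[1, 1] = 2 * t * a - (mu + 0.75 * J * (1 + nu))
    I = np.empty((2, 2))
    I[0, 0] = 1 - 0.5 * nu
    I[0, 1] = 0.75 * chi_s + 1.5 * a * C11a
    I[1, 0] = I[0, 1]                                  # I(k) is symmetric
    I[1, 1] = (3.0 / 16 * (-0.5 * chi_c - chi_s + nu) - a * C12a
               + 3 * a / 16 * C11a + (2 * a**2 - 0.5) * 4.0 / 3 * C12a2)
    return eps, I
```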
In order to obtain the self-consistent set of equations, we define the
retarded GF as follows:
$G_{ab}^{R}\left(i,j\right)=\theta\left(t_{i}-t_{j}\right)\left\langle\left\\{\psi_{a}\left(i\right),\psi_{b}^{\dagger}\left(j\right)\right\\}\right\rangle,$
(21)
where $i$ stands for both time and site indices. For a basis of $n$ operators,
the GF is an $n\times n$ matrix (in our case $n=2$). Then, using the current
equations and performing Fourier transformation in space and time one can show
$\boldsymbol{G}^{R}\left(\boldsymbol{k},\omega\right)=\left(\omega-\boldsymbol{\varepsilon}\left(\boldsymbol{k}\right)\right)^{-1}\boldsymbol{I}\left(\boldsymbol{k}\right),$
which results in the following explicit form
$\boldsymbol{G}^{R}\left(\boldsymbol{k},\omega\right)=\sum_{m=1}^{n}\frac{\boldsymbol{\sigma}^{\left(m\right)}\left(\boldsymbol{k}\right)}{\omega-\omega_{m}\left(\boldsymbol{k}\right)+i0^{+}},$
(22)
where $\omega_{m}\left(\boldsymbol{k}\right)$ is the $m$-th eigenvalue of
$\boldsymbol{\varepsilon}\left(\boldsymbol{k}\right)$, and
$\sigma_{ab}^{\left(m\right)}\left(\boldsymbol{k}\right)=\Omega_{am}\left(\boldsymbol{k}\right)\sum_{c=1}^{2}\Omega_{mc}^{-1}\left(\boldsymbol{k}\right)I_{cb}\left(\boldsymbol{k}\right),$
(23)
in which $\boldsymbol{\Omega}\left(\boldsymbol{k}\right)$ is an $n\times n$
matrix whose columns are the eigenvectors of
$\boldsymbol{\varepsilon}\left(\boldsymbol{k}\right)$. Using Eq, 22, one can
obtain a generalized form of the fluctuation-dissipation theorem as [42]
$\boldsymbol{C}\left(\boldsymbol{k},\omega\right)=2\pi\sum_{m=1}^{n}\left[1-f_{F}\left(\omega_{m}\left(\boldsymbol{k}\right)\right)\right]\boldsymbol{\sigma}^{\left(m\right)}\left(\boldsymbol{k}\right)\delta\left(\omega-\omega_{m}\left(\boldsymbol{k}\right)\right),$
(24)
where $f_{F}$ is the Fermi distribution function. Performing the inverse
Fourier transformation, we obtain
$\displaystyle C_{ab}^{\kappa}=$
$\displaystyle\frac{2\pi}{N}\sum_{\boldsymbol{k}}\kappa\left(-\boldsymbol{k}\right)\sum_{l=1}^{n}\left[1-f_{F}\left(\omega_{l}\left(\boldsymbol{k}\right)\right)\right]\sigma_{ab}^{\left(l\right)}\left(\boldsymbol{k}\right),$
(25)
where $\kappa$ can be any lattice projection operator. This relation shows how
the self-consistent procedure works. For calculating the GFs, we need
$\boldsymbol{I}\left(\boldsymbol{k}\right)$ and
$\boldsymbol{\varepsilon}\left(\boldsymbol{k}\right)$, which are determined by
the correlation functions. On the other hand, the correlation functions are
determined by the GFs through the fluctuation-dissipation theorem, Eq. 25. In
order to close the set of self-consistent equations, we use the following
algebraic constraints obeyed by the basis operators:
$\displaystyle C_{11}^{\delta}=$ $\displaystyle 1-\nu,$ (26)
$C_{12}^{\delta}=0,$ (27) $\displaystyle C_{22}^{\delta}=$
$\displaystyle-\frac{3}{16}\chi_{c}^{\alpha}+\frac{3}{16}\nu.$ (28)
Having a closed set of self-consistent equations, we numerically solve it to
obtain the physical properties of the system.
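Putting the pieces together, the self-consistency can be organized as a fixed-point loop over Eqs. (22)-(28). The following skeleton (our schematic sketch, reusing `build_matrices` from above; initial guesses, step sizes, and the relaxation enforcing the constraints are illustrative, and a production solver would use a proper root finder with convergence checks) shows the structure of one such solver:

```python
import numpy as np

def fermi(w, T):
    return 0.5 * (1.0 - np.tanh(0.5 * w / T))           # numerically stable

def correlators(params, nu, t, J, T, n_k=48):
    """C_ab^kappa for kappa = delta, alpha, alpha^2 via the two-pole form,
    Eqs. (22)-(25); in this convention the 2*pi of Eq. (25) cancels against
    the (2*pi)^-1 of the frequency integration."""
    ks = 2.0 * np.pi * np.arange(n_k) / n_k
    C = np.zeros((2, 2, 3))
    for kx in ks:
        for ky in ks:
            eps, I = build_matrices(kx, ky, t, J, nu=nu, **params)
            w, Om = np.linalg.eig(eps)                  # two poles omega_m(k)
            OmI = np.linalg.inv(Om) @ I
            a = 0.5 * (np.cos(kx) + np.cos(ky))
            for m in range(2):
                sig_m = np.outer(Om[:, m], OmI[m, :]).real   # Eq. (23)
                weight = 1.0 - fermi(w[m].real, T)           # poles assumed real
                for ik, kap in enumerate((1.0, a, a * a)):   # delta, alpha, alpha^2
                    C[:, :, ik] += kap * weight * sig_m
    return C / n_k**2

nu, t, J, T = 0.8, 1.0, 0.1, 0.1
params = dict(mu=0.0, C11a=0.1, C11a2=0.05, C12a=0.0, C12a2=0.0,
              chi_c=nu**2, chi_s=-0.1)
for it in range(200):                                   # fixed-point iteration
    C = correlators(params, nu, t, J, T)
    params.update(C11a=C[0, 0, 1], C11a2=C[0, 0, 2],
                  C12a=C[0, 1, 1], C12a2=C[0, 1, 2])
    params["chi_c"] = nu - 16.0 / 3.0 * C[1, 1, 0]      # invert Eq. (28)
    params["mu"] += 0.5 * (C[0, 0, 0] - (1.0 - nu))     # drive Eq. (26) to zero
    params["chi_s"] -= 0.5 * C[0, 1, 0]                 # relax Eq. (27); step/sign illustrative
```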
Figure 1:
$C_{11}^{\alpha}=\left\langle\xi^{\alpha}\left(i\right)\xi^{\dagger}\left(i\right)\right\rangle$
, as a function of the filling $\nu$, for $J=0.1$ and temperature $T$ ranging from
0.01 to 1. Circles are ED data extracted from Ref. [64]. $C_{11}^{\alpha}$ is
proportional to the kinetic energy, $K=8tC_{11}^{\alpha}$.
## 3 Results
In this section, we present our numerical results. In Fig. 1, we show
$C_{11}^{\alpha}=\left\langle\xi^{\alpha}\left(i\right)\xi^{\dagger}\left(i\right)\right\rangle$
as a function of electron density per site, $\nu$, for $J=0.1$ and temperature
$T$ ranging from 0.01 to 1. The circles are numerical data extracted from Ref.
[64] and correspond to exact diagonalization (ED) results at zero temperature
for a finite cluster. Our results agree well with the ED data, although a
small finite temperature is needed to compensate for finite-size effects. At
half filling ($\nu=1$), $C_{11}^{\alpha}$ vanishes, as it is proportional to
the kinetic energy through the relation $K=8tC_{11}^{\alpha}$. Since each site
is occupied by exactly one electron, there is no possibility for electrons to
move, and the kinetic energy vanishes. Our results show that the kinetic energy
decreases on increasing the temperature, which means that the thermally excited
states of the system do not favor hole mobility, as will be clarified in
the following.
Figure 2: $\chi_{s}^{\alpha}$ as a function of $\nu$: (top) same parameters as
Fig. 1; (bottom) $T=0.1$ and $J$ ranging from 0.1 to 0.5.
Although we considered a paramagnetic phase, we can still investigate the
tendency of the system towards other (ordered) phases. In Fig. 2, top panel,
we plot the spin-spin correlation function, $\chi_{s}^{\alpha}$, as a function
of $\nu$, with the same parameters as in Fig. 1. For low enough temperatures,
we have AF correlations near half filling. This clearly shows that our solution
correctly captures the behavior in this regime, consistently with the well-
established AF Néel state at half filling. The FM phase in the $t$-$J$ model
has been predicted in the literature [33, 28, 34, 35, 36, 37, 29, 38, 39, 40,
41]: mobile holes can form Nagaoka polarons, which results in FM ordering
[38, 40]. We witness a similar behavior here, i.e., once enough holes are
present in the system, FM correlations clearly emerge and overcome the AF
ones, whose correlation lengths decrease rapidly with doping [35]. Increasing
the temperature results in weakening of both AF and FM correlations: the
paramagnetic phase becomes the most favorable one. Let us now come back to the
decrease of the kinetic energy on increasing the temperature reported in Fig.
1. This behavior has different explanations in different doping regimes. Near
half filling, the AF correlations get weaker and weaker on increasing the
temperature, inhibiting the virtual hopping processes because of the Pauli
exclusion principle. Accordingly, the kinetic energy decreases. On the other
hand, at intermediate fillings, significant FM correlations result from the
formation of Nagaoka polarons, which requires mobile holes and induces a gain
in kinetic energy. By increasing the temperature, the FM correlations too
become weaker and weaker and, consequently, the kinetic energy decreases also
in this case.
In Fig. 2 bottom panel, we plot the spin-spin correlation function as a
function of $\nu$ for $T=0.1$ and $J$ ranging from 0.1 to 0.5. For larger and
larger values of $J$: (i) the AF correlations increase, which
shows a stronger tendency towards AF for larger exchange integrals, as
expected; (ii) the emergence of FM correlations requires larger and larger
values of doping in order to overcome the stronger and stronger AF
correlations.
## 4 Summary
In summary, we performed a two-pole study of the $t$-$J$ model within COM. In
our calculations, we considered the constrained electrons and their spin
dressing as the fundamental quasi-particles. By exploiting mean-field-like
approximations, we projected back the operatorial currents on the basis
operators. We used similar approximations to calculate the normalization
matrix within COM. These relations can be combined with the algebraic
constraints obeyed by the operators to give a closed set of self-consistent
equations which can be numerically solved.
Our results for the kinetic energy are in very good agreement with those of
ED for finite clusters, while our method is numerically less demanding and
also more versatile. Moreover, we show that the system undergoes a smooth
transition between small and intermediate doping regimes where it features AF
and FM correlations, respectively. By increasing the temperature, both AF and
FM correlations are weakened and, consequently, the kinetic energy decreases
due to the inhibition of exchange virtual processes and polaron formation,
respectively. Increasing the exchange integral strengthens the AF
fluctuations, as expected, and forces the FM fluctuations to emerge at higher
values of doping.
## Acknowledgments
The authors acknowledge support by MIUR under Project No. PRIN 2017RKWTMY.
## References
* [1] A. Avella, F. Mancini, Strongly Correlated Systems, Theoretical Methods, Vol. 171, Springer, 2012.
* [2] A. Avella, F. Mancini, Strongly Correlated Systems, Numerical Methods, Vol. 176, Springer, 2013.
* [3] A. Avella, F. Mancini, Strongly Correlated Systems, Experimental Techniques, Vol. 180, Springer, 2015.
* [4] J. Hubbard, Proc. R. Soc. London, Ser. A 276, 238 (1963); 277, 237 (1964); 281, 401 (1964).
* [5] P. W. Anderson, Science 235 (1987) 1196.
* [6] A. Georges, G. Kotliar, W. Krauth, M. J. Rozenberg, Rev. Mod. Phys. 68 (1996) 13.
* [7] A. Montorsi, The Hubbard Model: A Reprint Volume, World Scientific, 1992.
* [8] F. H. Essler, H. Frahm, F. Göhmann, A. Klümper, V. E. Korepin, The one-dimensional Hubbard model, Cambridge University Press, 2005.
* [9] H. Tasaki, J. Phys.: Cond. Mat. 10 (1998) 4353.
* [10] D. Baeriswyl, D. K. Campbell, J. M. Carmelo, F. Guinea, E. Louis, The Hubbard model: its physics and mathematical physics, Vol. 343, Springer Science & Business Media, 2013.
* [11] K. Chao, J. Spalek, A. M. Oles, J. Phys. C: Solid State Phys. 10 (1977) L271.
* [12] K. Chao, J. Spałek, A. M. Oles, Phys. Rev. B 18 (1978) 3453.
* [13] A. H. MacDonald, S. M. Girvin, D. Yoshioka, Phys. Rev. B 37 (1988) 9753.
* [14] J. Stein, J. Stat. Phys. 88 (1997) 487.
* [15] A. Reischl, E. Müller-Hartmann, G. S. Uhrig, Phys. Rev. B 70 (2004) 245124.
* [16] J. Spałek, Philos. Mag. 95 (2015) 661.
* [17] M. Imada, A. Fujimori, Y. Tokura, Rev. Mod. Phys. 70 (1998) 1039.
* [18] A. Avella, F. Mancini, D. Villani, Solid State Comm. 108 (1998) 723.
* [19] A. Avella, F. Mancini, D. Villani, H. Matsumoto, Euro. Phys. J. B 20 (2001) 303.
* [20] C.-W. Chen, J. Choe, E. Morosan, Rep. Prog. Phys. 79 (2016) 084505.
* [21] A. Avella, Adv. Cond. Mat. Phys. 2014 (2014) 515698.
* [22] T. Kloss, X. Montiel, V. de Carvalho, H. Freire, C. Pépin, Rep. Prog. Phys. 79 (2016) 084507.
* [23] P. A. Lee, N. Nagaosa, X.-G. Wen, Rev. Mod. Phys. 78 (2006) 17.
* [24] N. Armitage, P. Fournier, R. Greene, Rev. Mod. Phys. 82 (2010) 2421.
* [25] M. Hashimoto, I. M. Vishik, R.-H. He, T. P. Devereaux, Z.-X. Shen, Nat. Phys. 10 (2014) 483.
* [26] D. Chowdhury, S. Sachdev, in: Quantum criticality in condensed matter: Phenomena, materials and ideas in theory and experiment, World Scientific, 2016, p. 1.
* [27] S. Tajima, Rep. Prog. Phys. 79 (2016) 094001.
* [28] M. Marder, N. Papanicolaou, G. Psaltakis, Phys. Rev. B 41 (1990) 6920.
* [29] C. S. Hellberg, E. Manousakis, Phys. Rev. Lett. 78 (1997) 4609.
* [30] Y. Nagaoka, Solid State Comm. 3 (1965) 409.
* [31] Y. Nagaoka, Phys. Rev. 147 (1966) 392.
* [32] H. Tasaki, Phys. Rev. B 40 (1989) 9192.
* [33] C. Jayaprakash, H. Krishnamurthy, S. Sarker, Phys. Rev. B 40 (1989) 2610.
* [34] D. Poilblanc, Phys. Rev. B 45 (1992) 10775.
* [35] R. R. Singh, R. L. Glenister, Phys. Rev. B 46 (1992) 11871.
* [36] W. Putikka, M. Luchini, M. Ogata, Phys. Rev. Lett. 69 (1992) 2288.
* [37] H. Mori, M. Hamada, Phys. Rev. B 48 (1993) 6242.
* [38] M. Maśka, M. Mierzejewski, E. Kochetov, L. Vidmar, J. Bonča, O. Sushkov, Phys. Rev. B 85 (2012) 245113.
* [39] S. Bhattacharjee, R. Chaudhury, J. Low Temp. Phys. 193 (2018) 21.
* [40] L. Vidmar, J. Bonča, J. Supercond. Nov. Magn. 26 (2013) 2641.
* [41] R. Montenegro-Filho, M. Coutinho-Filho, Phys. Rev. B 90 (2014) 115123.
* [42] F. Mancini, A. Avella, Adv. Phys. 53 (2004) 537.
* [43] A. Avella, F. Mancini, in: Strongly Correlated Systems, Springer, 2012, p. 103.
* [44] A. Avella, Euro. Phys. J. B 87 (2014) 1.
* [45] A. Di Ciolo, A. Avella, Cond. Mat. Phys. 21 (2018) 33701.
* [46] S. Odashima, A. Avella, F. Mancini, Phys. Rev. B 72 (2005) 205121.
* [47] E. Kuz’min, S. Ovchinnikov, Teor. Mat. Fiz. 31 (1977) 379.
* [48] Y. Tserkovnikov, Teor. Mat. Fiz. 49 (1981) 219.
* [49] Y. Tserkovnikov, Teor. Mat. Fiz. 50 (1982) 261.
* [50] H. Mori, Prog. Theor. Phys. 33 (1965) 423.
* [51] D. Rowe, Rev. Mod. Phys. 40 (1968) 153.
* [52] L. Roth, Phys. Rev. 184 (1969) 451.
* [53] W. Nolting, W. Borgiel, Phys. Rev. B 39 (1989) 6962.
* [54] A. Barabanov, A. Kovalev, O. Urazaev, A. Belemuk, R. Hayn, J. Exp. Theor. Phys. 92 (2001) 677.
* [55] V. Val’kov, D. Dzebisashvili, J. Exp. Theor. Phys. 100 (2005) 608.
* [56] N. Plakida, V. Oudovenko, Phys. Rev. B 59 (1999) 11949.
* [57] N. Plakida, JETP Lett. 74 (2001) 36.
* [58] J. Exp. Theor. Phys. 97 (2003) 331.
* [59] N. Plakida, V. Oudovenko, J. Exp. Theor. Phys. 104 (2007) 230.
* [60] N. Plakida, High-temperature cuprate superconductors: Experiment, theory, and applications, Vol. 166, Springer Science & Business Media, 2010.
* [61] S. Ovchinnikov, S. Nikolaev, JETP Lett. 93 (2011) 517.
* [62] A. Di Ciolo, C. Noce, A. Avella, Euro. Phys. J. Spec. Top. 228 (2019) 659.
* [63] A. Avella, F. Mancini, V. Turkowski, Phys. Rev. B 67 (2003) 115123.
* [64] E. Dagotto, A. Moreo, F. Ortolani, D. Poilblanc, J. Riera, Phys. Rev. B 45 (1992) 10741.
# Characterizing and Mitigating Anti-patterns of Alerts in Industrial Cloud
Systems
Tianyi Yang1, Jiacheng Shen1, Yuxin Su2, Xiaoxue Ren1, Yongqiang Yang3, and
Michael R. Lyu1 Yuxin Su is the corresponding author. 1Department of Computer
Science and Engineering, The Chinese University of Hong Kong, Hong Kong,
China.
Email: {tyyang, jcshen<EMAIL_ADDRESS><EMAIL_ADDRESS>2School
of Software Engineering, Sun Yat-Sen University, Zhuhai, China. Email:
<EMAIL_ADDRESS>3Computing and Networking Innovation Lab, Cloud BU,
Huawei, Shenzhen, China. Email<EMAIL_ADDRESS>
###### Abstract
Alerts are crucial for requesting prompt human intervention upon cloud
anomalies. The quality of alerts significantly affects the cloud reliability
and the cloud provider’s business revenue. In practice, we observe on-call
engineers being hindered from quickly locating and fixing faulty cloud
services because of the prevalence of misleading, non-informative, and
non-actionable alerts. We call such forms of alert ineffectiveness
“anti-patterns of alerts”. To better understand the anti-patterns of alerts and provide
actionable measures to mitigate anti-patterns, in this paper, we conduct the
first empirical study on the practices of mitigating anti-patterns of alerts
in an industrial cloud system. We study the alert strategies and the alert
processing procedure at Huawei Cloud, a leading cloud provider. Our study
combines a quantitative analysis of millions of alerts over two years with a
survey of eighteen experienced engineers. As a result, we summarize four
individual anti-patterns and two collective anti-patterns of alerts. We also
summarize four current reactions that mitigate the anti-patterns of alerts, and
general preventative guidelines for the configuration of alert strategies.
Lastly, we propose to explore the automatic evaluation of the Quality of
Alerts (QoA), including the indicativeness, precision, and handleability of
alerts, as a future research direction that assists in the automatic detection
of alerts’ anti-patterns. The findings of our study are valuable for
optimizing cloud monitoring systems and improving the reliability of cloud
services.
###### Index Terms:
alert anti-patterns, alert strategy, alert governance, cloud reliability,
software maintenance
## I Introduction
The rapid adoption of cloud computing places ever higher demands on the reliability and availability of cloud services. Typically, cloud services are organized and managed as microservices that interact with each other and serve
user requests as a whole. In a large-scale cloud microservice system,
unplanned microservice anomalies happen from time to time. Some anomalies are
transient, while others persist and require human intervention. If anomalies are not detected and mitigated in a timely manner, they may cause severe cloud failures and incidents, reduce the availability of cloud services, and deteriorate user satisfaction [1]. Hence, prompt detection, human intervention, and mitigation
of service anomalies are critical for the reliability of cloud services. To
accomplish that, cloud service providers employ large-scale cloud monitoring
systems that monitor the system state and generate alerts that require human
intervention. Whenever anomalous states of services emerge, alerts will be
generated to notify engineers to prevent service failures.
In a cloud system, an alert is a notification sent to On-Call Engineers
(OCEs), of the form defined by the alert strategy, of a specific abnormal
state of the cloud service, i.e., an anomaly. A severe enough alert (or a group of related alerts) can escalate to an incident, defined as any unplanned interruption or performance degradation of a service or product that can lead to service shortages at all service levels [1]. An alert
strategy defines the policy of alert generation, i.e., when to generate an
alert, what attributes and descriptions an alert should have, and to whom the
alert should be sent. Once an OCE receives an alert, the OCE will follow the
corresponding predefined Standard Operating Procedure (SOP) to inspect the
state of the cloud service and mitigate the service anomaly based on their
domain knowledge. The alert strategies and SOPs are two key aspects to ensure
a prompt and effective response to cloud alerts and incidents. In industrial
practice, the two aspects are often considered and managed together because
improperly designed alert strategies may lead to non-informative or delayed
alerts, affecting the diagnosis and mitigation of the cloud alerts and
incidents. We call the unified management of alert strategies and SOPs alert
governance. Table I summarizes the terminologies used in this paper.
TABLE I: The Terminology Adopted in This Paper.
Term | Explanation
---|---
Anomaly | A deviation from the normal state of the cloud system, which will possibly trigger an alert.
Alert | A notification sent to On-Call Engineers (OCEs), of the form defined by the alert strategy, of a specific anomaly of the cloud system.
Incident | Any unplanned interruption or performance degradation of a service or product, which can lead to service shortages at all service levels [1].
Alert Strategy | The policy of alert generation, including when to generate an alert, what attributes and descriptions an alert should have, and to whom the alert should be sent.
SOP | A predefined Standard Operating Procedure (SOP) to inspect the state of the cloud system and mitigate the system abnormality upon receiving an alert. The operations can be conducted by OCEs or automatically.
Alert Governance | The unified management of alert strategies and SOPs.
In industrial practice, a cloud provider usually deploys a cloud monitoring
system to obtain the telemetry data that reflects the running state of their
cloud services [2, 3]. Multiple monitoring techniques are employed to collect
various types of telemetry data, including the performance indicators of the
monitored service, the low-level resource utilization, the logs printed by the
monitored service, etc. For normally functioning services, it is assumed that
their states, as well as their telemetry data, will be stable. For a service that will fail soon, its telemetry data will deviate from the normal state [4, 5]. Hence, cloud providers typically conduct anomaly detection on the
telemetry data to detect the deviation from the normal state. If an anomaly
triggers an alert strategy, an alert will be generated, and the cloud
monitoring system will notify OCEs according to the configuration of the alert
strategy.
The configuration of alert strategies is empirical and heavily dependent on human expertise. Since different cloud services exhibit different attributes and serve different purposes, their alert strategies vary significantly. In particular, the empirical nature of alert strategies stems from two aspects of cloud services. On the one hand, a cloud service’s abnormal state may differ
because each cloud service implements its own business logic. There is no one-
fits-all rule for anomaly detection on cloud services, i.e., when to generate
an alert. For example, network overload is a crucial anomaly for a virtual network service, whereas a high connection count is the critical issue for a database service. On the other hand, the attributes of an alert that help the
manual inspection and mitigation of the abnormal state, e.g., the location
information and the free-text title that describes the alert, are also
service-specific and lack comprehensive guidelines. In other words, “what
attributes and descriptions an alert should have” also depends on human
expertise. For example, the title “Instance _x_ is abnormal” is non-
informative. In summary, the configuration of alert strategies, as a precursor
step for human intervention in cloud anomalies, is an empirical procedure.
Manually configured alert strategies are flexible but can also be ineffective (e.g., misleading, non-informative, and non-actionable) when the engineer is inexperienced or unfamiliar with the monitored cloud service. Such ineffectiveness manifests as anti-patterns that hinder the OCEs’ diagnosis, especially for inexperienced OCEs. The anti-patterns of alerts, which we elaborate on in Section III, frustrate OCEs and deteriorate cloud reliability in the long term.
In this paper, we conduct the first empirical study on the industrial practice
of alert governance in Huawei Cloud 111Huawei Cloud is a global cloud provider
and ranked fifth in Gartner’s report [6] on the global market share of
Infrastructure as a Service in 2020.. The cloud system considered in this
study consists of 11 cloud services and 192 cloud microservices. The procedure of our study includes 1) a quantitative assessment of over 4 million alerts spanning two years to identify the anti-patterns of alerts; 2) interviews with 18 experienced on-call engineers (OCEs) to confirm the identified anti-patterns and summarize the current practices for mitigating them. To sum up, we make the following contributions:
* •
We conduct the first empirical study on characterizing and mitigating anti-
patterns of alerts in an industrial cloud system.
* •
We identify six anti-patterns of alerts in a production cloud system.
Specifically, the six anti-patterns can be divided into two categories, namely
individual anti-patterns and collective anti-patterns. Individual anti-
patterns result from the ineffective patterns in one single alert strategy,
including _Unclear Name or Description_ , _Misleading Severity_ , _Improper and Outdated Generation Rule_ , and _Transient and Toggling Alerts_. Collective anti-patterns are ineffective patterns that a group of alerts collectively exhibits, including _repeating_ and _cascading alerts_.
* •
We summarize the current industrial practices for mitigating the anti-patterns
of alerts, including postmortem reactions to mitigate the effect of anti-
patterns and the preventative guidelines to avoid the anti-patterns. The postmortem reactions include _rule-based alert blocking_ , _alert aggregation_ , _pattern-based alert correlation analysis_ , and _emerging alert detection_. We also describe three aspects of designing preventative
guidelines for alert strategies according to our experience in Huawei Cloud.
* •
Lastly, we share our thoughts on prospective directions to achieve automatic
alert governance. We propose to bridge the gap between manual alert strategies
and cloud service upgrades by automatically evaluating the Quality of Alerts
(QoA) in terms of _indicativeness_ , _precision_ , and _handleability_.
## II Alerts for the Reliability of Cloud Services
This section provides the preliminary knowledge for our study. We first
generally introduce the reliability measures of cloud services, then describe
the mechanism of alerting in cloud systems.
### II-A Reliability of Cloud Services
Cloud providers typically split various services into microservices and organize them into a microservice architecture [7]. Microservices are small, independent, and loosely coupled modules that can be deployed independently
[8]. Communicating through well-defined APIs, each microservice can be
refactored and scaled independently and dynamically [9]. External requests are
routed through and served by dozens of different microservices that rely on
one another.
One of the major weaknesses of the microservice architecture is the difficulty
in system maintenance [10, 11]. The highly decoupled nature of the
microservice architecture makes the performance debugging, failure diagnosis,
and fault localization in cloud systems more complex than ever [12, 1, 13,
14]. A common pathway to tackle the difficulties in system maintenance is to
1) improve system observability [15, 16, 17, 18, 19] with logging, tracing,
and performance monitoring, 2) employ proper alert strategies to detect system
anomalies and send alerts [10], and 3) design effective SOPs to quickly
mitigate the system abnormality before it escalates to severe failure and
incidents. In practice, cloud providers usually deploy cloud monitoring
systems to improve observability, detect anomalies, and generate alerts.
### II-B Alerts in Cloud Services
TABLE II: Sample reliability alerts in a cloud system. The names of microservices are omitted due to confidentiality.
No. | Severity | Time | Service | Alert Title | Duration | Location
---|---|---|---|---|---|---
1 | Major | 2021/05/18 06:36 | Block Storage | Failed to allocate new blocks, disk full | 10 min | Region=X;DC=1;…
2 | Critical | 2021/05/18 06:38 | Database | Failed to commit changes … | 2 min | Region=X;DC=1;…
3 | Critical | 2021/05/18 06:39 | Database | Failed to commit changes … | 5 min | Region=X;DC=1;…
#### II-B1 Necessities of Alerts
Service reliability is one of the most significant factors for cloud providers
and their clients, but failures that prevent cloud services from properly
functioning are inevitable [1]. In order to satisfy Service Level Agreements
(SLAs) on the reliability of the target services, cloud providers need to deal with service and microservice anomalies before they escalate into severe failures and incidents. Alerting is a practical way to achieve this
goal. Figure 1 demonstrates the significance of alerts. By continuously
monitoring cloud services via traces, logs, and metrics, the monitoring system will send alerts222This paper only focuses on the alerts that indicate potential bugs and failures, i.e., the system reliability alerts. to On-Call Engineers (OCEs) upon detecting anomalous service states. With the information provided in the alerts, OCEs can make judgments based on their domain knowledge, fix the problems, and clear the alert. As a result, unplanned failures and incidents
can be avoided or quickly mitigated.
Figure 1: The significance of alerts for cloud reliability.
#### II-B2 Attributes of Alerts
Alerts have many attributes that are helpful for OCEs’ diagnosis, including the alert title, severity level, time, service name, duration, and location information. The _Title of an Alert_ concisely describes the alert. Typically,
the title should contain information like “the affected service or
microservice” and “the manifestation of the failure”. The OCEs will look up
the alert title to find the corresponding SOP and perform predefined actions
to mitigate the alert. The _Severity Level_ indicates how severe the alert is.
The corresponding _Alert Strategy_ defines the severity level and alert title
according to the nature of the affected service or microservice. The _Time_ is the time of occurrence of the alert, and _Duration_ is the interval between the occurrence and the clearance of the alert. The _Location Information_ contains the necessary information to locate the anomalous service or microservice. Table II shows sample alerts from the monitoring system of Huawei Cloud.
#### II-B3 Generation of Alerts
An alert represents a specific abnormal state of the cloud system. The first
and foremost step of alert generation is anomaly detection. Anomaly detection in logs [16, 20, 21], traces [22, 23, 11], and monitoring metrics [24, 25, 26] of the cloud system has been widely studied.
The cloud monitoring system will continuously detect anomalies and generate
system reliability alerts according to the alert strategies associated with
specific services or microservices. The strategies for system reliability
alerts can be divided into three categories, i.e., probes, logs, and metrics.
* •
_Probes:_ The cloud monitoring system will send probing requests to the target
services and receive the heartbeat from the target services. Typically, OCEs
set fixed thresholds of no-response time for different services as the
strategy of probes. If a target service does not respond to the probing
requests for a long time, an alert will be generated.
* •
_Logs:_ The cloud monitoring system will process logs of the target services. OCEs can set flexible rules for different services. Typical log rules are keyword matching, e.g., “IF the logs contain 5 ERRORs in the past 2 minutes, THEN generate an alert” (a minimal sketch of such a rule appears after this list). Traces can also be viewed as special logs and will be processed similarly.
* •
_Metrics:_ Performance metrics are time series that show the states of a
running service, e.g., latency, no. of requests, network throughput, CPU
utilization, disk usage, memory utilization, etc. The alert strategy for metrics varies from static thresholds to algorithmic anomaly detection.
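The following is a minimal sketch of the keyword-matching log rule quoted above. It is illustrative only: the class name, field names, and thresholds are our own assumptions, not Huawei Cloud’s actual alerting API.

```python
from collections import deque
from datetime import datetime, timedelta

class KeywordLogRule:
    """Sliding-window keyword rule: alert when `count` matches occur
    within `window` (e.g., 5 ERRORs in the past 2 minutes)."""

    def __init__(self, keyword="ERROR", count=5, window=timedelta(minutes=2)):
        self.keyword = keyword
        self.count = count
        self.window = window
        self.hits = deque()  # timestamps of recent keyword matches

    def feed(self, timestamp: datetime, line: str) -> bool:
        """Feed one log line; return True when an alert should fire."""
        if self.keyword in line:
            self.hits.append(timestamp)
        # Evict matches that fell out of the sliding window.
        while self.hits and timestamp - self.hits[0] > self.window:
            self.hits.popleft()
        return len(self.hits) >= self.count
```

A probe rule would be analogous, with “no heartbeat for longer than the no-response threshold” in place of the keyword count.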
#### II-B4 Clearance of Alerts
Alerts can be cleared manually or automatically. On the one hand, after the
human intervention, if the OCE confirms the mitigation of the anomaly, the OCE
can manually mark the alert as “cleared”. On the other hand, the cloud
monitoring system can automatically clear some alerts. For system reliability
alerts of _probes_ and _metrics_ , the cloud monitoring system will continue
to monitor the status of the associated service. If the service returns to a
normal state, the cloud monitoring system will mark the corresponding alert as
“automatically cleared”.
## III An Empirical Study on the Anti-patterns of Alerts
The research described in this paper is motivated by the pain point of alert
governance in a production cloud system. In this section, we present the first empirical study characterizing the anti-patterns of alerts333An alert always corresponds to an alert strategy. Therefore, we do not distinguish between “anti-patterns of alerts” and “anti-patterns of alert strategies”. and how we mitigate the anti-patterns in the production cloud system. Specifically, we study the following research questions (RQs).
* •
RQ1: What anti-patterns exist in alerts? How do these anti-patterns prevent OCEs from promptly and precisely diagnosing alerts?
* •
RQ2: What is the standard procedure to process alerts? Can the standard
procedure handle the anti-patterns?
* •
RQ3: What are the current reactions to the anti-patterns of alerts? How well do they perform?
* •
RQ4: What are the current measures to avoid the anti-patterns of alerts? How well do they perform?
To answer these research questions, we quantitatively analyzed over 4 million alerts from the production system of Huawei Cloud, which serves tens of millions of users and contains hundreds of services. The time range of the alerts spans two years. We conducted a survey involving 18 experienced OCEs to find out the current practices of mitigating the anti-patterns of alerts. Among them, 10 (55.6%) OCEs have more than 3 years of working experience. The numbers of OCEs with 2 to 3 years’ and 1 to 2 years’ working experience are 3 (16.7%) and 2 (11.1%), respectively. Lastly, 3 (16.7%) OCEs have less than 1 year of experience.
(a) What is the impact of different anti-patterns on alert diagnosis?
(b) How helpful are the predefined SOPs?
(c) How effective are the current reactions to anti-patterns?
Figure 2: A survey about the current practice of mitigating the anti-patterns
of alerts.
### III-A RQ1: Anti-patterns in Alerts
Anti-patterns of alerts are misconfigured and ineffective patterns in alerts that hinder alert processing in practice. Although alerts provide essential information to OCEs for diagnosing and mitigating failures, anti-patterns of alerts hinder this process. We divide the anti-patterns into two categories, i.e., individual anti-patterns and collective anti-patterns. Individual anti-patterns result from the ineffectiveness of one single alert. In practice, OCEs usually have limited time to diagnose alerts. If one alert and its SOP are poorly designed, e.g., with misleading diagnosis steps or a non-informative description, manual diagnosis will be difficult. Collective anti-patterns are ineffective patterns that alerts collectively exhibit. Sometimes, due to inappropriate configuration of alert strategies, complex dependencies, and inter-influence effects in the cloud, numerous alerts may occur simultaneously. If alerts flood the OCEs or are collectively hard to handle, manual diagnosis becomes too complicated, especially for inexperienced OCEs.
Characterizing these anti-patterns is the leading step for alert governance.
For this research question, we analyzed more than 4 million alerts over two
years to characterize the anti-patterns of alerts. The total number of alert
strategies in this empirical study is $2010$. To select the candidates of individual anti-patterns, we grouped the alerts according to the alert strategies and then calculated each strategy’s average processing time. The alert strategies with the top 30% longest processing times were selected as candidates of individual anti-patterns. To find cases of collective anti-patterns, we first grouped all the alerts by the hour they occurred and the region they belong to, and then counted the number of alerts per hour per region. If the number of alerts per hour per region exceeded 200444We set the threshold to 200 because the estimated maximum number of alerts an OCE team can deal with is 200. Experienced OCEs confirmed the threshold., we selected all the alerts in this group as a candidate of collective anti-patterns; a sketch of this grouping follows. We also went through the incident reports over the past two years to find ineffective alert patterns recorded by OCEs. We obtained five candidate cases of individual anti-patterns and two candidate cases of collective anti-patterns. After that, we asked two experienced OCEs to mark whether they considered each candidate pattern an anti-pattern. If both agreed, we included it as an anti-pattern; if they disagreed, another experienced OCE was invited to examine the pattern. As a result, we summarized four individual anti-patterns and two collective anti-patterns.
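As an illustration, the hour-and-region grouping can be expressed in a few lines. This is a sketch under our own assumptions about the alert record format (a dict with 'time' and 'region' fields), not the production implementation.

```python
from collections import Counter

def collective_candidates(alerts, threshold=200):
    """Group alerts by (hour, region) and flag groups whose size exceeds
    the threshold as candidates of collective anti-patterns."""
    counts = Counter(
        (a["time"].replace(minute=0, second=0, microsecond=0), a["region"])
        for a in alerts)
    return {key for key, n in counts.items() if n > threshold}
```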
Our survey asked the OCEs to determine the impact of different anti-patterns on alert diagnosis. Figure 2(a) shows the distribution of answers. Each bar
represents one anti-pattern, which is elaborated below.
#### III-A1 Individual anti-patterns
Individual anti-patterns are ineffective patterns of a single alert, including unclear name or description, misleading severity, improper and outdated generation rule, and transient and toggling alerts.
[A1] _Unclear Name or Description_. An unclear alert name or description prevents OCEs from forming an intuitive judgment at first sight, which slows down the diagnosis and even hinders OCEs from recognizing the logical connections between the alert and other alerts. Typical unclear alert names describe the system state in a very general way with vague words, e.g.,
“Elastic Computing Service is abnormal”, “Instance $x$ is abnormal”,
“Component $y$ encounters exceptions”, and “Computing cluster has risks”. All
OCEs agree with the impact of _unclear name or description_ , and 61.1% of
them think the impact is high.
[A2] _Misleading Severity_. Severity helps OCEs to prioritize which alert to
diagnose first. An inappropriately high severity level wastes OCEs’ time on less essential alerts, while a too-low severity level may lead to important alerts being missed. In our survey, 88.9% of OCEs agree with the impact
of _misleading severity_. In practice, we find that the setting of severity
heavily depends on domain knowledge. With the update of the cloud system,
especially the enhancement of fault tolerance mechanisms, the severity may
also change.
[A3] _Improper and Outdated Generation Rule_. Typically, the cloud monitoring
system will continuously monitor the performance indicators of both lower-
level infrastructures (e.g., CPU usage, disk usage) and higher-level services
(e.g., request per second, response latency). If any indicator increases over
or drops below the predefined thresholds, an alert will be generated. Although the performance indicators of lower-level infrastructures can provide valuable information when the root cause of the alert lies in the lower-level infrastructure (e.g., high CPU usage), the fault-tolerance techniques applied in cloud services mean that these indicators do not have a definite effect on the quality of cloud services from the customers’ perspective. Consequently, thresholds on such indicators can become improper or outdated. According to our survey, 72.2% of OCEs agree that the impact of _improper and outdated generation rule_ is high.
[A4] _Transient and Toggling Alerts_. As mentioned in Section II-B4, the cloud monitoring system can automatically clear some alerts. When the interval between the generation time and automatic clearance time of an alert is less than a certain value (known as the intermittent interruption threshold), the alert is called a transient alert. In plain terms, a transient alert is an alert that lasts for only a short time. When the same alert is generated and cleared multiple times (i.e., oscillation), and the number of oscillations is greater than a certain value (known as the oscillation threshold), it is called a toggling alert. Transient and toggling alerts are usually caused by alert strategies that are too sensitive to fluctuations of the metrics. They cause OCE fatigue and distract OCEs from dealing with other important alerts. Although there are disagreements on the level of impact, most OCEs (94.4%) think the impact exists.
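To make the two definitions above concrete, here is a small sketch that classifies one alert strategy’s generation/clearance history; the record format, parameter names, and default thresholds are our own assumptions.

```python
from datetime import timedelta

def classify_series(events, transient_threshold=timedelta(minutes=5),
                    oscillation_threshold=3):
    """`events` is a list of (generated_at, cleared_at) pairs for one alert
    strategy on one target. Returns whether the series contains transient
    alerts and whether it qualifies as a toggling alert. Both thresholds
    are configuration values; the defaults here are arbitrary examples."""
    transient = any(cleared - generated < transient_threshold
                    for generated, cleared in events)
    toggling = len(events) > oscillation_threshold  # generate/clear cycles
    return transient, toggling
```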
#### III-A2 Collective anti-patterns
Collective anti-patterns result from the ineffective patterns of a group of alerts that occur within a short time. Zhao et al. [10] defined numerous alerts (e.g., hundreds of alerts) from different cloud services in a short time (e.g., one minute) as an “alert storm”, and conducted several case studies of alert storms. In alert storms, even if all the individual alerts are effective, the large number of alerts may still set obstacles for OCEs and greatly affect system reliability in the following three ways. Firstly, during an alert storm, many alerts are generated; if OCEs check each alert manually, the troubleshooting will take an unacceptably long time. Secondly,
since alert storms occur frequently [10], the OCEs will continually receive
alerts by email, SMS, or even phone call. According to our study, alert storms
occur weekly or even daily, and 17 out of 18 interviewed OCEs say that the
alert storms greatly fatigue them. Lastly, the overwhelming number of alerts
adds pressure to the monitoring system, so the latency of generating new
alerts may increase.
Inspired by [10], we summarize the following collective anti-patterns from
confirmed cases of alert storms in Huawei Cloud. In this study, if the number
of alerts from a region exceeds 100 in an hour, we count it as an alert storm.
Consecutive hours of alert storm will be merged into one. Among the two
collective anti-patterns, “cascading alerts” has already been observed by
[10], but “repeating alerts” has not. In particular, we demonstrate the collective anti-patterns of alerts with a representative alert storm that happened from 7:00 AM to 11:59 AM in Huawei Cloud. During the alert storm, a total of 2751 alerts were generated, among which we observed both collective anti-patterns, as described below.
Figure 3: Repeating alerts in an alert storm.
Figure 4: Answers to Q1 “Overall Helpfulness” regarding OCEs’ working
experience.
[A5] _Repeating Alerts_. Repeating alerts are alerts from the same alert strategy that appear repeatedly, sometimes lasting for several hours. This is usually due to an inappropriate frequency of alert generation. For example, in Figure 3, we count the number of alerts per
strategy. The total number of alerts is 2751, and the number of effective
alert strategies is 200. To make the figure clear, we only show the names of the top two alerts; all other alerts are classified as “Others” in the figure. The alert “haproxy process number warning”, abbreviated as HAProxy in the figure, takes up around 30% of the total number of alerts in each hour. However, it is only a WARNING-level alert, i.e., the lowest level. Even though each individual alert is straightforward to process, dealing with it repeatedly is still time-consuming. If one rule continually generates alerts, it will distract OCEs from dealing with the more essential alerts.
Most OCEs (94.4%) agree with the impact of _repeating alerts_.
[A6] _Cascading Alerts_. Modern cloud systems are composed of many
microservices that depend on each other [22]. When a service enters an
anomalous state, other services that rely on it will probably suffer from
anomalous states as well. Such anomalous states can propagate through the
service-calling structure [27]. Despite various fault tolerance mechanisms being introduced, it is still common for minor anomalies to magnify their impact and eventually affect the entire system. Each of the affected services will
generate many anomalous monitoring metrics, resulting in many alerts (e.g.,
thousands of alerts per hour). As a consequence, the alerts burst and flood the OCEs. Although the alerts are different, they are implicitly related because they originate from the cascading effect of one single failure. Manually inspecting the alerts is hard without sufficient knowledge of the dependencies in the cloud system. All interviewed OCEs agree with the impact of _cascading alerts_. Table II shows a simplified sample of cascading alerts. By manually inspecting the alerts, experienced OCEs would infer that alert 1 possibly caused alerts 2 and 3 because 1) alerts 2 and 3 occurred right after alert 1, and 2) the relational database service relies on the block storage service as its backend. If the relational database service failed to commit changes, i.e., write data, one possible reason is that the storage service failed.
Finding 1: Individual anti-patterns and collective anti-patterns widely exist. They hinder alert diagnosis to different extents.
### III-B RQ2: Standard Alert Processing Procedure
Figure 5: An example Standard Operating Procedure.
The Standard Operating Procedure (SOP) defines the procedure to process a single alert. For each alert, its SOP includes the alert name, the alert
description, the generation rule of the alert (i.e., alert strategy), the
potential impact on the cloud system, the possible causes, and the steps to
process the alert. Figure 5 shows an example SOP of the alert
nginx_cpu_usage_over_80. OCEs can follow the SOP to process the alert upon receiving it. According to our survey, only 22.2% of OCEs think current SOPs are helpful (Q1, Figure 2(b)), and the other 77.8% of OCEs say the help is limited. Notably, all OCEs with over 3 years’ experience deem the SOPs to be of limited help; they account for $71.4\%$ of all OCEs who selected “Limited Help” for Q1 (Figure 4). Moreover, SOPs are considered much less helpful for diagnosing collective anti-patterns (Q3, Figure 2(b)) than individual anti-patterns (Q2, Figure 2(b)).
Finding 2: SOPs can help OCEs quickly process alerts, but the help is limited.
SOPs are considered less helpful when dealing with collective anti-patterns.
### III-C RQ3: Reactions to Anti-patterns
Depending on the number of alerts, OCEs react differently. When the number of
alerts is relatively small, OCEs will scan through all the reported alerts.
Then they will manually rule out alerts that are not of great importance and
deal with critical alerts that will affect the whole system.
OCEs react differently when the number of alerts becomes too large. According to our interviews with senior OCEs in Huawei Cloud, they typically adopt four kinds of reactions, i.e., alert blocking, alert aggregation, alert correlation analysis, and emerging alert detection. In practice, we observe that although the reactions are considered effective, they need to be reconfigured after updates of cloud services or alert strategies.
[R1] _Alert Blocking_. When OCEs find that transient alerts, toggling alerts, and repeating alerts provide no information about service anomalies, they can treat these alerts as noise and block them with alert blocking rules. As a result, these non-informative alerts will not distract OCEs from quickly identifying the root causes of service anomalies.
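In its simplest form, blocking is a set of manually configured predicates applied to the alert stream; the sketch below (field names assumed) blocks the noisy HAProxy alert from Figure 3.

```python
def apply_blocking_rules(alerts, rules):
    """Rule-based alert blocking: drop any alert matched by a blocking rule."""
    return [a for a in alerts if not any(rule(a) for rule in rules)]

# Example blocking rule for the repeating low-severity alert in Figure 3.
def block_haproxy(alert):
    return (alert["title"] == "haproxy process number warning"
            and alert["severity"] == "WARNING")
```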
[R2] _Alert Aggregation_. When dealing with large numbers of alerts, there may be many duplicate alerts in a time period. For the non-informative alerts, OCEs employ the alert blocking introduced above to facilitate analysis. For the informative ones, they adopt alert aggregation. To be more specific, OCEs set rules to aggregate alerts within a period and use the number of alerts as an additional feature [28]. By doing so, OCEs can quickly identify critical alerts and focus more on the information provided by them.
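A minimal aggregation rule collapses duplicates within a window and keeps the occurrence count as the extra feature; again, the key fields below are our own assumptions.

```python
from collections import Counter

def aggregate_alerts(alerts):
    """Collapse duplicate alerts (same title and location) into one
    representative record carrying the occurrence count."""
    counts = Counter((a["title"], a["location"]) for a in alerts)
    return [{"title": title, "location": loc, "count": n}
            for (title, loc), n in counts.items()]
```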
[R3] _Alert Correlation Analysis_. Apart from the information provided by the
alerts and their statistical characteristics, OCEs will also leverage other
exogenous information to analyze the correlation of alerts. Two kinds of
exogenous information are used to correlate alerts. The first is the
dependencies of alert strategies, which indicate the spread of alerts in the
cloud services [29]. For instance, if a source alert triggers another alert, OCEs will be more interested in the source alert, which is potentially the root cause of future service failures. They will associate all the derived alerts with their source alerts and diagnose the source alerts only. Another kind of exogenous information is the topology of cloud services. Based on the topology of
services, OCEs will set rules to correlate alerts based on the services that
generated them. With this kind of correlation, OCEs can quickly pinpoint the
root cause of a large number of alerts by following the topological
correlation.
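The topological correlation can be sketched as a walk toward the most upstream alerting service in the dependency graph. The graph encoding and field names below are assumptions made for illustration.

```python
def correlate_by_topology(alerts, depends_on):
    """Group alerts under the most upstream alerting service.
    `depends_on` maps each service to the services it relies on."""
    alerting = {a["service"] for a in alerts}

    def root_of(service, seen=()):
        for dep in depends_on.get(service, []):
            if dep in alerting and dep not in seen:
                return root_of(dep, seen + (service,))
        return service

    groups = {}
    for a in alerts:
        groups.setdefault(root_of(a["service"]), []).append(a)
    return groups  # {probable root-cause service: correlated alerts}
```

For the example in Table II, with `depends_on = {"Database": ["Block Storage"]}`, both database alerts would be grouped under the block storage alert.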
[R4] _Emerging Alert Detection_. Due to the large scale of cloud services, manually configured dependencies of alert strategies cannot cover all the alert strategies, which may cause alert correlation analysis to fail. For example, a few alerts corresponding to a root cause (i.e., emerging alerts) appear first; if they are not taken seriously, numerous cascading alerts will be generated once the root cause escalates its influence. The lack of critical association rules will prevent the OCEs from discovering the correlation and quickly diagnosing the alerts. This usually happens with gray failures like memory leaks and CPU overloading. Hence, it is helpful to capture the implicit dependencies. We employ adaptive online Latent Dirichlet Allocation [30, 31] to capture the implicit dependencies, with which OCEs could detect emerging alerts as early as possible for faster alert diagnosis.
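The following sketch illustrates the online-LDA idea using gensim; the tokenization, topic count, and toy titles are our own illustrative choices, not the production configuration from [30, 31].

```python
from gensim import corpora, models

# Toy historical alert titles standing in for the real alert corpus.
historical_titles = [
    "haproxy process number warning",
    "failed to allocate new blocks disk full",
    "failed to commit changes to database",
    "instance heartbeat timeout",
]
texts = [t.lower().split() for t in historical_titles]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)

# As new alerts stream in, update the model online and inspect each new
# alert's topic mixture; alerts sharing a dominant topic are treated as
# implicitly related.
incoming = ["failed to commit changes disk full"]
new_docs = [dictionary.doc2bow(t.lower().split()) for t in incoming]
lda.update(new_docs)  # incremental (online) update
print([lda.get_document_topics(d) for d in new_docs])
```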
Figure 2(c) shows OCEs’ opinions about the effectiveness of the four
reactions. In general, the effectiveness of all four reactions is relatively
high.
Finding 3: Current reactions are considered effective, but the configurations
of such reactions still require domain knowledge.
### III-D RQ4: Avoidance of Anti-patterns
To prevent the alert anti-patterns from occurring, Huawei Cloud also adopts preventative guidelines and conducts periodic reviews of alert strategies. We summarize the generic aspects to consider when designing the guidelines. The guidelines are designed by experienced OCEs and cover three aspects of alerts.
* •
_Target_ means what to monitor. The performance metrics highly related to the
service quality should be monitored.
* •
_Timing_ means when to generate an alert upon the manifestation of anomalies.
Sometimes an anomaly does not necessarily mean the service quality will be
affected.
* •
_Presentation_ means whether the alerts’ attributes are helpful for alert
diagnosis.
However, our interviews with OCEs show that the preventative guidelines are not strictly obeyed in practice. Most (88.9%) OCEs agree that strictly following the guidelines would make alert diagnosis easier.
Finding 4: The preventative guidelines could reduce the anti-patterns and
assist in alert diagnosis if they are carefully designed and strictly obeyed.
## IV Future Directions
Although several postmortem reactions and preventative guidelines are adopted (Section III), according to our study, the problem of alert anti-patterns is still prevalent in industrial cloud monitoring systems because most current measures require manual configuration. For alert blocking, OCEs need to inspect each alert and set rules manually; how to define the blocking rules and when to invalidate them becomes a crucial problem. A similar problem exists in alert correlation: OCEs also need to inspect alert-generation rules and service topology documents in addition to reading alerts, which places a considerable burden on them. Moreover, there are no clear criteria for evaluating the effectiveness of the reactions; OCEs can only estimate the effectiveness of the reactive measures by intuition, so outdated reactive measures are hard to detect. As a result, the whole process of alert governance becomes time-consuming and laborious.
Figure 6: Incorporating human knowledge and machine learning to detect anti-
patterns of alerts.
In Figure 6, we formulate the three stages of mitigating alert anti-patterns. We have already shared our experience of avoiding and reacting to alert anti-patterns in Section III. To close the gap between manual alert strategies and cloud system upgrades, we propose to explore the automatic detection of alert anti-patterns. Automatic evaluation of the Quality of Alerts (QoA) is a promising approach to this end.
Based on our empirical study, we propose three criteria to measure the quality
of alerts (QoA), including indicativeness, precision, and handleability.
* •
_Indicativeness_ measures whether the alert can indicate the failures that
will affect the end users’ experience.
* •
_Precision_ measures whether the alert can correctly reflect the severity of
the anomaly.
* •
_Handleability_ measures whether the alert can be quickly handled. The
handleability depends on the target and the presentation of the alert.
Improper target or unclear presentation decreases the handleability.
In the future, incorporating human knowledge and machine learning to evaluate the three aspects of alerts deserves more exploration. In particular, OCEs can provide their domain knowledge by creating labels like “high/low precision/handleability/indicativeness” for each alert during alert processing. With the labels, a machine learning model could be trained and continuously updated so that it automatically absorbs the human knowledge for future QoA evaluation.
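As a toy illustration of this direction, a text classifier can be trained on OCE-provided labels; the four titles and labels below are invented for the example, and a real model would use far richer features (severity, duration, SOP linkage, etc.).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = ["Instance x is abnormal",                    # vague title
          "Failed to allocate new blocks, disk full",  # informative title
          "Component y encounters exceptions",
          "Failed to commit changes to database"]
labels = ["low", "high", "low", "high"]                # OCE handleability labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)
print(model.predict(["Computing cluster has risks"]))  # expected: "low"
```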
## V Related Work
Many works focus on processing alerts of cloud services and microservices. One
of the essential tasks of alert processing is to reduce the enormous amount of
reported alerts to facilitate failure diagnosis. Alert correlation [32] and
clustering [33, 34, 10] are two common techniques employed to help OCEs find
critical alerts and repair the system in a short period. Li et al. [35] propose to generate incidents based on system alerts to prevent services from future failures. Unlike all prior works, our paper focuses not only on how to deal with alerts after they are generated, but also on how to generate better alerts and conduct better alert governance.
## VI Conclusion
This paper conducts the first empirical study to characterize the anti-patterns in cloud alerts. We also summarize the industrial practices of mitigating the anti-patterns through postmortem reactions and preventative guidelines. We hope our study inspires further research on automatic QoA evaluation and anti-pattern detection and benefits the reliability of cloud services in the long run.
## Acknowledgment
The work was supported by Key-Area Research and Development Program of
Guangdong Province (No. 2020B010165002), Key Program of Fundamental Research
from Shenzhen Science and Technology Innovation Commission (No.
JCYJ20200109113403826), and the Research Grants Council of the Hong Kong
Special Administrative Region, China (CUHK 14210920).
## References
* [1] Z. Chen, Y. Kang, L. Li, X. Zhang, H. Zhang, H. Xu, Y. Zhou, L. Yang, J. Sun, Z. Xu, Y. Dang, F. Gao, P. Zhao, B. Qiao, Q. Lin, D. Zhang, and M. R. Lyu, “Towards intelligent incident management: why we need it and how we make it,” in _ESEC/FSE ’20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Virtual Event, USA, November 8-13, 2020_. ACM, 2020, pp. 1487–1497.
* [2] Z. Li, Q. Cheng, K. Hsieh, Y. Dang, P. Huang, P. Singh, X. Yang, Q. Lin, Y. Wu, S. Levy, and M. Chintalapati, “Gandalf: An intelligent, end-to-end analytics service for safe deployment in large-scale cloud infrastructure,” in _17th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2020, Santa Clara, CA, USA, February 25-27, 2020_. USENIX Association, 2020, pp. 389–402.
* [3] C. L. Dickson, “A working theory-of-monitoring,” Google, Inc., Tech. Rep., 2013. [Online]. Available: https://www.usenix.org/conference/lisa13/working-theory-monitoring
* [4] P. Huang, C. Guo, L. Zhou, J. R. Lorch, Y. Dang, M. Chintalapati, and R. Yao, “Gray failure: The achilles’ heel of cloud-scale systems,” in _Proceedings of the 16th Workshop on Hot Topics in Operating Systems, HotOS 2017, Whistler, BC, Canada, May 8-10, 2017_. ACM, 2017, pp. 150–155.
* [5] H. Wang, Z. Wu, H. Jiang, Y. Huang, J. Wang, S. Köprü, and T. Xie, “Groot: An event-graph-based approach for root cause analysis in industrial settings,” in _ASE ’21: 36th IEEE/ACM International Conference on Automated Software Engineering, Virtual Event, Australia, November 15-19, 2021_. IEEE/ACM, 2021, pp. 1–12.
* [6] D. Blackmore, C. Tornbohm, D. Ackerman, C. Graham, S. Matson, T. Lo, T. Singh, A. Roy, C. Tenneson, M. Sawai, E. Kim, E. Anderson, S. Nag, N. Barton, N. Sethi, R. Malik, B. Williams, C. Healey, R. Buest, T. Wu, K. Madaan, S. Sahoo, H. Singh, and P. Sullivan, “Market share: It services, worldwide, 2020,” Tech. Rep., 2021.
* [7] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “Above the clouds: A berkeley view of cloud computing,” EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2009-28, Feb 2009. [Online]. Available: http://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.html
* [8] M. A. Doc, “Microservices architecture style,” 2019. [Online]. Available: https://docs.microsoft.com/en-us/azure/architecture/guide/architecture-styles/microservices
* [9] M. Villamizar, O. Garcés, H. Castro, M. Verano, L. Salamanca, R. Casallas, and S. Gil, “Evaluating the monolithic and the microservice architecture pattern to deploy web applications in the cloud,” in _2015 10th Computing Colombian Conference (10CCC)_. IEEE, 2015, pp. 583–590.
* [10] N. Zhao, J. Chen, X. Peng, H. Wang, X. Wu, Y. Zhang, Z. Chen, X. Zheng, X. Nie, G. Wang, Y. Wu, F. Zhou, W. Zhang, K. Sui, and D. Pei, “Understanding and handling alert storm for online service systems,” in _ICSE-SEIP 2020: 42nd International Conference on Software Engineering, Software Engineering in Practice, Seoul, South Korea, 27 June - 19 July, 2020_. ACM, 2020, pp. 162–171.
* [11] X. Zhou, X. Peng, T. Xie, J. Sun, C. Ji, D. Liu, Q. Xiang, and C. He, “Latent error prediction and fault localization for microservice applications by learning from system trace logs,” in _Proceedings of the ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019, Tallinn, Estonia, August 26-30, 2019_. ACM, 2019, pp. 683–694.
* [12] X. Zhang, Y. Xu, S. Qin, S. He, B. Qiao, Z. Li, H. Zhang, X. Li, Y. Dang, Q. Lin, M. Chintalapati, S. Rajmohan, and D. Zhang, “Onion: identifying incident-indicating logs for cloud systems,” in _ESEC/FSE ’21: 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Athens, Greece, August 23-28, 2021_. ACM, 2021, pp. 1253–1263.
* [13] X. Zhang, Q. Lin, Y. Xu, S. Qin, H. Zhang, B. Qiao, Y. Dang, X. Yang, Q. Cheng, M. Chintalapati, Y. Wu, K. Hsieh, K. Sui, X. Meng, Y. Xu, W. Zhang, F. Shen, and D. Zhang, “Cross-dataset time series anomaly detection for cloud systems,” in _2019 USENIX Annual Technical Conference, USENIX ATC 2019, Renton, WA, USA, July 10-12, 2019_. USENIX Association, 2019, pp. 1063–1076.
* [14] Y. Gan, Y. Zhang, K. Hu, D. Cheng, Y. He, M. Pancholi, and C. Delimitrou, “Seer: Leveraging big data to navigate the complexity of performance debugging in cloud microservices,” in _Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2019, Providence, RI, USA, April 13-17, 2019_. ACM, 2019, pp. 19–33.
* [15] P. Huang, C. Guo, J. R. Lorch, L. Zhou, and Y. Dang, “Capturing and enhancing in situ system observability for failure detection,” in _13th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2018, Carlsbad, CA, USA, October 8-10, 2018_. USENIX Association, 2018, pp. 1–16.
* [16] S. He, P. He, Z. Chen, T. Yang, Y. Su, and M. R. Lyu, “A survey on automated log analysis for reliability engineering,” _ACM Comput. Surv._ , vol. 54, no. 6, Jul. 2021. [Online]. Available: https://doi.org/10.1145/3460345
* [17] A. Pecchia, M. Cinque, G. Carrozza, and D. Cotroneo, “Industry practices and event logging: assessment of a critical software development process,” in _Proc. of the 37th IEEE/ACM International Conference on Software Engineering (ICSE)_ , 2015, pp. 169–178.
* [18] K. Yao, G. B. de Pádua, W. Shang, C. Sporea, A. Toma, and S. Sajedi, “Log4perf: suggesting and updating logging locations for web-based systems’ performance monitoring,” _Empir. Softw. Eng._ , vol. 25, no. 1, pp. 488–531, 2020.
* [19] S. He, Q. Lin, J. Lou, H. Zhang, M. R. Lyu, and D. Zhang, “Identifying impactful service system problems via log analysis,” in _Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018_. ACM, 2018, pp. 60–70.
* [20] V. Le and H. Zhang, “Log-based anomaly detection without log parsing,” in _ASE ’21: 36th IEEE/ACM International Conference on Automated Software Engineering, Virtual Event, Australia, November 15-19, 2021_. IEEE/ACM, 2021, pp. 1–12.
* [21] N. Zhao, H. Wang, Z. Li, X. Peng, G. Wang, Z. Pan, Y. Wu, Z. Feng, X. Wen, W. Zhang, K. Sui, and D. Pei, “An empirical investigation of practical log anomaly detection for online service systems,” in _ESEC/FSE ’21: 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Athens, Greece, August 23-28, 2021_. ACM, 2021, pp. 1404–1415.
* [22] T. Yang, J. Shen, Y. Su, X. Ling, Y. Yang, and M. R. Lyu, “Aid: Efficient prediction of aggregated intensity of dependency in large-scale cloud systems,” in _ASE ’21: 36th IEEE/ACM International Conference on Automated Software Engineering, Virtual Event, Australia, November 15-19, 2021_. IEEE/ACM, 2021, pp. 1–12.
* [23] X. Guo, X. Peng, H. Wang, W. Li, H. Jiang, D. Ding, T. Xie, and L. Su, “Graph-based trace analysis for microservice architecture understanding and problem diagnosis,” in _ESEC/FSE ’20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Virtual Event, USA, November 8-13, 2020_. ACM, 2020, pp. 1387–1397.
* [24] Y. Meng, S. Zhang, Y. Sun, R. Zhang, Z. Hu, Y. Zhang, C. Jia, Z. Wang, and D. Pei, “Localizing failure root causes in a microservice through causality inference,” in _28th IEEE/ACM International Symposium on Quality of Service, IWQoS 2020, Hangzhou, China, June 15-17, 2020_. IEEE, 2020, pp. 1–10.
* [25] P. Liu, Y. Chen, X. Nie, J. Zhu, S. Zhang, K. Sui, M. Zhang, and D. Pei, “Fluxrank: A widely-deployable framework to automatically localizing root cause machines for software service failure mitigation,” in _30th IEEE International Symposium on Software Reliability Engineering, ISSRE 2019, Berlin, Germany, October 28-31, 2019_. IEEE, 2019, pp. 35–46.
* [26] G. Zhao, S. Hassan, Y. Zou, D. Truong, and T. Corbin, “Predicting performance anomalies in software systems at run-time,” _ACM Trans. Softw. Eng. Methodol._ , vol. 30, no. 3, pp. 33:1–33:33, 2021.
* [27] A. W. Services, “Aws well-architected framework,” 2020. [Online]. Available: https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html
* [28] Z. Chen, J. Liu, Y. Su, H. Zhang, X. Wen, X. Ling, Y. Yang, and M. R. Lyu, “Graph-based incident aggregation for large-scale online service systems,” in _ASE ’21: 36th IEEE/ACM International Conference on Automated Software Engineering, Virtual Event, Australia, November 15-19, 2021_. IEEE/ACM, 2021, pp. 1–12.
* [29] R. Melo and D. Macedo, “A cloud immune security model based on alert correlation and software defined network,” in _28th IEEE International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE 2019, Naples, Italy, June 12-14, 2019_. IEEE, 2019, pp. 52–57.
* [30] T. Yang, C. Gao, J. Zang, D. Lo, and M. R. Lyu, “TOUR: dynamic topic and sentiment analysis of user reviews for assisting app release,” in _Companion of The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021_. ACM / IW3C2, 2021, pp. 708–712.
* [31] C. Gao, J. Zeng, M. R. Lyu, and I. King, “Online app review analysis for identifying emerging issues,” in _Proceedings of the 40th International Conference on Software Engineering, ICSE 2018, Gothenburg, Sweden, May 27 - June 03, 2018_. ACM, 2018, pp. 48–58.
* [32] S. A. Mirheidari, S. Arshad, and R. Jalili, “Alert correlation algorithms: A survey and taxonomy,” in _Cyberspace Safety and Security - 5th International Symposium, CSS 2013, Zhangjiajie, China, November 13-15, 2013, Proceedings_ , ser. Lecture Notes in Computer Science, vol. 8300. Springer, 2013, pp. 183–197.
* [33] D. Lin, R. Raghu, V. Ramamurthy, J. Yu, R. Radhakrishnan, and J. Fernandez, “Unveiling clusters of events for alert and incident management in large-scale enterprise it,” in _The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, New York, NY, USA - August 24 - 27, 2014_. ACM, 2014, pp. 1630–1639.
* [34] J. Xu, Y. Wang, P. Chen, and P. Wang, “Lightweight and adaptive service API performance monitoring in highly dynamic cloud environment,” in _2017 IEEE International Conference on Services Computing, SCC 2017, Honolulu, HI, USA, June 25-30, 2017_. IEEE Computer Society, 2017, pp. 35–43.
* [35] L. Li, X. Zhang, X. Zhao, H. Zhang, Y. Kang, P. Zhao, B. Qiao, S. He, P. Lee, J. Sun, F. Gao, L. Yang, Q. Lin, S. Rajmohan, Z. Xu, and D. Zhang, “Fighting the fog of war: Automated incident detection for cloud systems,” in _2021 USENIX Annual Technical Conference, USENIX ATC 2021, July 14-16, 2021_. USENIX Association, 2021, pp. 131–146.
# On a Perturbed Critical p-Kirchhoff-Type Problem
G. N. Cunha Instituto de Matemática e Estatística, Universidade Federal de
Goiás, Goiânia GO74001-970, Brazil<EMAIL_ADDRESS>, F. Faraci
Department of Mathematics and Computer Sciences, University of Catania, 95125
Catania, Italy<EMAIL_ADDRESS>and K. Silva Instituto de Matemática e
Estatística, Universidade Federal de Goiás, Goiânia GO74001-970, Brazil
<EMAIL_ADDRESS>
###### Abstract.
In this paper we deal with a stationary non-degenerate $p$-Kirchhoff type problem with a critical non-linearity and a subcritical parametrized perturbation. We work on bounded domains of the Euclidean space, without any restriction on the dimension or on $p>1$. Variational methods will be utilized
in combination with an analysis of the fibering functions of the energy
functional, in order to obtain ground state solutions, as well as Mountain
Pass solutions, depending on the values of the parameter. A local analysis of
the energy functional will allow us to obtain non-trivial solutions even
beyond the extremal parameter.
###### Contents
1. Introduction
2. Abstract Results
3. Existence Results
  3.1. Global Minimizers
  3.2. Local Minimizers
  3.3. Mountain Pass Solutions
4. Non-Existence Results
_Mathematics Subject Classification (2010)_ : 35J20, 35B33.
_Key words and phrases_ : Critical Nonlinearity, Extremal Parameter, Fibering
Maps, Kirchhoff Term, Subcritical Perturbation, Variational Methods.
## 1\. Introduction
In this paper we deal with the following stationary Kirchhoff type problem:
$\left\{\begin{array}{ll}-M\left(\displaystyle{\int_{\Omega}|\nabla u|^{p}\,dx}\right)\Delta_{p}u=|u|^{p^{\star}-2}u+\lambda f(x,u)&\text{in }\Omega,\\ u=0&\text{on }\partial\Omega,\end{array}\right.$ (1)
where $1<p<+\infty$, $\Omega$ is a bounded domain in ${\mathbb{R}}^{N}$ with
smooth boundary, $p^{\star}=\frac{pN}{N-p}$, $M:[0,+\infty[\mapsto[0,+\infty[$
is a continuous function with $\hat{M}(t):=\displaystyle{\int_{0}^{t}M(s)ds}$,
$f:\Omega\times\mathbb{R}\mapsto\mathbb{R}$ is a Carathéodory function with
primitive $F(x,t)=\displaystyle{\int_{0}^{t}f(x,s)ds}$ for each $x\in\Omega$,
$t\in{\mathbb{R}}$.
According to the survey [18], Kirchhoff problems arise from the study of the
transverse oscillations of a stretched string. The original equation is
$\rho hu_{tt}-\left\{\rho_{0}+\frac{Eh}{2L}\int_{0}^{L}|u_{x}|^{2}dx\right\}u_{xx}+\delta u_{t}+f(x,u)=0,$ (2)
where $u=u(t,x)$ is the lateral displacement at the time $t$ and at the space
coordinate $x$, $E$ the Young modulus, $\rho$ the mass density, $h$ the cross
section area, $L$ the length of the string, $\rho_{0}$ the initial axial
tension, $\delta$ the resistance modulus, and $f$ is the external force. When
$\delta=f=0$, (2) was introduced by Kirchhoff in [13]. Further details and a
study of the physical phenomena described by Kirchhoff’s classical theory can
be found in [19]. In recent years, the existence and multiplicity of solutions for Kirchhoff problems with a critical non-linearity have received
considerable attention. As a matter of fact, the main difficulty in dealing
with such problems is the lack of compactness of the Sobolev embedding
$W_{0}^{1,p}(\Omega)\subset L^{p^{\star}}(\Omega)$, which prevents the
application of standard variational methods.
The existence and multiplicity of solutions of Kirchhoff type equations with
critical exponents have been investigated by using different techniques, such
as truncation and variational methods, the Nehari manifold approach, the
Ljusternik-Schnirelmann category theory, genus theory (see for instance
[3]–[8] and the references therein).
In the present paper, we apply an idea introduced in [6], which was inspired by the fibering method in [17] and the notion of extremal parameters described in [12], to analyze the topological changes occurring in the energy functional as the parameter $\lambda$ varies. With such a technique, we do not need to consider the second order derivative of the fiber function, except for the non-existence results. More specifically, we employ the second concentration-compactness principle of Lions to obtain Mountain Pass type solutions (see [1], [7], [10], [11], [15], [16], [20]), but also to establish the sequential weak lower semi-continuity of the energy functional, with a proof inspired by [14], which is, in turn (as far as we know), the only work so far where this approach has been utilized. Also, in the cases beyond the extremal parameter, we prove that it is still possible to obtain non-trivial solutions, as long as we minimize the energy functional locally, as in [6].
We are going to look for solutions of problem (1) in the Sobolev space
$W^{1,p}_{0}(\Omega)$. This linear space is endowed with the norm
$\|u\|:=\left(\int_{\Omega}|\nabla u|^{p}dx\right)^{\frac{1}{p}}$
and continuously embedded into $L^{p^{\star}}(\Omega)$ with embedding constant
$\displaystyle{S=\sup_{u\in W_{0}^{1,p}(\Omega)\setminus\{0\}}\frac{\|u\|_{p^{\star}}^{p^{\star}}}{\|u\|^{p^{\star}}}}.$ (3)
A weak solution for problem (1) is a critical point of the energy functional
$\Phi_{\lambda}:W_{0}^{1,p}(\Omega)\mapsto{\mathbb{R}}$ given by
$\displaystyle{\Phi_{\lambda}(u)=\frac{1}{p}\hat{M}(\|u\|^{p})-\frac{1}{p^{\star}}\|u\|^{p^{\star}}_{p^{\star}}-\lambda\int_{\Omega}F(x,u(x))dx.}$
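Explicitly, a routine computation (legitimate under the growth conditions assumed below) gives, for all $u,v\in W^{1,p}_{0}(\Omega)$,
$\langle\Phi_{\lambda}^{\prime}(u),v\rangle=M(\|u\|^{p})\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla v\,dx-\int_{\Omega}|u|^{p^{\star}-2}uv\,dx-\lambda\int_{\Omega}f(x,u)v\,dx,$
so that $u$ is a weak solution of (1) exactly when this expression vanishes for every $v$.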
In order to control the behavior of the fibers at $0$ and $+\infty$, as well
as to establish the coercivity of the energy functional, we need the following
hypotheses on the non-local term $M$:
1. $(\rho_{1})$:
$\displaystyle{\lim_{t\to 0^{+}}M(t)>0}$;
2. $(\rho_{2})$:
$\displaystyle{\lim_{t\to+\infty}\frac{M(t)}{t^{\frac{r-p}{p}}}>0}$ for some
$r>p^{\star}$.
The sequential weak lower semi-continuity of the energy functional, on the
other hand, is associated with the following conditions:
1. $(\beta_{1})$:
$\displaystyle{\inf_{t>0}\frac{\hat{M}(t)}{t^{\frac{p^{\star}}{p}}}\geq
S\frac{p}{p^{\star}}}$;
2. $(\beta_{2})$:
$\displaystyle{\hat{M}(t+s)\geq\hat{M}(t)+\hat{M}(s)}$ for all $t>0$ and
$s>0$.
Notice that conditions ($\beta_{1}$) and ($\beta_{2}$) imply that $\hat{M}$ is strictly increasing.
The existence of a Mountain Pass solution comes mainly from the next
condition. This hypothesis (which is stronger than ($\beta_{1}$)) is also
related to the non-existence results:
1. $(\gamma_{1})$:
$\displaystyle{\inf_{t>0}\frac{M(t)}{t^{\frac{p^{\star}}{p}-1}}>S}$
An example of a function satisfying conditions ($\rho_{1}$), ($\rho_{2}$),
($\beta_{1}$), ($\beta_{2}$), and ($\gamma_{1}$), is
$M(t):=a+bt^{\alpha-1},$ (4)
for suitable values of $\alpha>1$ and $a,b>0$:
* (i)
Assume $\alpha>\frac{N}{N-p}$; then
$\displaystyle{\inf_{t>0}\frac{\hat{M}(t)}{t^{\frac{p^{\star}}{p}}}}\geq\frac{p}{p^{\star}}S$
if, and only if,
$a^{\frac{N(\alpha-1)-\alpha
p}{p}}b\geq\left[\frac{p}{p^{\star}}S\frac{N(\alpha-1)-p\alpha}{N(\alpha-1)-p\alpha+p}\right]^{\frac{(N-p)(\alpha-1)}{p}}\alpha\frac{p}{N(\alpha-1)-p\alpha}.$
* (ii)
Also, for $\alpha>\frac{N}{N-p}$, the following assertion holds true:
$\displaystyle{\inf_{t>0}\frac{M(t)}{t^{\frac{p^{\star}}{p}-1}}}>S$
if, and only if,
$a^{\frac{N(\alpha-1)-\alpha
p}{p}}b>\left[S\frac{N(\alpha-1)-p\alpha}{N(\alpha-1)-p\alpha+p}\right]^{\frac{(N-p)(\alpha-1)}{p}}\frac{p}{N(\alpha-1)-p\alpha}.$
* (iii)
For $\alpha>1$, $\displaystyle{\lim_{t\to 0}M(t)>0}$.
* (iv)
For any $r>p^{\star}$ such that $\alpha p>r$, there holds
$\displaystyle{\lim_{t\to+\infty}\frac{M(t)}{t^{\frac{r-p}{p}}}>0}.$
* (v)
For $\alpha>1$, the inequality
$\hat{M}(t+s)\geq\hat{M}(t)+\hat{M}(s)\ \ \ \ \forall\ t,s\ \in[0,+\infty[$
holds true.
* (vi)
For $\alpha\in(1,\frac{N}{N-p}]$, there holds
$\displaystyle{\inf_{t>0}\frac{\hat{M}(t)}{t^{\frac{p^{\star}}{p}}}}=\left\{\begin{array}{lr}0&{\rm if}\ \ \alpha<\frac{N}{N-p},\\ \frac{b}{\alpha}&{\rm if}\ \ \alpha=\frac{N}{N-p}.\end{array}\right.$
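For the reader’s convenience, here is a short verification of items (iii) and (v) for the model nonlinearity (4); it only uses the explicit primitive of $M$ and an elementary superadditivity inequality. One has
$\hat{M}(t)=\int_{0}^{t}\left(a+bs^{\alpha-1}\right)ds=at+\frac{b}{\alpha}t^{\alpha},$
and $\lim_{t\to 0^{+}}M(t)=a>0$, which is (iii). For (v), since $\alpha>1$ there holds $(t+s)^{\alpha}=(t+s)(t+s)^{\alpha-1}\geq t\,t^{\alpha-1}+s\,s^{\alpha-1}=t^{\alpha}+s^{\alpha}$ for all $t,s\geq 0$, hence
$\hat{M}(t+s)=a(t+s)+\frac{b}{\alpha}(t+s)^{\alpha}\geq\left(at+\frac{b}{\alpha}t^{\alpha}\right)+\left(as+\frac{b}{\alpha}s^{\alpha}\right)=\hat{M}(t)+\hat{M}(s).$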
Some comparison with related literature is in order. For
$S=S_{N}^{-\frac{p^{\star}}{p}}$ in (3), and $\alpha=p=2$ in (4), we obtain
the problem studied in the work [6]; in that paper, the conditions
corresponding to ($\beta_{1}$) and ($\gamma_{1}$) would be, respectively,
$a^{\frac{N-4}{2}}b\geq\frac{4\,(N-4)^{\frac{N-4}{2}}}{N^{\frac{N-2}{2}}S_{N}^{\frac{N}{2}}}$
and
$a^{\frac{N-4}{2}}b>\frac{2\,(N-4)^{\frac{N-4}{2}}}{(N-2)^{\frac{N-2}{2}}S_{N}^{\frac{N}{2}}}.$
In the present paper we also achieve an improvement with respect to [5]
concerning the semicontinuity property: in that paper, the condition
corresponding to ($\beta_{1}$), i.e.,
$\displaystyle{\inf_{l>0}\frac{\hat{M}(l)}{l^{\frac{p^{\star}}{p}}}\geq
c_{p}}$
where
$c_{p}=\left\\{\begin{array}[]{lcl}\left(2^{p-1}-1\right)^{\frac{p^{\star}}{p}}\frac{p}{p^{\star}}S_{N}^{-\frac{p^{\star}}{p}}&if&p\geq 2,\\\ 2^{2p^{\star}-1-\frac{p^{\star}}{p}}\frac{p}{p^{\star}}S_{N}^{-\frac{p^{\star}}{p}}&if&1<p<2,\end{array}\right.$
is more restrictive than $(\beta_{1})$ since for $p\neq 2$ there holds
$c_{p}>\frac{p}{p^{\star}}S_{N}^{-\frac{p^{\star}}{p}}.$
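Indeed, for $p>2$ one has $2^{p-1}-1>1$, hence $\left(2^{p-1}-1\right)^{\frac{p^{\star}}{p}}>1$; and for $1<p<2$ the exponent satisfies
$2p^{\star}-1-\frac{p^{\star}}{p}=p^{\star}\left(2-\frac{1}{p}\right)-1>p^{\star}-1>0,$
since $2-\frac{1}{p}>1$ and $p^{\star}>1$, so $2^{2p^{\star}-1-\frac{p^{\star}}{p}}>1$ as well.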
The following hypotheses on the perturbation $f$ will be used throughout this
work.
1. $(f_{1})$:
There exist $c_{1},c_{2}>0$ and $q\in(p,p^{\star})$ such that $|f(x,t)|\leq
c_{1}+c_{2}|t|^{q-1}$ for $t\in{\mathbb{R}}$, and a.e. in $\Omega$;
2. $(f_{2})$:
$\displaystyle{\lim_{t\to 0}}\frac{f(x,t)}{|t|^{p-1}}=0$ uniformly on
$x\in\Omega$;
3. $(f_{3})$:
$f(x,t)>0$ for every $t>0$, a.e. in $\Omega$, and $f(x,t)<0$ for every $t<0$, a.e. in $\Omega$. Moreover, there exists $\mu>0$ such that $f(x,t)\geq\mu>0$ for a.a. $x\in\Omega$ and every $t\in I$, where $I$ is an open subinterval of $(0,+\infty)$.
An example of a function satisfying the conditions ($f_{1}$), ($f_{2}$), and
($f_{3}$), is
$f(x,t)=|t|^{q-2}t\qquad\forall\ t\in{\mathbb{R}},$
where $q\in(p,p^{\star})$ is fixed.
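The verification is immediate: $|f(x,t)|=|t|^{q-1}\leq 1+|t|^{q-1}$, so ($f_{1}$) holds with $c_{1}=c_{2}=1$; moreover,
$\frac{f(x,t)}{|t|^{p-1}}=|t|^{q-p}\,{\rm sgn}(t)\longrightarrow 0\quad\hbox{as $t\to 0$, since $q>p$,}$
which gives ($f_{2}$); finally, $f(x,t)=|t|^{q-2}t$ has the sign of $t$, and $f(x,t)=t^{q-1}\geq 1=:\mu$ for every $t$ in, say, $I=(1,2)$, which gives ($f_{3}$).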
Now we introduce the main results of this paper, which we prove in the next
sections.
The first result justifies the necessity of the parameterized perturbation.
###### Theorem 1.1.
Under condition ($\gamma_{1}$) there exists a number $\lambda_{1}^{\star}>0$
such that for all $-\infty<\lambda<\lambda_{1}^{\star}$ problem (1) possesses
only the trivial solution.
The following existence result is the main goal of this paper.
###### Theorem 1.2.
Assume conditions ($\rho_{1}$), ($\rho_{2}$), ($\beta_{1}$), ($\beta_{2}$),
($f_{1}$), ($f_{2}$), and ($f_{3}$). Then, there exists
$\lambda_{0}^{\star}\geq 0$ such that
* (i)
if $\lambda>\lambda_{0}^{\star}$, then the energy functional $\Phi_{\lambda}$
has a global minimizer $u_{\lambda}$ such that
$I_{\lambda}=\Phi_{\lambda}(u_{\lambda})<0$ (in particular, $u_{\lambda}\neq 0$);
* (ii)
if $\lambda=\lambda_{0}^{\star}$, then the energy functional $\Phi_{\lambda}$
has a global minimizer $u_{\lambda_{0}^{\star}}$ such that
$I_{\lambda_{0}^{\star}}=0$; if the inequality in condition ($\beta_{1}$) is
strict, then $u_{\lambda_{0}^{\star}}\neq 0$;
* (iii)
if $\lambda<\lambda_{0}^{\star}$, then for all $u\in
W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ there holds $\Phi_{\lambda}(u)>0$.
Therefore, $u_{\lambda}=0$ is the only global minimizer of $\Phi_{\lambda}.$
The next result shows that although we may not find non-trivial global
minimizers for the energy functional for $\lambda<\lambda_{0}^{\star}$, we may
still find non-trivial local minimizers, as long as the parameter $\lambda$ is
close enough to $\lambda_{0}^{\star}$.
###### Theorem 1.3.
Assume conditions ($\rho_{1}$), ($\rho_{2}$), ($\beta_{1}$), ($\beta_{2}$),
($f_{1}$), ($f_{2}$), and ($f_{3}$). If the inequality in condition
($\beta_{1}$) is strict, then there exists $\epsilon>0$ small enough so that
for each $\lambda\in(\lambda_{0}^{\star}-\epsilon,\lambda_{0}^{\star})$ the
energy functional $\Phi_{\lambda}$ possesses a local minimizer with positive
energy.
The next result states the existence of a mountain pass type solution.
###### Theorem 1.4.
Assume conditions ($\rho_{1}$), ($\rho_{2}$), ($\beta_{2}$), ($\gamma_{1}$),
($f_{1}$), ($f_{2}$), and ($f_{3}$). Then, there exists $\epsilon>0$ small
enough such that for each $\lambda>\lambda_{0}^{\star}-\epsilon$, problem (1)
has a solution of mountain pass type.
## 2. Abstract Results
Now we proceed to describe the abstract results which allow us to deduce our
main theorems, stated in Section 1.
For each $u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ and $\lambda\geq 0$, we
define the fiber function $\psi_{\lambda,u}:[0,+\infty[\,\to{\mathbb{R}}$ by
$\psi_{\lambda,u}(t):=\Phi_{\lambda}(tu)$.
###### Lemma 2.1.
Assume conditions ($f_{1}$) and ($f_{2}$). Then, the following assertions hold
true:
* (i):
Under condition ($\rho_{1}$), there exist
$\epsilon_{1}=\epsilon_{1}(\lambda,u)>0$ and
$\epsilon_{2}=\epsilon_{2}(\lambda,u)>0$ such that $\psi_{\lambda,u}(t)>0\
\forall t\in(0,\epsilon_{1})$, and $\psi^{\prime}_{\lambda,u}(t)>0\ \forall
t\in(0,\epsilon_{2})$;
* (ii):
Under condition ($\rho_{2}$), there holds
$\displaystyle{\lim_{t\to\infty}\psi_{\lambda,u}(t)}=+\infty$ and
$\displaystyle{\lim_{t\to\infty}\psi^{\prime}_{\lambda,u}(t)=+\infty}$.
###### Proof.
We will prove the claims for $\psi_{\lambda,u}$; those for $\psi^{\prime}_{\lambda,u}$ follow from analogous computations. In fact, write
$\psi_{\lambda,u}(t)=t^{p}\|u\|^{p}\left[\frac{1}{p}\frac{\hat{M}(t^{p}\|u\|^{p})}{t^{p}\|u\|^{p}}-\frac{t^{p^{\star}-p}}{p^{\star}}\frac{\|u\|_{p^{\star}}^{p^{\star}}}{\|u\|^{p}}-\frac{\lambda}{\|u\|^{p}}\int_{\Omega}\frac{F(x,tu(x))}{t^{p}}dx\right].$
Note that by conditions ($f_{1}$) and ($f_{2}$), for each $\varepsilon>0$
there exists $c>0$ such that
$|F(x,t)|\leq\varepsilon|t|^{p}+c|t|^{q}\ \hbox{for all $t\in{\mathbb{R}}$,
a.e. in $\Omega$. }$
Therefore,
$\lim_{t\to 0}\int_{\Omega}\frac{F(x,tu(x))}{t^{p}}dx=0.$
By assumption ($\rho_{1}$) and de l'Hôpital's rule (substituting $s=t^{p}\|u\|^{p}$),
$\displaystyle{\lim_{t\to 0}\frac{\hat{M}(t^{p}\|u\|^{p})}{t^{p}\|u\|^{p}}=\lim_{s\to 0^{+}}\frac{\hat{M}(s)}{s}=\lim_{s\to 0^{+}}M(s)>0},$
and the first conclusion in $(i)$ follows.
By $(\rho_{2})$ and the continuity of $M$, there exist positive constants $c_{1},c_{2}$ such that
$\hat{M}(t)\geq c_{1}t^{\frac{r}{p}}-c_{2}\ \hbox{for all $t\geq 0$}.$
Thus, from $(f_{1})$ and after possibly relabeling the constants $c_{i}$,
$\displaystyle\psi_{\lambda,u}(t)$
$\displaystyle=\frac{1}{p}{\hat{M}(t^{p}\|u\|^{p})}-\frac{t^{p^{\star}}}{p^{\star}}{\|u\|_{p^{\star}}^{p^{\star}}}-{\lambda}\int_{\Omega}{F(x,tu(x))}dx$
$\displaystyle\geq\frac{1}{p}c_{1}t^{r}\|u\|^{r}-\frac{t^{p^{\star}}}{p^{\star}}{\|u\|_{p^{\star}}^{p^{\star}}}-c_{3}t^{q}\|u\|_{q}^{q}-c_{2}$
and the first claim in $(ii)$ holds. ∎
For each $u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$, consider now the
following system:
$\left\\{\begin{array}[]{l}\psi_{\lambda,u}(t)=0\\\
\psi^{\prime}_{\lambda,u}(t)=0\\\
\psi_{\lambda,u}(t)=\inf_{s>0}\psi_{\lambda,u}(s).\end{array}\right.$ (5)
###### Lemma 2.2.
Assume conditions ($f_{1}$), ($f_{2}$), ($\rho_{1}$), ($\rho_{2}$) and
($\beta_{1}$). Then, system (5) has a solution $(\lambda_{0}(u),t_{0}(u))$ for
each $u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$. Furthermore, the solution is
unique with respect to $\lambda$.
###### Proof.
We proceed by showing first that for each $u\in
W_{0}^{1,p}(\Omega)\backslash\\{0\\}$, the set
$\Lambda_{u}:=\\{\lambda\geq 0:\ \ \inf_{t\geq 0}\psi_{\lambda,u}(t)\geq 0\\}$
is nonempty. Fix $\bar{\lambda}>0$. By Lemma 2.1, there exist
$\epsilon_{1},\delta_{1}>0$ such that $\psi_{\bar{\lambda},u}>0$ in
$(0,\epsilon_{1})\cup(\delta_{1},+\infty)$.
For $\lambda\leq\bar{\lambda}$, since
$\psi_{\lambda,u}\geq\psi_{\bar{\lambda},u}$, the fiber $\psi_{\lambda,u}$ is
positive over $(0,\epsilon_{1})\cup(\delta_{1},+\infty)$.
Take a positive monotone sequence $\\{\lambda_{k}\\}_{k\geq 1}$ converging to
zero. The sequence of continuous real functions
$\\{\psi_{\lambda_{k},u}\\}_{k\geq 1}$ converges uniformly to $\psi_{0,u}$
over the compact interval $[\epsilon_{1},\delta_{1}]$. We observe that
$\inf_{[\epsilon_{1},\delta_{1}]}\psi_{0,u}>0$, as follows from condition ($\beta_{1}$) and the fact that the constant $S$ in (3) is not attained.
Therefore, there exists a $k_{0}$ such that the fiber $\psi_{\lambda_{k},u}$
is positive over the interval $[\epsilon_{1},\delta_{1}]$ for all $k>k_{0}$.
For $k$ large enough such that $\lambda_{k}<\bar{\lambda}$, we get that
$\lambda_{k}\in\Lambda_{u}$. Also, $\Lambda_{u}$ is bounded from above, since
for fixed $t_{0}>0$, $\lim_{\lambda\to\infty}\psi_{\lambda,u}(t_{0})=-\infty$.
Now we define the candidate $\lambda_{0}(u):=\sup\Lambda_{u}$. For each
positive $\epsilon\leq\epsilon_{0}$ (where $\epsilon_{0}$ is fixed), we denote
by $t_{0}(\epsilon)$, the first critical point of
$\psi_{\lambda_{0}(u)+\epsilon,u}$ such that
$\psi_{\lambda_{0}(u)+\epsilon,u}(t_{0}(\epsilon))<0\ .$ (6)
Since the function $\epsilon\mapsto\psi_{\lambda_{0}(u)+\epsilon,u}(t)$ is
decreasing, the map $\epsilon\mapsto t_{0}(\epsilon)$ is bounded from below by
the first root of the fiber $\psi_{\lambda_{0}(u)+\epsilon_{0},u}$. Define the
candidate $t_{0}(u):=\liminf_{\epsilon\to 0}t_{0}(\epsilon)$. By taking the
liminf on inequality (6) as $\epsilon$ goes to zero, we get
$\psi_{\lambda_{0}(u),u}(t_{0}(u))\leq 0$. Since $\lambda_{0}(u)$ belongs to
$\Lambda_{u}$ as well, there holds $\psi_{\lambda_{0}(u),u}(t)\geq 0$ for all
$t>0$, in particular for $t=t_{0}(u)$. Therefore, $\psi_{\lambda_{0}(u),u}(t_{0}(u))=0$; since $t_{0}(u)$ is then a global minimum point of $\psi_{\lambda_{0}(u),u}$, we also get $\psi^{\prime}_{\lambda_{0}(u),u}(t_{0}(u))=0$, so the pair $(\lambda_{0}(u),t_{0}(u))$ solves system (5).
Let us now prove the uniqueness.
Assume that the ordered pairs
$(\lambda_{0}(u),t_{0}(u))$,$(\lambda^{{}^{\prime}}_{0}(u),t^{{}^{\prime}}_{0}(u))$
are both solutions of system (5). Assume without loss of generality that
$\lambda_{0}^{{}^{\prime}}(u)\geq\lambda_{0}(u)$. Then,
$0\leq\psi_{\lambda_{0}^{{}^{\prime}}(u),u}(t_{0}(u))\leq\psi_{\lambda_{0}(u),u}(t_{0}(u))=0$,
which implies $\lambda_{0}^{{}^{\prime}}(u)=\lambda_{0}(u)$. The proof is
concluded. ∎
###### Corollary 2.1.
For all $u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ and all $k\geq 0$, there
holds $\lambda_{0}(ku)=\lambda_{0}(u)$.
###### Proof.
For $u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ and $k\geq 0$, there holds
$ku\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$. System (5) possesses a solution
$(\lambda_{0}(ku),t_{0}(ku))$, where the first coordinate is unique:
$\left\\{\begin{array}[]{l}\psi_{\lambda_{0}(ku),ku}(t_{0}(ku))=0\\\
\psi^{{}^{\prime}}_{\lambda_{0}(ku),ku}(t_{0}(ku))=0\\\
\psi_{\lambda_{0}(ku),ku}(t_{0}(ku))=\inf_{l>0}\psi_{\lambda_{0}(ku),ku}(l).\end{array}\right.$
(7)
Since $\psi_{\lambda,ku}(t)=\Phi_{\lambda}(t\,ku)=\Phi_{\lambda}((kt)u)=\psi_{\lambda,u}(kt)$ for every $\lambda$ and $t$, system (7) may be rewritten as
$\left\\{\begin{array}[]{l}\psi_{\lambda_{0}(ku),u}(kt_{0}(ku))=0\\\
\psi^{{}^{\prime}}_{\lambda_{0}(ku),u}(kt_{0}(ku))=0\\\
\psi_{\lambda_{0}(ku),u}(kt_{0}(ku))=\inf_{l>0}\psi_{\lambda_{0}(ku),u}(kl)=\inf_{l>0}\psi_{\lambda_{0}(ku),u}(l).\end{array}\right.$
By uniqueness, we conclude that
$\lambda_{0}(ku)=\lambda_{0}(u).$
∎
Now, we define an extremal parameter which will play an important role in our
analysis:
$\lambda^{\star}_{0}:=\inf_{u\in
W_{0}^{1,p}(\Omega)\backslash\\{0\\}}\lambda_{0}(u).$ (8)
###### Lemma 2.3.
Assume conditions ($f_{1}$), ($f_{2}$), and ($f_{3}$). The following
assertions hold true.
* (i):
Under condition ($\beta_{1}$), there holds $\lambda_{0}^{\star}\geq 0$.
* (ii):
If the inequality in condition ($\beta_{1}$) is strict, then $\lambda_{0}^{\star}>0$, and conversely.
###### Proof.
$(i)$ Let us perform a proof by contradiction. Assume there exists a sequence
$\\{u_{k}\\}_{k\geq 1}\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ such that (we
use the notation $\lambda_{k}:=\lambda_{0}(u_{k})$)
$-\infty\leq\lim_{k\to\infty}\lambda_{k}=\lambda_{0}^{\star}<0.$ (9)
We may assume by Corollary 2.1 that $\|u_{k}\|=1$ for all $k\geq 1$. By the definition of $\lambda_{0}(u_{k})$, there exists a sequence $\\{t_{k}\\}_{k\geq 1}$ of positive numbers (which is bounded, according to Lemma 2.1, item $(ii)$) such that
$\frac{1}{p}\hat{M}(t_{k}^{p}\|u_{k}\|^{p})-\frac{1}{p^{\star}}t_{k}^{p^{\star}}\|u_{k}\|^{p^{\star}}_{p^{\star}}=\lambda_{k}\int_{\Omega}F(x,t_{k}u_{k})dx\
\ \forall\ k\geq 1.$
By the Sobolev embedding,
$\frac{1}{p}\hat{M}(t_{k}^{p})-\frac{1}{p^{\star}}t_{k}^{p^{\star}}S\leq\lambda_{k}\int_{\Omega}F(x,t_{k}u_{k})dx\
\ \forall\ k\geq 1.$ (10)
By ($\beta_{1}$), the left-hand side of (10) is nonnegative, while, by (9) and ($f_{3}$), the right-hand side is negative for $k$ large enough, a contradiction. Therefore, there holds $\lambda_{0}^{\star}\geq 0$.
$(ii)$ Let $L>S\frac{p}{p^{\star}}$ be such that
$\inf_{t>0}\frac{\hat{M}(t)}{t^{\frac{p^{\star}}{p}}}\geq L.$ Arguing as in
the proof of item $(i)$, assume by contradiction that
$\displaystyle{\lim_{k\to+\infty}\lambda_{k}=\lambda_{0}^{\star}=0}$, where
$\lambda_{k}=\lambda_{0}(u_{k})$ with $\|u_{k}\|=1$. Thus, there exists a
sequence $\\{t_{k}\\}_{k\geq 1}$ of positive numbers such that
$\left(L-S\frac{p}{p^{\star}}\right)t_{k}^{p^{\star}}\leq{\hat{M}(t_{k}^{p}})-S\frac{p}{p^{\star}}t_{k}^{p^{\star}}\leq\lambda_{k}p\int_{\Omega}{F(x,t_{k}u_{k})}dx.$
The right hand side tends to zero since $\lambda_{k}\to 0$ and
$\left\\{\int_{\Omega}F(x,t_{k}u_{k})dx\right\\}$ is bounded due to the growth
of $F$, and the fact that $\\{t_{k}\\}$ and $\\{u_{k}\\}$ are bounded. This
implies that $\displaystyle\lim_{k\to+\infty}t_{k}=0$. Dividing the previous
inequality by $t_{k}^{p}$, we get
$\frac{\hat{M}(t_{k}^{p})}{t_{k}^{p}}-S\frac{p}{p^{\star}}t_{k}^{p^{\star}-p}\leq{\lambda_{k}}p\int_{\Omega}\frac{F(x,t_{k}u_{k})}{t_{k}^{p}}dx,$
which contradicts assumption ($\rho_{1}$) when passing to the limit as
$k\to\infty$. Therefore, $\lambda_{0}^{\star}>0$.
Let us prove the viceversa. Assume that condition ($\beta_{1}$) holds with
equality. We will prove that $\lambda_{0}^{\star}=0$. Without loss of
generality we may assume that $0\in\Omega$. Fix a ball of radius $r>0$ such
that $B_{2r}(0)\subset\Omega$ and let $\varphi$ be a function in $C^{\infty}_{0}(B_{2r}(0))$ such that $\varphi(x)=1$ in $B_{r}(0)$, $0\leq\varphi\leq 1$, and $|\nabla\varphi|\leq 2$.
Put
$v_{\varepsilon}(x)=\frac{\varphi(x)}{\left(\varepsilon+|x|^{\frac{p}{p-1}}\right)^{\frac{N-p}{p}}}\qquad\hbox{and}\qquad
u_{\varepsilon}(x)=\frac{v_{\varepsilon}(x)}{\|v_{\varepsilon}\|}.$
By [9] (see also [2]), there exists a constant $K=K(N,p)$ such that
$\displaystyle\|v_{\varepsilon}\|^{p}=K\varepsilon^{-\frac{N-p}{p}}+O(1);\qquad\displaystyle\|v_{\varepsilon}\|_{p^{\star}}^{p}=KS^{\frac{p}{p^{\star}}}\varepsilon^{-\frac{N-p}{p}}+O(\varepsilon);$
$\displaystyle\|u_{\varepsilon}\|=1;\qquad\displaystyle\|u_{\varepsilon}\|_{p^{\star}}^{p^{\star}}=S+O(\varepsilon^{\frac{N}{p}}).$
We deduce in particular that
$\displaystyle\|v_{\varepsilon}\|=K^{\frac{1}{p}}\varepsilon^{-\frac{N-p}{p^{2}}}+O(\varepsilon^{\frac{(N-p)(p-1)}{p^{2}}}).$
Let $t_{0}>0$ such that
$\inf_{t>0}\frac{\hat{M}(t)}{t^{\frac{p^{\star}}{p}}}=\frac{\hat{M}(t_{0}^{p})}{t_{0}^{p^{\star}}}=S\frac{p}{p^{\star}}.$
Then,
$\displaystyle\psi_{\lambda,u_{\varepsilon}}(t_{0})$ $\displaystyle=$
$\displaystyle\frac{1}{p}\hat{M}(t_{0}^{p})-\frac{1}{p^{\star}}t_{0}^{p^{\star}}\|u_{\varepsilon}\|^{p^{\star}}_{p^{\star}}-\lambda\int_{\Omega}F(x,t_{0}u_{\varepsilon})dx$
$\displaystyle=$
$\displaystyle-\frac{1}{p^{\star}}t_{0}^{p^{\star}}O(\varepsilon^{\frac{N}{p}})-\lambda\int_{\Omega}F(x,t_{0}u_{\varepsilon})dx.$
Let us estimate $\displaystyle\int_{\Omega}F(x,t_{0}u_{\varepsilon})dx$ from
below. By assumption ($f_{3}$), one has that $f(x,s)\geq\mu\chi_{I}(s)$ (being
$\chi_{I}$ the characteristic function of the interval $I$), so there exist
$\alpha,\beta>0$ such that
$F(x,s)\geq\tilde{F}(s):=\mu\int_{0}^{s}\chi_{I}(t)dt\geq\beta$ for every
$s\geq\alpha$. Following Corollary 2.1 of [2] and using the positivity and
monotonicity of $F$,
$\displaystyle\int_{\Omega}F(x,t_{0}u_{\varepsilon})dx$
$\displaystyle\geq\int_{|x|\leq r}F(x,t_{0}u_{\varepsilon})dx\geq\int_{|x|\leq
r}F\left(x,\frac{t_{0}}{\|v_{\varepsilon}\|(\varepsilon+|x|^{\frac{p}{p-1}})^{\frac{N-p}{p}}}\right)dx$
$\displaystyle\geq\int_{|x|\leq
r}\tilde{F}\left(\frac{t_{0}}{\|v_{\varepsilon}\|(\varepsilon+|x|^{\frac{p}{p-1}})^{\frac{N-p}{p}}}\right)dx$
$\displaystyle=c_{1}\varepsilon^{\frac{N(p-1)}{p}}\int_{0}^{r\varepsilon^{-\frac{p-1}{p}}}\tilde{F}\left(\frac{t_{0}}{\|v_{\varepsilon}\|}\left(\frac{\varepsilon^{-1}}{1+s^{\frac{p}{p-1}}}\right)^{\frac{N-p}{p}}\right)s^{N-1}ds$
One has that
$\tilde{F}\left(\frac{t_{0}}{\|v_{\varepsilon}\|}\left(\frac{\varepsilon^{-1}}{1+s^{\frac{p}{p-1}}}\right)^{\frac{N-p}{p}}\right)\geq\beta\hbox{
if $s$ is such that}\
\frac{t_{0}}{\|v_{\varepsilon}\|}\left(\frac{\varepsilon^{-1}}{1+s^{\frac{p}{p-1}}}\right)^{\frac{N-p}{p}}\geq\alpha.$
(11)
Notice that the second inequality of (11) is equivalent to
$\frac{t_{0}\varepsilon^{-\frac{(N-p)(p-1)}{p^{2}}}}{(K^{\frac{1}{p}}+O(\varepsilon^{\frac{N-p}{p}}))(1+s^{\frac{p}{p-1}})^{\frac{N-p}{p}}}\geq\alpha.$
Now, fix $c_{2}<\frac{t_{0}}{\alpha}$. If $s\leq
c_{2}\varepsilon^{-\frac{(p-1)^{2}}{p^{2}}}$, for $\varepsilon$ small enough,
the above inequality holds true. This implies that, for a possibly smaller $r$,
$\displaystyle\int_{\Omega}F(x,t_{0}u_{\varepsilon})dx\geq
c_{3}\varepsilon^{\frac{N(p-1)}{p}}\int_{0}^{r\varepsilon^{-\frac{(p-1)^{2}}{p^{2}}}}\beta
s^{N-1}ds=c_{4}\varepsilon^{\frac{N(p-1)}{p^{2}}},$
for some positive constant $c_{4}$. Hence,
$\psi_{\lambda,u_{\varepsilon}}(t_{0})\leq-\frac{1}{p^{\star}}t_{0}^{p^{\star}}O(\varepsilon^{\frac{N}{p}})-\lambda
c_{4}\varepsilon^{\frac{N(p-1)}{p^{2}}}=\varepsilon^{\frac{N(p-1)}{p^{2}}}\left(O(\varepsilon^{\frac{N}{p^{2}}})-\lambda
c_{4}\right)<0,$
for small $\varepsilon>0$. We deduce that
$\lambda_{0}(u_{\varepsilon})<\lambda$ and because of the arbitrariness of
$\lambda,$ $\lambda_{0}^{\star}=0$. ∎
Now we prove a continuity result, which will be useful for proving the
existence of a minimizer of our problem. The proof we present here was
inspired by [14, Theorem 2.1].
###### Lemma 2.4.
Assume conditions ($\beta_{1}$) and ($\beta_{2}$). Then, for all
$\\{u_{k}\\}_{k\geq 1}\subset W_{0}^{1,p}(\Omega)$ such that
$u_{k}\rightharpoonup u\in W_{0}^{1,p}(\Omega)$, and $\\{\lambda_{k}\\}_{k\geq
1}\subset{\mathbb{R}}$ such that
$\lambda_{k}\rightarrow\lambda\in{\mathbb{R}}$,
$\Phi_{\lambda}(u)\leq\liminf_{k\to\infty}\Phi_{\lambda_{k}}(u_{k}).$
###### Proof.
Let $\\{u_{k}\\}_{k\geq 1}\subset W_{0}^{1,p}(\Omega)$ be such that
$u_{k}\rightharpoonup u\in W_{0}^{1,p}(\Omega)$. Set $L:=\liminf_{k\to\infty}\Phi_{\lambda_{k}}(u_{k})$. By the second
Concentration-Compactness Lemma of Lions, there exist an at most countable
index set $J$, a set of points $\\{x_{j}\\}_{j\in J}\subset\overline{\Omega}$
and two families of positive numbers $\\{\eta_{j}\\}_{j\in J}$,
$\\{\nu_{j}\\}_{j\in J}$ such that
$\displaystyle|\nabla u_{k}|^{p}$ $\displaystyle\rightharpoonup
d\eta\geq|\nabla u|^{p}+\sum_{j\in J}\mu_{j}\delta_{x_{j}},$
$\displaystyle|u_{k}|^{p^{*}}$ $\displaystyle\rightharpoonup
d\nu=|u|^{p^{*}}+\sum_{j\in J}\nu_{j}\delta_{x_{j}},$
(weak star convergence in the sense of measures), where $\delta_{x_{j}}$ is
the Dirac mass concentrated at $x_{j}$ and such that
$S^{-\frac{p}{p^{\star}}}\nu_{j}^{\frac{p}{p^{*}}}\leq\mu_{j}\qquad\mbox{for
every $j\in J$}.$
Thus, we deduce
$\displaystyle L$ $\displaystyle=$
$\displaystyle\frac{1}{p}\hat{M}\left(\liminf_{k\to\infty}\int_{\Omega}|\nabla
u_{k}|^{p}dx\right)-\frac{1}{p^{\star}}\left(\liminf_{k\to\infty}\int_{\Omega}|u_{k}|^{p^{\star}}dx\right)$
$\displaystyle-\liminf_{k\to\infty}\lambda_{k}\int_{\Omega}F(x,u_{k}(x))dx$
$\displaystyle\overset{(\beta_{2})}{\geq}$
$\displaystyle\frac{1}{p}\hat{M}\left(\int_{\Omega}|\nabla u|^{p}dx+\sum_{j\in
J}\mu_{j}\right)-\frac{1}{p^{\star}}\left(\int_{\Omega}|u|^{p^{\star}}dx+\sum_{j\in
J}\nu_{j}\right)-\lambda\int_{\Omega}F(x,u(x))dx$
$\displaystyle\overset{(\beta_{2})}{\geq}$
$\displaystyle\Phi_{\lambda}(u)+\frac{1}{p}\sum_{j\in
J}\hat{M}\left(\mu_{j}\right)-\frac{1}{p^{\star}}\sum_{j\in J}\nu_{j}$
$\displaystyle\overset{(\beta_{1})}{\geq}$
$\displaystyle\Phi_{\lambda}(u)+\frac{S}{p^{\star}}\sum_{j\in
J}\mu_{j}^{\frac{p^{\star}}{p}}-\frac{1}{p^{\star}}\sum_{j\in J}\nu_{j}$
$\displaystyle=$ $\displaystyle\Phi_{\lambda}(u).$
∎
###### Corollary 2.2.
Assume that condition ($\beta_{2}$) holds true and the inequality in condition
($\beta_{1}$) is strict. Let $\\{u_{k}\\}_{k\geq 1}\subset W_{0}^{1,p}(\Omega)$ and $\\{\lambda_{k}\\}_{k\geq 1}\subset{\mathbb{R}}$ be sequences such that $u_{k}\rightharpoonup u\in W_{0}^{1,p}(\Omega)$, $\lambda_{k}\to\lambda$, and $\displaystyle{\Phi_{\lambda}(u)=\lim_{k\to\infty}\Phi_{\lambda_{k}}(u_{k})}$.
Then, $u_{k}\rightarrow u\in W_{0}^{1,p}(\Omega)$.
###### Proof.
Arguing as in the proof of Lemma 2.4, if the set $J$ there were nonempty then the inequality obtained would be strict, that is, $L>\Phi_{\lambda}(u)$, which contradicts our assumption. Hence $J$ must be empty, that is,
$\lim_{k\to\infty}\int_{\Omega}|u_{k}|^{p^{\star}}dx=\int_{\Omega}|u|^{p^{\star}}dx$
and the uniform convexity of $L^{p^{\star}}(\Omega)$ implies that $u_{k}\to
u\mbox{ in }L^{p^{\star}}(\Omega).$ By the fact that
$\Phi_{\lambda_{k}}(u_{k})\to\Phi_{\lambda}(u)$, it follows that
$\hat{M}(\|u_{k}\|^{p})\to\hat{M}(\|u\|^{p})$ which ensures our claim because
of the strict monotonicity of $\hat{M}$. ∎
## 3. Existence Results
In this section, we study the existence of global and local minimizers for the
energy functional, as well as Mountain Pass solutions. The existence of
minimizers will be guaranteed by conditions ($\beta_{1}$) and ($\beta_{2}$),
while the existence of a Mountain Pass solution will be achieved under
assumption ($\gamma_{1}$). Throughout the sequel we will always assume
($\rho_{1}$) and ($\rho_{2}$).
### 3.1. Global Minimizers
First we look for global minimizers. Consider the problem
$I_{\lambda}:=\inf\left\\{\Phi_{\lambda}(u)\ :\ u\in
W_{0}^{1,p}(\Omega)\right\\}$ (13)
###### Proposition 3.1.
Under conditions ($\beta_{1}$) and ($\beta_{2}$), the infimum in problem (13) is attained by some $u_{\lambda}\in W_{0}^{1,p}(\Omega)$. If
$\lambda>\lambda_{0}^{\star}$, then $I_{\lambda}<0$ and $u_{\lambda}\neq 0$.
If $\lambda<\lambda_{0}^{\star}$, then $I_{\lambda}=0$ and $u_{\lambda}=0$.
###### Proof.
Condition ($\rho_{2}$), combined with the following inequality, gives us the
coercivity of the energy functional:
$\begin{array}[]{lcl}\Phi_{\lambda}(u)&\geq&\displaystyle{\frac{1}{p}\hat{M}(\|u\|^{p})-\frac{S}{p^{\star}}\|u\|^{p^{\star}}-\lambda\int_{\Omega}F(x,u(x))dx}\\\
&=&\displaystyle{\|u\|^{r}\left[\frac{1}{p}\frac{\hat{M}(\|u\|^{p})}{\left(\|u\|^{p}\right)^{\frac{r}{p}}}-\frac{S}{p^{\star}}\|u\|^{p^{\star}-r}-\lambda\int_{\Omega}\frac{F(x,u(x))}{\|u\|^{r}}dx\right]}.\end{array}$
Conditions ($\beta_{1}$) and ($\beta_{2}$) give us the sequential weak lower semicontinuity of the energy functional, as proven in Lemma 2.4. Therefore, $I_{\lambda}$ is attained. In order to analyse the sign of $I_{\lambda}$, we resort to system (5) and definition (8).
If $0\leq\lambda<\lambda_{0}^{\star}$, then for all $u\in
W_{0}^{1,p}(\Omega)\backslash\\{0\\}$, $\lambda<\lambda_{0}(u)$ and so
$0=\psi_{\lambda_{0}(u),u}(t_{0}(u))\leq\psi_{\lambda_{0}(u),u}(1)<\psi_{\lambda,u}(1)=\Phi_{\lambda}(u)$.
Since $\Phi_{\lambda}(0)=0$, we are done.
If $\lambda>\lambda_{0}^{\star}$, there exists $u\in
W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ such that $\lambda>\lambda_{0}(u)$. And
thus
$0=\psi_{\lambda_{0}(u),u}(t_{0}(u))>\psi_{\lambda,u}(t_{0}(u))=\Phi_{\lambda}(t_{0}(u)u)$.
∎
###### Proposition 3.2.
Assume condition ($\beta_{1}$) with strict inequality. Then, there exists
$u_{\lambda_{0}^{\star}}\in W_{0}^{1,p}(\Omega)\setminus\\{0\\}$ such that
$I_{\lambda_{0}^{\star}}=\Phi_{\lambda_{0}^{\star}}(u_{\lambda_{0}^{\star}})=0$.
###### Proof.
Fix a sequence $\lambda_{k}\downarrow\lambda_{0}^{\star}$. From Proposition
3.1, for each $k$ we can find $u_{k}\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$
such that $I_{\lambda_{k}}=\Phi_{\lambda_{k}}(u_{k})<0$. Since
$\lambda_{k}\downarrow\lambda_{0}^{\star}$ and assumptions ($f_{1}$),($f_{2}$)
hold true, the coercivity of $\Phi_{\lambda_{0}^{\star}}$ tells us that
$\\{u_{k}\\}$ is bounded, and therefore we may assume that
$u_{k}\rightharpoonup u_{\lambda_{0}^{\star}}$. From Lemma 2.4 we obtain
$\Phi_{\lambda_{0}^{\star}}(u_{\lambda_{0}^{\star}})\leq\liminf_{k\to\infty}\Phi_{\lambda_{k}}(u_{k})\leq
0.$
Since for all $u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ one has
$\lambda_{0}^{\star}\leq\lambda_{0}(u)$, there holds
$0=\psi_{\lambda_{0}(u),u}(t_{0}(u))\leq\psi_{\lambda_{0}(u),u}(1)\leq\psi_{\lambda_{0}^{\star},u}(1)=\Phi_{\lambda_{0}^{\star}}(u)$.
Therefore,
$I_{\lambda_{0}^{\star}}=\Phi_{\lambda_{0}^{\star}}(u_{\lambda_{0}^{\star}})=0$.
Let us prove that $u_{\lambda_{0}^{\star}}\neq 0$.
Assume the contrary. Let $L>S\frac{p}{p^{\star}}$ such that
$\inf_{t>0}\frac{\hat{M}(t)}{t^{\frac{p^{\star}}{p}}}\geq L.$ Thus,
$\left(L-S\frac{p}{p^{\star}}\right)\|u_{k}\|^{p^{\star}}\leq{\hat{M}(\|u_{k}\|^{p}})-\frac{p}{p^{\star}}\|u_{k}\|_{p^{\star}}^{p^{\star}}\leq\lambda_{k}p\int_{\Omega}{F(x,u_{k})}dx.$
The right hand side tends to zero by the growth of $F$, and by using that
$u_{k}\to 0$ in $L^{q}(\Omega)$ for $q<p^{\star}$. Hence, $u_{k}\to 0$ in
$W_{0}^{1,p}(\Omega)$. Dividing the previous inequality by $\|u_{k}\|^{p}$, we
get
$\frac{\hat{M}(\|u_{k}\|^{p})}{\|u_{k}\|^{p}}-S\frac{p}{p^{\star}}\|u_{k}\|^{p^{\star}-p}\leq\frac{\lambda_{k}}{\|u_{k}\|^{p}}p\int_{\Omega}{F(x,u_{k})}dx.$
And since the right hand side is still tending to zero, we get the desired
contradiction, by ($\rho_{1}$). ∎
Proof of Theorem 1.2. The existence of a global minimizer for $\Phi_{\lambda}$
follows by the coercivity of the energy functional and the lower
semicontinuity property given by Lemma 2.4. The rest of the proof is a
consequence of Propositions 3.1 and 3.2.∎
### 3.2. Local Minimizers
We have already proved that the energy functional possesses a global minimizer
$u_{\lambda}\in W_{0}^{1,p}(\Omega)$, regardless of $\lambda$. However, when
$0\leq\lambda<\lambda_{0}^{\star}$, Proposition 3.1 tells us that $u_{\lambda}=0$, while Proposition 3.2 states the existence of a non-trivial solution for $\lambda=\lambda_{0}^{\star}$. Therefore, for
$0\leq\lambda<\lambda_{0}^{\star}$ we tackle a different minimization problem.
Let $\delta>0$ and define
$I_{\lambda}^{\delta}:=\inf_{u\in K_{\delta}}\Phi_{\lambda}(u),$ (14)
where
$K_{\delta}:=\left\\{u\in W_{0}^{1,p}(\Omega)\ |\ d(u,K)\leq\delta\right\\}$
and
$K:=\left\\{u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}\ |\
\Phi_{\lambda_{0}^{\star}}(u)=0\right\\}.$
###### Remark 3.1.
Notice that by Proposition 3.2 there holds $K\neq\emptyset$, as long as the
inequality in condition ($\beta_{1}$) is strict.
We are going to prove that problem (14) has a solution $u_{\lambda}^{\delta}$ which, for $0\leq\lambda<\lambda_{0}^{\star}$ close enough to $\lambda_{0}^{\star}$, does not belong to $\partial K_{\delta}$, and which is non-trivial for $\delta$ small enough.
The proof of this fact is based on the following lemmas.
###### Lemma 3.1.
Assume conditions ($f_{1}$), ($f_{2}$), and ($\beta_{1}$) with strict
inequality. Then, there exists $\bar{\delta}$ such that for
$0<\delta<\bar{\delta}$, $0\notin K_{\delta}$.
###### Proof.
Assume by contradiction that there exists $\delta_{n}\to 0$ with
$d(0,K)\leq\delta_{n}$ for all $n\geq 1$. Therefore, $d(0,K)=0$, so
$0\in\bar{K}$. This implies the existence of $\\{u_{n}\\}_{n\geq 1}\subset K$,
$u_{n}\rightarrow 0$ in $W_{0}^{1,p}(\Omega)$. Thus,
$\begin{array}[]{c}\displaystyle{0=\|u_{n}\|^{p}\left[\frac{1}{p}\frac{\hat{M}(\|u_{n}\|^{p})}{\|u_{n}\|^{p}}-\frac{1}{p^{\star}}\frac{\|u_{n}\|_{p^{\star}}^{p^{\star}}}{\|u_{n}\|^{p}}-\lambda_{0}^{\star}\int_{\Omega}\frac{F(x,u_{n}(x))}{\|u_{n}\|^{p}}dx\right]}\
\geq\\\
\displaystyle{\|u_{n}\|^{p}\left[\frac{1}{p}\frac{\hat{M}(\|u_{n}\|^{p})}{\|u_{n}\|^{p}}-\frac{1}{p^{\star}}S\|u_{n}\|^{p^{\star}-p}-\lambda_{0}^{\star}\int_{\Omega}\frac{F(x,u_{n}(x))}{\|u_{n}\|^{p}}dx\right]}\end{array}$
and the latter is positive, as follows from ($\rho_{1}$), ($f_{1}$), and ($f_{2}$), leading to a contradiction. ∎
###### Lemma 3.2.
Assume conditions ($f_{1}$), ($f_{2}$), ($\beta_{2}$) and ($\beta_{1}$) with
strict inequality. Then, there exists $\bar{\delta}$ such that for
$0<\delta<\bar{\delta}$,
$\inf_{u\in\partial K_{\delta}}\Phi_{\lambda_{0}^{\star}}(u)>0.$
###### Proof.
Suppose by contradiction that for $\delta$ small enough there exists
$\\{u_{k}\\}_{k\geq 1}\subset\partial K_{\delta}$ such that
$\lim_{k\to\infty}\Phi_{\lambda_{0}^{\star}}(u_{k})=0$. Then, by the
coercivity of the energy functional, $u_{k}\rightharpoonup w$ in
$W_{0}^{1,p}(\Omega)$ up to a subsequence. Also, by the weak lower semi-continuity of $\Phi_{\lambda_{0}^{\star}}$ it follows that $\Phi_{\lambda_{0}^{\star}}(w)=0$, which means that $w\in K\cup\\{0\\}$ and that $u_{k}\rightarrow w$ strongly in $W_{0}^{1,p}(\Omega)$ by Corollary 2.2; in particular, $w\in\partial K_{\delta}$. Subsequently, by Lemma 3.1, $w\neq 0$, and we reach the desired contradiction: $w\in K$ along with $d(w,K)=\delta>0$. ∎
###### Lemma 3.3.
Assume conditions ($f_{1}$), ($f_{2}$), ($\beta_{2}$), and ($\beta_{1}$) with
strict inequality. Then, the set $K$ is compact in $W_{0}^{1,p}(\Omega)$ with
the strong topology.
###### Proof.
Let $\\{v_{n}\\}_{n\geq 1}\subset K$. The definition of $K$ and the coercivity of the energy functional imply that the sequence $\\{v_{n}\\}_{n\geq 1}$ is bounded in $W_{0}^{1,p}(\Omega)$; consequently, $v_{n}\rightharpoonup v$ in $W_{0}^{1,p}(\Omega)$ up to a subsequence. By the sequential weak lower semi-continuity of the energy functional we obtain that $0\leq\Phi_{\lambda_{0}^{\star}}(v)\leq\liminf_{n\to\infty}\Phi_{\lambda_{0}^{\star}}(v_{n})=0$, which implies that $v\in K\cup\\{0\\}$. Therefore, $v_{n}\rightarrow v$ in $W_{0}^{1,p}(\Omega)$ by Corollary 2.2. Since $K\subset K_{\delta}$ for all $\delta>0$, Lemma 3.1 gives $v\neq 0$, hence $v\in K$, and the proof is complete. ∎
###### Lemma 3.4.
Assume conditions ($f_{1}$), ($f_{2}$), ($\beta_{2}$) and condition
($\beta_{1}$) with strict inequality. Then, the set $K_{\delta}$ is
sequentially weakly closed in $W_{0}^{1,p}(\Omega)$.
###### Proof.
Let $\\{u_{n}\\}_{n\geq 1}\subset K_{\delta}$ be such that
$u_{n}\rightharpoonup u_{0}$ in $W_{0}^{1,p}(\Omega)$. Since
$d(u_{n},K)\leq\delta$ for all $n\geq 1$, we will obtain that
$d(u_{0},K)\leq\delta$ if we show that $d(u_{0},K)\leq\liminf_{n}d(u_{n},K)$.
To accomplish so, assume there exists a positive constant $c$ such that
$d(u_{0},K)>c>\liminf_{n\to\infty}d(u_{n},K).$
By Lemma (3.3), $K$ is a compact subset of
$\left(W_{0}^{1,p}(\Omega),\|\cdot\|\right)$. Therefore, for each $n$, there
exists $v_{n}\in K$ such that $d(u_{n},K)=\|u_{n}-v_{n}\|$. Up to a
subsequence, $v_{n}\rightarrow v_{0}\in K$. The following chain of
inequalities leads us to a contradiction:
$\begin{array}[]{rrl}\|u_{0}-v_{0}\|&\leq&\liminf_{n\to\infty}\|u_{n}-v_{0}\|\\\ &\leq&\liminf_{n\to\infty}(\|u_{n}-v_{n}\|+\|v_{n}-v_{0}\|)\\\ &=&\liminf_{n\to\infty}(d(u_{n},K)+\|v_{n}-v_{0}\|)\\\ &<&c+\lim_{n\to\infty}\|v_{n}-v_{0}\|=c\\\ &<&d(u_{0},K)\leq\|u_{0}-v_{0}\|.\end{array}$
∎
Proof of Theorem 1.3. Let $\delta>0$ be small enough so that $0\notin K_{\delta}$ (see Lemma 3.1). By (14), for $\lambda\leq\lambda_{0}^{\star}$ there exists a sequence $\\{u_{n}\\}_{n\geq 1}\subset K_{\delta}$ such that
$\lim_{n\to\infty}\Phi_{\lambda}(u_{n})=I_{\lambda}^{\delta}.$
Since the functional $\Phi_{\lambda}$ is coercive, $\\{u_{n}\\}_{n\geq 1}$ must be bounded in $W_{0}^{1,p}(\Omega)$; therefore, up to a subsequence,
$u_{n}\rightharpoonup u_{\lambda}^{\delta}\in W_{0}^{1,p}(\Omega).$
By Lemma 3.4, $u_{\lambda}^{\delta}\in K_{\delta}$ and so
$I_{\lambda}^{\delta}:=\inf_{u\in
K_{\delta}}\Phi_{\lambda}(u)=\Phi_{\lambda}(u_{\lambda}^{\delta}).$
Let us show that if we take the parameter $\lambda\leq\lambda_{0}^{\star}$
close enough to $\lambda_{0}^{\star}$, $u_{\lambda}^{\delta}$ does not belong
to $\partial K_{\delta}$. Otherwise, there exists a sequence of positive
numbers $\\{\lambda_{k}\\}_{k\geq 1}$, with
$\lambda_{k}\leq\lambda_{0}^{\star}$, and
$\lim_{k\to\infty}\lambda_{k}=\lambda_{0}^{\star}$, such that
$u_{\lambda_{k}}^{\delta}\in\partial K_{\delta}$ for each $k$. Since $\partial
K_{\delta}\subset K_{\delta}$, for all $k\geq 1$,
$I_{\lambda_{k}}^{\delta}=\inf_{u\in\partial
K_{\delta}}\Phi_{\lambda_{k}}(u).$
Fix any $u_{0}\in K$. From the considerations above, we obtain that for all
$k\geq 1$,
$\inf_{u\in\partial
K_{\delta}}\Phi_{\lambda_{0}^{\star}}(u)\leq\inf_{u\in\partial
K_{\delta}}\Phi_{\lambda_{k}}(u)=\inf_{u\in
K_{\delta}}\Phi_{\lambda_{k}}(u)\leq\inf_{u\in
K}\Phi_{\lambda_{k}}(u)\leq\Phi_{\lambda_{k}}(u_{0}).$
Since
$\Phi_{\lambda_{k}}(u_{0})\rightarrow\Phi_{\lambda_{0}^{\star}}(u_{0})=0$, and
$\Phi_{\lambda_{0}^{\star}}\geq 0$ we obtain that
$\inf_{u\in\partial K_{\delta}}\Phi_{\lambda_{0}^{\star}}(u)=0,$
which contradicts Lemma 3.2. Thus, $u_{\lambda}^{\delta}\notin\partial K_{\delta}$ and so it is a local minimizer of $\Phi_{\lambda}$. Also, by Lemma 3.1, $u_{\lambda}^{\delta}\neq 0$; and since $\Phi_{\lambda}>0$ on $W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ for $\lambda<\lambda_{0}^{\star}$ (Proposition 3.1), there holds $I_{\lambda}^{\delta}>0$. ∎
### 3.3. Mountain Pass Solutions
In this section we prove the Palais-Smale property for the energy functional
and our existence result will be a consequence of the Mountain Pass theorem.
First, we establish the Palais-Smale condition. We follow the proof of [5, Theorem 1.3].
###### Lemma 3.5.
Assume condition ($\gamma_{1}$). Then, for any $\lambda\geq 0$ the energy
functional $\Phi_{\lambda}$ satisfies the Palais-Smale property.
###### Proof.
Let $\\{u_{k}\\}_{k\geq 1}$ be a Palais-Smale sequence at a level $c$, i.e. a
sequence satisfying the conditions
$\left\\{\begin{array}[]{lll}\displaystyle{\lim_{k\to\infty}\Phi_{\lambda}(u_{k})}&=&c\\\
\displaystyle{\lim_{k\to\infty}\Phi^{{}^{\prime}}_{\lambda}(u_{k})}&=&0.\end{array}\right.$
Since
$\displaystyle{\inf_{t>0}\frac{M(t)}{t^{\frac{p^{\star}}{p}-1}}>S},$
we may fix $L>S$ such that
$\begin{array}[]{lr}M(t)>Lt^{\frac{p^{\star}}{p}-1}&\forall t\geq
0.\end{array}$ (15)
Then,
$\begin{array}[]{lr}\hat{M}(t)\geq
L\frac{p}{p^{\star}}t^{\frac{p^{\star}}{p}}&\forall t\geq 0.\end{array}$
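This follows at once by integrating (15):
$\hat{M}(t)=\int_{0}^{t}M(s)\,ds\geq L\int_{0}^{t}s^{\frac{p^{\star}}{p}-1}\,ds=L\frac{p}{p^{\star}}t^{\frac{p^{\star}}{p}}.$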
In order to prove the coercivity of the energy functional, we may argue as in
the proof of Proposition (3.1). The sequence $\\{u_{k}\\}_{k\geq 1}$ is
bounded in $W_{0}^{1,p}(\Omega)$. And, up to subsequences, the following holds
true.
$\left\\{\begin{array}[]{lccr}u_{k}\rightharpoonup u&in&W_{0}^{1,p}(\Omega)&\\\ u_{k}\rightarrow u&in&L^{q}(\Omega),&q\in[1,p^{\star})\\\ u_{k}\rightarrow u&a.e.\ on&\Omega.&\end{array}\right.$
By the Concentration-Compactness Lemma of Lions there exist an at most
countable index set $J$, a set of points $\\{x_{j}\\}_{j\in
J}\subset\overline{\Omega}$ and two families of positive numbers
$\\{\mu_{j}\\}_{j\in J}$, $\\{\nu_{j}\\}_{j\in J}$ such that
$\displaystyle|\nabla u_{k}|^{p}$ $\displaystyle\rightharpoonup
d\eta\geq|\nabla u|^{p}+\sum_{j\in J}\mu_{j}\delta_{x_{j}},$
$\displaystyle|u_{k}|^{p^{*}}$ $\displaystyle\rightharpoonup
d\nu=|u|^{p^{*}}+\sum_{j\in J}\nu_{j}\delta_{x_{j}},$
(weak star convergence in the sense of measures), where $\delta_{x_{j}}$ is
the Dirac mass concentrated at $x_{j}$ and such that
$S^{-\frac{p}{p^{\star}}}\nu_{j}^{\frac{p}{p^{*}}}\leq\mu_{j}\qquad\mbox{for
every $j\in J$}.$
We will prove that $J$ is empty. Assume by contradiction that there exists an index $j_{0}\in J$ and, for $\epsilon>0$, define the following cut-off function on $\Omega$:
$\phi_{\epsilon}(x)=\left\\{\begin{array}[]{crc}1,&x\in&B(x_{j_{0}},\epsilon)\\\ 0,&x\in&\Omega\backslash B(x_{j_{0}},2\epsilon)\end{array}\right.$
such that $|\nabla\phi_{\epsilon}(x)|\leq\frac{2}{\epsilon}.$ For each
$\epsilon>0$, $\\{u_{k}\phi_{\epsilon}\\}_{k\geq 1}$ is bounded in
$W_{0}^{1,p}(\Omega)$. Therefore,
$\lim_{k\to+\infty}\Phi_{\lambda}^{{}^{\prime}}(u_{k})(u_{k}\phi_{\epsilon})=0.$
And thus
$\begin{array}[]{lll}o(1)&=&M(\|u_{k}\|^{p})\displaystyle{\int_{\Omega}|\nabla u_{k}|^{p-2}\nabla u_{k}\cdot\nabla(u_{k}\phi_{\epsilon})dx-\int_{\Omega}|u_{k}|^{p^{\star}}\phi_{\epsilon}dx}-\lambda\displaystyle{\int_{\Omega}f(x,u_{k})u_{k}\phi_{\epsilon}dx}\\\ &=&M(\|u_{k}\|^{p})\displaystyle{\left[\int_{\Omega}|\nabla u_{k}|^{p}\phi_{\epsilon}+u_{k}|\nabla u_{k}|^{p-2}\nabla u_{k}\cdot\nabla\phi_{\epsilon}dx\right]-\int_{\Omega}|u_{k}|^{p^{\star}}\phi_{\epsilon}dx}-\lambda\displaystyle{\int_{\Omega}f(x,u_{k})u_{k}\phi_{\epsilon}dx}\end{array}$
(16)
On the other hand, by applying the Lebesgue Dominated Convergence Theorem, we
may prove that
$\displaystyle{\lim_{\epsilon\to 0}\lim_{k\to\infty}\int_{\Omega}u_{k}|\nabla
u_{k}|^{p-2}\nabla u_{k}\nabla\phi_{\epsilon}dx}=0,$
$\begin{array}[]{lcl}\displaystyle{\lim_{\epsilon\to 0}\lim_{k\to\infty}\int_{\Omega}|u_{k}|^{p^{\star}}\phi_{\epsilon}dx}=\displaystyle{\lim_{\epsilon\to 0}\left(\int_{B(x_{j_{0}},2\epsilon)}|u|^{p^{\star}}\phi_{\epsilon}dx+\nu_{j_{0}}\right)}=\nu_{j_{0}},\end{array}$
and, by condition ($f_{1}$), that
$\lim_{\epsilon\to
0}\lim_{k\to+\infty}\displaystyle{\int_{\Omega}f(x,u_{k})u_{k}\phi_{\epsilon}dx}=0.$
Since $M(\|u_{k}\|^{p})$ is bounded in ${\mathbb{R}}$, we deduce that
$\displaystyle{\lim_{\epsilon\to
0}\lim_{k\to\infty}M(\|u_{k}\|^{p})\int_{\Omega}u_{k}|\nabla
u_{k}|^{p-2}\nabla u_{k}\nabla\phi_{\epsilon}dx}=0.$
Furthermore, we have that
$\begin{array}[]{lcl}\displaystyle{\lim_{k\to\infty}M(\|u_{k}\|^{p})\int_{\Omega}|\nabla u_{k}|^{p}\phi_{\epsilon}dx}&\geq&\displaystyle{\lim_{k\to\infty}\left[M\left(\int_{\Omega}|\nabla u_{k}|^{p}dx\right)\int_{B(x_{j_{0}},2\epsilon)}|\nabla u_{k}|^{p}\phi_{\epsilon}dx\right]}\\\ &\geq&\displaystyle{\lim_{k\to\infty}\left[L\left(\int_{B(x_{j_{0}},2\epsilon)}|\nabla u_{k}|^{p}\phi_{\epsilon}dx\right)^{\frac{p^{\star}}{p}-1}\int_{B(x_{j_{0}},2\epsilon)}|\nabla u_{k}|^{p}\phi_{\epsilon}dx\right]}\\\ &\geq&L\left[\displaystyle{\int_{B(x_{j_{0}},2\epsilon)}|\nabla u|^{p}\phi_{\epsilon}dx+\mu_{j_{0}}}\right]^{\frac{p^{\star}}{p}}.\end{array}$
The above outcomes, combined together (letting $\epsilon\to 0$ in (16)), give us that
$0\geq L\mu_{j_{0}}^{\frac{p^{\star}}{p}}-\nu_{j_{0}}\geq\left(L-S\right)\mu_{j_{0}}^{\frac{p^{\star}}{p}}\geq 0,$
which means that $\mu_{j_{0}}=0$ and, subsequently that $\nu_{j_{0}}=0$, a
contradiction. Thus, $J=\emptyset$ and so
$\displaystyle{\lim_{k\to\infty}\int_{\Omega}|u_{k}|^{p^{\star}}dx=\int_{\Omega}|u|^{p^{\star}}dx},$
(17)
which, together with the weak convergence, implies that $\\{u_{k}\\}$ converges to $u$ strongly in $L^{p^{\star}}(\Omega)$.
Let us finally prove that $u_{k}\to u$ strongly in $W_{0}^{1,p}(\Omega)$. We
already know by hypothesis that
$\begin{array}[]{lcl}\displaystyle
0={\lim_{k\to\infty}\Phi^{{}^{\prime}}_{\lambda}(u_{k})(u_{k}-u)}&=&\displaystyle{\lim_{k\to\infty}\left[M(\|u_{k}\|^{p})\displaystyle{\int_{\Omega}|\nabla
u_{k}|^{p-2}\nabla
u_{k}\nabla(u_{k}-u)}dx-\int_{\Omega}|u_{k}|^{p^{\star}-2}u_{k}(u_{k}-u)dx\right.}\\\
&&\left.-\lambda\displaystyle{\int_{\Omega}f(x,u_{k})(u_{k}-u)dx}\right],\end{array}$
and, by using condition ($f_{1}$) and (17) we obtain that
$\displaystyle{\lim_{k\to+\infty}\int_{\Omega}f(x,u_{k})(u_{k}-u)dx=0},$
$\displaystyle{\lim_{k\to\infty}\int_{\Omega}|u_{k}|^{p^{\star}-2}u_{k}(u_{k}-u)dx}=0.$
Thus,
$\displaystyle{\lim_{k\to\infty}M(\|u_{k}\|^{p})\int_{\Omega}|\nabla
u_{k}|^{p-2}\nabla u_{k}\nabla(u_{k}-u)dx}=0.$
If $\displaystyle{\lim_{k\to\infty}M(\|u_{k}\|^{p})}=0$, then from (15) one
has that $\displaystyle{\lim_{k\to\infty}\|u_{k}\|=0}$, which means that
$u_{k}\rightarrow 0$ strongly in $W_{0}^{1,p}(\Omega)$. Otherwise,
$\displaystyle{\limsup_{k\to\infty}M(\|u_{k}\|^{p})}>0$, which implies (along a subsequence)
$\displaystyle{\lim_{k\to\infty}\int_{\Omega}|\nabla u_{k}|^{p-2}\nabla
u_{k}\nabla(u_{k}-u)dx=0}.$
Since $\\{u_{k}\\}_{k\geq 1}$ converges weakly to $u$, we know that
$\displaystyle{\lim_{k\to\infty}\int_{\Omega}|\nabla u|^{p-2}\nabla
u\nabla(u_{k}-u)dx=0}.$
Then, we deduce that
$\displaystyle{\lim_{k\to\infty}\int_{\Omega}\left(|\nabla u_{k}|^{p-2}\nabla
u_{k}-|\nabla u|^{p-2}\nabla u\right)\nabla(u_{k}-u)dx=0},$
and as a consequence, we obtain that $u_{k}\rightarrow u$ in
$W_{0}^{1,p}(\Omega)$. ∎
Let us point out now that the energy functional satisfies the mountain pass geometry. Notice that when $\lambda\geq\lambda_{0}^{\star}$ this is obvious since $0$ is a strict local minimizer (see Lemma 3.6) and $u_{\lambda}$ is a global minimizer with energy level less than or equal to zero.
Thus, the interesting case is when
$\lambda_{0}^{\star}-\epsilon<\lambda<\lambda_{0}^{\star}$.
###### Lemma 3.6.
Under condition ($\gamma_{1}$) the following statements hold true.
* (i)
For $R>0$ small enough there exists $\sigma=\sigma(R)>0$ such that
$\Phi_{\lambda}(u)\geq\sigma$ for all $u\in W_{0}^{1,p}(\Omega)$ with
$\|u\|=R$;
* (ii)
For $0<R<\|u_{\lambda_{0}^{*}}\|$ in item $(i)$, there exists $\epsilon>0$
small enough such that $\Phi_{\lambda}(u_{\lambda_{0}^{*}})<\sigma$ for all
$\lambda>\lambda_{0}^{\star}-\epsilon$.
###### Proof.
$(i)$ By ($\rho_{1}$) there exist $C>0$ and $\delta>0$ such that if
$\|u\|<\delta$ then
$\hat{M}(\|u\|^{p})>C\|u\|^{p}.$
Also, from ($f_{2}$) we deduce that for fixed $\epsilon>0$ we may choose
$\delta>0$ in such a way that
$\displaystyle{\int_{\Omega}F(x,u)dx}\leq\epsilon\|u\|^{p}$ for
$\|u\|<\delta$.
Consequently, the following inequality holds true:
$\Phi_{\lambda}(u)\geq\left(\frac{C}{p}-\lambda\epsilon\right)\|u\|^{p}-\frac{1}{p^{\star}}\|u\|_{p^{\star}}^{p^{\star}}\geq\|u\|^{p}\left[\left(\frac{C}{p}-\lambda\epsilon\right)-\frac{S}{p^{\star}}\|u\|^{p^{\star}-p}\right].$
Therefore, if we take $\epsilon>0$ such that $\frac{C}{p}-\lambda\epsilon>0$,
and take $R<\delta=\delta(\epsilon)$ small enough so that, for all $u\in
W_{0}^{1,p}(\Omega)$ such that $\|u\|=R$, there holds
$\frac{S}{p^{\star}}\|u\|^{p^{\star}-p}<\frac{C}{p}-\lambda\epsilon$, we are led to the desired conclusion.
$(ii)$ From Proposition 3.2, $\Phi_{\lambda_{0}^{\star}}(u_{\lambda_{0}^{\star}})=0$. Therefore, the
monotonicity of the function
$\lambda\mapsto\Phi_{\lambda}(u_{\lambda_{0}^{\star}})$ ensures that
$\Phi_{\lambda}(u_{\lambda_{0}^{\star}})\leq 0$ for all
$\lambda\geq\lambda_{0}^{\star}$, and that
$0<\Phi_{\lambda}(u_{\lambda_{0}^{\star}})<\sigma$ for
$\lambda_{0}^{\star}-\epsilon<\lambda<\lambda_{0}^{\star}$ with $\epsilon>0$
small enough. ∎
Proof of Theorem 1.4. Define
$c_{\lambda}=\inf_{g\in\Gamma}\ \max_{0\leq t\leq 1}\Phi_{\lambda}(g(t)),$
where
$\Gamma:=\left\\{g\in C\left([0,1];W_{0}^{1,p}(\Omega)\right)\ |\
g(0)=0,g(1)=u_{\lambda_{0}^{\star}}\right\\}.$
By Lemmas 3.5, 3.6 and the Mountain Pass Theorem it follows that for each
$\lambda>\lambda_{0}^{\star}-\epsilon$, the set
$K_{c_{\lambda}}=\left\\{u\in W_{0}^{1,p}(\Omega)\ |\
\Phi_{\lambda}(u)=c_{\lambda},\ \Phi^{{}^{\prime}}_{\lambda}(u)=0\right\\}$
is not empty. ∎
## 4. Non-Existence Results
In this section we establish a non-existence result under condition ($\gamma_{1}$), which is a stronger hypothesis than ($\beta_{1}$). We also
assume that $M$ is of class $C^{1}({\mathbb{R}})$, and that $f(x,\cdot)\in
C^{1}({\mathbb{R}})$ for all $x\in\Omega$. We recall that hypotheses
($\rho_{1}$) and ($\rho_{2}$) hold true.
For each $u\in W_{0}^{1,p}(\Omega)\setminus\\{0\\}$ consider the following
system:
$\left\\{\begin{array}[]{l}\psi^{{}^{\prime}}_{\lambda,u}(t)=0\\\
\psi^{{}^{\prime\prime}}_{\lambda,u}(t)=0\\\
\psi^{{}^{\prime}}_{\lambda,u}(t)=\inf_{s>0}\psi^{{}^{\prime}}_{\lambda,u}(s).\end{array}\right.$
(18)
###### Lemma 4.1.
Assume conditions ($f_{1}$), ($f_{2}$) and ($\gamma_{1}$). Then, for each
$u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ system (18) possesses a solution
$(\lambda_{1}(u),t_{1}(u))$ which is unique with respect to $\lambda$.
###### Proof.
The proof is similar to that of Lemma 2.2. ∎
###### Lemma 4.2.
Assume conditions ($f_{1}$), ($f_{2}$) and ($\gamma_{1}$). Then, for all $u\in
W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ there holds
$\lambda_{1}(u)<\lambda_{0}(u)$.
###### Proof.
Let us perform a proof by contradiction. Suppose there exists a $u\in
W_{0}^{1,p}(\Omega)\backslash\\{0\\}$ such that
$\lambda_{0}(u)\leq\lambda_{1}(u)$. Then,
$\psi^{{}^{\prime}}_{\lambda_{0}(u),u}(t)\geq\psi^{{}^{\prime}}_{\lambda_{1}(u),u}(t)\geq
0$ for all $t\geq 0$. Therefore, $\psi_{\lambda_{0}(u),u}$ is non-decreasing
over $[0,+\infty)$. However, by Lemma 2.1 and the definition of $\lambda_{0}(u)$, we know that $\psi_{\lambda_{0}(u),u}(t)>0$ for $t>0$ small enough and that $\psi_{\lambda_{0}(u),u}(t_{0})=0$ for some $t_{0}\in(0,+\infty)$, which is incompatible with monotonicity. The proof is complete.
∎
Let us define the extremal parameter
$\lambda^{\star}_{1}:=\inf_{u\in
W_{0}^{1,p}(\Omega)\backslash\\{0\\}}\lambda_{1}(u).$
Notice that by Lemma 4.2, $\lambda_{1}^{\star}\leq\lambda_{0}^{\star}$.
###### Lemma 4.3.
Assume conditions ($f_{1}$), ($f_{2}$) and ($\beta_{1}$) with strict
inequality. Then, $\lambda_{0}^{\star}=\lambda_{0}(u_{\lambda_{0}^{\star}})$,
where $u_{\lambda_{0}^{\star}}$ is as in Proposition (3.2).
###### Proof.
Let $u:=u_{\lambda_{0}^{\star}}$. From the definition of $\lambda_{0}(u)$ and
$\lambda_{0}^{\star}$,
$0=I_{\lambda_{0}^{\star}}=\Phi_{\lambda_{0}^{\star}}(u)\geq\Phi_{\lambda_{0}(u)}(u)=\psi_{\lambda_{0}(u),u}(1)\geq\psi_{\lambda_{0}(u),u}(t_{0}(u))=0.$
Therefore, $\Phi_{\lambda_{0}^{\star}}(u)=\Phi_{\lambda_{0}(u)}(u)$, that is,
$\begin{array}[]{c}\displaystyle{\frac{1}{p}\hat{M}(\|u\|^{p})-\frac{1}{p^{\star}}\|u\|^{p^{\star}}_{p^{\star}}-\lambda_{0}^{\star}\int_{\Omega}F(x,u(x))dx=\frac{1}{p}\hat{M}(\|u\|^{p})-\frac{1}{p^{\star}}\|u\|^{p^{\star}}_{p^{\star}}-\lambda_{0}(u)\int_{\Omega}F(x,u(x))dx}.\end{array}$
Since $\int_{\Omega}F(x,u(x))dx>0$ by ($f_{3}$) (recall that $u\neq 0$), this forces $\lambda_{0}^{\star}=\lambda_{0}(u_{\lambda_{0}^{\star}})$. ∎
###### Lemma 4.4.
Assume conditions ($f_{1}$), ($f_{2}$) and ($\gamma_{1}$). Then,
$\lambda_{1}^{\star}<\lambda_{0}^{\star}$.
###### Proof.
From Lemmas 4.2 and 4.3 we obtain the desired inequality:
$\lambda_{1}^{\star}\leq\lambda_{1}(u_{\lambda_{0}^{\star}})<\lambda_{0}(u_{\lambda_{0}^{\star}})=\lambda_{0}^{\star}.$
∎
We are ready to prove our non existence result.
Proof of Theorem 1.1. Assume $\lambda<\lambda_{1}^{\star}$. Then, for all $u\in W_{0}^{1,p}(\Omega)\backslash\\{0\\}$, $\lambda<\lambda_{1}(u)$. Therefore,
$\psi^{{}^{\prime}}_{\lambda,u}(t)>\psi^{{}^{\prime}}_{\lambda_{1}(u),u}(t)\geq\psi^{{}^{\prime}}_{\lambda_{1}(u),u}(t_{1}(u))=0$
for all positive $t$, and therefore the energy functional has no non-zero
critical points. ∎
Acknowledgments G. N. Cunha has been supported by FAPEG, Fundação de Amparo à
Pesquisa do Estado de Goiás. F. Faraci has been supported by Università degli
Studi di Catania, PIACERI 2020-2022, Linea di intervento 2, Progetto “MAFANE”
and by the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro
Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). K.
Silva has been supported by CNPq-Grant 308501/2021-7.
## References
* [1] C. O. Alves, F. J. Corrêa, G. Figueiredo, On a class of nonlocal elliptic problems with critical growth, Differ. Equ. Appl, 2 (2010) 409–417.
* [2] H. Brézis, L. Nirenberg, Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents, Comm. Pure Appl. Math. 36 (1983) 437–477.
* [3] F.J. Corrêa, G. Figueiredo, On an elliptic equation of p-Kirchhoff type via variational methods, Bull. Austral. Math. Soc., 74 (2006) 263–277.
* [4] H. Fan, Multiple positive solutions for a class of Kirchhoff type problems involving critical Sobolev exponents, J. Math. Anal. Appl. 431 (2015) 150–168.
* [5] F. Faraci, Cs. Farkas, On a critical Kirchhoff-type problem, Nonlinear Anal. 192 (2020) 111679.
* [6] F. Faraci, K. Silva, On the Brezis-Nirenberg problem for a Kirchhoff type equation in high dimension, Calc. Var. Partial Differential Equations, 60 (2021) 33 pp.
* [7] G. Figueiredo, Existence of a positive solution for a Kirchhoff problem type with critical growth via truncation argument, J. Math. Anal. Appl. 401 (2013) 706–713.
* [8] G. Figueiredo, J. Santos Junior, Multiplicity of solutions for a Kirchhoff equation with subcritical or critical growth, Differential Integral Equations 25 (2012) 853–868.
* [9] M. Guedda, L. Véron, Quasilinear elliptic equations involving critical Sobolev exponents, Nonlinear Anal. 13 (1989) 879–902.
* [10] E. Hebey, Compactness and the Palais–Smale property for critical Kirchhoff equations in closed manifolds, Pacific Journal of Mathematics 280 (2015) 913–924.
* [11] E. Hebey, Multiplicity of solutions for critical Kirchhoff type equations, Comm. Partial Differential Equations 41 (2016) 913–924.
* [12] Y. Il’yasov, On extreme values of Nehari manifold method via nonlinear Rayleigh’s quotient, Topol. Methods Nonlinear Anal. 49 (2017) 683–714.
* [13] G. Kirchhoff, Vorlesungen über Mechanik, B. G. Teubner, 1 (1897).
* [14] E. Montefusco, Lower Semicontinuity of Functionals via the Concentration-Compactness Principle, J. Math. Anal. Appl. 263 (2001) 264-276.
* [15] D. Naimen, Positive solutions of Kirchhoff type elliptic equations involving a critical Sobolev exponent, NoDEA Nonlinear Differential Equations Appl 21 (2014) 885–914.
* [16] D. Naimen, M. Shibata, Two positive solutions for the Kirchhoff type elliptic problem with critical nonlinearity in high dimension, Nonlinear Anal. 186 (2019) 187–208.
* [17] S. I. Pokhozhaev, The fibration method for solving nonlinear boundary value problems, Trudy Mat. Inst. Steklov 192 (1990) 146–163.
* [18] P. Pucci, V. Rădulescu, Progress in nonlinear Kirchhoff problems, Nonlinear Anal. 186 (2019) 1–5.
* [19] P. Villaggio, Mathematical models for elastic structures, Cambridge University Press, Cambridge (1997).
* [20] X. Yao, C. Mu, Multiplicity of solutions for Kirchhoff type equations involving critical Sobolev exponents in high dimension, Math. Methods Appl. Sci. 39 (2016) 3722–3734.
# Irrationality of the general smooth quartic $3$-fold using intermediate
Jacobians
Benson Farb Supported in part by National Science Foundation Grant No.
DMS-181772 and the Eckhardt Faculty Fund.
###### Abstract
We prove that the intermediate Jacobian of the Klein quartic $3$-fold $X$ is
not isomorphic, as a principally polarized abelian variety, to a product of
Jacobians of curves. As corollaries we deduce (using a criterion of Clemens-
Griffiths) that $X$, as well as the general smooth quartic $3$-fold, is
irrational. These corollaries were known: Iskovskih-Manin [IM] proved that
every smooth quartic $3$-fold is irrational. However, the method of proof here
is different from that of [IM] and is significantly simpler.
## 1 Introduction
A smooth quartic $3$-fold is a smooth, degree $4$ hypersurface $Y$ in complex
projective space $\mathbb{P}^{4}$. For such a $Y$ there is a Hodge
decomposition
$H^{3}(Y;\mathbb{C})=H^{2,1}(Y)\oplus H^{1,2}(Y)$
and an attached intermediate Jacobian
$\operatorname{J}(Y):=\frac{H^{1,2}(Y)^{*}}{\displaystyle
i(H_{3}(Y;\mathbb{Z}))}$
where the embedding $i:H_{3}(Y;\mathbb{Z})\to H^{1,2}(Y)^{*}$ is defined by
sending $\alpha\in H_{3}(Y;\mathbb{Z})$ to the linear functional
$\omega\mapsto\int_{\alpha}\omega$. The complex torus $\operatorname{J}(Y)$ is
a $30$-dimensional abelian variety. It has a principal polarization defined by
the Hermitian form
$Q(\alpha,\beta):=2i\int_{Y}\alpha\wedge\bar{\beta}.$
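For context, the dimension $\dim\operatorname{J}(Y)=h^{1,2}(Y)=30$ can be checked via Griffiths' residue description of the Hodge structure of smooth hypersurfaces (a sketch, under the standard identification of $H^{2,1}(Y)$ with the graded piece of degree $2d-5=3$ of the Jacobian ring of a defining quartic $F$):
$h^{2,1}(Y)=\dim\left(\mathbb{C}[x_{0},\dots,x_{4}]/J_{F}\right)_{3}=\binom{7}{4}-5=30,$
where $J_{F}$ is the ideal generated by the five partial derivatives of $F$; these are linearly independent cubics when $Y$ is smooth.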
The Klein quartic $3$-fold $X$ is the smooth, degree $4$ hypersurface
$X:=\\{[x_{0}:\cdots:x_{4}]:x_{0}^{3}x_{1}+x_{1}^{3}x_{2}+x_{2}^{3}x_{3}+x_{3}^{3}x_{4}+x_{4}^{3}x_{0}=0\\}\subset\mathbb{P}^{4}.$
$X$ admits a non-obvious faithful action of
$\mathbb{Z}/61\mathbb{Z}\rtimes\mathbb{Z}/5\mathbb{Z}$ by automorphisms; see
§2. We will use these symmetries to prove the following.
###### Theorem 1.1 (Intermediate Jacobian).
The intermediate Jacobian $\operatorname{J}(X)$ of the Klein quartic $3$-fold
$X$ is not isomorphic, as a principally polarized abelian variety, to a
product of Jacobians of smooth curves.
A short argument using resolution of singularities (Corollary 3.26 of [CG])
gives the Clemens-Griffiths criterion: if $Y$ is rational then
$\operatorname{J}(Y)$ is isomorphic as a principally polarized abelian variety
(henceforth p.p.a.v.) to a product of Jacobians of smooth curves. Theorem 1.1
thus implies:
###### Corollary 1.2 (Irrationality of Klein).
The Klein quartic $3$-fold is irrational: it is not birational to
$\mathbb{P}^{3}$.
The intermediate Jacobian determines a period mapping
$\operatorname{J}:{\mathcal{M}}_{4,3}\to{\mathcal{A}}_{30}$ from the moduli
space of smooth quartic $3$-folds to the moduli space of $30$-dimensional
principally polarized abelian varieties. $\operatorname{J}$ is a holomorphic
map between quasiprojective varieties. Since the target ${\mathcal{A}}_{30}$
is the quotient of a bounded symmetric domain by an arithmetic lattice,
Theorem 3.10 of Borel [Bo] gives that $\operatorname{J}$ is in fact a
morphism. Let ${\mathcal{P}}\subset{\mathcal{A}}_{30}$ denote the subvariety
consisting of products of Jacobians of smooth curves. Then
$\operatorname{J}^{-1}({\mathcal{P}})$ is a subvariety of
${\mathcal{M}}_{4,3}$. Theorem 1.1 implies that the inclusion
$\operatorname{J}^{-1}({\mathcal{P}})\subset{\mathcal{M}}_{4,3}$ is strict.
The irreducibility of ${\mathcal{M}}_{4,3}$ then gives:
###### Corollary 1.3 (Irrationality is general).
The general smooth quartic $3$-fold is irrational.111In other words, there is
a subvariety $V\subsetneq{\mathcal{M}}_{4,3}$ such that each
$X\in{\mathcal{M}}_{4,3}\setminus V$ is irrational.
Context. Corollaries 1.2 and 1.3 are not new. Iskovskih-Manin [IM] proved in
1971 that any smooth quartic $3$-fold $X$ is irrational. In contrast, Segre
had constructed in [Se] (see also §9 of [IM]) examples of such $X$ that are
unirational: there is a dominant rational map $\mathbb{P}^{3}\dashrightarrow
X$. Iskovskih-Manin prove their theorem by developing the “method of maximal
singularities” to prove that any birational map $X\dashrightarrow X$ has
finite order, and noting that this is of course not true for $\mathbb{P}^{3}$.
This initiated the modern theory of birational superrigidity; see, e.g.
Cheltsov [Ch] for a survey and details. More recently, Colliot-Thélène-Pirutka
[CP], building on a method of Voisin using the Chow group of $0$-cycles,
proved that the very general smooth quartic $3$-fold is not stably rational.
Around the same time as Iskovskih-Manin, Clemens-Griffiths [CG] used their
criterion mentioned above to prove that any smooth cubic $3$-fold $Y$ is
irrational, even though any such $Y$ is unirational. The bulk of their proof
is showing that $\operatorname{J}(Y)$ is not a product of Jacobians of curves.
Intermediate Jacobians have been used (via the Clemens-Griffiths criterion) to
prove irrationality for many $3$-folds, but not (as far as we can tell) for
smooth quartic $3$-folds; see Beauville’s survey [B1], in particular the table
on page $6$. The proof of Theorem 1.1 uses the symmetry of $X$ in a crucial
way, and follows an idea of Beauville (see [B1, B2], and also Zarhin [Z]) to
whom we owe an intellectual debt. It may be worth noting that the proofs of
all of the results in this paper use technology available already in 1972.
Acknowledgements. I thank Nick Addington and Jeff Achter for useful
discussions, and Ronno Das and János Kollár for corrections on an earlier
version of this paper. I am also extremely grateful to Curt McMullen, whose
many useful comments on an earlier version of this paper greatly improved its
exposition.
## 2 Proof of Theorem 1.1
In this note we always work in the category of principally polarized abelian
varieties. The polarization is crucial for the proofs that follow. For any
p.p.a.v $A$, denote by $\operatorname{Aut}(A)$ the group of automorphisms of
$A$ respecting the polarization; in particular $\operatorname{Aut}(A)$ is
finite (see, e.g. [BL], Corollary 5.1.9). Without the polarization this is no
longer true: consider the automorphism of
$A:=\mathbb{C}^{2}/\mathbb{Z}[i]^{2}$ induced by $(z,w)\mapsto(2z+w,z+w)$,
which is an infinite order algebraic automorphism of $A$.
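Concretely, the map in this example is given by the matrix $\left(\begin{smallmatrix}2&1\\ 1&1\end{smallmatrix}\right)\in\operatorname{SL}_{2}(\mathbb{Z})$, which is $\mathbb{C}$-linear and preserves the lattice $\mathbb{Z}[i]^{2}$; its eigenvalues $\frac{3\pm\sqrt{5}}{2}$ are real and different from $\pm 1$, so no positive power of the map is the identity.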
Recall that the Jacobian ${\rm Jac}(C)$ of a smooth, projective curve $C$ is a
p.p.a.v., with polarization induced by the intersection pairing on
$H_{1}(C;\mathbb{Z})$. We will need the following.
###### Lemma 2.1.
Let $C$ be any smooth, projective curve of genus $g\geq 2$ and let ${\rm Jac}(C)$ denote its Jacobian. Then for any subgroup $G\subset\operatorname{Aut}({\rm Jac}(C))$ of odd order the following hold.
1. 1.
Any cyclic subgroup of $G$ has order at most $4g+2$.
2. 2.
If $g\geq 4$ and if $G$ is metacyclic (meaning that $G$ has a cyclic normal
subgroup $N\lhd G$ such that $G/N$ is cyclic) then $|G|\leq 9(g-1)$.
###### Proof.
For any smooth projective curve $C$ of genus $g\geq 2$ the natural map
$\rho:\operatorname{Aut}(C)\to\operatorname{Aut}({\rm Jac}(C))$ is injective;
see, e.g. [FM], Theorem 6.8. The classical Torelli theorem gives that $\rho$
is surjective if $C$ is hyperelliptic, and otherwise $[\operatorname{Aut}({\rm
Jac}(C)):\rho(\operatorname{Aut}(C))]=2$, the remaining automorphism being the
standard involution that every p.p.a.v has. Since $|G|$ is assumed to be odd,
there is a subgroup $\tilde{G}\subset\operatorname{Aut}(C)$ such that
$\rho:\tilde{G}\to G$ is an isomorphism. Both parts of the lemma now follow
from the corresponding statements for subgroups of $\operatorname{Aut}(C)$;
see e.g. Theorem 7.5 of [FM] (which is classical) and Proposition 4.2 of
[Sch], a result of Schweizer. ∎
###### Proof of Theorem 1.1.
Let $X$ be the Klein quartic $3$-fold, and let $\zeta:=e^{2\pi
i/(3^{5}+1)}=e^{2\pi i/244}$. The group
$G:=\mathbb{Z}/61\mathbb{Z}\rtimes\mathbb{Z}/5\mathbb{Z}$ acts on $X$ by
automorphisms via the maps
$\begin{array}[]{l}\phi([x_{0}:x_{1}:x_{2}:x_{3}:x_{4}]):=[\zeta
x_{0}:\zeta^{-3}x_{1}:\zeta^{9}x_{2}:\zeta^{-27}x_{3}:\zeta^{81}x_{4}]\\\ \\\
\psi([x_{0}:x_{1}:x_{2}:x_{3}:x_{4}]):=[x_{1}:x_{2}:x_{3}:x_{4}:x_{0}]\end{array}$
of order $61$ and $5$, respectively (the somewhat surprisingly large-order automorphism $\phi$ is based on Klein and, as far as we can tell, was first written down by Z. Zheng in [Zh], Lemma 3.2); in fact $G\cong\operatorname{Aut}(X)$ (see [GLMV], Theorem B), but we will not need
this. For any smooth hypersurface of degree $d\geq 3$ in $\mathbb{P}^{n}$, $n>1$, the automorphism group acts faithfully on middle cohomology (see, e.g., Chap. 1, Cor. 3.18 of [H]); in particular, $\operatorname{Aut}(X)$ acts faithfully on $H^{3}(X;\mathbb{Z})$. Since in addition
$\operatorname{Aut}(X)$ preserves the Hodge decomposition of
$H^{3}(X;\mathbb{C})$, it follows that $\operatorname{Aut}(X)$, hence $G$,
acts faithfully on $\operatorname{J}(X)$ by p.p.a.v automorphisms.
Suppose that $X$ is rational. The Clemens-Griffiths criterion gives an
isomorphism of p.p.a.v.:
$A:=\operatorname{J}(X)\cong A_{1}^{n_{1}}\times\cdots\times A_{r}^{n_{r}}$
(2.1)
where each $A_{i}:={\rm Jac}(C_{i})$ is the Jacobian of a smooth, projective
curve $C_{i}$ and where $A_{i}\not\cong A_{j}$ if $i\neq j$. They also show
(Corollary 3.23 of [CG]) that each $A_{i}$ is irreducible (a p.p.a.v. $A$ is irreducible if any morphism $A^{\prime}\to A$ of p.p.a.v. is $0$ or an isomorphism), and that the decomposition of any p.p.a.v. into a product of p.p.a.v. as in (2.1) is unique.
Now, $G$ acts on $A$ as p.p.a.v. automorphisms. The uniqueness of the
decomposition (2.1) implies that each $A_{i}^{n_{i}}$ is $G$-invariant. Note
that
$30=\dim(A)=\sum_{i=1}^{r}n_{i}\dim(A_{i}).$ (2.2)
Since each $A_{i}$ is irreducible, the action of $G$ on $A_{i}^{n_{i}}$ gives
a representation
$G\to\operatorname{Aut}(A_{i}^{n_{i}})\cong\operatorname{Aut}(A_{i})^{n_{i}}\rtimes
S_{n_{i}}$
whose composition with the projection to $S_{n_{i}}$ records the permutation
of the direct factors of $A_{i}^{n_{i}}$.
Since the $G$-action on $A$ is faithful and $\mathbb{Z}/61\mathbb{Z}$ has
prime order, there exists some $i$ (after re-labeling assume $i=1$) so that
$\mathbb{Z}/61\mathbb{Z}$ acts faithfully on $A_{1}^{n_{1}}$. By the orbit-
stabilizer theorem, the orbit of any direct factor $A_{1}$ of $A_{1}^{n_{1}}$
under the prime order subgroup $\mathbb{Z}/61\mathbb{Z}\subset G$ has $1$ or
$61$ elements; but the latter is impossible by (2.2) since $\dim(A_{1})\geq
1$. Thus $\mathbb{Z}/61\mathbb{Z}$ leaves each individual direct factor
$A_{1}$ invariant.
Fix such a direct factor $B\cong A_{1}$ on which $\mathbb{Z}/61\mathbb{Z}$
acts faithfully (such a factor must exist since $\mathbb{Z}/61\mathbb{Z}$ acts
faithfully on $A_{1}^{n_{1}}$, as noted above). Recall that $B\cong
A_{1}\cong{\rm Jac}(C_{1})$ for some smooth projective curve $C_{1}$ of genus
$g\geq 1$. Note that in fact $g\geq 2$ since otherwise $\dim(B)=1$ and so
$A_{1}$ does not admit a p.p.a.v. automorphism of order $>6$. Thus Lemma
2.1(1) applies, giving
$61\leq 4\cdot{\rm genus}(C_{1})+2=4\dim(B)+2$
and so $\dim(A_{1})=\dim(B)={\rm genus}(C_{1})\geq 15$. Again by the orbit-
stabilizer theorem, the orbit of $B$ in the set of direct factors of
$A_{1}^{n_{1}}$ under the prime order subgroup $\mathbb{Z}/5\mathbb{Z}\subset
G$ has $1$ or $5$ elements. Since $\dim(B)={\rm genus}(C_{1})\geq 15$ and
$n_{1}\cdot{\rm genus}(C_{1})\leq 30$, the latter is not possible; that is,
$B$ is $\mathbb{Z}/5\mathbb{Z}$-invariant, and so $G$-invariant.
Now, the definitions of $\phi$ and $\psi$ above give that $G\cong\mathbb{Z}/61\mathbb{Z}\rtimes\mathbb{Z}/5\mathbb{Z}$ is a nontrivial semidirect product; that is, $G$ is not a direct product. For any homomorphism
$\mu:C\rtimes D\to E$ of a nontrivial semidirect product of finite simple
groups (e.g. cyclic groups of prime order) to any group, if $\mu$ is not
faithful on $D$ then it is not faithful on $C$ (and indeed $\mu$ is trivial in
this case). Since the $\mathbb{Z}/61\mathbb{Z}$-action on $B$ is faithful, it
follows that the $\mathbb{Z}/5\mathbb{Z}$ action on $B$ is faithful. From this
it follows that the $G$-action on $B$ is faithful (consider the kernel $K$ of
the $G$-action, and note that $K\cap\mathbb{Z}/61\mathbb{Z}=0$ and so
$K<\mathbb{Z}/5\mathbb{Z}$, so that $K$ is trivial).
Note that
$|G|=61\cdot 5=305>261=9\cdot(30-1)>9({\rm genus}(C_{1})-1).$ (2.3)
Since ${\rm genus}(C_{1})\geq 15\geq 4$ and since $G$ is metacyclic, Lemma
2.1(2) applies. Its conclusion contradicts (2.3). Thus $X$ is not rational.
∎
###### Remark 2.2.
One might hope to replace the use of Lemma 2.1(2) by something simpler, such
as the Hurwitz bound $|\operatorname{Aut}(C)|\leq 84(g-1)$. However, a quick
check of the numerology shows that this is not enough to obtain a
contradiction.
## References
* [B1] A. Beauville, The Lüroth problem, in Rationality problems in algebraic geometry, 1–27, Lecture Notes in Math. 2172, Fond. CIME/CIME Found. Subser., Springer, 2016.
* [B2] A. Beauville, Non-rationality of the symmetric sextic Fano threefold, in Geometry and arithmetic, EMS Ser. Congr. Rep., 57–60, EMS, Zürich, 2012.
* [BL] C. Birkenhake and H. Lange, Complex Abelian Varieties, second ed., Springer, 2004.
* [Bo] A. Borel, Some metric properties of arithmetic quotients of symmetric spaces and an extension theorem, J. Diff. Geom. 6 (1972), 543–560.
* [CG] C. Clemens and P. Griffiths, The Intermediate Jacobian of the Cubic Threefold, Annals of Math. 95 (1972), no. 2, 281–356.
* [Ch] I. Chel’tsov, Birationally rigid Fano varieties, Russian Math. Surveys 60 (2005), no. 5, 875–965.
* [CP] J.-L. Colliot-Thélène and A. Pirutka, Hypersurfaces quartiques de dimension $3$: non-rationalité stable, Ann. Sci. Éc. Norm. Supér. (4) 49 (2016), no. 2, 371–397.
* [FM] B. Farb and D. Margalit, A Primer on Mapping Class Groups, Princeton Mathematical Series, Vol. 50, Princeton University Press, 2012.
* [GLMV] V. González-Aguilera, A. Liendo, P. Montero and R. Villaflor Loyola, On a Torelli Principle for automorphisms of Klein hypersurfaces, Trans. AMS, to appear.
* [H] D. Huybrechts, The Geometry of Cubic Hypersurfaces, Cambridge University Press, 2023.
* [IM] V. A. Iskovskih and Yu. Manin, Three-dimensional quartics and counterexamples to the Lüroth problem, Math. USSR-Sb. 15 (1971), 141–166.
* [Sch] A. Schweizer, Metacyclic groups as automorphism groups of compact Riemann surfaces, Geom. Dedicata 190 (2017), 185–197.
* [Se] B. Segre, Variazione continua ed omotopia in geometria algebrica, Ann. Mat. Pura Appl. (4) 50 (1960), 149–186.
* [Z] Y. Zarhin, Cubic surfaces and cubic threefolds, Jacobians and intermediate Jacobians, in Algebra, Arithmetic, and Geometry: in honor of Yu. I. Manin, Vol. II, 687–691, Progr. Math. 270, Birkhäuser, 2009.
* [Zh] Z. Zheng, On abelian automorphism groups of hypersurfaces, Israel J. Math. 247 (2022), 479–498.
Dept. of Mathematics
University of Chicago
E-mail<EMAIL_ADDRESS>
# Supplementary Information: Understanding the computation of time using
neural network models
Zedong Bi Changsong Zhou
## S1 Method
### S1.1 Network details
We adopted a discrete-time formulation of network dynamics, in which
$\mathbf{x}_{t}=\mathbf{W}^{rec}\mathbf{r}_{t-1}+\mathbf{W}^{in}\mathbf{u}_{t}+\mathbf{W}^{in,att}[\mathbf{u}_{t}^{att}-\theta^{att}]_{+}+\mathbf{b}+\sqrt{2\sigma_{rec}^{2}}\text{N}(0,1),$
(S1)
where $\mathbf{x}_{t}$, $\mathbf{r}_{t}$ and $\mathbf{u}_{t}$ are respectively
the synaptic current, firing rate and network input at time step $t$,
$\mathbf{b}$ is the background input, $\mathbf{W}^{rec}$ is the recurrent
weight, $\mathbf{W}^{in}$ is the input weight, and $\sigma_{rec}$ is the
strength of recurrent noise. We supposed $\mathbf{r}_{t}=f(\mathbf{x}_{t})$,
with $f(\cdot)$ being the softplus current-to-rate transfer function, i.e.,
$f(x)=\log(1+\exp(x)).$ (S2)
Input $\mathbf{u}_{t}$ is also noisy,
$\mathbf{u}_{t}=\mathbf{u}_{signal}+\sqrt{2\sigma_{in}^{2}}\text{N}(0,1),$
(S3)
with $\sigma_{in}$ being the strength of input noise. $\mathbf{W}^{in,att}$,
$\mathbf{u}_{t}^{att}$ and $\theta^{att}$ are the quantities related to the
input units modulated by top-down attention. These quantities are present only when studying the effect of anticipatory attention in non-timing tasks (Fig. 6g-i); the model does not include them in the other tasks. $\mathbf{W}^{in,att}$
is the weight from the attention-modulated units to the recurrent network,
$\mathbf{u}_{t}^{att}$ is the input current to the attention-modulated units,
and $\theta^{att}$ is the firing threshold of these units. The firing
threshold is
$\theta^{att}=[\theta_{0}^{att}-\mathbf{W}^{fb,att}\mathbf{r}_{t}]_{+},$ (S4)
with $\mathbf{W}^{fb,att}$ being a positive feedback weight, so that $\theta^{att}$ decreases with the feedback current, from $\theta_{0}^{att}=1.5$ down to zero. Eq. S4 models the disinhibitory effect of feedback
connections [1]. Similar to $\mathbf{u}_{t}$, $\mathbf{u}_{t}^{att}$ is also
noisy, with the noise strength $\sigma_{in}^{2}$ (eq. S3).
Some previous studies started with a continuous-time formulation and obtained the discrete-time version using the Euler method (omitting the attention-modulated units):
$\mathbf{x}_{t}=(1-\alpha)\mathbf{x}_{t-1}+\alpha(\mathbf{W}^{rec}\mathbf{r}_{t-1}+\mathbf{W}^{in}\mathbf{u}_{t}+\mathbf{b}+\sqrt{2\alpha^{-1}\sigma_{rec}^{2}}\text{N}(0,1)),$
(S5)
with $\alpha=\Delta t/\tau$ being the ratio of the time-step length $\Delta t$ to the membrane time constant $\tau$. In our study, we effectively set $\alpha=1$, similar to the scheme used in Refs. [2, 3]. We also set $\Delta t=20\text{ ms}$. The output of the network is
$z=\mathbf{W}^{out}\mathbf{r}+\mathbf{b}^{out},$ (S6)
with the dimension of $z$ depending on tasks.
We set $\sigma_{in}=0.01$, $\sigma_{rec}=0.05$ when training the network.
After training, when plotting the neuronal activities in the perception epoch
(Fig. 2b-d), we kept $\sigma_{in}=0.01$, $\sigma_{rec}=0.05$ so that the
neuronal temporal profiles under different durations of perception epoch did
not fully overlap. For the other analyses, we turned off the noise by default.
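For concreteness, the following is a minimal NumPy sketch of one step of the discrete-time dynamics (eqs. S1, S2, S6), with the attention-modulated terms omitted; the function and variable names are our own illustrative choices, not those of the original code.

```python
import numpy as np

def softplus(x):
    # Current-to-rate transfer function, eq. S2 (log(1 + exp(x)), computed stably).
    return np.logaddexp(0.0, x)

def step(r_prev, u_t, W_rec, W_in, b, sigma_rec=0.05, rng=None):
    """One update of eq. S1, attention-modulated terms omitted."""
    rng = rng or np.random.default_rng()
    noise = np.sqrt(2.0 * sigma_rec**2) * rng.standard_normal(b.shape)
    x_t = W_rec @ r_prev + W_in @ u_t + b + noise
    return softplus(x_t)  # r_t = f(x_t)

def readout(r_t, W_out, b_out):
    # Linear readout, eq. S6.
    return W_out @ r_t + b_out
```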
### S1.2 Task details
#### S1.2.1 Timing tasks
Interval production task (IP). The network received input from two units: from one came the two pulses that defined the time interval, and from the other came the Go cue. The interval between the beginning of the simulation and the
onset of the first pulse was
$T_{start}\sim U(60\text{ ms},500\text{ ms}),$ (S7)
where $U(t_{1},t_{2})$ is a uniform distribution between $t_{1}$ and $t_{2}$.
The interval between the offset of the first pulse and the onset of the second
pulse was
$T\sim U(400\text{ ms},1400\text{ ms}).$ (S8)
Note that we set the range of $T$ to be $[400\text{ ms},1400\text{ ms}]$
during training, but after training, we only investigated the performance of
the network when $T\in[600\text{ ms},1200\text{ ms}]$. The reason is that
there were boundary effects if, after training, $T$ took a value close to 400
ms or 1400 ms: if $T$ was close to 400 ms, then the time interval produced by
the network was biased to be larger than $T$; whereas if $T$ was close to 1400
ms, then the produced interval was biased to be smaller than $T$. Such biases
were weak if $T$ took a middle value (Fig. S1e).
The interval between the offset of the second pulse and the onset of the Go cue (i.e., the delay period) was
$T_{delay}\sim U(600\text{ ms},1600\text{ ms}).$ (S9)
All input pulses (including the two pulses that defined the time interval, and
the Go cue) lasted for 60 ms, and had strength 1. Input units stayed at 0 when
there were no pulses.
The target output was a scalar. It stayed at zero from the beginning, jumped to 1 at time $T$ after the offset of the Go cue, and stayed at 1 until the end of the simulation 300 ms later.
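As an illustration, this sketch generates the input and target traces of one IP trial under the stated parameters ($\Delta t=20$ ms, 60 ms pulses of strength 1); the helper name and array layout are our own choices.

```python
import numpy as np

DT = 20.0  # time-step length, ms

def ip_trial(rng=None):
    """Input (n_steps, 2) and target (n_steps,) traces for one IP trial."""
    rng = rng or np.random.default_rng()
    t_start = rng.uniform(60, 500)      # eq. S7
    T       = rng.uniform(400, 1400)    # eq. S8
    t_delay = rng.uniform(600, 1600)    # eq. S9
    pulse = 60.0                        # all pulses last 60 ms
    on1 = t_start                       # first interval pulse
    on2 = on1 + pulse + T               # second interval pulse
    go  = on2 + pulse + t_delay         # Go cue
    t = np.arange(int(np.ceil((go + pulse + T + 300) / DT))) * DT
    u = np.zeros((t.size, 2))
    u[(t >= on1) & (t < on1 + pulse), 0] = 1.0
    u[(t >= on2) & (t < on2 + pulse), 0] = 1.0
    u[(t >= go) & (t < go + pulse), 1] = 1.0
    z_hat = (t >= go + pulse + T).astype(float)  # jumps to 1 at time T after the Go-cue offset
    return u, z_hat
```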
Interval comparison task (IC). The network received two successive long-lasting stimuli from two input units. The first stimulus, which came from the first unit, started at time $T_{start}$ after the beginning of the simulation and lasted for duration $T_{1}$. Then, after a delay interval $T_{delay}$, the second stimulus, which came from the second unit, started and lasted for duration $T_{2}$.
$T_{start}\sim U(60\text{ ms},500\text{ ms}),\quad T_{1}\sim U(400\text{
ms},1400\text{ ms}),\quad T_{delay}\sim U(600\text{ ms},1600\text{ ms}),\quad
T_{2}\sim U(400\text{ ms},1400\text{ ms})$ (S10)
All the input stimuli had strength 1. Input units stayed at 0 when there were
no stimuli.
The target outputs were two scalars $\hat{z}_{0}$ and $\hat{z}_{1}$. Both
stayed at zero from the beginning. If $T_{1}>T_{2}$, then $\hat{z}_{0}$ jumped to 1 at the offset of the second stimulus and stayed at 1 until the end of the simulation 300 ms later. Otherwise, $\hat{z}_{1}$ jumped to 1 at the offset of the second stimulus.
Timed spatial reproduction task (t-SR). The network successively received
three pulses from three input channels. The first channel was a line that
coded spatial locations. This line contained 32 units, whose preferred
directions were uniformly spaced from -6 to 25. For unit $i$ with preferred
location $y_{i}$, its activity in a pulse with location $x$ was
$A_{in}(t)\exp[-\frac{1}{2}(\frac{|y_{i}-x|}{2})^{2}],$ (S11)
where $A_{in}(t)=1$ during the presentation of the pulse and $A_{in}(t)=0$ at other times. In our simulation, the spatial locations of the stimuli were
uniformly drawn from 0 to 19. The second and third channels were both scalar
inputs. The pulse from the second channel defined the time interval to be
remembered together with the pulse from the first channel. The pulse from the
third channel acted as Go cue. $T_{start}$, $T$ and $T_{delay}$ were
distributed similarly as in IP (eqs. S7-S9).
The target output was a line of 32 units, which represented the response location using tuning curves similar to those used for the input line (eq. S11):
$\hat{z}_{i}=A_{out}(t)\exp[-\frac{1}{2}(\frac{|y_{i}-x|}{2})^{2}],$ (S12)
where the amplitude $A_{out}(t)$ stayed at zero from the beginning, jumped to
1 at time $T$ after the offset of the Go cue, and stayed at 1 until the end of
the simulation at 300 ms afterwards.
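A sketch of the Gaussian line-channel encoding of eqs. S11/S12 (32 units with preferred locations spanning $[-6,25]$ and tuning width 2) follows; the names are ours, and the simple weighted-mean readout is only an assumption standing in for the population-vector code of Ref. [4].

```python
import numpy as np

PREF = np.linspace(-6, 25, 32)   # preferred locations y_i of the 32 line units
SIGMA = 2.0                      # default tuning width

def encode_location(x, amplitude=1.0, pref=PREF, sigma=SIGMA):
    # Line-channel activity for a pulse at location x, eq. S11.
    return amplitude * np.exp(-0.5 * ((pref - x) / sigma) ** 2)

def decode_location(activity, pref=PREF):
    # A simple weighted-mean readout; the paper uses a population-vector
    # method from the code of Ref. [4], which may differ in detail.
    w = np.clip(activity, 0.0, None)
    return float(np.sum(w * pref) / (np.sum(w) + 1e-12))
```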
Timed decision making task (t-DM). The network received three channels of scalar input. From the first two channels came the stimuli whose strengths were to be compared with each other, and from the last channel came the Go cue pulse. Starting from the beginning of the simulation, the first two channels were set to 0 for duration $T_{start}$ and then jumped to $A_{1}$ and $A_{2}$, respectively; after time $T$, these two channels were set to 0 again. The Go cue pulse came at time $T_{delay}$ after the offset of the first two channels.
Here,
$A_{1}=\gamma+c,\quad A_{2}=\gamma-c,$ (S13)
where $\gamma$ was the average strength of the two stimuli and was distributed as $\gamma\sim U(0.8,1.2)$, and $c$ was half the strength difference of the two stimuli and was distributed as
$c\sim U(\\{-0.08,-0.04,-0.02,-0.01,0.01,0.02,0.04,0.08\\}),$ (S14)
where $U(\\{a_{1},a_{2},\cdots,a_{n}\\})$ denotes a discrete uniform
distribution over the set $\\{a_{1},a_{2},\cdots,a_{n}\\}$. $T_{start}$, $T$
and $T_{delay}$ were distributed similarly as in interval production task
(eqs. S7-S9).
The target outputs were two scalars $\hat{z}_{0}$ and $\hat{z}_{1}$. Both
stayed at zero from the beginning. If $c>0$, then $\hat{z}_{0}$ jumped to 1 at time $T$ after the offset of the Go cue and stayed at 1 until the end of the simulation 300 ms later. Otherwise, $\hat{z}_{1}$ jumped to 1 at time $T$ after the offset of the Go cue.
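A sketch of sampling the t-DM stimulus strengths (eqs. S13, S14); the names are ours.

```python
import numpy as np

C_SET = [-0.08, -0.04, -0.02, -0.01, 0.01, 0.02, 0.04, 0.08]  # eq. S14

def sample_tdm_stimuli(rng=None):
    rng = rng or np.random.default_rng()
    gamma = rng.uniform(0.8, 1.2)   # average strength of the two stimuli
    c = rng.choice(C_SET)           # half the strength difference
    return gamma + c, gamma - c     # A1, A2 (eq. S13)
```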
#### S1.2.2 Non-timing tasks: default settings
Spatial reproduction task (SR). The network received pulses from two input
channels. The first channel was a line that contained 32 units, coding spatial
locations in the range $[-6,25]$ in the way indicated by eq. S11. In our
simulation, the spatial locations of the stimuli were uniformly drawn from 0
to 19. The second channel was a scalar input. The duration $T_{delay}$ of the
delay epoch between the first and second pulses was 1200 ms. The target output
was a line of 32 units (eq. S12), which was to indicate the location of the
first pulse immediately after the second pulse.
Comparison task (COMP). The network received pulses from two input channels, both of which were lines containing 32 units; the two channels successively gave two pulses to the network. The target outputs were two scalars $\hat{z}_{0}$ and $\hat{z}_{1}$, which were to indicate whether or not the spatial coordinate of the first pulse was larger than that of the second pulse.
Change detection task (CD). The network had the same structure as that in
COMP. Two scalar outputs were to indicate whether or not the distance between
the spatial locations of the two input pulses was within 1.
Decision making task (DM). The network received two channels of stimuli lasting for $T=1200$ ms. The two scalar outputs were to indicate which stimulus was stronger, immediately after the end of the two stimuli.
Cue-dependent decision making task (cue-DM). The network received two channels of stimuli lasting for $T=1200$ ms. At 1140 ms after the onset of the two stimuli, a two-dimensional one-hot vector lasting for 60 ms was input from a third channel. Two scalar outputs were to indicate the index of the stronger stimulus or the index of the weaker stimulus, according to the third channel.
#### S1.2.3 Non-timing tasks: studying the factors that influence the
strength of temporal signal
To study the effect of the overlap of sensory input on the strength of the temporal signal in the delay epoch of SR, COMP and CD (Fig. 6d), we expanded the number of units in the line channels to 44 (default is 32) and broadened the standard deviation of the tuning curves (eq. S11) to 4 (default is 2). These units coded spatial locations in the range $[-12, 31]$. In our simulation, the spatial locations of input stimuli were uniformly drawn from 0 to 19.
To study the effect of multi-tasking (Fig. 6e), we trained the network on t-SR
and SR concurrently, or on t-DM and DM concurrently. The two tasks in each pair shared the same input and output channels. We used a one-hot vector from another two-dimensional input channel to indicate which task should be performed [4]. The network had to be able to perform whichever task was indicated.
To study the effect of timing anticipation (Fig. 6f), we trained the network
to perform SR, COMP, CD, DM and cue-DM, with the duration $T$ of the delay
epoch (for SR, COMP and CD) or the stimulus-presentation epoch (for DM and cue-DM) drawn randomly from $[800\text{ ms},1600\text{ ms}]$. After training, we analyzed the simulation results when $T=1200$ ms, and compared the results with the cases in which the network was trained with $T$ fixed at 1200 ms. To study the effect of anticipatory attention (Fig. 6g-i), feedback was imposed on the second input channel of SR, COMP and CD, and on the third channel of cue-DM. This means that these input channels were modeled using the third term on the right-hand side of eq. S1, instead of the second term.
### S1.3 Training details
Training was performed to minimize a cost function using back-propagation through time. The cost function was defined as
$C=\sum_{i}m_{i}(z_{i}-\hat{z}_{i})^{2},$ (S15)
where $i$ indexes the output units, $z_{i}$ is the actual output defined by eq. S6, $\hat{z}_{i}$ is the target output, and $m_{i}$ is the mask. In all tasks, $m_{i}=0$ before the onset of the first stimulus and $m_{i}=1$ afterwards; therefore, only the output after the onset of the first stimulus was constrained. When studying the effect of anticipatory attention in non-timing tasks (Fig. 6g-i), we added L2 regularization on the feedback current $\mathbf{I}^{fb}=\mathbf{W}^{fb,att}\mathbf{r}_{t}$ (see eq. S4), so that eq. S15 becomes
$C=\sum_{i}m_{i}(z_{i}-\hat{z}_{i})^{2}+\beta_{fb}\frac{1}{N_{i,t}}\sum_{i,t}(I_{i,t}^{fb})^{2},$
with $\beta_{fb}=10^{-4}$. This cost function was minimized using the Adam optimizer with learning rate 0.0005 and batch size 64 in each training step.
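A sketch of the masked cost of eq. S15, with the optional feedback-current penalty; the array names and shapes are our assumptions.

```python
import numpy as np

def cost(z, z_hat, mask, I_fb=None, beta_fb=1e-4):
    """Masked squared-error cost (eq. S15); z, z_hat, mask have shape (time, n_out).

    If I_fb (feedback currents, shape (time, n_att)) is given, add the L2 penalty
    used in the anticipatory-attention variant.
    """
    c = np.sum(mask * (z - z_hat) ** 2)
    if I_fb is not None:
        c = c + beta_fb * np.mean(I_fb ** 2)
    return c
```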
We trained 16 configurations to perform IP and IC tasks, and trained 30
configurations to perform t-SR and t-DM tasks. Different configurations were
initialized using different random seeds.
Before training, recurrent self-connections ($W_{ii}^{rec}$ in eq. S5) were
initialized to 1, and other recurrent connections were initialized as
independent Gaussian variables with mean 0 and standard deviation
$0.3/\sqrt{N_{rec}}$, with $N_{rec}=256$ being the number of recurrent units.
This initialization strategy was used in Ref. [3]. The identity self-
connections prevent vanishing gradient during training [5], and the non-zero
off-diagonal recurrent connections induce sequential activity in the network
after training [3], so that the dynamics of the network becomes comparable to
experimental observations [6, 7, 8, 9, 10]. Output connections were
initialized as independent Gaussian variables with mean 0 and standard
deviation $1/\sqrt{N_{rec}}$. Input connections from the line input were
initialized as variables drawn uniformly from
$[-1/\sqrt{2\sigma_{tuning}},1/\sqrt{2\sigma_{tuning}}]$, with
$\sigma_{tuning}$ being the standard deviation of the Gaussian tuning curve
(eq. S11), which was 2 by default and 4 when studying the effect of input
overlap in non-timing tasks. The input connections from the other channels
were initialized as variables drawn uniformly from
$[-1/\sqrt{D_{channel}},1/\sqrt{D_{channel}}]$, with $D_{channel}$ being the
dimension of the input channel.
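A sketch of the described initialization (identity self-connections with weak Gaussian off-diagonal recurrence, Gaussian output weights, and uniform input weights); the function signature is ours.

```python
import numpy as np

def init_weights(n_rec=256, n_out=1, d_channel=1, sigma_tuning=2.0, n_line=32, rng=None):
    rng = rng or np.random.default_rng()
    # Recurrent: identity self-connections, weak Gaussian off-diagonal terms.
    W_rec = 0.3 / np.sqrt(n_rec) * rng.standard_normal((n_rec, n_rec))
    np.fill_diagonal(W_rec, 1.0)
    # Output: Gaussian with standard deviation 1/sqrt(N_rec).
    W_out = rng.standard_normal((n_out, n_rec)) / np.sqrt(n_rec)
    # Inputs: uniform, with the ranges stated in the text.
    lim_line = 1.0 / np.sqrt(2.0 * sigma_tuning)
    W_in_line = rng.uniform(-lim_line, lim_line, (n_rec, n_line))
    lim_ch = 1.0 / np.sqrt(d_channel)
    W_in_ch = rng.uniform(-lim_ch, lim_ch, (n_rec, d_channel))
    return W_rec, W_out, W_in_line, W_in_ch
```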
Every 200 training steps, we evaluated the performance of the network using a
batch of size 512, and stopped training as soon as the performance of the
network reached criterion (Fig. S1i-l). We describe our criteria for t-SR and t-DM in detail; the other tasks used similar criteria:
In t-SR, a time interval was considered to be produced if: (1) the activities
of all the 32 output units were below 0.2 before the offset of the Go cue, (2)
one of them went above 0.5 at some time point $t_{p}$ before $T+300\text{ ms}$ after the offset time $t_{off}^{cue}$ of the Go cue. The produced interval was $T_{p}=t_{p}-t_{off}^{cue}$. The output location at time $t_{p}$ was read out using a population-vector method (see the computer code in Ref. [4]). Training
was stopped as soon as (1) time intervals were produced in over 95% simulation
trials, (2) the relative error of the produced intervals $|T_{p}-T|/T<0.025$,
(3) the output locations were on average within 0.8 of the input locations.
In t-DM, a time interval was considered to be produced if: (1) the activities
of both output units $z_{0}$ and $z_{1}$ were below 0.2 before the offset of
the Go cue, (2) one of them went above 0.5 at some time point $t_{p}$ before
$T+300\text{ ms}$ after the offset $t_{off}^{cue}$ of the Go cue, whereas the other one stayed below 0.5. The produced interval was $T_{p}=t_{p}-t_{off}^{cue}$. In the trials in which a time interval was
produced, the decision was considered to be correct if: when $c>0$ (or $c<0$),
$z_{0}$ (or $z_{1}$) went above 0.5 and $z_{1}$ (or $z_{0}$) kept below 0.5.
Training was stopped as soon as (1) time intervals were produced in over 96%
of simulation trials, (2) the relative error of the produced intervals
$|T_{p}-T|/T<0.025$, (3) the decision error rate was smaller than 0.02.
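A sketch of detecting a produced interval from a single output trace under these criteria (thresholds 0.2 and 0.5); the names are ours and the logic is our reading of the rules above.

```python
import numpy as np

def produced_interval(z, t, t_go_off, T, low=0.2, high=0.5):
    """Return T_p = t_p - t_off^cue if a valid interval is produced, else None.

    z: one output trace; t: times (ms); t_go_off: Go-cue offset time (ms).
    """
    if np.any(z[t < t_go_off] >= low):   # must stay below 0.2 before the Go-cue offset
        return None
    window = (t >= t_go_off) & (t < t_go_off + T + 300)
    crossings = np.flatnonzero(z[window] > high)
    if crossings.size == 0:
        return None
    t_p = t[window][crossings[0]]        # first threshold crossing
    return t_p - t_go_off
```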
### S1.4 Data analysis
#### S1.4.1 Types of neurons at the end of the delay epoch
In IP or IC, we defined $f_{i}(T)$ to be the activity of the $i$th neuron at
the end of the delay epoch as a function of the duration $T$ of the perception
(for IP) or stimulus1 (for IC) epoch. We picked neurons that can be strongly
activated at the end of the delay epoch, namely the neurons whose
$\max_{T\in[T_{min},T_{max}]}f_{i}(T)>\theta_{sa}$, with $T_{min}=600\text{
ms}$ and $T_{max}=1200\text{ ms}$ respectively being the minimal and maximal
values of $T$ in our simulation, and $\theta_{sa}=2$. Our results are not
sensitive to the value of $\theta_{sa}$. We classified $f_{i}(T)$ of the
picked neurons into three types, namely monotonically increasing (MoI),
monotonically decreasing (MoD), and non-monotonic (non-M) in the following
way: We divided the range of $T$ (i.e., $[T_{min},T_{max}]$) into four parts
of the same length, and calculated the mean value of $f_{i}(T)$ in these four
parts, say $f_{i}(\text{part 1})=\frac{4}{T_{max}-T_{min}}\int_{T_{min}}^{T_{min}+(T_{max}-T_{min})/4}f_{i}(T)\,\mathrm{d}T$, $f_{i}(\text{part 2})=\frac{4}{T_{max}-T_{min}}\int_{T_{min}+(T_{max}-T_{min})/4}^{T_{min}+2(T_{max}-T_{min})/4}f_{i}(T)\,\mathrm{d}T$,
etc. If $f_{i}(\text{part 1})\leq f_{i}(\text{part 2})\leq f_{i}(\text{part
3})\leq f_{i}(\text{part 4})$, then neuron $i$ belongs to MoI type; if
$f_{i}(\text{part 1})\geq f_{i}(\text{part 2})\geq f_{i}(\text{part 3})\geq
f_{i}(\text{part 4})$, then neuron $i$ belongs to MoD type; otherwise, neuron
$i$ belongs to non-M type.
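A sketch of this MoI/MoD/non-M classification from the four-part means; the names are ours.

```python
import numpy as np

def classify_neuron(f, T):
    """Return 'MoI', 'MoD' or 'non-M' from the means of f over four equal parts of T."""
    edges = np.linspace(T.min(), T.max(), 5)
    parts = np.array([f[(T >= edges[k]) & (T <= edges[k + 1])].mean() for k in range(4)])
    d = np.diff(parts)
    if np.all(d >= 0):
        return "MoI"
    if np.all(d <= 0):
        return "MoD"
    return "non-M"
```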
In t-SR, we defined $g_{i}(T,x)$ to be the activity of the $i$th neuron at
the end of the delay epoch as a function of $T$ at a given location $x$ of the
first pulse. We picked neurons that can be strongly activated at the end of
the delay epoch (i.e., the neurons whose
$\max_{\\{T,x\\}}g_{i}(T,x)>\theta_{sa}$). We then defined
$f_{i}(T)=\max_{x}g_{i}(T,x)$, and classified neuron $i$ into the MoI, MoD or non-M type according to the monotonicity of $f_{i}(T)$, in a way similar to the IP or IC case introduced above. Similarly, in t-DM, we classified neurons
according to $f_{i}(T)=\max_{c}g_{i}(T,c)$, where $c$ is the half difference
between the strengths of the presented stimuli (eq. S13).
#### S1.4.2 Temporal scaling in the production epoch
Analysis of temporal scaling was performed using a technique similar to that of Ref. [2]. Specifically, we calculated the $k$th scaling component
$\mathbf{u}_{SC,k}$ through the following equation:
$\mathbf{u}_{SC,k}=\text{arg
}\min_{\mathbf{u}}\frac{\sum_{t}\sum_{T}(\mathbf{r}_{k}^{S}(t;T)\mathbf{u}-\text{Mean}_{T}(\mathbf{r}_{k}^{S}(t;T)\mathbf{u}))^{2}}{\sum_{t}\sum_{T}(\mathbf{r}_{k}^{S}(t;T)\mathbf{u}-\text{Mean}_{\\{t,T\\}}(\mathbf{r}_{k}^{S}(t;T)\mathbf{u}))^{2}},$
(S16)
where $\mathbf{r}_{k}^{S}(t;T)$ is population activity at the scaled time when
the duration of the perception epoch is $T$ (see below for details), the
denominator is the total variance of the trajectories, and the numerator is
the variance that cannot be explained by temporal scaling. To calculate the
first scaling component $\mathbf{u}_{SC,1}$, we set
$\mathbf{r}_{1}^{S}(t;T)=\mathbf{r}^{PC}(tT_{p};T),$ with $0\leq t\leq 1$,
where $\mathbf{r}^{PC}$ is the projection of the population activity in the
subspace spanned by the first 9 principal components, and $T_{p}$ is the
interval produced by the network in the production epoch; we then minimized the objective in eq. S16 over $\mathbf{u}$. To calculate the second scaling component $\mathbf{u}_{SC,2}$, we set $\mathbf{r}_{2}^{S}(t;T)=\mathbf{r}_{1}^{S}(t;T)-(\mathbf{r}_{1}^{S}(t;T)\mathbf{u}_{SC,1})\mathbf{u}_{SC,1}^{T}$, and then minimized the objective in eq. S16 over $\mathbf{u}$ in the subspace orthogonal to $\mathbf{u}_{SC,1}$. In this way, we calculated all 9 scaling components one by one.
Scaling index (SI) of a subspace $U$ was defined as
$\text{SI}=\frac{\sum_{t}\sum_{T}(\mathbf{r}_{1}^{S}(t;T)U-\text{Mean}_{T}(\mathbf{r}_{1}^{S}(t;T)U))^{2}}{\sum_{t}\sum_{T}(\mathbf{r}_{1}^{S}(t;T)U-\text{Mean}_{\\{t,T\\}}(\mathbf{r}_{1}^{S}(t;T)U))^{2}},$
(S17)
where $\mathbf{r}_{1}^{S}(t;T)U$ is the projection of the scaled trajectory to
the subspace $U$.
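A sketch of the scaling index of eq. S17 for a given subspace; the names and array layout are our assumptions.

```python
import numpy as np

def scaling_index(r_scaled, U):
    """Scaling index of a subspace U (eq. S17).

    r_scaled: (n_t, n_T, n_dim), trajectories on scaled time t in [0, 1] for each T;
    U: (n_dim, k), orthonormal basis of the subspace.
    """
    p = r_scaled @ U
    num = np.sum((p - p.mean(axis=1, keepdims=True)) ** 2)       # variance across T at fixed t
    den = np.sum((p - p.mean(axis=(0, 1), keepdims=True)) ** 2)  # total variance
    return num / den
```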
#### S1.4.3 The geometry of coding combination
During the perception epoch of t-SR, the network state is quantified by the
time elapsed from the beginning of the epoch (temporal flow) and the spatial
information of the first pulse. At the end of the delay epoch of t-SR, the
network state is quantified by the time interval between the first two pulses
and the spatial information of the first pulse. During the production epoch of
t-SR, the network state is quantified by temporal flow, time interval and
spatial information. Similar scenario also exists in t-DM, except that the
non-temporal information is the decision choice made by the network. In t-DM,
the decision choice $d$ depends on the sign of the half difference $c$ between
the strength of the presented two stimuli (eq. S13), we defined
$r_{i}(d=1,\\{a\\})=\langle r_{i}(c,\\{a\\})\rangle_{c>0}$ and
$r_{i}(d=-1,\\{a\\})=\langle r_{i}(c,\\{a\\})\rangle_{c<0}$, where $\\{a\\}$
indicates the other parameters than decision choice, and used
$r_{i}(d,\\{a\\})$ to do the following analysis. Together, during the
perception epoch and at the end of the delay epoch of t-SR and t-DM, two
variables are coded in the network state; during the production epoch, three
variables are coded in the network state. We used two measurements to quantify
the geometry of the coding combination of multiple variables: (1) the angle
between the first marginal principal components and (2) the mixed variance
[11], introduced below.
Suppose the activity of the $i$th neuron $r_{i}(a,b)$ is a function of two
variables $a$ and $b$, with the mean of $r_{i}(a,b)$ being subtracted so that
$\langle r_{i}(a,b)\rangle_{a,b}=0$. The marginal principal components (PCs)
with respect to $a$ are the PCs of the dot set $\\{\langle
r_{i}(a,b)\rangle_{b}\\}_{i}$, and the marginal PCs of $b$ are the PCs of
$\\{\langle r_{i}(a,b)\rangle_{a}\\}_{i}$. We quantified the coding
orthogonality of $a$ and $b$ by calculating the angle between the first
marginal PCs of $a$ and $b$. The portions of variance explained by $a$ and $b$
are respectively $p_{a}=\text{Var}_{i,a}(\\{\langle
r_{i}(a,b)\rangle_{b}\\}_{i})/v_{tot}$ and $p_{b}=\text{Var}_{i,b}(\\{\langle
r_{i}(a,b)\rangle_{a}\\}_{i})/v_{tot}$, with the total variance
$v_{tot}=\text{Var}_{i,a,b}(\\{r_{i}(a,b)\\}_{i})$. The portion of mixed
variance between $a$ and $b$ is $p_{a+b}=1-p_{a}-p_{b}$.
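A sketch of this two-variable variance decomposition ($p_{a}$, $p_{b}$ and the mixed portion $p_{a+b}$); the names are ours.

```python
import numpy as np

def mixed_variance(R):
    """Two-variable variance decomposition; R has shape (n_neurons, n_a, n_b).

    Returns (p_a, p_b, p_mix); R is mean-centered per neuron so <r_i>_{a,b} = 0.
    """
    R = R - R.mean(axis=(1, 2), keepdims=True)
    v_tot = np.var(R)
    p_a = np.var(R.mean(axis=2)) / v_tot   # variance of <r_i(a,b)>_b over (i, a)
    p_b = np.var(R.mean(axis=1)) / v_tot   # variance of <r_i(a,b)>_a over (i, b)
    return p_a, p_b, 1.0 - p_a - p_b
```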
In the case that the activity of the $i$th neuron $r_{i}(a,b,c)$ is a function
of three variables, we also subtracted the mean of $r_{i}(a,b,c)$ so that
$\langle r_{i}(a,b,c)\rangle_{a,b,c}=0$. The marginal PCs of $a$, $b$ and $c$
are respectively the PCs of $\\{\langle r_{i}(a,b,c)\rangle_{b,c}\\}_{i}$,
$\\{\langle r_{i}(a,b,c)\rangle_{a,c}\\}_{i}$ and $\\{\langle
r_{i}(a,b,c)\rangle_{a,b}\\}_{i}$. The portions of variance explained by these
variables and their mixing were defined as [11]:
$p_{a}=\text{Var}_{i,a}(\\{\langle r_{i}(a,b,c)\rangle_{b,c}\\}_{i})/v_{tot}$
$p_{b}=\text{Var}_{i,b}(\\{\langle r_{i}(a,b,c)\rangle_{a,c}\\}_{i})/v_{tot}$
$p_{c}=\text{Var}_{i,c}(\\{\langle r_{i}(a,b,c)\rangle_{a,b}\\}_{i})/v_{tot}$
$p_{a+b}=\text{Var}_{i,a,b}(\\{\langle r_{i}(a,b,c)-\langle r_{i}(a,b,c)\rangle_{b,c}-\langle r_{i}(a,b,c)\rangle_{a,c}-\langle r_{i}(a,b,c)\rangle_{a,b}\rangle_{c}\\}_{i})/v_{tot}$
$p_{b+c}=\text{Var}_{i,b,c}(\\{\langle r_{i}(a,b,c)-\langle r_{i}(a,b,c)\rangle_{b,c}-\langle r_{i}(a,b,c)\rangle_{a,c}-\langle r_{i}(a,b,c)\rangle_{a,b}\rangle_{a}\\}_{i})/v_{tot}$
$p_{a+c}=\text{Var}_{i,a,c}(\\{\langle r_{i}(a,b,c)-\langle r_{i}(a,b,c)\rangle_{b,c}-\langle r_{i}(a,b,c)\rangle_{a,c}-\langle r_{i}(a,b,c)\rangle_{a,b}\rangle_{b}\\}_{i})/v_{tot}$
$p_{a+b+c}=1-p_{a}-p_{b}-p_{c}-p_{a+b}-p_{b+c}-p_{a+c}$
where $v_{tot}=\text{Var}_{i,a,b,c}(\\{r_{i}(a,b,c)\\}_{i})$ is the total variance, and the “$+$” sign in the subscript indicates the mixing of several variables.
In Fig. 3, we used the network state trajectory after 400 ms (200 ms) of the transient period of the perception (production) epoch for this analysis.
#### S1.4.4 Decoding
We studied two types of nearest-centroid decoders [12]. Given a population
state $\mathbf{f}_{0}$, the decoded value $a_{d,1}$ read-out by Decoder 1 is
$a_{d,1}=\text{arg
min}_{a\in\mathcal{A}}(\left\|\mathbf{f}_{0}\mathbf{W}^{dec}-\mathbf{f}(a;b_{train})\mathbf{W}^{dec}\right\|),$
(S18)
where $\mathbf{f}(a;b_{train})$ is the population state as a function of
variable $a$ along an iso-$b$ line whose $b$ value is constantly $b_{train}$,
and decoding weight $\mathbf{W}^{dec}$ is the first PC of
$\mathbf{f}(a;b_{train})$. The decoded value $a_{d,2}$ read-out by Decoder 2
is
$a_{d,2}=\text{arg
min}_{a\in\mathcal{A}}(\left\|(\mathbf{f}_{0}-\langle\mathbf{f}(a;b_{test})\rangle_{a})\mathbf{W}^{dec}-(\mathbf{f}(a;b_{train})-\langle\mathbf{f}(a;b_{train})\rangle_{a})\mathbf{W}^{dec}\right\|),$
(S19)
where $\mathbf{f}(a;b_{test})$ is the iso-$b$ line that $\mathbf{f}_{0}$
belongs to, and $\langle\cdot\rangle_{a}$ means averaging over $a$. From eq. S19, the mass centers of the two iso-$b$ lines $\mathbf{f}(a;b_{train})$ and $\mathbf{f}(a;b_{test})$ are both translated to the origin before $\mathbf{f}(a;b_{train})$ and $\mathbf{f}(a;b_{test})$ are projected into the decoding space by $\mathbf{W}^{dec}$.
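A sketch of the two nearest-centroid decoders (eqs. S18, S19), with $\mathbf{W}^{dec}$ taken to be the first PC of the training iso-$b$ line; the function names are ours.

```python
import numpy as np

def first_pc(F):
    # First principal component of the rows of F (mean-centered).
    Fc = F - F.mean(axis=0, keepdims=True)
    return np.linalg.svd(Fc, full_matrices=False)[2][0]

def decoder1(f0, F_train, a_grid):
    """Eq. S18: nearest point of the training iso-b line in the W_dec projection."""
    w = first_pc(F_train)
    return a_grid[np.argmin(np.abs(f0 @ w - F_train @ w))]

def decoder2(f0, F_train, F_test, a_grid):
    """Eq. S19: as Decoder 1, after translating both iso-b lines' mass centers to the origin."""
    w = first_pc(F_train)
    f0c = f0 - F_test.mean(axis=0)
    Fc = F_train - F_train.mean(axis=0)
    return a_grid[np.argmin(np.abs(f0c @ w - Fc @ w))]
```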
#### S1.4.5 Correlation between decoding error, angle and mixed variance
In Fig. 4d, f, we computed the correlation between decoding error (DE), the
angle (AG) between the first PCs of the decoded and generalized variables, and
the mixed variance (MV) between the decoded and generalized variables. A
subtle point here is that AG and MV may also be correlated (see Fig. 4c, e for
the negative correlation between AG and MV in the production epoch of t-SR); therefore, the Pearson correlation between DE and AG may be contributed by two pathways: (1) AG influences DE directly; (2) AG influences DE indirectly through MV, due to the correlation between AG and MV. A similar situation exists for the correlation between DE and MV. To isolate the direct correlation and remove the indirect one, we iteratively applied the following operation to reduce the correlation between AG and MV: remove the single data point (i.e., the AG and MV of a single training configuration) whose removal minimizes the absolute value of the correlation between AG and MV in the remaining dataset. We found that a small correlation (absolute value below 0.05) between AG and MV could usually be obtained after removing 2 or 3 data points from the whole dataset of 30 points (Figs. S7, S8). In this way, we obtained a dataset with a small correlation between AG and MV that was, at the same time, as large as possible. Pearson correlations were then calculated using the remaining dataset to draw Figs. 4d, f, S7, S8.
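A sketch of this greedy point-removal procedure; the names are ours, and ag, mv are assumed to be NumPy arrays of equal length.

```python
import numpy as np

def decorrelate(ag, mv, threshold=0.05):
    """Greedily drop points until |corr(ag, mv)| < threshold; returns kept indices."""
    keep = list(range(len(ag)))
    while abs(np.corrcoef(ag[keep], mv[keep])[0, 1]) >= threshold and len(keep) > 3:
        # Remove the single point whose removal minimizes |corr| on the rest.
        trial = [abs(np.corrcoef(np.delete(ag[keep], j), np.delete(mv[keep], j))[0, 1])
                 for j in range(len(keep))]
        keep.pop(int(np.argmin(trial)))
    return np.array(keep)
```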
#### S1.4.6 Firing sequence and network structure
To plot Fig. 5a, b, we ordered the peak firing times of strongly active neurons (those whose peak firing rates were larger than 2) in the studied epoch, and plotted connection weight as a function of the peak-order difference between the post- and pre-synaptic neurons.
To plot Fig. 5c, d, we used a more elaborate method to illustrate the network
structure underlying t-SR and t-DM. At time $t_{0}$ and non-time information
$x_{0}$ (which may be spatial location or decision choice), we picked a set
$\mathcal{N}(t_{0},x_{0})$ of strongly active neurons whose firing rates at
$t_{0}$ and $x_{0}$ were larger than a threshold 2 (our result is insensitive
to this threshold). We then defined $T_{peak,i}(t_{0},x_{0})$ to be the peak
time of neuron $i$ near $t_{0}$ at $x_{0}$: if the activity $f_{i}(t;x_{0})$
of neuron $i$ decreased (or increased) with time at time point $t_{0}$ and
non-time information $x_{0}$, then $T_{peak,i}(t_{0},x_{0})$ was the time
point of the local maximum of $f_{i}(t;x_{0})$ before (or after), and closest to, $t_{0}$. Iterating over all possible values of $x_{0}$, we obtained
all the strongly active neurons at time $t_{0}$:
$\mathcal{N}(t_{0})=\bigcup_{x_{0}}\mathcal{N}(t_{0},x_{0})$. For neuron $i$
in $\mathcal{N}(t_{0})$, we defined its preferred non-time information $x_{prefer}$ to be the value of $x_{0}$ that maximized its peak firing rate:
$x_{prefer}=\text{arg max}_{x_{0}}f_{i}(T_{peak,i}(t_{0},x_{0}),x_{0})$. In
this way, we classified all the neurons in $\mathcal{N}(t_{0})$ according to
their non-time information preference:
$\mathcal{N}(t_{0})=\bigcup_{x_{0}}\mathcal{N}_{prefer}(t_{0},x_{0})$, with
$\mathcal{N}_{prefer}(t_{0},x_{0})$ being the set of neurons that prefer
$x_{0}$ around time $t_{0}$. We then defined $T_{peak,i}(t_{0},x_{prefer})$ to
be the big peak time of neuron $i$ at time $t_{0}$. Given a neuron $i$ and a
set $\mathcal{N}_{prefer}(t_{0},x_{0})$ of neurons ($i$ may or may not belong
to $\mathcal{N}_{prefer}(t_{0},x_{0})$), we ordered their big peak times, and
then investigated the recurrent weight from $i$ to each neuron of
$\mathcal{N}_{prefer}(t_{0},x_{0})$ (except $i$ itself if
$i\in\mathcal{N}_{prefer}(t_{0},x_{0})$). In this way, we studied the
recurrent weight $w(o_{post}-o_{pre},|x_{post}-x_{pre}|)$ as a function of the
difference $o_{post}-o_{pre}$ between the orders of the big peak time of the
post- and pre-synaptic neurons and the difference $|x_{post}-x_{pre}|$ of
their preferred non-time information. Fig. 5c,d were plotted by averaging
$w(o_{post}-o_{pre},|x_{post}-x_{pre}|)$ over $t_{0}$ and training
configurations.
## S2 The relationship between the low dimensionality of the attractor in the
delay epoch and the dominance of monotonic neurons
We denote $\mathcal{M}$ as the manifold of the population states at the end of
the delay epoch at different durations $T$ of the perception epoch (Fig. 2e).
The first principal component (PC) of $\mathcal{M}$ explained about 90% of its
variance (Fig. 2g), and the activities of most neurons changed monotonically
with $T$ in $\mathcal{M}$ (Fig. 2j). To understand the relationship between these two facts, let us consider the extreme case in which all neurons are linearly monotonic with $T$ in $\mathcal{M}$; then $\mathcal{M}$ is a line in the
population-state space that can be parameterized as
$[f_{1}(T),f_{2}(T),\cdots,f_{N}(T)]^{T}$, with $f_{i}(T)$ being the activity
of the $i$th neuron at the end of the delay epoch when the duration of the
perception epoch is $T$. In this case, PC1 of $\mathcal{M}$, which explains
100% of the variance of $\mathcal{M}$ because $\mathcal{M}$ is a line, is the
following vector with unit length:
$\pm\frac{1}{\sqrt{\sum_{i}(f_{i}(T_{max})-f_{i}(T_{min}))^{2}}}[f_{1}(T_{max})-f_{1}(T_{min}),f_{2}(T_{max})-f_{2}(T_{min}),\cdots,f_{N}(T_{max})-f_{N}(T_{min})]^{T},$
where $T_{min}=600\text{ms}$ and $T_{max}=1200\text{ms}$ are respectively the
minimal and maximal values of $T$ in our simulation, and the $\pm$ sign
indicates that the direction of PC1 is undetermined. If neuron $i$
monotonically increases (or decreases) with $T$, then
$f_{i}(T_{max})-f_{i}(T_{min})>0$ (or $f_{i}(T_{max})-f_{i}(T_{min})<0$).
Evidently, if two neurons $i$ and $j$ have the same (or different) monotonicity, then their corresponding elements in PC1 have the same (or different) signs. This is indeed what we found in our simulation (Fig. S2g, h).
## S3 The geometric meaning of mixed variance
We denote the population state to be
$\mathbf{r}=\\{r_{1},r_{2},\cdots,r_{N}\\}$, where $r_{i}$ is the firing rate
of the $i$th neuron, or in general, the activity projected on the $i$th basis
vector, say, principal component. Suppose $\mathbf{r}$ is parameterized by two
variables $a$ and $b$, and we subtract the mean value of $r_{i}$ so that
$\text{E}_{a,b}[r_{i}(a,b)]=0,$ (S20)
where $\text{E}_{a,b}[\cdot]$ means the average over $a$ and $b$.
The total variance of $\mathbf{r}$ is
$v_{tot}=\text{Var}_{i,a,b}[r_{i}(a,b)]$
$=\text{E}_{i}[\text{Var}_{a,b}[r_{i}(a,b)]]+\text{Var}_{i}[\text{E}_{a,b}[r_{i}(a,b)]]$
$=\text{E}_{i}[\text{Var}_{a,b}[r_{i}(a,b)]],$ (S21)
where $\text{Var}_{x}[\cdot]$ means the variance over variable $x$. The first
equation is the definition of the total variance, the second equation is from
the law of total variance, and the third equation is from eq. S20. Similarly,
the variance explained by $a$ is
$v_{a}=\text{Var}_{i,a}[\text{E}_{b}[r_{i}(a,b)]]=\text{E}_{i}[\text{Var}_{a}[\text{E}_{b}[r_{i}(a,b)]]],$
(S22)
and the variance explained by $b$ is
$v_{b}=\text{Var}_{i,b}[\text{E}_{a}[r_{i}(a,b)]]=\text{E}_{i}[\text{Var}_{b}[\text{E}_{a}[r_{i}(a,b)]]]$
(S23)
Now let’s study a sufficient condition so that
$v_{tot}=v_{a}+v_{b},$ (S24)
which means that the mixed variance
$v_{mix}=v_{tot}-(v_{a}+v_{b})$ (S25)
is zero.
From eqs. S21-S23, a sufficient condition to fulfill eq. S24 is
$\text{Var}_{a,b}[r_{i}(a,b)]=\text{Var}_{a}[\text{E}_{b}[r_{i}(a,b)]]+\text{Var}_{b}[\text{E}_{a}[r_{i}(a,b)]]\quad\text{for
every }i.$ (S26)
According to the law of total variance,
$\text{Var}_{a,b}[r_{i}(a,b)]=\text{Var}_{a}[\text{E}_{b}[r_{i}(a,b)]]+\text{E}_{a}[\text{Var}_{b}[r_{i}(a,b)]].$
(S27)
Therefore, to realize eq. S26, we can set
$\text{Var}_{b}[\text{E}_{a}[r_{i}(a,b)]]=\text{E}_{a}[\text{Var}_{b}[r_{i}(a,b)]]\quad\text{for
every }i.$ (S28)
In other words,
$\text{E}_{b}[(\text{E}_{a}[r_{i}(a,b)]-\text{E}_{a,b}[r_{i}(a,b)])^{2}]=\text{E}_{a}[\text{E}_{b}[(r_{i}(a,b)-\text{E}_{b}[r_{i}(a,b)])^{2}]]\quad\text{for
every }i$ (S29)
Because $\text{E}_{a,b}[r_{i}(a,b)]=0$, this equation gives
$\text{E}_{b}[(\text{E}_{a}[r_{i}(a,b)])^{2}]=\text{E}_{a}[\text{E}_{b}[(r_{i}(a,b)-\text{E}_{b}[r_{i}(a,b)])^{2}]]\quad\text{for
every }i$ (S30)
A sufficient condition to fulfill the equation above is
$r_{i}(a,b)-\text{E}_{b}[r_{i}(a,b)]=f_{i}(b)\quad\text{for every }i,$ (S31)
namely the value of $r_{i}(a,b)-\text{E}_{b}[r_{i}(a,b)]$ does not depend on
$a$. This sufficient condition can be easily proved by substituting eq. S31
into eq. S30 and using the fact that $\text{E}_{a,b}[r_{i}(a,b)]=0$. Now let’s
try to understand the meaning of eq. S31. Considering the four pairs of variables $(a_{1},b_{1})$, $(a_{2},b_{1})$, $(a_{1},b_{2})$ and $(a_{2},b_{2})$, we have
$r_{i}(a_{1},b_{1})-\text{E}_{b}[r_{i}(a_{1},b)]=f_{i}(b_{1})=r_{i}(a_{2},b_{1})-\text{E}_{b}[r_{i}(a_{2},b)]\quad\text{for every }i$ (S32)
$r_{i}(a_{1},b_{2})-\text{E}_{b}[r_{i}(a_{1},b)]=f_{i}(b_{2})=r_{i}(a_{2},b_{2})-\text{E}_{b}[r_{i}(a_{2},b)]\quad\text{for every }i$ (S33)
By subtracting eq. S32 from eq. S33, we have
$r_{i}(a_{1},b_{1})-r_{i}(a_{1},b_{2})=r_{i}(a_{2},b_{1})-r_{i}(a_{2},b_{2})\quad\text{for
every }i.$ (S34)
This means that between the two iso-$b$ lines in which the values of $b$ are
separately fixed at $b_{1}$ and $b_{2}$, the vector that connects the two
points representing $a_{1}$ is equal to the vector that connects the two
points representing $a_{2}$. In other words, these two iso-$b$ lines can be
related by translational movement. By rewriting eq. S34 as $r_{i}(a_{1},b_{1})-r_{i}(a_{2},b_{1})=r_{i}(a_{1},b_{2})-r_{i}(a_{2},b_{2})$, we see that different iso-$a$ lines are also related by translational movement.
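As a quick numerical check of this sufficient condition, the following sketch verifies that additive responses $r_{i}(a,b)=g_{i}(a)+f_{i}(b)$, whose iso-$b$ lines are translates of one another, give zero mixed variance; the construction is ours, and it reuses the mixed_variance helper sketched in Section S1.4.3 above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, na, nb = 50, 11, 13
g = rng.standard_normal((n, na, 1))   # g_i(a)
f = rng.standard_normal((n, 1, nb))   # f_i(b)
R = g + f                             # additive responses: iso-b lines are translates
p_a, p_b, p_mix = mixed_variance(R)   # helper sketched in Section S1.4.3 above
print(p_a, p_b, p_mix)                # p_mix is ~0 up to floating-point error
```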
From the discussion above, translational relation between different iso-$a$ or
iso-$b$ lines is a sufficient condition for zero mixed variance. What about necessity? In other words, if we observe close-to-zero mixed variance in simulation, what will the geometry of the iso-$a$ and iso-$b$ lines look like? We
checked this point through simulation. In Fig. S6, we show the iso-space lines
of several simulation examples, in the perception, delay and production epochs
of t-SR task. We see that in examples with small mixed variance, the iso-space
lines of different spatial information tend to be parallel and of the same
length; whereas in examples with large mixed variance, the iso-space lines may
be non-parallel or of very different lengths. Additionally, if iso-$a$ or
iso-$b$ lines are translationally related, then Decoder 2 (eq. S19) will have
perfectly zero generalization error. We found that the generalization error of
Decoder 2 is strongly positively correlated with mixed variance (Figs. 4f, S7,
S8). These results imply that at least in the context of our simulation, mixed
variance is a good index to quantify the translational relationship between
different iso-$a$ or iso-$b$ lines, or in other words, the parallelogram-
likeness of iso-$a$ and iso-$b$ grids (Fig. 3f, upper left).
Consider now the opposite extreme case $v_{mix}=v_{tot}$, which, from eq. S25, means $v_{a}=v_{b}=0$. From eqs. S22, S23, this means that
$\text{Var}_{a}[\text{E}_{b}[r_{i}(a,b)]]=\text{Var}_{b}[\text{E}_{a}[r_{i}(a,b)]]=0\quad\text{for every }i.$
In other words, the mean value of $r_{i}(a,b)$ over $b$ (i.e., $\text{E}_{b}[r_{i}(a,b)]$) does not depend on $a$, and the mean value of $r_{i}(a,b)$ over $a$ (i.e., $\text{E}_{a}[r_{i}(a,b)]$) does not depend on $b$ either. This implies that different iso-$a$ (and also iso-$b$) lines are
strongly intertwined with each other, so that they have the same mean state
value. A good example of this case is that every point in the 2-dimensional
range of variables $[a_{min},a_{max}]\otimes[b_{min},b_{max}]$ (where
$a_{min}$, $a_{max}$, $b_{min}$ and $b_{max}$ are the minimal and maximal
values of $a$ and $b$, respectively) is mapped to a random point in a state
space
$[r_{1,min},r_{1,max}]\otimes[r_{2,min},r_{2,max}]\otimes\cdots\otimes[r_{n,min},r_{n,max}]$:
in this case, every iso-$a$ or iso-$b$ dot set of states has the mean value
located at the center of the state space
$(\frac{r_{1,min}+r_{1,max}}{2},\frac{r_{2,min}+r_{2,max}}{2},\cdots,\frac{r_{n,min}+r_{n,max}}{2})$.
Figure S1: Performance of the network after training. (a-e) Interval
production (IP) task. (a) An example of the input and output of the network in
IP. Red and blue lines: two input channels. Dashed black line: target output.
Solid black line: actual output. (b) Probability distribution function (p.d.f)
of self-connections (blue) and non-diagonal connections (red) of the recurrent
network after training. (c) Three examples of the output in the production
epoch of IP, when $T=600$ ms (blue), 900 ms (red) and 1200 ms (yellow). Dashed
line: target output. Solid line: actual output. The horizontal dashed black
line indicates the threshold that the network is regarded to generate a
movement in the production epoch when the output rises across this threshold.
(d) Distribution of the scaling index of the output across training
configurations in the production epoch of IP. (e) The difference between the
produced time interval $T_{p}$ and the interval $T$ between the first two
pulses in IP as a function of $T$. Error bar means standard deviation over 16
training configurations. During training, we set $T\in[400\text{
ms},1400\text{ ms}]$. This panel shows that if after training we set $T$ to be
close to 400 ms, $T_{p}$ tends to be larger than $T$; whereas if we set $T$ to
be close to 1400 ms, $T_{p}$ tends to be smaller than $T$. Therefore, by
default, we set $T\in[600\text{ ms},1200\text{ ms}]$ for data analysis after
training to reduce the bias of $T_{p}$. (f) Two examples of interval
discrimination (IC) task. Upper: the case when the duration of the first
stimulus is shorter than that of the second stimulus. Lower: the case when the
duration of the first stimulus is longer than that of the second stimulus. Red
and yellow lines: two input channels. Dashed black and pink lines: two
channels of target output. Solid black and pink lines: two channels of actual
output. (g) An example of timed spatial reproduction (t-SR) task. Left upper:
the pulse with location information from the first input channel. Left lower:
the pulses from the second (yellow) and third (blue) input channels. Right:
actual output. (h) Two examples of timed decision making (t-DM) task. Upper:
when the input from the first channel (red) is weaker than the input from the
second channel (yellow), i.e., $c<0$. Lower: when $c>0$. (i-l) Performance of
the network during training. (i) Performance of the network during the
training of IP, quantified by the probability to successfully produce time
interval (upper) and the relative error of the produced interval (lower). Gray
lines indicate individual training configurations. Training stopped as soon as
both quantities reach the criterion (horizontal dashed lines). (j) Performance
of the network during the training of IC, quantified by the probability to
successfully output a choice (upper) and the probability of choice error
(lower). (k) Performance of the network during the training of t-SR,
quantified by the probability to successfully produce time interval (upper),
the relative error of the produced interval (middle) and the spatial error of
the output. (l) Performance of the network during the training of t-DM,
quantified by the probability to successfully produce time interval (upper),
the relative error of the produced interval (middle) and the probability of
choice error (lower). Figure S2: Interval production task. (a) Trajectory
speed with time in the perception epoch, shaded belt indicating s.e.m.
(standard error of mean). (b) Probability distribution function (p.d.f) of
scaling indexes of the activities of single neurons in the production epoch,
after counting neurons with the top 10% highest activity (upper panel), top
50% (middle panel) and all neurons (lower panel). (c) The scaling index and
explained variance of principal components (PC) in the production epoch. (d)
We calculated the scaling components in the subspace spanned by the first nine
principal components. Shown are the first (upper) and last (lower) scaling
component of the production epoch of an example training configuration. Color
of lines indicate to-be-produced interval $T$. (e) The mean activity of the
last scaling component as a function of $T$, with the activities when $T=600$
ms and $T=1200$ ms are respectively normalized to be 0 and 1. (f) Scaling
index (blue) and ratio of explained variance (orange) in the subspace spanned
by the accumulated scaling components. This panel is in the same style as Fig.
2n, except that it analyzes the perception epoch of IP task. (g,h) These two
panels explain the relationship between the low dimensionality of manifold
$\mathcal{M}$ at the end of the delay epoch and the dominance of neurons
monotonically tuned by $T$ (Section S2). (g) Histogram of the elements of PC1
of the manifold $\mathcal{M}$ at the end of the delay epoch at different $T$s
of an example training configuration. Note that the elements corresponding
with monotonically decreasing (MoD) and monotonically increasing (MoI) neurons
have different signs. (h) In 16 training configurations, for a given element
in PC1 of $\mathcal{M}$, it has over 98% probability of having the same sign as most other elements corresponding to neurons of the same type, and the opposite sign to most other elements corresponding to neurons of the opposite type. In panels c,e, error bars indicate s.e.m. over training
configurations. Figure S3: Interval comparison task. (a-c) Stimulus1 epoch.
(a) Population activity in the stimulus1 epoch in the subspace of the first
three PCs. Colors indicate the duration $T$ of the epoch. Stars and circles
respectively indicate the starting and ending points of the stimulus1 epoch.
(b) Coefficient of determination ($R^{2}$) that quantifies the overlap of the
firing profiles of individual neurons at different $T$s, in the same style as
Fig. 2d in the main text. (c) Trajectory speed as a function of time in the
stimulus1 epoch, shaded belt indicating s.e.m. (d-h) Delay epoch. (d)
Trajectory speed in the delay epoch when $T=600$ ms (blue) and 1200 ms (red),
in the same style as Fig. 2f. (e) Ratio of explained variance of the first
five PCs of manifold $\mathcal{M}$ at the end of the delay epoch, in the same
style as Fig. 2g. (f) The position of the state at the end of the delay epoch
projected in the first PC of manifold $\mathcal{M}$ as a function of $T$, in
the same style as Fig. 2h. (g) The distance between two adjacent curves in the
delay epoch as a function of time, in the same style as Fig. 2i. (h) The
portions of monotonically decreasing (MoD), monotonically increasing (MoI),
and non-monotonic (non-M) types of neurons at the end of the delay epoch, in
the same style as Fig. 2k. (i-o) Stimulus2 epoch. (i) Population activity in
the stimulus2 epoch in the subspace of the first three PCs. The meanings of
color scheme, stars and circles are the same as panel a. Triangles indicate
critical points. The duration of stimulus 2 is kept at 1200 ms. (j) Scaling
index (blue) and ratio of explained variance (orange) in the subspace spanned
by the accumulated scaling components, in the same style as Fig. 2n. In this
panel and panels k-n, only the trajectories from the beginning of stimulus 2
to the critical points are studied. (k) Trajectory speed in the subspace of
the first three scaling components, in the same style as Fig. 2o. (l)
Probability distribution of the scaling indexes of single neurons, in the same
style as Fig. S2b. (m) The scaling index and explained variance of principal
components, in the same style as Fig. S2c. (n) Mean activity of the last
scaling component as a function of $T$, in the same style as Fig. S2e. (o)
Left panel: speed of the trajectory before (blue) and after (red) the critical
point in the subspace of the first three scaling components (SC). SCs are
calculated using the trajectories before the critical points; the red line is
plotted by projecting the trajectories after the critical points into the
subspace of SCs calculated using those before critical points. Right panel:
speed of the trajectory before (blue) and after (red) the critical point in
the full population state space. Figure S4: Timed spatial reproduction task.
(a,b) Perception epoch. (a) Coefficient of determination ($R^{2}$) that
quantifies the overlap of the firing profiles of individual neurons at
different $T$s in the perception epoch, in the same style as Fig. 2d. (b)
Trajectory speed as a function of time in the perception epoch, shaded belt
indicating s.e.m. (c-f) Delay epoch. (c) Trajectory speed as a function of
time in the delay epoch when $T=600$ ms (blue) and 1200 ms (red), in the same
style as Fig. 2f. (d) The manifold $\mathcal{M}$ at the end of the delay epoch
are parameterized by both time interval $T$ between the first two pulses and
the spatial location $x$ of the first pulse. We denote $\mathcal{M}(T;x_{0})$
(or $\mathcal{M}(x;T_{0})$) to be the set of dots in $\mathcal{M}$ at specific
location $x_{0}$ (or time interval $T_{0}$). Left panel: the position of the
state at the end of the delay epoch projected to the first PC of
$\mathcal{M}(T;x_{0})$ as a function of $T$, with the position when $T=600$ ms
(or 1200 ms) normalized to be 0 (or 1), in the same style as Fig. 2h. Gray
curves: results from 16 training configurations, each at a randomly chosen
$x_{0}$. Blue curve: mean value averaging over $x_{0}$ and training
configurations. Right panel: the position of the state in the first PC of
$\mathcal{M}(x;T_{0})$. We see that in most training configurations, the
position in $\mathcal{M}(x;T_{0})$ encodes $x$ continuously and linearly, but
big jumps happen in some configurations. (e) The distance between two adjacent
curves in the delay epoch as a function of time, similar to Fig. 2i. (f) The
portions of monotonically decreasing (MoD), monotonically increasing (MoI) and
non-monotonic (non-M) types of neurons tuned by $T$ at the end of the delay
epoch, in the same style as Fig. 2k. (g-k) Production epoch. (g) Scaling index
(blue) and ratio of explained variance (orange) in the subspace spanned by the
accumulated scaling components in the production epoch, averaging over spatial
locations and training configurations, in the same style as Fig. 2n. (h)
Trajectory speed in the subspace of the first three scaling components in
production epoch, in the same style as Fig. 2o. (i) Probability distribution
of the scaling indexes of single neurons, in the same style as Fig. S2b. (j)
The scaling index and explained variance of principal components, similar to
Fig. S2c. (k) Mean activity of the last scaling component, similar to Fig.
S2e. Error bars representing s.e.m. are much smaller than the plot markers.
Figure S5: Timed decision making task. (a-c) Perception epoch. (a) Left:
Firing profiles of two example neurons in the perception epoch. Colors
indicate $c$ value, which is the half difference between the strength of the
presented stimuli. Right: Trajectories in the subspace of the first two PCs.
Stars and circles respectively indicate the starting and ending points of the
perception epoch. (b) Coefficient of determination ($R^{2}$) that quantifies
the overlap of the firing profiles of individual neurons at different $T$s in
the perception epoch, in the same style as Fig. 2d. (c) Trajectory speed as a
function of time in the perception epoch, shaded belt indicating s.e.m. (d-h)
Delay epoch. (d) Trajectories in the subspace of the first three PCs. Stars
and circles respectively indicate the starting and ending points of the delay
epoch. Blackness of circles indicates $T$ value as annotated. Curve color
indicates the $c$ value according to the color map of panel a; only the $c=-0.04,-0.01,0.01,0.04$ cases are plotted. (e) Trajectory speed as a function of
time in the delay epoch when $T=600$ ms (blue) and 1200 ms (red), in the same
style as Fig. 2f. (f) The position of the state in the first PC of
$\mathcal{M}(T;d_{0})$ as a function of $T$, with the position when $T=600$ ms
(or 1200 ms) normalized to be 0 (or 1), in the same style as Fig. 2h. Here,
$\mathcal{M}(T;d_{0})$ represents the set of dots in the manifold $\mathcal{M}$ at the end of the delay epoch at a specific decision choice $d_{0}$. (g) The
distance between two adjacent curves in the delay epoch as a function of time,
in a similar style to Fig. 2i. Left panel: the two adjacent curves have the
same $c$ value, but slightly different $T$ values. Right panel: the two
adjacent curves have the same $T$ value, but different $c$ values. In the
right panel, blue (orange) curve represents the case when their $c$ values
have the same (different) sign, so that they have the same (different)
decision choice. We see that two trajectories representing the same
(different) choice tend to get close to (far away from) each other, consistent
with the scenario in panel d. (h) The portions of monotonically decreasing
(MoD), monotonically increasing (MoI) and non-monotonic (non-M) types of
neurons tuned by $T$ at the end of the delay epoch, in the same style as Fig.
2k. (i-m) Production epoch. (i) Scaling index (blue) and ratio of explained
variance (orange) in the subspace spanned by the accumulated scaling
components, averaging over $c$ values and training configurations, in the same
style as Fig. 2n. (j) Trajectory speed in the subspace of the first three
scaling components, in the same style as Fig. 2o. (k) Probability distribution
of the scaling indexes of single neurons, in the same style as Fig. S2b. (l)
The scaling index and explained variance of principal components, in the same
style as Fig. S2c. (m) Mean activity of the last scaling component, in the
same style as Fig. S2e. (n-s) The angle between first parameter-marginalized
principal components and mixed variances in the perception (panels n,o), delay
(panels p,q) and production epochs (panels r,s). These panels are in the same
style as Fig. 3d, e, g-j, except that the non-spatial information is decision
choice. Figure S6: Examples that illustrate the geometry of the coding
combination of temporal and spatial information in t-SR. (a-e) Perception
epoch. (a) Each dot represents the angle between F-PC1 and S-PC1 as well as
their mixed variances in the perception epoch (after 400 ms of transient
period) of t-SR in a training configuration. (b-e) Iso-space lines in the
subspace spanned by F-PC1 and S-PC1, in the training configurations indicated
in panel a. Stars indicate the points after 400 ms of transient period from
the beginning of the perception epoch, and circles indicate the ending points
of the perception epoch. Redness from light to strong indicates the spatial
locations $x=0,2,4,\cdots,18$. (f-j) The same as panels a-e, except for
showing the iso-space lines in the manifold $\mathcal{M}$ at the end of the
delay epoch, in the subspace spanned by the first time-interval PC (I-PC1) and
S-PC1. Stars and circles indicate $T=600$ ms and 1200 ms cases respectively.
(k-o) The same as panels a-e, except that the iso-space lines in the
production epoch are shown. Stars indicate the points after 200 ms of
transient period from the beginning of the production epoch, and circles
indicate the ending points of the production epoch. Figure S7: Decoding
generalizability in t-SR. (a-b) Perception epoch. (a) Upper: decoding error as
a function of $|x_{train}-x_{test}|$, after Decoder 1 (solid line) or Decoder
2 (dashed line) is trained to read the time elapsed from the beginning of the
perception epoch (i.e., temporal flow) using the state trajectory at spatial
location $x_{train}$, and then tested at spatial location $x_{test}$, in the
same style as Fig. 4g. Horizontal dashed line indicates chance level, assuming the decoder works by random guessing. Lower: The correlations between
the angle (AG) between the first temporal-flow PC and the first spatial PC,
the mixed variance (MV) between temporal flow and spatial information, the
error of Decoder 1 (DE1) and the error of Decoder 2 (DE2), in the same style
as Fig. 4d, f. Note that the correlation between AG and MV is approximately zero; see Section S1.4.5 for a discussion of this point. (b) Upper: Decoding error as a
function of $|t_{train}-t_{test}|$, after Decoder 1 (solid line) or Decoder 2
(dashed line) is trained to read the spatial location at time $t_{train}$
after the beginning of the perception epoch, and then tested at time
$t_{test}$. Lower: Correlations between AG, MV, DE1 and DE2. (c-d) Delay
epoch. (c) Similar to panel a, except for decoding time interval across
spatial information using the state in manifold $\mathcal{M}$ at the end of
the delay epoch. (d) Decoding spatial information across time interval using
the states in manifold $\mathcal{M}$ at the end of the delay epoch. (e-h)
Production epoch. (e) Decoding temporal flow across spatial information in the
production epoch. The decoder was trained using
$\mathbf{r}(t;x_{train},T_{0})$ and tested using
$\mathbf{r}(t;x_{test},T_{0})$, where $\mathbf{r}(t;x_{0},T_{0})$ represents
the population activity as a function of $t$ at specific spatial information
$x_{0}$ and time interval $T_{0}$. $T_{0}=1200$ ms in this panel and panel f.
(f) Decoding space across temporal flow in the production epoch. The decoder
was trained using $\mathbf{r}(x;t_{train},T_{0})$ and tested using
$\mathbf{r}(x;t_{test},T_{0})$, where $\mathbf{r}(x;t_{0},T_{0})$ represents
the population activity as a function of spatial information $x$ at specific
time point $t_{0}$ and time interval $T_{0}$. (g) Decoding temporal flow
across time interval in the production epoch. The decoder was trained using
$\mathbf{r}(t;T_{train},x_{0})$ and tested using
$\mathbf{r}(t;T_{test},x_{0})$. The results are averaged over
$x_{0}\in[0,20]$. Upper left: The decoded value $t_{dec}$ as a function of the
time $t$ elapsed from the beginning of the production epoch, after Decoder 1
(solid line) or Decoder 2 (dashed line) was trained to read $t$ at $T=1200$
ms, and then tested at $T=600$ ms (blue), 900 ms (red) and 1200 ms (yellow).
The dashed line indicates perfect temporal scaling. Upper right: Decoding
error as a function of $T$, after a decoder is trained to read scaled temporal
flow $t/T$ at $T=1200$ ms (indicated by the vertical dashed line), and then
tested at $T=T_{1}$. Lower: correlations. (h) Decoding space across time
interval in the production epoch. The decoder was trained using
$\langle\mathbf{r}(x;T_{train},t_{0})\rangle_{t_{0}}$ and tested using
$\langle\mathbf{r}(x;T_{test},t_{0})\rangle_{t_{0}}$, where
$\langle\cdot\rangle_{t_{0}}$ means averaging over temporal flow $t_{0}$.
Figure S8: Decoding generalizability in t-DM. All panels are in the same style
as Fig. S7, except that the non-temporal information in t-DM is the decision
choice. Note that in some panels (lower panels of b, d, h), the correlation between DE2 and AG, as well as that between DE2 and MV, is absent: in these cases the decoding error is exactly zero in all training configurations, so the correlation is undefined. Figure S9:
Sequential activity and network structure. (a) The neuronal activity (with
maximum normalized to 1) in the production epoch of IP task in an example
training configuration, sorted according to peak time. (b, c) The same as
panel a, but for the stimulus1 (panel b) or stimulus2 (panel c) epoch of IC.
(d) Mean (solid line) and s.d. (shaded belt) of the recurrent weights as a
function of the peak order difference between post- and pre-synaptic neurons
in the production epoch of IP. (e, f) The same as panel d, but for the
stimulus1 (panel e) or stimulus2 (panel f) epoch of IC. (g) Recurrent weight
as a function of the difference $|x_{1}-x_{2}|$ between the preferred spatial
locations of post- and pre-synaptic neurons and their peak order difference in
the production epoch of t-SR. (h) Recurrent weight as a function of peak order
difference in the sequence of neurons with the same (blue) or different
(orange) preferred decision choices in the production epoch of t-DM. Shaded
belt indicates s.e.m. Figure S10: Coding geometry and network structure in
the absence of timing task requirement. (a,b) The angle and mixed variance
between the subspaces coding temporal flow (F), time interval (I) and spatial
information (S) in the delay epoch of t-SR, in the same style as Fig. 3i, j.
(c,d) Similar to panels a,b, except for the delay epoch of t-DM, where the non-temporal information is decision choice (D). (e) The angle between the first
temporal-flow PC and the first spatial (in t-SR, SR, COMP and CD) or decision-
choice (in t-DM, DM and cue-DM) PC. Whisker plots: center line, median; box,
25th to 75th percentiles; whiskers, $\pm 1.5\times$ the interquartile range.
In t-SR and t-DM, the perception epoch is studied; in SR, COMP and CD, the
delay epoch is studied; in DM and cue-DM, the stimulus-presentation epoch is
studied. Asterisks indicate values significantly larger than $45^{\circ}$ ($p<0.05$, t-test). The horizontal dotted line indicates $45^{\circ}$; the vertical dotted
line separates the spatial task group (t-SR, SR, COMP and CD) from the
decision-making task group (t-DM, DM and cue-DM). The two horizontal dashed lines indicate the median values of t-SR and t-DM (the only timing tasks in their respective groups). (f) Mixed ratio $\rho$ in several
tasks, where $\rho=v_{\text{min}}/\min(v_{\text{time}},v_{\text{non-time}})$; here $v_{\text{min}}$ is the mixed variance, and $v_{\text{time}}$ and $v_{\text{non-time}}$ are the variances explained by temporal and non-temporal information, respectively. (g) Recurrent weight as a function of the difference
$|x_{1}-x_{2}|$ between the preferred spatial locations of post- and pre-
synaptic neurons and their peak order difference in the delay epoch of SR. (h)
The same as panel g, except for COMP. (i) The same as panel g, except for CD.
(j) Recurrent weight as a function of peak order difference in the sequence of
neurons with the same (blue) or different (orange) preferred decision choices
during the presentation of the stimuli in cue-DM. Shaded belt indicates s.e.m.
(k) The same as panel j, except for DM. Figure S11: Dynamics of the network
when trained to produce long time intervals. (a-b) Perception epoch. (a)
Population activity in the perception epoch in the subspace of the first three
PCs. Colors indicate the time interval $T$. Stars and circles respectively
indicate the starting and ending points of the perception epoch. (b)
Coefficient of determination ($R^{2}$) that quantifies the overlap of the
firing profiles of individual neurons at different $T$s in the perception
epoch, in the same style as Fig. 2d. (c-g) Delay epoch. (c) Trajectory speed in
the delay epoch when $T=1200$ ms (blue) and 2400 ms (red), in the same style
as Fig. 2f. (d) Ratio of explained variance of the first five PCs of manifold
$\mathcal{M}$ at the end of the delay epoch, in the same style as Fig. 2g. (e)
The position of the state at the end of the delay epoch projected in the first
PC of manifold $\mathcal{M}$ as a function of $T$, in the same style as Fig.
2h. (f) The distance between two adjacent curves in the delay epoch as a
function of time, in the same style as Fig. 2i. (g) The portions of
monotonically decreasing (MoD), monotonically increasing (MoI) and non-
monotonic (non-M) types of neurons tuned by $T$ at the end of the delay epoch,
in the same style as Fig. 2k. (h-l) Production epoch. (h) Scaling index (blue)
and ratio of explained variance (orange) in the subspace spanned by the
accumulated scaling components, in the same style as Fig. 2n. (i) Trajectory
speed in the subspace of the first three scaling components, in the same style
as Fig. 2o. (j) Probability distribution of the scaling indexes of single
neurons, in the same style as Fig. S2b. (k) The scaling index and explained
variance of principal components, in the same style as Fig. S2c. (l) Mean
activity of the last scaling component as a function of $T$, in the same style
as Fig. S2e.
# Instructive artificial intelligence (AI) for human training, assistance, and
explainability
Nicholas Kantack The Johns Hopkins Applied Physics Laboratory, Laurel, MD
20723 Nina Cohen The Johns Hopkins Applied Physics Laboratory, Laurel, MD
20723 Nathan Bos The Johns Hopkins Applied Physics Laboratory, Laurel, MD
20723 Corey Lowman The Johns Hopkins Applied Physics Laboratory, Laurel, MD
20723 James Everett The Johns Hopkins Applied Physics Laboratory, Laurel, MD
20723 Timothy Endres The Johns Hopkins Applied Physics Laboratory, Laurel,
MD 20723
###### Abstract
We propose a novel approach to explainable AI (XAI) based on the concept of
“instruction” from neural networks. In this case study, we demonstrate how a
superhuman neural network might instruct human trainees as an alternative to
traditional approaches to XAI. Specifically, an AI examines human actions and
calculates variations on the human strategy that lead to better performance.
Experiments with a JHU/APL-developed AI player for the cooperative card game
Hanabi suggest this technique makes unique contributions to explainability
while improving human performance. One area of focus for Instructive AI is in
the significant discrepancies that can arise between a human’s actual strategy
and the strategy they profess to use. This inaccurate self-assessment presents
a barrier for XAI, since explanations of an AI’s strategy may not be properly
understood or implemented by human recipients. We have developed and are
testing a novel, Instructive AI approach that estimates human strategy by
observing human actions. With neural networks, this allows a direct
calculation of the changes in weights needed to improve the human strategy to
better emulate a more successful AI. Subjected to constraints (e.g. sparsity)
these weight changes can be interpreted as recommended changes to human
strategy (e.g. “value A more, and value B less”). Instruction from an AI such as this serves both to help humans perform better at tasks and to help them better understand, anticipate, and correct the actions of an AI. Results will be
presented on AI instruction’s ability to improve human decision-making and
human-AI teaming in Hanabi.
###### keywords:
XAI, explainability, interpretability, hanabi, human-machine, teaming,
instructive, instruction
## 1 INTRODUCTION
AI systems have demonstrated the ability to perform tasks remarkably well from
games [1, 2] to medical diagnosis [3]. Many of these AI systems are composed of deep neural networks from which it is very hard to extract insight into, or explanations of, the decisions the network makes. Therefore, in many cases a neural network can discover novel insights into a domain but cannot communicate those insights to the humans who developed the network. This fundamental problem has sparked the active field of research into explainable AI (XAI) [4]: AI for
which there are some measures in place to facilitate human understanding of
the AI’s decision.
In some cases, the unexplainability of AI is a barrier to its use. Such cases
are those in which humans are agents who must collaborate with the AI (which
typically requires some level of common understanding) and those in which
humans are significant stakeholders (e.g. when the AI is recommending medical
treatment). A research effort at JHU/APL entitled “Learning to Read Minds”
studied this challenge in the context of human-machine teaming in the
collaborative card game Hanabi [5]. Hanabi is a sort of cooperative solitaire
with imperfect information that requires players (human or machine) to infer the knowledge, intentions, and future actions of their teammates from their behavior [6]. Hanabi is a game for which the traditional process of
self-play optimization (i.e. training an AI through millions of games played
between copies of the same AI) does not lead to successful human-machine
performance [5], primarily because AI agents can develop obscure conventions
(e.g. repurposing an in-game clue to mean something entirely different from
its semantic meaning) that will be automatically understood by their mirror
image during self-play, but completely incomprehensible to a human. This is
why agents such as the Simplified Action Decoder [7], Rainbow [8], and
Fireflower [9] often achieve perfect scores in self-play, yet achieve low
scores when playing with human teammates [5]. Furthermore, due to the lack of effective XAI techniques, there appear to be no practical means for these complex self-play AIs to explain these obscure conventions to humans (setting aside
whether humans are even capable of implementing these conventions once
understood).
The “Learning to Read Minds” research project included a JHU/APL-internal
challenge tasking staff with developing AI agents that would excel when
playing Hanabi with human strangers. The winning JHU/APL agent not only
achieved human-play scores higher than any found in the literature to date [10, 11, 12], but it did so in a way that was constrained to allow human-readable
descriptions of strategy (Figure 1). In particular, the JHU/APL agent
demonstrated the ability to develop deep insights into human strategy through
observation of human play, to understand how the human strategy interacted
with the agent’s strategy, and to adapt to discover a play style which
complements the human strategy. This study summarizes the agent’s structure
which enabled it to successfully collaborate with human teammates, and
introduces a novel type of explanation (which we call “instruction”) to share AI
insights with human observers.
## 2 A Human-like Hanabi Agent
The JHU/APL agent (henceforth referred to as “agent”) was developed under the
philosophy that if it could play like humans, it would play well with humans.
The agent was designed to convert the input space of the game state to a
latent space of a small number of human-preferred factors (HPFs) which are
aspects of the game that humans are known to attend to when making decisions.
The agent utilizes twelve HPFs (Table 1) which were suggested by intermediate
Hanabi players. Constraining the attention of an AI in this fashion in order
to guarantee some level of interpretability after training is a known practice
[13, 14]. In the case of the JHU/APL Hanabi agent, an expected reward for each
possible action is computed based on the expected effect the action will have
on the HPFs. In particular, the expected value of an action is the inner
product of a factor vector $\vec{h}$ with a weights vector $\vec{w}$.
Therefore,
$\displaystyle y_{i}=\vec{h}_{i}^{T}\vec{w}\quad\implies\quad\vec{y}=H^{T}\vec{w}$ (1)
where $y_{i}$ is the expected reward for action $i$, and vectors $\vec{h}_{i}$
form the columns of $H$. The elements of $\vec{h}$ are the expected changes
that an action will induce on each of the HPFs (e.g. for the HPF of playing a
playable card, the corresponding element in $\vec{h}_{i}$ is the probability
that action $i$ will result in the playing of a playable card). The elements
of $\vec{w}$ are the relative values of each HPF with respect to one another.
Thus, while $H$ represents information about the game state, $\vec{w}$
represents the agent’s strategy. Altering the elements of $\vec{w}$ can
dramatically alter the play style of the agent.
On each move, the agent calculates $\vec{y}$ which stores the expected reward
for each possible action. The agent always chooses the action with the highest
expected reward among the legal actions available. Of note, this technique involves no consideration of moves beyond the current ply. Rather, the agent pursues an immediate improvement of the game state with respect to the chosen HPFs.
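To make eq. (1) and this selection rule concrete, here is a minimal sketch; the array shapes, names, and the `legal` mask are illustrative assumptions, not details of the actual agent code:

```python
import numpy as np

def choose_action(H, w, legal):
    """Return the index of the legal action with the highest expected reward.

    H     : (12, 20) array whose column i holds the expected HPF changes
            induced by action i (the vector h_i).
    w     : (12,) strategy vector of HPF weights.
    legal : (20,) boolean mask of the currently legal actions.
    """
    y = H.T @ w                       # expected reward per action, eq. (1)
    y = np.where(legal, y, -np.inf)   # rule out illegal actions
    return int(np.argmax(y))
```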
Table 1: The twelve factors, along with their values for three different JHU/APL agent play styles (human-like, human-complementary, and self-play). To help interpret some of the factors, consider the following definitions. endangered card \- a card for which no copy has been played, yet there is only one copy of this card remaining in play. unneeded card \- a card which cannot be played in the future, for any reason.

Factor | human-like | human-compl. | self-play
---|---|---|---
Playing a playable card | 1 | $\infty$ | 11
Playing an unplayable card (fewer than 2 strikes) | -1 | -1 | -1
Playing an unplayable card (2 strikes) | 3 | $\infty$ | 3
Other player playing a playable card | 1.5 | 10 | 2
Other player playing an unplayable card | 0 | 0 | 1
Discarding a non-endangered card | 0.1 | 0.55 | 0.8
Discarding an unneeded card | 0.25 | 1 | 0
Playing a singled out card | 3 | 1.5 | 5
Giving a clue that singles out a playable card | 3 | 3 | 2
Giving a clue that singles out a non-playable card | 0 | -5 | 4
Discarding a singled out card | -0.5 | -2 | -3
Added value to any clue per info token held | 0.5 | 0.1 | 0
Figure 1: The score distributions are shown for Simplified Action Decoder (a
special off-belief version made for the competition), Rainbow, Fireflower, and
the JHU/APL agent. These scores were obtained by pairing an agent with a human
teammate (drawn from a pool of 21).
### 2.1 Modeling Human Decision Making
While human-like play was the preliminary goal during the agent’s development,
the first training efforts were aimed at generating decent self-play scores.
For this phase, the training of the agent was separated into epochs. Each
epoch consists of a four-dimensional, full factorial design experiment on a
subset of four elements from $\vec{w}$. Each element under test was given
three test values (a low, medium, and high) around the neighborhood of where
the optimal value was expected to be. Therefore, each epoch tested $3^{4}=81$
unique $\vec{w}$ vectors. For each $\vec{w}$ vector tested, 200 games were
played between identical copies of the agent. The elements of $\vec{w}$ under test were not altered until an epoch occurred for which the highest score was achieved by assigning the medium value to each element under test (i.e. increasing or decreasing any element led to poorer performance). The progression of self-play scores during this development phase is shown in Figure 2.
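A minimal sketch of one such epoch follows; `self_play_score` is a hypothetical stand-in for the 200-game self-play evaluation, and all names and signatures here are ours rather than the original code's:

```python
import itertools
import numpy as np

def factorial_epoch(w, idx, levels, self_play_score, games=200):
    """Run one training epoch: test every combination of (low, medium, high)
    values for the four weight elements listed in `idx` and keep the best.

    w               : (12,) current strategy vector.
    idx             : the four indices of w under test.
    levels          : dict mapping index -> (low, medium, high) test values.
    self_play_score : callable evaluating a weight vector over `games` games.
    """
    best_w, best_score = w, -np.inf
    for combo in itertools.product(*(levels[i] for i in idx)):  # 3**4 = 81
        trial = w.copy()
        for i, value in zip(idx, combo):
            trial[i] = value
        score = self_play_score(trial, games)
        if score > best_score:
            best_w, best_score = trial.copy(), score
    return best_w, best_score
```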
Once the agent was optimized for self-play, the next objective was to find a
strategy vector $\vec{w}$ that would lead to play that was as human-like as
possible. To facilitate this exploration, a dataset of 376 decisions was
collected by examining the play of one of the authors. With a dataset of decisions made by a single human, we aimed to determine whether a strategy vector $\vec{w}$ could be fitted to a particular human’s play style, rather than to a $\vec{w}$ representing some ambiguous (perhaps poor) play style averaged across humans with potentially dissimilar styles. The
progression of increasing humanness is displayed in Figure 3. The highest
humanness fraction of any agent was 64.2%, achieved by the human-like agent
(that is, the agent was able to independently agree with the human decision in
64.2% of the game states examined).
Once the human-like version of the agent had been optimized for fitting the
dataset of human decisions, a final training effort was made by pairing a
training version of the agent with the human-like version. In this fashion,
the training process was intended to approximate playing with a human
teammate. As before, full factorial design experiments were run altering four elements of $\vec{w}$ per epoch, each across three levels. At the end of the training process, the “human-complementary” version of the agent was created.
Cross-play results (Figure 4) illustrate the performance of the different combinations of agents developed.
### 2.2 An Important Note on Human Perception of Strategy
It is worth noting that the JHU/APL agent needed to make significant changes
to its initial $\vec{w}$ vector in order to accurately predict human decision
making in the game, despite the fact that the initial $\vec{w}$ given to the
agent was intended to accurately describe human decision making. For this
reason, it became clear that human players could not accurately depict the
weights they attributed to HPFs. This has profound implications for XAI. A
common XAI approach would have involved taking the values from the self-play
strategy vector in Table 1 and describing these to a human player (e.g. “You
should value discarding a non-endangered card at 0.8”). However, if a human
already egregiously misunderstands what value they actually attribute to these
HPFs, it is unlikely that the human will be able to act on this insight.
Rather, it would perhaps be more suitable for us to look at the difference in
weights between the human-like agent and the self-play agent, since doing so
would allow us to specify corrections a human should make to their strategy
(e.g. “you should value discarding a non-endangered card more”). These
corrections are human interpretable regardless of whether the human accurately
understands their current strategy. This is the principal idea behind AI
instruction.
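As a toy illustration of this idea (the wording template and the choice of three discard-related factors are ours), the snippet below converts weight differences between the human-like and self-play columns of Table 1 into instructions of this form:

```python
# Discard-related weights from Table 1: (human-like, self-play)
factors = {
    "discarding a non-endangered card": (0.1, 0.8),
    "discarding an unneeded card":      (0.25, 0.0),
    "discarding a singled out card":    (-0.5, -3.0),
}

for name, (w_human, w_self) in factors.items():
    dw = w_self - w_human  # element of delta-w between the two strategies
    direction = "more" if dw > 0 else "less"
    print(f"Value {name} {direction} (by {abs(dw):.2f}).")
```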
Figure 2: Self play scores are shown for different versions of the agent
during the initial development process. Each transition from one version to
the next was accompanied by changes to the weights vector $\vec{w}$ or the
addition of new elements to the weights vector. Notably, the human-like agent
had significantly poorer self-play scores, consistent with the fact that the
human-like agent was optimized to agree with a database of human decisions.
Figure 3: The humanness of the major versions of the agent is shown. Humanness is
defined as the fraction of human decisions with which the agent agrees when
analyzing a database of 376 game decisions made by one of the authors. Of the
agents shown, only the “human-like” agent was explicitly optimized for maximal
humanness. The considerable humanness of the other models indicates how
optimizing an HPF-focused agent for self-play can lead to considerably human-like performance. Figure 4: Average scores are shown for different pairings of
the agents (and humans). The human-human score is included for reference, but
is an average across a small number of games played within the development
team. All other scores are averages across at least 10 games. Error margins
for each bin are less than 1 point. It is clear that the human-complementary
agent achieves the highest score of any agent paired with the human-like agent
(since these are the exact conditions under which the human-complementary
agent was optimized). It is notable that the average human + self-play and
human + human-compl. scores are identical in this plot (where a difference is
expected), but it is also worth noting that a very small number (two) of very
experienced humans were represented in these data. Therefore, while these
scores are useful for comparing how non-human agents perform in different
pairings, generalizations about human play from this figure should be made
with great caution.
## 3 Theory of AI Instruction
AI instruction is defined in the context of explaining differences in strategy in the form of changes to weights. Therefore, it is relevant to consider a
difference in outputs (say, $\vec{y}_{1}$ from $\vec{w}_{1}$ and $\vec{y}_{2}$
from $\vec{w}_{2}$).
$\displaystyle\delta\vec{y}=\vec{y}_{1}-\vec{y}_{2}=H^{T}\vec{w}_{1}-H^{T}\vec{w}_{2}$
(2)
$\displaystyle\delta\vec{y}=H^{T}\left(\vec{w}_{1}-\vec{w}_{2}\right)=H^{T}\delta\vec{w}$
(3)
### 3.1 A note on Strategy vs. Perception
It is possible to imagine a difference in performance, $\delta\vec{y}$, arising not from a difference in strategy ($\delta\vec{w}$) but rather from a difference in perception of the game state, $\delta H$. This is particularly plausible if the elements of $H$ concern complex features of the game state, such as the probabilities of certain outcomes (as they do in Hanabi). In this case,
$\displaystyle\delta\vec{y}=\delta H^{T}\vec{w}$ (4)
In fact, there is ambiguity between this and (3), since $HH^{T}$ is a
$12\times 12$ matrix that will tend to be full rank, and thus the relation
$\displaystyle\delta\vec{w}=\left(HH^{T}\right)^{-1}H\delta H^{T}\vec{w}$ (5)
indicates that any strategic difference $\delta\vec{w}$ could be interpreted
instead as an observation error $\delta H$. This illustrates yet another
reason for providing instruction in the form of $\delta\vec{w}$ rather than an
explanation in the form of $\vec{w}_{2}$. By the very nature of this
equivalence relation, a recommended change in strategy, $\delta\vec{w}$ can
compensate for both misperception and strategic deficiency (and mixtures
thereof).
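This equivalence is easy to verify numerically. In the sketch below, the rank-one construction of $\delta H$ is our illustration (it is not taken from the text), chosen so that a given strategic difference is reproduced exactly by a perception error:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(12, 20))      # a synthetic game-state matrix
w = rng.normal(size=12)            # a strategy vector
dw = 0.1 * rng.normal(size=12)     # some strategic difference

# A perception error dH with dH^T w = H^T dw, so that both produce the
# same output difference delta-y; one explicit choice is rank one:
dH = np.outer(w, H.T @ dw) / (w @ w)

print(np.allclose(dH.T @ w, H.T @ dw))  # True: the two are indistinguishable
```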
### 3.2 Non-uniqueness of and Constraints on $\delta\vec{w}$
When the $20\times 12$ matrix $H^{T}$ is not of full column rank (in practice, several of the twelve factors can be inactive or linearly dependent across the available actions), $H^{T}$ has a non-empty null space. Therefore, the condition
$\displaystyle\delta\vec{y}=H^{T}\left(\delta\vec{w}+\vec{n}\right)$ (6)
is satisfied for any $\vec{n}$ in the null space of $H^{T}$, and so any
$\delta\vec{w}+\vec{n}$ is a valid description of a strategic difference
needed to elicit the decision difference $\delta\vec{y}$. This non-uniqueness
of $\delta\vec{w}$ is advantageous, because it allows multiple possible $\delta\vec{w}$ to be compared for fitness according to human-friendly constraints (e.g. norm minimality, sparsity, etc.).
### 3.3 Generating AI Instruction
Suppose that a human subject is presented with $g$ game states
$H_{1},...,H_{g}$, and that these game state matrices are stacked into a
$12\times 20\times g$ tensor $\mathcal{H}$. As before, a strategy $\vec{w}$
can be combined with a game state to yield a vector of outputs, $\vec{y}$.
$\displaystyle\vec{y}_{k}=\mathcal{H}(:,:,k)^{T}\vec{w}$ (7)
In the following formalism, a subscript $h$ corresponds to a human, while a
subscript $i$ corresponds to an ideal (typically a successful AI). Let us assume that the index of the maximum element of $\vec{y}_{h}$ indicates the decision that will be taken (per strategy $\vec{w}_{h}$) for the game state used. In this case, two different $\vec{y}$ vectors may still specify the same action if their maximal elements occupy the same index. If not, then it is worth describing the nearest (minimizing $|\vec{y}_{i}-\vec{z}|$) vector $\vec{z}$ such that $\vec{y}_{h}$ and $\vec{z}$ have their maximal elements in the same index position (a position different from that of the maximal element of $\vec{y}_{i}$).
Suppose $y_{h}(t)=\max(\vec{y}_{h})$, but no other information is known about $\vec{y}_{h}$. Then let $m$ be the average of all the terms of $\vec{y}_{i}$ that are greater than $y_{i}(t)$. Then $\vec{z}$ is defined as
$\displaystyle z(j)=\left\\{\begin{matrix}y_{i}(j)&y_{i}(j)<m\ \&\ j\neq t\\\ m+\varepsilon&j=t\\\ m&y_{i}(j)\geq m\ \&\ j\neq t\\\ \end{matrix}\right.$ (8)
where $\varepsilon$ is some small, positive tie-breaking factor. If the
vectors $\vec{z}$ (of which there are $g$) are made the columns of a $20\times
g$ matrix $Z$, and the output vectors $y_{i}$ from strategy $\vec{w}_{i}$ are
made columns of a $20\times g$ matrix $Y_{i}$, then we can relate $Z$ and the
game state tensor $\mathcal{H}$ as follows.
$\displaystyle\mathcal{H}(:,:,k)^{T}\delta\vec{w}=Z(:,k)-Y_{i}(:,k)\ \forall\
k\in[1,g]$ (9)
where $\delta\vec{w}$ is the strategy change needed so that $\vec{w}_{i}+\delta\vec{w}$ and $\vec{w}_{h}$ arrive at the same decision for every game state in $\mathcal{H}$. To solve for $\delta\vec{w}$, we can utilize the following matrix unfolding:
$\displaystyle\left[\begin{matrix}\mathcal{H}(:,:,1)^{T}\\\ \mathcal{H}(:,:,2)^{T}\\\ \vdots\\\ \mathcal{H}(:,:,g)^{T}\end{matrix}\right]\delta\vec{w}=\widetilde{H}\delta\vec{w}=\text{vec}\left(Z-Y_{i}\right)$ (10)
This is an overdetermined linear system, so a least-squares solution can be taken for $\delta\vec{w}$. Then, $\delta\vec{w}$ is the norm-minimal change to apply to $\vec{w}_{i}$ to better concur with strategy $\vec{w}_{h}$ in each of the $g$ game states. If $\vec{w}_{h}$ is the strategy of a human, and $\vec{w}_{i}$ is an ideal, $\delta\vec{w}$ is the change to the ideal needed to concur with the human. The negation ($-\delta\vec{w}$) has elements which comprise the instructions that should be given to the human. In essence, the instructed changes are the opposite of those needed for the ideal to be altered to make the same decisions the human made.
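A compact sketch of this construction (array shapes and function names are our assumptions) builds the target columns of eq. (8), stacks the system of eq. (10), and solves it in the least-squares sense:

```python
import numpy as np

def instruction_delta(Hs, w_ideal, human_choices, eps=0.05):
    """Least-squares delta-w of eqs. (8)-(10).

    Hs            : (g, 12, 20) stack of game-state matrices.
    w_ideal       : (12,) ideal strategy vector.
    human_choices : length-g indices t of the human's chosen actions.
    """
    rows, rhs = [], []
    for H, t in zip(Hs, human_choices):
        y = H.T @ w_ideal                   # ideal outputs, eq. (7)
        above = y[y > y[t]]                 # entries beating the human choice
        m = above.mean() if above.size else y[t]
        z = np.minimum(y, m)                # clip entries exceeding m, eq. (8)
        z[t] = m + eps                      # the human's choice becomes maximal
        rows.append(H.T)
        rhs.append(z - y)                   # a column of Z - Y_i, eq. (9)
    A = np.vstack(rows)                     # the unfolded matrix of eq. (10)
    b = np.concatenate(rhs)
    dw, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dw                               # -dw is the instruction to give
```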
### 3.4 Properties of the Generated $\delta\vec{w}$
$\delta\vec{w}$ is not guaranteed to produce consensus between the starting
strategy and the ideal when adopted. Formally, it does not always hold that
$\displaystyle\mathcal{H}(:,:,k)^{T}(\vec{w}_{i}+\delta\vec{w})=\mathcal{H}(:,:,k)^{T}\vec{w}_{h}$ (11)
However, it is possible (and desired) for this relation to circumstantially
hold for many $k$ values. Increasing $\varepsilon$ in the above formulation will tend to increase the number of game states in which consensus is built, but at the expense of a larger norm $\|\delta\vec{w}\|$ (i.e. bigger recommended changes to $\vec{w}_{h}$). Even so, total consensus between the ideal and
modified strategies is rarely achieved because the model for decisions
generated from game states given by (1) may not accurately describe all
decisions ($\vec{y}$) made by a human (e.g. due to momentary misperception,
distraction, and attention to factors not captured in $H$). The quantity
$\lambda$ defined as
$\displaystyle\lambda=\min_{\vec{w}}\frac{1}{g}\sum_{k=1}^{g}\left|\vec{y}_{h,k}-\mathcal{H}(:,:,k)^{T}\vec{w}\right|$ (12)
may be introduced as a figure of merit for the list of factors which define
the strategy vector $\vec{w}$. Furthermore, $\lambda$ can be used to measure
the utility of elements of $\vec{w}$ by examining the change in $\lambda$
induced by the removal or inclusion of factors. Ideally, the only factors kept would be those whose inclusion results in a significant decrease in $\lambda$.
Similarly, one can define a figure of merit for the generated instruction. Define $f(\vec{a},j)$ as
$\displaystyle f(\vec{a},j)=\left\\{\begin{matrix}1&a(j)=\text{max}(\vec{a})\\\ 0&\text{otherwise}\end{matrix}\right.$ (13)
If $n_{k}$ is the index of the decision the human instructee made for game
state $k$, then it is possible to evaluate the quality $q(\delta\vec{w})$ of
instructions as
$\displaystyle
q(\delta\vec{w})=\frac{1}{g}\sum_{k=1}^{g}f(\mathcal{H}(:,:,k)^{T}(\vec{w}_{i}+\delta\vec{w}),n_{k})$
(14)
$q(\delta\vec{w})$ falls in $[0,1]$ and can be interpreted as the fraction of
human decisions that can be understood as a variation ($\delta\vec{w}$) on an
ideal ($\vec{w}_{i}$).
### 3.5 Full AI Instruction Algorithm with Quality Monitoring
We recommend the algorithm in Figure 5 for generating AI instruction. The
algorithm has two preparation steps. The first is to train up an ideal
strategy ($\vec{w}_{i}$), and the second is to aggregate a dataset of human
decisions $\vec{n}$ paired with the game states in which they were made (slabs
of the $\mathcal{H}$ tensor). While it is possible to terminate the algorithm
after the step that assigns $\delta\vec{w}$, this algorithm includes a post-
processing component which seeks to zero out as many elements of
$\delta\vec{w}$ as possible while maintaining some preset explanatory fidelity
$\alpha$ to the human decision set. The purpose of this post-processing is to generate instruction which concerns changes to as few values as possible.
This is motivated by the assumption that low dimensional instructions are
easier for humans to understand (i.e. require focusing on fewer aspects of the
game in subsequent play).
Instructive AI Algorithm
$\alpha\leftarrow\text{Choose accuracy threshold in }[0,1]$
$\vec{w}_{i}\leftarrow\text{Train}(\hat{\mathcal{H}},C)$ Learn ideal weights on a dataset $\hat{\mathcal{H}}$
$\mathcal{H},\vec{n}\leftarrow\text{Observe }g\text{ human decisions}$
for $k\in[1,g]$ do
$Y_{i}(:,k)\leftarrow\mathcal{H}(:,:,k)^{T}\vec{w}_{i}$
$Z(:,k)\leftarrow z(Y_{i}(:,k),n_{k})$ per (8)
end for
$\delta\vec{w}\leftarrow\text{Solve }\widetilde{H}\delta\vec{w}=\text{vec}\left(Z-Y_{i}\right)$
$q\leftarrow\frac{1}{g}\sum_{k=1}^{g}f(\mathcal{H}(:,:,k)^{T}(\vec{w}_{i}+\delta\vec{w}),n_{k})$
while $q>\alpha$ do
$\delta\vec{w}\leftarrow$ zero out the element with the smallest impact on $q$
$q\leftarrow\frac{1}{g}\sum_{k=1}^{g}f(\mathcal{H}(:,:,k)^{T}(\vec{w}_{i}+\delta\vec{w}),n_{k})$
end while
Give $-\delta\vec{w}$ to the human
Figure 5: This algorithm generates sparse AI instruction tailored to a set of
human decisions.
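A sketch of the quality measure of eq. (14) and the greedy sparsification loop of Figure 5 might read as follows; it assumes the `instruction_delta` helper sketched in Section 3.3, and the explicit greedy search is our interpretation of “zero out the element with the smallest impact on $q$”:

```python
import numpy as np

def quality(Hs, w, choices):
    """Fraction of human decisions reproduced by strategy w, eq. (14)."""
    hits = sum(int(np.argmax(H.T @ w) == n) for H, n in zip(Hs, choices))
    return hits / len(choices)

def sparsify(dw, w_ideal, Hs, choices, alpha):
    """Greedily zero out elements of dw while the quality q stays above
    alpha, mirroring the post-processing loop of Figure 5."""
    dw = dw.copy()
    while quality(Hs, w_ideal + dw, choices) > alpha and np.any(dw):
        # find the nonzero element whose removal hurts quality the least
        best_j, best_q = None, -1.0
        for j in np.flatnonzero(dw):
            trial = dw.copy()
            trial[j] = 0.0
            q_j = quality(Hs, w_ideal + trial, choices)
            if q_j > best_q:
                best_j, best_q = j, q_j
        dw[best_j] = 0.0
    return dw  # give -dw to the human
```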
## 4 Experimental Results
During the “Learning to Read Minds” challenge, a database of 376 human
decisions in Hanabi games was generated. Preliminary results are shown based
on analysis of this dataset. To illustrate the instruction generation process,
a trial agent was created by copying the self-play agent. Because the self-play agent already agrees with human decisions at a high rate, the weight for the non-endangered discard was inflated (to a value of 10). Then, in an iterative process, instructions were generated (on how the trial agent could better emulate human decision making based on the dataset), the trial agent applied the instructed changes to its weights, and a new set of instructions was generated. This process is shown to lead to asymptotic improvement in the agreement between the trial agent and the human dataset (Figure 6).
High agreement ($68\%$) was achievable after 40 instruction-based weight updates. Furthermore, the spurious discard weight was shown to be brought into closer agreement with the target strategy. Importantly, this weight (and others that initially matched the target) drifted to new equilibrium values. This serves as an empirical demonstration of the non-uniqueness of strategies described in the previous section. However, it is important to note that the generation of a norm-minimally different $Z$ matrix may not provide a
linear system in (10) that admits a solution that produces high prediction
accuracy when observing the target strategy. This is because the matrix $Z$
may be a poor estimation for the target strategy’s output vectors, a
circumstance that is increasingly likely when the instructee strategy differs
significantly from the target strategy.
These results (Figure 6) indicate that AI instruction can indeed provide
stepwise improvements to strategy which, taken iteratively, can lead to
significant improvement in the agreement between the instructee strategy and
the ideal. In this way, instructions serve as something of a proxy gradient of
a cost function, namely, agreement with the ideal. Utilizing the instructions
as a gradient for agent training was shown in this experiment to lead to
better humanness scores ($68\%$ vs. $64\%$) in a much shorter computation time
(minutes vs. hours) compared to the full factorial approach. Additionally,
these instructions provide a novel approach to portraying AI insight in a way
that is understandable to human observers. Specifically, these instructions
can be phrased as corrections to the weights attributed to human-preferred
factors, allowing for AI systems to develop an understanding of human decision
making and to share those insights through tailored instructions.
Figure 6: The effect of following successive batches of AI instructions is shown. For this experiment, a trial version of the self-play agent was created
that had a non-endangered discard weight of 10 rather than the normal 0.8.
Instructions are generated for how this agent can better emulate the self-play
agent. (a) The agreement between the agents is shown as a function of how many
batches of instructions were generated. After each batch, the trial agent
applies the correction to its weights and reexamines agreement. (b) The trial
agent’s inflated discard weight decreases with time. Notably, it does not
appear to stabilize at the same value as the ideal (0.8). (c) The weight for
discarding unneeded cards is shown over the experiment. Note that while the
two agents initially had the same value (0), this weight drifts to a new value
as a result of the instructions, indicating that a new set of weights is being
discovered that agrees with the self-play agent over the decisions studied.
(d) Another weight is shown (which does not pertain to discards) to further
illustrate how instruction may not encourage the same, unique weights as the
ideal.
## 5 Conclusion
Leveraging insights obtained from the development of a highly successful,
artificially intelligent human teammate for Hanabi, we propose a technique of
instructive AI to better enable humans to obtain insight from complicated AI
systems. There are assumptions in this approach that may not hold true in certain contexts. For instance, this technique assumes that the requisite $\delta\vec{w}$ for consensus building is small. If not, then implementing a $\delta\vec{w}$ may be just as confusing for humans as being told $\vec{w}_{i}$, or perhaps even more so. More fundamentally, the model given by
(1) may not accurately capture a majority of a human’s decisions, and
$\vec{w}$ is always at risk of missing elements that are crucial to a human’s
decision making. In general, it is challenging to produce a complete set of
values relevant to human decision making. For the purposes of this experiment,
the list of values is produced from human introspection and trial and error.
Techniques to organically learn the needed values may be possible and highly
valuable to the task of generating AI instruction, but are beyond the scope of
the experiments described above.
Many of the challenges described above apply in similar form to other methods
of XAI. However, instructive AI shows promise to circumvent some of the
greatest challenges of XAI and provide a novel framework in which further
research might push the frontier on extracting human-useful insight from
complex AI systems.
## References
* [1] Silver, D., Schrittwieser, J., Simonyan, K., et al., “Mastering the game of go without human knowledge,” Nature 550, 354–359 (2017).
* [2] Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., and Hassabis, D., “Mastering chess and shogi by self-play with a general reinforcement learning algorithm,” (2017).
* [3] McKinney, S. M., Sieniek, M., et al., “International evaluation of an ai system for breast cancer screening,” Nature 577, 89–94 (2020).
* [4] Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., and Giannotti, F., “A survey of methods for explaining black box models,” (2018).
* [5] Anonymous, “Towards human-compatible ai teammates,” (2022).
* [6] Bard, N., Foerster, J. N., Chandar, S., Burch, N., Lanctot, M., Song, H. F., Parisotto, E., Dumoulin, V., Moitra, S., Hughes, E., and et al., “The hanabi challenge: A new frontier for ai research,” Artificial Intelligence 280, 103216 (Mar 2020).
* [7] Hu, H. and Foerster, J. N., “Simplified action decoder for deep multi-agent reinforcement learning,” (2021).
* [8] Hessel, M., Modayil, J., van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and Silver, D., “Rainbow: Combining improvements in deep reinforcement learning,” (2017).
* [9] “Fireflower.” https://github.com/lightvector/fireflower (2018).
* [10] Hu, H., Lerer, A., Peysakhovich, A., and Foerster, J., “‘Other-play’ for zero-shot coordination,” (2021).
* [11] Eger, M., Martens, C., and Cordoba, M. A., “An intentional ai for hanabi,” in [2017 IEEE Conference on Computational Intelligence and Games (CIG) ], 68–75 (2017).
* [12] Siu, H. C., Pena, J. D., Chen, E., Zhou, Y., Lopez, V. J., Palko, K., Chang, K. C., and Allen, R. E., “Evaluation of human-ai teams for learned and rule-based agents in hanabi,” (2021).
* [13] Yang, Z., Zhang, A., and Sudjianto, A., “Enhancing explainability of neural networks through architecture constraints,” (2019).
* [14] Alvarez-Melis, D. and Jaakkola, T. S., “Towards robust interpretability with self-explaining neural networks,” (2018).
# Vertex Fitting In Low-Material Budget Pixel Detectors
Andrea Loreti Department of Physics, University of Liverpool, The Oliver
Lodge Laboratory, Liverpool L69 7ZE, United Kingdom
###### Abstract
This paper provides a detailed description of a vertex fitting algorithm
suitable for precision measurements in low-energy particle physics
experiments. An accurate reconstruction of low-momentum trajectories can be accomplished by reducing the material budget of the detector to a few per mille of the radiation length. This limits the multiple scattering undergone by particles inside the detector and improves the vertex fitting accuracy.
However, for sufficiently light detection systems, additional sources of
errors, such as the intrinsic spatial resolution of the sensors, must be
considered in the reconstruction of the vertex parameters. The algorithm
developed in this work provides a complete treatment of multiple scattering
and spatial resolution in the context of vertex fitting for light pixel
detectors. In addition to this, a study of the vertex reconstruction in the
low-material budget pixel detector of the Mu3e experiment is presented.
## I Introduction
With the increase of the instantaneous and integrated beam luminosities, the
requirements of particle physics experiments for precise tracking and
vertexing detectors, with high radiation tolerance, have become more stringent, e.g., [SNOEYS2023168678, Hartmut2018, CARNESECCHI2019608, MOSER201685]. In this regard, silicon pixel sensors can provide the high granularity, low-material budget structures and radiation hardness that most experiments need, e.g., [MOSER201685, SPANNAGEL2019612, Abelev_2014, ARNDT2021165679]. It is important, however, that precise detection systems are developed in conjunction with equally performing analysis methods for the reconstruction of particle trajectories and decay vertices, e.g., [FRUHWIRTH1987444, BILLOIR1992139, Waltenberger2007, RevModPhys.82.1419]. To this aim, several fitting algorithms have been designed and optimized over the years to deal with hit selection, pattern recognition, error calculations and high track multiplicity (see for instance [Mankel_2004, RevModPhys.82.1419] and references therein). The practical implementation of these methods must be
tailored around the actual detector and magnetic field configuration of the experiments. This makes tracking and vertexing a topic in continuous evolution, adapting itself to the new challenges set by upcoming experiments.
This study addresses the problem of vertex fitting in the low-material budget
pixel detector of the Mu3e experiment [ARNDT2021165679]. As explained in
section II, the detector design has been optimized to minimize Multiple
Coulomb Scattering (MS) effects on particle trajectories and signal
kinematics. However, for light detectors such as Mu3e, the intrinsic pixel
resolution becomes another limiting factor in the vertex reconstruction which
cannot be ignored. Under these circumstances, the vertex fitting should
account for MS and pixel resolution in the error calculations as well as for
any energy loss in the detector, which may hamper track momentum reconstruction. The present algorithm deals with all these sources of errors; it is illustrated in section III, whilst in section IV a comparative study is made among different inclusive scenarios: (A) MS only; (B) MS and pixel resolution together; (C) all sources of errors (MS, pixel resolution and energy losses) included.
## II The low-material budget Mu3e pixel detector
The Mu3e experiment aims to find or exclude the rare Charged Lepton Flavour
(CLF) violating muon decay:
$\mu^{+}\rightarrow e^{+}e^{-}e^{+}$ (1)
at Branching Ratios (BR) $>10^{-16}$ [ARNDT2021165679]. This threshold is $4$ orders of magnitude smaller than previous experimental upper limits ($10^{-12}$) [BELLGARDT19881] and $38$ orders of magnitude larger than theoretical Standard Model (SM) calculations (BR $=10^{-54}$), e.g., [MARCIANO1977303, RevModPhys.73.151]. However, new theoretical models predict the existence of extra degrees of freedom beyond the SM which may bring CLF violation within the reach of near-future experiments such as Mu3e, e.g., [KAKIZAKI2003210, DEGOUVEA201375]. Consequently, an observation of $\mu^{+}\rightarrow e^{+}e^{-}e^{+}$ at the single-event sensitivities targeted by the Mu3e experiment would imply new physics.
The process in (1) yields a relatively simple decay topology, with the 3 final-state leptons produced at the same interaction vertex and with momentum vectors $\vec{p}$ determined by energy and momentum conservation for muons decaying at rest. The main background processes in Mu3e measurements are the muon internal conversion $\mu^{+}\rightarrow e^{+}e^{-}e^{+}+\nu_{e}+\bar{\nu}_{\mu}$ (BR $\approx 10^{-5}$) and the combination of one electron and 2 positrons from independent sources, e.g., one Bhabha electron plus two Michel positrons $\mu^{+}\rightarrow 2\times(e^{+}+\nu_{e}+\bar{\nu}_{\mu})+e^{-}$ [BELLGARDT19881].
Figure 1: Scheme of the central barrel of the Mu3e pixel detector, side and x-y views [ARNDT2021165679].
The single-event sensitivity of $2\cdot 10^{-15}$ targeted during phase I of the experiment can be achieved with an energy-momentum resolution of $\lesssim 1$ MeV/c and by using precise vertexing and timing systems [ARNDT2021165679].
The energy spectrum of the decay particles in the Mu3e experiment extends up
to $m_{\mu}/2$, where $m_{\mu}$ is the muon mass. In this low-energy region,
MS poses a serious challenge to the reconstruction of particle trajectories and signal kinematics. To minimize MS, Mu3e uses a low-material budget pixel detector ($0.1\%$ of the radiation length, $X_{0}$, per layer). This is made of high-voltage monolithic active pixel sensors [PERIC2007876] that can be thinned down to $50$ µm, or $0.05\%\,X_{0}$. The rest of the material budget of the detector is used in the flex-tape that provides mechanical support and electrical routing to the sensors.
Figure 1 shows the schematic of the foreseen Mu3e tracker central station, which is important for vertex fitting and track reconstruction [ARNDT2021165679]. Two recurl stations, one up-stream and one down-stream, will also be part of the final detector design. These increase the angular acceptance of the experiment and allow measuring long-armed trajectories, which improves the momentum resolution.
The layers have cylindrical symmetry and are concentrically placed around the
target, a hollow double cone made of Mylar 100 mm in length and with a base
radius of 19 mm. The target is placed in a solenoid magnetic field of 1 T with
the base at a minimum distance of $\approx 4$ mm from the innermost layer of
the pixel tracker. Particles bend inwards, following helical trajectories around the field lines and possibly making multiple re-curls. Each layer of the pixel detector is sectioned into sub-elements called ladders. A ladder is a series of chips mounted on the same flex-tape. There are 8, 10, 24 and 28 ladders for layers 1, 2, 3 and 4, respectively. For instance, the innermost layer of the tracker, crucial for vertex fitting, is made of 8 ladders, each one tilted by a $45^{\circ}$ angle with respect to its neighbours. This configuration forms an 8-sided surface which extends for $\sim 12$ cm, or 6
The intrinsic detector spatial resolution is set by the pixel sensitive area, $80\times 80$ µm². Pixel resolution becomes more important for high-momentum trajectories, for which MS is lower. However, for low-material budget detectors such as Mu3e, this effect cannot be ignored at any momentum and must be treated simultaneously with MS.
## III Vertex fitting in the Mu3e detector
Vertex reconstruction can be accomplished in two steps: vertex finding and
vertex fitting, see e.g., [RevModPhys.82.1419]. The former consists in grouping trajectories that have most likely been produced in the same decay process. The latter involves finding the most likely vertex coordinates $(x,y,z)_{\text{v}}$ and the initial momentum vectors of all clustered tracks. In Mu3e, vertex finding is accomplished by considering all possible combinations of two positive tracks and one negative track in the detector within time frames of 64 ns. For the vertex fitting, a least-squares optimization algorithm has been developed based on the method illustrated in [BILLOIR1992139].
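As an illustration of this combinatorial vertex finding (the data layout is our assumption; the actual Mu3e frame handling is more involved), one can group tracks per 64 ns frame and enumerate the $(e^{+},e^{-},e^{+})$ triplets:

```python
import itertools

def vertex_candidates(tracks, frame_ns=64.0):
    """Yield (e+, e-, e+) track triplets found within the same time frame.

    tracks : iterable of dicts with keys 't' (time in ns) and 'q' (charge).
    """
    frames = {}
    for trk in tracks:
        frames.setdefault(int(trk["t"] // frame_ns), []).append(trk)
    for members in frames.values():
        positives = [t for t in members if t["q"] > 0]
        negatives = [t for t in members if t["q"] < 0]
        for (p1, p2), n in itertools.product(
                itertools.combinations(positives, 2), negatives):
            yield p1, n, p2
```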
### III.1 Track parameters and uncertainties
Trajectories are defined by 6 parameters $([x,y],z,\phi,\lambda,k)$. These are
the coordinates of one point along the track, the angles $\phi$ and $\lambda$
defining the direction of the tangent vector to the trajectory and the factor
$k=(p/q)^{-1}$ where $q$ is the charge of the particle and $p$ is the
magnitude of the momentum vector. The following relationships among the
momentum components in the global Cartesian frame and the angles $\phi$ and
$\lambda$ hold true:
$\left\\{\begin{aligned} p_{x}&=p\,\text{cos}(\lambda)\,\text{cos}(\phi)\,,\\\
p_{y}&=p\,\text{cos}(\lambda)\,\text{sin}(\phi)\,,\\\
p_{z}&=p\,\text{sin}(\lambda)\,,\\\
R_{\perp}&=\frac{p\,\text{cos}(\lambda)}{q\,B}\,,\\\ \phi&\in[0,2\pi]\,,\\\
\lambda&\in[-\pi/2,\pi/2]\,,\end{aligned}\right.$ (2)
where $B=B_{z}$ is the homogeneous magnetic field directed along the beam-line
$\hat{z}$ and $R_{\perp}$ is the transverse radius of the trajectory.
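For concreteness, the relations in (2) translate directly into code; the sketch below additionally uses the standard conversion $p_{T}\,[\text{MeV/c}]=0.3\,B\,[\text{T}]\,R_{\perp}\,[\text{mm}]$ to express the transverse radius in millimetres (the unit choices are ours):

```python
import numpy as np

def track_momentum(p, q, phi, lam, B=1.0):
    """Cartesian momentum components and transverse radius, per eq. (2).

    p in MeV/c, q in units of the elementary charge, B in tesla;
    R_perp is returned in mm.
    """
    px = p * np.cos(lam) * np.cos(phi)
    py = p * np.cos(lam) * np.sin(phi)
    pz = p * np.sin(lam)
    R_perp = p * np.cos(lam) / (0.3 * abs(q) * B)  # mm for p in MeV/c
    return px, py, pz, R_perp
```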
The input measurements to fit are the values of the track parameters at a
given Reference Surface (RS) along with their covariance matrix ($\Xi$):
$([x,y],z,\phi,\lambda,k,\Xi)_{\text{meas}}$. In this study, the RS is the
innermost layer of the pixel detector which is the closest one to the expected
real vertex position. The coordinates $x,y$ are not independent and can be
given in a single expression by using a local reference frame with center in
the middle of the pixel area $(\bar{x},\bar{y},\bar{z})$ and base vectors
$(\hat{u},\hat{z^{\prime}})$. The vector $\hat{z^{\prime}}$ is parallel to the
global coordinate $\hat{z}$ while $\hat{u}$ is perpendicular to it and
parallel to the pixel surface, see figure 2. The equations that link the local
coordinates to the global ones are:
$\begin{array}[]{lll}u=\left(x-\bar{x}\right)\text{cos}(\gamma)+\left(y-\bar{y}\right)\text{sin}(\gamma)\,,\\\
z^{\prime}=z-\bar{z}\,,\\\ \phi^{\prime}=\phi\,,\\\
\lambda^{\prime}=\lambda\\\ k^{\prime}=k\end{array}$ (3)
where the angle $\gamma$ is the orientation of the pixel with respect to the
$x$ axis of the global reference frame. Following the equations in 3, the
track parameters at the RS become $(u,z,\phi,\lambda,k)$ 111The coordinate
$z^{\prime}$ can be replaced by $z$ given that the constant $\bar{z}$ in eq. 3
does not contribute to the calculations of the derivatives and residuals
carried out in section III.2.
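The transformation of eq. 3 amounts to a rotation and shift in the transverse plane; a minimal sketch, with illustrative names, is:

```python
# Minimal sketch of the global-to-local transformation of eq. 3; the pixel
# centre (x_bar, y_bar, z_bar) and ladder angle gamma are per-sensor constants.
import numpy as np

def global_to_local(x, y, z, x_bar, y_bar, z_bar, gamma):
    """Map global hit coordinates to the local (u, z') frame of the pixel."""
    u = (x - x_bar) * np.cos(gamma) + (y - y_bar) * np.sin(gamma)
    z_prime = z - z_bar
    return u, z_prime
```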
Figure 2: Sketch of the RS in the Mu3e vertex fitting. The axes of the global
(local) reference frame are drawn in black (green); the $\hat{z^{\prime}}$
axis perpendicular to the $x-y$ plane is not shown. The parameters
$(x,y,z)_{\text{meas}}$ are placed at the geometrical centre of the pixel
surface $\square$. The fit parameters $(u,z,\phi,\lambda,k)_{\text{fit}}$ are
obtained by forward propagating the vertex parameters
$(x,y,z,\phi,\lambda,k)_{\text{v}}$ via a function
$h=h(\textbf{v}=(x,y,z)_{\text{v}},\textbf{t}(\phi_{\text{v}},\lambda_{\text{v}}),k_{\text{v}})$.
The covariance matrix can be written as the sum of two terms, i.e.,
$\Xi_{\text{meas}}=\Xi_{\text{track}}+\Xi_{\text{phys}}$. The term
$\Xi_{\text{track}}$ accounts for the uncertainties accumulated during track
fitting, while $\Xi_{\text{phys}}$ accounts for MS and pixel resolution at the
RS. Pixel resolution contributes to the smearing of the hit position by
$\sigma_{l}=l/\sqrt{12}$, where $l$ is the length of the pixel side. MS
changes the direction of the track upon crossing the RS. The tilt can be
approximated by a Gaussian distribution with zero mean and standard deviation
$\theta_{\text{MS}}$ [19]:
$\theta_{\text{MS}}=13.6\,\left[\text{MeV/c}\right]\,\frac{q}{p}\,\sqrt{\frac{d}{\beta^{2}\,\text{X}_{0}}}\left(1+0.038\,\ln\left(\frac{d}{\text{X}_{0}}\right)\right)\,.$
(4)
In the previous equation, $\beta$ is the relativistic velocity and $d$ is the
distance travelled by the particle in the RS. From eq. 4, the changes in
$\phi$ and $\lambda$ due to MS, $\sigma_{\phi}$ and $\sigma_{\lambda}$, can be
obtained by projecting $\theta_{\text{MS}}$ onto the transverse and
longitudinal planes, respectively, see e.g. [20]:
$\sigma_{\lambda}=\theta_{\text{MS}}\,,$ (5)
and
$\sigma_{\phi}=\theta_{\text{MS}}/\cos(\lambda)\,.$ (6)
In addition to MS and pixel resolution, an error $\sigma_{k}$ on the track
curvature (or $k$) is introduced in $\Xi_{\text{meas}}$ to account for the
energy lost by electrons and positrons in the detector, see e.g. [12].
Although energy losses in a thin silicon layer like the RS are negligible
[21], they are nevertheless included in the present vertex fitting algorithm
to provide a complete treatment of the errors. Without loss of generality, the
covariance matrix in this study is written as
$\Xi_{\text{meas}}=\text{diag}\left\{\sigma_{u}^{2},\sigma_{z}^{2},\sigma_{\phi}^{2},\sigma_{\lambda}^{2},\sigma_{k}^{2}\right\}$
with $\sigma_{u}=\sigma_{z}=\sigma_{l}$.
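For concreteness, a minimal sketch of the construction of $\Xi_{\text{meas}}$ from eqs. 4-6 is given below; it assumes $\beta\simeq 1$, $|q|=1$ and MeV/c units, and the function names are illustrative.

```python
# Minimal sketch, assuming beta ~ 1 and |q| = 1: multiple-scattering angle
# (eq. 4) projected onto lambda and phi (eqs. 5-6), and the diagonal
# measurement covariance used in this study.
import numpy as np

def theta_ms(p, d, X0, beta=1.0):
    """Scattering angle (eq. 4) for path length d and radiation length X0."""
    return 13.6 / p * np.sqrt(d / (beta**2 * X0)) * (1.0 + 0.038 * np.log(d / X0))

def measurement_covariance(p, d, X0, lam, pixel_pitch, sigma_k):
    sigma_l = pixel_pitch / np.sqrt(12.0)       # pixel resolution smearing
    t = theta_ms(p, d, X0)
    sigma_lambda = t                            # eq. 5
    sigma_phi = t / np.cos(lam)                 # eq. 6
    return np.diag([sigma_l**2, sigma_l**2,
                    sigma_phi**2, sigma_lambda**2, sigma_k**2])
```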
### III.2 Least-squares algorithm
In the vertex fitting, a map that links the track vertex parameters
$(x,y,z,\phi^{i},\lambda^{i},k^{i})_{\text{v}}$ to the parameters at the RS
$(u^{i},z^{i},\phi^{i},\lambda^{i},k^{i})_{\text{fit}}$ is needed, as shown in
figure 2. In what follows, $i=1,2,3$ for the $(e^{-},e^{+},e^{+})$ in a Mu3e
decay. Starting from the analytical expression of a helical trajectory of a
charged particle in a magnetic field, this map can be written as
$h_{j}=h(\textbf{v},\textbf{t}^{i},k^{i}_{\text{v}})_{j}$, where
$h_{j}=(u,z,\phi,\lambda,k)_{\text{fit}}$ for $j=1,2,\dots,5$,
$\textbf{v}=(x,y,z)_{\text{v}}$ and $\textbf{t}=(\phi,\lambda)_{\text{v}}$;
see section VII for the analytic expressions of $h_{j}$. To first
approximation, this function can be linearized near some initial guessed
vertex parameters $(\textbf{v}_{o},\textbf{t}_{o},k_{o})$:
$h(\textbf{v}_{o}+\delta\textbf{v},\textbf{t}_{i,o}+\delta\textbf{t}_{i},k_{i,o}+\delta k_{i})_{j}\simeq h(\textbf{v}_{o},\textbf{t}_{i,o},k_{i,o})_{j}+D_{i}\,\delta\textbf{v}+E_{i}\,\delta\textbf{t}_{i}+F_{i}\,\delta k_{i}\,,$ (7)
where, for each track $i$, the matrices $D_{i}$, $E_{i}$ and $F_{i}$ have
dimensions $5\times 3$, $5\times 2$ and $5\times 1$, respectively, and are
calculated as follows:
$\begin{aligned} D^{j}_{n}&=\frac{\partial h_{j}}{\partial\textbf{v}_{n}}\,,\quad \textbf{v}_{n}=x_{\text{v}},y_{\text{v}},z_{\text{v}}\quad\text{for }n=1,2,3\,,\\ E^{j}_{m}&=\frac{\partial h_{j}}{\partial\textbf{t}_{m}}\,,\quad \textbf{t}_{m}=\phi_{\text{v}},\lambda_{\text{v}}\quad\text{for }m=1,2\,,\\ F^{j}&=\frac{\partial h_{j}}{\partial k_{\text{v}}}\,.\end{aligned}$ (11)
From the definitions in eq. 7, a quadratic cost function, $\chi^{2}$, is
defined:
$\chi^{2}=\sum_{i}\left(\delta q_{i}-D_{i}\delta\textbf{v}-E_{i}\delta\textbf{t}_{i}-F_{i}\delta k_{i}\right)^{T}W_{i}\left(\delta q_{i}-D_{i}\delta\textbf{v}-E_{i}\delta\textbf{t}_{i}-F_{i}\delta k_{i}\right)\,,$ (12)
where $W=\Xi^{-1}$ is the weight matrix (see for instance [22]) and $\delta q$
is the residual at the RS:
$\delta q_{i}=(u^{i},z^{i},\phi^{i},\lambda^{i},k^{i})_{\text{meas}}-h(\textbf{v}_{o},\textbf{t}^{i}_{o},k_{o}^{i})\,.$ (13)
Equation 12 is the total normalized error, due to the approximation in eq. 7,
expressed as a function of $\delta\textbf{v}$, $\delta\textbf{t}$ and $\delta
k$. The accuracy of this approximation can be seen in figure 3(a), where the
RMS of the deviations $(x,y,z)_{\text{meas}}-(x,y,z)_{\text{fit}}$ at the RS
is plotted versus the fit iteration number.
The corrections to the initial guessed vertex parameters can be found by
minimizing the cost function, i.e., by solving the following system of
equations:
$\left\{\begin{aligned} &\sum_{i}A_{i}^{T}\delta\textbf{v}+\sum_{i}B^{T}_{i}\delta\textbf{t}_{i}+\sum_{i}H^{T}_{i}\delta k_{i}=\sum_{i}T_{i}\,,\\ &B_{i}\delta\textbf{v}+C_{i}\delta\textbf{t}_{i}+G_{i}\delta k_{i}=U_{i}\,,\\ &H_{i}\delta\textbf{v}+G_{i}^{T}\delta\textbf{t}_{i}+L_{i}\delta k_{i}=Z_{i}\,,\end{aligned}\right.$ (14)
where
$\begin{aligned} A_{i}^{T}&=D_{i}^{T}W_{i}D_{i}\,, & B_{i}&=E_{i}^{T}W_{i}D_{i}\,, & C_{i}&=E_{i}^{T}W_{i}E_{i}\,,\\ G_{i}&=E_{i}^{T}W_{i}F_{i}\,, & H_{i}&=F_{i}^{T}W_{i}D_{i}\,, & L_{i}&=F_{i}^{T}W_{i}F_{i}\,,\\ T_{i}&=D_{i}^{T}W_{i}\delta q_{i}\,, & U_{i}&=E_{i}^{T}W_{i}\delta q_{i}\,, & Z_{i}&=F_{i}^{T}W_{i}\delta q_{i}\,.\end{aligned}$ (15)
The solutions of the system in eq. 14 are:
$\begin{aligned}\delta\textbf{v}&=\left[\left(\sum_{i}A_{i}^{T}-\sum_{i}B^{T}_{i}C^{-1}_{i}B_{i}\right)+\left(\sum_{i}B^{T}_{i}C^{-1}_{i}G_{i}-\sum_{i}H_{i}^{T}\right)N_{1}N_{3}\right]^{-1}\\ &\quad\times\left(\sum_{i}T_{i}-\sum_{i}B_{i}^{T}C^{-1}_{i}U_{i}+\left(\sum_{i}B^{T}_{i}C^{-1}_{i}G_{i}-\sum_{i}H_{i}^{T}\right)N_{1}N_{2}\right),\\ \delta k_{i}&=N_{1}\left(N_{2}-N_{3}\delta\textbf{v}\right),\\ \delta\textbf{t}_{i}&=C_{i}^{-1}\left(U_{i}-B_{i}\delta\textbf{v}-G_{i}\delta k_{i}\right),\end{aligned}$ (16)
where:
$\begin{aligned} N_{1}&=\left(L_{i}-G^{T}_{i}C^{-1}_{i}G_{i}\right)^{-1}\,,\\ N_{2}&=Z_{i}-G^{T}_{i}C^{-1}_{i}U_{i}\,,\\ N_{3}&=H_{i}-G^{T}_{i}C^{-1}_{i}B_{i}\,.\end{aligned}$ (17)
From eq. 16, the covariance matrices for the track parameters at the vertex
can be calculated:
$\begin{aligned}\text{Cov}(\textbf{v})&=\left[\left(\sum_{i}A_{i}^{T}-\sum_{i}B^{T}_{i}C^{-1}_{i}B_{i}\right)+\left(\sum_{i}B^{T}_{i}C^{-1}_{i}G_{i}-\sum_{i}H_{i}^{T}\right)N_{1}N_{3}\right]^{-1},\\ \text{Cov}(k_{i})&=L_{i}^{-1}+\left(N_{1}N_{3}\right)\text{Cov}(\textbf{v})\left(N_{1}N_{3}\right)^{T},\\ \text{Cov}(\textbf{t}_{i})&=C_{i}^{-1}+\left(C_{i}^{-1}B_{i}\right)\text{Cov}(\textbf{v})\left(C_{i}^{-1}B_{i}\right)^{T}+\left(C_{i}^{-1}G_{i}\right)\text{Cov}(k_{i})\left(C_{i}^{-1}G_{i}\right)^{T}.\end{aligned}$ (18)
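A minimal numpy sketch of one iteration of this fit is given below. Instead of the explicit Schur-complement formulae of eqs. 16-17, it assembles the full normal equations for $(\delta\textbf{v},\delta\textbf{t}_{i},\delta k_{i})$ and solves them directly, which yields the same minimizer of the $\chi^{2}$ in eq. 12; the Jacobians and residuals are assumed to be supplied by eqs. 11 and 13.

```python
# Minimal sketch: one Gauss-Newton step of the vertex fit. The full normal
# equations for (dv, dt_1..3, dk_1..3) are assembled and solved directly,
# equivalently to the Schur-complement solution of eqs. 14-17.
import numpy as np

def vertex_fit_step(D, E, F, W, dq):
    """D, E, F: lists of (5x3), (5x2), (5x1) Jacobians per track;
    W: list of 5x5 weight matrices; dq: list of length-5 residuals."""
    n = len(D)
    dim = 3 + 3 * n                      # dv (3) + 3 parameters per track
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for i in range(n):
        J = np.zeros((5, dim))
        J[:, 0:3] = D[i]                 # derivatives w.r.t. the vertex
        J[:, 3 + 3*i : 5 + 3*i] = E[i]   # w.r.t. (phi_i, lambda_i)
        J[:, 5 + 3*i : 6 + 3*i] = F[i]   # w.r.t. k_i
        A += J.T @ W[i] @ J
        b += J.T @ W[i] @ dq[i]
    delta = np.linalg.solve(A, b)        # corrections minimizing eq. 12
    cov = np.linalg.inv(A)               # covariance of all fit parameters
    return delta[:3], delta[3:], cov
```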
### III.3 Algorithm testing
Geant4-based Monte-Carlo (MC) simulations of Mu3e decays have been carried out
to test the vertex fitting described in section III.2. The procedure followed
by the test was:
1. Hit parameters at the RS, $(u_{i},z_{i},\phi_{i},\lambda_{i},k_{i})_{MC}$, were obtained from MC trajectories.
2. $(u_{i},z_{i})_{MC}$ were smeared using the pixel resolution, i.e., by adding a random offset drawn from a uniform distribution with mean and standard deviation $(0,\sigma_{l})$. The $(\phi_{i},\lambda_{i})$ angles were tilted according to the errors in equations 5 and 6, with the MS values drawn from a Gaussian distribution with $\mu=0$ and $\sigma=\theta_{\text{MS}}$. The error on $k_{i}$ was obtained by drawing from a Gaussian distribution with mean zero and standard deviation $\sigma_{k}$.
3. From points 1 and 2, $\Xi_{\text{meas}}$ was derived together with the weight matrix $W$.
4. The initial parameters $\textbf{v}_{o}=x_{o},y_{o},z_{o}$ were obtained as the average coordinates of the tracks' intersection points. Since two tracks can have up to two intersections, the one that is met first when back-propagating the track parameters from the RS to the target is retained in the calculation of the average. If two tracks do not intersect, the point of closest approach is used. The vectors $\textbf{t}_{o}$ were extracted at the point of closest approach of the trajectories to $\textbf{v}_{o}$, whilst $k_{o}$ was directly obtained from the track reconstruction carried out before the vertex fitting.
5. From equations 7 and 11, the residuals at the RS were calculated and thus the corrections $\delta\textbf{v}$, $\delta k_{i}$ and $\delta\textbf{t}_{i}$ in eq. 16. Only a few iterations were required to minimize the $\chi^{2}$ of eq. 12, as can be seen in figure 3(b).
Figure 3: Typical RMS of the residuals
$(x,y,z)_{\text{meas}}-(x,y,z)_{\text{fit}}$ at the RS (a) and the $\chi^{2}$
minimization (b) as a function of the fit iteration number.
Figure 4: Pull distributions (a-f) of the track parameters at the vertex,
$(x,y,z,\phi,\lambda,k)_{\text{v}}$, for test trajectories obtained by
following the steps described in the text.
A precise determination of the fit errors depends on the correct
characterization of $\Xi_{\text{meas}}$ and its propagation, see e.g.
[23, 24]. In the present study, the covariance matrix of the vertex
parameters was calculated by propagating $\Xi_{\text{meas}}$ from the RS to
the vertex point using equations 18. A key test of the algorithm developed in
this section consists in plotting the pull distributions of the track
parameters. The pull of a variable $X$ with expected value $\mu_{X}$ and
standard error $\sigma_{X}$ is:
$P=\frac{X-\mu_{X}}{\sigma_{X}}\,.$ (19)
If the error and the residual in eq. 19 are well characterized, the pulls are
normally distributed.
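As a short worked example, the pulls of eq. 19 can be computed as follows (array names are illustrative):

```python
# Minimal sketch: pulls of a fitted parameter (eq. 19). If residuals and
# errors are well characterized, the returned array is ~ N(0, 1).
import numpy as np

def pulls(fitted, true, sigma):
    """fitted, true, sigma: arrays of fit values, MC truth and fit errors."""
    return (np.asarray(fitted) - np.asarray(true)) / np.asarray(sigma)
```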
Figure 4 shows the pull distributions of all the track parameters at the
vertex, $(x,y,z,\phi,\lambda,k)_{\text{v}}$, as calculated by the present
vertex fitting algorithm.
## IV Comparative study of error sources
In section III.1, the explicit form of the covariance matrix was given by
including the contributions of MS, pixel resolution and energy losses at the
RS:
$\Xi_{\text{meas}}=\text{diag}\left\{\sigma_{u}^{2},\sigma_{z}^{2},\sigma_{\phi}^{2},\sigma_{\lambda}^{2},\sigma_{k}^{2}\right\}$.
In this section, the relative weights of these errors on the determination of
the fit vertex parameters and their uncertainties are discussed. Three
different inclusive scenarios have been considered: (A) MS only; (B) MS and
pixel resolution are both included; (C) all sources of errors (MS, pixel
resolution and energy losses) are considered. These three scenarios are
summarized in table 1.
It must be noted that the size of $\Xi_{\text{meas}}$ grows going from
scenario (A) to (C), along with the dimensionality of the problem, see for
example [9]. For instance, if the energy loss and pixel resolution at the RS
are ignored, MS remains the only source of uncertainty and all measurements
are held fixed except $(\phi_{i},\lambda_{i})_{\text{meas}}$, i.e.,
$\Xi_{\text{meas}}=\text{diag}\left\{\sigma_{\phi}^{2},\sigma_{\lambda}^{2}\right\}$.
These 6 angles can then be fitted with 3 vertex variables $(x,y,z)_{\text{v}}$,
thus simplifying eq. 16, see e.g. [25].
Table 1: Scenarios A, B, C in the comparative study of the vertex fitting.

Scenario | $\Xi_{\text{meas}}$ | measurements | fit parameters | errors
---|---|---|---|---
A | $\text{diag}\left\{\sigma_{\phi}^{2},\sigma_{\lambda}^{2}\right\}$ | $(\phi,\lambda)_{\text{meas}}$ | $(x,y,z)_{\text{v}}$ | MS
B | $\text{diag}\left\{\sigma_{u}^{2},\sigma_{z}^{2},\sigma_{\phi}^{2},\sigma_{\lambda}^{2}\right\}$ | $(u,z,\phi,\lambda)_{\text{meas}}$ | $(x,y,z,\phi,\lambda)_{\text{v}}$ | MS, pixel
C | $\text{diag}\left\{\sigma_{u}^{2},\sigma_{z}^{2},\sigma_{\phi}^{2},\sigma_{\lambda}^{2},\sigma_{k}^{2}\right\}$ | $(u,z,\phi,\lambda,k)_{\text{meas}}$ | $(x,y,z,\phi,\lambda,k)_{\text{v}}$ | MS, pixel, $\Delta E$
Panels (a-e) in figure 5 show the deviations between the MC vertex parameters
of simulated Mu3e decays and those obtained from vertex fitting in scenarios
(A) and (B), respectively. From figure 5(a-c), it can be seen that the fit
accuracy for the determination of $(x,y,z)_{\text{v}}$ does not improve when
the pixel resolution is included in the covariance matrix. However, a
significant improvement is found in the determination of
$(\phi_{i},\lambda_{i})_{\text{v}}$, as shown in figure 5(d,e). The results
for scenario (C) are statistically the same as for scenario (B), scenario (C)
being the only case in which the fit attempts to optimize the track parameter
$k$, see table 1. As expected, having neglected the energy losses at the RS,
track curvatures do not vary significantly throughout the vertex fitting, as
can be seen in figure 5(f).
The improved fit accuracy of scenario (C) with respect to scenario (A) for the
coordinates of the momentum vector $p_{x}$, $p_{y}$ and $p_{z}$ is shown in
figure 6(a-c). This improvement also carries over to the determination of the
average total momentum, which is $\sim$10$\%$ smaller in scenario (C) than the
corresponding average in scenario (A), and thus closer to the real MC value
(under the hypothesis of muons decaying at rest), see figure 6(d).
The invariant mass of simulated Mu3e decays, calculated from the fit vertex
parameters in scenarios (A) and (C), is shown in figure 7. As expected, no
significant difference is seen between the two fit scenarios. In fact, the
magnitude of the invariant mass is dominated by the muon rest mass, over which
the fit accuracy has little leverage.
Figure 5: Deviations between the fit and MC track parameters at the vertex.
The legends show the mean and standard deviation obtained from a Gaussian fit
in scenarios (A) [full gray] and (B or C) [empty red], respectively.
Figure 6: Deviations between the fit and MC momentum coordinates $p_{x}$,
$p_{y}$ and $p_{z}$, panels (a-c), and total momentum $P$, panel (d), for
scenarios (A) [full gray] and (C) [empty red], respectively. In (a,b,c), the
legends show the mean and standard deviation obtained from a Gaussian fit,
whilst the legend in (d) shows the average and standard deviation of the
distributions.

Figure 7: Reconstructed invariant mass for vertices with $\chi^{2}<15$ and
total momentum $P<4$ MeV/c, scenarios (A) [full gray] and (C) [empty red],
respectively.
## V Conclusion
In this paper, a simple least-squares method has been described which can be
applied to reconstruct decay vertices in experiments equipped with pixel
detectors. The relative weights of 1) MS, 2) pixel resolution and 3) energy
losses in the final reconstruction accuracy have been investigated in the case
study of the Mu3e low-material budget pixel detector. The exhaustive error
treatment of the present study goes beyond the MS-only approximation, showing
a significant improvement of the fit accuracy when the intrinsic pixel
resolution is accounted for. This should encourage a rigorous treatment of the
pixel resolution in the development of future reconstruction algorithms for
precise particle physics measurements at low energy.
## VI Acknowledgements
I gratefully acknowledge the STFC grant supporting this work. I wish to thank
the members of the Mu3e Software and Analysis group for providing the
simulation and track reconstruction software behind this study. I also want to
thank Joel Goldstein, Niklaus Berger and Gavin Hesketh for all the detailed
and useful discussions about this work. I also thank Paras Naik and Andre
Schoning for their careful reading and useful comments.
## VII Appendix 1: forward propagation of track parameters
The map $h(\textbf{v},\textbf{t},k)$ propagates a trajectory from the vertex
to the hit at the RS. In this section, its analytical expression is derived by
starting from the propagation of a track in the transverse plane and then
along the beam direction.
#### Propagation in the transverse plane
In this study, trajectories are helices with symmetry axis $\hat{z}$ and
transverse radius $R_{\perp}$, whose sign is given by the charge $q$ of the
particle, see eq. 2. We write $R_{\perp}=R\cos(\lambda)$ such that:
$R:=\left\{\begin{aligned} &+\frac{p}{|q|B}\quad\text{if }q>0\text{ (c.c.w. rotation)}\,,\\ &-\frac{p}{|q|B}\quad\text{if }q<0\text{ (c.w. rotation)}\,.\end{aligned}\right.$ (20)
In the x-y plane, a helix is a circumference with centre $(x_{c},y_{c})$ and
radius $R_{\perp}$. It is not difficult to prove that:
$\begin{aligned} x_{c}&=x_{\text{v}}-R_{\perp}\sin(\phi_{\text{v}})\,,\\ y_{c}&=y_{\text{v}}+R_{\perp}\cos(\phi_{\text{v}})\,.\end{aligned}$ (21)
Figure 8: Sketch of a track with negative radius, as defined in eq. 20,
projected on the x-y plane. The $\phi_{\text{v}}$ values are obtained by
subtracting $\pi/2$ from the phase angle of the radial vector.
The transport equations in the transverse plane are obtained by calculating
the coordinates $(x_{q},y_{q})$ of the intersection point between the track
originating from $(x_{\text{v}},y_{\text{v}})$ and the detector ladder. In the
x-y plane, the ladder profile is a line characterised by the parameters
$y_{o}$ and $m$, see figure 9:
$\left\{\begin{aligned} &(y_{q}-y_{c}(\textbf{v},\textbf{t},k))^{2}+(x_{q}-x_{c}(\textbf{v},\textbf{t},k))^{2}=R_{\perp}^{2}\,,\\ &y_{q}=m\,x_{q}+y_{o}\,,\\ &m=\tan(\gamma)\,.\end{aligned}\right.$ (22)
Figure 9: A sketch representing the intersection between a trajectory and the
detector RS in the x-y plane.
In the previous equation, $\gamma$ is the angle of the detector ladder with
respect to the global $\hat{x}$ axis. The parameter $y_{o}$ can be calculated
by using $y_{\text{meas}}=\tan(\gamma)\,x_{\text{meas}}+y_{o}$. In conclusion,
the solutions of the system of equations 22 can be written as:
$x_{q}=-\frac{1}{m}\left(y_{o}-\frac{y_{o}+m\,x_{\text{v}}\pm\sigma_{1}+m^{2}\,y_{\text{v}}+\sigma_{2}-\sigma_{3}}{m^{2}+1}\right)$ (23)
and
$y_{q}=\frac{y_{o}+m\,x_{\text{v}}\pm\sigma_{1}+m^{2}\,y_{\text{v}}+\sigma_{2}-\sigma_{3}}{m^{2}+1}\,,$ (24)
where the $\pm$ signs select the two intersection points and
$\begin{aligned}\sigma_{1}=m\,\Big[&-R^{2}m^{2}\cos^{2}(\lambda_{\text{v}})\sin^{2}(\phi_{\text{v}})+R^{2}m^{2}\cos^{2}(\lambda_{\text{v}})-2R^{2}m\cos^{2}(\lambda_{\text{v}})\cos(\phi_{\text{v}})\sin(\phi_{\text{v}})\\ &-R^{2}\cos^{2}(\lambda_{\text{v}})\cos^{2}(\phi_{\text{v}})+R^{2}\cos^{2}(\lambda_{\text{v}})+2Rm^{2}x_{\text{v}}\cos(\lambda_{\text{v}})\sin(\phi_{\text{v}})\\ &+2Rm\,x_{\text{v}}\cos(\lambda_{\text{v}})\cos(\phi_{\text{v}})+2Rm\,y_{o}\cos(\lambda_{\text{v}})\sin(\phi_{\text{v}})-2Rm\,y_{\text{v}}\cos(\lambda_{\text{v}})\sin(\phi_{\text{v}})\\ &+2R\,y_{o}\cos(\lambda_{\text{v}})\cos(\phi_{\text{v}})-2R\,y_{\text{v}}\cos(\lambda_{\text{v}})\cos(\phi_{\text{v}})-m^{2}x_{\text{v}}^{2}\\ &-2m\,x_{\text{v}}y_{o}+2m\,x_{\text{v}}y_{\text{v}}-y_{o}^{2}+2y_{o}y_{\text{v}}-y_{\text{v}}^{2}\Big]^{1/2}\,,\\ \sigma_{2}=\;&R\,m^{2}\cos(\lambda_{\text{v}})\cos(\phi_{\text{v}})\,,\\ \sigma_{3}=\;&R\,m\cos(\lambda_{\text{v}})\sin(\phi_{\text{v}})\,.\end{aligned}$ (25)
A choice between the two solutions in equations 23 and 24 can be made by
accounting for the track's direction of motion and the vertex position
relative to the hit. In practice, the intersection can equivalently be found
numerically, as sketched below.
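The following minimal sketch solves the system of eq. 22 as a quadratic in $x_{q}$, rather than through the closed forms of eqs. 23-25; variable names are ours.

```python
# Minimal sketch: numerical solution of eq. 22, the intersection of the
# transverse circle (centre (xc, yc), radius R_perp) with the ladder line
# y = m*x + y0, obtained by substituting the line into the circle equation.
import numpy as np

def circle_line_intersection(xc, yc, R_perp, m, y0):
    """Return the two intersection points as a (2, 2) array [[x, y], [x, y]]."""
    a = 1.0 + m**2
    b = 2.0 * (m * (y0 - yc) - xc)
    c = xc**2 + (y0 - yc)**2 - R_perp**2
    disc = b**2 - 4.0 * a * c
    if disc < 0:
        raise ValueError("track does not reach the ladder")
    xs = (-b + np.array([1.0, -1.0]) * np.sqrt(disc)) / (2.0 * a)
    return np.column_stack([xs, m * xs + y0])
```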
As for the propagated angle $\phi_{q}$, the following expression can be
written, see figure 8:
$\phi_{q}=\arctan\left(\frac{y_{q}-y_{c}(x_{\text{v}},y_{\text{v}},\phi_{\text{v}},\lambda_{\text{v}},k_{\text{v}})}{x_{q}-x_{c}(x_{\text{v}},y_{\text{v}},\phi_{\text{v}},\lambda_{\text{v}},k_{\text{v}})}\right)+\text{sign}(R)\,\pi/2\,.$ (26)
#### Propagation along the beam axis
The propagation of helical trajectories along the $\hat{z}$ axis is
characterized by the following equations [20]:
$\left\{\begin{aligned} &\lambda_{q}=\lambda_{\text{v}}\,,\\ &z_{q}=z_{\text{v}}+R_{\perp}\tan(\lambda_{\text{v}})\left(\phi_{q}-\phi_{\text{v}}\right)\,.\end{aligned}\right.$ (27)
## References
* [1] W. Snoeys. Monolithic CMOS sensors for high energy physics — challenges and perspectives. NIM-A, 1056:168678, 2023.
* [2] H. F. W. Sadrozinski, A. Seiden, and N. Cartiglia. 4D tracking with ultra-fast silicon detectors. Rep. Prog. Phys., 81:026101, 2018.
* [3] F. Carnesecchi et al. Development of ultra fast silicon detector for 4D tracking. NIM-A, 936:608–611, 2019. Frontier Detectors for Frontier Physics: 14th Pisa Meeting on Advanced Detectors.
* [4] H. G. Moser. The Belle II DEPFET pixel detector. NIM-A, 831:85–87, 2016. Proceedings of the 10th International “Hiroshima” Symposium on the Development and Application of Semiconductor Tracking Detectors.
* [5] S. Spannagel. Technologies for future vertex and tracking detectors at CLIC. NIM-A, 936:612–615, 2019. Frontier Detectors for Frontier Physics: 14th Pisa Meeting on Advanced Detectors.
* [6] B. Abelev and The ALICE Collaboration. Technical design report for the upgrade of the ALICE inner tracking system. Journal of Physics G: Nuclear and Particle Physics, 41(8):087002, 2014.
* [7] K. Arndt et al. Technical design of the phase I Mu3e experiment. NIM-A, 1014:165679, 2021.
* [8] R. Frühwirth. Application of Kalman filtering to track and vertex fitting. NIM-A, 262(2):444–450, 1987.
* [9] P. Billoir and S. Qian. Fast vertex fitting with a local parametrization of tracks. NIM-A, 311(1):139–150, 1992.
* [10] W. Waltenberger, R. Frühwirth, and P. Vanlaer. Adaptive vertex fitting. Journal of Physics G: Nuclear and Particle Physics, 34:N343, 2007.
* [11] A. Strandlie and R. Frühwirth. Track and vertex reconstruction: from classical to adaptive methods. Rev. Mod. Phys., 82:1419–1458, 2010.
* [12] R. Mankel. Pattern recognition and event reconstruction in particle physics experiments. Reports on Progress in Physics, 67(4):553, 2004.
* [13] U. Bellgardt et al. Search for the decay ${\mu}^{+}$ ${\rightarrow}$ ${e}^{+}$${e}^{+}$${e}^{-}$. Nuclear Physics B, 299(1):1–6, 1988.
* [14] W.J. Marciano and A.I. Sanda. Exotic decays of the muon and heavy leptons in gauge theories. Physics Letters B, 67(3):303–305, 1977.
* [15] Y. Kuno and O. Yasuhiro. Muon decay and physics beyond the standard model. Rev. Mod. Phys., 73:151–202, 2001.
* [16] M. Kakizaki, Y. Ogura, and F. Shima. Lepton flavor violation in the triplet Higgs model. Physics Letters B, 566(3):210–216, 2003.
* [17] A. de Gouvêa and P. Vogel. Lepton flavor and number conservation, and physics beyond the standard model. Progress in Particle and Nuclear Physics, 71:75–92, 2013. Fundamental Symmetries in the Era of the LHC.
* [18] I. Perić. A novel monolithic pixelated particle detector implemented in high-voltage CMOS technology. NIM-A, 582(3):876–885, 2007. VERTEX 2006.
* [19] W. M. Yao et al. Review of particle physics. Journal of Physics G: Nuclear and Particle Physics, 33(1):1, 2006.
* [20] M. Valentan, M. Regler, and R. Frühwirth. Generalization of the Gluckstern formulas II: multiple scattering and non-zero dip angles. NIM-A, 606(3):728–742, 2009.
* [21] H. Bichsel. Straggling in thin silicon detectors. Rev. Mod. Phys., 60:663–699, 1988.
* [22] P. Billoir, R. Frühwirth, and M. Regler. Track element merging strategy and vertex fitting in complex modular detectors. NIM-A, 241(1):115–131, 1985.
* [23] E. J. Wolin and L. L. Ho. Covariance matrices for track fitting with the Kalman filter. NIM-A, 329(3):493–500, 1993.
* [24] E. Lund et al. Transport of covariance matrices in the inhomogeneous magnetic field of the atlas experiment by the application of a semi-analytical method. Journal of Instrumentation, 4(04):P04016, 2009.
* [25] Sebastian Schenk. A vertex fit for low momentum particles in a solenoidal magnetic field with multiple scattering. Master’s Thesis, Heidelberg University, 2013.
A Deep Learning Generative Model Approach
for Image Synthesis of Plant Leaves
Alessandro Benfenati 1, Davide Bolzi2, Paola Causin 2*, Roberto Oberti 3
1 Dept. of Environmental Science and Policy, Università degli Studi di Milano,
Milano, Italy
2 Dept. of Mathematics, Università degli Studi di Milano, Milano, Italy
3 Dept. of Agricultural and Environmental Sciences - Production, Landscape,
Agroenergy, Università degli Studi di Milano, Milano, Italy
*<EMAIL_ADDRESS>
## Abstract
### Objectives
We use advanced Deep Learning (DL) techniques to generate artificial leaf
images in an automated way. Our aim is to provide a source of training samples
for artificial intelligence applications in modern crop management for
agriculture, with a focus on disease recognition on plant leaves. Such
applications require large amounts of data and, while leaf images are not
truly scarce, image collection and annotation remains a very time-consuming
process. Data scarcity can be addressed by augmentation techniques consisting
of simple transformations of samples belonging to a small dataset, but the
richness of the augmented data is limited: this motivates the search for
alternative approaches.
### Methods
Pursuing an approach based on DL generative models, we propose a Leaf-to-Leaf
Translation (L2L) procedure structured in two steps: first, a residual
variational autoencoder architecture is used to generate synthetic leaf
skeletons (leaf profile and veins) starting from companion binarized skeletons
of real leaf images. In a second step, we perform the translation via a
Pix2pix framework, which uses conditional generative adversarial networks to
reproduce the colorization of leaf blades, preserving the shape and the
venation pattern.
### Results
The L2L procedure generates synthetic images of leaves with a realistic
appearance. We assess the performance both qualitatively and quantitatively;
for the latter evaluation, we employ a DL anomaly detection strategy which
quantifies the degree of anomaly of synthetic leaves with respect to real
samples.
### Conclusions
Generative DL approaches have the potential to be a new paradigm to provide
low-cost meaningful synthetic samples for computer-aided applications. The
present L2L approach represents a step towards this goal, being able to
generate synthetic samples with a relevant qualitative and quantitative
resemblance to real leaves.
## Author summary
In this work we present an end-to-end workflow incorporating state-of-the-art
Deep Learning strategies based on generative methods to produce realistic
synthetic images of leaves. To the best of our knowledge, this is the first
attempt of such an approach to this problem. Our focus application is the
training of neural networks for modern crop management systems in agriculture,
but we believe that many other computer-aided applications may benefit from
it. We take inspiration from previous works carried out on eye retina image
synthesis, an application domain which shares some similarities with the
present problem (a venation pattern over a colorized “fundus”). Our approach
relies on the successive use of autoencoders and generative adversarial
architectures, able to generate leaf images both in the Red-Green-Blue
channels as well as in the Near-Infra-Red. The generated leaf images have a
realistic appearance, even if they sometimes suffer from small
inconsistencies, especially discolored patches. A quantitative evaluation via
an anomaly detection algorithm shows that on average a synthetic sample is
classified as such only in 25% of the cases.
## Introduction
The ability to generate meaningful synthetic images of leaves is highly
desirable for many computer-aided applications. To the best of our knowledge,
attempts at generating synthetic images of leaves have been made mostly in the
field of computer graphics and were aimed at creating the illusion of
realistic landscapes covered with plants, trees or meadows. These efforts were
mainly based on procedural mathematical models describing the venation
structure and the color/texture of the leaf. A specific type of formal
grammar, called L-grammar, was developed to generate instructions to draw a
leaf. Several instances of the profile of leaves of a certain species were
created upon random variations of the parameters of the directives of a
certain L-grammar [1]. Biologically-motivated models were also proposed. A
main point of these approaches is the representation of the interaction
between auxin sources distributed over the leaf blade and the formation of
vein patterns [2]. Some attempts were also carried out using finite elements
to build mechanistic models of the leaf blade, tuned on its structural
parameters [3]. After generating the leaf shape and venation pattern
(regardless of the approach), texture and colors were rendered by a color
palette prescribed by the user or generated according to a pseudo-random
algorithm. A color model based on convolution sums of divisor functions was
proposed in [4], while a shading model based on the PROSPECT model for light
transmission in leaves [5] was proposed in [6]. “Virtual rice” leaves were
created in [7] based on an RGB-SPAD model.
In this work we aim at introducing a radically different approach, generating
artificial images of leaves with automated techniques based on Deep Learning
(DL). Our focus is mainly to enrich datasets of leaf images for neural network
training, even if we deem that the present approach may be of interest also in
a wide range of other fields, starting from computer graphics. The motivation
underlying this work is that DL methods require a large amount of data (often
of the order of hundreds of thousands of images) to avoid overfitting
phenomena. Data augmentation is a common remedy, usually consisting of simple
transformations such as random rotations, translations or deformations of the
original images. However, the richness of the augmented dataset is limited,
and more sophisticated approaches for synthesizing additional training data
have a greater potential to improve the training process. In this respect, DL
generative models represent attractive methods to produce synthetic images
(with corresponding labels) using the information from a limited set of real,
unlabeled images of the same domain. This idea is not new in absolute terms,
but it has been used mainly in the field of medicine, where data may be
extremely scarce and difficult to obtain (see, e.g., the recent review [8]).
In the present context, while scarcity of leaf images may not be a real issue,
what is more relevant is to avoid the huge amount of work required to collect,
examine and annotate images. This is especially true when image segmentation
should be performed, which is a pixel-wise problem: the acquisition of
annotated segmentation masks is exceedingly costly and time consuming, as a
human expert annotator has to label every pixel manually. For our model we
take inspiration from [9] (and references therein), where the authors
synthesised eye retina images. The fundus of the eye indeed shares several
characteristics with our problem: a fine network of hierarchically organized
blood vessels (leaf veins) superimposed on a colored background (leaf blade).
In addition, in our problem the leaf blade is also characterized by a specific
silhouette that must be represented as well. We propose a Leaf-to-Leaf
Translation (L2L) approach to obtain synthetic colorized leaf blades,
organized in two steps: first we use a residual variational autoencoder
architecture to generate fake leaf skeletons starting from binarized companion
skeletons of real leaf images. In a second step we perform the translation via
a Pix2pix framework, which uses conditional generative adversarial networks
(cGANs) to reproduce the specific color distribution of the leaf blade,
preserving leaf shape and venation pattern. We carry out both qualitative and
quantitative evaluations of the degree of realism of the synthetic samples of
leaves. Specifically, a DL-based anomaly detection strategy is used to
evaluate the distance (“anomaly”) between synthetic and real samples. The
results show a good degree of realism, that is a low anomaly score, and
indicate that with the present approach one can significantly enrich a small
dataset and improve the training performance of DL architectures.
## Materials and methods
### Dataset
Grapevine leaves were imaged via a QSi640 ws-Multispectral camera (Atik
Cameras, UK) equipped with a Kodak 4.2 Mp micro-lens image sensor and 8
spectral selection filters operating in the bands 430 to 740 nm. For the
purpose of this experiment, leaves were imaged individually on a dark
background, under controlled diffuse illumination conditions. Images were
acquired in the single spectral channels 430 nm (blue, B), 530 nm (green, G),
685 nm (red, R) and 740 nm (near-infrared, NIR). These channels are typically
considered when dealing with the task of recognition of plant diseases in a
multispectral analysis approach [10, 11]. A set of RGB images of the same
leaves in standard CIE color space was also acquired for reference. Camera
parameters were set and image collection was performed via an in-house
developed acquisition software written in MATLAB. Reflectance calibration was
carried out by including in each image 3 reflectance references (Spectralon R
= 0.02, R = 0.50 and R = 0.99; Labsphere, USA). We obtained photos of 80
leaves with a resolution of $2048\times 2048$ pixels and 8 bits per channel.
Preprocessing operations were performed on each image: removal of hot pixels,
normalization along each channel according to the reference probes, and
creation of a companion binarized skeleton image. For this latter procedure,
the NIR channel was used, since it presents a high contrast between the leaf
and the background. The skeleton comprises the profile of the leaf and the
vein pattern. Images and companion skeletons were resized to $256\times 256$
resolution. Fig 1 shows the original images in the RGB and RGNIR spaces, the
normalized NIR channel and the corresponding companion skeleton. Before using
the generative algorithms, we performed standard data augmentation by randomly
flipping each image horizontally and vertically, rotating by an angle randomly
chosen in $[-\pi/4,\pi/4]$ and finally zooming by a random amount in the range
$[-20\%,+20\%]$; a minimal sketch of this step is given after Fig 1. The
dataset was thus increased from 80 to 240 samples.
Fig 1: Sample of grapevine leaf from the dataset. A: RGB image; B: RGNIR
image; C: normalized and cropped NIR image; D: companion skeleton. In the
skeleton binarized image, the white color identifies the leaf profile and
veins, the black color identifies other parts of the leaf and the background.
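A minimal sketch of the augmentation step, assuming TensorFlow/Keras preprocessing layers (the in-house pipeline may differ in details), is:

```python
# Minimal sketch of the augmentation described above: random flips, rotation
# in [-pi/4, pi/4] and zoom of +-20%, using Keras preprocessing layers.
import numpy as np
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.125),   # fraction of 2*pi -> +-pi/4
    tf.keras.layers.RandomZoom(0.2),         # +-20%
])

def augment_dataset(images, copies=2):
    """Triple an (N, 256, 256, C) batch, as done for the 80 -> 240 dataset."""
    extra = [augment(images, training=True).numpy() for _ in range(copies)]
    return np.concatenate([images] + extra, axis=0)
```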
### Generative methods for L2L translation
The authors of [9, 12] generated artificial patterns of blood vessels along
with corresponding eye fundus images using a common strategy which divides the
image generation problem into two sub-problems, each one addressed by a
tailored DL architecture: first they generate the blood vessel tree, then they
color the eye fundus. We adopt this very approach, first generating the leaf
profile and veins and then coloring the leaf blade. In our experience, too,
this approach has turned out to be more effective than generating the
synthetic image altogether.
#### Skeleton Generation
According to the above considerations, the generation of a realistic leaf
skeleton is the first step towards the final goal of our work. For this task,
we use a convolutional autoencoder architecture, that is, a network trained to
reconstruct its input. An autoencoder (AE) is composed of two submodels: 1) an
encoder $Q$ that maps the training dataset to a latent (hidden) representation
$z$; 2) a decoder $P$ that maps $z$ to an output that aims to be a plausible
replica of the input. We found experimentally that simple autoencoders cannot
generate realistic skeletons. For this reason, we use a more sophisticated
architecture, called Residual Variational Autoencoder (ResVAE, see Fig 2).
Fig 2: Illustration of the ResVAE framework (training).
This learning framework has already been successfully applied to image
recognition, object detection, and image super-resolution (see, e.g., [13]).
In the data generation framework, AEs learn the projection of the initial data
into a _latent subspace_ , and then a sample of this subspace is randomly
extracted to build up a new instance of the initial data. Instead of learning
such a projection, VAEs learn the probability distribution of the latent
variables given the input $x$. As a matter of fact, a variational autoencoder
can be defined as an autoencoder whose training is regularized to avoid
overfitting and to ensure that the latent space has good properties enabling
the generative process. To achieve this goal, instead of encoding an input as
a single point, VAEs encode it as a (Gaussian) distribution over the latent
space, where $p(z|x)$ represents the probability of the latent variable $z$
given the input $x$. The decoding part consists in sampling a variable from
$p(z|x)$ and then providing a reconstruction $\widehat{x}$ of the initial data
$x$. We associate to this framework the following loss function
$\mathcal{L}_{\text{VAE}}(x,\widehat{x})=\mathcal{L}_{L_{2}}(x,\widehat{x})+\beta\,\mathcal{L}_{KL}\left(p(z|x),\mathcal{N}(0,1)\right),$
(1)
where the first term $\mathcal{L}_{L_{2}}=||x-\widehat{x}||^{2}$ is the
$L_{2}$ reconstruction loss, and the second term
$\mathcal{L}_{KL}=KL[\mathcal{N}(\mu_{x},\sigma_{x}),\mathcal{N}(0,1)]$ is the
Kullback-Leibler (KL) divergence [14, 15]. The KL divergence enhances sparsity
in neuron activations to improve the quality of the latent features, keeping
the corresponding distribution close to the Gaussian distribution
$\mathcal{N}(0,1)$. The tunable regularization hyperparameter $\beta$ is used
to weigh the two contributions [16]. With respect to VAEs, ResVAEs
additionally employ residual blocks and skip connections. The idea behind
residual blocks is the following [17]: normal layers try to directly learn an
underlying mapping, say $h(x)$, while residual ones approximate a residual
function $r(x)=h(x)-x$. Once the learning is complete, $r(x)$ is added to the
input to retrieve the mapping: $h(x)=r(x)+x$. In our architecture, residual
blocks are concatenated to the decoder to increase the capacity of the model
[13]. The skip connections allow the gradients to be back-propagated more
efficiently, giving the bottleneck more access to the simpler features
extracted earlier in the encoder. The resulting ResVAE compresses $256\times
256$ leaf skeleton images to a low-dimensional latent vector of size 32 and
then reconstructs them back to $256\times 256$ images. We refer to S1 Appendix
for the specifications of the present ResVAE architecture.
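A minimal sketch of the loss in eq. 1, written for a Keras-style model whose encoder outputs the mean and log-variance of $p(z|x)$ (names are illustrative; $\beta=75$ as in S1 Appendix), is:

```python
# Minimal sketch of the ResVAE training loss of eq. 1: L2 reconstruction
# term plus beta-weighted KL divergence of the encoded Gaussian against
# the standard normal N(0, 1).
import tensorflow as tf

def vae_loss(x, x_hat, z_mean, z_log_var, beta=75.0):
    # ||x - x_hat||^2 summed over the image dimensions
    recon = tf.reduce_sum(tf.square(x - x_hat), axis=[1, 2, 3])
    # Closed form of KL( N(mu, sigma) || N(0, 1) ) per latent dimension
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)
    return tf.reduce_mean(recon + beta * kl)
```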
#### Translation to colorized leaf image
We consider the colorization of the leaf out of an existing skeleton as an
image-to-image translation problem, which implies learning a mapping from the
binary vessel map into another representation. Similarly to what was observed
in [9] for retinal image generation, many different leaf images can share a
similar binary skeleton due to variations in color, texture and illumination.
For this reason, learning the mapping is an ill-posed problem and some
uncertainty is present. We learn the mapping via a Pix2pix net (also known as
conditional GAN, cGAN), an unsupervised generative model which represents a
variation of a standard GAN. As such, it includes two deep neural networks, a
generator $G$ and a discriminator $D$. The generator aims to capture the data
distribution, while the discriminator estimates the probability that a sample
actually came from the training data rather than from the generator. In order
to learn a generative distribution over the data $x$, the generator builds a
mapping $G(z;\theta_{G})$ from a prior noise distribution $p_{z}$ to the image
data space, $\theta_{G}$ being the generator parameters. The discriminator
outputs the probability that $x$ came from the real data distribution
$p_{data}(x)$ rather than from the generated one. We denote by
$D(x;\theta_{D})$ the discriminator function, $\theta_{D}$ being the
discriminator parameters. In standard GANs, the optimal mapping $G^{*}$ is
obtained as the equilibrium point of the min-max game:
$(G^{*},D^{*})=\arg\min_{G}\max_{D}\mathcal{L}_{GAN}(D,G),$
where we have defined the objective function
$\mathcal{L}_{GAN}(D,G):=\mathbb{E}_{x\sim p_{data}(x)}[\log D(x;\theta_{D})]+\mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z;\theta_{G})))].$ (2)
In the conditional framework, an extra variable $y$ is added as a further
source of information on $G$, which combines the noise prior $p_{z}(z)$ and
$y$. The objective function thus becomes
$\mathcal{L}_{cGAN}(D,G)=\mathbb{E}_{x\sim p_{data}(x)}[\log D(x;\theta_{D})]+\mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z|y;\theta_{G})))].$ (3)
Previous approaches have found it beneficial to mix the GAN objective with a
more traditional loss, such as the $L_{2}$ distance [18]. The discriminator's
job remains unchanged, but the generator is bound not only to fool the
discriminator but also to stay near the ground truth output in an $L_{2}$
sense. In this work we explore instead the use of the $L_{1}$ distance, as
$L_{1}$ promotes sparsity and at the same time encourages less blurring [19]:
$\mathcal{L}_{L_{1}}(G)=\mathbb{E}_{x,y,z}[||y-G(z|y;\theta_{G})||_{1}].$ (4)
The final objective is thus
$(G^{*},D^{*})=\arg\min_{G}\max_{D}\mathcal{L}_{cGAN}(D,G)+\lambda\,\mathcal{L}_{L_{1}}(G),$ (5)
where $\lambda$ is a regularization hyperparameter. In our implementation, the
extra information corresponds to the leaf skeletons, which condition $G$ in
the image generation task to preserve leaf shape and venation pattern. The
discriminator is provided with skeleton plus generated image pairs and must
determine whether the generated image is a plausible (feature preserving)
translation. Fig 3 shows the training process of the cGAN. We refer to S2
Appendix for the specifications of the Pix2pix architecture we adopted.
Fig 3: Illustration of the Pix2Pix framework (training).
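A minimal sketch of the resulting generator objective (the non-saturating form of the adversarial term in eq. 3 plus the $L_{1}$ term of eq. 4, with $\lambda=100$ as in S2 Appendix) is given below; variable names are illustrative.

```python
# Minimal sketch of the Pix2pix generator objective of eq. 5: adversarial
# term (non-saturating form) plus lambda-weighted L1 term of eq. 4.
# disc_fake: discriminator logits on the (skeleton, generated leaf) pair;
# y: target image; g: generated image.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_fake, y, g, lam=100.0):
    adversarial = bce(tf.ones_like(disc_fake), disc_fake)   # fool D
    l1 = tf.reduce_mean(tf.abs(y - g))                      # eq. 4
    return adversarial + lam * l1
```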
#### L2L workflow: from random samples to leaf images
Upon training of the ResVAE and Pix2pix architectures, we dispose of an
end-to-end procedure for the generation of synthetic leaves. The procedure,
which is completely unsupervised, can be summarized as follows (see also Fig 4
and the sketch after it):
1. Load the weights of the trained ResVAE decoder and Pix2pix generator.
2. Draw a random vector from a normal distribution whose parameters are chosen according to the ResVAE latent space representation (note that its size equals the dimension of the latent space used in the ResVAE, 32 in the present case).
3. Input the random vector into the trained ResVAE decoder and generate a leaf skeleton.
4. Input the leaf skeleton into the trained generator of the Pix2Pix net to translate it into a fully colorized leaf.
Fig 4: L2L workflow illustration. A random input vector is drawn from the
ResVAE latent space representation and is input into the trained ResVAE
decoder. This latter outputs a synthetic leaf skeleton, which in turn is fed
into the trained generator of the Pix2Pix and translated into a corresponding
colorized leaf.
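A minimal sketch of the full procedure, with illustrative model file names (the trained weights are assumed to be available on disk), is:

```python
# Minimal sketch of the four-step L2L generation procedure; `decoder` and
# `generator` stand for the trained ResVAE decoder and Pix2pix generator.
import numpy as np
import tensorflow as tf

decoder = tf.keras.models.load_model("resvae_decoder.h5")       # step 1
generator = tf.keras.models.load_model("pix2pix_generator.h5")

def generate_leaves(n, latent_dim=32):
    z = np.random.normal(size=(n, latent_dim))                  # step 2
    skeletons = decoder.predict(z)                              # step 3
    leaves = generator.predict(skeletons)                       # step 4
    return skeletons, leaves
```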
## Results and Discussion
The proposed technique can be employed to generate as many synthetic leaf
images as the user requires. The model has been implemented with Keras (code
to reproduce our experiments will be made available upon publication of this
work). Upon generation of the synthetic images, their quality is assessed by
performing both qualitative (visual) and quantitative evaluations, as follows.
### Visual qualitative evaluation
Consistency test. Beforehand, we have evaluated the consistency of the
methodology by verifying that the net has learned to translate a leaf sample
comprised in the training set into itself. Fig 5 shows an example of this
test. The generated leaf is very similar to the real one, except for some vein
discoloration and a small blurring effect, which is a well-known product of
AEs employed in image generation [20].
Fig 5: Consistency test. The companion binarized leaf skeleton of a real leaf
is passed through the generator of the Pix2Pix net to check whether the
synthetic colorized leaf blade is similar to the original one. A: companion
skeleton of a real leaf; B: synthetic colorized blade generated; C: original
leaf.
Translation from unseen real companion skeleton. Having ensured that the model
has learned to translate on the training data, we verify that it is able to
produce reliable synthetic images using skeletons obtained from leaves that
are not part of the training dataset. Fig 6 shows an instance of colorized
leaf obtained from this test.
Fig 6: Translation from unseen real companion skeleton. A binarized leaf
skeleton companion of a real leaf not belonging to the training set is passed
through the generator of the Pix2Pix net. A: companion skeleton; B: synthetic
colorized blade.
Full L2L translation. Fig 7 shows several instances of synthetic colorized
leaves obtained starting from different random latent vectors. Note that the
generated leaf images differ in terms of their global appearance, that is, the
model generalizes and does not trivially memorize the examples. As a note, one
should observe that some discolored parts may appear. Moreover, the skeletons
sometimes show small artifacts consisting of disconnected pixels positioned
outside the leaf boundary (not appearing in Fig 7). This latter issue will be
addressed via a refinement algorithm explained below.
Fig 7: Full L2L translation results. Examples of synthetic colorized leaves
along with the corresponding synthetic companion skeletons.
L2L-RGNIR translation. As mentioned above, applications in crop management
also require images in the NIR channel. To do this, we use the same L2L
generation procedure as for the RGB channels, starting from RGNIR images as
Pix2Pix targets. Since the same leaf skeletons are used, it is not necessary
to re-train the ResVAE if this procedure has already been carried out for the
RGB case. Fig 8 shows some results of this model.
Fig 8: L2L-RGNIR translation results. Examples of synthetic leaves colorized
in the RGNIR channels along with the corresponding synthetic companion
skeletons.
Refinement algorithm. We have already discussed the fact that synthetically
generated images may sometimes present artifacts (leaf regions that appear
detached from the leaf blade). Obviously this is not realistic and we need to
remove such artifacts. The refinement algorithm is implemented at present in a
procedural way and is based on the idea of finding the contours of all the
objects and removing all objects lying outside the leaf contour; a minimal
sketch is given after Fig 9. Note that this procedure must take care to leave
internal holes intact, because in nature such holes are the result of the
superposition of leaf lobes or of several abiotic/biotic conditions. Fig 9
shows the first leaf in Fig 7, which presents artifacts (panel A, zoomed area
including the artifact in panel B), and its cleaned counterpart (panel C).
Fig 9: Refinement algorithm. The generative procedure sometimes produces
artifacts, that is leaf regions that appear outside the leaf blade. These
artifacts are corrected by procedurally finding the contours of all the
objects in the image and removing the objects outside the leaf contour. A:
first leaf in Fig 7 presenting artifacts; B: inset showing the magnified
artifacts; C: cleaned leaf.
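A minimal sketch of this refinement, assuming an OpenCV implementation (our procedural version may differ in details), is:

```python
# Minimal sketch of the refinement step: keep only the largest outer contour
# (the leaf) and erase disconnected blobs outside it. Internal holes survive
# because the original pixel values are kept inside the filled leaf contour.
import cv2
import numpy as np

def remove_artifacts(binary_leaf):
    """binary_leaf: uint8 mask, 255 = leaf pixels, 0 = background."""
    contours, _ = cv2.findContours(binary_leaf, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return binary_leaf
    largest = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(binary_leaf)
    cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    return cv2.bitwise_and(binary_leaf, mask)   # drop blobs outside the leaf
```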
### Quantitative Quality Evaluation
In order to quantitatively assess the quality of the generated leaves, we
employ a DL-based anomaly detection strategy. This approach is discussed in
detail in [21]; here we briefly recall the main points. The strategy consists
in training an AE to compress real leaf images into a latent subspace and then
reconstruct the images using the latent representation (see the Skeleton
Generation section for the same concept). Once the network is trained in this
way, we feed it with a synthetic image generated by our procedure. The AE
encodes it in the latent space and tries to recover the original image
according to its training rules. Since the net has been trained to be the
identity operator for real images, if the artificial images are substantially
different, an anomalous reconstruction is obtained. Fig 10 provides a visual
schematization of this approach. The figure also details the score system used
to detect the anomaly.
Fig 10: AE for anomaly detection. The AE is trained with images of real leaves
to be the identity operator of the input. A synthetic leaf with a low level of
similarity is recognized as an anomaly if fed into the trained AE and its
anomaly score $s_{x}$ is high.
The degree of anomaly is quantified via the ROC curve and its area, the AUC
index [22]. For the latter, we found AUC=0.25, which means that for a random
synthetic image fed into the AE there is a 25% probability of classifying it
as an anomaly, that is, as synthetic instead of real. While this result does
not indicate a perfect reconstruction of the real leaves, it shows that the
synthetic leaves are a reasonably accurate surrogate of real leaves and can be
used for a first massive training at a very low cost. A successive refinement
can then be applied using a limited number of real leaves upon transfer
learning.
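A minimal sketch of the anomaly score computation and the AUC evaluation is given below; the exact score of [21] may differ, and the per-image mean squared reconstruction error stands in for $s_{x}$.

```python
# Minimal sketch: the AE (trained on real leaves only) reconstructs an
# input, and the per-image reconstruction error serves as anomaly score;
# thresholding the score yields the ROC curve and its AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_scores(autoencoder, images):
    recon = autoencoder.predict(images)
    return np.mean((images - recon) ** 2, axis=(1, 2, 3))   # s_x per image

# Usage: labels are 1 for synthetic (anomalous), 0 for held-out real samples.
# scores = anomaly_scores(ae, np.concatenate([synthetic, real]))
# auc = roc_auc_score(labels, scores)
```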
Fig 11: Quantification of anomaly via the ROC curve and AUC index. A point on
the ROC curve represents, for a certain threshold on the anomaly score, the
false positive rate (genuinely real images) vs the true positive rate
(genuinely synthetic images). The value AUC=0.25 means that a synthetic image
is (mis-)classified as synthetic in 25% of cases. The dotted line represents
the result one would obtain by tossing a coin to decide whether an image is
artificial or real.
## Conclusion
The goal of this work was to explore advanced DL generative methods to produce
realistic images of leaves to be used in computer-aided applications. The main
focus was on the generation of artificial samples of leaves to be used to
train DL networks for modern crop management systems in precision agriculture.
Disposing of synthetic samples which have a reasonable resemblance to real
samples alleviates the burden of manually collecting and annotating hundreds
of data items. The Pix2pix net performs good translations from the leaf
skeletons generated by the ResVAE, except for some discolored parts, both for
the colorization of RGB and RGNIR images. Also, the leaves generated by the
ResVAE sometimes have pixels positioned outside the boundary which, if not
corrected, can cause artifacts in the synthetic leaves. An easy procedure has
been proposed as well to correct these artifacts. We believe that the
generative approach can significantly contribute to automating the process of
building a low-cost training set for DL applications. Several other
computer-aided applications may also benefit from such a strategy, wherever
many samples are required, possibly with different degrees of accuracy in the
representation.
## Author contribution
Conceptualization: Alessandro Benfenati, Paola Causin
Dataset: Alessandro Benfenati, Davide Bolzi, Paola Causin, Roberto Oberti
Methodology: Alessandro Benfenati, Davide Bolzi, Paola Causin
Implementation: Davide Bolzi
Analysis: Alessandro Benfenati, Davide Bolzi, Paola Causin, Roberto Oberti
Writing: Alessandro Benfenati, Paola Causin
## Supporting information
##### S1 Appendix
Implementation and training of the ResVAE neural network. The architecture,
inspired by the one described in [23], is shown in Fig 12.
Fig 12: ResVAE architecture. Building blocks of the encoder and decoder
components of the ResVAE. The convolutional filters have kernels of size
$4\times 4$. The residual block is formed by 5 convolutional layers of 16
filters each, with kernel of size $4\times 4$ and stride equal to 1, followed
by a Batch Normalization layer and a LeakyReLU activation function.
The training is performed via a stochastic gradient descent strategy, with
gradients computed by standard back-propagation; we use the Adam optimizer
with learning rate $\eta=0.001$ and we train the model for 2000 epochs with a
batch size of 64. After a hyper-parameter search, $\beta$ in the loss function
(1) was set to 75.
##### S2 Appendix
Implementation and training of the Pix2pix neural network.
The Pix2pix is a GAN architecture designed for image-to-image translation,
originally presented in [19] and comprising a generator and a discriminator.
The discriminator is a deep neural network that performs image classification.
It takes both the source image (leaf skeleton) and the target image (colorized
leaf) as input and predicts the likelihood of whether the target image is real
or a fake translation of the source image. We use a PatchGAN model, which
tries to establish whether each $N\times N$ (local) patch in the image is real
or fake. We run this discriminator convolutionally across the image, averaging
all responses to provide the ultimate output of the discriminator. The
generator is an encoder-decoder model using a U-Net architecture with
feature-map concatenation between corresponding blocks of the encoder/decoder.
The encoder and decoder of the generator are comprised of standardized blocks
of convolution, batch normalization, dropout, and activation layers. We
proceed as suggested in [19]: the generator is updated via a weighted sum of
both the adversarial loss and the $L_{1}$ loss, where the parameter $\lambda$
in the loss function eq. 5 is set to 100, in order to encourage the generator
to produce plausible translations of the input image, and not just plausible
images in the target domain. We initialize the generator/discriminator weights
with a normal distribution of zero mean and standard deviation $\sigma=0.002$;
we use the Adam optimizer with a learning rate $\eta=0.0002$ and we train the
generator/discriminator paired model for 12000 training steps, using a batch
size of 1. Fig 13 shows the generator and discriminator architectures.
Fig 13: Pix2pix architecture. Building blocks of the generator and
discriminator components.
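A minimal sketch of a PatchGAN-style discriminator of this kind is given below; the layer counts and filter sizes are illustrative and do not reproduce the exact configuration of Fig 13.

```python
# Minimal sketch of a PatchGAN discriminator: it takes the (skeleton, leaf)
# pair concatenated on the channel axis and outputs a grid of real/fake
# logits, one per local patch of the input.
import tensorflow as tf
from tensorflow.keras import layers

def build_patchgan(shape=(256, 256, 3)):
    skeleton = layers.Input(shape=shape)
    leaf = layers.Input(shape=shape)
    x = layers.Concatenate()([skeleton, leaf])
    for filters in (64, 128, 256, 512):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)
    patch_logits = layers.Conv2D(1, 4, padding="same")(x)   # per-patch score
    return tf.keras.Model([skeleton, leaf], patch_logits)
```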
## Acknowledgments
We acknowledge support from the SEED PRECISION project (PRecision crop
protection: deep learnIng and data fuSION), funded by Università degli Studi
di Milano. AB and PC are part of the GNCS group of INdAM (Istituto Nazionale
di Alta Matematica “Francesco Severi”).
## References
* 1. Peyrat A, Terraz O, Merillou S, Galin E. Generating vast varieties of realistic leaves with parametric 2Gmap L-systems. Vis Comput. 2008;24(7):807–816.
* 2. Runions A, Fuhrer M, Lane B, Federl P, Rolland-Lagan AG, Prusinkiewicz P. Modeling and visualization of leaf venation patterns. In: ACM SIGGRAPH 2005 Papers; 2005. p. 702–711.
* 3. Samee SB. Modeling and Simulation of Tree Leaves Using Image-Based Finite Element Analysis. PhD Thesis, University of Cincinnati; 2012.
* 4. Kim D, Kim J. Procedural modeling and visualization of multiple leaves. Multimed Sys. 2017;23(4):435–449.
* 5. Féret JB, Gitelson A, Noble S, Jacquemoud S. PROSPECT-D: towards modeling leaf optical properties through a complete lifecycle. Remote Sens Environ. 2017;193:204–215.
* 6. Miao T, Zhao C, Guo X, Lu S. A framework for plant leaf modeling and shading. Math Comput Model. 2013;58(3-4):710–718.
* 7. Yi Wl, He Hj, Wang Lp, Yang Hy. Modeling and simulation of leaf color based on virtual rice. DEStech Trans Mater Sci Eng. 2016;(mmme).
* 8. Taghanaki SA, Abhishek K, Cohen JP, Cohen-Adad J, Hamarneh G. Deep semantic segmentation of natural and medical images: a review. Artif Intell Rev. 2021;54(1):137–178.
* 9. Costa P, Galdran A, Meyer MI, Niemeijer M, Abràmoff M, Mendonça AM, et al. End-to-end adversarial retinal image synthesis. IEEE Trans Med Imaging. 2017;37(3):781–791.
* 10. Oberti R, Marchi M, Tirelli P, Calcante A, Iriti M, Borghese AN. Automatic detection of powdery mildew on grapevine leaves by image analysis: Optimal view-angle range to increase the sensitivity. Comput Electron Agric. 2014;104:1–8.
* 11. Mahlein AK, Kuska MT, Behmann J, Polder G, Walter A. Hyperspectral sensors and imaging technologies in phytopathology: state of the art. Annu Rev Phytopathol. 2018;56:535–558.
* 12. Sengupta S, Athwale A, Gulati T, Zelek J, Lakshminarayanan V. FunSyn-Net: enhanced residual variational auto-encoder and image-to-image translation network for fundus image synthesis. In: Medical Imaging 2020: Image Processing. vol. 11313. International Society for Optics and Photonics; 2020. p. 113132M.
* 13. Cai L, Gao H, Ji S. Multi-stage variational auto-encoders for coarse-to-fine image generation. In: Proceedings of the 2019 SIAM International Conference on Data Mining. SIAM; 2019. p. 630–638.
* 14. Asperti A, Trentin M. Balancing Reconstruction Error and Kullback-Leibler Divergence in Variational Autoencoders. IEEE Access. 2020;8:199440–199448.
* 15. Benfenati A, Ruggiero V. Image regularization for Poisson data. Journal of Physics: Conference Series. 2015 nov;657:012011. Available from: https://doi.org/10.1088/1742-6596/657/1/012011.
* 16. Higgins I, Matthey L, Pal A, Burgess C, Glorot X, Botvinick M, et al. $\beta$-VAE: Learning basic visual concepts with a constrained variational framework. In: ICLR 2017 Conference Proceedings; 2017.
* 17. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. p. 770–778.
* 18. Kurach K, Lučić M, Zhai X, Michalski M, Gelly S. A large-scale study on regularization and normalization in GANs. In: International Conference on Machine Learning. PMLR; 2019. p. 3581–3590.
* 19. Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. p. 1125–1134.
* 20. Huang H, Li Z, He R, Sun Z, Tan T. IntroVAE: Introspective variational autoencoders for photographic image synthesis. arXiv preprint arXiv:1807.06358; 2018.
* 21. Benfenati A, Causin P, Oberti R, Stefanello G. Unsupervised feature–oriented deep learning techniques for powdery mildew recognition based on multispectral imaging. Submitted.
* 22. Bradley AP. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997;30(7):1145–1159.
* 23. Chollet F. Variational AutoEncoder; 2020. https://keras.io/examples/generative/vae/.
$\begin{aligned}h_{\pi\pi,0}&=-0.09\pm 0.60\,, & h_{\pi\pi,1}&=0.001\pm 0.400\,,\\ h_{\overline{K}K,0}&=0.8\pm 1.6\,, & h_{\overline{K}K,1}&=-0.28\pm 0.50\,,\\ m&=0.1338\,(5)\cdot a_{t}^{-1}\,, & g_{\pi\pi}&=0.441\,(9)\,,\\ g_{K\overline{K}}&=0.17\,(30)\,, & \gamma_{\pi\pi,\pi\pi}&=(2.9\pm 0.9)\cdot a_{t}^{2}\,,\\ \gamma_{\pi\pi,K\overline{K}}&=-(2.4\pm 5.0)\cdot a_{t}^{2}\,, & \gamma_{K\overline{K},K\overline{K}}&=-(2.2\pm 4.0)\cdot a_{t}^{2}\,,\end{aligned}$
with correlation matrix
$\begin{bmatrix}1&-0.5&-0.5&0.6&-0.2&0.3&-0.4&0.2&{\bf-1}&-0.5\\ &1&-0.4&0.4&0&-0.6&{\bf 1}&-0.2&0.5&{\bf 1}\\ &&1&{\bf-1}&0.1&0.3&-0.5&0.1&0.5&-0.5\\ &&&1&-0.1&-0.3&0.5&-0.2&-0.6&0.4\\ &&&&1&-0.2&0&-0.2&0.1&0\\ &&&&&1&-0.6&0.6&-0.2&-0.6\\ &&&&&&1&-0.3&0.4&{\bf 1}\\ &&&&&&&1&-0.1&-0.2\\ &&&&&&&&1&0.5\\ &&&&&&&&&1\end{bmatrix}$
and $\chi^{2}/N_{\text{dof}}=\frac{126.9}{32-4}=4.53$,
(51)
where we also present the parameters of the scattering amplitude to illustrate the correlation between the functions $\mathcal{F}_{a}$ and $\mathcal{M}$. (The quoted $\chi^{2}$ describes only the variation of the smooth function parameters, $h$. The slight variations of the correlation matrix in Eq. 51 with respect to what is reported in Eq. 16 are due to the fact that only a subset of 348 configurations, out of the 400 available to calculate the spectrum, are used for the extraction of the form factors.) Note that some of the smooth function parameters, $h$, are maximally correlated or anticorrelated with parameters in the scattering amplitude.
Even though the smooth functions are individually consistent with zero, when
weighted by the finite-volume correction factors or the scattering amplitude,
the resulting values are not compatible with zero. For this to occur, it is
necessary, although not sufficient, that the functions $\mathcal{F}_{a}$ have
a significant correlation with the scattering amplitude $\mathcal{M}$, which
can be seen in Eq. 51.
## Appendix H Spacelike form factor of the pion and renormalization constant
The pion form-factor in the _spacelike_ region was extracted from three-point
correlation functions, $\langle 0|\Omega_{\pi}(\Delta
t)\,\mathcal{J}(t)\,\Omega^{\dagger}_{\pi}(0)|0\rangle$, computed with a
single value of $\Delta t=32\,a_{t}$. Details of the computational approach
are presented in Ref. Radhakrishnan _et al._ (2022) and Ref. Shultz _et al._
(2015). In order to cover a wide kinematic region, correlators were computed
giving access to matrix elements,
$\matrixelement{\pi(\mathbf{p}_{1})}{\mathcal{J}^{i}_{\rho,\text{lat}}}{\pi(\mathbf{p}_{2})}$,
for combinations of pion momenta up to $|\mathbf{p}_{i}|^{2}\leq
6\,\left(\tfrac{2\pi}{L}\right)^{2}$, and current momentum insertion up to
$|\mathbf{p}_{1}-\mathbf{p}_{2}|^{2}\leq 4\,\left(\tfrac{2\pi}{L}\right)^{2}$.
In a previous analysis of some of these correlation functions in Ref.
Radhakrishnan _et al._ (2022), the leading time dependence was removed by
forming the combination,
$\widetilde{C}_{\text{3pt}}(\Delta t,t)=\frac{\langle 0|\Omega_{\pi}(\Delta
t)\,\mathcal{J}(t)\,\Omega^{\dagger}_{\pi}(0)|0\rangle}{e^{-E_{\mathbf{p}_{1}}(\Delta
t-t)}e^{-E_{\mathbf{p}_{2}}t}}\,,$ (52)
where $E_{\mathbf{p}}$ corresponds to the energy of a single-pion state of
momentum $\mathbf{p}$, and where the normalization of the optimized operators
follows the same convention as in the main text.
It can be the case that the timeslice-to-timeslice data correlation for this
quantity is considerable, resulting in fits with reasonable values of
$\chi^{2}$ which undershoot the data. One such case is presented in panel (a)
of Fig. 24.
An alternative approach is to form a ratio using optimized two-point
correlation functions,
$R_{\text{3pt}}(\Delta t,t)=4E_{\mathbf{p}_{1}}E_{\mathbf{p}_{2}}\frac{\langle 0|\Omega_{\pi}(\Delta t)\,\mathcal{J}(t)\,\Omega^{\dagger}_{\pi}(0)|0\rangle}{\langle 0|\Omega_{\pi}(\Delta t\!-\!t)\,\Omega^{\dagger}_{\pi}(0)|0\rangle\,\langle 0|\Omega_{\pi}(t)\,\Omega^{\dagger}_{\pi}(0)|0\rangle}\,,$ (53)
which will have the same constant contribution, but differing excited-state
contributions. This combination proves to have much smaller timeslice-to-
timeslice data correlation, and fits follow more closely the data points. This
is illustrated in panel (b) of Fig. 24. Fits to a constant, and a constant
with an excited state exponential at source or sink or both are carried out
for a range of time windows, and the results averaged using the AIC as in the
two-point function case discussed previously. The columns on the right
describe the time window of the fit $[t_{\text{min}},t_{\text{max}}]$, and
the number of exponentials at the source, $n_{\text{src}}$, and sink,
$n_{\text{snk}}$.
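A minimal sketch of how the ratio of Eq. 53 could be formed from correlator arrays is given below; the array layout and function name are assumptions, not the analysis code used here.
```python
import numpy as np

def ratio_3pt(c3pt, c2pt, E1, E2, dt):
    """Form the ratio of Eq. 53: c3pt[t] is the three-point correlator at
    current-insertion time t for fixed source-sink separation dt, and
    c2pt[t] the optimized two-point correlator, assumed to cover at
    least dt+1 timeslices. E1, E2 are the single-pion energies."""
    t = np.arange(dt + 1)
    return 4.0 * E1 * E2 * c3pt[t] / (c2pt[dt - t] * c2pt[t])
```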
Figure 24: Fits to three-point correlation functions with
$\mathbf{p}_{1}=\mathbf{p}_{2}=\tfrac{2\pi}{L}[110]$ (averaged over rotations
and directions of current insertion) and fixed $\Delta t=32\,a_{t}$ using
either (a) Eq. 52 or (b) Eq. 53. The fitted constant value in this case corresponds to $1/Z_{V}^{\ell}$, the vector current renormalization constant.
Variation of fit window is shown in the right columns, along with the model-
averaged result.
The difference with respect to the previous method using $\widetilde{C}_{\text{3pt}}(\Delta t,t)$ is modest, but it is the origin of any differences between the current analysis and that in Ref. Radhakrishnan _et al._ (2022), such as for the light-quark vector current renormalization factor as
shown in Fig. 25.
Figure 25: Determination of light-quark vector current renormalization factor
$Z_{V}^{l}$ by correlated fitting of extractions from six values of pion
momentum.
# Excess area dependent scaling behavior of nano-sized membrane tethers
N. Ramakrishnan Department of Bioengineering, University of Pennsylvania,
Philadelphia, PA, 19104, USA, Arpita Roychoudhury Department of Physics,
Indian Institute of Science Education and Research, Pune, 411008, India,
David M. Eckmann Department of Bioengineering, University of Pennsylvania,
Philadelphia, PA, 19104, USA, Department of Anesthesiology and Critical Care,
University of Pennsylvania, Philadelphia, PA, 19104, USA, Portnovo S.
Ayyaswamy Department of Mechanical engineering and Applied Mechanics,
University of Pennsylvania, Philadelphia, PA, 19104, USA, Tobias Baumgart
Department of Chemistry, University of Pennsylvania, Philadelphia, PA, 19104,
USA, Thomas Pucadyil Department of Biology, Indian Institute of Science
Education and Research, Pune, 411008, India, Shivprasad Patil Department of
Physics, Indian Institute of Science Education and Research, Pune, 411008,
India, Valerie M. Weaver Department of Surgery and Anatomy, University of
California San Francisco, San Francisco, CA, 94143, USA, Ravi Radhakrishnan
Department of Chemical and Biomolecular engineering, University of
Pennsylvania, Philadelphia, PA, 19104, USA, Department of Biochemistry and
Biophysics, University of Pennsylvania, Philadelphia, PA, 19104, USA
<EMAIL_ADDRESS>
###### Abstract
Thermal fluctuations in cell membranes manifest as an excess area (${\cal
A}_{\rm ex}$) which governs a multitude of physical processes at the sub-micron
scale. We present a theoretical framework, based on an in silico tether
pulling method, which may be used to reliably estimate ${\cal A}_{\rm ex}$ in
live cells. The tether forces estimated from our simulations compare well with
our experimental measurements for tethers extracted from ruptured GUVs and
HeLa cells. We demonstrate the significance and validity of our method by
showing that all our calculations along with experiments of tether extraction
in 15 different cell types collapse onto two unified scaling relationships
mapping tether force, tether radius, bending stiffness $\kappa$, and membrane
tension $\sigma$. We show that ${\cal R}_{\rm bead}$, the size of the wetting
region, is an important determinant of the radius of the extracted tether,
which is equal to $\xi=\sqrt{\kappa/2\sigma}$ (a characteristic length scale
of the membrane) for ${\cal R}_{\rm bead}{}<\xi$, and is equal to ${\cal
R}_{\rm bead}$ for ${\cal R}_{\rm bead}>\xi$. We also find that the estimated
excess area follows a linear scaling behavior that only depends on the true
value of ${\cal A}_{\rm ex}$ for the membrane, based on which we propose a
self-consistent technique to estimate the range of excess membrane areas in a
cell.
Keywords : _mechanotype, excess area, membrane tether, tether pulling,
umbrella sampling, dynamically triangulated Monte Carlo_
The mechanical properties of a cell can be used as a surrogate marker to
identify cellular phenotypes. Mechanical characterization (or mechanotyping)
has been particularly useful in identifying a number of pathophysiologies —
well known examples include the stiffening of malaria infected erythrocytes
and hepatocytes, the softening of metastatic cancer cells, and the sickle
shape of an erythrocyte laden with hemoglobin S [1, 2, 3]. Several works in
biomechanics have aimed to characterize cells based on mechanical measurements
using a wide range of techniques such as flow and optical cytometry,
manipulation using micropipette aspiration, optical tweezers and laser traps,
and microfluidic devices (see [1, 4, 5] for comprehensive reviews). These
studies have focused on whole cell measurements and hence have investigated
the relationship between the mechanotype and pathophysiology at the cellular
and tissue scales. In many cases, the changes in mechanical properties are
primarily caused by variations in the structure and organization of the
cellular cytoskeleton [6] and the extracellular matrix [7]. Such subcellular
scale rearrangements can significantly impact the mechanical properties of the
cell membrane at length-scales smaller than cellular dimensions (i.e., tens of
nanometers to less than one micron), a range which also corresponds to the
scale at which the cell membrane is effective as an organizer and a host of
functional signaling complexes.
The sub-cellular scale relevant to the above discussion corresponds to the
dimensions primarily set by the cortical cytoskeletal mesh, which has been
estimated to be between $l_{c}=150-500$ nm [8, 9]. The mechanical properties
of a patch of the cell membrane that spans the region between multiple
cytoskeletal pinning points, with typical dimensions $l_{c}$, can differ from
the bulk because the nature of the thermal undulations (and the associated
conformational entropy of the membrane) depends directly on $l_{c}$, and in
turn influences the system’s free energy. The total area of the membrane
(denoted by ${\cal A}$) is in general larger than the projected area of the
cytoskeletal mesh (denoted by ${\cal A}_{\rm patch}{}$). The characteristics
of the membrane deformations and undulations can be described by a
dimensionless scalar quantity called the membrane excess area, given as ${\cal A}_{\rm ex}=100\,({\cal A}-{\cal A}_{\rm patch})/{\cal A}_{\rm patch}$; the membrane is taken to be flat when ${\cal A}_{\rm ex}=0$ and curved/ruffled if ${\cal A}_{\rm ex}>0$. The presence of excess area (and curvature
gradients) can alter the local signaling microenvironment for a number of
biophysical processes whose downstream components include curvature sensing
proteins like BAR, Exo70, and ENTH domains [10, 11, 12]. Notable processes where modulations in the membrane excess area at the sub-cellular scale can significantly impact common cellular functions include intracellular transport of cargo or viral/bacterial internalization through exo-/endo-/phago-cytosis [13, 14], cell polarization [15, 16], and cell motility [17]. Hence it is logical to posit that the primary mechanisms
linking the cell-microenvironment to cell fate can revolve around the physical
factors impacting the membrane at length-scales below $l_{c}$ [6, 18, 19, 20,
21].
We note that a number of experimental studies have focused on how membranous
reservoirs respond to perturbations in the physical environment of the cell.
The estimates for excess membrane area determined using conventional
morphometric measurements, involving osmotic shock assays and cryo-EM [22] do
not delineate thermally undulating excess areas, which causes a mis-estimation
of the area. Moreover, such methods, by averaging over the entire cell (or
even 100s of cells), ignore the heterogeneity on the scale of $l_{c}$ at a
single cell level or the asymmetry in membrane response that could exist in a
polarized cell (where the basal and apical surfaces may sustain very different
membrane properties). In this article, we propose a theoretical
framework/computational model applicable to tether pulling assays (reviewed in
[18]) to obtain reliable estimates for the membrane excess area. Unique to our
modelling approach is a new methodology that allows incorporation of large
deformations as well as thermal membrane undulations in the estimate.
## 1 Computational model
We consider a square frame with a lateral size ${\cal L}_{\rm patch}{}=510$
nm, which houses the membrane surface. As noted in the introduction ${\cal
A}$, ${\cal A}_{\rm patch}$, and ${\cal A}_{\rm ex}$ are respectively the
curvilinear, projected, and excess areas of the membrane. We discretize the
membrane surface into a triangulated surface that contains $M$ triangles
intersecting at $N$ vertices and forming $L$ links [23, 24] and the
statistical weights of the membrane conformations are governed by the discrete
form of the Canham-Helfrich Hamiltonian [25, 26]:
${\cal H}=\sum\limits_{i=1}^{N}\left\{\frac{\kappa}{2}\left(c_{1,i}+c_{2,i}\right)^{2}+\sigma\right\}{\cal A}_{v}.$ (1)
$\kappa$ and $\sigma$ are respectively the bending rigidity and the bare
surface tension of the membrane and ${\cal A}_{v}$ is the curvilinear area per
vertex on the surface. $c_{1,i}$ and $c_{2,i}$ are the principal curvatures at
a given vertex $i$ computed as in our earlier work [27]. In our studies we
hold ${\cal A}_{\rm patch}$ to be a constant and take $\sigma=0$. However when
thermal undulations are taken into account, the effective surface tension in
the membrane will be non-zero due to renormalization effects and a mapping
between the renormalized tension and excess area has been quantified in our
earlier work [28]. All our simulations have been performed in a constant
$N$-${\cal A}_{\rm patch}$-$T$ ensemble, where $T$ is the absolute
temperature.
The conformational states of the triangulated surface are evolved using the
dynamically triangulated Monte Carlo (MC) technique which consists of two
independent MC moves: (i) a vertex move that simulates thermal fluctuations
and (ii) a link flip that captures the fluid nature of biological membranes
(see supplementary information Sec. S1 for details). An MC step consists of $N$
vertex moves and $L$ link flips that are performed at random and all the moves
are accepted using the Metropolis scheme [29]. All the simulations reported
here have been performed using a membrane patch with $N=2601$ vertices and the
statistics are collected over 1.5 million MC steps.
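A minimal sketch of a single Metropolis vertex move is given below; the energy function stands in for the discretized Hamiltonian of eqn. (1), the step size is an illustrative assumption, and the link-flip move (which requires the triangulation connectivity) is omitted.
```python
import numpy as np

rng = np.random.default_rng(0)

def vertex_move(vertices, energy_fn, step=0.05, kBT=1.0):
    """One Metropolis vertex move: displace a randomly chosen vertex
    inside a small cube and accept with probability min(1, exp(-dE/kBT))."""
    i = rng.integers(len(vertices))
    old = vertices[i].copy()
    e_old = energy_fn(vertices)
    vertices[i] = old + rng.uniform(-step, step, size=3)
    dE = energy_fn(vertices) - e_old
    if dE > 0.0 and rng.random() >= np.exp(-dE / kBT):
        vertices[i] = old  # reject the move and restore the old position
```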
### 1.1 Analytical model for the membrane excess area
The excess area of a planar membrane in the small deformation limit ($|\nabla
h|\ll 1$) can be analytically estimated to be [30, 31]:
${\cal G}=\dfrac{100}{2{\cal L}_{\rm patch}^{2}}\sum\limits_{q=q_{\rm
min}}^{q=q_{\rm max}}\dfrac{{k}_{\rm B}T{}}{\kappa q^{2}+\sigma},$ (2)
where $q$ denotes the wavenumber of all possible undulation modes in the
membrane and $k_{\rm B}$ the Boltzmann constant. The maximum value of the
wavenumber $q_{\rm max}=2\pi a_{0}^{-1}$ is set by the size of the
triangulated vertices $a_{0}$ and its minimum value $q_{\rm min}=2\pi
l_{p}^{-1}$ is set by the length scale $l_{p}$ such that $l_{p}\gg a_{0}$ and
$l_{p}\leq{\cal L}_{\rm patch}{}$. We have performed all our analysis using
three values of $l_{p}=150$, $250$, and $510$ nm that represent the variations
in the cytoskeletal length-scales. We note that this model only has
applicability in the regime of small ${\cal A}_{\rm ex}$ when $|\nabla h|\ll
1$ is satisfied and is expected to fail in regimes where the ${\cal A}_{\rm
ex}$ of the cell is not small (see supplementary information Sec. S3).
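Eqn. (2) translates directly into a short numerical routine; the sketch below assumes undulation modes spaced in integer multiples of $q_{\rm min}$, which the text does not specify, so the mode spacing (and the unit choice $k_{\rm B}T=1$) should be regarded as illustrative.
```python
import numpy as np

def excess_area_estimate(kappa, sigma, L_patch, a0, l_p, kBT=1.0):
    """Small-deformation excess area, eqn (2): G = 100/(2 L_patch^2) *
    sum_q kBT / (kappa q^2 + sigma), with q from 2*pi/l_p to 2*pi/a0."""
    q_min, q_max = 2.0 * np.pi / l_p, 2.0 * np.pi / a0
    q = np.arange(q_min, q_max, q_min)  # assumed mode spacing: q_min
    return 100.0 / (2.0 * L_patch**2) * np.sum(kBT / (kappa * q**2 + sigma))
```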
### 1.2 In silico tether pulling assay
If ${\cal F}_{\rm t}$ is the force required to extract a tether of radius
${\cal R}_{\rm t}$ and length ${l}_{\rm t}$ from the membrane patch, as
illustrated in Fig. 1, the total energy ${\cal H}_{\rm tot}$, which has a
contribution due to membrane deformations (eqn. (1)) and an additional part
from the work done to extract the tether (assuming that the tether is a
perfect cylinder and ignoring thermal undulations), is given by [32]:
${\cal H}_{\rm tot}=\dfrac{\kappa\pi{l}_{\rm t}{}}{{\cal R}_{\rm
t}{}}+2\pi\sigma{}{l}_{\rm t}{}{\cal R}_{\rm t}{}-{\cal F}_{\rm t}{}{l}_{\rm
t}{}.$ (3)
Minimization of the total energy with respect to ${l}_{\rm t}$ and ${\cal
R}_{\rm t}$ yields: (i) $\kappa={\cal F}_{\rm t}{}{\cal R}_{\rm t}{}/(2\pi)$
and (ii) $\sigma={\cal F}_{\rm t}{}/(4\pi{\cal R}_{\rm t}{})$. These
relationships allow one to determine the elastic properties of the cell
membrane through tether pulling experiments; however, the non-trivial geometry
of a tether (which in general is not a perfect cylinder) and the underlying
membrane patch (which is not a perfect planar entity but rather a ruffled
surface subject to undulations, especially under high ${\cal A}_{\rm ex}$)
limits the applicability of eqn. 3. To overcome these limitations, we have
extended the umbrella sampling technique [33] to extract tethers of a
specified length ${\cal L}_{\rm t}$ from a membrane in the $N$-${\cal A}_{\rm
patch}$-$T$ ensemble. This is analogous to tether extraction in experiments
where a constant outward force is applied on a selected region of the cell
membrane through an AFM or an optical tweezer. In our model, we use an
additional harmonic biasing potential of the form ${\cal H}_{\rm bias}=k_{\rm
bias}({l}_{\rm t}-{\cal L}_{\rm t})^{2}/2$ in place of the force employed in
experiments. Here $k_{\rm bias}$ is the spring constant of the biasing
potential and ${\cal L}_{\rm t}$ is a reaction coordinate that denotes the
prescribed length of the extruded tether. In our calculations we take $k_{\rm
bias}=0.5\,{k}_{\rm B}T{}/{\rm nm}^{2}$; this value is chosen such that the undulation modes of the membrane remain unaltered. It should be noted that the addition of the biasing potential does not alter the equilibrium characteristics of the membrane, since its contribution is removed in the WHAM analysis.
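In code, the biased energy entering the Metropolis acceptance in each umbrella window is simply the membrane Hamiltonian plus the harmonic term; the sketch below uses the values quoted above, with the window grid written out for concreteness.
```python
import numpy as np

def biased_energy(H_membrane, l_t, L_t, k_bias=0.5):
    """Energy used inside an umbrella window: the membrane Hamiltonian
    of eqn (1) plus H_bias = k_bias (l_t - L_t)^2 / 2, with k_bias in
    kBT/nm^2 and L_t the prescribed tether length (the reaction coordinate)."""
    return H_membrane + 0.5 * k_bias * (l_t - L_t) ** 2

# 64 windows of width 5 nm, as used in Sec. 1.3
window_centers = np.arange(0.0, 64 * 5.0, 5.0)
```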
Figure 1: (a) Representative equilibrium conformation of a membrane with $\kappa=20\,k_{\rm B}T$ and ${\cal A}_{\rm ex}\sim 40\%$. The set of biased vertices at the tip ($\{{\bf X}_{T}\}$) and at the base ($\{{\bf X}_{B}\}$), along with the positions of their respective centers of mass ${\bf R}_{T}$ and ${\bf R}_{B}$ (shown as crosses), are also shown. $\{{\bf X}_{T}\}$ is the set of all vertices within a region of size ${\cal R}_{\rm bead}$. (b) Conformation of the membrane in panel (a) with a fully developed tether, obtained for ${\cal L}_{\rm t}{}=600$ nm. The tether length and radius, ${l}_{\rm t}$ and ${\cal R}_{\rm t}$, and the membrane dimension ${\cal L}_{\rm patch}$ are also marked.
The length of the tether ${l}_{\rm t}$ is defined using a macroscopic order parameter, determined from two different sets of vertices $\{{\bf X}_{T}\}$ and $\{{\bf X}_{B}\}$, that are shown in Fig. 1(a). ${\bf R}_{T}$ and ${\bf R}_{B}$, which are also shown in Fig. 1(a), represent the centers of mass of the chosen vertices that define the two macroscopic variables from which the instantaneous tether length is calculated as $l_{t}=|{\bf R}_{T}-{\bf R}_{B}|$. While $\{{\bf X}_{T}\}$ is predetermined at the start of the simulation, $\{{\bf X}_{B}\}$ is computed at runtime and taken to be the set of all vertices at the boundary of the membrane patch (also see supplementary information Movie M1).
In a typical tether pulling assay, the bead used to extract the tether is only
partially wetted by the membrane surface and in general the wetting area is
unknown. Also, due to the non-specific nature of these adhesions the wetting
area may vary in different experiments, even for the same cell. In order to
investigate the role of the wetting area on the properties of the extracted
tether, we choose the biased vertices in the tip to be a circular region of
radius ${\cal R}_{\rm bead}$. This is illustrated in the lower panel of Fig.
1(a).
### 1.3 Potential of mean force
For a given membrane patch, independent simulations are performed to extract
tethers within a given umbrella sampling window. For all simulations reported
in this article, we use at least $64$ windows each of width $5$ nm — the
number of windows required to extract fully developed tethers increases with
increasing ${\cal A}_{\rm ex}$. Histograms of the instantaneous tether length
in each of the windows are recorded for $1.5$ million Monte Carlo steps and
these statistics are converted to a potential of mean force (PMF) using the
Weighted Histogram Analysis method [34]. The typical runtime for an umbrella-
sampling window to sample $1.5$ million MCS is around $36$ hours on a $2.6$
GHz processor.
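For reference, a compact self-consistent WHAM iteration of the kind used to combine the window histograms could look as follows; this is a generic textbook implementation [34], not the production analysis code, and the histogram layout is an assumption.
```python
import numpy as np

def wham(hists, biases, kBT=1.0, n_iter=5000):
    """Self-consistent WHAM: hists[i] is the histogram of tether lengths
    in window i (shared bins), biases[i] the bias energy evaluated at
    the bin centers. Returns the PMF up to an additive constant."""
    hists = np.asarray(hists, dtype=float)
    biases = np.asarray(biases, dtype=float)
    N = hists.sum(axis=1)          # total samples per window
    f = np.zeros(len(hists))       # window free energies
    for _ in range(n_iter):
        denom = (N[:, None] * np.exp((f[:, None] - biases) / kBT)).sum(axis=0)
        p = hists.sum(axis=0) / denom                     # unbiased probability
        f = -kBT * np.log((p[None, :] * np.exp(-biases / kBT)).sum(axis=1))
        f -= f[0]                  # fix the overall gauge
    return -kBT * np.log(p / p.max())  # PMF, with its minimum at zero
```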
### 1.4 Computing the radius and length of membrane tethers
The radius and length of the membrane tether ${\cal R}_{\rm t}$ and ${l}_{\rm
t}$, respectively, can be determined exactly in the simulations, as shown in
Fig. 1(b). Let $\{{\bf r}_{i}\}$ be the set of all $N_{c}$ vertices on the tubular
region and ${\bf r}_{CM}=(N_{c})^{-1}\sum_{i}{\bf r}_{i}$ their center of
mass: here ${\bf r}_{i}$ is the three-dimensional position vector of vertex
$i$ in the Cartesian coordinates. The center of mass can be used to construct
the gyration tensor as, ${\bf G}=(N_{c})^{-1}\sum_{i=1}^{N_{c}}({\bf
r}_{i}-{\bf r}_{CM})\otimes({\bf r}_{i}-{\bf r}_{CM})$ whose eigenvalues are
$\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$. Since the tethers formed are axisymmetric, the two smaller eigenvalues satisfy $\lambda_{2}\approx\lambda_{3}$. Of the three eigenvalues,
$\lambda_{1}$ represents the length of the tether, with ${l}_{\rm t}{}\approx
2\sqrt{\lambda_{1}}$, and $\sqrt{\lambda_{2}}$ and $\sqrt{\lambda_{3}}$
represent its two principal radii. We estimate the average tether radius as
${\cal R}_{\rm t}{}=(\sqrt{\lambda_{2}}+\sqrt{\lambda_{3}})/2$.
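The procedure above maps directly onto a few lines of linear algebra; a minimal sketch, assuming the tube vertices are supplied as an $(N_{c},3)$ array, is:
```python
import numpy as np

def tether_dimensions(r):
    """Tether length and radius from the gyration tensor of the tube
    vertices r (shape (N_c, 3)), following Sec. 1.4: l_t from the
    largest eigenvalue, R_t from the two smaller ones."""
    dr = r - r.mean(axis=0)                     # positions relative to c.o.m.
    G = dr.T @ dr / len(r)                      # gyration tensor
    lam = np.sort(np.linalg.eigvalsh(G))[::-1]  # eigenvalues, descending
    l_t = 2.0 * np.sqrt(lam[0])
    R_t = 0.5 * (np.sqrt(lam[1]) + np.sqrt(lam[2]))
    return l_t, R_t
```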
## 2 Experimental Methods
### 2.1 Cell culture
HeLa cells were placed in $35$ mm petri dishes at $37$°C in $5$% CO2 in DMEM
(Dulbecco’s Modified Eagle’s medium, Lonza) containing $10$% FBS (Fetal Bovine
Serum, Gibco) and $0.02$% Penicillin/Streptomycin for $48$ hours before
commencing the experiment. A confluent culture of HeLa cells was treated with
$0.25$% Trypsin-EDTA (Gibco), detrypsinised in DMEM containing $10$% FBS and
seeded at a density of $80,000$ cells/coverslip (Ted Pella Inc., Redding), so that a single monolayer of cells is obtained on the coverslip.
### 2.2 Giant Unilamellar Vesicles (GUVs)
For the preparation of vesicles, $1,2$-dioleolyl-sn-glycero-$3$-phosphocholine
(DOPC), $1,2$-dioleolyl-sn-glycero-$3$-phospho-L-serine (DOPS) (Avanti Polar,
Alabaster, AL) and $1,2$-dioleolyl-sn-
glycero-$3$-phosphoethanolamine-N-(lissamine rhodamine B sulfonyl)(RhPE)
(Invitrogen) stock solutions in chloroform, at room temperature were used. The
lipid mix was aliquoted in a glass vial to a total lipid concentration of 1 mM
at a ratio of DOPC:DOPS:RhPE ($84$:$15$:$1$ mol%).
Gel-assisted formation of GUVs was carried out using polyvinyl alcohol (PVA)
as described earlier [35], with a few modifications as per the requirements of
the experiments. In this method of GUV formation, a drop of $5$% w/v degassed
PVA (MW $145,000$, Sigma) in deionized water is added to a clean glass
coverslip placed on a hot plate set at $75$°C. The water evaporates in about $10$ minutes, leaving a dry thin film of PVA on the coverslip. To this,
around $3$ $\mu$L of the $1$ mM lipid stock solution in chloroform was added
to dry PVA while on the hot plate to let the chloroform evaporate. The thin
film was peeled off and immersed in Eppendorf tubes containing $20$ mM HEPES, $150$
mM NaCl, pH $7.4$ with $100$ mM sucrose. This immersed film was left
undisturbed for around one hour followed by gentle tapping to release the GUVs
from the PVA film to the buffer solution. The buffer containing large free
floating GUVs ($10$-$15$ $\mu{\rm m}$) was pipetted out and used for tether
pulling experiments.
### 2.3 AFM Experiments
AFM-based force spectroscopic experiments were performed using Nanowizard II
atomic force microscope (JPK Instruments). The AFM liquid cell was assembled
with freshly cleaved mica discs prior to adding the GUV solution. The liquid
cell was then mounted on the AFM stage and left undisturbed for $20$ minutes
to allow the vesicles to settle on the mica surface. Using a fluorescence
microscope attached with the AFM set up, we could confirm that the GUVs
settled on the surface and the floating ones were washed away by exchanging
buffer solution with HBS. Subsequently, the GUVs ruptured on the mica surface and were imaged using AFM. The images obtained using AFM revealed
the location and height of the ruptured GUV patches, whose height matched that of a single bilayer membrane ($5$-$6$ nm). Force spectroscopy was
then performed on these particular patches to pull membrane tethers. Silicon
nitride cantilevers (MikroMasch CSC$38$/AlBS) were used for pulling the
tethers. Cantilevers were calibrated before each experiment and their spring constants were determined using the equipartition theorem [36]. The measured spring constants of the cantilevers used for most experiments were found to be in the range of $20$-$80$ mN/m. Constant speed mode was used for approaching the tip to the
sample surface followed by retraction at the same speed. The approach-retract
cycle was repeated at various points on the membrane patch using the force-mapping tool built into the Nanowizard II software, and force-displacement curves were
recorded. Force curves showing step profiles were selected and analyzed using
JPK data processing software by fitting the curves with the in-built functions
to measure the force minimum corresponding to the tether force and step
heights in retraction force curves.
## 3 Results
### 3.1 Extraction of membrane tether proceeds through three distinct regimes
We first demonstrate the characteristics of a tether extracted from a model
membrane with $\kappa=20$ ${k}_{\rm B}T$ and ${\cal A}_{\rm ex}\sim 40\%$,
using a bead size of ${\cal R}_{\rm bead}=50$ nm in the $N$-${\cal A}_{\rm
patch}$-$T$ ensemble. The tether is extracted using the umbrella sampling
technique described in the methods section, for reaction coordinate (imposed
tether length) values in the range $0<{\cal L}_{\rm t}<500$ nm, with a window
size of $5$ nm. The top panel in Fig. 2 shows representative snapshots of the
membrane stabilized at four different values of ${\cal L}_{\rm t}$ = $0$,
$200$, $300$, and $450$ nm. At small values of ${\cal L}_{\rm t}$, the
membrane conformations show large undulations whose magnitudes are set by the
value of ${\cal A}_{\rm ex}$. However, at large values of ${\cal L}_{\rm t}$,
the membrane undulations are absorbed into the large out of plane protrusions
that resemble a tether extracted from a planar membrane. It is noted that the
shape of a fully developed tether (i.e., when the undulations in the planar
region becomes very small) is consistent with that predicted for nearly planar
membranes, using analytical methods [37].
Figure 2: (a) Representative conformations of a membrane with
$\kappa=20\,k_{\rm B}T$ and ${\cal A}_{\rm ex}\sim 40\%$ as a function of
${\cal L}_{\rm t}$. Panels (b) and (c) show the computed values of the tether
length ${l}_{\rm t}$, and radius ${\cal R}_{\rm t}$, respectively, as a
function of ${\cal L}_{\rm t}$. These quantities are computed as described in
Sec. 1.4. The shaded regions mark the three regimes for tether extraction
namely, regime 1: suppression of undulations, regime 2: formation of tethers,
and regime 3: extrusion of tethers at a constant radius. The boxed numbers in
the top panel denote the regimes to which the configurations correspond to.
The instantaneous length and radius of the tether region, denoted by ${l}_{\rm
t}$ and ${\cal R}_{\rm t}$, as a function of the reaction coordinate ${\cal
L}_{\rm t}$, are shown in the middle and lower panels of Fig. 2, respectively.
Both ${l}_{\rm t}$ and ${\cal R}_{\rm t}$ show non-monotonic behaviors with
respect to ${\cal L}_{\rm t}$, which are solely attributable to the non-zero
excess area of the membrane. For membranes with thermal undulations, and hence
non-zero excess areas, we identify three characteristic regimes for tether
growth which are marked as shaded regions in the figure. These regions are
characterized as follows:
* •
Regime 1 (${l}_{\rm t}{}\approx{\cal R}_{\rm t}{}$): for ${\cal L}_{\rm t}{}<75$ nm, where the tether radius and length are similar, the applied
biasing potential only serves to suppress the short wavelength undulations in
the membrane. This is reflected in the fact that the membrane conformations in
this regime are not distinguishable from their equilibrium counterparts.
* •
Regime 2 (${l}_{\rm t}{}\approx$ constant and ${\cal R}_{\rm t}{}\propto{\cal
L}_{\rm t}{}^{-1}$): for $75<{\cal L}_{\rm t}<300$ nm a pronounced protrusion
is seen in the vicinity of the region where the biasing potential is applied.
The radius of this protrusion decreases with increasing ${\cal L}_{\rm t}$,
while its length remains unchanged.
* •
Regime 3 (${\cal R}_{\rm t}{}\approx$ constant and ${l}_{\rm t}{}\propto{\cal
L}_{\rm t}{}$): for ${\cal L}_{\rm t}{}>300$ nm in Fig. 2, the tether radius
remains constant while its length increases linearly with ${\cal L}_{\rm t}$,
marking a region of tether growth. The linear increase in ${l}_{\rm t}$ fails
to hold when all excess area in the membrane is drawn into the tether region.
The extents of the three regimes depend on the values of $\kappa$ and ${\cal A}_{\rm ex}$. This is shown in the supplementary information, where we have
displayed the effects of ${\cal A}_{\rm ex}$ and $\kappa$ on the radius of the
extracted tether.
The characteristic length scale for a membrane, given by
$\xi=\sqrt{\kappa/2\sigma}$ [38, 39], sets the limit below which curvature
contributions are dominant. In our model, $\xi$ is an increasing function of
$\kappa$ and ${\cal A}_{\rm ex}$ — the latter may be deduced from the inverse
relationship between $\sigma$ and ${\cal A}_{\rm ex}$ in eqn. (2). In a tether
pulling experiment performed in the $N$-${\cal A}_{\rm patch}$-$T$ ensemble,
the radius of the extracted tether depends either on $\xi$ or on the size of
the biased region ${\cal R}_{\rm bead}$ used for tether extraction. This is
shown in Fig. 3 where we display the values of ${\cal R}_{\rm t}$ as a
function of ${\cal R}_{\rm bead}$, for $\kappa=20,\,40,$ and $160$ ${k}_{\rm
B}T$ and ${\cal A}_{\rm ex}{}=10$ and $40\%$. The conformations shown in panel
(a) for a membrane with $\kappa=20\,k_{\rm B}T$ and ${\cal A}_{\rm ex}\sim
10\%$, for ${\cal L}_{\rm t}{}=300$ nm, clearly illustrate the interplay
between the characteristic length $\xi$ and the imposed length ${\cal R}_{\rm
bead}$. While we observe fully grown and geometrically identical tethers for
${\cal R}_{\rm bead}\leq 75$ nm, we find the tether extracted with ${\cal
R}_{\rm bead}=100$ nm to be significantly different. This feature is also
quantified in Fig. 3(b) where we find the nearly constant tether radius
(${\cal R}_{\rm t}{}\sim 80$ nm) for ${\cal R}_{\rm bead}\leq 75$ nm to show a
marked increase to ${\cal R}_{\rm t}{}\sim 110$ nm when ${\cal R}_{\rm
bead}=100$ nm.
In panels (b) and (c) of Fig. 3 two key features are worth noting: (i) as
expected, the value of ${\cal R}_{\rm t}$ is an increasing function of
$\kappa$ for all values of ${\cal R}_{\rm bead}$, and (ii) the dependence of
${\cal R}_{\rm t}$ on ${\cal R}_{\rm bead}$ is minimal for large values of
$\kappa$ and also when ${\cal A}_{\rm ex}$ is large.
Figure 3: Dependence of the tether radius on the size of the biasing region.
(a) Representative conformations of tethers extracted using beads with ${\cal
R}_{\rm bead}=25$, $50$, $75$, and $100$ nm, from a membrane with
$\kappa=20\,k_{\rm B}T$ and ${\cal A}_{\rm ex}\sim 10\%$. Panels (b) and (c)
show the computed values of ${\cal R}_{\rm t}$, as a function of ${\cal
R}_{\rm bead}$, for $\kappa=20,\,40,$ and $160$ ${k}_{\rm B}T$ for ${\cal
A}_{\rm ex}{}=10$ and $40\%$, respectively.
### 3.2 PMF and tether force
The PMF (${\cal W}_{\rm t}$) to extract a tether of length ${l}_{\rm t}$ from
a membrane patch of fixed ${\cal A}_{\rm ex}$ is computed from the umbrella
sampling data using the WHAM technique (see methods section). ${\cal W}_{\rm
t}$ for a membrane with $\kappa=20\,k_{\rm B}T$ and ${\cal A}_{\rm ex}\sim
40\%$ is shown in the top panel of Fig. 4(a). The three characteristic regimes
seen for ${\cal R}_{\rm t}$ (see Sec. 3.1) are also reflected in the form of
${\cal W}_{\rm t}$. Here, we again observe three scaling regimes : (i) an
initial linear regime given by ${\cal F}_{1}{l}_{\rm t}{}$, (ii) a second non-
linear regime, $\propto{l}_{\rm t}{}^{2}$, and (iii) a final linear regime,
$\propto{\cal F}_{2}{l}_{\rm t}{}$. Both the linear regimes are shown as solid
lines in panel (a) of Fig. 4 and the latter is attributable to tether
extrusion at a constant radius, for which the elastic energy is expected to
scale as ${\cal H}_{\rm tot}\propto{l}_{\rm t}{}$ (eqn. (3)). On the other
hand, the source of the non-linear scaling is attributed to ${\cal R}_{\rm t}$
being a decreasing function of ${l}_{\rm t}$. We note that the scaling
behavior is universal and is observed for all systems investigated.
Figure 4: (a) The potential of mean force ${\cal W}_{\rm t}$ and the tether
force ${\cal F}_{\rm t}$, as a function of the tether length ${l}_{\rm t}$,
for a membrane with $\kappa=20\,k_{\rm B}T$ and ${\cal A}_{\rm ex}\sim 40\%$.
In the top panel, ${\cal W}_{\rm t}$ shows a linear scaling in regimes 1 and
3, which are represented by the functions ${\cal F}_{1}{l}_{\rm t}{}$ and
${\cal F}_{2}{l}_{\rm t}{}$, respectively. The lower panel compares values of
${\cal F}_{\rm t}$ estimated from direct numerical differentiation of ${\cal
W}_{\rm t}$ (symbols) to that obtained from the scaling relations (lines). (b)
Force displacement curves for experimental tether pulling assay using ruptured
GUVs (top panel) and HeLa cells (lower panel) – the inset shows a transition
between regions of constant force. The illustration in the top panel shows the
state of the membrane tether at various stages of the experiment. The vertical
deflection of the AFM tip is a measure of the tether force ${\cal F}_{\rm t}$
and its separation from the sample is a measure of the tether length ${l}_{\rm
t}$.
The force required to extract the tether may be computed as ${\cal F}_{\rm
t}{}=|\nabla_{{l}_{\rm t}{}}{\cal W}_{\rm t}{}|$, where $\nabla_{{l}_{\rm
t}{}}$ denotes a gradient with respect to ${l}_{\rm t}$. ${\cal F}_{\rm t}$
can be estimated either from direct numerical differentiation of ${\cal
W}_{\rm t}$ or from the scaling relations — for the latter, ${\cal F}_{\rm
t}{}={\cal F}_{1}$ in regime 1 and ${\cal F}_{\rm t}{}={\cal F}_{2}$ in regime
3. The tether forces computed using the two methods for ${\cal W}_{\rm t}$ in
Fig. 4(a) are shown in the lower panel — symbols and lines correspond to
${\cal F}_{\rm t}$ obtained using numerical differentiation and using the
scaling relations, respectively. We find the estimates from both the methods
to be in excellent agreement. Since direct numerical differentiation is
subject to a large noise-to-signal ratio, we primarily rely on the scaling-relation-based method to estimate ${\cal F}_{\rm t}$. As in experiments, we report the value of the force in the second linear regime (regime 3) as the tether force, i.e., ${\cal F}_{\rm t}{}\sim{\cal F}_{2}$.
The tether force shown in Fig. 4(a) has the same qualitative and quantitative
behavior as that normally observed in experiments. The top and bottom panels
in Fig. 4(b) show forces required to extrude a tether from ruptured GUVs on
mica and from the HeLa cells, respectively. The pulling speeds in both the
experimental assays are taken to be 1 $\mu$m/s, which satisfies the assumption
of quasi-equilibrium tether extraction employed in our simulations.
Measurements at speeds less than that reported here are not possible due to
the noise arising from cantilever thermal drift. Though there are no known
techniques to calculate the precise value of ${\cal A}_{\rm ex}$ for both
systems, it is reasonable to assume that it is finite. While the force-
displacement curves for both the systems depend on the properties of their
respective bilayer membrane, in the case of HeLa cells there may be additional
contributions due to the underlying cytoskeletal mesh. Though we would expect
ruptured GUVs on a mica surface to be free of any pinning contacts, there
could be a finite number of pinning sites due to the chemical heterogeneity on
the surface in spite of the surface being atomically smooth. The salt
concentration in the buffer may screen the interactions between the membrane
and the mica surface leading to a sparse contact between the two and the
effect of these non-specific contacts on the force-displacement curves are
minimal. The forces measured in experiments match very well with the
numerically computed values of ${\cal F}_{\rm t}$. The measured tether force
is about 20 pN for tethers pulled from both the ruptured GUVs and the HeLa
cells. For the case of ruptured GUVs, the tether length at which we observe a
transition to the tether extrusion regime is consistent with that seen in our
simulations, while that for the cells is considerably higher, extending into a
few microns. We attribute this deviation to the lack of a suitable reference
frame for cellular measurements.
Figure 5: The potential of mean force ${\cal W}_{\rm t}$ as a function of the
tether length ${l}_{\rm t}$, extracted with ${\cal R}_{\rm bead}{}=50$ nm,
from membranes with ${\cal L}_{\rm patch}{}=0.51$ $\mu$m and $1.02$ $\mu$m,
and excess areas ${\cal A}_{\rm ex}=10\%$ and $40\%$. Data for
$\kappa=20\,k_{\rm B}T$ are shown in panel (a) and that for $\kappa=40\,k_{\rm
B}T$ is shown in panel (b).
As noted in the introduction, the size of the cytoskeletal mesh ($l_{c}$)
bounding the cell membrane significantly influences the characteristics of the
extracted tether. The current theoretical model only considers tethers from a
homogeneous membrane with constant $\kappa$ and ${\cal A}_{\rm ex}$. However,
to zeroth order, the role of the cytoskeleton in suppressing long wavelength
undulations beyond $l_{c}$ can be taken into account in our model by examining
the dependence on the membrane patch size ${\cal L}_{\rm patch}$. In Fig. 5,
we investigate this effect by extracting tethers from two planar patches with
${\cal L}_{\rm patch}{}=510$ nm and ${\cal L}_{\rm patch}{}=1.02$ $\mu$m,
which are representative of cell membranes scaffolded by dense and sparse
cytoskeletal meshes, respectively. Panels (a) and (b) show data for membranes
with $\kappa=20$ and $40$ ${k}_{\rm B}T$, respectively, for excess areas
${\cal A}_{\rm ex}{}=10$ and $40\%$. It is evident from these figures that the
PMF, and hence ${\cal F}_{\rm t}$ and ${\cal R}_{\rm t}$, in addition to the
elastic parameters $\kappa$ and ${\cal A}_{\rm ex}$, are also functions of
${\cal L}_{\rm patch}$. This points to the fact that the cell may have a heterogeneous mechanical microenvironment depending on the cytoskeletal mesh size and may provide varied responses to biochemical processes, such as nanocarrier or viral binding, depending on the characteristic value of $l_{c}$
at the site of the process [40]. Hence, characterizing the mechanical
properties of the cell membrane at the scale of $l_{c}$ would be extremely
important. In the following, we will only focus on membrane patches with
${\cal L}_{\rm patch}{}=510$ nm to establish how the excess area of the
membrane can be inferred from tether pulling experiments.
### 3.3 Tether radii and forces measured in silico compare well with range of
values measured in in vivo experiments
Figure 6: (a) Six model membrane systems, denoted M1–M6, with specified values
of ${\cal A}_{\rm ex}$ and $\kappa$. For any system Mi ($i=1\cdots 6$), Mi1,
Mi2, and Mi3 correspond to tethers extracted with ${\cal R}_{\rm bead}=25$,
$50$, and $75$ nm, respectively. The values of ${\cal W}_{\rm t}$, ${\cal
F}_{\rm t}$, and ${\cal R}_{\rm t}$ for all the systems are shown in panels
(b), (c), and (d), respectively.
Pontes et al. [41] have recently reported results from in vivo tether pulling studies of 15 different cell types in the central nervous system (CNS) — the data are also shown in the supplementary information. Based on this
study, we classify cells in the CNS into four distinct categories: (i) small
$\kappa$ ($20$-$60$ ${k}_{\rm B}T$) & small $\sigma$, (ii) small $\kappa$ & large
$\sigma$, (iii) large $\kappa$ ($\sim 160$ ${k}_{\rm B}T$) & small $\sigma$,
and (iv) large $\kappa$ & large $\sigma$. In order to establish the
quantitative accuracy of our model, we compute the values of ${\cal R}_{\rm
t}$ and ${\cal F}_{\rm t}$ for six model systems which are representative of
the cells in the CNS. They are denoted by M1 ($\kappa=20\,k_{\rm B}T$, ${\cal
A}_{\rm ex}\sim 10\%$), M2 ($\kappa=20\,k_{\rm B}T$, ${\cal A}_{\rm ex}\sim
44\%$), M3 ($\kappa=40\,k_{\rm B}T$, ${\cal A}_{\rm ex}\sim 9\%$), M4
($\kappa=40\,k_{\rm B}T$, ${\cal A}_{\rm ex}\sim 43\%$), M5
($\kappa=160\,k_{\rm B}T$, ${\cal A}_{\rm ex}\sim 13\%$), and M6
($\kappa=160\,k_{\rm B}T$, ${\cal A}_{\rm ex}\sim 38\%$). These model systems
are also depicted in Fig. 6(a).
We extract tethers from all six model systems (Mi, with $i=1\cdots 6$),
using bead sizes ${\cal R}_{\rm bead}=25,\,50$, and $75$ nm — the
corresponding data are denoted by Mij, where $j=1$, $2$, and $3$,
respectively. The PMFs for these systems are displayed in Fig. 6(b) and the
presence of the three characteristic regimes for ${\cal W}_{\rm t}$, discussed
earlier, are evident. Despite a similarity in the scaling behavior, the values
of ${\cal W}_{\rm t}$ are highly sensitive to changes in both ${\cal R}_{\rm
bead}$ and the elastic parameters $\kappa$ and ${\cal A}_{\rm ex}$,
predominantly so for the latter. The average values of ${\cal R}_{\rm t}$ and
${\cal F}_{\rm t}$ for the model systems are displayed in Figs. 6(c) and (d)
respectively. ${\cal R}_{\rm t}$ is found to be independent of ${\cal R}_{\rm
bead}$ and, as expected, we find: (i) for a given $\kappa$, ${\cal R}_{\rm t}$
is a decreasing function of ${\cal A}_{\rm ex}$ (e.g. M1$>$M2), and (ii) for a
fixed ${\cal A}_{\rm ex}$, ${\cal R}_{\rm t}$ is an increasing function of
$\kappa$ (e.g. M5$>$M3$>$M1). The tether force also shows a similar behavior,
with ${\cal F}_{\rm t}$ being larger for systems with smaller ${\cal A}_{\rm
ex}$ and larger $\kappa$. The range of values for the tether force ($10<{\cal
F}_{\rm t}{}<50$ pN) and radius ($60<{\cal R}_{\rm t}{}<110$ nm) measured in
our simulations compare very well with the experiments of Pontes et al. [41], where they report values in the range $15<{\cal F}_{\rm t}{}<70$ pN and $43<{\cal R}_{\rm t}{}<158$ nm. This establishes the validity of our present model as a tool for interpreting tether pulling assays that aim to probe tethers at the nanoscopic scale.
Figure 7: Validity of the scaling relations for $\kappa$ and $\sigma$ for
data from simulations (M1–M6, shown as open symbols) and experiments (C1–C15,
shown as filled symbols). Panel (a) shows the relation $\kappa/\alpha=1/2\pi$
and panel (b) shows the scaling relation $\sigma/\Gamma=1/4\pi$, and the
corresponding correlation coefficients for systems $M1-M6$ are found to be
$r^{2}=0.846$ and $r^{2}=0.952$, respectively. The dotted lines in panels (a)
and (b) correspond to $1/2\pi$ and $1/4\pi$ respectively.
Our results in Figs. 7(a) and (b) depict the adherence to the constitutive relations derived by minimizing eqn. (3). Briefly, the effective bending rigidity and the surface tension are expected to follow the relations $\kappa/\alpha=(2\pi)^{-1}$ and $\sigma/\Gamma=(4\pi)^{-1}$, respectively.
Here the scaling parameters are $\alpha={\cal F}_{\rm t}{}{\cal R}_{\rm
t}{}/{k}_{\rm B}T{}$ and $\Gamma={\cal F}_{\rm t}{}/{\cal R}_{\rm t}{}$. As
can be seen from the figures, data from both our simulations (marked M1–M6 and
shown as open symbols) and from the experiments of Pontes et al. [41] (marked
C1–C15 and shown as filled symbols) show a good collapse, with correlation
coefficients of $r^{2}=0.846$ for $\kappa$ and $r^{2}=0.952$ for $\sigma$,
which further establishes the agreement of our calculations and the referred
experiments with known scaling relationships. The dotted lines in Figs. 7(a)
and (b) correspond to $(2\pi)^{-1}$ and $(4\pi)^{-1}$, respectively.
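As a worked example of these relations, the snippet below converts a hypothetical tether measurement into $\kappa$ and $\sigma$; the numbers (20 pN, 80 nm, $T=300$ K) are illustrative values in the range reported above, not data from a specific cell.
```python
import numpy as np

kBT = 4.11e-21              # J, thermal energy at T = 300 K (assumed)
F_t, R_t = 20e-12, 80e-9    # hypothetical measurement: 20 pN, 80 nm

kappa = F_t * R_t / (2.0 * np.pi)   # bending rigidity, in J
sigma = F_t / (4.0 * np.pi * R_t)   # membrane tension, in N/m
print(f"kappa = {kappa / kBT:.0f} kBT, sigma = {sigma * 1e6:.1f} uN/m")
# -> kappa ~ 62 kBT, sigma ~ 19.9 uN/m
```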
### 3.4 Data from tether pulling experiments may be classified according to
${\cal A}_{\rm ex}$
Using a suitable choice of scaling parameters, data from various tether
pulling assays may be classified according to the excess area in the membrane.
We demonstrate this feature in Fig. 8(a) where we show a plot of $\alpha$ vs
$\Gamma$ for the six model systems we have chosen. Each system is represented
by a set of four data points which correspond to tethers extracted with ${\cal
R}_{\rm bead}=25$, $50$, $75$, and $100$ nm. The entire set of data clusters
into groups, that are primarily dependent on the value of ${\cal A}_{\rm ex}$
in the model membrane. It may be seen that systems M1, M3, and M5 (with ${\cal
A}_{\rm ex}\sim 10\%$) are clustered in the top right while M2, M4, and M6
(with ${\cal A}_{\rm ex}\sim 40\%$) are clustered in the bottom left, and
these two clusters are marked as shaded regions. Such a clustering analysis
provides a useful route to experimentally classify cells. However, it does not
yield any information about the value of ${\cal A}_{\rm ex}$.
Figure 8: (a) A plot of $\alpha$ vs $\Gamma$ for M1–M6, for different values
of ${\cal R}_{\rm bead}$, show data clustering in an excess area dependent
fashion. (b) ${\cal G}(\alpha)$, the analytical estimates for the membrane
excess area for M1–M6, computed using eqn. (2). The dotted line denotes a
scaling of the form $G/\alpha$, with $G\sim 1107$.
Based on eqn. (2), we recognize that ${\cal G}(\alpha)$ shows a scaling of the
form $G/\alpha$ (dotted line in Fig. 8(b)). The data from our calculations are
consistent with this scaling as depicted in Fig. 8(b). Given the potential for
clustering of our data in Fig. 8(a) on the basis of ${\cal A}_{\rm ex}$, and
the scaling shown in ${\cal G}(\alpha)$ in Fig. 8(b), we define a
dimensionless variable $\eta={\cal A}_{\rm ex}{}/{\cal G}$.
A plot of $\eta$ as a function of $\alpha$ for systems M1–M6, for four different values of ${\cal R}_{\rm bead}$, is shown in Fig. 9(a). Intriguingly, the data collapse onto a linear scaling behavior when $\eta$ is plotted against $\alpha$ (see Fig. 9(a)), where the slope of the scaling line depends only on ${\cal A}_{\rm ex}$. The scaling is represented as:
$\eta_{i}=m_{i}\alpha+1,$ (4)
with $i=1\cdots 6$. The intercept is taken to be $1$ since $m_{i}\rightarrow
0$ as $\eta_{i}\rightarrow 1$, i.e., when ${\cal G}\rightarrow{\cal A}_{\rm
ex}{}$. We estimate the values of $m_{i}$ for each system by fitting the
corresponding data to a linear function. The three representative dotted lines in Fig. 9(a), corresponding to the small, intermediate, and large excess area regimes, show the clustering of data that only depends on the value of ${\cal A}_{\rm ex}$ in the membrane. The values of $m_{i}$ computed for each set of
data in M1–M6 (Fig. 9(a)) are shown as a function of ${\cal A}_{\rm ex}$ in
Fig. 9(b). In general, the dependence of $m_{i}$ on ${\cal A}_{\rm ex}$ may be
expressed as:
$m_{i}=f({\cal A}_{\rm ex}{}_{,i}),$ (5)
where $f$ is an unknown function. As a first approximation, we find $m_{i}$ to
be a linear function of ${\cal A}_{\rm ex}$ and hence $f({\cal A}_{\rm
ex}{}_{,i})=K{\cal A}_{\rm ex}{}_{,i}$ with $K$ being the slope of the best
fit linear function, shown as a dotted line in Fig. 9(b).
Figure 9: (a) Scaling plot of $\eta$ vs $\alpha$ for systems M1–M6 for four
different values of ${\cal R}_{\rm bead}$. The dotted lines show representative scaling relations of the form $\eta_{i}=m_{i}\alpha+1$, for
small, intermediate, and large ${\cal A}_{\rm ex}$ regimes. (b) A plot of the
slope $m_{i}$ as a function of ${\cal A}_{\rm ex}$ and the dotted lines denote
the best linear fit to the data. Fitting $f({\cal A}_{\rm ex}{}_{,i})=K{\cal
A}_{\rm ex}{}_{,i}$ we find the value of $K=0.00085/({k}_{\rm B}T{})$.
The presence of an excess area dependent scaling described by the slope $m$ in
Fig. 9(b) can allow one to devise strategies to estimate the range of ${\cal
A}_{\rm ex}$ in cells directly from tether pulling experiments. One possible
approach is to use eqn. (5) in eqn. (4) and self-consistently solve for ${\cal
A}_{\rm ex}$ using the relationship:
${\cal A}_{\rm ex}{}=\left(f({\cal A}_{\rm ex})\alpha+1\right){\cal G}.$ (6)
Here, the variables $\alpha={\cal F}_{\rm t}{}{\cal R}_{\rm t}{}/{k}_{\rm
B}T{}$ and ${\cal G}$ are directly computed from the tether force and radius
measured in tether pulling experiments. The form of the unknown function
$f({\cal A}_{\rm ex})$ is in turn obtained from simulations of model systems that correctly account for the size of the cytoskeletal mesh in the target cell. The excess membrane area may then be estimated by self-consistently solving eqn. (6), as sketched below.
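A minimal sketch of this self-consistent solution, for the linear ansatz $f({\cal A}_{\rm ex})=K{\cal A}_{\rm ex}$ with the value of $K$ quoted in Fig. 9(b), is given below; $\alpha$ and ${\cal G}$ would come from measured data, and the iteration count is an arbitrary choice.
```python
def excess_area_self_consistent(alpha, G, K=0.00085, n_iter=200):
    """Fixed-point iteration for eqn (6) with f(A_ex) = K * A_ex
    (K in 1/kBT); alpha = F_t R_t / kBT and G follow from the measured
    tether force and radius. Converges when |K * alpha * G| < 1."""
    A_ex = G  # initial guess: the small-deformation estimate
    for _ in range(n_iter):
        A_ex = (K * A_ex * alpha + 1.0) * G
    return A_ex
```
For the linear ansatz the fixed point can also be written in closed form, ${\cal A}_{\rm ex}={\cal G}/(1-K\alpha{\cal G})$, but the iteration applies unchanged to any choice of $f$.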
## 4 Discussion
We have presented a computational approach based on umbrella sampling and the
weighted histogram analysis technique to compute the free energy landscape and
the force-extension relationship for the pulling of membrane tethers from
membrane patches of different excess membrane areas, ${\cal A}_{\rm ex}$. The
tether forces measured in our simulations agree very well with in vitro tether
pulling experiments on ruptured GUVs on substrate and on HeLa cells. Unlike
existing models, we are able to account for both mechanical work as well as
entropic work in tether extraction by performing finite temperature
calculations, delineation of the Helmholtz free energy, and performing the
analysis in an ensemble with non-zero ${\cal A}_{\rm ex}$. Based on the
computed values of the force required for tether extraction and the tether
radius, we established scaling relationships involving the ${\cal F}_{\rm t}$,
${\cal R}_{\rm t}$, and ${\cal A}_{\rm ex}$. We demonstrated the relevance of
the calculations by showing the scaling of $\kappa$ with $\alpha$ and $\sigma$
with $\Gamma$ from the model and those obtained from 15 different cell
experiments collapse on to a single curve. These scaling curves can be used to
construct new schemes for estimating the excess membrane area, which alleviate
the limitations of previous methods by being valid for large curvatures, and
by taking into account the thermal membrane undulations in the high curvature
limit. We have shown that our results successfully recapitulate the results of
the previous model in the small-curvature limit. However, in the large-
curvature limit, when the domain of applicability of the previous model is
limited, we predict the values of the excess membrane areas that are
substantially larger than the estimates from the small-curvature model. In
light of the discussion above, there is a profound biomedical ramification of
the excess membrane area distribution as revealed by our analyses of the
tether pulling experiments using the fully non-linear model of the membrane
patch subject to finite temperature undulations.
Our model, while directly relevant to tether extraction in well-behaved in vitro setups such as GUVs or supported bilayers, does not include the full complexity required to recapitulate the cellular experiments. The complexities
arise due to: (i) the dynamic nature of the cytoskeletal reorganization, and (ii) changes in ${\cal A}_{\rm ex}$ due to cellular trafficking mechanisms; the
latter poses an important constraint regarding the ensemble. While in in vitro
experiments or in our model, we have the ability to either select/design a
constant ${\cal A}_{\rm ex}$ or a constant $\sigma$ ensemble, it is not
obvious what the correct cellular condition would be. For example, at early
timescales (i.e. too short for changes in $l_{c}$) the cell membrane patch may
be under a state of tension but at later times both $\sigma$ and ${\cal
A}_{\rm ex}$ can change due to signaling and trafficking. Notwithstanding
these considerations, our model can still be applicable under certain cellular
conditions, namely (i) the timescale of the tether extraction is faster than
that for cytoskeletal reorganization and trafficking ($\sim 10$-$100$ s [42]);
(ii) the dimensions of the extracted tethers are smaller than $l_{c}$. When
these conditions are met, one can treat the tether extraction as a quasi-
equilibrium process where the cytoskeleton merely serves as a pinning boundary
condition for the membrane. This is further justified because the membrane
tension equilibrates at a much faster time scale of $\tau_{\rm
tension}=\eta_{s}/\sigma\sim 1$-$100$ $\mu{\rm s}$ (where $\eta_{s}$ is the
surface dilational viscosity of the bilayer, $\approx 0.35$ Ns/m [43]). Under
these assumptions, ${\cal L}_{\rm patch}$ can serve as
an approximate surrogate to include cytoskeletal pinning effects. These
considerations and caveats must be taken into account when developing
experimental methods for determining ${\cal A}_{\rm ex}$ in cells based on the
model we have described here.
A bi-directional coupling can be established between the cell exterior and
cell interior in a “mechano-sensitive” fashion through the control of membrane
excess area [19], because ${\cal A}_{\rm ex}$ is the conjugate variable for
membrane tension as well as membrane curvature. Several mechanically mediated
signaling events can therefore be transduced via the regulation of ${\cal A}_{\rm ex}$:
they include cell-ECM interactions, which can tune acto-myosin tension and
influence cell-proliferation through integrin-mediated signaling pathways [44,
45, 46]. Glycocalyx remodeling can influence membrane-curvature distribution
on the cell surface and initiate a proliferative cell response, funneling
through integrin-mediated signals [20]. Cellular recycling pathways
responsible for cargo transport from the endosome to the plasma membrane can
also induce and nucleate cell-membrane protrusions providing dominant
mechanisms for cell migration and motility [47, 12]. These examples serve to
reiterate how membrane excess area, in response to the tuning of tension and
by influencing the curvature distribution of the cell membrane, can transduce
signals impacting cell-fate decisions in an ECM-specific and mechano-sensitive
fashion.
Mechanotyping cells to characterize the state of the cell membrane is,
therefore, expected to be crucial in circumstances where the underlying
heterogeneity is intrinsic, such as in a tumor microenvironment, and
influences cell fate through outside-in mechanisms relayed via membrane
mechanotransduction to intracellular signaling. Mechanotyping will be equally
important in circumstances where the membrane plays a dominant role such as in
the viral invasion of host cells in virology, formation of the immunological
synapse in adaptive immunity, or targeted delivery of nanocarriers in
pharmacology.
## Acknowledgements
This work was supported in part by Grants NSF-CBET-1236514 (R.R),
NIH/U01EB016027 (R.R), NIH/1U54CA193417 (R.R and T.B), and NIH/R01GM097552
(T.B). T.P and S.P acknowledge support from the Wellcome Trust-DBT India
alliance. Computational resources were provided in part by the Grant MCB060006
from XSEDE and NSF/DMR-1120901.
## Author contributions statement
R.R. and N.R. designed and performed the simulations. A.R, T.P and S.P
designed and performed the experiments. All authors were involved in data
analysis and interpretation and in writing of the manuscript.
## Competing financial interests
The authors declare that they have no competing financial interests.
Supplementary Information
## S1 Dynamical Triangulated Monte Carlo
The dynamical triangulation Monte Carlo technique consists of two independent
moves to alter the degrees of freedom that define the triangulated surface,
which is taken as a model for the fluid membrane [24, 27]:
1) Vertex Move: A randomly chosen vertex is randomly displaced to a new
position within a cube of size $\epsilon$, centered around the vertex. The
move is performed by holding the connectivity fixed, as shown in Fig. S1(a),
and accepted using the Metropolis scheme [29].
2) Link Flip: A randomly chosen tether shared between two triangles on the
surface is removed and reconnected between the two previously unconnected
vertices as shown in Fig. S1(b), by holding the vertex positions fixed.
Both moves are accepted using the standard Metropolis scheme with a
probability given by the Boltzmann factor of the energy change ($\Delta{\cal
H}_{\rm tot}$) due to the move. In the case of tether pulling simulations the
total energy of the membrane is given by ${\cal H}_{\rm tot}={\cal H}+{\cal
H}_{\rm bias}$, where ${\cal H}$ denotes the elastic Hamiltonian and ${\cal
H}_{\rm bias}$ is the harmonic biasing potential as defined in the main
manuscript. Energies are measured in units of $k_{\rm B}T=1$, with $k_{\rm
B}$ the Boltzmann constant and $T$ the absolute temperature.
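To make the acceptance rule concrete, here is a minimal sketch (our own
illustration, not the authors' code) of the Metropolis step shared by both
moves; the function name and the unit convention $k_{\rm B}T=1$ are assumptions
for illustration only.

```python
import math
import random

def metropolis_accept(delta_H, beta=1.0):
    """Accept a trial move with probability min(1, exp(-beta * delta_H)).

    delta_H is the change in H_tot = H (elastic) + H_bias (umbrella bias)
    caused by a vertex move or a link flip; beta = 1/(k_B*T) = 1 here.
    """
    if delta_H <= 0.0:
        return True          # downhill moves are always accepted
    return random.random() < math.exp(-beta * delta_H)
```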
Figure S1: Dynamical triangulated Monte Carlo scheme to independently modify
the position (a) and the connectivity (b) of the vertices in the triangulated
surface model.
The state of the membrane can be affected by variations either in the bending
stiffness or in the self-avoidance parameter, leading to membranes with
different excess areas ${\cal A}_{\rm ex}$. Snapshots of the membrane
conformations in the parameter space of bending rigidity and excess area are
shown in Fig. S2.
## S2 Membrane conformations in various limits
The conformations of a planar membrane, when ${\cal H}_{\rm bias}=0$, for two
different bending rigidities ($\kappa=10$ and $40$ ${k}_{\rm B}T$) and two
different values of ${\cal A}_{\rm ex}$ ($=4\%$ and $40\%$) are shown in Fig.
S2. The surface is colored with respect to the $z$ position of the vertices.
Figure S2: Conformations of membranes with different bending stiffness and
excess area. Shown are shapes for two values of the excess area ${\cal A}_{\rm
ex}{}=4$ and $40\%$.
## S3 Undulation spectrum for the planar membrane
In the continuum limit, a planar membrane can be parameterized based on its
height with respect to a reference plane, and such a parameterization is
called the Monge gauge. If the reference plane is taken to be the $x$-$y$ plane, then the
height of the membrane at a chosen point on the plane, with coordinates $x$
and $y$, is given by $h(x,y)$. The height of the membrane can also be
expressed in terms of its Fourier modes as [39]
$h({\bf X})=\frac{1}{{\cal L}_{\rm patch}^{2}}\int d{\bf q}\,\,h_{\bf
q}\exp(-i{\bf q}\cdot{\bf X})$ (S7)
Figure S3: Validation of the small deformation limit. The power spectrum, for
each of the Fourier modes, scales as $q^{-4}$ when the membranes have small
excess area or large bending stiffness.
Here we have used the shorthand notations ${\bf X}=[x,y]$ and ${\bf
q}=[q_{x},q_{y}]$ to denote the two-dimensional real and Fourier spaces, and
the Fourier amplitude likewise has two components, $h_{\bf
q}=[h_{q_{x}},h_{q_{y}}]$. When the elastic Hamiltonian ${\cal H}$ (see eqn. 1
of the main manuscript) is expressed in terms of its Fourier modes, the power
spectrum for each of the modes can be shown to obey the relation,
${\cal A}_{\rm patch}\left\langle h_{q}h_{-q}\right\rangle=\dfrac{k_{\rm B}T}{\kappa q^{4}+\sigma q^{2}}$ (S8)
This result is derived for nearly planar membranes (where $|\nabla h|\ll 1$)
and hence should be reproducible in the simulations for membranes with either
large bending stiffness or small excess area or both. The power spectrum
for planar membranes with small excess area, for a range of values of
$\kappa$, is shown in Fig. S3. The observed undulation modes scale as
$q^{-4}$, which is in good agreement with the theoretical expression given
above. However, it should be remembered that membranes with large excess area
would not adhere to this scaling behavior, since the excess area manifests as
large-amplitude undulations, which take the system beyond the small
deformation limit (as $|\nabla h|\sim 1$).
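As a consistency check, eqn. (S8) can be evaluated directly; the short sketch
below (our own illustration; the parameter values are assumptions chosen only
to display the two regimes) shows the crossover from the tension-dominated
$q^{-2}$ behavior to the bending-dominated $q^{-4}$ scaling.

```python
import numpy as np

kBT = 1.0                    # energy unit, matching the simulations
kappa = 40.0                 # bending rigidity in units of kBT (assumed)
sigma = 1.0                  # membrane tension in kBT/nm^2 (assumed)

q = np.logspace(-2, 0, 200)  # wavenumbers in nm^-1 (illustrative range)
spectrum = kBT / (kappa * q**4 + sigma * q**2)   # A_patch * <h_q h_-q>, eqn. (S8)

# Local log-log slope: tends to -2 where sigma*q^2 dominates (small q)
# and to -4 where kappa*q^4 dominates (large q).
slope = np.gradient(np.log(spectrum), np.log(q))
print(slope[0], slope[-1])   # ~ -2 and ~ -4, respectively
```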
## S4 Properties of the tether as a function of $\kappa$ and ${\cal A}_{\rm
ex}$
In this section, we display the effect of the membrane excess area and bending
rigidity on the length and radius of a tether extracted from a cell membrane.
In Fig. S4 we show ${l}_{\rm t}$ and ${\cal R}_{\rm t}$, along with the
membrane conformations, as a function of the imposed tether length ${\cal
L}_{\rm t}$ for a membrane with $\kappa=20\,k_{\rm B}T$ and ${\cal A}_{\rm
ex}\sim 10\%$.
Figure S4: The length and radius of the tether extracted from a membrane with
$\kappa=20\,k_{\rm B}T$ and ${\cal A}_{\rm ex}\sim 10\%$ as a function of the
imposed tether length ${\cal L}_{\rm t}$.
Similarly, in Fig. S5 we show the effect of $\kappa$ on ${l}_{\rm t}$ and
${\cal R}_{\rm t}$ for membranes with similar excess areas, chosen to be
${\cal A}_{\rm ex}\sim 10\%$. The tether pulling data are displayed for
$\kappa=20$ and $160$ ${k}_{\rm B}T$.
Figure S5: Effect of $\kappa$ on the length and radius of the extracted
tether as a function of the imposed tether length ${\cal L}_{\rm t}$, for
membranes with similar excess areas, taken to be ${\cal A}_{\rm ex}\sim 10\%$.
As noted in the discussion of Fig. 2 in the main manuscript, we find both
systems to exhibit the three distinct scaling regimes previously identified
for the tether radius. However, for the membranes with low excess area
considered here we find the third regime to occur at a smaller value of ${\cal
L}_{\rm t}$ compared to that seen for membranes with large excess areas.
Similarly, the value of ${\cal R}_{\rm t}$ in the final regime is an
increasing function of $\kappa$, as is evident from Fig. S5.
## S5 Tether pulling experiments
A typical tether pulling experiment proceeds through many stages as
illustrated in Fig. S6. In the first stage, the tip of an atomic force
microscope (AFM), attached to a cantilever, is indented into the cell surface
and held fixed until the tip makes contact with the cell membrane; these
stages are illustrated in Figs. S6(a) and (b). Stage (b) in the experiments is
analogous to the initial configurations used in our simulations. After the
formation of a stable contact, the AFM tip is retracted at a constant velocity
until it returns to its undeflected state, as shown in Figs. S6(c) and (d). In
the course of retraction, the adhesion between the tip and the membrane leads
to the formation of a tether followed by its extrusion; these processes are
identical to those observed in our simulations and described in Sec. 4 of the
main manuscript.
Figure S6: Various stages of a tether pulling experiment.
## S6 Mechanical properties of the 15 different cells in the CNS
Here we show data from Pontes et al. [41] for the mechanical properties of 15
different cells in the central nervous system (CNS). The tether force ${\cal
F}_{\rm t}$ and radius ${\cal R}_{\rm t}$ for each of these cells (marked
C1–C15) satisfies the scaling relation ${\cal F}_{\rm t}{\cal R}_{\rm
t}/(2\kappa)=\pi$ and this is shown in Fig. S7(a). The values of $\kappa$ and
$\sigma$ are shown in Fig. S7(b), and the spread of the data shows three
characteristic mechanical regimes, namely: (i) low $\kappa$ and low $\sigma$,
(ii) low $\kappa$ and high $\sigma$, and (iii) high $\kappa$ and high $\sigma$.
Figure S7: (a) The scaling relation ${\cal F}_{\rm t}{\cal R}_{\rm
t}/(2\kappa)=\pi$ and (b) the values of $\kappa$ and $\sigma$ for 15 different
cells (marked C1–C15) in the CNS. Data from Pontes et al. [41].
## S7 Movie M1
The movie shows the conformations of a tether extracted from a planar membrane
as a function of the reaction coordinate ${\cal L}_{\rm t}$ – data shown for a
membrane with ${\cal L}_{\rm patch}{}=510$ nm, $\kappa=40$ ${k}_{\rm B}T$, and
${\cal A}_{\rm ex}\sim 40\%$. The histogram shown alongside corresponds to the
distribution of the mean curvature of the membrane surface.
Figure S8: Movie showing the evolution of tether as a function of the reaction
coordinate ${\cal L}_{\rm t}$.
## References
* [1] Suresh, S. Biomechanics and biophysics of cancer cells. _Acta Biomater_ 3, 413–438 (2007).
* [2] Physical Sciences - Oncology Centers Network _et al._ A physical sciences network characterization of non-tumorigenic and metastatic cells. _Sci. Rep._ 3, 1449 (2013).
* [3] Steward, R. L., Rosner, S. R., Zhou, E. H. & Fredberg, J. J. Illuminating human health through cell mechanics. _Swiss Med Wkly_ 143, w13766 (2013).
* [4] Lee, G. Y. H. & Lim, C. T. Biomechanics approaches to studying human diseases. _Trends Biotechnol._ 25, 111–118 (2007).
* [5] Van Vliet, K. J., Bao, G. & Suresh, S. The biomechanics toolbox: experimental approaches for living cells and biomolecules. _Acta Materialia_ 51, 5881–5905 (2003).
* [6] Sheetz, M. P., Sable, J. E. & Döbereiner, H.-G. Continuous membrane-cytoskeleton adhesion requires continuous accommodation to lipid and cytoskeleton dynamics. _Annu. Rev. Biophys. Biomol. Struct._ 35, 417–434 (2006).
* [7] Acerbi, I. _et al._ Human breast cancer invasion and aggression correlates with ECM stiffening and immune cell infiltration. _Integr. Biol._ (2015).
* [8] Ritchie, K., Iino, R., Fujiwara, T., Murase, K. & Kusumi, A. The fence and picket structure of the plasma membrane of live cells as revealed by single molecule techniques (Review). _Mol. Membr. Biol._ 20, 13–18 (2003).
* [9] Morone, N. _et al._ Three-dimensional reconstruction of the membrane skeleton at the plasma membrane interface by electron tomography. _The Journal of Cell Biology_ 174, 851–862 (2006).
* [10] McMahon, H. T. & Gallop, J. L. Membrane curvature and mechanisms of dynamic cell membrane remodelling. _Nature Cell Biology_ 438, 590–596 (2005).
* [11] Zimmerberg, J. & Kozlov, M. M. How proteins produce cellular membrane curvature. _Nat. Rev. Mol. Cell Biol._ 7, 9–19 (2006).
* [12] Zhao, Y. _et al._ Exo70 Generates Membrane Curvature for Morphogenesis and Cell Migration. _Developmental Cell_ 26, 266–278 (2013).
* [13] Goh, L. K. & Sorkin, A. Endocytosis of Receptor Tyrosine Kinases. _Cold Spring Harbor Perspectives in Biology_ 5, a017459–a017459 (2013).
* [14] Grant, B. D. & Donaldson, J. G. Pathways and mechanisms of endocytic recycling. _Nat. Rev. Mol. Cell Biol._ 10, 597–608 (2009).
* [15] Bryant, D. M. & Mostov, K. E. From cells to organs: building polarized tissue. _Nat. Rev. Mol. Cell Biol._ 9, 887–901 (2008).
* [16] Orlando, K. & Guo, W. Membrane organization and dynamics in cell polarity. _Cold Spring Harbor Perspectives in Biology_ 1, a001321 (2009).
* [17] Luo, T., Mohan, K., Iglesias, P. A. & Robinson, D. N. Molecular mechanisms of cellular mechanosensing. _Nature Materials_ 12, 1064–1071 (2013).
* [18] Sheetz, M. P. Cell control by membrane-cytoskeleton adhesion. _Nat. Rev. Mol. Cell Biol._ 2, 392–396 (2001).
* [19] Diz-Muñoz, A., Fletcher, D. A. & Weiner, O. D. Use the force: membrane tension as an organizer of cell shape and motility. _Trends in Cell Biology_ 23, 47–53 (2013).
* [20] Paszek, M. J. _et al._ The cancer glycocalyx mechanically primes integrin-mediated growth and survival. _Nature_ 511, 319–325 (2014).
* [21] Miaczynska, M. Effects of Membrane Trafficking on Signaling by Receptor Tyrosine Kinases. _Cold Spring Harbor Perspectives in Biology_ 5, a009035 (2013).
* [22] Schmid-Schönbein, G. W., Shih, Y. Y. & Chien, S. Morphometry of human leukocytes. _Blood_ 56, 866–875 (1980).
* [23] Baumgärtner, A. & Ho, J. Crumpling of fluid vesicles. _Phys. Rev. A_ 41, 5747–5750 (1990).
* [24] Kroll, D. M. & Gompper, G. The conformation of fluid membranes: Monte Carlo simulations. _Science_ 255, 968–971 (1992).
* [25] Canham, P. B. The minimum energy of bending as a possible explanation of the biconcave shape of the human red blood cell. _J. Theor. Biol._ 26, 61–81 (1970).
* [26] Helfrich, W. Elastic properties of lipid bilayers: theory and possible experiments. _Z. Naturforsch. C_ 28, 693 (1973).
* [27] Ramakrishnan, N., Sunil Kumar, P. B. & Ipsen, J. H. Monte Carlo simulations of fluid vesicles with in-plane orientational ordering. _Phys. Rev. E_ 81, 041922 (2010).
* [28] Tourdot, R. W., Ramakrishnan, N. & Radhakrishnan, R. Defining the free-energy landscape of curvature-inducing proteins on membrane bilayers. _Phys. Rev. E_ 90, 022717 (2014).
* [29] Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. Equation of State Calculations by Fast Computing Machines. _J. Chem. Phys._ 21, 1087–1092 (1953).
* [30] Helfrich, W. & Servuss, R. M. Undulations, steric interaction and cohesion of fluid membranes. _Il Nuovo Cimento D_ 3, 137–151 (1984).
* [31] Waheed, Q. & Edholm, O. Undulation Contributions to the Area Compressibility in Lipid Bilayer Simulations. _Biophys. J._ 97, 2754–2760 (2009).
* [32] Phillips, R., Kondev, J. & Theriot, J. _Physical biology of the cell_ (Garland Science, 2009).
* [33] Frenkel, D. & Smit, B. _Understanding Molecular Simulation : From Algorithms to Applications_ (Academic Press, 2001), 2 edn.
* [34] Roux, B. The calculation of the potential of mean force using computer simulations. _Computer Physics Communications_ 91, 275–282 (1995).
* [35] Weinberger, A. _et al._ Gel-assisted formation of giant unilamellar vesicles. _Biophys. J._ 105, 154–164 (2013).
* [36] Hutter, J. L. & Bechhoefer, J. Calibration of atomic-force microscope tips. _Review of Scientific Instruments_ 64, 1868–1873 (1993).
* [37] Derényi, I., Jülicher, F. & Prost, J. Formation and Interaction of Membrane Tubes. _Phys. Rev. Lett._ 88, 238101 (2002).
* [38] Lipowsky, R. The conformation of membranes. _Nature_ 349, 475–481 (1991).
* [39] Seifert, U. Configurations of fluid membranes and vesicles. _Advances in Physics_ 46, 13–137 (1997).
* [40] Ramakrishnan, N. _et al._ Biophysically inspired model for functionalized nanocarrier adhesion to cell surface: roles of protein expression and mechanical factors. _J. Royal Society Open Science_ 3, 160260 (2016).
* [41] Pontes, B. _et al._ Membrane Elastic Properties and Cell Function. _PLoS ONE_ 8, e67708 (2013).
* [42] Joanny, J. F. & Prost, J. Active gels as a description of the actin-myosin cytoskeleton. _HFSP journal_ 3, 94–104 (2009).
* [43] Haluska, C. K. _et al._ Time scales of membrane fusion revealed by direct imaging of vesicle fusion with high temporal resolution. _Proc. Natl. Acad. Sci. U.S.A._ 103, 15841–15846 (2006).
* [44] Mih, J. D., Marinkovic, A., Liu, F., Sharif, A. S. & Tschumperlin, D. J. Matrix stiffness reverses the effect of actomyosin tension on cell proliferation. _Journal of Cell Science_ 125, 5974–5983 (2012).
* [45] Paszek, M. J. _et al._ Tensional homeostasis and the malignant phenotype. _Cancer Cell_ 8, 241–254 (2005).
* [46] Samuel, M. S. _et al._ Actomyosin-mediated cellular tension drives increased tissue stiffness and beta-catenin activation to induce epidermal hyperplasia and tumor growth. _Cancer Cell_ 19, 776–791 (2011).
* [47] Zuo, X. _et al._ Exo70 interacts with the Arp2/3 complex and regulates cell migration. _Nature Cell Biology_ 8, 1383–1388 (2006).
Efficient model chemistries for peptides. II.
Basis set convergence in the B3LYP method.
Pablo ECHENIQUE
Instituto de Biocomputación y Física de Sistemas Complejos (BIFI),
and Departamento de Física Teórica, Universidad de Zaragoza,
Pedro Cerbuna 12, E-50009 Zaragoza, Spain
E-mail: <EMAIL_ADDRESS>
Gregory A. CHASS
Global Institute Of COmputational Molecular and Materials Science (GIOCOMMS),
and School of Chemistry, University of Wales, Bangor, Gwynedd, LL57 2UW United
Kingdom,
and College of Chemistry, Beijing Normal University, Beijing, 100875, China
PACS: 07.05.Tp; 31.15.Ar; 31.50.Bc; 87.14.Ee; 87.15.Aa; 89.75.-k
Keywords: peptides, quantum chemistry, PES, B3LYP, basis set convergence
Abstract
Small peptides are model molecules for the amino acid residues that are the
constituents of proteins. In any bottom-up approach to understanding the
properties of these macromolecules, which are essential in the functioning of
every living being, correctly describing the conformational behaviour of small
peptides constitutes an unavoidable first step. In this work, we present a
study of several potential energy surfaces (PESs) of the model dipeptide HCO-
L-Ala-NH2. The PESs are calculated using the B3LYP density-functional theory
(DFT) method, with Dunning’s basis sets cc-pVDZ, aug-cc-pVDZ, cc-pVTZ, aug-cc-
pVTZ, and cc-pVQZ. These calculations, whose cost amounts to approximately 10
years of computer time, allow us to study the basis set convergence of the
B3LYP method for this model peptide. Also, we compare the B3LYP PESs to a
previous computation at the MP2/6-311++G(2df,2pd) level, in order to assess
their accuracy with respect to a higher-level reference. All data sets have
been analyzed according to a general framework which can be extended to other
complex problems and which captures the nearness concept in the space of model
chemistries (MCs).
## 1 Introduction
In any bottom-up attempt to understand the behaviour of protein molecules (in
particular, the still elusive protein folding process [3, 1, 5, 2, 4]), the
characterization of the conformational preferences of short peptides [13, 12,
7, 11, 6, 9, 10, 8] constitutes an unavoidable first step. Due to the lower
numerical effort required and also to the manageability of their
conformational space, the most frequently studied peptides are the shortest
ones: the _dipeptides_ [14, 17, 16, 15], in which a single amino acid residue
is capped at both the N- and C-termini with neutral peptide groups. Among
them, the most popular choice has been the _alanine_ dipeptide [34, 30, 26,
23, 27, 24, 21, 22, 6, 20, 29, 19, 33, 25, 31, 28, 32, 18], which, being the
simplest chiral residue, shares many similarities with most of the other
dipeptides at the minimum computational price.
Although classical force fields [35, 36, 37, 38, 39, 40, 41, 42, 43] are the
only feasible choice for simulating large molecules at present, they have been
reported to yield inaccurate _potential energy surfaces_ (PESs) for dipeptides
[44, 45, 46, 47, 29] and short peptides [48, 6]. Therefore, it is not
surprising that they are widely recognized as being unable to correctly
describe the intricacies of the whole protein folding process [49, 50, 51, 44,
52, 53, 54, 55]. On the other hand, albeit prohibitively demanding in terms of
computational resources, ab initio quantum mechanical calculations [56, 57,
58] are not only regarded as the correct physical description that in the long
run will be the preferred choice to directly tackle proteins (given the
exponential growth of computer power and the advances in the search for
pleasantly scaling algorithms [60, 59]), but they are also used in small
peptides as the reference against which less accurate methods must be compared
[61, 62, 44, 45, 47, 29, 6] in order to, for example, parameterize improved
generations of additive, classical force fields for polypeptides.
However, despite the sound theoretical basis, in practical quantum chemistry
calculations a plethora of approximations must typically be made if one wants
to obtain the final results in a reasonable human time. The exact ‘recipe’
that includes all the assumptions and steps needed to calculate the relevant
observables for any molecular system has been termed _model chemistry_ (MC) by
John Pople. In his own words, a MC is an “approximate but well-defined general
and continuous mathematical procedure of simulation” [63].
After assuming that the particles involved move at non-relativistic velocities
and that the greater mass of the nuclei allows one to perform the Born-
Oppenheimer approximation, we are left with the problem of solving the non-
relativistic electronic Schrödinger equation [60]. The two starting
approximations to its exact solution that a MC must contain are, first, the
truncation of the $N$-electron space (in wavefunction-based methods) or the
choice of the functional (in DFT) and, second, the truncation of the one-
electron space, via the LCAO scheme (in both cases). The extent up to which
the first truncation is carried (or the functional chosen in the case of DFT)
is commonly called the _method_ and it is denoted by acronyms such as RHF,
MP2, B3LYP, CCSD(T), FCI, etc., whereas the second truncation is embodied in
the definition of a finite set of atom-centered Gaussian functions termed
_basis set_ [60, 64, 57, 58, 65], which is also designated by conventional
short names, such as 6-31+G(d), TZP or cc-pVTZ(–f). If we denote the method by
a capital $M$ and the basis set by a $B$, the specification of both is
conventionally denoted by $L:=M/B$ and called a _level of the theory_. Typical
examples of this are RHF/3-21G or MP2/cc-pVDZ [56, 57, 58].
Note that, apart from these approximations, which are the most commonly used
and the only ones that are considered in this work, the MC concept may include
a lot of additional features: the heterolevel approximation (explored in a
previous work in this series [34]), protocols for extrapolating to the
infinite-basis set limit [66, 67, 68, 69, 70], additivity assumptions [71, 72,
73, 74], extrapolations of the Møller-Plesset series to infinite order [75],
removal of the so-called _basis set superposition error_ (BSSE) [76, 77, 78,
79, 80, 81, 82], etc. The reason behind most of these techniques is the
pressing need to reduce the computational cost of the calculations.
Now, although general applicability is a requirement that all MCs must
satisfy, general accuracy is not mandatory. In fact, the different
procedures that constitute a given MC are typically parameterized and
tested on very particular systems, which are often small molecules. Therefore,
the validity of the approximations outside that native range of problems must
always be questioned and checked. However, while the approximate computational
cost of a given MC for a particular system is rather easy to predict on the
basis of simple scaling relations, its expected accuracy on a particular
problem may be difficult to predict a priori, especially if we are dealing
with large molecules in which interactions on very different energy scales
play a role. The description of the conformational behaviour of peptides
(or, more generally, flexible organic species), via their PESs in terms of the
soft internal coordinates, is one such problem and the one that is treated
in this work.
To this end, we first describe, in sec. 2, the computational and theoretical
methods used throughout the rest of the document. Then, in sec. 3, we
introduce a basic framework that rationalizes the actual process of evaluating
the efficiency of any MC for a complex problem. These general ideas are used,
in sec. 4, to perform a study of the _density-functional theory_ (DFT) B3LYP
[83, 84, 85, 86] method with the cc-pVDZ, aug-cc-pVDZ, cc-pVTZ, aug-cc-pVTZ,
and cc-pVQZ Dunning’s basis sets [87, 88]. To this end, we apply these levels
of the theory to the calculation of the PES of the model dipeptide HCO-L-Ala-NH2
(see fig. 1), and assess their efficiency by comparison with a reference PES.
Finally, in sec. 5, the most important conclusions are briefly summarized.
## 2 Methods
All ab initio quantum mechanical calculations have been performed using the
GAMESS-US program [89, 90] under Linux, on 2.2 GHz PowerPC 970FX machines
with 2 GB of RAM.
The internal coordinates used for the Z-matrix of the HCO-L-Ala-NH2 dipeptide
in the GAMESS-US input files are the _Systematic Approximately Separable
Modular Internal Coordinates_ (SASMIC) ones introduced in ref. 91. They are
presented in table 1 (see also fig. 1 for reference).
Atom name | Bond length | Bond angle | Dihedral angle
---|---|---|---
H1 | | |
C2 | (2,1) | |
N3 | (3,2) | (3,2,1) |
O4 | (4,2) | (4,2,1) | (4,2,1,3)
C5 | (5,3) | (5,3,2) | (5,3,2,1)
H6 | (6,3) | (6,3,2) | (6,3,2,5)
C7 | (7,5) | (7,5,3) | $\phi:=$(7,5,3,2)
C8 | (8,5) | (8,5,3) | (8,5,3,7)
H9 | (9,5) | (9,5,3) | (9,5,3,7)
H10 | (10,8) | (10,8,5) | (10,8,5,3)
H11 | (11,8) | (11,8,5) | (11,8,5,10)
H12 | (12,8) | (12,8,5) | (12,8,5,10)
N13 | (13,7) | (13,7,5) | $\psi:=$(13,7,5,3)
O14 | (14,7) | (14,7,5) | (14,7,5,13)
H15 | (15,13) | (15,13,7) | (15,13,7,5)
H16 | (16,13) | (16,13,7) | (16,13,7,15)
Table 1: Internal coordinates in Z-matrix form of the protected dipeptide HCO-
L-Ala-NH2 according to the SASMIC scheme introduced in ref. 91. The numbering
of the atoms is that in fig. 1, and the soft Ramachandran angles $\phi$ and
$\psi$ are indicated.
All PESs in this study have been discretized into a regular 12$\times$12 grid
in the bidimensional space spanned by the Ramachandran angles $\phi$ and
$\psi$, with both of them ranging from $-165^{\mathrm{o}}$ to
$165^{\mathrm{o}}$ in steps of $30^{\mathrm{o}}$. To calculate the PES at a
particular level of the theory, we have run constrained energy optimizations
at each point of the grid, freezing the two Ramachandran angles $\phi$ and
$\psi$ at the corresponding values. In order to save computational resources,
the starting structures were taken, when possible, from PESs previously
optimized at a lower level of the theory. All the basis sets used in the study
have been taken from the GAMESS-US internally stored library, and spherical
Gaussian-type orbitals (GTOs) have been preferred, thus having 5 d-type and 7
f-type functions per shell.
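For concreteness, the grid just described can be generated as follows (a
trivial sketch of our own; the variable names are illustrative, and each
$(\phi,\psi)$ pair corresponds to one constrained geometry optimization):

```python
import numpy as np

# Both Ramachandran angles run from -165 to 165 degrees in steps of 30,
# giving 12 values per angle and a 12x12 grid of conformations.
angles = np.arange(-165.0, 166.0, 30.0)
grid = [(phi, psi) for phi in angles for psi in angles]
assert len(angles) == 12 and len(grid) == 144
```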
Figure 1: Atom numeration of the protected dipeptide HCO-L-Ala-NH2 according
to the SASMIC scheme introduced in ref. 91. The soft Ramachandran angles
$\phi$ and $\psi$ are also indicated.
We have computed 5 PESs, using the DFT B3LYP [83, 84, 85, 86] method with the
cc-pVDZ, aug-cc-pVDZ, cc-pVTZ, aug-cc-pVTZ, and cc-pVQZ Dunning’s basis sets
[87, 88]. The total cost of these calculations in the machines used is around
10 years of computer time.
Also, let us note that the correcting terms to the PES coming from mass-metric
tensor determinants and from the determinant of the Hessian matrix have been
recently shown to be relevant for the conformational behaviour of peptides
[18]. (The latter may be regarded as a residual entropy arising from the
elimination of the hard coordinates from the description.) Although, in this
study, we have included none of these terms, the PES calculated here is the
greatest part of the effective free energy [18], so that it may be considered
as the first ingredient for a further refinement of the study in which the
correcting terms are taken into account. The same may be said about another
important source of error in the calculation of relative energies in peptide
systems: the already mentioned BSSE [31].
In order to compare the PESs produced by the different MCs, a statistical
criterion (a distance) introduced in ref. 92 has been used. Let us recall here
that this _distance_ , denoted by $d_{12}$, profits from the complex nature of
the problem studied to compare any two different potential energy functions,
$V_{1}$ and $V_{2}$. From a working set of conformations (in this case, the
144 points of each PES), it statistically measures the typical error that one
makes in the _energy differences_ if $V_{2}$ is used instead of the more
accurate $V_{1}$, admitting a linear rescaling and a shift in the energy
reference.
Despite having energy units, the quantity $d_{12}$ approximately presents all
properties characteristic of a typical mathematical metric in the space of MCs
(hence the word ‘distance’), such as the possibility of defining a symmetric
version of it and a fulfillment of the triangle inequality (see ref. 92 for
the technical details and sec. 3 for more about the importance of these
facts). It also presents better properties than other quantities customarily
used to perform these comparisons, such as the energy RMSD, the average energy
error, etc., and it may be related to the Pearson’s correlation coefficient
$r_{12}$ by
$d_{12}=\sqrt{2}\,{\sigma}_{2}(1-r_{12}^{2})^{1/2}\ ,$ (1)
where $\sigma_{2}$ is the standard deviation of $V_{2}$ in the working set.
Moreover, due to its physical meaning, it has been argued in ref. 92 that, if
the distance between two different approximations of the energy of the same
system is less than $RT$, one may safely substitute one by the other without
altering the relevant dynamical or thermodynamical behaviour. Consequently, we
shall present the results in units of $RT$ (at $T=300$ K, so that
$RT\simeq 0.6$ kcal/mol).
Finally, if one assumes that the effective energies compared will be used to
construct a polypeptide potential and that it will be designed as simply the
sum of mono-residue ones (more complex situations may be found in real
problems [93]), then, the number $N_{\mathrm{res}}$ of residues up to which
one may go while keeping the distance $d_{12}$ between the two approximations
of the $N$-residue potential below $RT$ is [92]
$N_{\mathrm{res}}=\left(\frac{RT}{d_{12}}\right)^{2}\ .$ (2)
According to the value taken by $N_{\mathrm{res}}$ for a comparison between a
fixed reference PES, denoted by $V_{1}$, and a candidate approximation,
denoted by $V_{2}$, we shall divide the whole accuracy range in sec. 4 into
three regions: the _protein region_ , corresponding
to $0<d_{12}\leq 0.1RT$, or, equivalently, to $100\leq
N_{\mathrm{res}}<\infty$; the _peptide region_ , corresponding to
$0.1RT<d_{12}\leq RT$, or $1\leq N_{\mathrm{res}}<100$; and, finally, the
_inaccurate region_ , where $d_{12}>RT$, so that even for a dipeptide it is not
advisable to use $V_{2}$ as an approximation to $V_{1}$. Of course, these are
only approximate regions based on the general idea that we are not interested
in the dipeptides as a final system, but only as a means to approach protein
behaviour from the bottom up. Therefore, not only must the error in the
dipeptides be measured, but it must also be estimated how this discrepancy
propagates to polypeptide systems.
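To fix ideas, the following sketch (our own illustration, assuming only the
definitions in eqns. (1) and (2)) computes $d_{12}$ and $N_{\mathrm{res}}$
from two PESs sampled on the same working set of 144 conformations; the file
names in the usage comment are hypothetical.

```python
import numpy as np

RT = 0.6  # kcal/mol at 300 K, as used in the text

def d12(V1, V2):
    """Distance of eqn. (1): d12 = sqrt(2) * sigma2 * (1 - r12^2)^(1/2)."""
    r12 = np.corrcoef(V1, V2)[0, 1]   # Pearson correlation coefficient
    sigma2 = np.std(V2)               # standard deviation of V2 on the set
    return np.sqrt(2.0) * sigma2 * np.sqrt(1.0 - r12**2)

def n_res(d, rt=RT):
    """Residue bound of eqn. (2): N_res = (RT / d12)^2."""
    return (rt / d) ** 2

# Hypothetical usage with two 144-point PESs in kcal/mol:
# V1 = np.loadtxt("pes_reference.dat")   # e.g. the reference MC
# V2 = np.loadtxt("pes_candidate.dat")   # the MC under scrutiny
# d = d12(V1, V2)
# print(d / RT, n_res(d))
```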
## 3 General framework
The general abstract framework behind the investigation presented in this
study (and also implicitly behind most of the works found in the literature),
may be described as follows:
The objects of study are the _model chemistries_ defined by Pople [63] and
discussed in the introduction. The MCs under scrutiny are applied to a
particular _problem_ of interest, which may be thought to be formed by three
ingredients: the _physical system_ , the _relevant observables_ and the
_target accuracy_. The MCs are then selected according to their ability to
yield numerical values of the relevant observables for the physical system
studied within the target accuracy. The concrete numerical values that one
wants to approach are those given by the _exact model chemistry_ MCε, which
could be thought to be either the experimental data or the exact solution of
the non-relativistic electronic Schrödinger equation [60]. However, the
computational effort needed to perform the calculations required by MCε is
literally infinite, so that, in practice, one is forced to work with a
_reference model chemistry_ MCref, which, albeit different from MCε, is
thought to be close to it. Finally, the set of MCs that one wants to
investigate are compared to MCref and the nearness to it is seen as
approximating the nearness to MCε.
Figure 2: Space $\mathcal{M}$ of all model chemistries. The exact model
chemistry MCε is shown as a black circle, the MP2 reference MC is shown as a
grey-filled circle, and B3LYP MCs as white-filled ones. Both reference PESs
are indicated with an additional circle around the points. The situation
depicted is (schematically) the one found in this study, assuming that MP2 is
a more accurate method than B3LYP to account for the conformational
preferences of peptide systems. The positions of the different MCs have no
relevance, and only the relative measured distances among them are
qualitatively depicted.
These comparisons are commonly performed using a numerical quantity
$\mathcal{D}$ that is a function of the relevant observables. In order for the
intuitive ideas about relative proximity in the $\mathcal{M}$ space to be
captured and the above reasoning to be meaningful, this numerical quantity
$\mathcal{D}$ must have some of the properties of a mathematical distance. In
particular, it is advisable that the _triangle inequality_ is obeyed, so that,
for any model chemistry MC, one has that
$\displaystyle\mathcal{D}(\mathrm{MC}_{\varepsilon},\mathrm{MC})\leq\mathcal{D}(\mathrm{MC}_{\varepsilon},\mathrm{MC}^{\mathrm{ref}})+\mathcal{D}(\mathrm{MC}^{\mathrm{ref}},\mathrm{MC})\
,$ (3a)
$\displaystyle\mathcal{D}(\mathrm{MC}_{\varepsilon},\mathrm{MC})\geq\big{|}\mathcal{D}(\mathrm{MC}_{\varepsilon},\mathrm{MC}^{\mathrm{ref}})-\mathcal{D}(\mathrm{MC}^{\mathrm{ref}},\mathrm{MC})\big{|}\
,$ (3b)
and, assuming that
$\mathcal{D}(\mathrm{MC}_{\varepsilon},\mathrm{MC}^{\mathrm{ref}})$ is small
(and $\mathcal{D}$ is a positive function), we obtain
$\mathcal{D}(\mathrm{MC}_{\varepsilon},\mathrm{MC})\simeq\mathcal{D}(\mathrm{MC}^{\mathrm{ref}},\mathrm{MC})\
,$ (4)
which is the sought result in agreement with the ideas stated at the beginning
of this section.
The distance $d_{12}$ introduced in ref. 92 and summarized in the previous
section, measured in this case on the conformational energy surfaces (the
relevant observable) of the model dipeptide HCO-L-Ala-NH2 (the physical
system), approximately fulfills the triangle inequality and thus captures the
_nearness_ concept in the space $\mathcal{M}$ of model chemistries.
This space, $\mathcal{M}$, containing all possible MCs, is a rather complex
and multidimensional one. For example, two commonly used ‘dimensions’ which
may be thought to parameterize $\mathcal{M}$ are the size of the basis set and
the amount of electron correlation in the model (or the quality of the DFT
functional used). However, since there are many ways in which the size of a
basis set or the electron correlation may be increased and there are
additional approximations that can be included in the MC definition (see sec.
1), the ‘dimensions’ of $\mathcal{M}$ can be considered to be many more than
two.
The definition of a distance, such as the one described in the previous lines,
for a given problem of interest helps to provide a certain degree of structure
into this complex space. In fig. 2 a two-dimensional scheme of the overall
situation found in this study is presented.
## 4 Results
MCs | $d_{12}/RT$ a | $a_{12}$ b | $N_{\mathrm{res}}$ c | $t$ d
---|---|---|---|---
B3LYP/aug-cc-pVTZ | 0.079 | 15.2 | 159.8 | 79.09%
B3LYP/cc-pVTZ | 0.191 | 21.1 | 27.4 | 9.78%
B3LYP/aug-cc-pVDZ | 0.172 | 82.8 | 33.7 | 5.27%
B3LYP/cc-pVDZ | 1.045 | 109.4 | 0.9 | 1.29%
Table 2: Basis set convergence results for the B3LYP MCs investigated in this
work. aDistance from the B3LYP/cc-pVQZ reference in units of $RT$ at
$300$ K. bEnergy offset with respect to the reference MC in kcal/mol.
cMaximum number of residues in a polypeptide potential up to which the
corresponding MC may correctly approximate the reference (under the
assumptions in sec. 2). dRequired computer time, expressed as a fraction of
$t_{\mathrm{ref}}$.
Before starting with the results of the calculations, let us introduce the
concept of _efficiency_ of a particular MC that shall be used: It is laxly
defined as a balance between accuracy (in terms of the distance introduced in
sec. 2) and computational cost (in terms of computer time). It can be
graphically extracted from the _efficiency plots_ , where the distance
$d_{12}$ between any given MC and a reference one is shown in units of $RT$ in
the $x$-axis, while, in the $y$-axis, one can find the computer time taken for
each MC (see the following pages for two examples). As a general rule of thumb,
_we shall consider a MC to be more efficient for approximating the reference
when it is placed closer to the origin of coordinates in the efficiency plot_.
This approach is intentionally non-rigorous because many factors that
influence the computer time may vary from one practical calculation to
another, such as the algorithms used, the actual details of the
computers (frequency of the processor, size of the RAM and cache memories,
system bus and disk access velocity, operating system, mathematical libraries,
etc.), the starting guesses for the SCF orbitals, or the starting structures in
geometry optimizations.
Taking all this into account, the only conclusions that shall be drawn in this
work about the relative efficiency of the MCs studied are those deduced from
strong signals in the plots and, therefore, those that can be extrapolated to
future calculations; in other words, _the small details shall be typically
neglected_.
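As an illustration of how such a plot is read, the sketch below (our own, using
only the numbers already listed in Table 2) places each MC by its distance to
the reference and its computer time; points nearer the origin are the more
efficient ones.

```python
import matplotlib.pyplot as plt

# Values taken from Table 2 (B3LYP MCs vs. the B3LYP/cc-pVQZ reference).
mcs  = ["aug-cc-pVTZ", "cc-pVTZ", "aug-cc-pVDZ", "cc-pVDZ"]
dist = [0.079, 0.191, 0.172, 1.045]   # d12 / RT
time = [79.09, 9.78, 5.27, 1.29]      # computer time, % of t_ref

plt.scatter(dist, time)
for name, x, y in zip(mcs, dist, time):
    plt.annotate(name, (x, y))
plt.xlabel(r"$d_{12}/RT$")
plt.ylabel(r"computer time (% of $t_{\rm ref}$)")
plt.show()
```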
Figure 3: Efficiency plot of all the B3LYP MCs studied. In the $x$-axis, we
show the distance $d_{12}$, in units of $RT$ at $300$ K, between
any given MC and the B3LYP/cc-pVQZ reference (indicated by an encircled
point), while, in the $y$-axis, we present the computer time needed to compute
the whole 12$\times$12 grid in the Ramachandran space of the model dipeptide
HCO-L-Ala-NH2. The different accuracy regions are labeled, and 10% of the
time $t_{\mathrm{ref}}$ taken by the reference MC is also indicated.
In the first part of the study, we compare all B3LYP MCs to the one with the
largest basis set, B3LYP/cc-pVQZ (the highest level of the theory calculated
for this work, depicted in fig. 4) using the distance introduced in sec. 2.
All mentions of the accuracy of any given MC in this part must be understood
as relative to this reference. However, it has been reported that MP2 is a
superior method to B3LYP to account for the conformational behaviour of
peptide systems [94]. Therefore, the absolute accuracy of the B3LYP MCs
calculated here is probably closer to the relative accuracy with respect to
the MP2/6-311++G(2df,2pd) reference in what follows. In this spirit, this part
of the study should be regarded as an investigation of the convergence to _the
infinite basis set B3LYP limit_ , for which the best B3LYP MC here is probably
a good approximation.
Figure 4: Potential energy surface of the model dipeptide HCO-L-Ala-NH2
computed at the B3LYP/cc-pVQZ level of the theory. The PES has been originally
calculated in a 12$\times$12 discrete grid in the space spanned by the
Ramachandran angles $\phi$ and $\psi$ and later smoothed with bicubic splines
for visual convenience. The energy reference has been set to zero. (At this
level of the theory, the absolute energy of the minimum point in the
12$\times$12 grid, located at $(-75^{o},75^{o})$, is $-417.199231353$
hartree).
The results are depicted in fig. 3, and in table 2. We can extract several
conclusions from them:
* •
Regarding the convergence to the infinite basis set limit, we observe that
only the most expensive MC, B3LYP/aug-cc-pVTZ, correctly approximates the
reference for peptides of more than 100 residues. On the other hand, for only
5.27% of the computer time $t_{\mathrm{ref}}$ taken by the reference MC, we
can use B3LYP/aug-cc-pVDZ, which correctly approximates it up to 30-residue
peptides. Finally, the MC with the smallest basis set, B3LYP/cc-pVDZ, cannot
properly replace the reference even for dipeptides.
* •
In ref. [34], using Pople’s basis sets [95, 96, 102, 97, 98, 99, 100, 101], we
saw that “the general rule that is sometimes assumed when performing quantum
chemical calculations, which states that ‘the more expensive, the more
accurate’, is rather coarse-grained and relevant deviations from it may be
found.” We recognized that “One may argue that this observation is due to the
unsystematic way in which Pople basis sets can be enlarged and that the
correlation between accuracy and cost will be much higher if, for example,
only Dunning basis sets are used.”, which is definitely observed in fig. 3,
but we argued that this was something to be expected, since “there are too few
Dunning basis sets below a reasonable upper bound on the number of elements to
see anything but a line in the efficiency plot”. In the results presented in
this work, we can see that, even if the correlation between accuracy and cost
is higher in the case of Dunning’s basis sets than in the case of Pople’s (due
to the smaller number of the former), the rule of thumb ‘the more expensive,
the more accurate’ still breaks down in this case, since the
B3LYP/aug-cc-pVDZ MC is, at the same time, more accurate and less costly than
B3LYP/cc-pVTZ. In general, this idea applies to all the approximations that a
MC may contain (see the introduction for a partial list), and justifies the
systematic search for the most efficient combination of them for a given
problem. This work is our second step (ref. [34] is the first one) in that
path for the particular case of the conformational behaviour of peptide
systems.
* •
The observation in the previous point also suggests that it may be efficient
to include diffuse functions (the ‘aug-’ in aug-cc-pVDZ) in the basis set for
this type of problem.
* •
The error of the studied MCs regarding the differences of energy (as measured
by $d_{12}$) is much smaller than the error in the absolute energies (as
measured by $a_{12}$), suggesting that the largest part of the discrepancy
must be a systematic one.
In the second part of the study, we assess the absolute accuracy of the B3LYP
MCs by comparing them to the (as far as we are aware) highest homolevel in the
literature, the MP2/6-311++G(2df,2pd) PES in ref. [34]. If one assumes that
this level of the theory may be close enough to the exact result for the given
problem at hand, then this comparison measures the ‘absolute’ accuracy of the
B3LYP MCs, and not only their relative accuracy with respect to the B3LYP
infinite basis set limit, as we did in the previous part. This is the
fundamental difference between figs. 3 and 5.
MCs | $d_{12}/RT$ a | $a_{12}$ b | $N_{\mathrm{res}}$ c | $t$ d
---|---|---|---|---
B3LYP/cc-pVQZ | 1.008 | -457.2 | 0.98 | 1861
B3LYP/aug-cc-pVTZ | 1.029 | -442.0 | 0.94 | 1472
B3LYP/cc-pVTZ | 1.058 | -436.1 | 0.89 | 182
B3LYP/aug-cc-pVDZ | 1.006 | -374.4 | 0.99 | 98
B3LYP/cc-pVDZ | 1.533 | -347.8 | 0.43 | 24
Table 3: Comparison of all the B3LYP MCs investigated in this work with the
MP2/6-311++G(2df,2pd) PES in ref. 34. aDistance from the MP2/6-311++G(2df,2pd)
reference in units of $RT$ at $300$ K. bEnergy offset with respect to the
reference MC in kcal/mol. cMaximum number of residues in a polypeptide
potential up to which the corresponding MC may correctly approximate the
reference (under the assumptions in sec. 2). dComputer time needed for the
calculation of the whole PES, in days.
The results of this part of the study are depicted in fig. 5, and in table 3.
We can extract several conclusions from them:
* •
All B3LYP MCs, including the largest one, B3LYP/cc-pVQZ, lie in the inaccurate
region of the efficiency plot in fig. 5, meaning that they cannot be reliably
used to approximate the MP2/6-311++G(2df,2pd) reference even in the smallest
dipeptides.
* •
In line with the observations in the previous part of the study, we see that,
if one is worried about absolute accuracy, there is no point in going beyond
the aug-cc-pVDZ basis set in B3LYP.
* •
The B3LYP/cc-pVDZ MC again performs significantly worse than the rest,
agreeing with the results in the previous part of the study and suggesting
that cc-pVDZ may be too small a basis set for the problem tackled here.
* •
Again, the error of the MCs in the differences of energy (as measured by
$d_{12}$) is much smaller than the error in the absolute energies (as measured
by $a_{12}$).
Figure 5: Efficiency plot of all the B3LYP MCs studied. In the $x$-axis, we
show the distance $d_{12}$, in units of $RT$ at $300$ K, between
any given MC and the MP2/6-311++G(2df,2pd) reference calculated in ref. 34,
while, in the $y$-axis, we present the computer time needed to compute the
whole 12$\times$12 grid in the Ramachandran space of the model dipeptide HCO-
L-Ala-NH2. The different accuracy regions are labeled.
## 5 Conclusions
In this study, we have investigated 5 PESs of the model dipeptide HCO-L-Ala-
NH2, calculated with the B3LYP method, and the cc-pVDZ, aug-cc-pVDZ, cc-pVTZ,
aug-cc-pVTZ, and cc-pVQZ Dunning’s basis sets. We have first assessed the
convergence of the B3LYP MCs to the infinite basis set limit, and then we have
evaluated their absolute accuracy by comparing them to the (as far as we are
aware) highest homolevel in the literature, the MP2/6-311++G(2df,2pd) PES in
ref. [34]. All the comparisons have been performed according to a general
framework which is extensible to further studies, and using a distance between
the different PESs that correctly captures the nearness concept in the space
of MCs. The calculations performed here have taken around 10 years of computer
time.
The main conclusions of the study are the following:
* •
The complexity of the problem (the conformational behaviour of peptides)
renders the correlation between accuracy and computational cost of the
different quantum mechanical algorithms imperfect. This ultimately justifies
the need for systematic studies, such as the one presented here, in which the
most efficient MCs are sought for the particular problem of interest.
* •
Assuming that the MP2/6-311++G(2df,2pd) level of the theory is closer to the
exact solution of the non-relativistic electronic Schrödinger equation than
B3LYP/cc-pVQZ, B3LYP is not a reliable method to study the conformational
behaviour of peptides. Even if, as we emphasize at the end of this section, it
may be dangerous to state that a method that performs well in the particular
model of an alanine residue studied here will also be recommendable for longer
and more complex peptides, we can clearly _reject_ any method that already
fails in HCO-L-Ala-NH2.
* •
If B3LYP still needs to be used, due to, for example, computational
constraints, aug-cc-pVDZ represents a good compromise between accuracy and
cost.
* •
The error of the studied MCs regarding the differences of energy (as measured
by $d_{12}$) is much smaller than the error in the absolute energies (as
measured by $a_{12}$), suggesting that the largest part of the discrepancy
must be a systematic one.
Finally, let us stress again that the investigation performed here has used
one of the simplest dipeptides. The fact that we have treated it as an
isolated system, the small size of its side chain and also its aliphatic
character, all play a role in the results obtained. Hence, for bulkier
residues included in polypeptides, and, specially for those that contain
aromatic groups, those that are charged or may participate in hydrogen-bonds,
the methods that have proved to be efficient here must be re-tested and the
conclusions drawn about the B3LYP convergence to the infinite basis set limit,
as well as those regarding the comparison between B3LYP and MP2, should be re-
evaluated.
## Acknowledgments
The numerical calculations in this work have been performed thanks to a
computer time grant at the Zaragoza node (Caesaraugusta) of the Spanish
Supercomputing Network (RES). We thank all the support staff there for their
efficiency in solving the problems encountered. We also thank J. L. Alonso for
illuminating discussions.
This work has been supported by the research projects DGA (Aragón Government,
Spain) E24/3 and MEC (Spain) FIS2006-12781-C02-01. P. Echenique is supported
by a MEC (Spain) postdoctoral contract.
## References
* [1] C. B. Anfinsen, Principles that govern the folding of protein chains, Science 181 (1973) 223–230.
* [2] V. Daggett and A. R. Fersht, Is there a unifying mechanism for protein folding?, Trends Biochem. Sci. 28 (2003) 18–25.
* [3] P. Echenique, Introduction to protein folding for physicists, Contemp. Phys. 48 (2007) 81–108.
* [4] B. Honig, Protein folding: From the Levinthal paradox to structure prediction, J. Mol. Biol. 293 (1999) 283–293.
* [5] J. Skolnick, Putting the pathway back into protein folding, Proc. Natl. Acad. Sci. USA 102 (2005) 2265–2266.
* [6] M. Beachy, D. Chasman, R. Murphy, T. Halgren, and R. Friesner, Accurate ab initio quantum chemical determination of the relative energetics of peptide conformations and assessment of empirical force fields, J. Am. Chem. Soc. 119 (1997) 5908–5920.
* [7] R. A. DiStasio Jr., Y. Jung, and M. Head-Gordon, A Resolution-of-The-Identity implementation of the local Triatomics-In-Molecules model for second-order Møller-Plesset perturbation theory with application to alanine tetrapeptide conformational energies, J. Chem. Theory Comput. 1 (2005) 862–876.
* [8] M. Elstner, K. J. Jalkanen, M. Knapp-Mohammady, T. Frauenheim, and S. Suhai, DFT studies on helix formation in $N$-acetyl-(L-alanyl)n-$N^{\prime}$-methylamide for $n$=1–20, Chem. Phys. 256 (2001) 15–27.
* [9] R. Hegger, A. Altis, P. Nguyen, and G. Stock, How complex is the dynamics of peptide folding?, Phys. Rev. Lett. 98 (2007) 028102.
* [10] A. Perczel, I. Jákli, and I. G. Csizmadia, Intrinsically stable secondary structure elements of proteins: A comprehensive study of folding units of proteins by computation and by analysis of data determined by X-ray crystallography, Chem. Eur. J. 9 (2003) 5332–5342.
* [11] A. Perczel, P. Hudáky, A. K. Füzéry, and I. G. Csizmadia, Stability issues of covalently and noncovalently bonded peptide subunits, J. Comput. Chem. 25 (2004) 1084–1100.
* [12] D. Toroz and T. van Mourik, The structure of the gas-phase tyrosine-glycine dipeptide, Mol. Phys. 104 (2006) 559–570.
* [13] H. Zhong and H. A. Carlson, Conformational studies of polyprolines, J. Chem. Theory Comput. 2 (2006) 342–353.
* [14] A. G. Császár and A. Perczel, Ab initio characterization of building units in peptides and proteins, Prog. Biophys. Mol. Biol. 71 (1999) 243–309.
* [15] P. Hudáky, I. Jákli, A. G. Császár, and A. Perczel, Peptide models. XXXI. Conformational properties of hydrophobic residues shaping the core of proteins. An ab initio study of N-formyl-L-valinamide and N-formyl-L-phenylalaninamide, J. Comput. Chem. 22 (2001) 732–751.
* [16] J. C. P. Koo, G. A. Chass, A. Perczel, Ö. Farkas, L. L. Torday, A. Varro, J. G. Papp, and I. G. Csizmadia, Exploration of the four-dimensional-conformational potential energy hypersurface of N-acetyl-L-aspartic acid-N’-methylamide with its internally hydrogen bonded side-chain orientation, J. Phys. Chem. A 106 (2002) 6999–7009.
* [17] A. Láng, I. G. Csizmadia, and A. Perczel, Peptide models. XLV: Conformational properties of N-formyl-L-methioninamide and its relevance to methionine in proteins, PROTEINS: Struct. Funct. Bioinf. 58 (2005) 571–588.
* [18] P. Echenique, I. Calvo, and J. L. Alonso, Quantum mechanical calculation of the effects of stiff and rigid constraints in the conformational equilibrium of the Alanine dipeptide, J. Comput. Chem. 27 (2006) 1748–1755.
* [19] M. Elstner, K. J. Jalkanen, M. Knapp-Mohammady, and S. Suhai, Energetics and structure of glycine and alanine based model peptides: Approximate SCC-DFTB, AM1 and PM3 methods in comparison with DFT, HF and MP2 calculations, Chem. Phys. 263 (2001) 203–219.
* [20] G. Endrédi, A. Perczel, O. Farkas, M. A. McAllister, G. I. Csonka, J. Ladik, and I. G. Csizmadia, Peptide models XV. The effect of basis set size increase and electron correlation on selected minima of the ab initio 2D-Ramachandran map of For-Gly-NH2 and For-L-Ala-NH2, J. Mol. Struct. (Theochem) 391 (1997) 15–26.
* [21] R. F. Frey, J. Coffin, S. Q. Newton, M. Ramek, V. K. W. Cheng, F. A. Momany, and L. Schäfer, Importance of correlation-gradient geometry optimization for molecular conformational analyses, J. Am. Chem. Soc. 114 (1992) 5369–5377.
* [22] I. R. Gould, W. D. Cornell, and I. H. Hillier, A quantum mechanical investigation of the conformational energetics of the alanine and glycine dipeptides in the gas phase and in aqueous solution, J. Am. Chem. Soc. 116 (1994) 9250–9256.
* [23] T. Head-Gordon, M. Head-Gordon, M. J. Frisch, C. Brooks III, and J. Pople, A theoretical study of alanine dipeptide and analogs, Intl. J. Quant. Chem. 16 (1989) 311-322.
* [24] T. Head-Gordon, M. Head-Gordon, M. J. Frisch, C. L. Brooks III, and J. A. Pople, Theoretical study of blocked glycine and alanine peptide analogues, J. Am. Chem. Soc. 113 (1991) 5989–5997.
* [25] M. Iwaoka, M. Okada, and S. Tomoda, Solvent effects on the $\phi-\psi$ potential surfaces of glycine and alanine dipeptides studied by PCM and I-PCM methods, J. Mol. Struct. (Theochem) 586 (2002) 111–124.
* [26] M. Mezei, P. K. Mehrotra, and D. L. Beveridge, Monte Carlo determination of the free energy and internal energy of hydration for the Ala dipeptide at 25oC, J. Am. Chem. Soc. 107 (1985) 2239–2245.
* [27] A. Perczel, J. G. Angyán, M. Kajtar, W. Viviani, J.-L. Rivail, J.-F. Marcoccia, and I. G. Csizmadia, Peptide models. 1. Topology of selected peptide conformational potential energy surfaces (glycine and alanine derivatives), J. Am. Chem. Soc. 113 (1991) 6256-6265.
* [28] A. Perczel, Ö. Farkas, I. Jákli, I. A. Topol, and I. G. Csizmadia, Peptide models. XXXIII. Extrapolation of low-level Hartree-Fock data of peptide conformation to large basis set SCF, MP2, DFT and CCSD(T) results. The Ramachandran surface of alanine dipeptide computed at various levels of theory, J. Comput. Chem. 24 (2003) 1026–1042.
* [29] A. M. Rodríguez, H. A. Baldoni, F. Suvire, R. Nieto Vázquez, G. Zamarbide, R. D. Enriz, Ö. Farkas, A. Perczel, M. A. McAllister, L. L. Torday, J. G. Papp, and I. G. Csizmadia, Characteristics of Ramachandran maps of L-alanine diamides as computed by various molecular mechanics, semiempirical and ab initio MO methods. A search for primary standard of peptide conformational stability, J. Mol. Struct. (Theochem) 455 (1998) 275–301.
* [30] P. J. Rossky and M. Karplus, Solvation. A molecular dynamics study of a dipeptide in water, J. Am. Chem. Soc. 101 (1979) 1913.
* [31] R. Vargas, J. Garza, B. P. Hay, and D. A. Dixon, Conformational study of the alanine dipeptide at the MP2 and DFT levels, J. Phys. Chem. A 106 (2002) 3213–3218.
* [32] Z.-X. Wang and Y. Duan, Solvation effects on alanine dipeptide: A MP2/cc-pVTZ//MP2/6-31G** study of ($\Phi,\Psi$) energy maps and conformers in the gas phase, ether and water, J. Comput. Chem. 25 (2004) 1699–1716.
* [33] C.-H. Yu, M. A. Norman, L. Schäfer, M. Ramek, A. Peeters, and C. van Alsenoy, Ab initio conformational analysis of N-formyl L-alanine amide including electron correlation, J. Mol. Struct. 567–568 (2001) 361–374.
* [34] P. Echenique and J. L. Alonso, Efficient model chemistries for peptides. I. General framework and a study of the heterolevel approximation in RHF and MP2 with Pople split-valence basis sets, J. Comput. Chem. 29 (2008) 1408–1422.
* [35] J. W. Ponder and D. A. Case, Force fields for protein simulations, Adv. Prot. Chem. 66 (2003) 27–85.
* [36] A. D. MacKerell Jr., B. Brooks, C. L. Brooks III, L. Nilsson, B. Roux, Y. Won, and M. Karplus, CHARMM: The energy function and its parameterization with an overview of the program, in The Encyclopedia of Computational Chemistry, edited by P. v. R. Schleyer, P. R. Schreiner, N. L. Allinger, T. Clark, J. Gasteiger, P. Kollman, and H. F. Schaefer III, pp. 217–277, John Wiley & Sons, Chichester, 1998.
* [37] B. R. Brooks, R. E. Bruccoleri, B. D. Olafson, D. J. States, S. Swaminathan, and M. Karplus, CHARMM: A program for macromolecular energy, minimization, and dynamics calculations, J. Comput. Chem. 4 (1983) 187–217.
* [38] W. F. Van Gunsteren and M. Karplus, Effects of constraints on the dynamics of macromolecules, Macromolecules 15 (1982) 1528–1544.
* [39] W. D. Cornell, P. Cieplak, C. I. Bayly, I. R. Gould, J. Merz, K. M., D. M. Ferguson, D. C. Spellmeyer, T. Fox, J. W. Caldwell, and P. A. Kollman, A second generation force field for the simulation of proteins, nucleic acids, and organic molecules, J. Am. Chem. Soc. 117 (1995) 5179–5197.
* [40] D. A. Pearlman, D. A. Case, J. W. Caldwell, W. R. Ross, T. E. Cheatham III, S. DeBolt, D. Ferguson, G. Seibel, and P. Kollman, AMBER, a computer program for applying molecular mechanics, normal mode analysis, molecular dynamics and free energy calculations to elucidate the structures and energies of molecules, Comp. Phys. Commun. 91 (1995) 1–41.
* [41] W. L. Jorgensen and J. Tirado-Rives, The OPLS potential functions for proteins. Energy minimization for crystals of cyclic peptides and Crambin, J. Am. Chem. Soc. 110 (1988) 1657–1666.
* [42] W. L. Jorgensen, D. S. Maxwell, and J. Tirado-Rives, Development and testing of the OPLS all-atom force field on conformational energetics and properties of organic liquids, J. Am. Chem. Soc. 118 (1996) 11225–11236.
* [43] T. A. Halgren, Merck Molecular Force Field. I. Basis, form, scope, parametrization, and performance of MMFF94, J. Comput. Chem. 17 (1996) 490–519.
* [44] A. R. MacKerell Jr., M. Feig, and C. L. Brooks III, Extending the treatment of backbone energetics in protein force fields: Limitations of gas-phase quantum mechanics in reproducing protein conformational distributions in molecular dynamics simulations, J. Comput. Chem. 25 (2004) 1400–1415.
* [45] A. R. MacKerell Jr., M. Feig, and C. L. Brooks III, Improved treatment of the protein backbone in empirical force fields, J. Am. Chem. Soc. 126 (2004) 698–699.
* [46] Y. K. Kang and H. S. Park, Comparative conformational study of of N-acetyl-L-N’-methylprolineamide with different basis sets, J. Mol. Struct. (Theochem) 593 (2002) 55–64.
* [47] G. A. Kaminski, R. A. Friesner, J. Tirado-Rives, and W. L. Jorgensen, Evaluation and reparametrization of the OPLS-AA force field for proteins via comparison with accurate quantum chemical calculations on peptides, J. Phys. Chem. B 105 (2001) 6476–6487.
* [48] T. Wang and R. Wade, Force field effects on a $\beta$-sheet protein domain structure in thermal unfolding simulations, J. Chem. Theory Comput. 2 (2006) 140–148.
* [49] C. D. Snow, E. J. Sorin, Y. M. Rhee, and V. S. Pande, How well can simulation predict protein folding kinetics and thermodynamics?, Annu. Rev. Biophys. Biomol. Struct. 34 (2005) 43–69.
* [50] O. Schueler-Furman, C. Wang, P. Bradley, K. Misura, and D. Baker, Progress in modeling of protein structures and interactions, Science 310 (2005) 638–642.
* [51] K. Ginalski, N. V. Grishin, A. Godzik, and L. Rychlewski, Practical lessons from protein structure prediction, Nucleic Acids Research 33 (2005) 1874–1891.
* [52] A. V. Morozov, T. Kortemme, K. Tsemekhman, and D. Baker, Close agreement between the orientation dependence of hydrogen bonds observed in protein structures and quantum mechanical calculations, Proc. Natl. Acad. Sci. USA 101 (2004) 6946–6951.
* [53] C. Gómez-Moreno Calera and J. Sancho Sanz, editors, Estructura de Proteínas, Ariel ciencia, Barcelona, 2003.
* [54] M. Karplus and J. A. McCammon, Molecular dynamics simulations of biomolecules, Nat. Struct. Biol. 9 (2002) 646–652.
* [55] R. Bonneau and D. Baker, Ab initio protein structure prediction: Progress and prospects, Annu. Rev. Biophys. Biomol. Struct. 30 (2001) 173–189.
* [56] C. J. Cramer, Essentials of Computational Chemistry: Theories and Models, John Wiley & Sons, Chichester, 2nd edition, 2002.
* [57] F. Jensen, Introduction to Computational Chemistry, John Wiley & Sons, Chichester, 1998.
* [58] A. Szabo and N. S. Ostlund, Modern Quantum Chemistry: Introduced to Advanced Electronic Structure Theory, Dover Publications, New York, 1996.
* [59] Y. Shao, L. F. Molnar, Y. Jung, J. Kussmann, C. Ochsenfeld, S. T. Brown, A. T. B. Gilbert, L. V. Slipchenko, S. V. Levchenko, D. P. Oneill, R. A. Distasio, R. C. Lochan, T. Wang, G. J. O. Beran, N. A. Besley, J. M. Herbert, C. Y. Lin, T. Van Voorhis, S. H. Chien, A. Sodt, R. P. Steele, V. A. Rassolov, P. E. Maslen, P. P. Korambath, R. D. Adamson, B. Austin, J. Baker, E. F. C. Byrd, H. Dachsel, R. J. Doerksen, A. Dreuw, B. D. Dunietz, A. D. Dutoi, T. R. Furlani, S. R. Gwaltney, A. Heyden, S. Hirata, C.-P. Hsu, G. Kedziora, R. Z. Khalliulin, P. Klunzinger, A. M. Lee, M. S. Lee, W. Liang, I. Lotan, N. Nair, B. Peters, E. I. Proynov, P. A. Pieniazek, Y. M. Rhee, J. Ritchie, E. Rosta, D. C. Sherrill, A. C. Simmonett, J. E. Subotnik, L. H. Woodcock, W. Zhang, A. T. Bell, and A. K. Chakraborty, Advances in methods and algorithms in a modern quantum chemistry program package, Phys. Chem. Chem. Phys. 8 (2006) 3172–3191.
* [60] P. Echenique and J. L. Alonso, A mathematical and computational review of Hartree-Fock SCF methods in Quantum Chemistry, Mol. Phys. 105 (2007) 3057–3098.
* [61] P. Maurer, A. Laio, H. W. Hugosson, M. C. Colombo, and U. Rothlisberger, Automated parametrization of biomolecular force fields from Quantum Mechanics/Molecular Mechanics (QM/MM) simulations through force matching, J. Chem. Theory Comput. 3 (2007) 628–639.
* [62] Y. A. Arnautova, A. Jagielska, and H. A. Scheraga, New force field (ECEPP-05) for peptides, proteins and organic molecules, J. Phys. Chem. B 110 (2006) 5025–5044.
* [63] J. A. Pople, Nobel lecture: Quantum chemical models, Rev. Mod. Phys. 71 (1999) 1267–1274.
* [64] J. M. García de la Vega and B. Miguel, Basis sets for computational chemistry, in Introduction to Advanced Topics of Computational Chemistry, edited by L. A. Montero, L. A. Díaz, and R. Bader, chapter 3, pp. 41–80, Editorial de la Universidad de la Habana, 2003\.
* [65] T. Helgaker and P. R. Taylor, Gaussian basis sets and molecular integrals, in Modern Electronic Structure Theory. Part II, edited by D. R. Yarkony, pp. 725–856, World Scientific, Singapore, 1995.
* [66] P. Jurečka, J. Šponer, J. Černý, and P. Hobza, Benchmark database of accurate (MP2 and CCSD(T) complete basis set limit) interaction energies of small model complexes, DNA base pairs, and amino acid pairs, Phys. Chem. Chem. Phys. 8 (2006) 1985–1993.
* [67] G. A. Petersson, D. K. Malick, M. J. Frisch, and M. Braunstein, The convergence of complete active space self-consistent-field energies to the complete basis set limit, J. Chem. Phys. 123 (2005) 074111.
* [68] F. Jensen, Estimating the Hartree-Fock limit from finite basis set calculations, Theo. Chem. Acc. 113 (2005) 267–273.
* [69] Z.-H. Li and M. W. Wong, Scaling of correlation basis set extension energies, Chem. Phys. Lett. 337 (2001) 209–216.
* [70] M. R. Nyden and G. A. Petersson, Complete basis set correlation energies. I. The assymptotic convergence of pair natural orbital expansions, J. Chem. Phys. 75 (1981) 1843–1862.
* [71] P. Jurečka and P. Hobza, On the convergence of the $(\Delta E^{\mathrm{CCSD(T)}}-\Delta E^{\mathrm{MP2}})$ term for complexes with multiple H-bonds, Chem. Phys. Lett. 365 (2002) 89–94.
* [72] E. W. Ignacio and H. B. Schlegel, On the additivity of basis set effects in some simple fluorine containing systems, J. Comput. Chem. 12 (1991) 751–760.
* [73] J. S. Dewar and A. J. Holder, On the validity of polarization and correlation additivity in ab initio molecular orbital calculations, J. Comput. Chem. 3 (1989) 311–313.
* [74] R. H. Nobes, W. J. Bouma, and L. Radom, The additivity of polarization function and electron correlation effects in ab initio molecular-orbital calculations, Chem. Phys. Lett. 89 (1982) 497–500.
* [75] J. A. Pople, M. J. Frisch, B. T. Luke, and J. S. Binkley, A Moller-Plesset study of the energies of AHn molecules (A = Li to F), Intl. J. Quant. Chem. 17 (1983) 307–320.
* [76] R. Crespo-Otero, L. A. Montero, W.-D. Stohrer, and J. M. García de la Vega, Basis set superposition error in MP2 and density-functional theory: A case of methane-nitric oxide association, J. Chem. Phys. 123 (2005) 134107.
* [77] M. L. Senent and S. Wilson, Intramolecular basis set superposition errors, Intl. J. Quant. Chem. 82 (2001) 282–292.
* [78] I. Mayer and P. Valiron, Second order Møller-Plesset perturbation theory without basis set superposition error, J. Chem. Phys. 109 (1998) 3360–3373.
* [79] F. Jensen, The magnitude of intramolecular basis set superposition error, Chem. Phys. Lett. 261 (1996) 633–636.
* [80] I. Mayer, On the non-additivity of the basis set superposition error and how to prevent its appearance, Theo. Chem. Acc. 72 (1987) 207–210.
* [81] S. F. Boys and F. Bernardi, The calculation of small molecular interactions by the differences of separate total energies. Some procedures with reduced errors, Mol. Phys. 19 (1970) 553–566.
* [82] H. B. Jansen and P. Ros, Non-empirical molecular orbital calculations on the protonation of carbon monoxide, Chem. Phys. Lett. 3 (1969) 140–143.
* [83] P. J. Stephens, F. J. Devlin, C. F. Chabalowski, and M. J. Frisch, Ab initio calculation of vibrational absorption and circular dichroism spectra using density functional force fields, Journal of Physical Chemistry A 98 (1994) 11623–11627.
* [84] A. D. Becke, Density-functional thermochemistry. III. The role of exact exchange, J. Chem. Phys. 98 (1993) 5648.
* [85] C. Lee, W. Yang, and R. G. Parr, Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density, Phys. Rev. B 37 (1988) 785–789.
* [86] S. H. Vosko, L. Wilk, and M. Nusair, Accurate spin-dependent electron liquid correlation energies for local spin density calculations: a critical analysis, Can. J. Phys. 58 (1980) 1200-1211.
* [87] T. H. Dunning Jr., Gaussian basis sets for use in correlated molecular calculations. I. The atoms boron through neon and hydrogen, J. Chem. Phys. 90 (1989) 1007–1023.
* [88] R. A. Kendall, T. H. Dunning Jr., and R. J. Harrison, Electron affinities of the first-row atoms revisited. Systematic basis sets and wave functions, J. Chem. Phys. 96 (1992) 6796–6806.
* [89] M. W. Gordon, M. S. ans Schmidt, Advances in electronic structure theory: GAMESS a decade later, in Theory and Applications of Computational Chemistry: The first forty years, edited by C. E. Dykstra, G. Frenking, K. S. Kim, and Scuseria, pp. 1167–1189, Elsevier, Amsterdam, 2005.
* [90] M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, H. J. Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. Su, T. L. Windus, M. Dupuis, and J. A. Montgomery, General Atomic and Molecular Electronic Structure System, J. Comput. Chem. 14 (1993) 1347–1363.
* [91] P. Echenique and J. L. Alonso, Definition of Systematic, Approximately Separable and Modular Internal Coordinates (SASMIC) for macromolecular simulation, J. Comput. Chem. 27 (2006) 1076–1087.
* [92] J. L. Alonso and P. Echenique, A physically meaningful method for the comparison of potential energy functions, J. Comput. Chem. 27 (2006) 238–252.
* [93] P. Echenique, A note on the accuracy of free energy functions in protein folding: Propagation of errors from dipeptides to polypeptides, In progress, 2008.
* [94] J. Kaminský and F. Jensen, Force field modelling of amino acid conformational energies, J. Chem. Theory Comput. 3 (2007) 1774–1788.
* [95] R. Ditchfield, W. J. Hehre, and J. A. Pople, Self-consistent molecular-orbital methods. IX. An extended Gaussian-type basis for molecular-orbital studies of organic molecules, J. Chem. Phys. 54 (1971) 724–728.
* [96] W. J. Hehre, R. Ditchfield, and J. A. Pople, Self-consistent molecular-orbital methods. XII. Further extensions of Gaussian-type basis sets for use in molecular-orbital studies of organic molecules, J. Chem. Phys. 56 (1972) 2257–2261.
* [97] M. J. Frisch, J. A. Pople, and J. S. Binkley, Self-consistent molecular-orbital methods. 25. Supplementary functions for Gaussian basis sets, J. Chem. Phys. 80 (1984) 3265–3269.
* [98] R. Krishnan, J. S. Binkley, R. Seeger, and J. A. Pople, Self-consistent molecular-orbital methods. XX. A basis set for correlated wave functions, J. Chem. Phys. 72 (1980) 650–654.
* [99] J. S. Binkley, J. A. Pople, and W. J. Hehre, Self-consistent molecular-orbital methods. 21. Small split-valence basis sets for first-row elements, J. Am. Chem. Soc. 102 (1980) 939–947.
* [100] G. W. Spitznagel, T. Clark, J. Chandrasekhar, and P. v. R. Schleyer, Stabilization of methyl anions by first row substituents. The superiority of diffuse function-augmented basis sets for anion calculations, J. Comput. Chem. 3 (1982) 363–371.
* [101] T. Clark, J. Chandrasekhar, G. W. Spitznagel, and P. v. R. Schleyer, Efficient diffuse function-augmented basis sets for anion calculations. III. The 3-21+G basis set for first-row elements Li–F, J. Comput. Chem. 4 (1983) 294–301.
* [102] P. C. Hariharan and J. A. Pople, The influence of polarization functions on molecular orbital hydrogenation energies, Theor. Chim. Acta 28 (1973) 213–222.
|
# MacWilliams’ Extension Theorem for rank-metric codes
Elisa Gorla and Flavio Salizzoni
###### Abstract
The MacWilliams’ Extension Theorem is a classical result by Florence Jessie
MacWilliams. It shows that every linear isometry between linear block-codes
endowed with the Hamming distance can be extended to a linear isometry of the
ambient space. Such an extension fails to exist in general for rank-metric
codes, that is, one can easily find examples of linear isometries between
rank-metric codes which cannot be extended to linear isometries of the ambient
space. In this paper, we explore to what extent a MacWilliams’ Extension
Theorem may hold for rank-metric codes. We provide an extensive list of
examples of obstructions to the existence of an extension, as well as a
positive result.
## Introduction and motivation
Coding theory provides tools for the transmission and storage of data over an
imperfect channel, where the data may be altered or lost. One of the main
goals is being able to automatically correct errors in a received message,
without asking for a retransmission. This is done through the use of (error-
correcting) codes: The data to be sent is encoded, i.e., transformed into a
codeword by adding redundancy to it. The set of codewords is called a code.
The codeword travels over the channel, where part of the information may be
lost or corrupted. At the receiver’s end, the received information is decoded,
that is, the error is corrected and the redundancy eliminated. In the
mathematical formulation of error-correcting codes, we usually ignore the step
in which the redundancy is eliminated, since it does not present any
theoretical or practical challenges.
In many scenarios, error correction is done via minimum distance decoding. A
code is a subset of a finite metric space and a received message is decoded to
the closest codeword. Mathematically, if $(S,d)$ is a finite metric space and
$C\subseteq S$ a code, a received $r\in S$ is decoded to an $x\in C$ which
minimizes $d(-,r)$. Under suitable assumptions, the $x$ which minimizes
$d(-,r)$ is unique. One way to guarantee uniqueness is as follows: Define the
minimum distance of a code $C$ as
$d_{\min}(C)=\min\\{d(x,y)\mid x,y\in C,x\neq y\\}.$
It is easy to show that, given $r\in S$, if there is an $x\in C$ such that
$d(x,r)\leq(d_{\min}(C)-1)/2$, then $x$ is the unique codeword which minimizes
$d(-,r)$. The quantity $(d_{\min}(C)-1)/2$ is often called the error-
correction capability of the code.
This motivates the interest for isometries between codes, since these are the
maps that preserve the pairwise distances of codewords, therefore the metric
structure of the code, and in particular its error-correction capability.
However, one could also look at isometries of the ambient space
$\varphi:S\rightarrow S$. Such an isometry does not only preserve the metric
structure of the code, mapping $C$ to an isometric code $\varphi(C)$, but also
the distance between any pair of elements of $S$, that is
$d(x,r)=d(\varphi(x),\varphi(r))$ for any $x,r\in S$. In particular, $\varphi$
preserves the whole error correction procedure, in the sense that $r\in S$ is
decoded to $x\in C$ if and only if $\varphi(r)\in S$ is decoded to
$\varphi(x)\in\varphi(C)$. In some cases, we know that any isometry between
codes is the restriction of an isometry of the ambient space $S$, that is, any
isometry between codes can be extended to an isometry of the ambient space. In
this paper, we call this property the Extension Property.
Linear block codes endowed with the Hamming distance are used in point-to-
point communication. These are linear subspaces of $\mathbb{F}_{q}^{n}$, where
$\mathbb{F}_{q}$ denotes the finite field with $q$ elements. In [10] Florence
Jessie MacWilliams showed that every Hamming distance-preserving linear
isomorphism $\varphi:\mathcal{C}_{1}\rightarrow\mathcal{C}_{2}$ between two
codes in $\mathbb{F}_{q}^{n}$ can be extended to a Hamming distance-preserving
linear isomorphism $\mu:\mathbb{F}_{q}^{n}\rightarrow\mathbb{F}_{q}^{n}$. An
elementary proof of this fact was later given by Kenneth Bogart, Don Goldberg
and Jean Gordon in [2]. Nowadays, this theorem is known as MacWilliams’
Extension Theorem.
###### MacWilliams’ Extension Theorem.
Every linear Hamming weight isometry $\varphi$ of linear codes over a finite
field $\mathbb{F}_{q}$ extends to a linear Hamming weight isometry $\mu$ of
the ambient space $\mathbb{F}_{q}^{n}$.
In the last decades, there has been an increasing interest in understanding
for which ambient spaces and for which weights a similar Extension Property
holds. In [14, 15] Jay Wood studied the case of finite rings and established
the Extension Property for codes over finite Frobenius rings with respect to
the Hamming distance. Aleams Barra and Heide Gluesing-Luerssen investigated
further the case of finite Frobenius rings with various distance functions in
[1]. Friedrich Martin Schneider and Jens Zumbrägel extended the work of Wood
to Artinian rings in [12]. Recently, the Extension Property was proved in [5,
9] for codes over $\mathbb{Z}_{m}$ endowed with the Lee distance.
In this paper, we explore the Extension Property in the setting of rank-metric
codes. These are linear spaces of matrices inside $\mathbb{F}_{q}^{m\times
n}$, where $\mathbb{F}_{q}$ is the finite field with $q$ elements. The rank
distance between two matrices is the rank of their difference. Rank-metric
codes are useful for correcting errors and increasing the efficiency of data
transmission over a network.
###### Extension Property.
Let $\mathcal{C}_{1},\mathcal{C}_{2}$ be two linear codes in
$\mathbb{F}_{q}^{m\times n}$. A linear isometry
$\varphi:\mathcal{C}_{1}\rightarrow\mathcal{C}_{2}$ satisfies the Extension
Property if and only if there exists a linear isometry
$\mu:\mathbb{F}_{q}^{m\times n}\rightarrow\mathbb{F}_{q}^{m\times n}$ such
that $\mu|_{\mathcal{C}_{1}}=\varphi$.
It is well known that there exist isometries of rank-metric codes that do not
satisfy the Extension Property (see [1] and [3, Section 7]). We are interested
in understanding under which conditions it may be possible to extend an
isometry to the whole ambient space and when instead the Extension Property
fails. Very little is known in this direction. The results in [7] imply that
isometries between two rank support spaces are extendable. The same result for
$\mathbb{F}_{q^{m}}$-isometries between Galois closed linear subspaces of
$\mathbb{F}_{q^{m}}^{n}$ was proved by Umberto Martínez-Peñas in [11, Theorem
5].
In Section 1, we recall some definitions and results on rank-metric codes. In
Section 2 we present an extensive list of obstructions to the Extension
Property, providing multiple examples, while in Section 4 we establish the
Extension Property in a special case. Section 3 is dedicated to developing
some tools that are used in Section 4. Our Main Theorem states that the
Extension Property holds for certain isometries of codes generated by
elementary matrices. In the appendix, we establish some mathematical facts
connected to the proof of the Main Theorem in Section 4.
## 1 Preliminaries on rank-metric codes
Throughout this paper, $q$ is a prime power and $\mathbb{F}_{q}$ denotes the
finite field with $q$ elements. For positive integers $m,n$, we denote by
$\mathbb{F}_{q}^{m\times n}$ the set of $m\times n$ matrices with entries in
$\mathbb{F}_{q}$. We denote by $\mathrm{rank}(M)$ the rank of a matrix
$M\in\mathbb{F}_{q}^{m\times n}$ and by $\dim(V)$ the dimension of an
$\mathbb{F}_{q}$-linear space $V$.
###### Definition 1.1.
The rank distance of $A,B\in\mathbb{F}_{q}^{m\times n}$ is defined as
$\displaystyle d:\mathbb{F}_{q}^{m\times n}\times\mathbb{F}_{q}^{m\times n}$
$\displaystyle\longrightarrow\mathbb{N}$ $\displaystyle(A,B)\qquad$
$\displaystyle\longmapsto\mathrm{rank}(A-B).$
A rank-metric code $\mathcal{C}\subseteq\mathbb{F}_{q}^{m\times n}$ is an
$\mathbb{F}_{q}$-linear subspace endowed with the rank distance.
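As a concrete illustration of Definition 1.1, the following minimal Python sketch computes ranks, and hence the rank distance, by Gaussian elimination; it assumes for simplicity that $q=p$ is prime (the paper allows any prime power), and the function names are ours.

```python
# Minimal sketch of Definition 1.1 over a prime field F_p.
# Matrices are lists of rows of integers, read modulo p.

def rank_mod_p(M, p):
    """Rank of M over F_p, by Gaussian elimination modulo p."""
    M = [row[:] for row in M]            # work on a copy
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][c], -1, p)      # modular inverse (Python >= 3.8)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def rank_distance(A, B, p):
    """d(A, B) = rank(A - B) over F_p."""
    return rank_mod_p([[a - b for a, b in zip(ra, rb)]
                       for ra, rb in zip(A, B)], p)

A = [[1, 0, 0], [0, 1, 0]]
B = [[0, 0, 1], [0, 1, 0]]
print(rank_distance(A, B, 2))             # prints: 1
```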
In order to properly state the Extension Property in the context of rank-
metric codes, we briefly recall the notion of isometric and equivalent codes.
###### Definition 1.2.
Let $\mathcal{C}_{1},\mathcal{C}_{2}$ be two linear codes in
$\mathbb{F}_{q}^{m\times n}$. An $\mathbb{F}_{q}$-linear isomorphism
$\varphi:\mathcal{C}_{1}\rightarrow\mathcal{C}_{2}$ such that
$\mathrm{rank}(C)=\mathrm{rank}(\varphi(C))$ for all $C\in\mathcal{C}_{1}$ is
called an isometry, and $\mathcal{C}_{1},\mathcal{C}_{2}$ are said to be isometric.
The following classification of the linear isometries of
$\mathbb{F}_{q}^{m\times n}$ is due to Hua [8] for odd characteristic and to
Wan [13] for characteristic 2. The statement can also be found in [6, Theorem
11.1.9].
###### Theorem 1.3.
Let $\varphi:\mathbb{F}_{q}^{m\times n}\rightarrow\mathbb{F}_{q}^{m\times n}$
be an $\mathbb{F}_{q}$-linear isometry with respect to the rank metric.
1. (a)
If $m\neq n$ then there exist matrices $A\in\mathrm{GL}_{m}(\mathbb{F}_{q})$
and $B\in\mathrm{GL}_{n}(\mathbb{F}_{q})$ such that $\varphi(M)=AMB$ for all
$M\in\mathbb{F}_{q}^{m\times n}$.
2. (b)
If $m=n$ then there exist matrices $A,B\in\mathrm{GL}_{n}(\mathbb{F}_{q})$
such that either $\varphi(M)=AMB$ for all $M\in\mathbb{F}_{q}^{n\times n}$, or
$\varphi(M)=AM^{t}B$ for all $M\in\mathbb{F}_{q}^{n\times n}$.
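The easy half of this classification, namely that maps of the form $M\mapsto AMB$ are rank isometries, can be spot-checked computationally. The sketch below reuses `rank_mod_p` from the snippet above (again with $q=p$ prime); it is a randomized sanity check, not a proof.

```python
import random

def mat_mul(A, B, p):
    """Product of matrices A and B over F_p."""
    return [[sum(a * b for a, b in zip(row, col)) % p
             for col in zip(*B)] for row in A]

def random_invertible(n, p):
    """Rejection-sample an invertible n x n matrix over F_p."""
    while True:
        A = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
        if rank_mod_p(A, p) == n:
            return A

# Spot check of Theorem 1.3(a): M -> A M B preserves rank over F_2.
p, m, n = 2, 3, 4
A, B = random_invertible(m, p), random_invertible(n, p)
for _ in range(100):
    M = [[random.randrange(p) for _ in range(n)] for _ in range(m)]
    assert rank_mod_p(mat_mul(mat_mul(A, M, p), B, p), p) == rank_mod_p(M, p)
```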
###### Definition 1.4.
Two codes $\mathcal{C}_{1},\mathcal{C}_{2}\leq\mathbb{F}_{q}^{m\times n}$ are
equivalent if there exists a linear rank-metric isometry
$\varphi:\mathbb{F}_{q}^{m\times n}\rightarrow\mathbb{F}_{q}^{m\times n}$ such
that $\varphi(\mathcal{C}_{1})=\mathcal{C}_{2}$.
According to these definitions and Theorem 1.3, we can formulate the Extension
Property for rank-metric linear codes as follows.
###### Extension Property.
Let $\mathcal{C}_{1},\mathcal{C}_{2}$ be two linear codes in
$\mathbb{F}_{q}^{m\times n}$. An isometry
$\varphi:\mathcal{C}_{1}\rightarrow\mathcal{C}_{2}$ satisfies the Extension
Property if and only if there exist two matrices
$A\in\mathrm{GL}_{m}(\mathbb{F}_{q})$ and
$B\in\mathrm{GL}_{n}(\mathbb{F}_{q})$ such that either $\varphi(M)=AMB$ for
all $M\in\mathcal{C}_{1}$, or $\varphi(M)=AM^{t}B$ for all
$M\in\mathcal{C}_{1}$, where the latter case can only happen if $m=n$.
## 2 Obstructions to the Extension Property
In this section we discuss several obstructions to the Extension Property in
the rank-metric case. A first problem arises from the fact that the
transposition is an isometry of the ambient space only in the square case.
This makes the composition of the transposition with the natural inclusion of
$\iota:\mathbb{F}_{q}^{m\times m}\hookrightarrow\mathbb{F}_{q}^{m\times n}$,
$m\leq n$, into an $\mathbb{F}_{q}$-linear isometry of
$\iota(\mathbb{F}_{q}^{m\times m})\subseteq\mathbb{F}_{q}^{m\times n}$ with
itself, which cannot be extended to $\mathbb{F}_{q}^{m\times n}$. This is a
way of looking at the next example, due to Aleams Barra and Heide Gluesing-
Luerssen.
###### Example 2.1 ([1], Example 2.9).
Let
$\mathcal{C}=\\{\begin{pmatrix}A&0\end{pmatrix}:A\in\mathbb{F}_{q}^{2\times
2}\\}\leq\mathbb{F}_{q}^{2\times 3}$ and let
$\varphi:\mathcal{C}\rightarrow\mathcal{C}$ be the isometry given by
$\varphi(\begin{pmatrix}A&0\end{pmatrix})=\begin{pmatrix}A^{t}&0\end{pmatrix}$
for all $A\in\mathbb{F}_{q}^{2\times 2}$. It is easy to see that it is not
possible to extend $\varphi$ to an isometry of the whole ambient space.
A similar phenomenon happens in the next example, also due to Barra and
Gluesing-Luerssen.
###### Example 2.2 ([1], Example 2.9).
Let $\mathcal{C}\leq\mathbb{F}_{q}^{4\times 4}$ be the code given by
$\mathcal{C}=\left\\{\begin{pmatrix}A&0\\\
0&B\end{pmatrix}:A,B\in\mathbb{F}_{q}^{2\times 2}\right\\}$
and consider the isometry $\varphi:\mathcal{C}\rightarrow\mathcal{C}$ given by
$\varphi\left(\begin{pmatrix}A&0\\\
0&B\end{pmatrix}\right)=\begin{pmatrix}A&0\\\ 0&B^{t}\end{pmatrix}$
As before, one can check that $\varphi$ cannot be extended to an isometry of
$\mathbb{F}_{q}^{4\times 4}$.
In general, the natural inclusion $\iota:\mathbb{F}_{q}^{m\times
m}\times\mathbb{F}_{q}^{n\times
n}\hookrightarrow\mathbb{F}_{q}^{(m+n)\times(m+n)}$ is an isometry with
respect to the sum-rank metric in the domain and the rank metric in the
codomain. When composed with the product of the identity on
$\mathbb{F}_{q}^{m\times m}$ and the transposition on $\mathbb{F}_{q}^{n\times
n}$, it yields an isometry of $\iota(\mathbb{F}_{q}^{m\times
m}\times\mathbb{F}_{q}^{n\times n})\subseteq\mathbb{F}_{q}^{(m+n)\times(m+n)}$
with itself, which does not extend to $\mathbb{F}_{q}^{(m+n)\times(m+n)}$.
We stress that in both examples there is a smaller, natural ambient space to
which the isometry can be extended. In fact, even more is true: in those specific
examples the isometries are already defined on a smaller ambient space (on
which they can therefore be trivially extended). In the first example, the
isometry is defined on $\mathbb{F}_{q}^{2\times 2}$ while in the second
example it is defined on $\mathbb{F}_{q}^{2\times
2}\times\mathbb{F}_{q}^{2\times 2}$, naturally endowed with the sum-rank
metric. In order to avoid such problems, one may want to consider codes that
cannot be contained in a smaller ambient space, that is, such that
$\operatorname{rowsp}(\mathcal{C})=\mathbb{F}_{q}^{n}$ and
$\operatorname{colsp}(\mathcal{C})=\mathbb{F}_{q}^{m}$.
We now discuss a different obstruction to the Extension Property. Let
$\varphi$ be an isometry of $\mathbb{F}_{q}^{m\times n}$. Then for every
$\mathcal{C}\leq\mathbb{F}_{q}^{m\times n}$ we have that
$\dim(\mathrm{rowsp}(\mathcal{C}))=\dim(\mathrm{rowsp}(\varphi(\mathcal{C})))\text{
and
}\dim(\mathrm{colsp}(\mathcal{C}))=\dim(\mathrm{colsp}(\varphi(\mathcal{C}))).$
(1)
Therefore, in order to be extendable, an isometry must satisfy this property.
The next example shows that not all linear isometries do.
###### Example 2.3.
Let $\mathcal{C}_{1},\mathcal{C}_{2}\leq\mathbb{F}_{2}^{2\times 3}$ be the
codes
$\mathcal{C}_{1}=\left\langle\begin{pmatrix}1&1&0\\\
0&1&0\end{pmatrix},\begin{pmatrix}0&1&0\\\
1&0&0\end{pmatrix}\right\rangle\,\,\,\,\,\,\,\mathcal{C}_{2}=\left\langle\begin{pmatrix}0&0&1\\\
0&1&0\end{pmatrix},\begin{pmatrix}0&1&0\\\ 1&0&0\end{pmatrix}\right\rangle$
and let $\varphi:\mathcal{C}_{1}\rightarrow\mathcal{C}_{2}$ be the
$\mathbb{F}_{2}$-linear map given by
$\varphi\left(\begin{pmatrix}1&1&0\\\
0&1&0\end{pmatrix}\right)=\begin{pmatrix}0&0&1\\\
0&1&0\end{pmatrix}\,\,\,\,\,\,\,\varphi\left(\begin{pmatrix}0&1&0\\\
1&0&0\end{pmatrix}\right)=\begin{pmatrix}0&1&0\\\ 1&0&0\end{pmatrix}\,.$
Since $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ are codes of constant rank 2,
$\varphi$ is an isometry. Notice that
$\dim(\mathrm{rowsp}(\mathcal{C}_{1}))=2$ while
$\dim(\mathrm{rowsp}(\mathcal{C}_{2}))=3$. In particular, $\varphi$ cannot be
extended to an isometry of $\mathbb{F}_{2}^{2\times 3}$.
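The claims in this example can be verified mechanically. The sketch below (reusing `rank_mod_p` from Section 1; names ours) lists the three nonzero codewords of each code over $\mathbb{F}_{2}$, checks that they all have rank 2, and compares the row space dimensions.

```python
# Verification sketch for Example 2.3 over F_2.
G1 = [[[1, 1, 0], [0, 1, 0]], [[0, 1, 0], [1, 0, 0]]]   # generators of C_1
G2 = [[[0, 0, 1], [0, 1, 0]], [[0, 1, 0], [1, 0, 0]]]   # generators of C_2

def add2(A, B):
    return [[(a + b) % 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def nonzero_codewords(gens):
    C1, C2 = gens
    return [C1, C2, add2(C1, C2)]       # all nonzero codewords over F_2

for gens in (G1, G2):                   # both codes have constant rank 2
    assert all(rank_mod_p(C, 2) == 2 for C in nonzero_codewords(gens))

def rowsp_dim(gens):                    # dimension of the span of all rows
    return rank_mod_p([row for C in nonzero_codewords(gens) for row in C], 2)

print(rowsp_dim(G1), rowsp_dim(G2))     # prints: 2 3
```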
The last example motivates us to look at isometries
$\varphi:\mathcal{C}_{1}\rightarrow\mathcal{C}_{2}\leq\mathbb{F}_{q}^{m\times
n}$ with the following property, which implies (1).
###### Property 1.
There exist $A\in\mathrm{GL}_{m}(\mathbb{F}_{q})$ and
$B\in\mathrm{GL}_{n}(\mathbb{F}_{q})$ such that
$\mathrm{rowsp}(\varphi(C))=\mathrm{rowsp}(CB)\mbox{ and
}\mathrm{colsp}(\varphi(C))=\mathrm{colsp}(AC)$
for all $C\in\mathcal{C}_{1}$.
Notice that none of the isometries considered in Examples 2.1, 2.2 and 2.3
satisfy Property 1. While Property 1 is necessary for the Extension Property
to hold, it is not sufficient, as the next example shows.
###### Example 2.4.
In [4, Example 1] the authors exhibit three distinct equivalence classes of
MRD codes in $\mathbb{F}_{2}^{4\times 4}$ with minimum distance $4$. Any
$\mathbb{F}_{2}$-linear map between codes in different equivalent classes is
an isometry, since each nonzero element has rank 4. Moreover, each of these
maps satisfy Property 1 with $A=B=\mathrm{Id}$. A proof that these codes do
not satisfy the Extension Property appeared in the first arXiv version of the
same paper as [3, Example 7.1].
The obstruction to the Extension Property in Example 2.4 can be seen as coming
from the interaction between the linear structure of the code and the group
structure of the code without the zero matrix. More precisely, if
$\mathcal{C}$ is a vector space of square matrices and
$\mathcal{C}\setminus\\{0\\}$ is a subgroup of the general linear group, then
every $\mathbb{F}_{q}$-linear isomorphism from $\mathcal{C}$ to itself is a
linear isometry. Moreover, if it fixes the identity and it has the Extension
Property, then it is a group homomorphism. Therefore, any
$\mathbb{F}_{q}$-linear isomorphism from $\mathcal{C}$ to itself which fixes
the identity and is not a group homomorphism cannot have the Extension
Property.
###### Example 2.5.
Let $P\in\operatorname{GL}_{n}(\mathbb{F}_{q})$ be of order $q^{n}-1$ and let
$Q=P^{q-1}$. Let $\mathcal{C}=\mathbb{F}_{q}[P]=\langle
P\rangle\cup\\{0\\}\subseteq\mathbb{F}_{q}^{n\times n}$. Every nonzero element
of $\mathcal{C}$ has rank $n$, hence any $\mathbb{F}_{q}$-linear
isomorphism of $\mathcal{C}$ with itself is an isometry. Both $P$ and $Q$ are
linearly independent from the identity matrix $\mathrm{Id}$, so there is a
linear isometry $\varphi:\mathcal{C}\rightarrow\mathcal{C}$ with
$\varphi(\mathrm{Id})=\mathrm{Id}$ and $\varphi(P)=Q$. If $\varphi$ has the
Extension Property, then either $\varphi(M)=AMA^{-1}$ or
$\varphi(M)=AM^{t}A^{-1}$ for some
$A\in\operatorname{GL}_{n}(\mathbb{F}_{q})$. Therefore
$Q=\varphi(P)\in\\{APA^{-1},AP^{t}A^{-1}\\}$, however $Q$ has order
$q^{n-1}+q^{n-2}+\ldots+1$, while $APA^{-1}$ and $AP^{t}A^{-1}$ have order
$q^{n}-1$.
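A small worked instance of this example (our choice of parameters, purely for illustration): take $q=3$ and $n=2$, and let $P$ be the companion matrix of the primitive polynomial $x^{2}+x+2$ over $\mathbb{F}_{3}$, so that $P$ has order $q^{n}-1=8$ while $Q=P^{q-1}=P^{2}$ has order $q+1=4$. The sketch below (reusing `mat_mul` from Section 1) confirms the two orders.

```python
# Worked check of Example 2.5 with q = 3, n = 2 (illustrative choice).
def mat_order(M, p):
    """Multiplicative order of an invertible matrix M over F_p."""
    n = len(M)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    A, k = M, 1
    while A != I:
        A, k = mat_mul(A, M, p), k + 1
    return k

P = [[0, 1], [1, 2]]        # companion matrix of x^2 + x + 2 over F_3
Q = mat_mul(P, P, 3)        # Q = P^{q-1} = P^2
print(mat_order(P, 3), mat_order(Q, 3))   # prints: 8 4
```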
Even when $\mathcal{C}\setminus\\{0\\}$ is not a group, an isometry on a set
of square matrices which fixes the identity and for which the Extension
Property holds needs to be multiplicative. This constitutes an obstruction to
the Extension Property, since not every linear isometry is multiplicative.
###### Example 2.6.
Let $\mathcal{C}\in\mathbb{F}_{2}^{3\times 3}$ be the code given by
$\mathcal{C}=\left\\{0,\mathrm{Id},\begin{pmatrix}1&0&0\\\ 1&1&0\\\
0&0&0\end{pmatrix},\begin{pmatrix}0&0&0\\\ 1&0&0\\\
0&0&1\end{pmatrix}\right\\}$
and let $\varphi:\mathcal{C}\rightarrow\mathcal{C}$ be the isometry of
$\mathcal{C}$ with itself that fixes the identity matrix and swaps the other
two matrices.
Suppose that $\varphi$ can be extended to an isometry of the whole ambient
space. Then, there are $A,B\in\operatorname{GL}_{3}(\mathbb{F}_{2})$ such that
either $\varphi(C)=ACB$ for all $C\in\mathcal{C}$ or $\varphi(C)=AC^{t}B$ for
all $C\in\mathcal{C}$. Since $\varphi(\mathrm{Id})=\mathrm{Id}$, we have that
$AB=\mathrm{Id}$ and so $B=A^{-1}$. Therefore, we obtain that
$\begin{split}\varphi\left(\begin{pmatrix}1&0&0\\\ 0&1&0\\\
0&0&0\end{pmatrix}\right)&=\varphi\left(\begin{pmatrix}1&0&0\\\ 1&1&0\\\
0&0&0\end{pmatrix}\begin{pmatrix}1&0&0\\\ 1&1&0\\\
0&0&0\end{pmatrix}\right)=\varphi\left(\begin{pmatrix}1&0&0\\\ 1&1&0\\\
0&0&0\end{pmatrix}\right)\varphi\left(\begin{pmatrix}1&0&0\\\ 1&1&0\\\
0&0&0\end{pmatrix}\right)=\\\ &=\begin{pmatrix}0&0&0\\\ 1&0&0\\\
0&0&1\end{pmatrix}\begin{pmatrix}0&0&0\\\ 1&0&0\\\
0&0&1\end{pmatrix}=\begin{pmatrix}0&0&0\\\ 0&0&0\\\
0&0&1\end{pmatrix}\,.\end{split}$
The map $\varphi$ sends an element of rank $2$ to an element of rank $1$,
contradicting the assumption that $\varphi$ is an isometry. We conclude that
$\varphi$ does not have the Extension Property. Notice however that $\varphi$
satisfies Property 1 with
$A=\begin{pmatrix}0&0&1\\\ 1&1&1\\\ 1&0&0\end{pmatrix}\text{ and
}B=\begin{pmatrix}1&0&0\\\ 1&0&1\\\ 1&1&0\end{pmatrix}\,.$
Property 1 suggests to look at codes generated by rank-one elements. In fact,
if $C$ is a rank-one element with row space $\langle u\rangle$ and column
space $\langle v\rangle$, then $\varphi(C)$ is a rank-one element with row
space $\langle uB\rangle$ and column space $\langle Av\rangle$. Therefore,
$\varphi$ determines $Av$ and $uB$ up to a scalar multiple. This simple
observation allows us to prove the next result.
###### Proposition 2.7.
Let $\mathcal{C}_{1},\mathcal{C}_{2}\leq\mathbb{F}_{2}^{m\times n}$ and let
$\varphi:\mathcal{C}_{1}\rightarrow\mathcal{C}_{2}$ be an isometry which
satisfies Property 1. If $\mathcal{C}_{1}$ is generated by elements of rank 1,
then $\varphi$ is extendable.
###### Proof.
Since $\varphi$ has Property 1, $\varphi(C)$ and $ACB$ have the same row
and column spaces for all $C\in\mathcal{C}_{1}$. Over $\mathbb{F}_{2}$, a rank-one
matrix is determined by its row and column spaces, so this gives
that $A^{-1}\varphi(C)B^{-1}=C$ for every $C\in\mathcal{C}_{1}$ of rank 1. If
$\mathcal{C}_{1}$ is generated by elements of rank 1, we conclude by linearity
that $A^{-1}\varphi(C)B^{-1}=C$ for all $C\in\mathcal{C}_{1}$. ∎
Even for $\mathcal{C}$ generated by elements of rank $1$, the Extension
Property may fail if we do not require Property 1.
###### Example 2.8.
Let $\mathcal{C}\subseteq\mathbb{F}_{2}^{2\times 3}$ be the linear code
generated by
$C_{1}=\begin{pmatrix}1&0&0\\\
0&0&0\end{pmatrix},\;\;C_{2}=\begin{pmatrix}0&0&0\\\
0&1&0\end{pmatrix},\;\;C_{3}=\begin{pmatrix}0&0&1\\\
0&0&1\end{pmatrix},\;\;C_{4}=\begin{pmatrix}1&1&0\\\ 1&1&0\end{pmatrix}.$
Let $\varphi:\mathcal{C}\rightarrow\mathcal{C}$ be the linear map given by
$\varphi(C_{i})=C_{i}$ for $i=1,2,3$ and $\varphi(C_{4})=C_{4}+C_{3}$. One can
verify that $\varphi$ is an isometry that cannot be extended to the whole
ambient space, since it does not satisfy Property 1.
One may wonder whether the failure of the Extension Property is due to the
fact that the code is small compared to the ambient space. The next example
shows that this is not the case.
###### Example 2.9.
Starting from the code $\mathcal{C}$ from the previous example, for each $n>3$
we construct a code $\mathcal{C}_{n}\leq\mathbb{F}_{2}^{2\times n}$ given by
$\mathcal{C}_{n}=\left\\{\begin{pmatrix}A&C\end{pmatrix}:A\in\mathbb{F}_{2}^{2\times(n-3)},\,C\in\mathcal{C}\right\\}.$
Let $\varphi_{n}:\mathcal{C}_{n}\rightarrow\mathcal{C}_{n}$ be the linear map
given by $\varphi_{n}(\begin{pmatrix}A&0\end{pmatrix})=\begin{pmatrix}A&0\end{pmatrix}$ for
$A\in\mathbb{F}_{2}^{2\times(n-3)}$ and
$\varphi_{n}(\begin{pmatrix}0&C\end{pmatrix})=\begin{pmatrix}0&\varphi(C)\end{pmatrix}$ for $C\in\mathcal{C}$. Again, $\varphi_{n}$
is an isometry that cannot be extended to the whole ambient space. Moreover,
notice that
$\lim_{n\to\infty}\frac{\dim(\mathcal{C}_{n})}{\dim\left(\mathbb{F}_{2}^{2\times
n}\right)}=\lim_{n\to\infty}\frac{2n-2}{2n}=1.$
This shows that there exist non-extendable isometries defined on codes whose
dimension comes arbitrarily close to that of the ambient space.
We state the analogue of Proposition 2.7 for arbitrary $q$ as an open
question.
###### Question 2.10.
Let $\mathcal{C}_{1},\mathcal{C}_{2}\leq\mathbb{F}_{q}^{m\times n}$ and let
$\varphi:\mathcal{C}_{1}\rightarrow\mathcal{C}_{2}$ be an isometry which
satisfies Property 1. If $\mathcal{C}_{1}$ is generated by elements of rank 1,
then the same is true for $\mathcal{C}_{2}$. If this is the case, does
$\varphi$ have the Extension Property?
Our Main Theorem provides a positive answer to Question 2.10, for codes which
are generated by elementary matrices.
Let $1\leq i\leq m$ and $1\leq j\leq n$. We denote by $E_{i,j}$ the matrix in
$\mathbb{F}_{q}^{m\times n}$ that has $1$ in position $(i,j)$ and $0$
everywhere else. We call these matrices elementary. We now state our main
result, which we will prove in Section 4.
###### Main Theorem.
Let $\mathcal{C}=\langle
E_{i_{1},j_{1}},\dots,E_{i_{k},j_{k}}\rangle\leq\mathbb{F}_{q}^{m\times n}$ be
a code generated by $k$ elementary matrices. Let
$\varphi:\mathcal{C}\rightarrow\mathcal{C}$ be an isometry such that for all
$1\leq h\leq k$ one has $\varphi(E_{i_{h},j_{h}})=\alpha_{h}E_{i_{h},j_{h}}$
for some $\alpha_{h}\in\mathbb{F}_{q}^{*}$. Then $\varphi$ satisfies the
Extension Property.
The next example shows that the statement of the Main Theorem may fail if the
code is generated by non-elementary, rank-one matrices.
###### Example 2.11.
Let $q\neq 2$ and let $\mathcal{C}\leq\mathbb{F}_{q}^{2\times 4}$ be the code
generated by the following elements of rank 1:
$\begin{split}&C_{1}=\begin{pmatrix}1&0&0&0\\\
0&0&0&0\end{pmatrix},\;\;\;C_{2}=\begin{pmatrix}0&0&0&0\\\
0&1&0&0\end{pmatrix},\\\ &C_{3}=\begin{pmatrix}0&0&1&0\\\
0&0&2&0\end{pmatrix},\;\;\;C_{4}=\begin{pmatrix}0&0&0&1\\\
0&0&0&1\end{pmatrix},\;\;\;C_{5}=\begin{pmatrix}0&0&0&0\\\
1&1&1&1\end{pmatrix}.\end{split}$
Let $\alpha\in\mathbb{F}_{q}\setminus\\{0,1\\}$ and let
$\varphi:\mathcal{C}\rightarrow\mathcal{C}$ be the linear map given by
$\varphi(C_{i})=C_{i}$ for $1\leq i\leq 4$ and $\varphi(C_{5})=\alpha C_{5}$.
One can check that $\varphi$ is an isometry and that it does not have the
Extension Property. In fact, $\varphi$ does not satisfy Property 1, since
$\mathrm{rowsp}(C_{5}-C_{2})\leq\mathrm{rowsp}(\sum_{i=1}^{5}C_{i})$ but
$\mathrm{rowsp}(\varphi(C_{5}-C_{2}))\cap\mathrm{rowsp}(\varphi(\sum_{i=1}^{5}C_{i}))=\\{0\\}$.
Notice that, since $\varphi$ does not satisfy Property 1, it does not yield
a negative answer to Question 2.10. In addition, this example shows that it
does not suffice in general to check Property 1 on a system of generators of
the code.
## 3 Matrix paths
In this section we establish some preliminary results which we will use in the
proof of the Main Theorem. We start by introducing the notion of path in a
matrix. From here on, let $m,n\geq 2$.
###### Definition 3.1.
Let $M\in\mathbb{F}_{q}^{m\times n}$ be a matrix. A path $\pi$ of length
$k\in\mathbb{N}$ in $M$ is a finite ordered sequence of positions of nonzero
entries $\left((i_{1},j_{1}),(i_{2},j_{2}),\dots,(i_{k},j_{k})\right)$ such
that two consecutive elements share either the first or the second component
and $(i_{h},j_{h})\neq(i_{s},j_{s})$ for $h\neq s$.
A path $\pi$ of length at least $4$ is closed if the first and the last
entries share a component. The support $\mathrm{supp}(\pi)$ of a path $\pi$ is
the set of elements of $\pi$. A path $\pi$ is simple if no three entries of
$\pi$ share a component.
These definitions are borrowed from graph theory. Indeed, one can naturally
associate to every $M\in\mathbb{F}_{q}^{m\times n}$ a finite graph
$G_{M}=(V_{M},E_{M})$, such that $V_{M}$ is the set of positions of the
nonzero entries of $M$ and two vertices in $V_{M}$ are connected by an edge in
$E_{M}$ if and only if the corresponding entries lie on a common line (that
is, a common row or column). The notions of path and closed path from
Definition 3.1 correspond to the usual definitions in graph theory. A path is
simple if the subgraph of $G_{M}$ induced by the set of vertices of the path
does not contain any triangle, which happens exactly when no three entries of
the path lie on a common line.
We are mainly interested in closed simple paths. We begin by establishing some
of their basic properties. First notice that, up to a cyclic permutation and
to reversing the order, every simple path is determined by its support.
Moreover, in the next lemma we see that the entries corresponding to the
elements of a closed simple path are contained in a square submatrix having
exactly two of these entries in each row and column.
###### Lemma 3.2.
Let $M\in\mathbb{F}_{q}^{m\times n}$ be a matrix. The entries of $M$
corresponding to the elements of a closed simple path are contained in a
square submatrix having exactly two of these entries in each row and column.
###### Proof.
Let $\pi=\left((i_{1},j_{1}),(i_{2},j_{2}),\dots,(i_{k},j_{k})\right)$ be a
closed simple path in $M$. Since $\pi$ is simple, each line of $M$ contains at most two
nonzero entries whose position belongs to the support of $\pi$. Suppose by
contradiction that there exists a line in $M$ which contains exactly one
element of the support of $\pi$, say in position $(i_{h},j_{h})$. Then both
neighbors of $(i_{h},j_{h})$ in $\pi$ must share the other line through
$(i_{h},j_{h})$. If $1<h<k$, then the three elements
$(i_{h-1},j_{h-1}),(i_{h},j_{h}),(i_{h+1},j_{h+1})$ have either the first or
the second coordinate in common. If $h=1$, the same is true for
$(i_{1},j_{1}),$ $(i_{2},j_{2}),(i_{k},j_{k})$. If $h=k$, the same holds for
$(i_{1},j_{1}),(i_{k-1},j_{k-1}),(i_{k},j_{k})$. In each case, $\pi$ is not
simple. We conclude that every line of $M$ meeting the support of $\pi$
contains exactly two of its elements, so the corresponding entries of $M$ are
contained in a submatrix with exactly two such entries in each row and column.
If this submatrix has $d$ rows and $e$ columns, counting the elements of the
support by rows and by columns gives $2d=k=2e$, hence $d=e$ and the submatrix
is square. ∎
The next proposition ensures that in every matrix with enough nonzero entries
there is a closed simple path.
###### Proposition 3.3.
Let $m,n\geq 2$ and let $M\in\mathbb{F}_{q}^{m\times n}$ be a matrix with at
least $m+n$ nonzero entries. Then there is a closed simple path in $M$.
###### Proof.
We proceed by induction on $m+n$. If $m+n=4$ then $m=n=2$ and all the entries
of the matrix are nonzero and so trivially we have a closed simple path.
Suppose now that $m+n>4$. If there exists a row in which there is at most one
nonzero entry, then $m>2$, since otherwise $M$ would have at most $n+1<m+n$
nonzero entries. By Lemma 3.2 no closed simple path can contain the
position of that entry. Therefore, one may erase that row from $M$ and obtain
a matrix of size $(m-1)\times n$ which contains the same paths as $M$.
Similarly, one may erase any column of $M$ which contains a single nonzero
entry without affecting the paths contained in $M$.
By eliminating all rows and columns of $M$ which contain at most one nonzero
entry, we reduce to a matrix which contains at least two nonzero entries in
each row and column. Notice that the operation of canceling any rows and
columns of $M$ which contain at most one nonzero entry preserves the property
that the matrix has at least as many nonzero entries as the sum of its number
of rows and its number of columns. We can now build a closed simple path as
follows. Starting from an arbitrary nonzero entry, move along the
corresponding row and select another nonzero entry. Then move along the column
of the last nonzero entry picked and select another nonzero entry. Proceed in this
way, alternating between rows and columns. At every step, we find a nonzero
entry different from the last one that was picked, since we supposed that in
each line we have at least two nonzero entries. Since the number of lines is
finite, after $k$ steps we must choose an entry on a line where there is
already one entry which was picked at a step $h$ with $1\leq h<k-1$. As soon
as that happens, we choose that entry. The positions of the entries picked
from step $h$ onwards form the support of a closed simple path in $M$. ∎
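Proposition 3.3 also yields an effective test for the existence of a closed simple path. One can check that closed simple paths in $M$ correspond exactly to cycles in the auxiliary bipartite graph whose vertices are the rows and columns of $M$ and whose edges are its nonzero entries (this is not the graph $G_{M}$ above, but it is equivalent for this purpose). The Python sketch below (names ours) exploits this via union-find: an edge joining two already-connected vertices closes a cycle.

```python
# Decide whether M contains a closed simple path, via the bipartite
# row-column graph of its nonzero entries: such a path exists iff this
# graph contains a cycle, which union-find detects edge by edge.

def has_closed_simple_path(M):
    parent = {}
    def find(v):                        # find with path halving
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for i, row in enumerate(M):
        for j, x in enumerate(row):
            if x:
                a, b = find(('r', i)), find(('c', j))
                if a == b:              # edge inside one component: cycle
                    return True
                parent[a] = b           # otherwise merge the components
    return False

M = [[1, 0, 0, 1, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 0]]   # Example 3.6 below
print(has_closed_simple_path(M))                            # prints: True
```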
###### Remark 3.4.
The result in Proposition 3.3 is optimal, in the sense that there are matrices
in $\mathbb{F}_{q}^{m\times n}$ with $m+n-1$ nonzero entries that do not
contain any closed simple path. An example is given by
$M=\begin{pmatrix}1&1&\dots&1\\\ 1&0&\dots&0\\\ \vdots&\vdots&\ddots&\vdots\\\
1&0&\dots&0\end{pmatrix}\in\mathbb{F}_{q}^{m\times n}.$
###### Definition 3.5.
Let $m,n\geq 2$ and $M\in\mathbb{F}_{q}^{m\times n}$. We say that a matrix
$M^{\prime}\in\mathbb{F}_{q}^{m\times n}$ is a path-reduction, or just a
reduction, of $M$ if it is obtained from $M$ by changing to zero a nonzero
entry that belongs to a closed simple path.
A matrix $M\in\mathbb{F}_{q}^{m\times n}$ is path-irreducible, or just
irreducible, if it does not contain any closed simple path.
Let $M_{1},\dots,M_{\ell}\in\mathbb{F}_{q}^{m\times n}$. We say that
$(M_{1},\dots,M_{\ell})$ is a path-reduction chain if for every $1\leq
i<\ell$, $M_{i+1}$ is a reduction of $M_{i}$ and $M_{\ell}$ is irreducible.
Since in a closed simple path there are at least four entries and a matrix may
have more than one closed simple path, a matrix may have several path-
reductions. We illustrate the situation in the next simple example.
###### Example 3.6.
Consider the matrix $M\in\mathbb{F}_{2}^{3\times 5}$ given by
$M=\begin{pmatrix}1&0&0&1&0\\\ 0&1&0&1&0\\\ 1&1&0&0&0\end{pmatrix}\,.$
The path $((1,1),(1,4),(2,4),(2,2),(3,2),(3,1))$ is closed and simple.
Replacing any of the ones of $M$ with a zero yields a reduction of $M$, since
they all belong to this path. In particular
$M^{\prime}=\begin{pmatrix}0&0&0&1&0\\\ 0&1&0&1&0\\\
1&1&0&0&0\end{pmatrix}\qquad M^{\prime\prime}=\begin{pmatrix}1&0&0&0&0\\\
0&1&0&1&0\\\ 1&1&0&0&0\end{pmatrix}\,$
are reductions of $M$. Notice that both $M^{\prime}$ and $M^{\prime\prime}$
are irreducible.
The next corollary is an immediate consequence of Proposition 3.3.
###### Corollary 3.7.
Let $M\in\mathbb{F}_{q}^{m\times n}$. If $M$ is irreducible, then $M$ has at
most $m+n-1$ nonzero entries.
Given a matrix $M\in\mathbb{F}_{q}^{m\times n}$, it is always possible to find
a path-reduction chain starting from $M$. In fact, one can simply apply
consecutive reductions. Since $M$ has a finite number of nonzero entries, one
obtains an irreducible matrix in a finite number of steps.
###### Proposition 3.8.
Let $M\in\mathbb{F}_{q}^{m\times n}$. Then there exists a path-reduction chain
$(M_{1},\dots,M_{\ell})$ such that $M_{1}=M$.
Notice that one can find more than one path-reduction chain starting with the
same matrix $M$. In Appendix A we prove that each path-reduction chain with
$M_{1}=M$ has the same length.
###### Example 3.9.
Let $M\in\mathbb{F}_{2}^{3\times 3}$ be the matrix
$M=\begin{pmatrix}1&1&0\\\ 1&1&1\\\ 0&1&1\end{pmatrix}\,.$
Both
$\left(\begin{pmatrix}1&1&0\\\ 1&1&1\\\
0&1&1\end{pmatrix},\begin{pmatrix}0&1&0\\\ 1&1&1\\\
0&1&1\end{pmatrix},\begin{pmatrix}0&1&0\\\ 1&1&1\\\
0&1&0\end{pmatrix}\right),$
and
$\left(\begin{pmatrix}1&1&0\\\ 1&1&1\\\
0&1&1\end{pmatrix},\begin{pmatrix}1&1&0\\\ 1&0&1\\\
0&1&1\end{pmatrix},\begin{pmatrix}1&1&0\\\ 1&0&1\\\ 0&1&0\end{pmatrix}\right)$
are path-reduction chains starting with $M$.
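The chains above can also be produced mechanically. In the sketch below (names ours), an entry lies on a closed simple path exactly when the corresponding edge of the bipartite row-column graph introduced after Proposition 3.3 lies on a cycle, that is, when its endpoints stay connected after the edge is removed; repeatedly zeroing such an entry realizes Proposition 3.8.

```python
from collections import deque

def on_cycle(M, i0, j0):
    """Does the nonzero entry (i0, j0) lie on a closed simple path?
    Equivalently: are row i0 and column j0 still connected in the
    bipartite row-column graph once the edge (i0, j0) is removed?"""
    adj = {}
    for i, row in enumerate(M):
        for j, x in enumerate(row):
            if x and (i, j) != (i0, j0):
                adj.setdefault(('r', i), []).append(('c', j))
                adj.setdefault(('c', j), []).append(('r', i))
    seen, queue = {('r', i0)}, deque([('r', i0)])
    while queue:                         # breadth-first search
        v = queue.popleft()
        if v == ('c', j0):
            return True
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def path_reduction_chain(M):
    """Greedily zero entries on closed simple paths until irreducible."""
    chain = [M]
    while True:
        M = [row[:] for row in chain[-1]]
        entry = next(((i, j) for i, row in enumerate(M)
                      for j, x in enumerate(row) if x and on_cycle(M, i, j)),
                     None)
        if entry is None:
            return chain                 # last matrix is path-irreducible
        M[entry[0]][entry[1]] = 0
        chain.append(M)

M = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]    # the matrix of Example 3.9
print(len(path_reduction_chain(M)))       # prints: 3, as in both chains
```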
## 4 Proof of the Main Theorem
In order to clarify the structure of the proof of the Main Theorem, we enclose
part of it in two technical lemmas. The first one shows under which conditions
two maps coincide on a closed simple path.
###### Lemma 4.1.
Let $M\in\mathbb{F}_{q}^{m\times n}$ and let
$((i_{1},j_{1}),\dots,(i_{k},j_{k}))$ be a closed simple path in $M$. Let
$\varphi,\psi:\langle
E_{i_{1},j_{1}},\dots,E_{i_{k},j_{k}}\rangle\rightarrow\langle
E_{i_{1},j_{1}},\dots,E_{i_{k},j_{k}}\rangle$ be two rank-preserving linear maps
such that $\varphi(E_{i_{h},j_{h}})=s_{h}E_{i_{h},j_{h}}$ and
$\psi(E_{i_{h},j_{h}})=t_{h}E_{i_{h},j_{h}}$, where
$s_{1},\dots,s_{k},t_{1},\dots,t_{k}\in\mathbb{F}_{q}^{*}$. If $s_{h}=t_{h}$
for $1\leq h<k$, then $s_{k}=t_{k}$.
###### Proof.
For $a\in\mathbb{F}_{q}^{*}$, consider the matrix
$M_{a}=\left(\sum_{h=1}^{k-1}E_{i_{h},j_{h}}\right)+aE_{i_{k},j_{k}}.$
Since $((i_{1},j_{1}),\dots,(i_{k},j_{k}))$ is a closed simple path, by Lemma
3.2, $k$ is even and the nonzero entries of $M_{a}$ are contained in a square
submatrix of size $k/2$, whose determinant is a linear function of $a$. Hence
there exists $\bar{a}\in\mathbb{F}_{q}^{*}$ such that
$\mathrm{rank}(M_{\bar{a}})=k/2-1$ and $\mathrm{rank}(M_{a})=k/2$ for all
$a\in\mathbb{F}_{q}\setminus\\{\bar{a}\\}$.
Let $N$ be the matrix given by
$N=\left(\sum_{h=1}^{k-1}s_{h}^{-1}E_{i_{h},j_{h}}\right)+\bar{a}s_{k}^{-1}E_{i_{k},j_{k}}.$
Since $\varphi(N)=M_{\bar{a}}$ and $\varphi,\psi$ preserve rank,
$\mathrm{rank}(\psi(N))=\mathrm{rank}(N)=\mathrm{rank}(\varphi(N))=k/2-1$.
Moreover, if $s_{h}=t_{h}$ for $1\leq h<k$, then
$\psi(N)=\left(\sum_{h=1}^{k-1}E_{i_{h},j_{h}}\right)+t_{k}\bar{a}s_{k}^{-1}E_{i_{k},j_{k}}\,.$
By the uniqueness of $\bar{a}$ we conclude that
$\bar{a}=t_{k}\bar{a}s_{k}^{-1}$, hence $t_{k}=s_{k}$. ∎
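To make the uniqueness of $\bar{a}$ concrete, here is a small worked instance added for illustration: for the closed simple path $((1,1),(1,2),(2,2),(2,1))$ one has $k=4$ and
$M_{a}=\begin{pmatrix}1&1\\\ a&1\end{pmatrix},\qquad\det(M_{a})=1-a,$
so $\bar{a}=1$ is the only value with $\mathrm{rank}(M_{\bar{a}})=k/2-1=1$, while $\mathrm{rank}(M_{a})=k/2=2$ for every $a\neq\bar{a}$.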
The next lemma establishes the Extension Property in a special case.
###### Lemma 4.2.
Let $\varphi:\langle
E_{i_{1},j_{1}},\dots,E_{i_{k},j_{k}}\rangle\rightarrow\langle
E_{i_{1},j_{1}},\dots,E_{i_{k},j_{k}}\rangle\subseteq\mathbb{F}_{q}^{m\times
n}$ be a rank-preserving linear map such that
$\varphi(E_{i_{h},j_{h}})=s_{h}E_{i_{h},j_{h}}$, where
$s_{1},\dots,s_{k}\in\mathbb{F}_{q}^{*}$. If the matrix
$M=\sum_{h=1}^{k}E_{i_{h},j_{h}}$ is irreducible, then there are two diagonal
invertible matrices $A\in\mathbb{F}_{q}^{m\times m}$ and
$B\in\mathbb{F}_{q}^{n\times n}$ such that
$\varphi(C)=ACB$
for all $C\in\langle E_{i_{1},j_{1}},\dots,E_{i_{k},j_{k}}\rangle$.
###### Proof.
We build the matrices $A=(a_{i,j})$ and $B=(b_{i,j})$ step by step. Let $h=1$
and set $a_{i_{1},i_{1}}=1$ and $b_{j_{1},j_{1}}=s_{1}$. This guarantees that
$AE_{i_{1},j_{1}}B=s_{1}E_{i_{1},j_{1}}$. At each subsequent step, choose
$h\in\\{1,\ldots,k\\}$ among those that have not been previously chosen and
such that either $a_{i_{h},i_{h}}$ or $b_{j_{h},j_{h}}$ has been assigned a
value, if such an $h$ exists. If $a_{i_{h},i_{h}}$ was already assigned a
value, set $b_{j_{h},j_{h}}=a_{i_{h},i_{h}}^{-1}s_{h}$. If $b_{j_{h},j_{h}}$
was already assigned a value, set $a_{i_{h},i_{h}}=b_{j_{h},j_{h}}^{-1}s_{h}$.
Notice that at most one among $a_{i_{h},i_{h}}$ and $b_{j_{h},j_{h}}$ can
already have an assigned value. Indeed, assume by contradiction that both
$a_{i_{h},i_{h}}$ and $b_{j_{h},j_{h}}$ are fixed. Then there exist two simple
paths $(\alpha_{1},\dots,\alpha_{u})$ and $(\beta_{1},\dots,\beta_{v})$ such
that $\alpha_{1}=\beta_{1}=(i_{1},j_{1})$,
$\alpha_{u}=\beta_{v}=(i_{h},j_{h})$ and $\alpha_{u-1}\neq\beta_{v-1}$. Let
$z>1$ be the smallest index such that $\alpha_{z}\neq\beta_{z}$. Let $N$ be
the inclusion-minimal submatrix of $M$ whose support contains
$\\{\alpha_{z-1},\dots,\alpha_{u},\beta_{z},\dots,\beta_{v-1}\\}$. Let $d,e$
be such that $N$ has size $d\times e$. Notice that $d,e\geq 2$, since
$\alpha_{z-1},\alpha_{z}$, and $\alpha_{u}$ are not aligned. If $\beta_{z}$
and $\alpha_{z}$ are not aligned, then every line of $N$ contains at least two
nonzero entries. Otherwise, $\alpha_{z-1},\alpha_{z}$, and $\beta_{z}$ are
aligned, then any line that does not pass through the position $\alpha_{z-1}$
contains at least two nonzero entries of $N$. Therefore, in both cases, we
have at least $2\max\\{d,e\\}$ nonzero entries in a submatrix of size $d\times e$.
Since $d+e\leq 2\max\\{d,e\\}$, by Proposition 3.3 there exists a closed
simple path in $N$, contradicting the irreducibility of $M$.
If no such $h$ exists, choose any $h$ among those that have not been
previously chosen and set $a_{i_{h},i_{h}}=1$ and $b_{j_{h},j_{h}}=s_{h}$.
When all values of $h$ have been considered, set to $1$ all the entries on the
diagonal of $A$ and $B$ which have not been assigned a value yet. ∎
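The construction in this proof is a propagation along the bipartite row-column graph of the positions, which is a forest precisely because $M$ is irreducible, so no conflict can arise. The Python sketch below (names ours; $q=p$ prime for simplicity) carries it out and checks the defining relation at every position.

```python
# Sketch of the construction in the proof of Lemma 4.2 over a prime field.

def diagonal_factors(positions, scalars, m, n, p):
    """Diagonal entries a[0..m-1], b[0..n-1] with a[i] * b[j] = s_h mod p
    for every generator E_{i,j} with phi(E_{i,j}) = s_h E_{i,j}."""
    s = {pos: t for pos, t in zip(positions, scalars)}
    a, b = [None] * m, [None] * n
    while True:
        changed = False
        for (i, j) in positions:        # propagate along assigned lines
            if a[i] is None and b[j] is not None:
                a[i] = s[(i, j)] * pow(b[j], -1, p) % p
                changed = True
            elif b[j] is None and a[i] is not None:
                b[j] = s[(i, j)] * pow(a[i], -1, p) % p
                changed = True
        if not changed:                 # start a fresh component, if any
            fresh = next(((i, j) for (i, j) in positions
                          if a[i] is None and b[j] is None), None)
            if fresh is None:
                break
            a[fresh[0]], b[fresh[1]] = 1, s[fresh] % p
    a = [x if x is not None else 1 for x in a]   # untouched lines: set to 1
    b = [x if x is not None else 1 for x in b]
    assert all(a[i] * b[j] % p == s[(i, j)] % p for (i, j) in positions)
    return a, b

# Positions of the irreducible matrix M'' from Example 3.6, scalars in F_5.
a, b = diagonal_factors([(0, 0), (1, 1), (1, 3), (2, 0), (2, 1)],
                        [2, 3, 4, 1, 2], m=3, n=5, p=5)
print(a, b)   # a[i] * b[j] = s mod 5 at every generator position
```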
###### Remark 4.3.
The matrix $M$ in Lemma 4.2 is irreducible, which by Corollary 3.7 implies
that $\dim(\langle E_{i_{1},j_{1}},\dots,E_{i_{k},j_{k}}\rangle)\leq m+n-1$.
Notice that $m+n-1$ is the number of degrees of freedom of the pair of
diagonal matrices $A,B$, as their $m+n$ diagonal entries are only determined
up to the common rescaling $(A,B)\mapsto(cA,c^{-1}B)$.
We conclude the section with the proof of the Main Theorem.
###### Proof of the Main Theorem.
If $m=1$ or $n=1$, any injective linear map is a linear isometry and the
statement holds. Suppose therefore that $m,n\geq 2$ and let
$M=\sum_{h=1}^{k}E_{i_{h},j_{h}}$. By Proposition 3.8 there exists a path-
reduction chain $(M,M_{2},\dots,M_{\ell})$ with $M_{\ell}$ irreducible.
Consider the subset $R\subseteq\\{1,\dots,k\\}$ such that $M_{\ell}=\sum_{r\in
R}E_{i_{r},j_{r}}$. By Lemma 4.2 there are two invertible matrices $A,B$ such
that
$AE_{i_{r},j_{r}}B=\varphi(E_{i_{r},j_{r}}),$
for all $r\in R$. Following the path-reduction chain backwards and applying
Lemma 4.1 $\ell-1$ times, we obtain that $AE_{i_{h},j_{h}}B=\varphi(E_{i_{h},j_{h}})$,
for $1\leq h\leq k$. By linearity we conclude that $\varphi(C)=ACB$ for all
$C\in\mathcal{C}$. ∎
## Appendix A Length of path-reduction chains
In this appendix, we prove that every path-reduction chain of a matrix
$M\in\mathbb{F}_{q}^{m\times n}$ has the same length.
###### Remark A.1.
Let $M\in\mathbb{F}_{q}^{m\times n}$ and let
$\sigma_{1}=((i_{1},j_{1}),\dots,(i_{k},j_{k}))$ and
$\sigma_{2}=((i^{\prime}_{1},j^{\prime}_{1}),\dots,(i^{\prime}_{h},j^{\prime}_{h}))$
be two closed simple paths. Notice that if
$\mathrm{supp}(\sigma_{1})\neq\mathrm{supp}(\sigma_{2})$, then
$\mathrm{supp}(\sigma_{1})\nsubseteq\mathrm{supp}(\sigma_{2})$ and vice versa.
In the next lemma, we prove that if $M$ contains two distinct closed simple
paths, then any path-reduction chain of $M$ has length at least 3.
###### Lemma A.2.
Let $M=(m_{ij})\in\mathbb{F}_{q}^{m\times n}$, let
$\sigma_{1}=((i_{1},j_{1}),\dots,(i_{k},j_{k}))$ and
$\sigma_{2}=((i^{\prime}_{1},j^{\prime}_{1}),\dots,(i^{\prime}_{h},j^{\prime}_{h}))$
be two closed simple paths such that
$\mathrm{supp}(\sigma_{1})\neq\mathrm{supp}(\sigma_{2})$. If
$(i_{1},j_{1})=(i^{\prime}_{1},j^{\prime}_{1})$, then for each
$(i_{s},j_{s})\in\mathrm{supp}(\sigma_{1})\setminus\mathrm{supp}(\sigma_{2})$
there is a closed simple path in $M-m_{i_{1},j_{1}}E_{i_{1},j_{1}}$ that
contains $(i_{s},j_{s})$.
###### Proof.
Up to reversing the order of $\sigma_{2}$ and to a transposition, we may
suppose without loss of generality that
$j_{1}^{\prime}=j_{2}^{\prime}=j_{k}=j_{1}$. As a consequence, also
$i_{1}=i_{2}=i_{h}^{\prime}=i_{1}^{\prime}$. Consider the list of positions
$\gamma=(\gamma_{1},\dots,\gamma_{h+k-2})=((i_{2},j_{2}),\dots,(i_{k},j_{k}),(i^{\prime}_{2},j^{\prime}_{2}),\dots,(i^{\prime}_{h},j^{\prime}_{h})).$
Notice that $\gamma$ is not always a path, since it can contain more than two
entries with the same first or second coordinate, as well as repeated entries.
Fix an $s$ such that
$(i_{s},j_{s})\in\mathrm{supp}(\sigma_{1})\setminus\mathrm{supp}(\sigma_{2})$
and let $\gamma_{x}=(i_{s},j_{s})$. We now recursively build a finite sequence
of simple paths $\pi_{n}$, whose support is contained in that of $\gamma$ and
which start with $\gamma_{x}$. Let $\pi_{1}=(\gamma_{x})$. Suppose that we
have constructed $\pi_{n-1}=(p_{1},\dots,p_{\ell})$ with $p_{1}=\gamma_{x}$
and $p_{\ell}=\gamma_{y}$, with $y\equiv x+n-2\pmod{h+k-2}$ and $\ell\geq 2$. Let
$z\equiv y+1\pmod{h+k-2}$ and define $\pi_{n}$ as follows:
* •
If no two entries of $\pi_{n-1}$ have either the first or the second
coordinate in common with $\gamma_{z}$, then let
$\pi_{n}=(p_{1},\ldots,p_{\ell},\gamma_{z})$.
* •
If there exists $1\leq r<t\leq\ell$ such that $p_{r},p_{t}$ and $\gamma_{z}$
share either the first or the second component, then let
$\pi_{n}=(p_{1},\ldots,p_{r},\gamma_{z})$ if $t=r+1$. Notice that if $t\neq
r+1$, then $\pi_{n-1}$ is a closed simple path.
For $n\geq 2$, $\pi_{n}$ is a simple path of length at least $2$. If for some
$n$ we find a closed simple path, then we are done. Else, $\pi_{h+k-2}$ is a
closed simple path, since $\gamma_{x-1}$ and $\gamma_{x}$ lie on a common line
and $\gamma_{x-2}$ and $\gamma_{x+1}$ do not. ∎
The next lemma shows that the length of a path-reduction chain is independent
of the order of the reductions.
###### Lemma A.3.
Let $M\in\mathbb{F}_{q}^{m\times n}$ and let $M,M_{2},\dots,M_{k+1}$ be a
path-reduction chain for $M$. Let $\alpha_{1},\ldots,\alpha_{k}$ be the
ordered list of positions of the entries that we set to zero during the path-
reduction chain. Any permutation of the sequence
$\alpha_{1},\ldots,\alpha_{k}$ still yields a path-reduction chain for $M$.
###### Proof.
Since the group of permutations of $k$ elements is generated by the $k-1$
transpositions $(1,2),(2,3),\ldots,(k-1,k)$, it suffices to prove that setting
to zero the entries in position
$\alpha_{1},\ldots,\alpha_{i-2},\alpha_{i},\alpha_{i-1},\alpha_{i+1},\ldots,\alpha_{k}$
in the given order gives a path-reduction chain for $M$, for $i=2,\ldots,k$.
This corresponds to the sequence of matrices
$M_{1},M_{2},\ldots,M_{i-1},\bar{M}_{i},M_{i+1},M_{i+2},\ldots,M_{k+1}$
where we let $M_{1}=M$. By assumption, $M_{k+1}$ is irreducible and $M_{j}$ is
a reduction of $M_{j-1}$ for $j=2,\ldots,i-1,i+2,\ldots,k$.
The matrix $\bar{M}_{i}$ is obtained from $M_{i-1}$ by setting to zero the
entry in position $\alpha_{i}$. Since $\alpha_{i}$ belongs to a closed simple
path $\pi$ in $M_{i}$ and every nonzero entry in $M_{i}$ is also a nonzero
entry in $M_{i-1}$, then $\pi$ is also a closed simple path in $M_{i-1}$.
Therefore, $\bar{M}_{i}$ is a reduction of $M_{i-1}$. In order to prove that
$M_{i+1}$ is a reduction of $\bar{M}_{i}$, we need to show that there is a
closed simple path in $\bar{M}_{i}$ which contains $\alpha_{i-1}$. Notice that
$\bar{M}_{i}$ is equal to $M_{i}$, except for the entries in position
$\alpha_{i-1}$ and $\alpha_{i}$. By assumption, there are closed simple paths
$\sigma_{1}$ and $\sigma_{2}$ such that $\sigma_{1}$ contains $\alpha_{i-1}$
and $\sigma_{2}$ contains $\alpha_{i}$, but not $\alpha_{i-1}$. If
$\sigma_{1}$ does not contain $\alpha_{i}$, then it is a closed simple path in
$\bar{M}_{i}$ which contains $\alpha_{i-1}$. If instead $\sigma_{1}$ contains
$\alpha_{i}$, then by Lemma A.2 there is a closed simple path in $M_{i}$ which
contains $\alpha_{i-1}$ but not $\alpha_{i}$. This gives a closed simple path
in $\bar{M}_{i}$ which contains $\alpha_{i-1}$. ∎
We are now ready to prove that every path-reduction chain of a given matrix
has the same length.
###### Theorem A.4.
Let $M\in\mathbb{F}_{q}^{m\times n}$ be a matrix. Every path-reduction chain
of $M$ has the same length.
###### Proof.
We proceed by induction on the maximum length $\ell$ of a path-reduction chain
of $M$. Notice that $\ell\geq 1$ and equality holds if and only if $M$ is
irreducible. If $\ell=2$, then $M$ needs to have at least one closed simple path. Moreover, there is an $\alpha$ in the path such that every closed simple path
in $M$ contains $\alpha$. If $M$ contains two distinct closed simple paths
through $\alpha$, then by Lemma A.2 it also contains a closed simple path that
does not pass through $\alpha$. It follows that $M$ contains exactly one
closed simple path and every path-reduction chain has length two and is
obtained by replacing with zero one of the entries of $M$ in one of the
positions on the closed simple path.
Let $M,M_{2},\dots,M_{\ell}$ and $M,M_{2}^{\prime},\dots,M_{k}^{\prime}$ be
two path-reduction chains for $M$, with $\ell\geq k$. Let $\alpha_{1},\dots,\alpha_{k-1}$ and $\beta_{1},\dots,\beta_{\ell-1}$ be the
positions of the entries of $M$ that we replace with zero to obtain the path-
reduction chains $M,M_{2}^{\prime},\dots,M_{k}^{\prime}$ and
$M,M_{2},\dots,M_{\ell}$, respectively. Notice that $M_{2},\dots,M_{\ell}$ is
a path-reduction chain for $M_{2}$ and, by the induction hypothesis, every
path reduction chain for $M_{2}$ has length $\ell-1$. Starting from $M_{2}$,
we construct a path-reduction chain $M_{2},\bar{M}_{3},\ldots,\bar{M}_{\ell}$
as follows. At each step $i=1,\ldots,k-1$, if there is a closed simple path
that contains $\alpha_{i}$, we replace the entry in position $\alpha_{i}$ by
zero. We claim that we delete at most $k-2$ entries of $M_{2}$. In fact, if
setting to zero the entries in position
$\beta_{1},\alpha_{1},\ldots,\alpha_{k-1}$ in the prescribed order yields a
path-reduction chain of $M$, by Lemma A.3 so does setting to zero the entries
in position $\alpha_{1},\ldots,\alpha_{k-1},\beta_{1}$. This contradicts the
assumption that $M,M_{2}^{\prime},\dots,M_{k}^{\prime}$ is a path-reduction
chain. So we have obtained a path-reduction chain for $M_{2}$ of length at most $k-1$; by the induction hypothesis, its length equals $\ell-1$, hence $\ell-1\leq k-1$ and, together with $\ell\geq k$, it follows that $\ell=k$. ∎
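To make the combinatorics above concrete, here is a minimal sketch under the natural bipartite-graph reading of the appendix: nonzero entries of $M$ are edges between row and column nodes, closed simple paths correspond to cycles, and a reduction removes an edge lying on a cycle (equivalently, a non-bridge edge). The example matrix and all helper names are our own illustration, not taken from the paper.

```python
import random

def nonzero_edges(M):
    """One edge per nonzero entry, joining row node i and column node j."""
    return [(i, j) for i, row in enumerate(M) for j, v in enumerate(row) if v]

def connected(edges, i, j):
    """Is row node i connected to column node j in the bipartite graph?"""
    adj = {}
    for r, c in edges:
        adj.setdefault(('r', r), []).append(('c', c))
        adj.setdefault(('c', c), []).append(('r', r))
    stack, seen = [('r', i)], {('r', i)}
    while stack:
        u = stack.pop()
        if u == ('c', j):
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def chain_length(M, rng):
    """Zero out, in random order, entries lying on closed simple paths
    (= non-bridge edges) until the matrix is irreducible; return the
    length of the resulting path-reduction chain M, M_2, ..."""
    edges = nonzero_edges(M)
    matrices = 1
    while True:
        rng.shuffle(edges)
        for e in edges:
            rest = [d for d in edges if d != e]
            if connected(rest, e[0], e[1]):   # e lies on a cycle
                edges, matrices = rest, matrices + 1
                break
        else:
            return matrices                   # no cycle left: irreducible

M = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
print({chain_length(M, random.Random(s)) for s in range(20)})  # {3}
```

Running many random reduction orders yields a single chain length, as Theorem A.4 predicts.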
# Does universal controllability of physical systems prohibit thermodynamic
cycles?
Dominik Janzing
Max Planck Institute for Intelligent Systems
Max-Planck-Ring 4
72076 Tübingen, Germany
Email: <EMAIL_ADDRESS>
Pawel Wocjan
Department of Computer Science, University of Central Florida
4328 Scorpius Street
Orlando, FL 32816, USA
Email: <EMAIL_ADDRESS>
(March 28, 2018)
###### Abstract
Here we study the thermodynamic cost of computation and control using ‘physically universal’ cellular automata or Hamiltonians. The latter were previously defined as systems that admit the implementation of any desired transformation on a finite target region by first initializing the state of the surrounding and then letting the system evolve according to its autonomous dynamics. This way, one obtains a model of control where each region can play both roles: that of the controller and that of the system to be controlled. In physically universal systems, every degree of freedom is indirectly accessible by operating on the remaining degrees of freedom.
In a nutshell, the thermodynamic cost of an operation is then given by the
size of the region around the target region that needs to be initialized. In
the meantime, physically universal CAs have been constructed by Schaeffer (in
two dimensions) and Salo & Törmä (in one dimension). Here we show that in
Schaeffer’s CA the cost for implementing $n$ operations grows linearly in $n$,
while operating in a thermodynamic cycle requires sublinear growth to ensure
zero cost per operation in the limit $n\to\infty$. Although this particular
result need not hold for general physically universal CAs, this strong notion
of universality does imply a certain kind of instability of information, which
could result in lower bounds on the cost of protecting information from its
noisy environment.
The technical results of the paper are sparse and quite simple. The
contribution of the paper is mainly conceptual and consists in illustrating
the type of thermodynamic questions raised by models of control that rely on
the concept of physical universality.
## 1 Why thermodynamics of computation and control requires new models
### 1.1 The debate on thermodynamics of computation since the 1960s
The question of whether there are fundamental lower bounds on the energy
consumption of computing devices has attracted the attention of researchers
since the 1960s. Landauer [1] realized that logically irreversible operations like erasure of memory space necessarily require transferring the energy $kT\ln 2$ per bit to the environment (with $k$ denoting Boltzmann’s constant and $T$ the temperature of the environment) due to the second law of thermodynamics.111In [2] we have argued that the energy requirements for
reliable erasure are even larger than Landauer’s bound when the state of the
energy source is noisy, for instance if it is given by two thermodynamic
reservoirs of different temperatures. For further different perspectives on
Landauer’s principle see, e.g., [3, 4, 5]. Bennett [6] clarified that
computation can be performed without logically irreversible operations and
thus Landauer’s argument does not prove any fundamental lower bound for the
energy needed by computation tasks without further specification. Ref. [7]
argues that physical models of reversible computation should include the clocking mechanism (which controls the implementation of logical gates) because
otherwise one neglects the question of how to implement clocking in a
thermodynamically reversible way (after all, if both gates and clocking device
are described as quantum systems then the influence of the latter on the
former would, to some extent, also imply an influence of the former on the
latter [8]).
### 1.2 External clocking and control signals as loopholes
To motivate this work step by step, we first briefly discuss the thermodynamics of clocking and synchronization, which is a sophisticated problem in itself [9, 10, 11, 12]. Ref. [11], for instance, studies some synchronization protocols that suggest that thermodynamically reversible synchronization requires the exchange of quantum information, which links the a priori different tasks of reversible
computation and quantum computing.222Here, the formal distinction between
quantum and classical clock signals as well as the conversion of time
information between them is based on the rather general framework introduced
in [13].
Going beyond the question of whether implementing reversible logical
operations is possible in a thermodynamically reversible way, we ask whether
implementing unitary operations on some quantum system is possible in a
thermodynamically reversible way. Regardless of what we call the physical devices controlling the implementation (we called them ‘clocks’ in the case of computation processes), the implementation of a unitary $U$ also requires ‘changing Hamiltonians’, except for the special case where $U=e^{-iHt}$ with $H$ being the free Hamiltonian of the system under consideration. However, do we
really have appropriate models for discussing the thermodynamic cost of
‘changing a system’s Hamiltonian’? After all, describing a control field in
classical terms is only a valid approximation if it can be considered
macroscopic. For instance, a ‘macroscopic’ number of electrons, sufficiently
distant from some probe particle under consideration, could create such a
‘classical’ field. It is hard, however, to imagine a macroscopic controller
whose energy consumption does not exceed the energy content of the microscopic
target system. This suggests that discussing potential thermodynamic
limitations requires microscopic models of control.
For both tasks, computation and control, we are criticizing basically the same
issue: as long as the device controlling or triggering the operations
(regardless of whether we call it ‘clock’ or ‘controller’) is not included in
our microscopic description, we are skeptical about the claim that the
operation could ‘in principle’ be implemented in a thermodynamic cycle without
any energy cost.
These remarks raise the following two questions: (1) What are appropriate
models for discussing resource requirements of computation and control? Given
such a model, we need to ask (2) how to define resource requirements within
the model.
To discuss the cost of ‘changing Hamiltonians’ we first recall that changing
‘effective Hamiltonians’ is what is actually done: Let the target system, for
instance, be a single particle. Changing control fields actually means changing the quantum state of the physical systems surrounding the particle. In a certain mean-field limit, this state change amounts to the change of a classical field. Thus, at the fundamental level, the particles interact according to a fixed Hamiltonian. Taking this perspective seriously, we are looking for a model
where control operations are implemented by a fixed interaction Hamiltonian if
the states of the surrounding quantum systems are tuned in an appropriate way.
Ref. [14] also studies thermodynamic laws in a scenario where system,
controller, and baths are coupled by a fixed time-independent Hamiltonian,
while [15] also considers autonomous dynamics of open systems. Although the goal of the present paper is also to study thermodynamics in a scenario with autonomous time evolution, we consider a model that is nevertheless general enough to enable controlling controllers by ‘meta’-controllers and so on. This, in turn, requires coupling the target system considered in the first place to an infinite system that is not just a ‘heat bath’, as is often assumed, but something that can be controlled and can, at the same time, act as a controller.
### 1.3 Spin lattice Hamiltonians as autonomous models of computation
As models for reversible computing, Hamiltonians on spin lattices have been
constructed that are able to perform computation [16] by their autonomous
evolution. This addresses the above criticism in the sense that these models
do not require any external clocking. Instead, synchronization is achieved by
the fixed and spatially homogeneous interaction Hamiltonian itself. Refs. [17,
18] go one step further and describe Hamiltonians on spin lattices for which
the result of the computation need not be read out within a certain time
interval because the time-average state encodes the result. This solves the more subtle problem that otherwise the readout would require an external clock.
There are several properties that make spin lattices attractive as physical
toy models of the world (and not only as models of a computing device): the discrete lattice symmetry represents spatial homogeneity of the physical laws and the constant Hamiltonian represents homogeneity in time. By looking at lattices
as discrete approximations of a field theoretical description of the physical
world, even the presence and absence of matter can be seen as just being
different states of the lattice. Accordingly, one can argue that spin lattices
allow for a quite principled way of studying thermodynamics of computation and
control because they model not only the computing device itself but also its
interaction with the environment: to this end, just consider some region in
the lattice as the computing device and the complement of that region as the
environment.
### 1.4 Why we propose to add physical universality
For the purpose of developing our ‘toy thermodynamics of computing and
control’ we propose to consider spin lattices or cellular automata (as their
discrete analog) that satisfy the additional condition of physical
universality introduced in [19]. This property will be explained and motivated
on an informal level in the following section. Roughly speaking, physical
universality means that the autonomous time evolution of the system is able to
implement any mathematically possible process on an arbitrarily large finite
region after the complement of the region is prepared to an appropriate
initial state. In the case of quantum systems, we mean by ‘mathematically
possible’ the set of completely positive trace preserving maps. In the
classical case, we refer to the set of stochastic maps. Given that one
believes in the hypothesis that real physical systems admit in principle the
implementation of any mathematically possible process333For critical remarks
on this postulate see [20], Chapter 7: here doubts are raised that every self-
adjoint operator in a multi-particle system can be measured in practice.
However, there always exists a unitary transformation that reduces the
observable to an observable that is diagonal in the tensor product basis,
i.e., measurements of every single particle. Given that one believes that
these individual measurements are always possible even for multi-partite
systems, the doubts thus question the implementation of arbitrary unitaries.
Further, Ref. [21] discusses the concept of physical universality for an
understanding of life and also proposes to weaken physical universality – just
to mention a second critical point of view., it is natural to demand that the
interaction at hand itself is able to implement the transformation. Otherwise,
the interaction does not fully describe the interface between system and its
environment. For the purpose of our thermodynamic considerations, however, we
want to study systems whose interface is completely described by the
interaction under consideration rather than relying on control operations that come as additional, external ingredients.
The paper is structured as follows. Section 2 briefly motivates the notion of
physical universality introduced in [19] for both Hamiltonians and cellular
automata444Note that this paper contains several ideas that already appear in the preprint [19], but often less explicitly than here. Since [19] will not be published, because its main purpose had been to state a question that has since been solved, we do not mind this overlap., although we focus on the latter for the sake of simplicity. Section 3 introduces the condition of
physical universality formally and describes and discusses the notion of
resource requirements introduced in [19], which is also the basis of this
paper. Further, we raise the question of whether the resource requirements of
repeating a certain operation can grow sublinearly in the number of repetitions (which we argue is necessary to justify the term ‘thermodynamic cycle’).
Section 4 explains why CAs that are not physically universal may admit
thermodynamic cycles in our sense. This is because they admit initializations
of a finite region that ensure the implementation of endless repetitions of
the same control operation. Section 5 explains why this simple construction is
impossible in physically universal CAs and shows that Schaeffer’s CA does not
admit sublinear growth. Whether any physically universal CA admits sublinear growth is left as a question for future work.
## 2 Physical universality: informal description and possible consequences
### 2.1 Physically universal systems as consistent models of control
Ref. [19] introduces the notion of physical universality for three types of
systems:
(1) translationally invariant finite-range interaction Hamiltonians on infinite spin lattices,
(2) quantum cellular automata, and
(3) classical cellular automata.
While (1) is the model that is closest to physics, (2) and (3) describe
increasing abstractions that are useful for our purpose. Essentially, (2) is
just the discrete time version of (1). We will restrict the attention to (3)
because it turns out that the problem is already difficult enough for this
case.
On an abstract level, the definition of physical universality coincides for
all three cases: a system is called physically universal if every desired
transformation on any desired target region (of arbitrary but finite size) can
be implemented by first initializing the (infinite) complement of that region
to an appropriate state and then letting the system evolve according to its
autonomous dynamics for a certain ‘waiting time’ $t$. For the cases (2) and
(3), $t$ is a positive integer while it is a positive real number for the case
(1). Since, for cases (1) and (2), which refer to quantum systems, the set of possible transformations (completely positive trace-preserving maps) is uncountably infinite, we only demand that one can get arbitrarily close to the desired transformation via appropriate initializations and waiting times, instead of demanding that the desired transformation be implemented exactly.
#### Shifting the boundary between target and controller
Physically universal systems are intriguing because they provide a model class
where every physical degree of freedom is indirectly accessible by operating
on the remaining degrees of freedom in the ‘world’ and then letting the joint
system evolve. In other words, the complement of the target region acts as the
controller of the target region so that any part of the world can become the
controller or the system to be controlled. This is in contrast to some
physical models of computation, e.g., [17], for which data and program
registers are represented by different types of physical degrees of freedom.
These systems are able to perform any desired transformation on the data
register by appropriate initialization of the program register. The question
of how to act on the program register cannot be addressed within the model. In
physically universal systems, on the other hand, the preparation of any region
can be achieved by operating on its complement. This reduces the question of
how to act on some target region to the question of how to act on some
‘controller’ region around it. In turn, this controller region can be prepared
by acting on some ‘meta-controller’ region around it. Although this does not
solve the problem it shows at least that the boundary between controller and
target region can be arbitrarily shifted.
#### Analogy to the quantum measurement problem
This is similar to the quantum measurement problem where the boundary between
the measurement apparatus and the quantum system to be measured (the famous
‘Heisenberg cut’) can be arbitrarily shifted as long as the quantum
description is considered appropriate: the transition from a pure
superposition to the corresponding mixture can be explained by entanglement
between the target system and its measurement apparatus [22] (for simplicity,
one may define ‘measurement apparatus’ as all parts of the environment that
carry information about the result). The resulting joint superposition of
measurement apparatus and target system can be transferred to a mixture by
entanglement with a ‘meta’ measurement apparatus and so on.
### 2.2 Potential thermodynamic implications
Physical universality can have important thermodynamic consequences because it
excludes the ability to completely protect information. Physical universality means that any system can be controlled by its surrounding. Therefore, the unknown state of the surrounding will eventually cause the state of the system to change. In contrast, in systems such as [17] the state of the program register never changes during the autonomous evolution because of the strict separation between data and program registers. Here, we don’t want to
accept the latter class of models as physical models of computation because in
the real world also program registers are physical systems that can be somehow
accessed by actions on their environment. In other words, the information of
the ‘program’ register is only safe because the model fails to describe how to
act on that part of the system using the given interactions (these actions are
external to the theory).
#### Trade-off between stability and controllability
Physical universality thus gives rise to a thermodynamics in which the
inability to protect information is a result of the ability to control every
degree of freedom. On the one hand, the target system needs to interact with its environment, since otherwise we would not be able to control it. On the other hand, this interaction lets entropy leak from the surrounding into the target
system. Ref. [19] defines the model class of physically universal systems for
the purpose of studying this conflict on an abstract level. Here, we restrict
the attention to discrete time dynamics on classical cellular automata. In the
long run, one should certainly address our thermodynamic questions using
continuous time dynamics on quantum systems. As a first approach, however, it
is convenient to simplify the problem by restricting oneself to classical CAs.
Another reason for considering classical CAs is also to make this problem more
accessible to the computer science community.555Note, further, that already
von Neumann’s self-reproducing automata [23] follow the principle of studying physical or biological universality properties using CAs. After all, it is one
of the lessons learned from quantum information theory [24] that translating
physics into computer scientific language can provide a new perspective and
new paradigms. Indeed, the past two decades have shown that understanding
thermodynamics via computer scientific models is also promising.666For
instance, the principle of cooling devices [25, 26] and heat engines [27] can
be illustrated using an $n$-bit register represented by $n$ two-level systems
or other simple discrete systems. For this model class, the relation between
physics and information is most obvious. On the microscopic level one can
hardly tell apart computing devices from thermodynamic machines in the
conventional sense.777See also the adaptive heat engine in Ref. [28]. As part
of this oversimplification, we will define the thermodynamic cost of an
operation simply by the size of the region in the surrounding of the target
system that needs to be initialized. This will be partly justified in Section
3.2.
## 3 The formal setting
### 3.1 Notation and terminology
For the basic notation we mainly follow [29]. The cells of our CA in $d$
dimensions are located at lattice points in $\Omega:={\mathbb{Z}}^{d}$. The
state of each cell is given by an element of the alphabet $\Sigma$. For any
subset $X\subset\Omega$, a configuration $\gamma_{X}$ of $X$ is a map
$X\to\Sigma$. Let $\Sigma^{X}$ denote the set of all configurations of $X$.
The dynamics of the CA is given by a map
$\alpha:\Sigma^{\Omega}\to\Sigma^{\Omega}$ that is local (i.e. the state of
each cell is only influenced by the state of cells in a fixed neighborhood)
and spatially homogeneous (i.e., it commutes with all lattice translations).
Later, we will often consider a class of CAs in dimension $d=2$ where the
state of a cell one time step later only depends on the state of the cell
itself and its $8$ surrounding neighbors, the so-called Moore neighborhood,
and refer to this class as ‘Moore CAs’.
If $\gamma^{\prime}:=\alpha(\gamma)$ for any $\gamma\in\Sigma^{\Omega}$, we
also write $\gamma\to\gamma^{\prime}$ to indicate that the configuration
$\gamma$ evolves to $\gamma^{\prime}$ in one time step and
$\gamma\stackrel{{\scriptstyle n}}{{\to}}\gamma^{\prime}=\alpha^{n}(\gamma)$
means that $\gamma$ evolves to $\gamma^{\prime}$ in $n$ time steps.
###### Definition 1 (implementing a function).
Let $X,Y\subset\Omega$ be finite sets and $f:\Sigma^{X}\to\Sigma^{Y}$ be an
arbitrary function. Then we say a configuration $\phi\in\Sigma^{\bar{X}}$
implements $f$ in time $t$ if for every $x\in\Sigma^{X}$
$\phi\oplus x\stackrel{{\scriptstyle t}}{{\mapsto}}\psi_{x}\oplus f(x),$
holds for some $\psi_{x}\in\Sigma^{\bar{Y}}$. Here, the sign $\oplus$ denotes
merging configurations of disjoint regions to a configuration of the union.
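As a toy illustration of Definition 1, the following sketch checks the defining property on a small cyclic window, which is only a finite stand-in for the infinite lattice; the shift CA and the helper names are our own choices, not part of the formal setting.

```python
import itertools

def shift_step(config):
    """Toy 1D CA on a cyclic window: alpha(gamma)(i) = gamma(i-1)."""
    n = len(config)
    return [config[(i - 1) % n] for i in range(n)]

def implements(phi, X, Y, f, t, alphabet=(0, 1)):
    """Check the defining property: for every content x of X, the configuration
    phi (given on the whole window; its entries on X are overwritten by the
    merge) must show f(x) on Y after t steps."""
    for x in itertools.product(alphabet, repeat=len(X)):
        config = list(phi)
        for cell, value in zip(X, x):       # the merge phi (+) x of Definition 1
            config[cell] = value
        for _ in range(t):
            config = shift_step(config)
        if tuple(config[cell] for cell in Y) != f(x):
            return False
    return True

# With X = [0] and Y = [1], the shift implements the identity in one
# time step, whatever the complement is initialized to.
print(implements([0] * 8, X=[0], Y=[1], f=lambda x: x, t=1))   # True
```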
For physical universality, we follow Schaeffer’s modified definition [29],
which is equivalent to our original one, and also his definition of
efficiently physically universal:
###### Definition 2 (physical universality).
We say a cellular automaton is physically universal if for all finite regions
$X,Y$ and all transformations $f:\Sigma^{X}\to\Sigma^{Y}$, there exists a
configuration $\phi$ of the complement of $X$ and a natural number
$t\in{\mathbb{N}}$ such that $\phi$ implements $f$ in time $t$.
We say the CA is efficiently physically universal if the implementation runs
in time $t_{0}$, where $t_{0}$ is polynomial in
$\bullet$ the diameter of $X$ (i.e., the width of the smallest hypercube
containing the set) and diameter of $Y$,
$\bullet$ the distance between $X$ and $Y$, and
$\bullet$ the computational complexity of $f$ under some appropriate model of
computation (e.g., the number of logical gates in a circuit for $f$).
For simplicity, we will often consider only the case $Y=X$. Since every signal in our CA propagates by only one site per time step, at most a margin of
thickness $t$ around $X$ matters for what happens after $t$ time steps.
Depending on the dynamical law and the desired operation on the target region, the relevant part of the state can be significantly smaller. To explore the resource requirements of an ‘implementation’, we phrase the notion of an implementation formally in a way that is explicit about which of the surrounding cells matter to achieve the desired operation:
###### Definition 3 (device for implementing $f$).
A device for implementing $f:\Sigma^{X}\rightarrow\Sigma^{Y}$ is a triple
$(Z,\phi_{Z},t)$ such that $\phi_{Z}\oplus\phi^{\prime}$ implements $f$ in $t$
time steps for all $\phi^{\prime}\in\Sigma^{\bar{Z}\cap\bar{X}}$. Here, $X$
and $Y$ are called the ‘source region’ and ‘target region’, respectively, and
$Z\subset\bar{X}$ is called the ‘relevant region’, $\phi_{Z}\in\Sigma^{Z}$ the
state of this region, and $t\in{\mathbb{N}}$ the ‘implementation time’. Then,
the ‘size’ of the device is the size of $W:=Z\cup X\cup Y$. The ‘range’ of the
device is the side length of the smallest $d$-dimensional hypercube containing
$W$.
Note that the relevant region may overlap with the target region while it
needs to be disjoint from the source region. Further, note that the definition
of a device does not imply that the relevant region has been chosen in a
minimal way. Accordingly, future theorems on the resource requirements of
implementations may read ‘the relevant region consists of at least … cells’.
The range can be seen as the size of the apparatus. Assume, for instance, that
$W$ consists of a small number $n$ of single cells spread over a hypercube of
side length $k\gg n$. Then we would still call this a ‘large’ apparatus even
if $n$ is small.
So far, we have only considered the ability to implement one specific
transformation once. We also want to be able to study processes where one
desired operation is performed after time $t_{1}$, a second one after time
$t_{2}+t_{1}$, and so on. Assume, for instance, that we want to achieve that
the information content of a certain cell $c_{1}\in\Omega$ is shifted to cell
$c_{2}$ after some time $t_{1}$ and then shifted to cell $c_{3}$ at some later
time $t_{2}+t_{1}$. Then the entire process should be performed by one initialization rather than demanding that the system be re-prepared after each transformation. To this end, we define devices for implementing concatenations of transformations as a generalization of Definition 3:
###### Definition 4 (device for implementing a sequence of transformations).
Let $X_{1},\dots,X_{n+1}$ be finite regions and $f_{1},\dots,f_{n}$ be
functions with $f_{j}:\Sigma^{X_{j}}\to\Sigma^{X_{j+1}}$ for $j=1,\ldots,n$.
In other words, the target region of $f_{j}$ is the source region of
$f_{j+1}$. A device for implementing the sequence $f_{1},f_{2},\dots,f_{n}$ is an $(n+2)$-tuple $(Z,\phi_{Z},t_{1},\dots,t_{n})$ with $t_{j}>0$, where
$Z\subset\bar{X}_{1}$ is called the ‘relevant region’ and
$\phi_{Z}\in\Sigma^{Z}$ is a configuration such that
$\phi_{Z}\oplus\phi^{\prime}$ implements $f_{j}\circ f_{j-1}\circ\cdots\circ
f_{1}$ in $\sum_{i=1}^{j}t_{i}$ time steps for all
$\phi^{\prime}\in\Sigma^{\bar{Z}\cap\bar{X}_{1}}$. The size of the device is
the size of $W:=Z\cup(\cup_{j=1}^{n+1}X_{j})$ and its range is the side length
of the smallest $d$-dimensional hypercube containing $W$.
The idea of Definition 4 is that the CA implements the transformation $f_{j}$
within $t_{j}$ time steps, but this interpretation can be misleading because
the Definition only specifies that the initial state $x$ is transformed into
the final state
$f_{n}(f_{n-1}(\cdots f_{1}(x)\cdots))$
if the CA is not disturbed during the entire process. This does not require,
for instance, that an external intervention that changes the state of the
region $X_{1}$ from $f_{1}(x)$ to some $y$ between step $t_{1}$ and $t_{1}+1$
yields the final state $f_{n}(f_{n-1}(\cdots f_{2}(y)\cdots))$.888Rephrased in
causal language [30], if we denote the state of $X_{j}$ at time
$\sum_{i=1}^{j}t_{i}$ by $x_{j}$, then the equation $x_{j}=f_{j}(x_{j-1})$ is not a ‘structural equation’, since the latter describes, by definition, also the impact of interventions on the input variable on the right-hand side.
A priori it is not obvious that physical universality entails the ability to implement sequences with $n>1$. The following result shows that this is the case:
###### Theorem 1 (ability to implement sequences).
In every physically universal CA there is a device for any sequence of
transformations.
###### Proof.
We provide a proof by induction on $n$. The base case $n=1$ follows from
physical universality. For the induction hypothesis assume that sequences of
$n$ arbitrary functions can be implemented.
For the induction step, let $f_{1},\ldots,f_{n},f_{n+1}$ be a sequence of
$n+1$ arbitrary transformations with
$f_{j}:\Sigma^{X_{j}}\rightarrow\Sigma^{X_{j+1}}$
for $j=1,\ldots,n+1$.
By physical universality there exists a device
$(Z_{n+1},\phi_{Z_{n+1}},t_{n+1})$ with $Z_{n+1}\subset\bar{X}_{n+1}$ that
implements the last function $f_{n+1}$ of the above sequence. Using this
device we define the following augmented version $\hat{f}_{n}$ of the second
last function $f_{n}$ of the above sequence by setting
$\hat{f}_{n}:\left\\{\begin{array}[]{ccc}\Sigma^{X_{n}}&\rightarrow&\Sigma^{X_{n+1}\cup Z_{n+1}}\\\
y&\mapsto&f_{n}(y)\oplus\phi_{Z_{n+1}}\end{array}\right.$
for all $y\in\Sigma^{X_{n}}$. In words, the output of the augmented function $\hat{f}_{n}$ consists of the output of the original function $f_{n}$ on the region $X_{n+1}$ and the constant output $\phi_{Z_{n+1}}$ on the region $Z_{n+1}$.
By induction hypothesis there exists a device
$(Z,\phi,t_{1},\ldots,t_{n-1},t_{n})$ that implements the sequence
$f_{1},\ldots,f_{n-1},\hat{f}_{n}$. The special form of the output of the
augmented function $\hat{f}_{n}$ ensures that the device
$(Z,\phi,t_{1},\ldots,t_{n-1},t_{n},t_{n+1})$ also implements the sequence
$f_{1},\ldots,f_{n-1},f_{n},f_{n+1}$. This is because after $t_{1}+\ldots+t_{n}$ time steps the output is
$\hat{f}_{n}(y)=f_{n}(y)\oplus\phi_{Z_{n+1}}\in\Sigma^{X_{n+1}\cup Z_{n+1}}\quad\mbox{where}\quad y=f_{n-1}(\ldots(f_{1}(x))\ldots)$
so that after $t_{n+1}$ additional time steps the final output is
$f_{n+1}(f_{n}(y))\in\Sigma^{X_{n+2}}$
as desired. ∎
To mention a simple example of the kind of sequences we are interested in,
consider a CA with binary alphabet $\Sigma=\\{0,1\\}$. Assume the task is to implement a NOT gate $n$ times on the same target bit. Then the desired functions read $f_{j}={\rm NOT}$ and the numbers $t_{j}$ specify the time instants at which the autonomous dynamics has implemented another NOT
gate on our target bit, given that some region $Z$ has been initialized to the
state $\phi_{Z}$.
### 3.2 Formalizing ‘thermodynamic cost’ of operations
Here we will consider the size of the relevant region as the thermodynamic
cost of an implementation. This first approximation is justified by the
following idea: a priori, the state of each cell is unknown, i.e., we assume
uniform distribution over $\Sigma$. According to Landauer’s principle it then
requires the energy $kT\ln|\Sigma|$ to initialize one cell to the desired
state. This way, the thermodynamic cost of the initialization process is
simply proportional to the number of cells to be initialized. This view will
be further discussed at the end of this subsection.
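A back-of-the-envelope sketch of this accounting; the temperature, the binary alphabet, and the growth law $|Z_{n}|\sim 4n$ are arbitrary illustrative assumptions, not taken from the paper:

```python
import math

k = 1.380649e-23   # Boltzmann's constant in J/K

def init_cost(num_cells, alphabet_size, T=300.0):
    """Landauer cost kT ln|Sigma| per cell, hence |Z| * kT ln|Sigma| in total."""
    return num_cells * k * T * math.log(alphabet_size)

# Cost per operation for a device family with |Z_n| ~ 4n (cf. the beam in
# Section 5): the ratio stays constant, i.e., the cost per operation is nonzero.
for n in (10, 100, 1000):
    print(n, init_cost(4 * n, 2) / n)
```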
Note that the size of the relevant region can grow at most as $O(t^{d})$, where $t$ is the running time of an implementation, since a signal can only proceed by a constant number of cells per time step. Therefore, the thermodynamic cost scales only polynomially in the computational complexity if a CA is efficiently physically universal. This statement, however, is too weak for our purpose. To
phrase the main questions of this paper (which look for stronger statements)
we need the following terminology:
###### Definition 5 (zero cost per operation).
Given a function $f:\Sigma^{X}\to\Sigma^{X}$, a physically universal CA is said to admit the implementation of $f$ at zero cost per operation if, for every $n\in{\mathbb{N}}$, there is a device $(Z_{n},\phi_{Z_{n}},t_{1},\dots,t_{n})$ implementing the $n$-fold sequence $f,f,\dots,f$, such that
$\lim_{n\to\infty}\frac{|Z_{n}|}{n}=0.$
Note that this definition does not require that the implementation of $f$
stops after the time $t_{n}$. Likewise, we define:
###### Definition 6 (zero cost of information storage per time).
For some region $X$, a physically universal CA is said to admit zero cost of
information storage per time on $X$ if there are devices
$(Z_{n},\phi_{Z_{n}},t_{n})$ for every $n\in{\mathbb{N}}$ with
$t_{n}\to\infty$ that implement the identity on $X$ after the time $t_{n}$
such that
$\lim_{n\to\infty}\frac{|Z_{n}|}{t_{n}}=0.$
We are now able to phrase our main questions:
* •
Question 1: Does there exist a physically universal CA that admits zero cost
per operation for any / for all functions $f$?
* •
Question 2: Does there exist a physically universal CA that admits zero cost
for information storage per time for any / for all finite regions $X$?
If we recall that the state of the CA may also encode the presence or absence
of matter, our definition of implementation cost also includes the aspect of
hardware deterioration. If one has built some microscopic control device that degrades after performing an operation some large number $n_{0}$ of times, then a device for implementing the operation $n>n_{0}$ times includes a ‘meta’ device repairing the original one.999Thermodynamic considerations that
account also for reproduction processes are certainly related to
thermodynamics of life [31].
On the one hand, we will show that for Schaeffer’s CA [29] the answers to both questions above are negative. On the other hand, we will show that there exist physically non-universal CAs for which both answers are positive. We leave it as an open question whether physical universality precludes the ability to
achieve zero cost. However, we give some intuitive arguments that suggest that
physical universality makes it at least more difficult to achieve zero
implementation cost per operation or zero cost for information storage per
time.
#### Discussion of the above formalization of thermodynamic cost
It is certainly an oversimplification to identify the size of the region that
needs to be initialized with the thermodynamic cost of an implementation.
Consider, for instance, a physical many particle system where each cell is a
physical system that is weakly interacting with its neighbors. This ensures
that the total energy of the composed system is approximately given by the sum
of the energy of the individual systems. Assume, furthermore, that the state
$0\in\Sigma$ corresponds to the ground state, that is, the state of lowest
energy. In the limit of low temperatures, this state has probability close to
$1$, which implies that initializing the lattice to the all-zero state does
not require significant free energy resources. In this case, however, it
requires significant free energy resources to set a cell to any state other
than $0$ and the resource requirement then depends on the number of cells that
need to be in a non-zero state (which may correspond to the number of
particles in physics).
On the other hand, identifying the number of cells to be initialized with the thermodynamic cost can also be justified from the following point of view: assume we are not interested in the amount of free energy that is required for one specific transformation. Instead, we only ask whether the amount increases sublinearly or not. Assuming, in the above physical picture, non-zero temperature (although it may still be low, which favors the state $0$), initializing $n$ cells to $0$ with certainty still requires an amount of free energy of order $n$. This way, the asymptotic behavior of resource
requirements is unaffected by the details of the physical hardware
assumptions.
## 4 Cost of operations in Turing complete CAs
As a simple toy example, we consider the control task of repeatedly turning on
and off a target bit without ever stopping. Intuitively, this process already
reminds us of a program with an infinite loop:
###### Example 1 (infinite bit switching).
$a:=0$
while $(\,1\,)$ do
$a:=1\oplus a$ // bit XOR
end while
Every Turing-complete CA is capable of implementing the above program. We now
explain briefly the notion of Turing-complete CAs. A CA is called Turing-
complete if there exists a finite configuration that allows the CA to simulate
any universal Turing machine, where the concepts of ‘finite configuration’ and
‘halting’ are defined as follows. ‘Finite configuration’ means that only
finitely many cells are in a non-zero state, where a single element of the
alphabet $\Sigma$ is chosen to be zero, denoted by $0$. ‘Halting’ is defined
as the event of a single previously selected cell becoming non-zero.
It is important to observe that finite configuration does not imply finite
resources in our sense. ‘Finite configuration’ means that all but a finite
number of cells are in the zero state, whereas ‘finite resources’ means that
all but a finite number of cells are in an unknown state.
Consider the following situation: the simulation of a universal Turing
machine by a CA could require that all but a finite number of cells be zero
because otherwise the non-zero cells would eventually perturb the simulation.
This would mean infinite resources in our sense. However, as long as we do not
demand physical universality, we can easily modify Turing complete CAs such
that they are able to implement an infinite loop with finite resources, as
will be discussed in the following two subsections.
### 4.1 Conway’s Game of Life
We first consider the implementation of our target operation ‘infinite bit
switch’ in a well-known cellular automaton, namely Conway’s Game of Life. It
is a CA in two dimensions, each cell being ‘alive’ or ‘dead’, i.e., formally
each cell is just one bit. The rules are [32]:
(1) Any live cell with fewer than two live neighbours dies, as if caused by
under-population.
(2) Any live cell with two or three live neighbours lives on to the next
generation.
(3) Any live cell with more than three live neighbours dies, as if by over-
population.
(4) Any dead cell with exactly three live neighbours becomes a live cell, as
if by reproduction.
To implement the bit flip, as desired, we find simple oscillating patterns in
[32]: The ‘Blinker’ has period 2, as shown in Figure 1.
Figure 1: A simple configuration in Conway’s Game of Life that yields a
dynamical behavior with period $2$. The system changes between the two
configurations on the left and the right hand side, respectively. ‘Alive’ and
‘dead’ cells are indicated by gray and white, respectively.
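For concreteness, here is a standard implementation of rules (1)-(4) on a small torus (a finite stand-in for the infinite lattice, with an empty rather than unknown surrounding), verifying the period-$2$ behavior of the Blinker:

```python
import numpy as np

def life_step(grid):
    """One step of rules (1)-(4) on a toroidal grid (1 = alive, 0 = dead)."""
    nbrs = sum(np.roll(np.roll(grid, di, 0), dj, 1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0))
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(np.uint8)

grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1                              # horizontal Blinker
twice = life_step(life_step(grid))
print(np.array_equal(grid, twice))            # True: period 2, as in Figure 1
```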
We now focus on the space requirements of this 2-cycle and recall that space
requirements in our sense refer to the amount of space that needs to be
initialized to a specific value. For the Blinker to work, it is essential that
there are no ‘particles’ in the direct neighborhood that disturb the patterns.
Whenever there is a region outside which the state is not known at all, this
complementary region contains with some probability a pattern that moves
towards the blinker and disturbs its cycle. It is therefore possible that, without having some control over the entire space, we cannot guarantee that the blinker works forever.
### 4.2 Modified Game of Life with impenetrable walls
There is, however, a simple modification of the Game of Life for which we can
ensure that the blinker works forever although we only control the state of a
finite region. To this end, we augment each cell with an additional third state, the ‘brick’ $\blacksquare$, indicated by black color, which blocks the diffusion from the surrounding. The transition rule of the new CA now consists of the following rules:
(0) a cell in the state $\blacksquare$ remains in it forever; (1)-(4) as before, with the convention that the brick $\blacksquare$ counts as $\square$ (dead) for its neighbors.
The idea of bricks is that they can form a ‘wall’ around our blinker that
protects it from the influence of its surrounding (which can be in an unknown
state). In physical terms, the wall protects the blinker from the heat of the
environment, as shown in Figure 2.
Figure 2: The blinker surrounded by a wall of ‘bricks’, which protect it from
uncontrolled perturbations from its environment.
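A sketch of the modified dynamics, with our own state encoding ($0$ dead, $1$ alive, $2$ brick); it checks that a walled blinker is unaffected by a random ‘unknown’ surrounding:

```python
import numpy as np

BRICK = 2   # our own encoding: 0 = dead, 1 = alive, 2 = brick

def walled_life_step(grid):
    """Rules (0)-(4): bricks never change and count as dead for their neighbors."""
    alive = (grid == 1).astype(np.uint8)
    nbrs = sum(np.roll(np.roll(alive, di, 0), dj, 1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0))
    nxt = ((nbrs == 3) | ((alive == 1) & (nbrs == 2))).astype(np.uint8)
    nxt[grid == BRICK] = BRICK                 # rule (0): bricks are forever
    return nxt

rng = np.random.default_rng(0)
g = (rng.random((11, 11)) < 0.3).astype(np.uint8)       # unknown surrounding
g[2:9, 2:9] = 0                                         # cleared interior
g[2, 2:9] = g[8, 2:9] = g[2:9, 2] = g[2:9, 8] = BRICK   # the wall of Figure 2
g[5, 4:7] = 1                                           # protected blinker
h = walled_life_step(walled_life_step(g))
print(np.array_equal(g[3:8, 3:8], h[3:8, 3:8]))         # True: interior unperturbed
```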
### 4.3 Reversible CA: Margolus’ billiard ball model
To get one step closer to physics and account for the bijectivity of
microscopic dynamics in the physical world, we now consider reversible CAs,
i.e., CAs in which every state has a unique predecessor, which is not the case
for Game of Life. We now show that even reversible CAs exist that admit
perfect protection of an implementation of an infinite loop, which results in
zero cost per operation.
Margolus’ billiard ball model CA [33] is a CA in $2$ dimensions whose update rules are defined on Margolus neighborhoods, i.e., there are two partitions of the grid into blocks of $2\times 2$ cells describing the updates at even and odd time instants: at even time instants, the update is done on the blocks $\\{(2i,2j),(2i,2j+1),(2i+1,2j),(2i+1,2j+1)\\}$, at odd times it is done on the blocks $\\{(2i-1,2j-1),(2i-1,2j),(2i,2j-1),(2i,2j)\\}$, as visualized by
the black and the red grid in Figure 3, right. For each such block, the update
rules are shown in Figure 3, left. To interpret such a CA with Margolus
neighborhood as a so-called Moore CA where the update rules do not change
between even and odd time steps (see Subsection 3.1), we consider two time
steps in the Margolus CA as one time step of a Moore CA. To ensure that the
update of a cell of the Moore CA only depends on its surrounding neighbors
(which is convenient for some purposes) one may consider each $2\times 2$
block of the Margolus CA as one cell of the Moore CA.
Figure 3: Left: Transition rules of Margolus’ billiard ball model CA. Right:
the two different partitions are indicated by the black and the red grid.
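The partition scheme can be implemented generically as follows. Since the full rule tables of Figure 3 (and of Figure 5 below) are not reproduced here, the block rule in this sketch is only a stand-in capturing one feature used later, namely free diagonal propagation of single particles:

```python
import numpy as np

def margolus_step(grid, block_rule, odd):
    """Apply a 2x2 block rule on the even (offset 0) or odd (offset 1) partition."""
    g = np.roll(grid, (-1, -1), axis=(0, 1)) if odd else grid.copy()
    for i in range(0, g.shape[0], 2):
        for j in range(0, g.shape[1], 2):
            g[i:i+2, j:j+2] = block_rule(g[i:i+2, j:j+2])
    return np.roll(g, (1, 1), axis=(0, 1)) if odd else g

def lone_particle_rule(block):
    """Stand-in rule: a single particle jumps to the opposite corner of its
    block; all other blocks are left unchanged."""
    return block[::-1, ::-1].copy() if block.sum() == 1 else block

g = np.zeros((8, 8), dtype=np.uint8)
g[0, 0] = 1
for t in range(3):                        # alternate even and odd partitions
    g = margolus_step(g, lone_particle_rule, odd=(t % 2 == 1))
print(np.argwhere(g == 1))                # [[3 3]]: free diagonal propagation
```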
As noted in [29], the billiard ball CA is not physically universal since it
allows for impenetrable walls [33]. We will use such walls to implement a bit
switching process that continues forever although only a finite region has
been initialized. A simple example is shown in Figure 4. In the sense of the
present paper, this CA implements the NOT operation in a thermodynamic cycle
since there are no resource requirements per operation because there is no
need to initialize the cells outside the wall.
Figure 4: Configuration of Margolus billiard ball CA that implements bit
switching forever: applying Rule (3) to the black partitioning takes the
configuration on the left hand side to the one on the right hand side. Then,
an update according to the red partitioning leaves the state unchanged due to
Rule (5). Applying Rule (3) to the black partitioning takes the configuration
on the right hand side back to the one on the left hand side. Again, updating
according to the red partitioning has no effect.
## 5 Cost of operations in physically universal CAs
### 5.1 Schaeffer’s physically universal CA
Schaeffer [29], see also [34], constructed an efficiently physically universal
CA that is close to Margolus’ billiard ball model CA. The update rules are
shown in Figure 5. Here, physical universality refers to the Moore CA whose
update rule consists of two time steps of the Margolus CA (following the
remarks in Subsection 4.3).
Figure 5: Transition rules of Schaeffer’s physically universal CA. Further
rules are given by rotation invariance.
We now discuss a rather primitive way of implementing our bit switching task in Schaeffer’s CA. Its resource requirements grow at least linearly in
$n$, which at first appears to be suboptimal. Yet, we will later show that
linear growth is optimal. We first observe that the CA admits free particle
propagation in diagonal direction, a fact that is heavily used in the proof
for physical universality [29]. Figure 6 visualizes this motion.
Figure 6: Free particle propagation in Schaeffer’s physically universal CA:
the configuration on the left turns into the one in the middle by applying
Rule (2) to the red partitioning. The middle configuration turns into the
right one by applying the same rule to the black partitioning.
We now use a ‘beam of particles’ in diagonal direction in which a particle and
a hole alternate, as shown in Figure 7.
Figure 7: Beam of $3$ propagating particles which implement the turning on
and off of the blue target cell $3$ times.
Then choose a target bit along the diagonal, as indicated by the blue square
in Figure 7. Just by waiting, this bit is turned on and off as particles and holes appear, respectively. The resource requirements of this implementation are large: not only does it require correctly locating particles and holes, it also requires keeping the space around the beam empty to protect the beam
from collisions.
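A rough count of the initialization cost of this beam construction; the protective margin width and the constants are illustrative assumptions, not taken from [29]:

```python
def beam_device_cells(n, margin=2):
    """Rough cell count for the beam of Figure 7: about 2n cells along the
    diagonal (particle/hole alternation) plus an empty protective strip of
    the given width on both sides of the beam."""
    return 2 * n + 2 * margin * 2 * n

for n in (10, 100, 1000):
    print(n, beam_device_cells(n) / n)   # constant ratio: linear growth in n
```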
###### Remark 1 (complexity aspect of preparation).
Apart from being costly from the thermodynamic point of view, the
implementation is also ‘not nice’ in other respects: compared to the
simplicity of our control problem, the initialization is rather complex.
Assume, for comparison, the following general control task: given some
arbitrary binary string $b$ of length $2n$, the target bit is supposed to
attain the value $b_{j}$ at time $j$. Then, the above beam solves this task for the special case where $b=101010\cdots 10$. The general task can obviously be
solved by the same procedure as above: just locate particles and holes
according to $b$. The fact that the solution of the simple special case is
based on the same principle suggests that it is a ‘bad’ solution; it is
inappropriately complex compared to the simplicity of the task. In a way, it
reduces a simple control operation to one that seems more complex. This raises
the question of what one wants to call a ‘solution’ of a control task.
To return to the thermodynamic question, one may wonder if there exist smarter
implementations of the bit switch process where the resource requirements do
not grow linearly in $n$. We can easily show that the range of the
implementation of the $n$-fold bit switch grows linearly in $n$. To this end,
we first need the Diffusion Theorem of [29]:
###### Theorem 2 (Diffusion Theorem).
Let $S\subset{\mathbb{Z}}^{2}$ be an arbitrary square of side length $s$ in
the Moore CA and $\phi$ an arbitrary configuration that is empty on $\bar{S}$.
Then $\alpha^{t}(\phi)$ is empty on $S$ for all $t\geq s$.
We then have:
###### Theorem 3 (range of device restoring a region after $t$ time steps).
Let $f:\Sigma^{X}\rightarrow\Sigma^{X}$ denote an arbitrary bijection for some
region $X\subset{\mathbb{Z}}^{2}$. Assume that $(Z,\phi_{Z},t)$ is a device
for implementing $f$. Then its range is at least $t$.
###### Proof.
Let $S$ be the smallest square containing $W=Z\cup X$ and $s$ its side length.
We claim that $s\geq t$. Assume to the contrary that $s\leq t-1$. Then by the Diffusion Theorem any configuration $\phi$ that is empty on $\bar{S}$ evolves in $t$ time steps to a configuration that is empty on $S$. This contradicts the assumption that there is a configuration $\phi_{Z}$ that implements a bijection on $X$. ∎
An important special case of the above theorem is when $f$ is the identity
function ${\rm ID}$. Moreover, we have:
###### Theorem 4 (range of device implementing $n$ powers of a
transformation).
Let $f:\Sigma^{X}\to\Sigma^{X}$ be an arbitrary bijection for some region
$X\subset{\mathbb{Z}}^{2}$. Assume that $(Z,\phi_{Z},t_{1},t_{2},\dots,t_{n})$ is a device for implementing the sequence $f,f,\dots,f$. Then its range is
at least $n$.
###### Proof.
The proof is very similar to the proof of Theorem 3. If $W=Z\cup X$ were contained in a square of side length $s\leq t_{n}-1$, the configuration after $t_{n}$ time steps would be empty on $X$. Thus $s\geq t_{n}$. The result follows since we must have $t_{n}\geq n$ (the $t_{j}$ are positive integers). ∎
###### Remark 2 (resource requirements for 1D physically universal CA).
We make some comments on the resource requirements of the one-dimensional physically universal CA in [35]. This CA uses interacting particles that propagate with different speeds, namely $\pm 1$ or $\pm 2$ sites per time step. Similar results to Theorem 3 and Theorem 4 hold for this CA as well.
[35, Lemma 2] is also a kind of diffusion theorem similar to Theorem 2. We
reformulate its statements slightly. Let $S$ be an interval of length $s$ and
$\phi$ a configuration that is empty on $\bar{S}$. Then, after $t(s)\in O(s)$
time steps all configurations $\phi^{\prime}$ that arise from $\phi$ under the
autonomous time evolution are empty on $S$. It is convenient to rephrase
$t(s)\in O(s)$ as follows: there exist two constants $s_{0}$ and $\kappa$ such that $t(s)\leq\kappa s$ for all $s\geq s_{0}$.
Using the same arguments but now with the one-dimensional diffusion theorem,
we may conclude that for the one-dimensional CA the ranges must be at least
$t/\kappa$ and $n/\kappa$ in Theorem 3 and Theorem 4, respectively, provided
that $X$ is sufficiently large. The latter condition on $X$ is necessary
because the diffusion theorem only applies for intervals of length at least
$s_{0}$.
The range is a rather crude measure of the resource requirements. A finer
measure is the size, that is, the number of cells of the relevant region. We focus on the elementary control task of restoring a bit $n$ times and derive a lower bound on the size of the corresponding device.
###### Theorem 5 (size of device restoring a bit $n$ times).
Let ${\rm ID}$ denote the identity on some cell of ${\mathbb{Z}}^{2}$ in the
Moore CA corresponding to Schaeffer’s construction. Assume that
$(Z,\phi_{Z},t_{1},t_{2},\dots,t_{n})$ is a device for implementing ${\rm
ID},{\rm ID},\dots,{\rm ID}$. Then $Z$ contains at least $n/4-1$ cells (also
counted in the Moore CA).
###### Proof.
Below, the term ‘cell’ refers to a cell in the Margolus CA (containing just
one bit), not the $2\times 2$ block defining the cell of the corresponding
Moore CA. Let $X$ denote the source/target $2\times 2$ block. Since $Z$
consists, by definition, of cells of the Moore CA, it consists only of
complete $2\times 2$ blocks in the Margolus CA.
We now rely on the techniques developed in the proof of Theorem 4 in [29]. We
also consider an ‘abstract’ CA that consists of three states $\\{0,1,\top\\}$,
where $\top$ denotes a ‘wild card’ that stands for an uncertain state. The
purpose of the abstract CA is merely to keep track of how uncertain states
propagate in the concrete CA. Ref. [29] describes a rather simple set of
update rules for the abstract CA, whose details are not needed. The essential
observation that we adopt is that $\top$ particles exhibit free particle
propagation as long as certain ‘forbidden’ patterns and their rotated versions
do not occur; in the figures of [29] depicting these patterns, a grey box
indicates the $\top$ state, which stands for either the $0$ state (white) or
the $1$ state (black).
It is important that these forbidden patterns will never occur during the
dynamical evolution of the abstract CA if the initial configuration does not
contain any forbidden patterns [29].
First, we assign $\top$ to all cells in $W:=Z\cup X$, representing the fact that
their states are unknown or could be arbitrary (because we do not know what
the correct $\phi_{Z}$ looks like and the source block $X$ could also be in
any state). Second, we assign $0$ to all cells in the complement of $W$. This
is possible because cells outside $W$ do not matter for the correct
implementation.
This way, the forbidden patterns do not occur in the initial configuration and
the dynamics of the abstract CA can be described by free propagation of
$\top$-particles: each $\top$-particle moves to the diagonally opposite cell,
that is, in either the NE, NW, SE, or SW direction. Consequently, any cell can
attain the state $\top$, and in particular the state $1$, at most $4|W|$ times.
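To make the counting argument concrete, the following toy simulation (our own
illustration in Python, not Schaeffer’s actual update rules) models the free
propagation phase: every uncertain cell emits one particle per diagonal
direction, each particle keeps a fixed velocity, and a particle with fixed
diagonal velocity crosses any given cell at most once, which yields the
$4|W|$ bound.

```python
# Toy model of free diagonal propagation in the abstract CA (illustration
# only; the real construction ties directions to the Margolus block phase).
DIRECTIONS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]  # NE, SE, NW, SW

def count_uncertain_steps(start_cells, target, t_max):
    """Count the time steps at which `target` is occupied by some particle,
    assuming every start cell emits one particle per diagonal direction."""
    particles = [(x, y, dx, dy) for (x, y) in start_cells
                 for (dx, dy) in DIRECTIONS]
    visits = 0
    for t in range(1, t_max + 1):
        occupied = {(x + t * dx, y + t * dy) for (x, y, dx, dy) in particles}
        if target in occupied:
            visits += 1
    return visits

W = [(0, 0), (0, 1), (1, 0), (1, 1)]  # a small region of uncertain cells
assert count_uncertain_steps(W, target=(5, 5), t_max=100) <= 4 * len(W)
```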
Assume one of the cells in the source region $X$ is in the state $1$ at $t=0$.
Consequently, it must be in the state $1$ at least $n$ times during the
interval $1,2,\ldots,t_{n}$ to ensure the correct implementation of the
$n$-fold repetition of ${\rm ID}$. Combining these two arguments, we conclude
that $W$ consists of at least $n$ cells of the Margolus CA. Hence, it consists
of at least $n/4$ cells of the Moore CA. Since $Z$ differs from $W$ by only one
cell (the block $X$), we finally obtain the lower bound $|Z|\geq n/4-1$. ∎
Theorem 5 can easily be applied to our task of $n$-fold NOT since the latter
amounts to implementing the identity for all $t_{j}$ with even $j$.
It is unclear whether some of these insights apply to a general physically
universal CA. The question of whether there exist physically universal CAs that
do not satisfy the Diffusion Theorem has already been raised by Schaeffer
[29]; it seems related to our thermodynamic questions, since diffusion is
what makes information so extremely unstable.
It is, however, clear that in any physically universal CA a configuration of a
finite region is unstable in the following sense:
###### Theorem 6 (instability of patterns).
For some physically universal CA, let $Z\subset{\mathbb{Z}}^{d}$ be a finite
region that is initialized to the state $\phi_{Z}$. Assume that the states of
all cells of $\bar{Z}$ are unknown and described by some probability
distribution $P$ that assigns a non-zero probability to every possible state
in $\Sigma^{\bar{Z}}$. Then, for any configuration $\phi^{\prime}_{Z}$ of $Z$
there is a time $t$ such that $\phi_{Z}$ evolves to $\phi_{Z}^{\prime}$ with
non-zero probability.
###### Proof.
Choose a function $f:\Sigma^{Z}\to\Sigma^{Z}$ with
$f(\phi_{Z})=\phi_{Z}^{\prime}$. By physical universality, there is a
configuration of the complement of $Z$ implementing $f$ for some $t$. Since
only the restriction of the configuration to a finite region matters (cells
that are farther away than $t$ sites do not matter), the set of all
configurations implementing $f$ has a non-zero probability. ∎
The absence of impenetrable walls in physically universal automata is only the
most intuitive consequence of this observation. Less obvious consequences
remain to be discovered in the future.
## 6 Conclusions
Common discussions of the thermodynamic irreversibility of operations often
focus on entropy generation, while they differ substantially with respect to
the underlying notion of entropy (e.g., Boltzmann entropy, Shannon or von
Neumann entropy, or Kolmogorov complexity [36, 37, 38]). Given these different
notions of entropy, entropy generation is explained by coarse graining [39],
because complexity also contributes to physical entropy by definition [36,
37], or because entropy leaks into the system from its environment.
Irreversibility in physically universal CAs or Hamiltonian systems is not due
to entropy production, at least not in any obvious sense. Instead, every
evolution is to some extent irreversible simply because one cannot halt it:
the autonomous time evolution of the system just continues forever. Therefore,
simulating the inverse evolution on some target system
involves sophisticated initialization of a large number of cells in the
surrounding (acting as the controller). Since this initialization is typically
destroyed by the autonomous evolution of the system, restoring the state of
the joint system of target and its controller involves a sophisticated
initialization of a ‘meta-controller’, which, in turn, will then be destroyed
by the evolution. The question of how to reverse the dynamics of one system
without disturbing the state of its surrounding thus raises the same question
for an even larger system.
The idea that control operations, even when they are unitary, imply heat
generation in the controlling device is certainly not new. However,
physically universal CAs and Hamiltonians may allow us to look at this idea
from a new perspective because they make it possible to describe the target,
controller, meta-controller, and so on, in a unified way: all of them are just
regions of cells. Moreover, physically universal CAs formalize the conflict between
controllability and isolability of a system in a principled way. This is
because physical universality, which formalizes the ability to control
subsystems, implies instability of information, although quantitative results
have to be left to the future. Here we have shown that, in the existing
constructions of physically universal cellular automata, information is
extremely unstable, for instance in the sense that the resources required for
protecting information grow linearly in time.
The intention of this article is to inspire other researchers to explore
implications of physical universality rather than exploring properties of
specific constructions of CAs. Here we have discussed properties of
Schaeffer’s construction only to illustrate how to work with our notion of
resource requirements in the context of a physically universal CA.
#### Acknowledgements:
We would like to thank Scott Aaronson and Luke Schaeffer for interesting
discussions on related questions and Armen Allahverdyan for comments on an
earlier version of this manuscript.
## References
* [1] R. Landauer. Irreversibility and heat generation in the computing process. IBM J. Res. Develop., 5:183–191, 1961.
* [2] D. Janzing, P. Wocjan, R. Zeier, R. Geiss, and Th. Beth. Thermodynamic cost of reliability and low temperatures : Tightening Landauer’s principle and the Second Law. Int. Jour. Theor. Phys., 39(12):2217–2753, 2000.
* [3] O. J. E. Maroney. Generalizing Landauer’s principle. Phys. Rev. E, 79:031105, 2009.
* [4] T. Sagawa. Thermodynamic and logical reversibilities revisited. Journal of Statistical Mechanics: Theory and Experiment, 2014(3):P03025, 2014.
* [5] D. H. Wolpert. Extending Landauer’s Bound from Bit Erasure to Arbitrary Computation. arxiv:1508.05319, 2015.
* [6] C. H. Bennett. Logical reversibility of computation. IBM J. Res. Develop., 17:525–532, 1973.
* [7] D. Janzing and Th. Beth. Are there quantum bounds on the recyclability of clock signals in low power computers? In Proceedings of the DFG-Kolloquium VIVA, Chemnitz, 2002. See also preprint arXiv:quant-ph/0202059.
* [8] D. Janzing and T. Decker. How much is a quantum controller controlled by the controlled system? Applicable Algebra in Engineering, Communication and Computing, 19(3):241–258, 2008.
* [9] D. Janzing and B. Steudel. Quantum broadcasting problem in classical low power signal processing. Phys. Rev., A(75):022309, 2007.
* [10] D. Janzing and T. Beth. Bounds on the entropy generated when timing information is extracted from microscopic systems. preprint arXiv:quant-ph/0301125, 2003.
* [11] D. Janzing and T. Beth. Synchronizing quantum clocks with classical one-way communication: Bounds on the generated entropy. preprint arXiv:quant-ph/0306023v1, 2003.
* [12] P. Erker, M. Mitchison, R. Silva, M. Woods, N. Brunner, and M. Huber. Autonomous quantum clocks: how thermodynamics limits our ability to measure time. preprint arXiv:1609.06704, 2016.
* [13] D. Janzing and T. Beth. Quasi-order of clocks and their synchronism and quantum bounds for copying timing information. IEEE Transactions Information Theory, 49(1):230–240, 2003.
* [14] S. Deffner and C. Jarzynski. Information processing and the second law of thermodynamics: An inclusive, hamiltonian approach. Phys. Rev. X, 3:041003, 2013.
* [15] J. Horowitz and M. Esposito. Thermodynamics with continuous information flow. Phys. Rev. X, 4:031015, 2014.
* [16] N. Margolus. Parallel quantum computation. In W. Zurek, editor, Complexity, Entropy, and the Physics of Information. Addison Wesley Longman, 1990.
* [17] D. Janzing and P. Wocjan. Ergodic quantum computing. Quant. Inf. Process., 4(2):129–158, 2005.
* [18] D. Janzing. Spin-1/2 particles moving on a 2D lattice with nearest-neighbor interactions can realize an autonomous quantum computer. Phys. Rev. A, 75:012307, 2007.
* [19] D. Janzing. Is there a physically universal cellular automaton or Hamiltonian? preprint arXiv:1009.1720.
* [20] R. Omnès. The interpretation of quantum mechanics. Princeton Series in Physics. Princeton University Press, 1994.
* [21] A. Adams, A. Berner, P. Davies, and S. Walker. Physical universality, state-dependent dynamical laws and open-ended novelty. Entropy, 19(461):1–20, 2017.
* [22] D. Giulini, E. Joos, C. Kiefer, J. Kupsch, I.-O. Stamatescu, and H.D. Zeh. Decoherence and the Appearance of a Classical World in Quantum Theory. Springer, Berlin, 1996.
* [23] J. von Neumann. Theory of self-reproducing automata. University of Illinois Press, Champaign, 1966.
* [24] M. Nielsen and I. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
* [25] J. Fernandez, S. Lloyd, T. Mor, and V. Roychowdhury. Algorithmic cooling of spins: A practicable method for increasing polarization. Int. Journ. Quant. Inf., 2(4):461–467, 2004.
* [26] A. Allahverdyan, K. Hovhannisyan, D. Janzing, and G. Mahler. Thermodynamic limits of dynamic cooling. Phys. Rev. E, E(48):041109, 2011.
* [27] D. Janzing. On the computational power of molecular heat engines. J. Stat. Phys., 122(3):531–556, 2006.
* [28] A. E. Allahverdyan, S. G. Babajanyan, N. H. Martirosyan, and A. V. Melkikh. Adaptive heat engine. Phys. Rev. Lett., 117:030601, 2016.
* [29] L. Schaeffer. A physically universal cellular automaton. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, ITCS ’15, pages 237–246, New York, NY, USA, 2015. ACM.
* [30] J. Pearl. Causality: Models, reasoning, and inference. Cambridge University Press, 2000.
* [31] C. P. Kempes, D. Wolpert, Z. Cohen, and J. Pérez-Mercader. The thermodynamic efficiency of computations made in cells across the range of life. Philosophical Transactions of the Royal Society of London Series A, 375:20160343, 2017.
* [32] Game of Life. Wikipedia. Accessed 22 May 2016.
* [33] N. Margolus. Physics-like models of computation. Physica D: Nonlinear Phenomena, 10(1):81–95, 1984.
* [34] L. Schaeffer. A physically universal cellular automaton (website). http://web.mit.edu/lrs/www/physCA/, accessed February 2016.
* [35] V. Salo and I. Törmä. A one-dimensional physically universal cellular automaton. preprint arXiv:1501.03988.
* [36] C. H. Bennett. Time/space trade-offs for reversible computation. SIAM J. Computing, 18(4):766–776, 1989.
* [37] W. Zurek. Algorithmic randomness and physical entropy. Phys Rev A, 40(8):4731–4751, 1989.
* [38] D. Janzing, R. Chaves, and B. Schölkopf. Algorithmic independence of initial condition and dynamical law in thermodynamics and causal inference. New Journal of Physics, 18(093052):1–13, 2016.
* [39] R. Balian. From microphysics to macrophysics, volume 1. Springer, 2007.
|
11institutetext: JADBio - Gnosis DA S.A. 22institutetext: Computer Science
Department - University of Crete 22email:
<EMAIL_ADDRESS>
# A Meta-Level Learning Algorithm for Sequential Hyper-Parameter Space
Reduction in AutoML
Giorgos Borboudakis 11 Paulos Charonyktakis 11 Konstantinos Paraschakis 11
Ioannis Tsamardinos 1122
###### Abstract
AutoML platforms have numerous options for the algorithms to try for each step
of the analysis, i.e., different possible algorithms for imputation,
transformations, feature selection, and modelling. Finding the optimal
combination of algorithms and hyper-parameter values is computationally
expensive, as the number of combinations to explore leads to an exponential
explosion of the space. In this paper, we present the Sequential Hyper-
parameter Space Reduction (SHSR) algorithm that reduces the space for an
AutoML tool with negligible drop in its predictive performance. SHSR is a
meta-level learning algorithm that analyzes past runs of an AutoML tool on
several datasets and learns which hyper-parameter values to filter out from
consideration on a new dataset to analyze. SHSR is evaluated on 284
classification and 375 regression problems, showing an approximate $30\%$
reduction in execution time with a performance drop of less than $0.1\%$.
###### Keywords:
AutoML Algorithm Recommendation Algorithm Selection Hyper-parameter
Optimization
## 1 Introduction
AutoML platforms for predictive modelling try to solve the Combined Algorithm
Selection and Hyper-parameter Optimization (CASH) [24] problem. CASH optimizes
the algorithmic choices, as well as the hyper-parameter values of the machine
learning pipeline (hereafter called a configuration) that produces the final
model. Each configuration may contain several steps, such as algorithms for
missing value imputation, feature transformation, feature extraction, feature
selection, and of course, predictive modelling. All such choices can be
represented with hyper-parameters which form the decision variables of an
optimization problem with objective function the out-of-sample predictive
performance of the final model, called Hyper-Parameter Optimization (HPO). The
hyper-parameters form the configuration space over which to optimize.
There have been at least two complementary approaches to address this
optimization. The first approach is to employ black-box optimization
strategies to search the configuration space, such as grid search, random
search, and Sequential Bayesian Optimization [22]. These algorithms evaluate
the configurations on the dataset to analyze by training models. The second
approach is try to reduce or prioritize the exploration of the configuration
space based on prior information. Such algorithms, called meta-level learning
algorithms, analyze the past performance of configurations on other datasets
and try to predict their performance on a new dataset based on its
characteristics, called meta-features [21]. Obviously, the two approaches can
be combined with the meta-level algorithm selecting a subset of configurations
to explore, and the HPO algorithm performing an intelligent search within this
reduced space.
When optimizing over a continuous hyper-parameter, e.g., the cost $C$ of a
Support Vector Machine (SVM), several reasonable assumptions that facilitate
the exploration of the configuration space have been proposed, e.g., that
neighboring $C$ values will lead to correlated model performances. In
Sequential Bayesian Optimization, these assumptions are captured by the kernel
function used in the Gaussian Process that fits the performance landscape.
However, for discrete
choices and hyper-parameters, particularly the choice of which algorithm to
use at each analysis step, it is less clear if and which assumptions are
reasonable. Will an SVM perform well on a specific dataset, given the
performance of the Decision Tree? Meta-level learning algorithms can prove
invaluable when optimizing categorical hyper-parameters.
In this paper, we propose such an algorithm called Sequential Hyper-Parameter
Space Reduction (SHSR). SHSR is a meta-level learning algorithm that analyzes
the past performances and execution times of configurations on a corpus of
datasets, stored in matrices $\mathbf{P}$ and $\mathbf{E}$, respectively. It
learns which configurations can be safely removed from consideration given the
meta-features of the dataset, without affecting the predictive performance of
the final model (within some user-defined tolerance threshold), while
simultaneously trying to minimize execution time. It then recursively applies
this step to the remaining configurations. By removing such values, SHSR
exponentially reduces the discrete part of the configuration space. To apply
SHSR, it is required that a set of configurations is run on a corpus of
datasets, and their performances and execution times are measured and stored
in matrices $\mathbf{P}$ and $\mathbf{E}$, respectively.
SHSR is evaluated on 284 classification and 375 regression problems on real
public datasets. The trade-offs between the tolerance threshold vs. the
computational savings are explored. In addition, it is shown that the
algorithm performs well when provided with incomplete matrices $\mathbf{P}$
and $\mathbf{E}$, i.e., when configurations are run only on a small fraction
of the datasets, practically providing the same results even if only 20% of
the values in $\mathbf{P}$ and $\mathbf{E}$ are present. For a very strict
tolerance threshold, SHSR achieves a relative drop in predictive performance
of less than 0.1%, with 30% computational savings. We note that time savings
are measured with respect to a simple grid search, where all choices are
discrete. For less strict thresholds, SHSR achieves computational savings of
50% and 40% with a relative performance drop of 1.5% and 0.1% for
regression and classification, respectively.
This paper is structured as follows: Section 2 overviews the literature of
meta-level learning and dataset characterization. Section 3 presents our
proposed algorithm, SHSR, while Section 4 describes SHSR’s experimental
evaluation. Specifically, in Section 4.1, we describe the experimental setup,
while section 4.2 presents the evaluation results. Finally, Section 5
summarizes our conclusions and future work, while Section 6 discusses the
limitations of this work.
## 2 Related Work
Meta-level learning, often called meta-learning, is a heavily overloaded term,
and has been used to refer to different problems in the literature, such as
algorithm recommendation, ensembling and transfer learning [14]. We will
hereafter focus on the Algorithm Recommendation Problem [7], which deals with
learning to recommend configurations to try for a new problem, based on past
runs. This is typically done by first characterizing datasets using measures
(also called meta-features) that describe their properties, and then learning
to predict the predictive performance and/or running time of configurations.
Next, we will provide a brief overview of different meta-features and meta-
level learning methods. Interested readers may refer to [28] for an overview
of meta-level learning, and to [10] on how it is used in the context of
AutoML.
### 2.1 Dataset Characterization
[20, 19] introduced the idea of dataset characterization for meta-level
learning, and studied the effect of simple measures on the performance of
learning algorithms. [16] describe several dataset characterization measures,
and group them into simple (e.g., number of samples), statistical (e.g.,
average absolute correlation between feature pairs) and information-theoretic
measures (e.g., average entropy of features). Additional categories and types
of measures have been introduced over time: model-based measures, which are
extracted from models induced from the meta-features (e.g., the tree depth of
a decision tree trained on the meta-features), landmarking measures, which use
the performance of simple and fast learning algorithms (e.g., accuracy of
Naive Bayes), and other measures (e.g., silhouette index of k-means
clustering). We refer the reader to [21] for a comprehensive review of meta-
features.
### 2.2 Meta-level Learning
To the best of our knowledge, the algorithm recommendation problem was first
addressed in [20]. The authors used simple meta-features, and studied whether
learning to recommend algorithms based on them leads to improvements. [1]
extended that work, by (a) considering additional simple meta-features and (b)
introducing the idea of learning interpretable rules for algorithm
recommendation, and evaluated the method on synthetic data. [3] were the first
to apply meta-level learning on real-world data. They used simple, statistical
and information-theoretic meta-features, and applied decision trees to learn
whether an algorithm should be applied on a given dataset or not. The work was
subsequently extended to use regression and instance-based models to directly
predict the predictive performance of algorithms on new datasets [8]. [4] use
a KNN based algorithm on dataset meta-features to find similarities and then
they apply a performance/computational time multi-criterion to rank algorithms
(more on this in our Comparison Section below). More recently, collaborative
filtering methods [17, 23, 30, 31] were proposed to solve the algorithm
recommendation problem, inspired by the Netflix challenge. These methods allow
for incomplete inputs (i.e., not all configurations are run on all datasets).
However, they suffer from the cold-start problem, but there exist approaches
that try to avoid that by using meta-features [17]. For a detailed review of
meta-level learning methods, we point the reader to [15, 11].
## 3 The Sequential Hyper-parameter Space Reduction Algorithm for Meta-level
Learning
In this section we introduce the Sequential Hyper-parameter Space Reduction
algorithm (SHSR) for algorithm recommendation. SHSR uses the predictive
performances and execution times of groups of configurations on past analyses,
and returns models which are used to filter out unpromising groups, while
simultaneously trying to minimize execution time. Configuration groups can
contain one or multiple configurations (e.g., all configurations with the same
modelling algorithm), are not necessarily mutually exclusive (i.e., the same
configuration might be present in multiple groups), and the input can be
partial (i.e., results for some configurations might not be present for all
past analyses). SHSR is shown in Algorithm 1. We proceed with a detailed
description of SHSR.
Let $G$, $D$, and $F$ denote the number of configuration groups, datasets, and
meta-features, respectively. SHSR takes as input: (a) $G\times D$ matrices
$\mathbf{P}$ and $\mathbf{E}$ containing the performance ratios and execution
times of all configuration groups, (b) an $F\times D$ matrix $\mathbf{X}$ of
meta-features, (c) a threshold $T$, and (d) a list of $G$ sets Active, with
$\textsc{Active}[g]$ initialized to all datasets for which results for $g$ are
present. For a given group $g$ and dataset $d$, the performance ratio
$\mathbf{P}_{g,d}$ is defined as the maximum performance over configurations
in group $g$ on dataset $d$, divided by the maximum performance over all
configurations. The execution time $\mathbf{E}_{g,d}$ is defined as the sum of
the execution times of all configurations in $g$. The output is a sequence of
models $(\textsc{Model}[g_{1}],\dots,\textsc{Model}[g_{k}])$, where
$\textsc{Model}[g]$ is a model predicting the performance ratio achieved
without group $g$ on a dataset $d$ based on its meta-features
$\mathbf{X}_{*,d}$.
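For concreteness, here is a minimal NumPy sketch of how $\mathbf{P}$ and
$\mathbf{E}$ could be assembled from raw per-configuration results; the array
names and the NaN convention for configurations that were not run are our
assumptions, not part of the paper.

```python
import numpy as np

def build_P_E(perf, time, groups):
    """perf, time: C x D arrays with the performance and execution time of
    configuration c on dataset d (NaN where the configuration was not run);
    groups: list mapping each group g to the indices of its members.
    Returns the G x D matrices P (performance ratios) and E (times)."""
    best_overall = np.nanmax(perf, axis=0)              # best per dataset
    G, D = len(groups), perf.shape[1]
    P, E = np.full((G, D), np.nan), np.full((G, D), np.nan)
    for g, members in enumerate(groups):
        P[g] = np.nanmax(perf[members], axis=0) / best_overall
        E[g] = np.nansum(time[members], axis=0)         # missing runs count 0
    return P, E
```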
SHSR starts by creating one model per configuration group $g$, and computing
the time saved if that group were to be removed from consideration. For this,
an outcome $\mathbf{y}$ is created, which contains the maximum performance for
each dataset in $\textsc{Active}[g]$ over all groups except $g$, and a model
$\textsc{Model}[g]$ for $\mathbf{y}$ is fitted using meta-features
$\mathbf{X}$. Next, $\textsc{Covered}[g]$ is computed, by applying
$\textsc{Model}[g]$ on all $\textsc{Active}[g]$, and selecting only the ones
for which the model predicts a performance ratio at least as large as $T$. In
other words, $\textsc{Covered}[g]$ contains all datasets for which group $g$
can be excluded, as the remaining groups are sufficient to achieve a high
performance ratio. Finally, the time saved $\textsc{TimeSavings}[g]$ is
computed as the sum of execution times of all $\textsc{Covered}[g]$. Once this
is done, the group $g^{*}$ with the highest time savings is selected, and SHSR
is called recursively with $\textsc{Covered}[g^{*}]$ removed from
$\textsc{Active}[g]$ (in the algorithm we slightly abuse notation for the sake
of brevity). SHSR stops once no more time savings can be achieved, which
happens if no more datasets can be covered by the removal of any group.
Finally, to decide which groups to run on a new dataset $d^{\prime}$, the
models $(\textsc{Model}[g_{1}],\dots,\textsc{Model}[g_{k}])$ are applied in
sequence, and a group $g_{i}$ is removed from consideration if
$\textsc{Predict}(\textsc{Model}[g_{i}],\mathbf{X_{*,d^{\prime}}})\geq T$.
Algorithm 1 Sequential Hyper-parameter Space Reduction
Constants: Performance ratios $\mathbf{P}$, Execution times $\mathbf{E}$,
Meta-features $\mathbf{X}$, Threshold $T$
Input: Active datasets per configuration group Active
Output: Sequence of regression models
1:for each configuration group $g$ do
2: $\mathbf{y}_{d}\leftarrow\max_{i}\mathbf{P}_{i,d},i\neq g,\forall
d\in\textsc{Active}[g]$
3: $\textsc{Model}[g]\leftarrow\textsc{FitModel}(\mathbf{y},\mathbf{X})$
4: $\textsc{Covered}[g]\leftarrow\\{d\mid
d\in\textsc{Active}[g]\wedge\textsc{Predict}(\textsc{Model}[g],\mathbf{X_{*,d}})\geq
T\\}$
5:
$\textsc{TimeSavings}[g]\leftarrow\sum_{d}\mathbf{E}_{g,d},d\in\textsc{Covered}[g]$
6:end for
7:$g^{*}\leftarrow\operatorname*{arg\,max}_{g}\textsc{TimeSavings}[g]$
8:if $\textsc{TimeSavings}[g^{*}]=0$ then
9: return $\emptyset$
10:end if
11:return
$(\textsc{Model}[g^{*}])\cup\textsc{SHSR}(\textsc{Active}\setminus\textsc{Covered}[g^{*}])$
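The following Python sketch mirrors Algorithm 1 together with the
prediction-time filtering described above. It reflects one plausible reading
of the pseudocode (e.g., the covered datasets are removed from every group’s
active set, and nothing forbids a group from being selected in more than one
round); the untuned regression trees stand in for FitModel, whose tuned
version is described in Section 4.1.4.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def shsr(P, E, X, T, active):
    """P, E: G x D performance-ratio and execution-time matrices; X: F x D
    meta-features; T: tolerance threshold; active: dict mapping each group
    to the set of dataset indices with results. Returns (group, model) pairs."""
    best_g, best_model, best_covered, best_savings = None, None, set(), 0.0
    for g in range(P.shape[0]):
        ds = sorted(active[g])
        if not ds:
            continue
        # y[d]: best achievable ratio on d using all groups except g
        y = np.nanmax(np.delete(P[:, ds], g, axis=0), axis=0)
        model = DecisionTreeRegressor().fit(X[:, ds].T, y)
        covered = {d for d, p in zip(ds, model.predict(X[:, ds].T)) if p >= T}
        savings = sum(E[g, d] for d in covered)
        if savings > best_savings:
            best_g, best_model = g, model
            best_covered, best_savings = covered, savings
    if best_g is None:                       # no more time savings possible
        return []
    active = {h: s - best_covered for h, s in active.items()}
    return [(best_g, best_model)] + shsr(P, E, X, T, active)

def select_groups(models, x_new, T, all_groups):
    """Apply the learned sequence to a new dataset's meta-feature vector."""
    keep = set(all_groups)
    for g, model in models:
        if model.predict(x_new.reshape(1, -1))[0] >= T:
            keep.discard(g)
    return keep
```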
## 4 Experimental Evaluation
In this section we evaluate the ability of SHSR to filter out groups of
configurations, and measure its impact on running time and predictive
performance. To this end, we collected classification and regression datasets
and trained a set of predefined configurations on them, in order to obtain
performance estimates and execution times for SHSR. The full analysis required
training $\sim$85M models, taking a total of $\sim$58K core hours. We proceed
with a detailed description of the experimental setup. Results are presented
in Section 4.2.
### 4.1 Experimental Setup
#### 4.1.1 Datasets
Figure 1: Sample size vs feature size for regression and classification
datasets. The x-axis shows the sample size, while the y-axis shows the feature
size. For the classification datasets, the color intensity varies depending on
the class distribution. Both axes are in $\log_{10}$ scale.
We collected a total of 659 datasets, out of which 284 are binary
classification problems and 375 are regression problems. Datasets were
selected to cover a wide range of problems, with varying sample sizes, numbers
of variables and, in the case of classification problems, class distributions; see
Fig. 1 for a summary. All datasets were downloaded from OpenML [29] (licensed
under CC-BY 4.0) and from BioDataome [13] (publicly available). A list of all
datasets and their sources can be found on the project’s GitHub
repository: https://github.com/JADBio/SHSR.
#### 4.1.2 Implementation
For model training we used JADBio [26]. JADBio is an automated machine
learning platform specifically designed for life scientists and molecular
data. It uses a fully automated machine learning protocol to select the best
combination of preprocessing, feature selection and modelling algorithms, and
returns a predictive model and an estimate of its predictive performance. Any
features that could interfere with results (e.g., early dropping of
unpromising configurations [27]) were disabled. Also, in order to get accurate
timing results, only a single CPU core was used per configuration, and
analyses were not run in parallel.
Everything else was implemented in Python and is available on GitHub; this
includes the implementation of SHSR, as well as the code producing all
results. For training the meta-level models, we used the scikit-learn
package [18].
#### 4.1.3 Algorithms and Configurations
For preprocessing, JADBio uses standardization and mean imputation for
continuous variables and mode imputation for the categorical ones. As feature
selection algorithms, the Statistical Equivalent Signatures (SES) algorithm
[12] and Lasso [25] were employed. Finally, for modelling JADBio uses
$L_{2}$-regularized linear and logistic regression [9], Support Vector
Machines (SVM) [2] and Support Vector Regression (SVR) with Linear,
Polynomial, and Gaussian kernels, Decision Trees [6], and Random Forests [5].
We used the default grid search parameters of JADBio with the tuning parameter
set to “extensive”: 6 hyper-parameter combinations for SES, 7 for Lasso, 7 for
linear and logistic regression, 150 for SVMs, 1500 for SVRs, 15 for Decision
Trees and 60 for Random Forests (the complete list of hyper-parameters is
available on GitHub). Configurations were obtained by taking all combinations
of algorithms and hyper-parameter values, and constraining them to contain one
or multiple transformations (imputation is always included, if applicable, and
takes precedence over standardization), one feature selection algorithm, and
one modelling algorithm. The total number of configurations was 2983 and 8633
for classification and regression tasks, respectively.
Based on the above configurations, we created a total of 21 configuration
groups: (a) one group per feature selection algorithm (13 in total, one per
hyper-parameter combination), and (b) one group per modelling algorithm (8 in
total; SVMs with different kernels and polynomial kernels with different
degrees are considered separate algorithms). Each such group contains only the
subset of configurations for which the respective algorithm was used (e.g.,
the random forest group contains all results with random forest, irrespective
of the feature selection algorithm used).
For performance estimation, we ran JADBio with 10-fold cross-validation,
repeated at most 20 times (a stopping criterion is applied when a plateau is
reached during the repetitions) [27]. We note that JADBio returns unbiased
(and typically conservative) performance estimates, by applying the BBC-CV
algorithm [27] on the out-of-sample predictions from cross-validation, without
the need of using a test set or expensive techniques like nested cross-
validation.
At this point we note that the execution time of feature selection algorithms
was only counted once per fold and not per configuration, as in practice
feature selection has to be performed only once for a given dataset and hyper-
parameter combination, regardless of the number of subsequently trained
models.
#### 4.1.4 Evaluation of SHSR
Table 1: Meta-features used in the experiments.
Meta-feature | Description
---|---
n_samples | Number of samples
n_features | Number of features
samples_to_features | Number of samples to number of features ratio
total_missing | Number of missing values
total_missing_f | Proportion of missing values
samples_with_any_missing | Number of samples with at least one missing value
samples_with_any_missing_f | Proportion of samples with at least one missing value
categorical_features | Number of categorical features
numerical_features | Number of numerical features
target_majority_class_instances | Number of samples for majority class
target_majority_class_f | Proportion of samples for majority class
target_minority_class_instances | Number of samples for minority class
target_minority_class_f | Proportion of samples for minority class
categorical_to_numerical | Ratio of categorical to numerical features
silhouette_k | Silhouette index of k-means clustering
pca_p | Number of components that explain $p\%$ of variance
Table 1 lists the meta-features used for the analysis [21]. We used 14 simple
measures, along with the silhouette index of k-means clustering for
$k\in\\{2,3,\dots,10\\}$ and the number of PCA components required to explain
$p\%$ of the variance of a dataset for $p\in\\{60,70,80,90\\}$, resulting in a
total of 27 meta-features. We note that, as the types of meta-features used
are not part of SHSR, we mainly chose simple measures because they are easy
and fast to compute.
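As an illustration, the two non-trivial meta-feature families of Table 1
(silhouette_k and pca_p) can be computed with scikit-learn as follows; the
function name and defaults are ours.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

def landmark_meta_features(Z, ks=range(2, 11), ps=(0.6, 0.7, 0.8, 0.9)):
    """Z: n_samples x n_features numeric data matrix (imputed/standardized).
    Returns silhouette_k for k=2..10 and pca_p for p in {60,70,80,90}%."""
    feats = {}
    for k in ks:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(Z)
        feats[f"silhouette_{k}"] = silhouette_score(Z, labels)
    cum = np.cumsum(PCA().fit(Z).explained_variance_ratio_)
    for p in ps:
        # number of leading components whose cumulative ratio reaches p
        feats[f"pca_{int(100 * p)}"] = int(np.searchsorted(cum, p) + 1)
    return feats
```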
To evaluate SHSR, we randomly kept out 10% of the datasets for testing and
executed SHSR on the remaining 90%. For each held-out dataset, we applied the
models returned by SHSR to recommend a set of configurations, and computed the
resulting predictive performance and running time. The procedure was repeated
20 times and averages are reported.
Finally, as a regression model in SHSR (see Line 3) we used regression trees,
and tuned them using 5-fold cross-validation, by optimizing over
min_samples_leaf $\in\\{3,5,7\\}$, as well as over the solution path returned
by the minimal cost-complexity pruning algorithm [18]. We chose decision trees
because they are interpretable, non-linear, and have a low computational cost.
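A sketch of this tuning step (our code, not necessarily the authors’ exact
implementation): candidate ccp_alpha values are taken from scikit-learn’s
minimal cost-complexity pruning path and tuned jointly with min_samples_leaf
via 5-fold cross-validation.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

def fit_model(X_meta, y):
    """FitModel (Line 3 of Algorithm 1): a pruned, tuned regression tree."""
    path = DecisionTreeRegressor().cost_complexity_pruning_path(X_meta, y)
    grid = {"min_samples_leaf": [3, 5, 7], "ccp_alpha": list(path.ccp_alphas)}
    search = GridSearchCV(DecisionTreeRegressor(), grid, cv=5)
    return search.fit(X_meta, y).best_estimator_
```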
### 4.2 Results
We ran experiments to investigate the effect of the threshold $T$ on
predictive performance and time savings when using SHSR to select
configurations. Next, we evaluate how SHSR performs when only a random subset
of the results is available. Finally, we compare SHSR to a trivial algorithm
which randomly removes a proportion of configurations. Results and additional
details about the experiments are presented below.
#### 4.2.1 Effect of Threshold $T$ on Predictive Performance and Execution
Time
Figure 2: Effect of threshold $T$ on predictive performance and execution
time. The x-axis shows the threshold, and the y-axis shows the ratio between
the predictive performance (execution time) using the configurations returned
by SHSR, relative to using all configurations. Error bars show the 95%
Gaussian confidence intervals for the mean, resulting from 20 runs of the
experiment. We observe that SHSR leads to a significant reduction in execution
time, with minimal drop in predictive performance.
We varied the threshold hyper-parameter $T$ of SHSR to investigate the trade-
off between predictive performance and running time. Specifically, we
considered thresholds in $\\{0.95,0.97,0.99,0.999,0.9999\\}$. Lower values
were not tried, as they would lead to a significant drop in performance with
minimal additional time savings.
The results are summarized in Fig. 2. As one would expect, with increasing
$T$, performance increases while the time savings decrease. Furthermore, we
observe that for values of $T$ close to $1$ there is a negligible loss in
performance, while significant time savings are still obtained. For instance, a
threshold of $0.999$ leads to an average performance loss of $\sim 1.5\%$ for
regression and $\sim 0.1\%$ for classification problems, while saving $\sim
50\%$ and $\sim 40\%$ of the time, respectively. On the other hand, if one were
mainly interested in execution time, a lower threshold could be used,
sacrificing predictive performance for a further reduction in execution time.
For example, a
threshold of $0.95$ reduces time by $\sim 95\%$ and $\sim 99\%$ for regression
and classification tasks respectively, while retaining $\sim 91\%$ and $\sim
95\%$ of the predictive performance.
#### 4.2.2 Evaluation on Partial Results
Figure 3: Effect of running SHSR with partial results on predictive
performance and execution time. The x-axis shows the proportion of used
results, and the y-axis shows the ratio between the predictive performance
(execution time) using the configurations returned by SHSR, relative to using
all configurations. Error bars show the 95% Gaussian confidence intervals for
the mean, resulting from 20 runs of the experiment. We observe that SHSR is
able to perform well even with partial results, and that it performs better
the more results are available, as expected.
In this experiment we evaluate SHSR when run with partial results (i.e., when
only a subset of configurations have been applied on each dataset). We
simulated that scenario by randomly sub-sampling $\\{20\%,40\%,60\%,80\%\\}$
of all results. Note that sub-sampling was performed on all results, not on
each dataset or configuration. The threshold $T$ was set to $0.999$ for this
experiment.
Fig. 3 shows the results. We can see that SHSR performs well, even if only 20%
of all results are available. Furthermore, we observe that execution time
decreases as more results become available to SHSR, especially for regression
tasks, where it drops from $\sim 75\%$ of the original running time when 20%
of the results are used, to $\sim 50\%$ when all results are used. This
confirms that SHSR is able to make better decisions the more data are
available, as one would expect.
#### 4.2.3 Comparison against [4] and random elimination of configurations
As a final experiment, we compared SHSR to the KNN based meta-level algorithm
in [4], as well as a simple baseline which randomly eliminates a percentage of
configurations. The authors in [4] apply KNN on the meta-features of the
datasets in order to locate, for a new incoming dataset, the $N$ most similar
datasets from a pool of already evaluated ones. Subsequently, they apply a
pairwise multi-criterion (performance/time) metric to compare all available
configurations on the training datasets, and a configuration-specific relative
score is calculated. The latter can be used to rank configurations and
select the top-ranking ones to run on the new dataset. Regarding the random
elimination, we considered randomly removing
$\\{50\%,60\%,70\%,80\%,90\%,95\%,97\%,99\%\\}$ of configurations. SHSR was
executed with $T$ taking the values $0.95,0.97,0.99,0.999,0.9999$. The KNN
based algorithm was applied with various hyper-parameters (see original paper
for a description of them):
$Nneighbors\in\\{1,3,10\\},AccD\in\\{0.001,0.01,0.1\\}$. The method was used
to rank configurations according to their performance and execution time on
test datasets and then the top $m$ ranked configurations were chosen to be
applied on the test datasets, where
$m\in\\{100,300,500,1000,1500,2000,2500\\}$ for classification and
$m\in\\{100,500,1000,2000,4000,6000\\}$ for regression. Each combination
setting was repeated 20 times.
The results are shown in Fig. 4. For regression tasks, random elimination of
less than $90\%$ of configurations leads to similar results as SHSR, both in
terms of predictive performance and running time. The same holds for
classification tasks, when fewer than $80\%$ of configurations are removed.
However, if more configurations are removed, performance of random elimination
drops abruptly, with minimal time savings. We note that the reason running
time does not drop linearly with configurations, which would be expected to
hold on average using random elimination, is that it is unlikely that a
feature selection algorithm instance is completely removed by chance, as this
can only happen if all configurations it participates in are removed. In
contrast, SHSR does not suffer from this, as it either keeps an algorithm
group or removes it completely.
The KNN based algorithm performs in between the other two for classification,
and slightly worse for regression, for less than $90\%$ time reduction. It
does, however, exceed random elimination’s performance, even in regression,
for heavily reduced configuration sets.
We further investigated why random elimination performs so well. We found
that often multiple configurations achieve maximum performance or close to it
(i.e., there is a lot of redundancy), making it likely to select a good
configuration by chance. This is explained by the fact that the hyper-
parameter configurations used by JADBio are curated and optimized to cover a
wide range of problems. Thus, we would expect random elimination’s performance
to drop if the proportion of “lower quality” configurations increases, while
SHSR wouldn’t be affected significantly, as it uses the maximum performance of
a group to determine its performance. To better understand that, consider an
example with 10000 configurations, 50 out of which are optimal or very close
to it, and assume that 9000 of the configurations are randomly removed. Then,
the probability of selecting any optimal configuration by chance is
$1-0.995^{1000}=99.33\%$. If we were to reduce the number of optimal
configurations to 25 (i.e., increase the proportion of bad configurations),
the probability of selecting any of them drops to $1-0.9975^{1000}=91.82\%$.
Thus, we argue that in practice SHSR is always preferable over random
elimination, especially when used for hyper-parameter configurations that have
not been fine-tuned. SHSR is also clearly superior to the KNN based algorithm
in all investigated scenarios.
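The two probabilities quoted above can be reproduced in a couple of lines
(approximating sampling without replacement by independent draws):

```python
# Chance that at least one of n_opt near-optimal configurations (out of
# 10000) survives when 9000 configurations are removed at random.
for n_opt in (50, 25):
    p_none = (1 - n_opt / 10000) ** 1000   # 1000 configurations are kept
    print(n_opt, round(1 - p_none, 4))     # -> 0.9933 and 0.9182
```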
Figure 4: Comparison between SHSR, KNN based algorithm, and random
elimination. The x-axis (y-axis) shows the ratio of execution time
(performance) using the configurations returned by the algorithms, relative to
using all configurations. Dotted lines show the 95% Gaussian confidence
intervals for the mean, resulting from 20 runs of the experiment. In the case
of the KNN based algorithm splines were used to smooth both the mean and the
CI lines. All three algorithms perform similarly when few configurations are
dropped, with SHSR being superior when many configurations are removed.
## 5 Discussion, Conclusions, and Future Work
SHSR introduces a recursive evaluation of choices and their filtering in meta-
level learning. It is evaluated on a large cohort of datasets, spanning a wide
range of dataset characteristics. It exhibits significant computational
savings relative to a simple grid search, with a minimal relative drop in
predictive performance. The performance drop can be controlled by a tolerance
threshold.
The algorithm can be coupled as a filtering step with any HPO strategy when
some hyper-parameter domains are discrete. Learning which values to filter in
SHSR is currently evaluated based on a simple Decision Tree algorithm. Any
modelling algorithm could be employed instead, of course. However, trees have
the advantage of being interpretable. Hence, the decision of the system to
drop some algorithmic choices from consideration, based on the characteristics
of the dataset, can be explained to the user of the AutoML platform. In
addition, it can be used to provide intuition into the deficiencies of
algorithms in particular situations that may inspire new algorithms. There are
numerous future directions to explore, such as coupling SHSR with other HPO
algorithms than grid search and employing more powerful models than DTs for
filtering.
## 6 Limitations
This study inevitably has some limitations. First of all, most of the
datasets for the classification experiments used in this study are molecular
(multi-omics) datasets, which are high-dimensional and typically contain
relatively few samples. Secondly, while the algorithm is general and can be
applied to any analysis task, this study evaluates it only on binary
classification and regression tasks. Finally, we evaluated SHSR employing
only Decision Trees, chosen for their rule-generation capability, without
testing any other modelling algorithm.
#### 6.0.1 Acknowledgements
The research leading to these results has received funding from the European
Research Council under the European Union’s Seventh Framework Programme
(FP/2007-2013) / ERC Grant Agreement n. 617393, the Hellenic Foundation for
Research and Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research
Projects to support Faculty members and Researchers and the procurement of
high-cost research equipment grant” (Project Number: 1941), and the METALASSO
project, which is co-financed by the European Union and Greek national funds
through the Operational Program Competitiveness, Entrepreneurship and
Innovation, under the call RESEARCH–CREATE– INNOVATE (project code:
T1EDK-04347).
## References
* [1] Aha, D.W.: Generalizing from case studies: A case study. In: Machine Learning Proceedings 1992, pp. 1–10. Elsevier (1992)
* [2] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Proceedings of the fifth annual workshop on Computational learning theory. pp. 144–152 (1992)
* [3] Brazdil, P., Gama, J., Henery, B.: Characterizing the applicability of classification algorithms using meta-level learning. In: European conference on machine learning. pp. 83–102. Springer (1994)
* [4] Brazdil, P., Soares, C., Costa, J.: Ranking learning algorithms: Using ibl and meta-learning on accuracy and time results. Machine Learning 50, 251–277 (03 2003)
* [5] Breiman, L.: Random forests. Machine learning 45(1), 5–32 (2001)
* [6] Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and regression trees. Routledge (1984)
* [7] Fürnkranz, J., Petrak, J., Brazdil, P., Soares, C.: On the use of fast subsampling estimates for algorithm recommendation. Technical Report, Austrian Research Institute for Artificial Intelligence (2002)
* [8] Gama, J., Brazdil, P.: Characterization of classification algorithms. In: Portuguese Conference on Artificial Intelligence. pp. 189–200. Springer (1995)
* [9] Hoerl, A.E., Kennard, R.W.: Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 12(1), 55–67 (1970)
* [10] Kedziora, D.J., Musial, K., Gabrys, B.: Autonoml: Towards an integrated framework for autonomous machine learning. arXiv preprint arXiv:2012.12600 (2020)
* [11] Khan, I., Zhang, X., Rehman, M., Ali, R.: A literature survey and empirical study of meta-learning for classifier selection. IEEE Access 8, 10262–10281 (2020)
* [12] Lagani, V., Athineou, G., Farcomeni, A., Tsagris, M., Tsamardinos, I.: Feature selection with the R package MXM: Discovering statistically equivalent feature subsets. Journal of Statistical Software 80(7) (2017)
* [13] Lakiotaki, K., Vorniotakis, N., Tsagris, M., Georgakopoulos, G., Tsamardinos, I.: Biodataome: a collection of uniformly preprocessed and automatically annotated datasets for data-driven biology. Database 2018 (2018)
* [14] Lemke, C., Budka, M., Gabrys, B.: Metalearning: a survey of trends and technologies. Artificial intelligence review 44(1), 117–130 (2015)
* [15] Luo, G.: A review of automatic selection methods for machine learning algorithms and hyper-parameter values. Network Modeling Analysis in Health Informatics and Bioinformatics 5(1), 1–16 (2016)
* [16] Michie, D., Spiegelhalter, D.J., Taylor, C.C., Campbell, J. (eds.): Machine Learning, Neural and Statistical Classification. Ellis Horwood, USA (1995)
* [17] Mısır, M., Sebag, M.: Alors: An algorithm recommender system. Artificial Intelligence 244, 291–314 (2017), combining Constraint Solving with Mining and Learning
* [18] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E.: Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825–2830 (2011)
* [19] Rendell, L., Cho, H.: The effect of data character on empirical concept learning. In: Proceedings The Fifth Conference on Artificial Intelligence Applications. pp. 199–200. IEEE Computer Society (1989)
* [20] Rendell, L.A., Sheshu, R., Tcheng, D.K.: Layered concept-learning and dynamically variable bias management. In: IJCAI. pp. 308–314. Irvine, CA (1987)
* [21] Rivolli, A., Garcia, L.P., Soares, C., Vanschoren, J., de Carvalho, A.C.: Meta-features for meta-learning. Knowledge-Based Systems p. 108101 (2022)
* [22] Snoek, J., Larochelle, H., Adams, R.P.: Practical bayesian optimization of machine learning algorithms. Advances in neural information processing systems 25 (2012)
* [23] Sun-Hosoya, L., Guyon, I., Sebag, M.: ActivMetaL: Algorithm Recommendation with Active Meta Learning. IAL 2018 workshop, ECML PKDD (Sep 2018), poster
* [24] Thornton, C., Hutter, F., Hoos, H.H., Leyton-Brown, K.: Auto-weka: Combined selection and hyperparameter optimization of classification algorithms. In: Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 847–855 (2013)
* [25] Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) pp. 267–288 (1996)
* [26] Tsamardinos, I., Charonyktakis, P., Papoutsoglou, G., Borboudakis, G., Lakiotaki, K., Zenklusen, J.C., Juhl, H., Chatzaki, E., Lagani, V.: Just add data: automated predictive modeling for knowledge discovery and feature selection. NPJ precision oncology 6(1), 38 (2022)
* [27] Tsamardinos, I., Greasidou, E., Borboudakis, G.: Bootstrapping the out-of-sample predictions for efficient and accurate cross-validation. Machine Learning 107(12), 1895–1922 (2018)
* [28] Vanschoren, J.: Meta-learning: A survey. arXiv preprint arXiv:1810.03548 (2018)
* [29] Vanschoren, J., van Rijn, J.N., Bischl, B., Torgo, L.: Openml: Networked science in machine learning. SIGKDD Explorations 15(2), 49–60 (2013)
* [30] Yang, C., Akimoto, Y., Kim, D.W., Udell, M.: OBOE: collaborative filtering for automl initialization. CoRR abs/1808.03233 (2018)
* [31] Yang, C., Fan, J., Wu, Z., Udell, M.: Automl pipeline selection: Efficiently navigating the combinatorial space. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. p. 1446–1456. KDD ’20 (2020)
|
# Exponential stability of Euler-Bernoulli beam under boundary controls in
rotation and angular velocity
Alemdar Hasanov
Department of Mathematics, Kocaeli University, Turkey
<EMAIL_ADDRESS>
###### Abstract
This paper addresses the analysis of a boundary feedback system involving a
non-homogeneous Euler-Bernoulli beam governed by the equation
$m(x)u_{tt}+\mu(x)u_{t}+\left(r(x)u_{xx}\right)_{xx}=0$, subject to the
initial conditions $u(x,0)=u_{0}(x)$, $u_{t}(x,0)=u_{1}(x)$ and the boundary
conditions $u(0,t)=0$,
$\left(-r(x)u_{xx}(x,t)\right)_{x=0}=-k^{-}_{r}u_{x}(0,t)-k^{-}_{a}u_{xt}(0,t)$,
$u(\ell,t)=0$,
$\left(-r(x)u_{xx}(x,t)\right)_{x=\ell}=k^{+}_{r}u_{x}(\ell,t)+k^{+}_{a}u_{xt}(\ell,t)$,
with boundary control at both ends resulting from the rotation and angular
velocity. The approach proposed in this study relies on the utilization of
regular weak solutions, energy identity, and a physically motivated Lyapunov
function. By imposing natural assumptions concerning physical parameters and
other inputs, which ensure the existence of a regular weak solution, we
successfully derive a uniform exponential decay estimate for the system’s
energy. The decay rate constant featured in this estimate is solely dependent
on the physical and geometric properties of the beam. These properties
encompass crucial parameters such as the viscous external damping coefficient
$\mu(x)$, as well as the boundary springs $k^{-}_{r},k^{+}_{r}$ and dampers
$k^{-}_{a},k^{+}_{a}$. To illustrate the practical effectiveness of our
theoretical findings, numerical examples are provided. These examples serve to
demonstrate the applicability and relevance of our derived results in real-
world scenarios.
###### keywords:
Exponential stability, Euler-Bernoulli beam, boundary control, regular weak
solution, energy identity, Lyapunov function, decay rate.
††journal: arXiv
## 1 Introduction
Submarine pipelines and long bridges can be modeled as elastic beams with
both ends controlled through the boundary rotation and angular velocity [2, 18].
In many studies related to pipeline modeling, the pipes are treated as beams
resting on a rigid seabed without any penetration (see [14] and references
therein). However, such hypotheses are not always satisfied in practice. An
analysis of the torsional effects on pipe lateral buckling was given in [9],
where the essential influence of torsion under some specific boundary
conditions was demonstrated analytically. A similar situation arises in bridge
models governed by the Euler-Bernoulli beam equation. Namely, it is very
important for the
sensitivity analysis of bridges to obtain a relationship between the rotation
spring constant and the bridge responses (deflections/slopes). This
relationship can then be used for evaluating the support condition of bridges
[19]. Furthermore, in modeling of long flexible structures through the Euler-
Bernoulli equation, the bending moment at the end of the beam is controlled by
the linear feedback of rotation angle and angular velocity, and the shear
force at the same end is controlled by the linear feedback of displacement and
velocity. We refer to [12] and the references therein for a detailed
description of such models.
Considering the effect of the above factor on both models, there is a need for
a realistic model that will take into account the effects of both the rotation
spring and the angular velocity damper at both ends of the beam, within the
framework of the Euler-Bernoulli beam equation. In the most natural way, this
can be taken into account by the corresponding boundary conditions at both
ends of the beam, involving linear combinations of the rotation spring and
the angular velocity damper. This leads to the following mathematical model:
$$\left\{\begin{array}{ll}
m(x)u_{tt}+\mu(x)u_{t}+\left(r(x)u_{xx}\right)_{xx}=0, & (x,t)\in\Omega_{T},\\[4pt]
u(x,0)=u_{0}(x),\ u_{t}(x,0)=u_{1}(x), & x\in(0,\ell),\\[4pt]
u(0,t)=0,\ \left(-r(x)u_{xx}(x,t)\right)_{x=0}=-k^{-}_{r}u_{x}(0,t)-k^{-}_{a}u_{xt}(0,t), & \\[4pt]
u(\ell,t)=0,\ \left(-r(x)u_{xx}(x,t)\right)_{x=\ell}=k^{+}_{r}u_{x}(\ell,t)+k^{+}_{a}u_{xt}(\ell,t), & t\in[0,T],
\end{array}\right. \qquad (6)$$
where $\Omega_{T}=(0,\ell)\times(0,T)$, $\ell>0$ is the length of the beam and
$T>0$ is the final time.
Here and below, $u(x,t)$ is the deflection, $u_{t}(x,t)$, $u_{x}(x,t)$,
$u_{xt}(x,t)$, $u_{xx}(x,t)$, $-\left(r(x)u_{xx}\right)$ and
$-\left(r(x)u_{xx}\right)_{x}$ are the velocity, rotation (or slope), angular
velocity, curvature, moment and shear force, respectively [6, 17]. Further,
$m(x)=\rho(x)S(x)>0$, while $\rho(x)$ is the mass density and $S(x)$ is the
cross section area of the beam, and $r(x):=E(x)I(x)>0$ represents the flexural
rigidity (or bending stiffness) of the beam, while $E(x)>0$ is
the elasticity modulus and $I(x)>0$ is the moment of inertia. The non-negative
coefficient $\mu(x):=\gamma\,m(x)$ of viscous resistance to transverse motion
of the beam represents the viscous external damping, while $\gamma\geq 0$ is
the damping constant of proportionality [1]. Furthermore, the nonnegative
constants $k^{-}_{r},k^{-}_{a}\geq 0$ and $k^{+}_{r},k^{+}_{a}\geq 0$ are the
stiffnesses of the torsional springs and dampers at the left and right ends of
the beam, respectively.
Figure 1: Beam connected to torsional springs and dampers at both ends
The boundary conditions
$\left(-r(x)u_{xx}(x,t)\right)_{x=0}=-k^{-}_{r}u_{x}(0,t)-k^{-}_{a}u_{xt}(0,t)$
and
$\left(-r(x)u_{xx}(x,t)\right)_{x=\ell}=k^{+}_{r}u_{x}(\ell,t)+k^{+}_{a}u_{xt}(\ell,t)$
at the left and right ends of the beam, respectively, represent the controls
resulting from the linear combination of rotation and angular velocity. In
this context, the above parameters
$k^{-}_{r},\,k^{-}_{a},\,k^{+}_{r},\,k^{+}_{a}$ are also referred to as the
boundary controls.
The geometry of problem (6) is shown in Fig. 1.
This work is devoted to the systematic study of the following issues. _Under
what minimum conditions imposed on the input data is the energy of the system
governed by (6) exponentially stable?_ _If the system governed by (6) is
stable, how much does each damping parameter $\gamma$, $k^{-}_{a}$ and
$k^{+}_{a}$ contribute to this stability?_ It should be especially noted that
the nature of both the external and the boundary damping mechanisms greatly
changes the nature of the vibration, and hence controls the response of the
beam, as the experimental and theoretical results discussed in [1, 7] show.
Modeling of large flexible structures through a class of Euler-Bernoulli beams
with structural damping has been developed starting with the studies [3,
4, 21]. The exponential stability of distributed systems governed by the
Euler-Bernoulli beam equation under classical boundary conditions has been
discussed starting from the work [4], and more general results were then
obtained in [3, 15, 16]. Various methods have been developed in the literature
for initial boundary value problems for Euler-Bernoulli equations with
boundary feedback systems. Among these methods, the spectral method turned out
to be efficient and useful since it allows one to establish the Riesz basis
property, which is the most fundamental property of a linear vibrating system
[5, 10, 11, 12]. In turn, this property means that the generalized
eigenvectors of the system form an unconditional basis of the (state) Hilbert
space. Combined with the semigroup approach, this allows one to derive the
spectrum determined growth condition and the exponential stability of a system.
In the exponential stability estimate $\mathcal{E}(t)\leq Me^{-\omega
t}\mathcal{E}(0)$ obtained in the studies listed above, the relationship of
the decay rate parameter $\omega>0$ to the physical and geometric parameters
of the beam, including the damping coefficient $\mu(x)\geq 0$ and the
stiffnesses $k^{-}_{a},k^{+}_{a}\geq 0$ of the torsional dampers, has not been
determined. Since the relationship of this decay rate parameter to the
damping parameters is not known, such an estimate is of limited use in
concrete applications. In this paper, we develop an approach based
on the weak solution theory for the initial boundary value problem (6), energy
estimates and the Lyapunov method to establish an exponential stability
estimate for system (6) under minimum conditions imposed on the input data.
Furthermore, this approach allows us to quantify the role of both types of
damping parameters in the exponential decay of the solution. To our knowledge, this
model, defined by the initial boundary value problem (6), in which the viscous
external and boundary (torsional) damping factors are considered together and
in the presence of torsional springs, is discussed for the first time in the
literature.
The rest of the paper is structured as follows. The energy identity and the
dissipativity of system (6) are derived in Section 2. In Section 3, the
Lyapunov function is introduced and an energy decay estimate for system (6)
is derived. Numerical examples are presented in Section 4. Some concluding
remarks are given in the final Section 5.
## 2 Necessary estimates for the weak solution of problem (6)
We assume that the inputs in (6) satisfy the following basic conditions:
$\displaystyle\left\{\begin{array}{ll}m,\mu,r\in L^{\infty}(0,\ell),\\ 0<m_{0}\leq m(x)\leq m_{1},~0\leq\mu_{0}\leq\mu(x)\leq\mu_{1},\\ 0<r_{0}\leq r(x)\leq r_{1},\ x\in(0,\ell),\\ u_{0}\in H^{2}(0,\ell),~u_{1}\in L^{2}(0,\ell),\\ k^{-}_{r},k^{-}_{a},k^{+}_{r},k^{+}_{a}\geq 0,\\ \gamma+k^{-}_{r}+k^{-}_{a}+k^{+}_{r}+k^{+}_{a}>0.\end{array}\right.$ (13)
For the case when all the parameters $k^{-}_{r},k^{-}_{a},k^{+}_{r},k^{+}_{a}$
are equal to zero, under conditions (13), the existence of the weak solution
$u\in L^{2}(0,T;\mathcal{V}^{2}(0,\ell))$, with $u_{t}\in
L^{2}(0,T;L^{2}(0,\ell))$ and $u_{tt}\in L^{2}(0,T;H^{-2}(0,\ell))$ of the
initial boundary value problem (6) was proved in [13]. Here and below,
$\displaystyle\mathcal{V}^{2}(0,\ell):=\{v\in H^{2}(0,\ell):\,v(0)=v(\ell)=0\},$
and $H^{2}(0,\ell)$ is the Sobolev space [8]. For system (6) with
$k^{-}_{r},k^{-}_{a},k^{+}_{r},k^{+}_{a}>0$, the existence of the weak
solution $u\in L^{2}(0,T;\mathcal{V}^{2}(0,\ell))$ can be proved in a
similar way. In this section we derive the necessary energy identities and
estimates for the weak solution of problem (6).
###### Theorem 1
Assume that the inputs in (6) satisfy the basic conditions (13). Then the
following energy identity holds:
$\displaystyle\mathcal{E}(t)+\int_{0}^{t}\int_{0}^{\ell}\mu(x)u_{\tau}^{2}(x,\tau)\,dx\,d\tau=\mathcal{E}(0)-k^{-}_{a}\int_{0}^{t}u_{x\tau}^{2}(0,\tau)\,d\tau-k^{+}_{a}\int_{0}^{t}u_{x\tau}^{2}(\ell,\tau)\,d\tau,\quad t\in[0,T],$
where
$\displaystyle\mathcal{E}(t)=\frac{1}{2}\int_{0}^{\ell}\left[m(x)u^{2}_{t}(x,t)+r(x)u^{2}_{xx}(x,t)\right]dx+\frac{1}{2}\,k^{-}_{r}u_{x}^{2}(0,t)+\frac{1}{2}\,k^{+}_{r}\,u_{x}^{2}(\ell,t),\quad t\in[0,T],$
(15)
is the total energy of system (6) and
$\displaystyle\mathcal{E}(0)=\frac{1}{2}\int_{0}^{\ell}\left[m(x)\left(u_{1}(x)\right)^{2}+r(x)\left(u^{\prime\prime}_{0}(x)\right)^{2}\right]dx+\frac{1}{2}\,k^{-}_{r}\left(u^{\prime}_{0}(0)\right)^{2}+\frac{1}{2}\,k^{+}_{r}\left(u^{\prime}_{0}(\ell)\right)^{2}$
(16)
is the initial value of the total energy.
Proof. Multiply both sides of equation (6) by $u_{t}(x,t)$, integrate over
$\Omega_{t}:=(0,\ell)\times(0,t)$, and employ the identity
$\displaystyle\int_{0}^{t}\int_{0}^{\ell}(r(x)u_{xx})_{xx}u_{\tau}\,dx\,d\tau=\int_{0}^{t}\int_{0}^{\ell}\left[(r(x)u_{xx})_{x}u_{\tau}-r(x)u_{xx}u_{x\tau}\right]_{x}dx\,d\tau+\frac{1}{2}\int_{0}^{t}\int_{0}^{\ell}\left(r(x)u_{xx}^{2}\right)_{\tau}dx\,d\tau,$
(17)
$t\in(0,T]$. Then we obtain the following integral identity:
$\displaystyle\frac{1}{2}\int_{0}^{t}\int_{0}^{\ell}\left(m(x)u_{\tau}^{2}\right)_{\tau}dx\,d\tau+\frac{1}{2}\int_{0}^{t}\int_{0}^{\ell}\left(r(x)u_{xx}^{2}\right)_{\tau}dx\,d\tau+\int_{0}^{t}\left((r(x)u_{xx})_{x}u_{\tau}-r(x)u_{xx}u_{x\tau}\right)_{x=0}^{x=\ell}d\tau+\int_{0}^{t}\int_{0}^{\ell}\mu(x)u_{\tau}^{2}\,dx\,d\tau=0,$
for all $t\in(0,T]$. Using here the initial and boundary conditions (6), we
get:
$\displaystyle\frac{1}{2}\int_{0}^{\ell}\left[m(x)u^{2}_{t}+r(x)u^{2}_{xx}\right]dx+\frac{1}{2}\,k^{-}_{r}u_{x}^{2}(0,t)+\frac{1}{2}\,k^{+}_{r}\,u_{x}^{2}(\ell,t)+\int_{0}^{t}\int_{0}^{\ell}\mu(x)u_{\tau}^{2}\,dx\,d\tau$
$\displaystyle=\frac{1}{2}\int_{0}^{\ell}\left[m(x)\left(u_{1}(x)\right)^{2}+r(x)\left(u^{\prime\prime}_{0}(x)\right)^{2}\right]dx+\frac{1}{2}\,k^{-}_{r}\left(u^{\prime}_{0}(0)\right)^{2}+\frac{1}{2}\,k^{+}_{r}\left(u^{\prime}_{0}(\ell)\right)^{2}$
$\displaystyle-k^{-}_{a}\int_{0}^{t}u_{x\tau}^{2}(0,\tau)\,d\tau-k^{+}_{a}\int_{0}^{t}u_{x\tau}^{2}(\ell,\tau)\,d\tau,$
for all $t\in(0,T]$. This leads to (14) with (15) and (16). $\Box$
###### Remark 1
The energy identity (14), with (15) and (16), clearly shows that an increase
in the stiffness of the torsional springs $k^{-}_{r}$ and $k^{+}_{r}$ leads to
an increase in the total energy $\mathcal{E}(t)$. Conversely, an increase in
the stiffness of the torsional dampers $k^{-}_{a}$ and $k^{+}_{a}$ leads to a
decrease in the total energy.
###### Remark 2
The sum
$\displaystyle\frac{1}{2}\,k^{-}_{r}u_{x}^{2}(0,t)+\frac{1}{2}\,k^{+}_{r}\,u_{x}^{2}(\ell,t),\quad t\in[0,T],$
in (15) represents the energy of the rigid motion of the elastic system (6),
generated by the spring constants $k^{-}_{r},k^{+}_{r}\geq 0$.
###### Lemma 1
Assume that the basic conditions (13) hold. Then for the decay rate of the
total energy the following integral formula is valid:
$\displaystyle\frac{d\mathcal{E}(t)}{dt}=-\int_{0}^{\ell}\mu(x)u^{2}_{t}dx-k^{-}_{a}u_{xt}^{2}(0,t)-k^{+}_{a}u_{xt}^{2}(\ell,t),\,t\in(0,T).$
(18)
Proof. From formula (15) for the total energy we deduce that
$\displaystyle\frac{d\mathcal{E}(t)}{dt}=\int_{0}^{\ell}\left[m(x)u_{t}u_{tt}+r(x)u_{xx}u_{xxt}\right]dx+k^{-}_{r}u_{x}(0,t)u_{xt}(0,t)+k^{+}_{r}u_{x}(\ell,t)u_{xt}(\ell,t),\quad t\in[0,T].$
Using here the identities
$\displaystyle\int_{0}^{\ell}m(x)u_{t}u_{tt}\,dx=-\int_{0}^{\ell}\mu(x)u_{t}^{2}\,dx-\int_{0}^{\ell}\left(r(x)u_{xx}\right)_{xx}u_{t}\,dx,$
$\displaystyle\int_{0}^{\ell}\left(r(x)u_{xx}\right)_{xx}u_{t}\,dx=\int_{0}^{\ell}r(x)u_{xx}u_{xxt}\,dx+k^{-}_{r}u_{x}(0,t)u_{xt}(0,t)+k^{-}_{a}u^{2}_{xt}(0,t)+k^{+}_{r}u_{x}(\ell,t)u_{xt}(\ell,t)+k^{+}_{a}u^{2}_{xt}(\ell,t),\quad t\in[0,T],$
we arrive at the required result (18). $\Box$
###### Corollary 1
Integrating (18) over $(0,t)$ we arrive at the energy identity introduced in
(14), that is,
$\displaystyle\mathcal{E}(t)=\mathcal{E}(0)-\int_{0}^{t}\int_{0}^{\ell}\mu(x)u^{2}_{\tau}(x,\tau)\,dx\,d\tau-\int_{0}^{t}\left[k^{-}_{a}u_{x\tau}^{2}(0,\tau)+k^{+}_{a}u_{x\tau}^{2}(\ell,\tau)\right]d\tau,\quad t\in[0,T].$
(19)
In particular,
$\displaystyle\mathcal{E}(t)\leq\mathcal{E}(0),~{}t\in[0,T],$
that is, the energy of system (6) is dissipative.
## 3 Lyapunov function and exponential stability estimate
Introduce the auxiliary function:
$\displaystyle\mathcal{J}(t)=\int_{0}^{\ell}m(x)u\,u_{t}dx+\frac{1}{2}\int_{0}^{\ell}\mu(x)u^{2}dx+\frac{1}{2}\,k^{-}_{a}u_{x}^{2}(0,t)+\frac{1}{2}\,k^{+}_{a}u_{x}^{2}(\ell,t),$
(20)
$t\in[0,T]$, that includes all the damping parameters.
###### Lemma 2
Assume that the basic conditions (13) are satisfied. Then between the
auxiliary function $\mathcal{J}(t)$ and the energy function $\mathcal{E}(t)$,
the following relationship holds:
$\displaystyle\frac{d\mathcal{J}(t)}{dt}=2\int_{0}^{\ell}m(x)u_{t}^{2}dx-2\mathcal{E}(t),~{}t\in[0,T].$
(21)
Proof. Taking the derivative of the function $\mathcal{J}(t)$ with respect to
the time variable and then using equation (6), we find:
$\displaystyle\frac{d\mathcal{J}(t)}{dt}=\int_{0}^{\ell}m(x)u^{2}_{t}\,dx-\int_{0}^{\ell}\left(r(x)u_{xx}\right)_{xx}u\,dx+k^{-}_{a}u_{x}(0,t)u_{xt}(0,t)+k^{+}_{a}u_{x}(\ell,t)u_{xt}(\ell,t),\quad t\in[0,T].$
To transform the second right-hand side integral here, we employ the identity
$\displaystyle-\int_{0}^{\ell}\left(r(x)u_{xx}\right)_{xx}u\,dx=-\int_{0}^{\ell}r(x)u^{2}_{xx}\,dx-k^{-}_{r}u^{2}_{x}(0,t)-k^{-}_{a}u_{x}(0,t)u_{xt}(0,t)-k^{+}_{r}u^{2}_{x}(\ell,t)-k^{+}_{a}u_{x}(\ell,t)u_{xt}(\ell,t),\quad t\in[0,T].$
Then we get:
$\displaystyle\frac{d\mathcal{J}(t)}{dt}=\int_{0}^{\ell}m(x)u^{2}_{t}dx-\int_{0}^{\ell}r(x)u^{2}_{xx}dx-k^{-}_{r}u^{2}_{x}(0,t)-k^{+}_{r}u^{2}_{x}(\ell,t),$
for all $t\in[0,T]$. This leads to the required result (21). $\Box$
The next lemma establishes another relationship between the auxiliary function
$\mathcal{J}(t)$ and the energy function $\mathcal{E}(t)$. Namely, it shows
that the energy function provides lower and upper bounds for the auxiliary
function introduced in (20).
###### Lemma 3
Assume that in addition to the basic conditions (13), the coefficient $r(x)$
in (6) satisfies the regularity condition: $r\in H^{2}(0,\ell)$. Then the
following inequalities hold:
$\displaystyle-\beta_{0}\,\mathcal{E}(t)\leq\mathcal{J}(t)\leq\beta_{1}\,\mathcal{E}(t),~{}t\in[0,T],$
(22)
where
$\displaystyle\beta_{0}=\frac{\ell^{2}}{2}\,\sqrt{\frac{m_{1}}{r_{0}}}\,,\qquad\beta_{1}=\beta_{0}\left\{1+\frac{1}{\sqrt{m_{1}r_{0}}}\left[\ell^{2}\mu_{1}+\frac{2}{\ell}\left(k_{a}^{-}+k_{a}^{+}\right)\right]\right\},$ (25)
and $m_{1},\,\mu_{1},\,r_{0}>0$ are the constants introduced in (13).
Proof. We estimate separately each term on the right hand side of formula
(20). For the first term we use the $\varepsilon$-inequality to get
$\displaystyle\left|\int_{0}^{\ell}m(x)uu_{t}dx\right|\leq\frac{\varepsilon}{2}\,\int_{0}^{\ell}m(x)u_{t}^{2}dx+\frac{1}{2\varepsilon}\,\int_{0}^{\ell}m(x)u^{2}dx.$
(26)
Under the condition $r\in H^{2}(0,\ell)$ there exists the regular weak
solution $u\in L^{2}(0,T;H^{4}(0,\ell))$, with $u_{t}\in
L^{2}(0,T;\mathcal{V}^{2}(0,\ell))$, $u_{tt}\in L^{2}(0,T;L^{2}(0,\ell))$ and
$u_{ttt}\in L^{2}(0,T;H^{-2}(0,\ell))$, of problem (6) [13]. For this solution
we employ the inequality
$\displaystyle\int_{0}^{\ell}u^{2}dx\leq\frac{\ell^{4}}{4}\int_{0}^{\ell}u_{xx}^{2}dx,~{}t\in[0,T],$
(27)
which can easily be proved using the conditions $u(0,t)=u(\ell,t)=0$. This
yields:
$\displaystyle\int_{0}^{\ell}m(x)u^{2}\,dx\leq\frac{\ell^{4}m_{1}}{4r_{0}}\int_{0}^{\ell}r(x)u_{xx}^{2}\,dx.$
Substituting this in (26) we get:
$\displaystyle\left|\int_{0}^{\ell}m(x)uu_{t}dx\right|\leq\frac{\varepsilon}{2}\,\int_{0}^{\ell}m(x)u_{t}^{2}dx+\frac{\ell^{4}m_{1}}{8\varepsilon
r_{0}}\int_{0}^{\ell}r(x)u_{xx}^{2}dx.$
Choosing here the parameter $\varepsilon>0$ from the condition
$\varepsilon/2=\ell^{4}m_{1}/(8r_{0}\,\varepsilon)$, that is,
$\displaystyle\varepsilon=\frac{\ell^{2}}{2}\,\sqrt{\frac{m_{1}}{r_{0}}}\,,$
we obtain the following estimate:
$\displaystyle\left|\int_{0}^{\ell}m(x)uu_{t}dx\right|\leq\frac{\ell^{2}}{4}\,\sqrt{\frac{m_{1}}{r_{0}}}\left[\int_{0}^{\ell}m(x)u_{t}^{2}dx+\int_{0}^{\ell}r(x)u_{xx}^{2}dx\right].$
(28)
Now, we estimate the second right hand side integral in formula (20), using
inequality (27). We have:
$\displaystyle\int_{0}^{\ell}\mu(x)u^{2}dx\leq\frac{\ell^{4}\mu_{1}}{4r_{0}}\int_{0}^{\ell}r(x)u_{xx}^{2}dx.$
(29)
Finally, to estimate the third and fourth terms on the right hand side of
formula (20), we note that, since $u(0,t)=u(\ell,t)=0$, Rolle's theorem
provides a point $\tilde{x}\in(0,\ell)$ with $u_{x}(\tilde{x},t)=0$, so that
$\displaystyle u^{2}_{x}(0,t)=\left(-\int_{0}^{\tilde{x}}u_{xx}(x,t)\,dx\right)^{2}\leq\tilde{x}\,\int_{0}^{\tilde{x}}u^{2}_{xx}(x,t)\,dx,$
$\displaystyle u^{2}_{x}(\ell,t)=\left(\int_{\tilde{x}}^{\ell}u_{xx}(x,t)\,dx\right)^{2}\leq(\ell-\tilde{x})\,\int_{\tilde{x}}^{\ell}u^{2}_{xx}(x,t)\,dx.$
Hence,
$\displaystyle\frac{1}{2}\,k^{-}_{a}u_{x}^{2}(0,t)\leq\frac{\ell}{2}\,\frac{k^{-}_{a}}{r_{0}}\int_{0}^{\ell}r(x)u^{2}_{xx}(x,t)\,dx,\qquad\frac{1}{2}\,k^{+}_{a}u_{x}^{2}(\ell,t)\leq\frac{\ell}{2}\,\frac{k^{+}_{a}}{r_{0}}\int_{0}^{\ell}r(x)u^{2}_{xx}(x,t)\,dx.$ (32)
In view of (28), (29) and (32) we obtain the following upper estimate for the
auxiliary function $\mathcal{J}(t)$:
$\displaystyle\mathcal{J}(t)\leq\frac{\ell^{2}}{4}\,\sqrt{\frac{m_{1}}{r_{0}}}\int_{0}^{\ell}m(x)u_{t}^{2}\,dx+\left[\frac{\ell^{2}}{4}\,\sqrt{\frac{m_{1}}{r_{0}}}+\frac{\ell^{4}}{4r_{0}}\,\mu_{1}+\frac{\ell}{2r_{0}}\left(k^{-}_{a}+k^{+}_{a}\right)\right]\int_{0}^{\ell}r(x)u_{xx}^{2}\,dx,$
for all $t\in(0,T]$. This leads to the upper bound
$\displaystyle\mathcal{J}(t)\leq\beta_{1}\,\mathcal{E}(t),~{}t\in[0,T],$
in terms of the energy functional $\mathcal{E}(t)$ and the constant
$\beta_{1}>0$ introduced in (25).
The lower bound
$\displaystyle\mathcal{J}(t)\geq-\beta_{0}\,\mathcal{E}(t),~{}t\in[0,T]$
follows from the second part
$\displaystyle\int_{0}^{\ell}m(x)uu_{t}dx\geq-\,\frac{\ell^{2}}{4}\,\sqrt{\frac{m_{1}}{r_{0}}}\left[\int_{0}^{\ell}m(x)u_{t}^{2}dx+\int_{0}^{\ell}r(x)u_{xx}^{2}dx\right]$
of estimate (28). This leads to the required estimates (22). $\Box$
###### Remark 3
The constants $\beta_{0},\beta_{1}>0$ introduced in (25) depend only on the
geometric and physical parameters of a beam.
We introduce now the Lyapunov function
$\displaystyle\mathcal{L}(t)=\mathcal{E}(t)+\lambda\mathcal{J}(t),\,t\in[0,T]$
(33)
through the energy function $\mathcal{E}(t)$ and the auxiliary function
$\mathcal{J}(t)$, where $\lambda>0$ is the penalty term.
###### Theorem 2
Assume that the inputs in (6) satisfy the basic conditions (13) and the
regularity condition $r\in H^{2}(0,\ell)$. Suppose, in addition, that the
damping constant of proportionality is positive,
$\displaystyle\gamma>0.$ (34)
Then system (6) is exponentially stable, that is,
$\displaystyle\mathcal{E}(t)\leq
M\,e^{-\sigma\,t}\,\mathcal{E}(0),~{}t\in[0,T],$ (35)
where
$\displaystyle M=\frac{1+\beta_{1}\lambda}{1-\beta_{0}\lambda}\,,\qquad\sigma=\frac{2\lambda}{1+\beta_{1}\lambda}\,,\qquad 0<\lambda<\min\left(1/\beta_{0},\,\gamma\,m_{0}/(2m_{1})\right),$ (38)
where $m_{0},m_{1}>0$ and $\beta_{0},\beta_{1}>0$ are the constants
introduced in (13) and (25), respectively, and $\mathcal{E}(0)>0$ is the
initial energy defined in (16).
Proof. Using estimates (22) in (33) we get:
$\displaystyle\left(1-\beta_{0}\lambda\right)\,\mathcal{E}(t)\leq\mathcal{L}(t)\leq\left(1+\beta_{1}\lambda\right)\,\mathcal{E}(t),~{}t\in[0,T].$
From the positivity requirement on the first left hand side multiplier, we
find that the penalty term must satisfy the following condition:
$\displaystyle 0<\lambda<1/\beta_{0},\quad\beta_{0}>0.$ (39)
Differentiate now $\mathcal{L}(t)$ with respect to the variable $t\in(0,T)$
and use formulas (18) and (21). We have:
$\displaystyle\frac{d\mathcal{L}(t)}{dt}+2\lambda\mathcal{E}(t)=-\int_{0}^{\ell}\left[\mu(x)-2\lambda m(x)\right]u_{t}^{2}\,dx-k^{-}_{a}u^{2}_{xt}(0,t)-k^{+}_{a}u^{2}_{xt}(\ell,t),\quad t\in[0,T].$
(40)
Assume that, in addition to (39), the penalty term also satisfies the
condition
$\displaystyle\lambda\leq\mu_{0}/(2m_{1}),$
which guarantees the positivity of the term in square brackets under the
integral on the right hand side of (40). In view of the relation
$\mu_{0}=\gamma\,m_{0}$, this condition implies
$\displaystyle\lambda\leq\gamma\,m_{0}/(2m_{1}).$ (41)
This leads to
$\displaystyle\frac{d\mathcal{L}(t)}{dt}+2\lambda\mathcal{E}(t)\leq
0,~{}t\in[0,T],$
or, with $\mathcal{E}(t)\geq\mathcal{L}(t)/(1+\beta_{1}\lambda)$, to the
inequality
$\displaystyle\frac{d\mathcal{L}(t)}{dt}+\frac{2\lambda}{1+\beta_{1}\lambda}\,\mathcal{L}(t)\leq 0,\quad t\in[0,T].$
Solving this inequality we find:
$\displaystyle\mathcal{L}(t)\leq e^{-\sigma\,t}\,\mathcal{L}(0),\quad t\in[0,T],$
which, together with $\mathcal{L}(0)\leq(1+\beta_{1}\lambda)\,\mathcal{E}(0)$
and the lower bound $(1-\beta_{0}\lambda)\,\mathcal{E}(t)\leq\mathcal{L}(t)$,
implies the required estimate (35). $\Box$
###### Remark 4
The constant $\sigma>0$ in (38), called the decay rate parameter, depends only
on the geometric and physical parameters of the beam and on the stiffnesses
of the torsional dampers introduced in (13), as formulas (25) show. Hence, the
uniform exponential stability estimate (35) can be applied to study the
exponential stability of Euler-Bernoulli beams with various physical and
geometric properties, under boundary controls in rotation and angular
velocity. Furthermore, considering formula (25), estimate (35) also clearly
shows the contribution of each damping factor $\mu(x)$, $k_{a}^{-}$ and
$k_{a}^{+}$ to the energy decay rate.
## 4 Numerical results
Although there is an exponential function $e^{-\sigma\,t}$ on the right hand
side of estimate (35), with the decay rate parameter $\sigma>0$ introduced in
(38), in some cases this appearance can be misleading. Namely, $\sigma>0$
depends on the positive parameters $\lambda$ and $\beta_{1}$, and the specific
values of these parameters play a crucial role in the decay behavior of the
function $e^{-\sigma\,t}$. Depending on the values of $\lambda$ and
$\beta_{1}$, the decay of this function can exhibit characteristics similar
to the decay of a linear function. To see such cases, it is necessary to
study the dependence of the decay rate parameter not only on the geometric
and physical parameters of the beam, but also on the viscous external damping
parameter $\mu(x)$ and the torsional dampers $k_{a}^{-},k_{a}^{+}$ separately.
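To make this dependence concrete, the following short Python sketch
(illustrative only; the sample values of $\beta_{0}$, $\beta_{1}$ and
$\gamma$ are ours, chosen in the spirit of Table 1 below) evaluates $M$ and
$\sigma$ from (38) over the admissible range of the penalty term $\lambda$:

```python
import numpy as np

# illustrative values in the spirit of Table 1 (gamma = 1.0, no torsional dampers)
beta0, beta1, gamma = 0.33, 0.55, 1.0
lam_max = min(1.0 / beta0, gamma / 2.0)   # admissible range from (38), with m0 = m1
for lam in np.linspace(0.1, 0.9, 5) * lam_max:
    M = (1.0 + beta1 * lam) / (1.0 - beta0 * lam)   # overshoot constant in (35)
    sigma = 2.0 * lam / (1.0 + beta1 * lam)         # decay rate in (35)
    print(f"lambda = {lam:.3f}   M = {M:6.2f}   sigma = {sigma:.3f}")
```

Since $\sigma(\lambda)$ increases with $\lambda$ while $M(\lambda)$ blows up
as $\lambda\rightarrow 1/\beta_{0}$, a larger decay rate is traded against a
larger overshoot constant.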
The examples below are provided to illustrate these situations and their
causes. Without loss of generality, here we consider the constant coefficient
beam equation
$\displaystyle mu_{tt}+\mu u_{t}+ru_{xxxx}=0,\,(x,t)\in\Omega_{T},$ (42)
where
$\displaystyle m=\rho\,S,~{}\mu=\gamma m,~{}r=EI,$
in accordance with the above notation. For this constant coefficient
equation, formulas (25) and (38) for the parameters
$\beta_{0},\,\beta_{1},\,M,\,\sigma>0$ read as follows:
$\displaystyle\beta_{0}=\frac{\ell^{2}}{2}\,\sqrt{\frac{m}{r}}\,,\qquad\beta_{1}=\beta_{0}\left[1+\ell^{2}\sqrt{\frac{m}{r}}\,\gamma\right]+\frac{\ell}{2r}\left(k_{a}^{-}+k_{a}^{+}\right),\qquad M=\frac{1+\beta_{1}\lambda}{1-\beta_{0}\lambda}\,,\qquad\sigma=\frac{2\lambda}{1+\beta_{1}\lambda}\,.$
(46)
Here, a beam with the rectangular cross section $S=b\,h$, where $b>0$ and
$h>0$ are the width and the height, is examined with the following numerical
values of the geometric and physical parameters [20]:
$\displaystyle\ell=0.502\,\mbox{m},\quad b=1.7\times 10^{-3}\,\mbox{m},\quad h=0.89\times 10^{-3}\,\mbox{m},\quad\rho=1.42\times 10^{3}\,\mbox{kg\,m}^{-3},\quad E=3.1\times 10^{9}\,\mbox{N/m}^{2},\quad\gamma\in[0.01,\,10]\,\mbox{s}^{-1}.$
(49)
With the numerical values in (49) we have:
$\displaystyle S=1.51\times 10^{-6}\,\mbox{m}^{2},\quad I:=bh^{3}/12=0.1\times 10^{-12}\,\mbox{m}^{4},\quad m=2.14\times 10^{-3}\,\mbox{kg\,m}^{-1},\quad r=0.31\times 10^{-3}\,\mbox{N\,m}^{2},\quad\mu=0.22\,\mbox{kg\,m}^{-1}\mbox{s}^{-1}.$
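These values can be checked directly from the data in (49); a minimal Python
sketch (the variable names are ours) is:

```python
# derived quantities from the data in (49)
ell, b, h = 0.502, 1.7e-3, 0.89e-3   # length, width, height [m]
rho, E = 1.42e3, 3.1e9               # density [kg/m^3], elasticity modulus [N/m^2]
S = b * h                            # cross-section area:   1.51e-6 m^2
I = b * h**3 / 12.0                  # moment of inertia:    1.0e-13 m^4
m = rho * S                          # mass per unit length: 2.15e-3 kg/m
r = E * I                            # flexural rigidity:    3.1e-4  N m^2
print(S, I, m, r)
```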
We consider three damping levels (weak, medium and high), corresponding to
the values $\gamma=0.1$, $\gamma=1.0$ and $\gamma=5.0$ of the damping
constant of proportionality, together with the values $\langle
k^{-}_{a},k^{+}_{a}\rangle=\langle 0,\,0\rangle$ and $\langle
k^{-}_{a},k^{+}_{a}\rangle=\langle 0.01,\,0.01\rangle$ of the stiffness of
the torsional dampers.
The values of the decay rate parameter $\sigma>0$ calculated by the formulas
in (46) are listed in Table 1. The values of the penalty term $\lambda>0$
are set according to the requirement $0<\lambda<\min(1/\beta_{0},\,\gamma/2)$.
From the last column of Table 1 it can be seen that, in the absence of the
torsional dampers ($k^{-}_{a}=k^{+}_{a}=0$), increasing the damping constant
from $\gamma=0.1$ to $\gamma=5.0$ increases the decay rate parameter
$\sigma>0$. For the weak damping case $\gamma=0.1$, the decay rate parameter
is $\sigma=0.08$, and the energy decay is exponential only in appearance; in
practice it is almost linear (Figure 2, left).
Table 1. The decay rate parameters corresponding to the geometric and physical
parameters given in (49).

Damping constant | $\langle k^{-}_{a},k^{+}_{a}\rangle$ | $\langle\beta_{0},\beta_{1}\rangle$ | $\lambda$ | $M$ | $\sigma$
---|---|---|---|---|---
$\gamma=0.1$ | $\langle 0,\,0\rangle$ | $\langle 0.33,\,0.35\rangle$ | $0.04$ | $1.03$ | $0.08$
$\gamma=0.1$ | $\langle 0.01,\,0.01\rangle$ | $\langle 0.33,\,16.55\rangle$ | $0.04$ | $1.68$ | $0.05$
$\gamma=1.0$ | $\langle 0,\,0\rangle$ | $\langle 0.33,\,0.55\rangle$ | $0.4$ | $1.41$ | $0.66$
$\gamma=1.0$ | $\langle 0.01,\,0.01\rangle$ | $\langle 0.33,\,16.75\rangle$ | $0.4$ | $8.87$ | $0.10$
$\gamma=5.0$ | $\langle 0,\,0\rangle$ | $\langle 0.33,\,1.42\rangle$ | $2.4$ | $21.19$ | $1.09$
$\gamma=5.0$ | $\langle 0.01,\,0.01\rangle$ | $\langle 0.33,\,17.62\rangle$ | $2.4$ | $208.12$ | $0.11$
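The entries of Table 1 can be reproduced, up to rounding, directly from
formulas (46); the following short sketch (ours) does so:

```python
import numpy as np

m, r, ell = 2.14e-3, 0.31e-3, 0.502
beta0 = ell**2 / 2.0 * np.sqrt(m / r)                    # = 0.33
for gamma, lam in [(0.1, 0.04), (1.0, 0.4), (5.0, 2.4)]:
    for ka in (0.0, 0.01):                               # k_a^- = k_a^+ = ka
        beta1 = beta0 * (1.0 + ell**2 * np.sqrt(m / r) * gamma) \
                + ell / (2.0 * r) * (ka + ka)
        M = (1.0 + beta1 * lam) / (1.0 - beta0 * lam)
        sigma = 2.0 * lam / (1.0 + beta1 * lam)
        print(f"gamma={gamma:4.1f}  ka={ka:5.2f}  beta1={beta1:6.2f}  "
              f"M={M:7.2f}  sigma={sigma:5.2f}")
```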
Figure 2: Behaviour of the function $\exp(-\sigma t)$: with
$k^{-}_{a}=k^{+}_{a}=0$ (left) and with $k^{-}_{a}=k^{+}_{a}=0.01$ (right).
Comparing the values of the decay rate parameter in the last column of Table
1 corresponding to zero and non-zero values of the stiffness of the torsional
dampers, we can observe the role of these boundary controls (Figure 2,
right).
## 5 Conclusions
This study proposes an approach for the exponential stability analysis of
Euler-Bernoulli beams under boundary controls in rotation and angular
velocity. By employing the regular weak solution, energy identity, and
Lyapunov function, we are able to derive a uniform exponential decay estimate
for the system’s energy.
Our approach is grounded in natural assumptions concerning physical parameters
and other inputs, ensuring the existence of a regular weak solution. The decay
rate constant in the derived estimate relies solely on the physical and
geometric parameters of the beam, which include the viscous external damping
coefficient, as well as the boundary springs and dampers. This feature enables
straightforward utilization of decay rate estimation in practical engineering
applications.
Furthermore, we have provided preliminary numerical examples that shed light
on the role of damping parameters. However, a more detailed analysis, focusing
on the individual contributions of each damping parameter to the overall
damping behavior, will be pursued in future research.
## Acknowledgments
The research has been supported by the Scientific and Technological Research
Council of Turkey (TUBITAK) through the Incentive Program for International
Scientific Publications (UBYT). The research of the author has also been
supported by FAPESP, through the Visiting Researcher Program, proc.
2021/08936-1, in Escola Politécnica, University of São Paulo, Brazil, during
the period November 02 - December 18, 2022.
## References
* [1] H.T. Banks, D.J. Inman, On Damping Mechanisms in Beams, Journal of Applied Mechanics, 58(3) (1991) 716–723.
* [2] J. Cai, P. Le Grognec, Lateral buckling of submarine pipelines under high temperature and high pressure - A literature review, Ocean Engineering 244(15) (2022) 110254.
* [3] G. Chen, S.G. Krantz, D.W. Ma, C.E. Wayne, H.H. West, The Euler-Bernoulli beam equation with boundary energy dissipation, Report, 1 Sep. 1985 - 31 Aug. 1987, Pennsylvania State Univ., University Park, 1988. https://dx.doi.org/10.21236/ada189517.
* [4] G. Chen, D.L. Russell, A mathematical model for linear elastic systems with structure damping, Quart. Appl. Math. 39(1982) 433–454.
* [5] Y.L. Chen, G. Q. Xu, Exponential stability of uniform Euler-Bernoulli beams with non-collocated boundary controllers, J. Math. Anal. Appl. 409(2014) 851–867.
* [6] R.C. Clough, J. Penzien, Dynamics of Structures, McGraw Hill Inc., New York, 1975.
* [7] S.H. Crandall, The Role of Damping in Vibration Theory, J. Sound Vibr. 11 (1970) 3–18.
* [8] L.C. Evans, Partial Differential Equations, 2nd edn., American Mathematical Society, Rhode Island, 2010.
* [9] P. Le Grognec, A. Néme, J. Cai, Investigation of the torsional effects on the lateral buckling of a pipe-like beam resting on the ground under axial compression, International Journal of Structural Stability and Dynamics 20 (9) (2020) 2050110.
* [10] B.-Z. Guo and R. Yu, On Riesz basis property of discrete operators with application to an Euler-Bernoulli beam equation with boundary linear feedback control, IMA J. Math. Control Inform. 18 (2001) 241–251.
* [11] B.-Z. Guo, Riesz basis property and exponential stability of controlled Euler–Bernoulli beam equations with variable coefficients, SIAM J. Control Optim. 40 (2002) 1905–1923.
* [12] F. Guo, F. Huang, Boundary Feedback Stabilization of the Undamped Euler–Bernoulli Beam with Both Ends Free, SIAM J. Control Optim. 43(1) (2004) 341–356.
* [13] A. Hasanov Hasanoglu, A.G. Romanov, Introduction to Inverse Problems for Differential Equations, 2nd ed, Springer, New York, 2021.
* [14] Z. Hong, R. Liu, W. Liu, S. Yan, A lateral global buckling failure envelope for a high temperature and high pressure (ht/hp) submarine pipeline, Applied Ocean Research 51 (2015) 117–128.
* [15] F.L. Huang, Some problems for linear elastic systems with damping, Acta Math. Sci. 6 (1986) 101–107.
* [16] F. Huang, On the mathematical model for linear elastic systems with analytic damping, SIAM J. Control Optim. 26 (1988) 714–724.
* [17] D. J. Inman, Engineering Vibration, 4th Edn., Pearson Education Limited, 2014.
* [18] R. Liu, X. Wang, Lateral global buckling high-order mode analysis of a submarine pipeline with imperfection, Applied Ocean Research 73 (2018) 107–126.
* [19] Y.S. Park, S. Kim, N. Kim, J.J. Lee, Evaluation of bridge support condition using bridge responses. Structural Health Monitoring, 18(3) (2019) 767-777.
* [20] C.E. Repetto, A. Roatta and R.J. Welti, Forced vibrations of a cantilever beam, Eur. J. Phys. 33 (2012) 1187–1195.
* [21] D. L. Russell, Controllability and stabilizability theory for linear partial differential equations: Recent progress and open questions, SIAM Rev., 20 (1978) 639–739.
# Robust Risk-Sensitive Reinforcement Learning
with Conditional Value-at-Risk
1st Xinyi Ni
Electrical and Computer Engineering, University of California, Davis
Davis, USA
<EMAIL_ADDRESS>
2nd Lifeng Lai
Electrical and Computer Engineering, University of California, Davis
Davis, USA
<EMAIL_ADDRESS>
###### Abstract
Robust Markov Decision Processes (RMDPs) have received significant research
interest, offering an alternative to standard Markov Decision Processes (MDPs)
that often assume fixed transition probabilities. RMDPs address this by
optimizing for the worst-case scenarios within ambiguity sets. While earlier
studies on RMDPs have largely centered on risk-neutral reinforcement learning
(RL), with the goal of minimizing expected total discounted costs, in this
paper, we analyze the robustness of CVaR-based risk-sensitive RL under RMDP.
Firstly, we consider predetermined ambiguity sets. Based on the coherency of
CVaR, we establish a connection between robustness and risk sensitivity;
thus, techniques from risk-sensitive RL can be adopted to solve the proposed
problem.
Furthermore, motivated by the existence of decision-dependent uncertainty in
real-world problems, we study problems with state-action-dependent ambiguity
sets. To solve this, we define a new risk measure named NCVaR and build the
equivalence of NCVaR optimization and robust CVaR optimization. We further
propose value iteration algorithms and validate our approach in simulation
experiments.
###### Index Terms:
ambiguity sets, RMDP, risk-sensitive RL, CVaR
## I Introduction
Markov Decision Processes (MDP) are foundational in Reinforcement Learning
(RL), typically premised on complete knowledge of model parameters.
Nevertheless, real-world applications frequently encounter uncertainties in
MDP elements, such as transition probabilities and reward/cost functions,
leading to estimation errors in RL algorithms and subsequent sensitivity to
model inaccuracies, thus impairing performance [1, 2, 3]. In light of these
challenges, Robust MDP (RMDP) has been developed to focus on optimal policies
that accommodate worst-case transition probabilities within an ambiguity set
[4], with most studies assuming known and rectangular ambiguity sets due to
computational considerations [5, 6, 7, 4, 8, 9].
The existing RMDP research has largely focused on risk-neutral objectives that
minimize the expected total discounted costs. This risk-neutral approach does
not take events that are rare but have high costs into consideration. To
counteract this, recently many risk-sensitive approaches where risk measures
critically evaluate and quantify associated risks have been developed. Within
risk-sensitive RL, the focus is on minimizing the risk of the total discounted
cost to ascertain optimal policies [10]. Coherent risk measures, conforming to
principles of monotonicity, translation invariance, subadditivity, and
positive homogeneity, offer a robust framework for such evaluations [11].
Notably, Conditional Value-at-Risk (CVaR) has gained popularity in RL, with
numerous studies proposing CVaR RL solutions for different setups [12, 13, 14,
15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Although risk-
sensitive RL is widely popular, its robustness within the RMDP framework is
not clear. While Chow et al. (2015) [16] roughly mention how solving CVaR can
enhance the robustness of risk-neutral RL in certain uncertainty sets, there
is a noticeable gap in understanding how CVaR’s robustness fares against
various types of uncertainty sets.
This study presents a novel and comprehensive investigation into the
robustness of risk-sensitive RL within RMDP. The primary goal is to determine
an optimal policy that minimizes the robust CVaR value. This value is
characterized as the highest CVaR of the total discounted cost across
transition probabilities within a defined rectangular ambiguity set. We
initially explore scenarios where the uncertain budget is fixed, and utilize
the coherent properties of CVaR and the dual representation theorem to convert
the optimization challenge into a manageable risk-sensitive RL problem,
facilitating the use of existing algorithms.
Furthermore, considering that in many real-world applications, ambiguity sets
are often dynamic and influenced by decision-making processes [29], we delve
deep into a more challenging setup about designing robust CVaR optimization
under decision-dependent uncertainty. To tackle this problem, we introduce a
new coherent risk measure NCVaR and propose a crucial decomposition theorem.
We develop value iteration algorithms for NCVaR and validate our methods
through simulation experiments. Based on these results, NCVaR not only
enhances the robustness of CVaR RL under decision-dependent uncertainty but
also brings new insights to risk-sensitive RL. Adopting NCVaR as the risk
measure in risk-sensitive RL provides strong robustness compared to
risk-neutral RL while rationally capturing risk. This makes NCVaR promising
for future research and also sheds light on handling decision-dependent
uncertainty in RL.
The structure of this paper is as follows: In Section II, we outline
mathematical foundations and problem formulation. Section III discusses
solutions utilizing predetermined ambiguity sets and risk-sensitive RL
methods. Section IV focuses on undetermined ambiguity sets and corresponding
value iteration algorithms. Section V validates our approaches through
experimental simulations and presents the numerical results. Conclusions are
drawn in Section VI.
## II Preliminaries
### II-A RMDP and Ambiguity Set
We consider an MDP represented by the tuple
$(\mathcal{X},\mathcal{A},C,P,\gamma,x_{0})$, where $\mathcal{X}$ is the state
space, $\mathcal{A}$ denotes the action space, $C(x,a)$ specifies a bounded
deterministic cost for selecting action $a$ in state $x$, $P(\cdot|x,a)$
represents the transition probability distribution, $\gamma\in[0,1]$ is the
discount factor and $x_{0}$ denotes the given initial state. For each state
$x\in\mathcal{X}$, the corresponding set of actions is represented by
$\mathcal{A}(x)$. A policy $\pi$ is a mapping from the state space to the
action space. The history space up to time $t\geq 1$ is represented as
$H_{t}=H_{t-1}\times\mathcal{A}\times\mathcal{X}$, with $H_{0}=\mathcal{X}$,
where a history $h_{t}=(x_{0},a_{0},x_{1},\dots,a_{t-1},x_{t})$ is an element
of $H_{t}$. The policy at time $t$, $\pi_{t}$, maps $h_{t}$ to a distribution
over $\mathcal{A}$. The set of such policies at time $t$ is denoted as
$\Pi_{H,t}$, with $\Pi_{H}=\lim_{t\rightarrow\infty}\Pi_{H,t}$ encompassing
all history-dependent policies. Similarly, $\Pi_{M,t}$ and
$\Pi_{M}=\lim_{t\rightarrow\infty}\Pi_{M,t}$ denote the sets of all $t$-step
and overall Markovian policies, respectively.
Addressing robustness, the transition probability $P$ is known to belong to a
non-empty, compact set $\mathcal{P}$, with the uncertain transition
probability denoted as $\tilde{P}\in\mathcal{P}$. The robust policy evaluation
over non-rectangular ambiguity sets $\mathcal{P}$ is known to be NP-hard, even
with a fixed policy $\pi$ [3]. Therefore, robust RL research often focuses on
rectangular ambiguity sets. In this work, we examine a specific rectangular
ambiguity set:
$\mathcal{P}=\left\{\tilde{P}:\sum_{x^{\prime}\in\mathcal{X}}\tilde{P}(x^{\prime}|x,a)=1,\ D(\tilde{P},P)\leq K\right\},$
where $K$ is the non-negative uncertain budget and the divergence measure
$D(\tilde{P},P)$ satisfies
$D(\tilde{P},P)=\sum_{x^{\prime}\in\mathcal{X}}P(x^{\prime}|x,a)\,\phi\Big{(}\frac{\tilde{P}(x^{\prime}|x,a)}{P(x^{\prime}|x,a)}\Big{)}\leq K.$ (1)
In (1), $\phi:\mathbb{R}\rightarrow\mathbb{R}$ represents a convex function
with the constraint $\phi(1)=0$. This function represents the
$\phi$-divergence measure, a form of divergence extensively utilized in RL
[30].
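For concreteness, the following minimal sketch (ours) evaluates (1) for two
common choices of $\phi$; the choice $\phi(t)=t\ln t$ recovers the KL
divergence used in Section III-B:

```python
import numpy as np

def phi_divergence(p_tilde, p, phi):
    # D(P_tilde, P) = sum_x' P(x'|x,a) * phi(P_tilde(x'|x,a) / P(x'|x,a)), as in (1)
    return float(np.sum(p * phi(p_tilde / p)))

p       = np.array([0.5, 0.3, 0.2])   # nominal transition row P(.|x,a)
p_tilde = np.array([0.4, 0.4, 0.2])   # perturbed transition row
kl   = lambda t: t * np.log(t)        # phi(1) = 0: KL divergence
chi2 = lambda t: (t - 1.0) ** 2       # phi(1) = 0: chi-squared divergence
print(phi_divergence(p_tilde, p, kl), phi_divergence(p_tilde, p, chi2))
```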
### II-B Risk Measures
In risk-sensitive RL, risk measures play a fundamental role in quantifying and
managing risk inherent in decision-making processes. We consider a probability
space $(\Omega,\mathcal{F},P)$, where $\Omega$ represents the sample space,
$\mathcal{F}$ is a $\sigma$-algebra over the sample space, and $P$ is a
probability measure. $Z:\Omega\rightarrow\mathbb{R}$ is a bounded random
variable on this probability space.
Conditional Value-at-Risk (CVaR) is also known as the expected shortfall or
the tail conditional expectation. The CVaR at confidence level
$\alpha\in(0,1]$ is defined as follows [31]:
$\text{CVaR}_{\alpha}(Z)=\inf_{t\in\mathbb{R}}\big{\\{}t+\frac{1}{\alpha}\mathbb{E}_{P}\left[(Z-t)^{+}\right]\big{\\}},$
where $(z)^{+}=\max(z,0)$. One important property of CVaR is coherency and the
corresponding dual representation, which serves as a crucial factor in
establishing the equivalence between risk-sensitive RL and the robustness of
risk-sensitive RL. The dual representation for CVaR is [32]:
$\text{CVaR}_{\alpha}(Z)=\sup_{Q\in\mathcal{U}_{\text{CVaR}}}\mathbb{E}_{Q}[Z],$
where $\mathcal{U}_{\text{CVaR}}=\left\{Q\ll
P:D_{\text{RN}}(Q,P)\in\left[0,\frac{1}{\alpha}\right]\right\}$ with
$D_{\text{RN}}(Q,P):=\frac{Q(\omega)}{P(\omega)}$.
We also introduce another significant risk measure, known as Entropic Value-
at-Risk (EVaR). Suppose that the moment generating function
$M_{Z}(t)=\mathbb{E}_{P}\left[e^{tZ}\right]$ exists for all
$t\in\mathbb{R}^{+}$ for the random variable $Z$. In such a case, the EVaR at
a given confidence level $\alpha$ is defined as follows [33]:
$\text{EVaR}_{\alpha}(Z)=\inf_{t>0}\left\\{t^{-1}\ln(M_{Z}(t))-t^{-1}\ln\alpha\right\\}.$
It is noteworthy that EVaR is also a coherent risk measure. The dual
representation theorem for EVaR, as outlined in [33], is as follows:
$\text{EVaR}_{\alpha}(Z)=\sup_{Q\in\mathcal{U}_{\text{EVaR}}}\mathbb{E}_{Q}[Z],$
where $\mathcal{U}_{\text{EVaR}}=\left\\{Q\ll
P:D_{\text{KL}}(Q,P)\leq-\ln\alpha\right\\}$ with
$D_{\text{KL}}(Q,P):=\sum_{\omega}Q(\omega)\log\frac{Q(\omega)}{P(\omega)}$.
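Both risk measures are straightforward to evaluate for a discrete random
variable. The sketch below (ours) computes CVaR through its dual
representation and EVaR by a one-dimensional minimization over $t$:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def cvar(z, p, alpha):
    # dual form: put density dQ/dP = 1/alpha on the worst outcomes
    # until the probability budget is exhausted (discrete distribution)
    order = np.argsort(z)[::-1]              # largest cost first
    q, budget = np.zeros_like(p), 1.0
    for i in order:
        q[i] = min(p[i] / alpha, budget)
        budget -= q[i]
    return float(q @ z)

def evar(z, p, alpha):
    # EVaR_alpha(Z) = inf_{t>0} t^{-1} * (log M_Z(t) - log alpha)
    def obj(logt):
        t = np.exp(logt)                     # optimize over log t to keep t > 0
        return (logsumexp(t * z, b=p) - np.log(alpha)) / t
    return float(minimize_scalar(obj, bounds=(-6.0, 3.0), method="bounded").fun)

z = np.array([0.0, 1.0, 4.0, 10.0])          # costs
p = np.array([0.4, 0.3, 0.2, 0.1])           # p.m.f.
print(cvar(z, p, 0.25), evar(z, p, 0.25))    # EVaR upper-bounds CVaR
```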
## III Robust CVaR Optimization with Predetermined Ambiguity Set
In this work, the robust CVaR value is defined as the worst-case CVaR value of
a policy $\pi$ when starting from the initial state $x_{0}$ and traversing
through transition probabilities specified in the ambiguity set. The objective
is to minimize this robust CVaR value across all history-dependent policies,
as expressed by the following optimization problem:
$\min_{\pi\in\Pi_{H}}\max_{\tilde{P}\in\mathcal{P}}\text{CVaR}_{\alpha}\big{[}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\mid
x_{0},\pi\big{]}.$ (2)
The sets $\Pi_{H}$ and $\mathcal{P}$ are both non-empty and compact.
Additionally, the objective function is finite due to $\gamma<1$. Thus, the
minimum and maximum values can be achieved, as guaranteed by the Weierstrass
theorem in optimization theory [9]. This theorem ensures that the optimization
problem is well-defined and can be effectively solved to obtain the desired
policy that minimizes the robust CVaR value under the given constraints.
Contrasting with the robustness analysis of CVaR in [16], our approach
evaluates the inner CVaR objective in Equation (2) across the entire set
$\mathcal{P}$, instead of limiting the analysis to the true transition
probabilities $P$ alone. This broader evaluation provides a more comprehensive
analysis of the robustness of CVaR in diverse uncertain environments.
Recalling the coherent nature of CVaR as a risk measure and leveraging the
dual representation theorem, the original optimization problem (2) can be
reformulated as follows:
$\min_{\pi\in\Pi_{H}}\max_{\tilde{P}\in\mathcal{P}}\max_{Q\in\mathcal{U}_{\text{CVaR}}}\mathbb{E}_{Q}\big{[}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\mid
x_{0},\pi\big{]}.$ (3)
where $\mathcal{U}_{\text{CVaR}}=\left\{Q\ll\tilde{P}:0\leq
Q(x^{\prime}|x,a)/\tilde{P}(x^{\prime}|x,a)\leq\frac{1}{\alpha}\right\}$.
Notice that the supremum has been replaced by a maximum since
$\mathcal{U}_{\text{CVaR}}$ is convex and compact and the objective function
is continuous in $Q$.
We first focus on solving problem (3) with a predetermined ambiguity set,
where the uncertain budget remains fixed for every state and action. Our
approach involves combining two inner maximization problems by analyzing the
divergence $D(Q,P)$. Under the assumption that the function $\phi$ in (1) is
chosen such that $D(Q,P)$ remains bounded, i.e., $D(Q,P)\leq\tilde{K}$ (a
condition satisfied by the divergence measure used in this paper), we show
that problem (3) can be reformulated to:
$\min_{\pi\in\Pi_{H}}\max_{Q\in\mathcal{Q}}\mathbb{E}_{Q}\big{[}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\mid
x_{0},\pi\big{]},$ (4)
where $\mathcal{Q}=\left\\{Q:D(Q,P)\leq\tilde{K}\right\\}$ represents the
uncertain transition problem set.
This approach effectively addresses robust CVaR across diverse uncertainty
sets by combining the set’s divergence measure with the Radon-Nikodym
derivative, forming a new envelope set for risk-sensitive RL. This strategy
not only links the robustness of risk-sensitive RL with its intrinsic
transformation but also provides a universal framework for evaluating CVaR’s
robustness. We further illustrate this approach by analyzing two specific
$\phi$-divergence measures.
### III-A Radon-Nikodym Derivative
Firstly, we consider the scenario where $\phi$-divergence is Radon-Nikodym
derivative, subject to a fixed uncertain budget for all states and actions:
$D_{\text{RN}}(\tilde{P},P)=\frac{\tilde{P}(x^{\prime}|x,a)}{P(x^{\prime}|x,a)}\in[0,K]$,
where $K\geq 0$ is a predetermined constant.
Consequently, we obtain:
$D_{\text{RN}}(Q,P)\in\left[0,\frac{K}{\alpha}\right].$ In this context, the
original optimization problem (3) transforms into:
$\min_{\pi\in\Pi_{H}}\max_{Q\in\mathcal{U}_{\text{RN}}}\mathbb{E}_{Q}\big{[}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\mid
x_{0},\pi\big{]},$ (5)
where $\mathcal{U}_{\text{RN}}=\left\\{Q\ll
P:D_{\text{RN}}(Q,P)\in[0,\frac{K}{\alpha}]\right\\}.$
Notice that solving problem (5) is equivalent to solving the following CVaR
optimization problem with confidence level $\alpha^{\prime}=\frac{\alpha}{K}$:
$\min_{\pi\in\Pi_{H}}\text{CVaR}_{\alpha^{\prime}}\big{[}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\mid
x_{0},\pi\big{]},$
which can be solve by employing CVaR value iteration algorithms proposed in
[16].
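In terms of the `cvar` helper sketched in Section II-B (with the same
illustrative `z` and `p`), this equivalence amounts to a simple rescaling of
the confidence level:

```python
alpha, K = 0.48, 2.0
# robust CVaR under the RN budget K equals the nominal CVaR at alpha' = alpha / K
print(cvar(z, p, alpha / K))
```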
### III-B KL Divergence
In this scenario, we consider that the uncertain transition probability
$\tilde{P}$ is defined in the neighborhood of the true transition probability
$P$ using the KL divergence, given by:
$D_{\text{KL}}(\tilde{P},P)=\sum_{x^{\prime}\in\mathcal{X}}\tilde{P}(x^{\prime}|x,a)\log\big{(}\frac{\tilde{P}(x^{\prime}|x,a)}{P(x^{\prime}|x,a)}\big{)}\leq
K,$ where $K\geq 0$ is a fixed value. Without loss of generality, we set
$K=\ln\kappa$ with $\kappa\geq 1$. We can combine the two inner maximization
problems into one, as the KL divergence of $Q$ and $P$ satisfies:
$D_{\text{KL}}(Q,P)\leq-\ln\alpha+1/\alpha\ln\kappa=-\ln(\alpha/\kappa^{\frac{1}{\alpha}}).$
Then, the original optimization problem (3) is transformed into:
$\min_{\pi\in\Pi_{H}}\max_{Q\in\mathcal{U}_{\text{KL}}}\mathbb{E}_{Q}\big{[}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\mid
x_{0},\pi\big{]},$ (6)
where $\mathcal{U}_{\text{KL}}=\left\\{Q\ll
P:D_{\text{KL}}(Q,P)\leq-\ln\frac{\alpha}{\kappa^{\frac{1}{\alpha}}}\right\\}.$
Notice that solving problem (6) is equivalent to solving the following EVaR
optimization problem with confidence level
$\alpha^{\prime}=\alpha/\kappa^{\frac{1}{\alpha}}$:
$\min_{\pi\in\Pi_{H}}\text{EVaR}_{\alpha^{\prime}}\big{[}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\mid
x_{0},\pi\big{]}.$
The problem could be solved by existing EVaR RL works [34].
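Analogously, using the `evar` helper from Section II-B, the KL case reduces
to an EVaR evaluation at a rescaled confidence level:

```python
import numpy as np

alpha, kappa = 0.48, np.e            # uncertain budget K = ln(kappa) = 1
alpha_prime = alpha / kappa ** (1.0 / alpha)
print(evar(z, p, alpha_prime))
```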
## IV Robust CVaR Optimization with Decision-Dependent Uncertainty
In real-world scenarios, ambiguity sets can dynamically change due to
decisions made during optimization, introducing endogenous uncertainty [35].
This variability means that the uncertain budget can fluctuate over time,
adding complexity to robust CVaR optimization analysis. To tackle this
decision-dependent uncertainty, we focus on the Radon-Nikodym derivative,
i.e.,
$D_{\text{RN}}(\tilde{P},P)=\frac{\tilde{P}(x^{\prime}|x,a)}{P(x^{\prime}|x,a)}\in\left[0,\vec{\kappa}(x,a)\right],\forall(x,a)\in\mathcal{X}\times\mathcal{A},$
where $\vec{\kappa}:=\left\{\vec{\kappa}(x,a),\ \forall
x\in\mathcal{X},a\in\mathcal{A}\right\}$ is the decision-dependent
uncertainty budget vector.
By combining the dual representation theorem of CVaR, we obtain the following
expression:
$D_{\text{RN}}(Q,P)=\frac{Q(x^{\prime}|x,a)}{P(x^{\prime}|x,a)}\in\big{[}0,\frac{\vec{\kappa}(x,a)}{\alpha}\big{]},\forall(x,a)\in\mathcal{X}\times\mathcal{A}.$
The problem at hand cannot be straightforwardly addressed by treating it as a
fixed confidence level CVaR optimization. To overcome this challenge, we
introduce a novel risk measure called NCVaR, which incorporates both the
confidence level $\alpha$ and an undetermined uncertain budget vector
$\vec{\kappa}$. Before delving into its definition, we set forth an assumption
to ensure that both NCVaR and the uncertain budget are meaningful.
###### Assumption 1
The undetermined uncertain budget satisfies $1\leq\vec{\kappa}(x,a)\leq
K_{\text{max}},\forall x\in\mathcal{X}$ and $a\in\mathcal{A}$. Here
$K_{\max}\geq 1$ is a real value.
We now present the formal definition of NCVaR.
###### Definition 1
For a random variable $Z:\Omega\rightarrow\mathbb{R}$ with probability mass
function (p.m.f.) $P$, the NCVaR at a given confidence level $\alpha\in(0,1]$
with an undetermined uncertain budget $\vec{\kappa}$ is defined as follows:
$\text{NCVaR}_{\alpha,\vec{\kappa}}(Z)=\sup_{Q\in\mathcal{Q}}\mathbb{E}_{Q}[Z],$
(7)
where
$\mathcal{Q}=\left\{Q:D_{\text{RN}}(Q,P)=\frac{Q(\omega)}{P(\omega)}\in\left[0,\frac{\vec{\kappa}(\omega)}{\alpha}\right],\ \forall\omega\in\Omega\right\}.$
It is easy to observe that $P(\omega)=0$ implies $Q(\omega)=0$, so that $Q$
is absolutely continuous with respect to $P$ (i.e., $Q\ll P$). By leveraging
Theorem 3.2 in [33], we can show that NCVaR is a coherent risk measure, which
provides a solid theoretical foundation for employing NCVaR in practical
applications and risk-sensitive RL scenarios.
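Since the envelope $\mathcal{Q}$ in Definition 1 is a polytope, (7) is a
linear program whenever the sample space is finite. A minimal sketch (ours,
using `scipy.optimize.linprog`) is:

```python
import numpy as np
from scipy.optimize import linprog

def ncvar(z, p, alpha, kappa):
    # NCVaR_{alpha,kappa}(Z) as in (7): maximize E_Q[Z] over the polytope
    # { Q : 0 <= Q(w) <= kappa(w) * p(w) / alpha, sum_w Q(w) = 1 }
    n = len(z)
    res = linprog(c=-np.asarray(z),                      # linprog minimizes
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0.0, kappa[w] * p[w] / alpha) for w in range(n)],
                  method="highs")
    return -res.fun

z = np.array([0.0, 1.0, 4.0, 10.0]); p = np.array([0.4, 0.3, 0.2, 0.1])
print(ncvar(z, p, 0.5, np.ones(4)))        # kappa = 1 recovers CVaR_{0.5}
print(ncvar(z, p, 0.5, 2.0 * np.ones(4)))  # constant kappa = 2 equals CVaR_{0.25}
```

With a constant budget $\vec{\kappa}\equiv K$ this recovers the rescaling of
Section III-A; it is a genuinely state-dependent $\vec{\kappa}$ that
necessitates the decomposition developed next.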
As a consequence of the coherency property, solving problem (4) with an
undetermined uncertain budget defined by the Radon-Nikodym derivative is
equivalently transformed into:
$\min_{\pi\in\Pi_{H}}\text{NCVaR}_{\alpha,\vec{\kappa}}\big{[}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\mid
x_{0},\pi\big{]}.$ (8)
Given the computational challenges associated with directly computing NCVaR,
as it requires knowledge of the entire distribution of the total discounted
cost, we present a decomposition theorem for NCVaR, which is key to
simplifying the NCVaR computation; the proof is detailed in Theorem 21 of [36].
###### Theorem 1
(NCVaR Decomposition) For any $\alpha\in(0,1]$ and $\vec{\kappa}$ satisfying
Assumption 1, the $\text{NCVaR}_{\alpha,\vec{\kappa}}$ has the following
decomposition
$\text{NCVaR}_{\alpha,\vec{\kappa}}(Z|H_{t},\pi)=\max_{\xi\in\mathcal{U}_{\text{NCVaR}}(\alpha,\vec{\kappa}(x_{t},a_{t}),P(\cdot|x_{t},a_{t}))}\mathbb{E}_{P}\left[\xi(x_{t+1})\,\text{NCVaR}_{\alpha\xi(x_{t+1}),\vec{\kappa}}(Z|H_{t+1},\pi)\mid H_{t},\pi\right],$
where $\xi(x_{t+1})=\frac{Q(x_{t+1}|x_{t},a_{t})}{P(x_{t+1}|x_{t},a_{t})}\geq 0$ belongs to the set
$\mathcal{U}_{\text{NCVaR}}(\alpha,\vec{\kappa}(x_{t},a_{t}),P(\cdot|x_{t},a_{t}))=\left\{\xi:\,\xi(x_{t+1})\in\left[0,\frac{\vec{\kappa}(x_{t},a_{t})}{\alpha}\right],\ \sum_{x_{t+1}\in\mathcal{X}}\xi(x_{t+1})P(x_{t+1}|x_{t},a_{t})=1\right\}.$
This decomposition theorem provides valuable insight into the NCVaR
computation, effectively linking the risk measure across states, and
facilitates a more tractable approach to handling the complexity of NCVaR
evaluation within risk-sensitive RL under the RMDP framework. In light of the
distinct confidence levels on the two sides of the decomposition in Theorem
1, we introduce an augmented continuous space $\mathcal{Y}=(0,1]$ to
represent the domain of confidence levels.
Accordingly, the value function $V(x,y)$ for every
$(x,y)\in\mathcal{X}\times\mathcal{Y}$ is defined as:
$\displaystyle V(x,y)=\min_{\pi\in\Pi_{H}}\text{NCVaR}_{y,\vec{\kappa}}\big{(}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\,\big{|}\,x_{0}=x,\pi\big{)}.$
The Bellman operator $\mathbf{T}$, acting on value functions defined on
$\mathcal{X}\times\mathcal{Y}$, is given by:
$\displaystyle\mathbf{T}[V](x,y)=\min_{a\in\mathcal{A}}\Big{[}C(x,a)+\gamma\max_{\xi\in\mathcal{U}_{\text{NCVaR}}(y,\vec{\kappa}(x,a),P(\cdot|x,a))}\sum_{x^{\prime}\in\mathcal{X}}\xi(x^{\prime})\,V(x^{\prime},y\xi(x^{\prime}))\,P(x^{\prime}|x,a)\Big{]}.$
Lemma 1 introduces some important properties for the NCVaR Bellman operator.
###### Lemma 1
The Bellman operator $\mathbf{T}$ has the following properties: P1)
Monotonicity; P2) Translation invariance; P3) Contraction; P4) Concavity
preservation: suppose $yV(x,y)$ is concave in $y\in\mathcal{Y}$ for all
$x\in\mathcal{X}$; then the inner maximization problem in the definition of
$\mathbf{T}$ is concave and $y\mathbf{T}[V](x,y)$ is also concave in $y$.
Properties P1-P3 are analogous to those in standard dynamic programming [37],
and are key to designing a convergent value iteration method. P4 ensures that
value-iteration updates involve concave, and thus tractable, optimization
problems.
Based on Lemma 1, we are able to propose the following theorem, which
demonstrates the existence of a unique fixed-point solution and outline a
method for deriving an optimal policy.
###### Theorem 2
The unique fixed-point solution $V^{*}(x,y)$ of $\mathbf{T}[V](x,y)=V(x,y)$
exists and equals the optimal value of optimization problem (8), i.e.,
$\displaystyle V^{*}(x,y)=\min_{\pi\in\Pi_{H}}\text{NCVaR}_{y,\vec{\kappa}}\big{(}\lim_{T\rightarrow\infty}\sum_{t=0}^{T}\gamma^{t}C(x_{t},a_{t})\,\big{|}\,x_{0}=x,\pi\big{)}.$
Although the problem is optimized over history-dependent policies, we
demonstrate that an optimal Markov policy exists, from which the optimal
history-dependent policy can be derived. Considering the easier implementation
of the Markov policy, we adopt the greedy policy w.r.t $V^{*}(x,y)$ as the
optimal policy.
We introduce Algorithm 1 to effectively solve the NCVaR optimization problem.
This solution is equivalent to addressing the original problem incorporating
an undetermined uncertain budget defined by the Radon-Nikodym derivative.
Algorithm 1 Value Iteration for NCVaR
1: for $x\in\mathcal{X}$ and $y\in\mathcal{Y}$ do
2: arbitrarily initialize $V_{0}(x,y)$
3: end for
4: for $t=0,1,2,\dots$ do
5: for $x\in\mathcal{X}$ and $y\in\mathcal{Y}$ do
6: $V_{t+1}(x,y)=\mathbf{T}[V_{t}](x,y)$
7: end for
8: end for
9: set $V^{*}(x,y)=\lim_{t\rightarrow\infty}V_{t}(x,y)$, then construct
$\pi^{*}$ as the greedy policy w.r.t $V^{*}(x,y)$
However, implementing Algorithm 1 directly can be challenging due to the
continuous nature of the set $\mathcal{Y}$. To address this issue, we employ
a sampling approach: we select multiple points in $\mathcal{Y}$ and
subsequently use linear interpolation to derive the value function $V$. To
guarantee convergence, we need the following assumption on the initial value
function $V_{0}$.
###### Assumption 2
The initial value function $V_{0}(x,y)$ is continuous and bounded in
$y\in\mathcal{Y}$ for any $x\in\mathcal{X}$. Also, $yV_{0}(x,y)$ is concave in
$y\in\mathcal{Y}$.
Let $N(x)$ denote the number of sample points, and let
$Y(x)=\{y_{1},y_{2},\dots,y_{N(x)}\}\in[0,1]^{N(x)}$ be the corresponding
confidence level set, with $y_{1}=0$ and $y_{N(x)}=1$. To perform linear
interpolation of $yV(x,y)$, we define the interpolation function
$\mathcal{I}_{x}[V]$ as follows:
$\displaystyle\mathcal{I}_{x}[V](y)=y_{i}V(x,y_{i})+\frac{y_{i+1}V(x,y_{i+1})-y_{i}V(x,y_{i})}{y_{i+1}-y_{i}}(y-y_{i}),$
(9)
where $y_{i}$ and $y_{i+1}$ are the closest sample points such that
$y\in[y_{i},y_{i+1}]$. With this, we introduce the interpolated Bellman
operator for NCVaR, denoted $\mathbf{T}_{\mathcal{I}}[V]$:
$\displaystyle\mathbf{T}_{\mathcal{I}}[V](x,y)=\min_{a\in\mathcal{A}}\Big{[}C(x,a)+\gamma\max_{\xi\in\mathcal{U}_{\text{NCVaR}}(y,\vec{\kappa}(x,a),P(\cdot|x,a))}\sum_{x^{\prime}\in\mathcal{X}}\frac{\mathcal{I}_{x^{\prime}}[V](y\xi(x^{\prime}))}{y}\,P(x^{\prime}|x,a)\Big{]}.$
(10)
An essential observation regarding the interpolated Bellman operator is that
it also exhibits the properties stated in Lemma 1. This can be demonstrated by
employing a similar approach as used in the proof of Lemma 1. Moreover, we
present a more practical and applicable version of the value iteration
algorithm in Algorithm 2. This algorithm utilizes the interpolated Bellman
operator and leverages linear interpolation to achieve the near-optimal value
function and near-optimal policy.
Algorithm 2 NCVaR Value Iteration with Linear Interpolation
1: choose $Y(x)$, $V_{0}(x,y)$ satisfying Assumption 2
2: for $t=0,1,2,\dots$ do
3: for $x\in\mathcal{X}$ and $y\in Y(x)$ do
4: $V_{t+1}(x,y)=\mathbf{T}_{\mathcal{I}}[V_{t}](x,y)$
5: end for
6: end for
7: set $V^{*}(x,y)=\lim_{t\rightarrow\infty}V_{t}(x,y)$, then construct
$\pi^{*}$ as the greedy policy w.r.t $V^{*}(x,y)$
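The computational core of Algorithm 2 is the inner maximization in (10).
Since $\mathcal{I}_{x^{\prime}}[V]$ interpolates the concave function
$z\mapsto zV(x^{\prime},z)$ (property P4), it equals the pointwise minimum of
its chord lines, so the inner problem can be written as a linear program with
epigraph variables. The sketch below is one possible implementation under
this observation (ours, not the authors' code); it assumes the interpolation
grids cover $[0,\vec{\kappa}(x,a)]$:

```python
import numpy as np
from scipy.optimize import linprog

def inner_max(y, kappa, p_next, grids, gvals):
    # inner maximization of (10) at one (x, a, y): maximize
    #   sum_k p_next[k] * I_k(y * xi_k) / y
    # over { xi : 0 <= xi_k <= kappa / y, sum_k p_next[k] * xi_k = 1 },
    # where I_k interpolates the concave data (grids[k], gvals[k]) for z*V(x'_k, z)
    n = len(p_next)
    c = np.concatenate([np.zeros(n), -p_next])       # maximize sum_k p_k * t_k
    A_ub, b_ub = [], []
    for k in range(n):
        zg, g = np.asarray(grids[k]), np.asarray(gvals[k])
        slopes = np.diff(g) / np.diff(zg)
        intercepts = g[:-1] - slopes * zg[:-1]
        for a_j, b_j in zip(slopes, intercepts):     # t_k <= a_j * (y * xi_k) + b_j
            row = np.zeros(2 * n)
            row[n + k], row[k] = 1.0, -a_j * y
            A_ub.append(row); b_ub.append(b_j)
    A_eq = np.concatenate([p_next, np.zeros(n)])[None, :]
    bounds = [(0.0, kappa / y)] * n + [(None, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
    return -res.fun / y

grids = [np.array([0.0, 0.5, 1.0])] * 2                         # sample points z
gvals = [np.array([0.0, 0.6, 1.0]), np.array([0.0, 0.8, 1.2])]  # concave z*V data
print(inner_max(0.5, 1.0, np.array([0.5, 0.5]), grids, gvals))
```

A full update $\mathbf{T}_{\mathcal{I}}[V](x,y)$ then takes the minimum over
actions of $C(x,a)+\gamma$ times this inner value.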
## V Experiment
In this study, we adopt an experimental setup that aligns with previous works
[16, 34], ensuring comparability and consistency of our results. We use a
$64\times 53$ grid world RL environment whose state space consists of all
positions. The agent starts at $(60,50)$, aiming to reach $(60,2)$. It can
move east, south, west, or north, transitioning to the intended adjacent
state with probability $0.95$, or to each of the other three neighboring
states with probability $0.05/3$. The environment has $80$ obstacles;
colliding with one incurs a cost of $40$, while a safe movement costs $1$.
The agent’s goal is to find a secure and cost-effective path. For value
iteration with linear interpolation, we use $21$ sample points, following the
rule $y_{i+1}=\theta y_{i}$ for $i=1,2,\dots,20$.
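A minimal sketch of the transition model just described is given below (ours;
in particular, clipping moves at the grid boundary is our assumption, since
the boundary behavior is not specified above):

```python
H, W = 64, 53                                  # grid dimensions
moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]     # east, south, west, north

def step_distribution(state, action):
    # p.m.f. over successor cells: 0.95 to the intended neighbor and
    # 0.05/3 to each of the other three, with moves clipped at the boundary
    i, j = state
    probs = {}
    for k, (di, dj) in enumerate(moves):
        ni, nj = min(max(i + di, 0), H - 1), min(max(j + dj, 0), W - 1)
        pr = 0.95 if k == action else 0.05 / 3.0
        probs[(ni, nj)] = probs.get((ni, nj), 0.0) + pr
    return probs

print(step_distribution((60, 50), 3))          # start state, moving north
```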
(a) $\alpha=0.48$, no uncertainty
(b) $\alpha=0.48$, $K_{\text{RN}}=2$
(c) $\alpha=0.48$, $K_{\text{KL}}=2$
(d) $\alpha=0.48$, $K_{\text{unfix}}\in[1,2]$
Figure 1: Optimal value function and path in robust CVaR optimization across
various uncertainty sets.
We first validate our approach for a fixed uncertain budget using the Radon-
Nikodym derivative and the KL divergence. This involves visualizing the
optimal value function with color variations (a bluer color indicates a lower
risk, while a yellower color indicates a higher risk) and tracing the optimal
path as a red line (Figure 1(a)). In Figures 1(a), 1(b) and 1(c), we select a
confidence level of $\alpha=0.48$ and an uncertain budget of $K=2$ for both RN
derivative and KL divergence. Consequently, we obtain
$\alpha^{\prime}_{\text{CVaR}}=0.24$ and $\alpha^{\prime}_{\text{EVaR}}=0.03$,
which indicates that the new optimal policy will exhibit a more risk-averse
behavior compared to the original one. Accordingly, the optimal path becomes
longer and is positioned closer to obstacles, aligning with the result that
the value function is larger. We further assess Algorithm 2 for decision-
dependent cases, setting the uncertain budget range to $[1,2]$. As a result,
for a fixed current state $x$, the new confidence level on the right side of
the decomposition theorem significantly deviates from the fixed case. This
increased deviation leads to the agent becoming more risk-averse as shown in
Figure 1(d). In conclusion, our algorithms effectively induce risk-averse
policies, equipping agents to navigate more cautiously in uncertain
environments. The experiments validate our methodology’s efficacy in guiding
agents towards safer decision-making strategies.
## VI Conclusion and Future Direction
In this study, we have conducted a comprehensive and novel analysis of robust
CVaR-based risk-sensitive RL within the framework of RMDP. We have
successfully addressed robust CVaR optimization in the presence of fixed
uncertain budgets while adopting a rectangular ambiguity set. We have
introduced a novel risk measure NCVaR and devised NCVaR value iteration
algorithms to solve the challenges associated with state-action dependent
uncertainty. Furthermore, we have demonstrated the convergence of our
algorithms through theoretical analysis. We have validated the proposed
approaches through simulation experiments, and the results showcased the
effectiveness and practicality of our methods. This paper leaves several
interesting directions for future works, including the extension of robustness
analysis to a broader spectrum of uncertainty sets and a deeper exploration of
NCVaR within risk-sensitive RL.
## References
* [1] K. Zhang, T. Sun, Y. Tao, S. Genc, S. Mallya, and T. Basar, “Robust Multi-Agent Reinforcement Learning with Model Uncertainty,” _Advances in Neural Information Processing Systems_ , vol. 33, pp. 10571–10583, 2020.
* [2] Y. Le Tallec, “Robust, Risk-Sensitive, and Data-Driven Control of Markov Decision Processes,” Ph.D. dissertation, Massachusetts Institute of Technology, 2007.
* [3] W. Wiesemann, D. Kuhn, and B. Rustem, “Robust Markov Decision Processes,” _Mathematics of Operations Research_ , vol. 38, no. 1, pp. 153–183, 2013.
* [4] C. Ho, M. Petrik, and W. Wiesemann, “Fast Bellman Updates for Robust MDPs,” in _Proc. International Conference on Machine Learning_ , Stockholm, Sweden, Jul. 2018, pp. 1979–1988.
* [5] A. Nilim and L. Ghaoui, “Robustness in Markov Decision Problems with Uncertain Transition Matrices,” _Advances in Neural Information Processing Systems_ , vol. 16, 2003.
* [6] G. Iyengar, “Robust Dynamic Programming,” _Mathematics of Operations Research_ , vol. 30, no. 2, pp. 257–280, 2005.
* [7] A. Ben-Tal, D. Den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen, “Robust Solutions of Optimization Problems Affected by Uncertain Probabilities,” _Management Science_ , vol. 59, no. 2, pp. 341–357, 2013.
* [8] Y. Wang and S. Zou, “Online Robust Reinforcement Learning with Model Uncertainty,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 7193–7206, 2021.
* [9] C. Ho, M. Petrik, and W. Wiesemann, “Robust Phi-Divergence MDPs,” _arXiv preprint arXiv:2205.14202_ , 2022.
* [10] O. Mihatsch and R. Neuneier, “Risk-Sensitive Reinforcement Learning,” _Machine Learning_ , vol. 49, pp. 267–290, 2002.
* [11] P. Artzner, F. Delbaen, J. Eber, and D. Heath, “Coherent Measures of Risk,” _Mathematical Finance_ , vol. 9, no. 3, pp. 203–228, Jul. 1999.
* [12] H. Kashima, “Risk-Sensitive Learning via Minimization of Empirical Conditional Value-at-Risk,” _IEICE Transactions on Information and Systems_ , vol. 90, no. 12, pp. 2043–2052, Dec. 2007.
* [13] A. Tamar, Y. Glassner, and S. Mannor, “Policy Gradients beyond Expectations: Conditional Value-at-Risk,” _arXiv preprint arXiv:1404.3862_ , 2014.
* [14] Y. Chow and M. Ghavamzadeh, “Algorithms for CVaR Optimization in MDPs,” _Advances in Neural Information Processing Systems_ , vol. 27, 2014.
* [15] L. Prashanth, “Policy Gradients for CVaR-Constrained MDPs,” in _International Conference on Algorithmic Learning Theory_. Springer, 2014, pp. 155–169.
* [16] Y. Chow, A. Tamar, S. Mannor, and M. Pavone, “Risk-Sensitive and Robust Decision-Making: A CVaR Optimization Approach,” _arXiv preprint arXiv:1506.02188_ , 2015.
* [17] A. Tamar, Y. Glassner, and S. Mannor, “Optimizing the CVaR via Sampling,” in _Proc. AAAI Conference on Artificial Intelligence_ , Austin, TX, Feb. 2015, pp. 2993–2999.
* [18] Y. Chow, M. Ghavamzadeh, L. Janson, and M. Pavone, “Risk-Constrained Reinforcement Learning with Percentile Risk Criteria,” _The Journal of Machine Learning Research_ , vol. 18, no. 1, pp. 6070–6120, Jan. 2017.
* [19] S. Stanko and K. Macek, “Risk-Averse Distributional Reinforcement Learning: A CVaR Optimization Approach,” in _Proc. International Joint Conference on Computational Intelligence (IJCCI)_ , 2019, pp. 412–423.
* [20] X. Ma, L. Xia, Z. Zhou, J. Yang, and Q. Zhao, “Dsac: Distributional Soft Actor Critic for Risk-Sensitive Reinforcement Learning,” _arXiv preprint arXiv:2004.14547_ , 2020.
* [21] R. Keramati, C. Dann, A. Tamkin, and E. Brunskill, “Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 34, no. 04, 2020, pp. 4436–4443.
* [22] M. Godbout, M. Heuillet, S. Chandra, R. Bhati, and A. Durand, “CARL: Conditional-Value-at-Risk Adversarial Reinforcement Learning,” _arXiv preprint arXiv:2109.09470_ , Sep. 2021.
* [23] D. Kim and S. Oh, “Trc: Trust Region Conditional Value at Risk for Safe Reinforcement Learning,” _IEEE Robotics and Automation Letters_ , vol. 7, no. 2, pp. 2621–2628, 2022.
* [24] C. Ying, X. Zhou, H. Su, D. Yan, N. Chen, and J. Zhu, “Towards Safe Reinforcement Learning via Constraining Conditional Value-at-Risk,” _arXiv preprint arXiv:2206.04436_ , 2022.
* [25] S. H. Lim and I. Malik, “Distributional Reinforcement Learning for Risk-Sensitive Policies,” _Advances in Neural Information Processing Systems_ , vol. 35, pp. 30977–30989, 2022.
* [26] I. Greenberg, Y. Chow, M. Ghavamzadeh, and S. Mannor, “Efficient Risk-Averse Reinforcement Learning,” _Advances in Neural Information Processing Systems_ , vol. 35, pp. 32639–32652, 2022.
* [27] K. Wang, N. Kallus, and W. Sun, “Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR,” in _International Conference on Machine Learning_. PMLR, 2023, pp. 35864–35907.
* [28] Q. Zhang, S. Leng, X. Ma, Q. Liu, X. Wang, B. Liang, Y. Liu, and J. Yang, “CVaR-Constrained Policy Optimization for Safe Reinforcement Learning,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2024.
* [29] O. Nohadani and K. Sharma, “Optimization Under Decision-Dependent Uncertainty,” _SIAM Journal on Optimization_ , vol. 28, no. 2, pp. 1773–1795, 2018.
* [30] C. Gong, Q. He, Y. Bai, Z. Yang, X. Chen, X. Hou, X. Zhang, Y. Liu, and G. Fan, “The $f$-Divergence Reinforcement Learning Framework,” _arXiv preprint arXiv:2109.11867_ , 2021.
* [31] R. T. Rockafellar and S. Uryasev, “Optimization of Conditional Value-at-Risk,” _Journal of Risk_ , vol. 2, pp. 21–42, Apr. 2000.
* [32] M. Ang, J. Sun, and Q. Yao, “On the Dual Representation of Coherent Risk Measures,” _Annals of Operations Research_ , vol. 262, no. 1, pp. 29–46, Mar. 2018.
* [33] A. Ahmadi-Javid, “Entropic Value-at-Risk: A New Coherent Risk Measure,” _Journal of Optimization Theory and Applications_ , vol. 155, no. 3, pp. 1105–1123, 2012.
* [34] X. Ni and L. Lai, “Risk-Sensitive Reinforcement Learning via Entropic-VaR Optimization,” in _Proc. Asilomar Conference on Signals, Systems, and Computers_ , Pacific Grove, CA, Oct. 2022, pp. 953–959.
* [35] F. Luo and S. Mehrotra, “Distributionally Robust Optimization with Decision Dependent Ambiguity Sets,” _Optimization Letters_ , vol. 14, pp. 2565–2594, 2020.
* [36] G. C. Pflug and A. Pichler, “Time-Consistent Decisions and Temporal Decomposition of Coherent Risk Functionals,” _Mathematics of Operations Research_ , vol. 41, no. 2, pp. 682–699, May 2016.
* [37] D. Bertsekas, _Dynamic Programming and Optimal Control: Volume I_. Nashua, NH: Athena Scientific, 2012, vol. 1.
# Progressive Feature Self-Reinforcement for
Weakly Supervised Semantic Segmentation
Jingxuan He1, Lechao Cheng1, Chaowei Fang3, Zunlei Feng2, Tingting Mu4, Mingli
Song2 corresponding author.
###### Abstract
Compared to conventional semantic segmentation with pixel-level supervision,
weakly supervised semantic segmentation (WSSS) with image-level labels poses
the challenge that the model tends to focus only on the most discriminative
regions, resulting in a performance gap relative to fully supervised settings.
A typical manifestation is diminished precision on object boundaries, which
degrades the accuracy of WSSS. To alleviate this issue, we propose to
adaptively partition the image content into certain regions (e.g., confident
foreground and background) and uncertain regions (e.g., object boundaries and
misclassified categories) for separate processing. For uncertain cues, we
propose an adaptive masking strategy and seek to recover the local information
with self-distilled knowledge. We further assume that the unmasked confident
regions should be robust enough to preserve the global semantics. Building
upon this, we introduce a complementary self-enhancement method that
constrains the semantic consistency between these confident regions and an
augmented image with the same class labels. Extensive experiments conducted on
PASCAL VOC 2012 and MS COCO 2014 demonstrate that our proposed single-stage
approach for WSSS not only outperforms state-of-the-art benchmarks remarkably
but also surpasses multi-stage methodologies that trade complexity for
accuracy. The code can be found at https://github.com/Jessie459/feature-self-reinforcement.
## Introduction
Weakly supervised semantic segmentation (WSSS) reduces the cost of annotating
“strong” pixel-level labels by using “weak” labels such as bounding boxes
(Dai, He, and Sun 2015; Song et al. 2019), scribbles (Lin et al. 2016; Vernaza
and Chandraker 2017), points (Bearman et al. 2016; Su et al. 2022) and image-
level class labels (Araslanov and Roth 2020; Ru et al. 2022; Wu et al. 2023;
Ru et al. 2023). Among these, image-level class labels are the most
affordable, but challenging to exploit. A commonly used WSSS approach based on
image-level class labels typically includes the following steps: (1) to train
a neural network for image classification; (2) to use the network to generate
class activation maps (CAMs) (Zhou et al. 2016) as seed regions; (3) to refine
the CAMs to pseudo segmentation labels that will be used as the ground truth
for supervising a segmentation network. These steps can either be implemented
as separate stages or as a single collaborative stage, and single-stage
frameworks are usually more efficient as they streamline the training
pipeline. In general, high-quality pseudo labels lead to superior semantic
segmentation performance. In this work, we focus on developing an effective
single-stage approach to generate more accurate pseudo labels from image-level
class labels.
Figure 1: Our main idea. The flawed CAM only identifies discriminative
regions. To solve this, we propose to partition the image into uncertain
regions (e.g., object boundaries) and confident regions (e.g., the main body
of an object) and reinforce features of these regions in a complementary way.
Unfortunately, CAMs are essentially flawed because they are intended for
classification, i.e., they strive to identify the most discriminative regions
of an object aiming at improved classification accuracy. To tackle this, one
can improve the initial seeds (Lee et al. 2019; Wang et al. 2020) or refine
pseudo labels (Ahn, Cho, and Kwak 2019; Chen et al. 2020), by expanding
activations or labels to semantically consistent pixels in the neighborhood.
Recent studies have found that the restricted receptive field of convolution
negatively affects the recognition of integral objects (Ru et al. 2022, 2023)
and use vision transformers (Dosovitskiy et al. 2020) to model global
relationships for improvement. However, this does not resolve the issue of CAM
seeds or pseudo labels, and we still observe empirically high uncertainty in
(1) boundary regions between foreground objects and background, and (2)
misclassified regions within multiple semantically-different objects. In the
example of Figure 1, the generated CAM is uncertain about the two arms of the
person on the chair, and the boundary between the foreground (person and
chair) and the background is also unclear. These uncertain regions are easily
confused by obscure semantic clues due to the absence of pixel-level
supervision.
Our goal is to clarify the visual semantics of uncertain regions mentioned
above. We emphasize that the local visual patterns should be explicitly
modeled and captured. As can be seen from Figure 1, the head and upper thighs
are well recognized, while the recognition of the arms and lower legs needs
improvement. An understanding that the arms and lower legs also belong to the
person should be established using the local visual context. Although some
methods can deal with noisy object boundaries by employing off-the-shelf
saliency detection models for rich object contours (Lee et al. 2021b; Li, Fan,
and Zhang 2022), they overlook uncertain regions caused by low confidence
within objects. Alternatively, it has been proposed to attain the training
objective using knowledge gathered from the past training iterations, i.e.,
self-distillation (Caron et al. 2021). Encouraged by its success, we discard
saliency detection models and instead take advantage of self-distillation in
our model training.
To this end, we propose a progressive self-reinforcement method to explore and
strengthen semantics over uncertain regions. To distinguish uncertain
regions from confident ones, we define those with intermediate CAM scores as
uncertain regions, since a very low/high score strongly indicates the
background/foreground. Specifically, we propose to mask uncertain features
(equivalent to image patch tokens) and learn to recover the original
information with the help of an online momentum teacher. This masking strategy
aligns with a state-of-the-art pre-training paradigm called masked image
modeling (MIM) that brings locality inductive bias to the model (Xie et al.
2023). We upgrade its random masking strategy with semantic uncertainty so
that the network can focus on uncertain features controlled by the masking
ratio. This design is beneficial to facilitate features in both object
boundaries and misclassified regions. Assuming that confident features should
be robust enough to present global semantics, we also introduce a
complementary method that constrains semantic consistency between two
augmented views with the same class labels. Our proposal can be seamlessly
integrated into a vision-transformer-based single-stage WSSS framework. We
summarize our main contributions as follows:
* •
We propose a novel WSSS approach, _progressive feature self-reinforcement_ ,
to effectively enhance the semantics of uncertain regions. The investigation
of uncertain regions, including both object boundaries and misclassified
categories, significantly improves WSSS performance.
* •
We design an adaptive masking strategy to identify uncertain regions. Unlike
most previous works that adopt additional saliency detection models, we locate
uncertain regions with the guidance of semantic-aware CAMs.
* •
Exhaustive experiments on PASCAL VOC 2012 (Everingham et al. 2010) and MS COCO
2014 (Lin et al. 2014) show that our method outperforms SOTA single-stage
competitors, even better than existing sophisticated multi-stage methods.
## Related Work
Figure 2: Overview of our framework. For the student pipeline, we forward one
view through the encoder, and the encoded patch tokens are fed into the
classifier for classification and the decoder for semantic segmentation,
separately. The other view is masked and sequentially forwarded through the
encoder, the aggregation module, and the projector. For the teacher pipeline,
both views are propagated through the encoder, the aggregation module, and the
projector. The teacher network requires no gradient and is an exponential
moving average (EMA) of the student network.
### Weakly Supervised Semantic Segmentation
Multi-stage WSSS methods adopt a classification model to generate CAMs as
pseudo labels, then train a segmentation model for evaluating the final
performance. To overcome the commonly acknowledged weakness that CAMs can only
focus on discriminative regions, several works aim at improving the training
dynamic by erasing and seeking (Hou et al. 2018) or adversarial learning (Yoon
et al. 2022). Some recent approaches also adopt vision transformer
(Dosovitskiy et al. 2020) for the WSSS task, considering its favorable long-
range modeling capability. TS-CAM (Gao et al. 2021) proposes to couple class-
agnostic attention maps with semantic-aware patch tokens to promote object
localization. MCTformer (Xu et al. 2022) introduces multiple class tokens so
that class-specific attention maps can be generated. Other approaches
incorporate extra data into training or post-processing, e.g., saliency maps
(Lee et al. 2021b) or contrastive language-image pre-training (CLIP) models
(Lin et al. 2023). Our solution aims at improving pseudo labels as well, but
it is integrated into a single-stage framework for simplicity, and it requires
neither extra data nor off-the-shelf saliency detection models.
Single-stage WSSS methods treat multiple stages, such as classification,
pseudo-label refinement, and segmentation, as a whole and perform joint
training. 1Stage
(Araslanov and Roth 2020) achieves comparable performance with dominant multi-
stage approaches by ensuring local consistency, semantic fidelity and mask
completeness. AFA (Ru et al. 2022) explores the intrinsic architecture of ViT
and derives reliable semantic affinity from multi-head self-attention for
pseudo label refinement. ToCo (Ru et al. 2023) tackles the issue of over-
smoothing observed in ViT by supervising the final patch tokens with
intermediate knowledge. Despite the simplified and streamlined training
procedure, single-stage methods often suffer from inferior performance
compared with multi-stage ones. In this work, we achieve superior semantic
segmentation results using a single-stage framework by discovering and
reinforcing underlying semantic layouts.
### Self-Distillation
Self-distillation associates self-supervised learning (He et al. 2020) with
knowledge distillation (Hinton, Vinyals, and Dean 2015), where knowledge is
transferred and learned without resorting to any labels. It is primarily
designed to compress large networks, with the hope of promoting performance on
downstream tasks by mimicking the output of a frozen teacher (Noroozi et al.
2018; Zhang et al. 2023; Wang et al. 2023). Recently, some approaches (Caron
et al. 2021; Zhou et al. 2021) build the teacher dynamically during training,
where the teacher adopts the same architecture as that of the student and is
updated with the knowledge of past iterations. The resulting framework
simplifies the training process and achieves compelling results compared with
other self-training frameworks. This motivates us to adapt the core idea of
self-distillation to the WSSS task for the purpose of rectifying inaccurate
object boundaries as well as improving discriminative object features.
## Methodology
### A Single-Stage Framework for WSSS
The proposed single-stage framework for WSSS is illustrated in Figure 2. We
use an encoder-decoder architecture to accomplish semantic segmentation with
image-level supervision. The encoder is a vision transformer supervised by
image-level class labels. We adopt patch token contrast (PTC) (Ru et al. 2023)
for affinity learning as it is crucial to constrain affinities between patch
tokens of the last layer against over-smoothing (Gong et al. 2021). As for
semantic segmentation, we borrow a lightweight convolutional decoder from
DeepLab (Chen et al. 2017), which is supervised by pseudo segmentation labels
that are generated from CAMs. An aggregation module is used to summarize patch
tokens into one class token and an MLP-based projector to transform all tokens
into an appropriate feature space for feature learning. To improve model
training, we enable a student and a teacher pipeline to achieve self-
distillation.
Formally, let $\mathcal{F}$ be the transformer encoder with its output
embedding dimension denoted by $D$, $\mathcal{P}$ the projector, $\mathcal{M}$
the masking operator, and $\mathcal{A}$ the aggregating operator. We start by
explaining the student pipeline. An input image is randomly augmented into
two distorted views: $x_{1}$ and $x_{2}$. Each view is subsequently divided
into $HW$ non-overlapping patch tokens, denoted as
$T_{1}=\left\\{t_{1}^{(i)}\right\\}_{i=1}^{HW}$ and
$T_{2}=\left\\{t_{2}^{(i)}\right\\}_{i=1}^{HW}$, respectively. We forward
$T_{1}$ into the encoder to obtain the logits
$Z_{1}=\mathcal{F}(T_{1})\in\mathbb{R}^{HW\times D}$, which are then fed into
the classifier for classification, and also the decoder for segmentation,
following the standard image classification and segmentation setup. To
reinforce features, we divide $T_{2}$ into uncertain and confident tokens and
mask the uncertain ones with learnable parameters, for which the uncertain
token selection and masking approaches will be explained later. The resulting
masked view, denoted as $\hat{T}_{2}=\mathcal{M}(T_{2})$, is also fed into the
encoder to obtain $\hat{Z}_{2}=\mathcal{F}\left(\hat{T}_{2}\right)$.
Embeddings of the unmasked confident tokens in $\hat{Z}_{2}$ are summarized
into a class token by an aggregation module, denoted by
$\mathcal{A}\left(\hat{Z}_{2}\right)\in\mathbb{R}^{1\times D}$. This class
token is concatenated with $\hat{Z}_{2}$, and further projected and normalized
to resemble probability distributions in
$\hat{P}_{2}\in\mathbb{R}^{(1+HW)\times D}$, as
$\hat{P}_{2}=\sigma\left(\mathcal{P}\left(\left[\mathcal{A}\left(\hat{Z}_{2}\right);\hat{Z}_{2}\right]\right)\right),$
(1)
where $\sigma$ is the row-wise softmax function, and $[;]$ the concatenation.
We will explain the aggregation design later.
The teacher shares the same architecture as the student’s encoder and
projector, and has a similar pipeline described by Eq. (1), except it takes
the unmasked inputs $T_{1}$ and $T_{2}$, and returns two distributions $P_{1}$
and $P_{2}$ for the two views, respectively. The student output $\hat{P}_{2}$
and the teacher outputs $P_{1}$ and $P_{2}$ are used for feature reinforcement
training.
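As a rough illustration of the two pipelines, consider the following PyTorch-style sketch. The module interfaces (encoder, classifier, decoder, mask_op, aggregator, projector) and the pooling choice for classification are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def student_forward(encoder, classifier, decoder, mask_op,
                    aggregator, projector, T1, T2):
    # View 1: classification and segmentation branches on unmasked tokens.
    Z1 = encoder(T1)                          # (HW, D) patch-token embeddings
    cls_logits = classifier(Z1.mean(dim=0))   # mean pooling is an assumption
    seg_logits = decoder(Z1)
    # View 2: mask uncertain tokens, re-encode, aggregate, project (Eq. (1)).
    Z2_hat = encoder(mask_op(T2))             # (HW, D)
    cls_tok = aggregator(Z2_hat)              # (1, D) from unmasked tokens
    P2_hat = F.softmax(projector(torch.cat([cls_tok, Z2_hat], dim=0)), dim=-1)
    return cls_logits, seg_logits, P2_hat     # P2_hat: (1+HW, D')

@torch.no_grad()
def teacher_forward(encoder, aggregator, projector, T):
    # The teacher consumes the *unmasked* view and receives no gradients.
    Z = encoder(T)
    return F.softmax(projector(torch.cat([aggregator(Z), Z], dim=0)), dim=-1)
```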
### Uncertain Patch Token Selection
We select uncertain patch tokens under the guidance of semantic-aware CAMs,
generated using the logits computed earlier with the first view, i.e.,
$Z_{1}=\mathcal{F}(T_{1})$. We linearly project $Z_{1}$ using the weights
$W\in\mathbb{R}^{C\times D}$ of the classifier for image classification, where
$C$ is the class number, and then normalize it by the $\operatorname{ReLU}$
function and $\operatorname{min-max}$ normalization. The normalized CAM,
denoted as $M_{c}\in\mathbb{R}^{HW\times C}$ $(0\leq M_{c}\leq 1)$, is defined by
$M_{c}:=\operatorname{min-max}\left(\operatorname{ReLU}\left(Z_{1}W^{\top}\right)\right).$ (2)
It encodes the semantic uncertainty of each patch, driven by the CAM scores
$Z_{1}W^{\top}$.
Next, we identify the uncertain regions based on the normalized CAM and mask
the uncertain patches. Features in non-reliable regions are considered
uncertain. However, some reliable regions can be wrongly labeled, and their
corresponding features can also be uncertain. To remedy this, we propose an
adaptive masking strategy,
resulting in a soft masking vector $M_{s}\in\mathbb{R}^{HW}$ with each element
given as
$M_{s}^{(i)}=\begin{cases}u_{i}+1,&\text{if }\beta_{l}<\max\left(M_{c}^{(i,:)}\right)<\beta_{h},\\ u_{i},&\text{otherwise},\end{cases}$ (3)
where $u_{i}\sim\text{U}(0,1)$ draws from a standard uniform distribution and
enables a stochastic selection process. The above use of two background
thresholds $0<\beta_{l}<\beta_{h}<1$ for dividing patches into reliable and
non-reliable ones is inspired by Zhang et al. (2020) and Ru et al. (2022),
which suggests an intermediate score to be a sign of uncertainty. As a result,
elements in $M_{s}$ with larger values suggest uncertain patches.
We use a masking ratio $0<r<1$ to control the number of selected uncertain
patches, and define the following binary selection mask
$M_{b}\in\mathbb{R}^{HW}$ with each element as
$M_{b}^{(i)}=\begin{cases}1,&\text{if }i\in\operatorname*{arg\,max}_{\operatorname{top}(k)}(M_{s}),\ k:=\lfloor HW\cdot r\rfloor,\\ 0,&\text{otherwise},\end{cases}$ (4)
where $\lfloor\cdot\rfloor$ denotes the floor function. The selected uncertain
patches, flagged by 1 in $M_{b}$, correspond to those top-$k$ large-valued
elements in $M_{s}$. Our masking strategy is designed to relax the hard
foreground-background thresholds through the masking ratio $r$. When more than
$k$ patches are flagged as uncertain by
$\beta_{l}<\max\left(M_{c}^{(i,:)}\right)<\beta_{h}$, the selection is
conducted randomly among them through $u_{i}$. When fewer patches are flagged,
some confident patches are also selected. The original features of the
selected tokens are replaced by learnable parameters of the same feature
dimension.
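A minimal NumPy sketch of Eqs. (2)-(4) follows, assuming `Z1` holds the HW×D patch embeddings and `W` the C×D classifier weights; the epsilon, defaults, and all names are ours.

```python
import numpy as np

def adaptive_mask(Z1, W, beta_l=0.2, beta_h=0.7, r=0.4, rng=None):
    """Select uncertain patch tokens to mask, following Eqs. (2)-(4)."""
    rng = rng or np.random.default_rng()
    HW = Z1.shape[0]

    # Eq. (2): normalized CAM in [0, 1], shape (HW, C).
    cam = np.maximum(Z1 @ W.T, 0.0)                              # ReLU
    cam = (cam - cam.min(0)) / (cam.max(0) - cam.min(0) + 1e-8)  # min-max

    # Eq. (3): soft scores; intermediate max-CAM scores get a +1 priority.
    u = rng.uniform(0.0, 1.0, size=HW)
    top = cam.max(axis=1)
    M_s = np.where((top > beta_l) & (top < beta_h), u + 1.0, u)

    # Eq. (4): binary mask over the top-k scoring patches, k = floor(HW * r).
    k = int(np.floor(HW * r))
    M_b = np.zeros(HW, dtype=bool)
    M_b[np.argsort(M_s)[-k:]] = True
    return M_b
```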
Figure 3: Illustration of the aggregation module. This module is composed of
several aggregation blocks, where each block alternates in turn a cross-
attention layer and a feed-forward layer. The cross-attention layer computes
attention between a class token and a sequence of unmasked patch tokens.
### Certain Feature Aggregation
We design an attentive aggregation module to compress the embeddings of a
sequence of $N=HW$ patch tokens, stored in $\hat{Z}\in\mathbb{R}^{N\times D}$,
into one class token embedding $\bar{Z}\in\mathbb{R}^{1\times D}$. As shown in
Figure 3, this module contains several aggregation blocks, where each block
contains a masked cross-attention (MCA) layer and a feed-forward (FF) layer,
given as
$\displaystyle\bar{Z}^{(l)}_{(o)}$
$\displaystyle=\bar{Z}^{(l)}+\text{MCA}\left(\eta\left(\left[\bar{Z}^{(l)};\hat{Z}^{(l)}\right]\right)\right),$
(5) $\displaystyle\bar{Z}^{(l+1)}$
$\displaystyle=\bar{Z}^{(l)}_{(o)}+\text{FF}\left(\eta\left(\bar{Z}^{(l)}_{(o)}\right)\right),$
where $l$ denotes the layer index and $\eta$ is the LayerNorm (Ba, Kiros, and
Hinton 2016).
MCA is analogous to self-attention (Vaswani et al. 2017), except that it
computes attention between the class token and a set of unmasked patch tokens.
We parameterize MCA with projection weights
$W_{Q},W_{K},W_{V},W_{O}\in\mathbb{R}^{D\times D}$, and calculate the queries
$Q\in\mathbb{R}^{1\times D}$, keys $K\in\mathbb{R}^{N\times D}$ and values
$V\in\mathbb{R}^{N\times D}$ by projection, so that
$Q=\eta\left(\bar{Z}\right)W_{Q}^{\top},K=\eta\left(\hat{Z}\right)W_{K}^{\top},V=\eta\left(\hat{Z}\right)W_{V}^{\top}.$
(6)
Note that queries are derived from the class token, while keys and values are
calculated on patch tokens. The masked cross-attention
$A\in\mathbb{R}^{1\times N}$ is then formulated as
$A=\sigma\left(\frac{\left(1-M_{b}\right)\left(QK^{\top}\right)}{\sqrt{D}}\right).$
(7)
The output of MCA is computed as a weighted sum of values, i.e.,
$\left(AV\right)W_{O}^{\top}$.
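The masked cross-attention step of Eqs. (6)-(7) can be sketched as follows (single head, unbatched, LayerNorm omitted for brevity; names are ours). Note that the sketch follows Eq. (7) literally, zeroing the scores of masked patches by multiplying with $(1-M_{b})$ before the softmax.

```python
import numpy as np

def masked_cross_attention(z_cls, Z_hat, M_b, W_Q, W_K, W_V, W_O):
    """Class token attends to unmasked patch tokens, per Eqs. (6)-(7)."""
    D = z_cls.shape[-1]
    Q = z_cls @ W_Q.T                             # (1, D), query from class token
    K = Z_hat @ W_K.T                             # (N, D), keys from patch tokens
    V = Z_hat @ W_V.T                             # (N, D), values from patch tokens
    scores = (1 - M_b) * (Q @ K.T) / np.sqrt(D)   # Eq. (7): mask then scale
    A = np.exp(scores - scores.max())
    A = A / A.sum()                               # row-wise softmax, shape (1, N)
    return (A @ V) @ W_O.T                        # weighted sum, then projection
```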
### Feature Self-reinforcement
We adopt self-distillation (Caron et al. 2021; Zhou et al. 2021; Oquab et al.
2023) to improve the model training for feature reinforcement. As explained
earlier, given two distorted views of the same image, we compute one student
output $\hat{P}_{2}$ and two teacher outputs $P_{1}$ and $P_{2}$, whose first
element stores the aggregated class-token information, while the remaining
elements store the individual token content. We propose a self-reinforcement
loss
$\mathcal{L}_{u}$ for the uncertain tokens, as the cross-entropy loss between
each student’s patch token and its corresponding teacher’s patch token:
$\mathcal{L}_{u}=-\sum_{i=2}^{1+N}M_{b}^{(i-1)}P_{2}^{(i)}\log\hat{P}_{2}^{(i)},$
(8)
where $M_{b}$ is the mask in Eq. (4) to help select masked patch tokens. We
also conduct self-reinforcement for the confident tokens, formulated as the
cross-entropy loss on the two aggregated class tokens of the two views, given
as
$\mathcal{L}_{c}=-P_{1}^{(1)}\log\hat{P}_{2}^{(1)}.$ (9)
Following a common practice, we adopt the multi-label soft margin loss
$\mathcal{L}_{cls}$ for classification, the pixel-wise cross-entropy loss
$\mathcal{L}_{seg}$ for segmentation, and the cosine similarity loss
$\mathcal{L}_{aff}$ for affinity regularization. Denoting the weighting factors
by $\{\lambda_{i}\}_{i=1}^{5}$, the overall training objective is
$\mathcal{L}=\lambda_{1}\mathcal{L}_{cls}+\lambda_{2}\mathcal{L}_{aff}+\lambda_{3}\mathcal{L}_{seg}+\lambda_{4}\mathcal{L}_{u}+\lambda_{5}\mathcal{L}_{c}.$
(10)
It consolidates classification, segmentation and feature self-reinforcement
within a single-stage framework.
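The two self-reinforcement terms of Eqs. (8)-(9) can be sketched as below, assuming `P1`, `P2`, and `P2_hat` are (1+N)×D' arrays with the class token in the first row; the epsilon and all names are ours.

```python
import numpy as np

def fsr_losses(P1, P2, P2_hat, M_b, eps=1e-8):
    """Uncertain-token loss (Eq. 8) and confident class-token loss (Eq. 9)."""
    # Eq. (8): cross-entropy between teacher and student patch tokens,
    # restricted to the masked (uncertain) tokens selected by M_b.
    patch_t, patch_s = P2[1:], P2_hat[1:]               # drop class-token row
    L_u = -(M_b[:, None] * patch_t * np.log(patch_s + eps)).sum()
    # Eq. (9): cross-entropy between the teacher's class token of view 1
    # and the student's class token of the masked view 2.
    L_c = -(P1[0] * np.log(P2_hat[0] + eps)).sum()
    return L_u, L_c
```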
## Experiments
Method | Sup. | Net. | Val | Test
---|---|---|---|---
Multi-stage WSSS methods.
RIB (Lee et al. 2021a) | $\mathcal{I}+\mathcal{S}$ | RN101 | 70.2 | 70.0
EDAM (Wu et al. 2021) | $\mathcal{I}+\mathcal{S}$ | RN101 | 70.9 | 70.6
EPS (Lee et al. 2021b) | $\mathcal{I}+\mathcal{S}$ | RN101 | 71.0 | 71.8
SANCE (Li, Fan, and Zhang 2022) | $\mathcal{I}+\mathcal{S}$ | RN101 | 72.0 | 72.9
L2G (Jiang et al. 2022) | $\mathcal{I}+\mathcal{S}$ | RN101 | 72.1 | 71.7
RCA (Zhou et al. 2022) | $\mathcal{I}+\mathcal{S}$ | RN38 | 72.2 | 72.8
SEAM (Wang et al. 2020) | $\mathcal{I}$ | RN38 | 64.5 | 65.7
BES (Chen et al. 2020) | $\mathcal{I}$ | RN101 | 65.7 | 66.6
CPN (Zhang et al. 2021) | $\mathcal{I}$ | RN38 | 67.8 | 68.5
CDA (Su et al. 2021) | $\mathcal{I}$ | RN38 | 66.1 | 66.8
ReCAM (Chen et al. 2022) | $\mathcal{I}$ | RN101 | 68.5 | 68.4
URN (Li et al. 2022b) | $\mathcal{I}$ | RN101 | 69.5 | 69.7
ESOL (Li et al. 2022a) | $\mathcal{I}$ | RN101 | 69.9 | 69.3
$\dagger$ViT-PCM (Rossetti et al. 2022) | $\mathcal{I}$ | RN101 | 70.3 | 70.9
$\dagger$MCTformer (Xu et al. 2022) | $\mathcal{I}$ | RN38 | 71.9 | 71.6
$\dagger$OCR (Cheng et al. 2023) | $\mathcal{I}$ | RN38 | 72.7 | 72.0
$\dagger$BECO (Rong et al. 2023) | $\mathcal{I}$ | MiT-B2 | 73.7 | 73.5
$\dagger$MCTformer+ (Xu et al. 2023) | $\mathcal{I}$ | RN38 | 74.0 | 73.6
Single-stage WSSS methods.
RRM (Zhang et al. 2020) | $\mathcal{I}$ | RN38 | 62.6 | 62.9
1Stage (Araslanov and Roth 2020) | $\mathcal{I}$ | RN38 | 62.7 | 64.3
$\dagger$AFA (Ru et al. 2022) | $\mathcal{I}$ | MiT-B1 | 66.0 | 66.3
$\dagger$ToCo (Ru et al. 2023) | $\mathcal{I}$ | ViT-B | 71.1 | 72.2
$\dagger$Ours | $\mathcal{I}$ | ViT-B | 75.7 | 75.0
Table 1: Performance comparison of semantic segmentation on PASCAL VOC 2012 in terms of mIoU (%). Sup. denotes the supervision type. $\mathcal{I}$: image-level class labels. $\mathcal{S}$: off-the-shelf saliency maps. Net. denotes the segmentation network for multi-stage methods or the backbone for single-stage methods. RN38: Wide ResNet38 (Wu, Shen, and Van Den Hengel 2019), RN101: ResNet101 (He et al. 2016), MiT: Mix Transformer (Xie et al. 2021). $\dagger$ flags a transformer-based classification network or backbone.

Method | Sup. | Net. | Val
---|---|---|---
Multi-stage WSSS methods.
EPS (Lee et al. 2021b) | $\mathcal{I}+\mathcal{S}$ | RN101 | 35.7
RIB (Lee et al. 2021a) | $\mathcal{I}+\mathcal{S}$ | RN101 | 43.8
L2G (Jiang et al. 2022) | $\mathcal{I}+\mathcal{S}$ | RN101 | 44.2
CDA (Su et al. 2021) | $\mathcal{I}$ | RN38 | 33.2
URN (Li et al. 2022b) | $\mathcal{I}$ | RN101 | 40.7
ESOL (Li et al. 2022a) | $\mathcal{I}$ | RN101 | 42.6
$\dagger$MCTformer (Xu et al. 2022) | $\mathcal{I}$ | RN38 | 42.0
$\dagger$ViT-PCM (Rossetti et al. 2022) | $\mathcal{I}$ | RN101 | 45.0
$\dagger$OCR (Cheng et al. 2023) | $\mathcal{I}$ | RN38 | 42.5
BECO (Rong et al. 2023) | $\mathcal{I}$ | RN101 | 45.1
$\dagger$MCTformer+ (Xu et al. 2023) | $\mathcal{I}$ | RN38 | 45.2
Single-stage WSSS methods.
$\dagger$AFA (Ru et al. 2022) | $\mathcal{I}$ | MiT-B1 | 38.9
$\dagger$ToCo (Ru et al. 2023) | $\mathcal{I}$ | ViT-B | 42.3
$\dagger$Ours | $\mathcal{I}$ | ViT-B | 45.4
Table 2: Performance comparison of semantic segmentation on MS COCO 2014 in
terms of mIoU(%). We use the same notations as in Table 1.
### Experimental Settings
#### Datasets
We evaluate our method on two benchmarks: PASCAL VOC 2012 (Everingham et al.
2010) and MS COCO 2014 (Lin et al. 2014). PASCAL VOC contains 20 object
classes and one background class. Following the common practice of previous
works (Zhang et al. 2020; Araslanov and Roth 2020; Ru et al. 2022, 2023), it
is augmented with data from the SBD dataset (Hariharan et al. 2011), resulting
in $10,582$, $1,449$ and $1,456$ images for training, validation and testing,
respectively. MS COCO contains 80 object classes and one background class. It
has $82,081$ images for training and $40,137$ images for validation. Note that
we only adopt image-level labels during the training phase. We report mean
Intersection-over-Union (mIoU) as the evaluation metric.
#### Implementation Details
We adopt ViT-B (Dosovitskiy et al. 2020) pretrained on ImageNet (Deng et al.
2009) as the transformer encoder. The convolutional decoder follows
DeepLab-LargeFOV (Chen et al. 2017). We use two aggregation blocks in the aggregation
module. The projector comprises a 3-layer perceptron and a weight-normalized
fully connected layer (Caron et al. 2021). Parameters in the aggregation
module and the projector are randomly initialized. We use a light data
augmentation: random resized cropping to $448\times 448$ with the scale
$[0.32,1.0]$ and the ratio $[3/4,4/3]$, random horizontal flipping, and random
color jittering. The student network is optimized with AdamW (Loshchilov and
Hutter 2017). The base learning rate is warmed up to $6\times 10^{-5}$ over the
first 1,500 iterations and then decayed with a cosine schedule. The weighting factors
$(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5})$ are set to
$(1.0,0.2,0.1,0.1,0.1)$. The teacher network requires no gradient and is
updated with the EMA momentum. Experimentally, we find that synchronizing the
teacher encoder with the student (i.e., momentum is $0.0$) works better. The
momentum for the teacher projector is $0.996$ and increases to $1.0$ with a
cosine schedule during training. We embrace the centering and sharpening
technique suggested in (Caron et al. 2021) to avoid collapsed solutions. The
masking ratio $r$ is $0.4$ for adaptive uncertain feature selection. The
background scores $(\beta_{l},\beta_{h})$ introduced to determine uncertain
regions are $(0.2,0.7)$. Training iterations are 20,000 for PASCAL VOC 2012
and 80,000 for MS COCO 2014. We use multi-scale testing and dense CRF (Chen et
al. 2014) at test time following (Ru et al. 2022, 2023).
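The teacher update described above amounts to an exponential moving average (EMA) with a cosine-scheduled momentum; a hedged sketch follows, with momentum endpoints taken from the text and helper names ours (parameters assumed to be torch tensors).

```python
import math
import torch

@torch.no_grad()
def ema_update(student, teacher, m):
    """teacher <- m * teacher + (1 - m) * student; the teacher gets no gradients."""
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(m).add_(ps, alpha=1.0 - m)

def projector_momentum(step, total_steps, m0=0.996, m1=1.0):
    """Cosine increase from 0.996 to 1.0 over training, as described above.
    The teacher encoder instead uses m = 0.0, i.e., it is synced with the student."""
    return m1 - (m1 - m0) * (math.cos(math.pi * step / total_steps) + 1.0) / 2.0
```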
### Comparison with State-of-the-arts
##### PASCAL VOC 2012
Table 1 shows comparison results of our proposed Feature Self-Reinforcement
(FSR) with other state-of-the-art methods on PASCAL VOC 2012. As can be seen
from this table, FSR significantly outperforms other single-stage approaches,
achieving $75.7\%$ and $75.0\%$ mIoU on the validation and test sets,
respectively. It is noticeable that our method achieves even higher mIoU than
several sophisticated multi-stage methods, e.g., FSR surpasses BECO (Rong et
al. 2023) by margins of $2.0\%$ and $1.5\%$. Compared with multi-stage methods
using both image-level labels and off-the-shelf saliency maps, e.g., L2G
(Jiang et al. 2022) and RCA (Zhou et al. 2022), our method still achieves
superior performance. We posit that, although saliency maps are effective in
providing additional background clues, our method strengthens both confident
regions (mostly the main body of objects or the background) and uncertain
regions (mostly object boundaries), so that semantically distinct objects can
be better differentiated. Moreover, it shows that recent methods
with transformer-based networks (with $\dagger$) generally outperform those
with convolutional networks (without $\dagger)$. Nevertheless, due to the
difficulty of end-to-end optimization, single-stage transformer-based methods
(e.g., ToCo reports $71.1\%$ and $72.2\%$) can only achieve comparable
performance with multi-stage ones (e.g., BECO reports $73.7\%$ and $73.5\%$).
Our method proves the efficacy of transformer-based single-stage training by
attaining even better results.
##### MS COCO 2014
Table 2 gives comparison results of semantic segmentation on a more
challenging benchmark, MS COCO 2014. We achieve $45.4\%$ mIoU on the validation
set, which outperforms previous single-stage solutions and is slightly better
than multi-stage MCTformer+ (Xu et al. 2023) by $0.2\%$. This further
demonstrates the superiority of our proposed method.
Figure 4: Visualization results of CAMs and predicted segmentation labels with
SOTA single-stage frameworks (i.e., AFA and ToCo). (left) Ground truth.
(middle) Comparison results of CAMs. (right) Comparison results of predicted
segmentation labels.
##### Visualization Results
In Figure 4, we visualize CAMs derived from the classifier and semantic
segmentation labels predicted by the decoder of three single-stage methods,
i.e., AFA (Ru et al. 2022), ToCo (Ru et al. 2023) and our proposed FSR.
Compared with AFA, ToCo and FSR can generate more integral and deterministic
CAMs. For instance, the wheels of “motorbike” are mildly activated by AFA
while strongly confirmed by ToCo and FSR. This proves the effectiveness of FSR
for uncertain features. However, AFA only focuses on boosting uncertain
features, whereas our method enhances both uncertain and certain ones. For
instance, AFA mistakes “drawer” for “chair”, while FSR successfully recognizes
the different semantics. This shows the importance of FSR for seemingly
certain features.
### Ablation Studies
In this section, we present extensive ablation studies to verify the
effectiveness of our proposed FSR. We report segmentation performance of
pseudo labels (Pseu.) derived from CAMs as well as predicted labels (Pred.)
generated by the decoder. All results are evaluated on PASCAL VOC 2012 val
set. Dense CRF is not applied in the ablations.
masking strategy | Edge (strict) | CAM (strict) | CAM ($r$=0.1) | CAM ($r$=0.2) | CAM ($r$=0.3) | CAM ($r$=0.4) | CAM ($r$=0.5)
---|---|---|---|---|---|---|---
Pseu. label results (%)
random | - | - | 73.1 | 73.6 | 74.1 | 74.2 | 73.2
uncertain | 73.3 | 74.0 | 74.1 | 74.2 | 73.9 | 74.4 | 73.7
Pred. label results (%)
random | - | - | 71.7 | 72.3 | 71.3 | 72.3 | 71.2
uncertain | 71.6 | 71.8 | 72.2 | 72.3 | 72.0 | 72.5 | 72.1
Table 3: Ablation results of uncertain feature selection methods. “random” means random masking; “uncertain” means our adaptive masking strategy that gives priority to masking uncertain regions.

Masking | unc.FSR | cer.FSR | Pseu. (%) | Pred. (%)
---|---|---|---|---
- | | | 71.1 | 67.9
CAM | ✓ | | 74.4 (+3.3) | 72.5 (+4.6)
CAM | | ✓ (GAP) | 72.3 (+1.2) | 70.9 (+3.0)
CAM | | ✓ (GMP) | 71.8 (+0.7) | 70.0 (+2.1)
CAM | | ✓ (MCA) | 75.2 (+4.1) | 73.3 (+5.4)
CAM | ✓ | ✓ (MCA) | 75.7 (+4.6) | 73.6 (+5.7)
Table 4: Ablation results of unc.FSR and cer.FSR. “GAP”, “GMP”, and “MCA” are
aggregation methods of cer.FSR.
#### Analysis of Uncertain Feature Selection
In Table 3, we compare two _strict_ selection methods for uncertain features:
edge-based selection and CAM-based selection. For edge-based selection, we
choose the conventional Canny edge detector to extract edges in an image and
generate exact masks of these edges. Activation thresholds for CAM-based
selection are $(0.2,0.7)$. CAM-based selection is marginally better than edge-
based selection; the improvement continues when CAM-based selection is
relaxed, i.e., uncertain features are not strictly but preferentially masked.
Empirically, we find that $r=0.4$ gives the best result. In addition,
uncertain feature masking achieves higher performance than random feature
masking in most cases, showing that it is important to reinforce uncertain
features for semantic clarification.
Figure 5: FSR optimizes the boundary regions (e.g., dashed red box) through
the adaptive masking of regions characterized by uncertainty and the
integration of unc.FSR and cer.FSR.
#### Analysis of Feature Self-reinforcement
Table 4 shows the ablation results of FSR on uncertain regions (unc.FSR) and
on certain regions (cer.FSR). The masking ratio is set to $0.4$ for
comparison. It demonstrates the benefit of unc.FSR, which achieves $74.4\%$
(compared to $71.1\%$) on pseudo labels and $72.5\%$ (compared to $67.9\%$) on
predicted labels. This proves that reinforcing uncertain features, which
mainly contain ambiguous object boundaries and misclassified categories, is
fairly effective. When combining unc.FSR with cer.FSR, the quality of pseudo
labels can be further improved, from $74.4\%$ to $75.7\%$; predicted labels
directly supervised by pseudo labels are promoted as well, from $72.5\%$ to
$73.6\%$. This indicates that reinforcing confident features is complementary
to unc.FSR with enhanced global understanding. We showcase examples of our FSR
strategy and its effect on object boundaries in Figure 5.
Figure 6: Average attention entropy of different attention heads (dots) across
different layers.
##### (a) Analysis of unc.FSR
To gain a deep understanding of unc.FSR, we investigate the training process
by analyzing the attention mechanism. Specifically, we compute average
attention entropy (Attanasio et al. 2022) for each attention head across
transformer layers. As shown in Figure 6, the entropy at shallow layers (e.g.,
layer 0 to 6) holds similar without unc.FSR; however, it becomes higher and
tighter at deep layers (e.g., layer 7 to 11) when unc.FSR is applied. A large
entropy for a specific token indicates that a broad context contributes to
this token, while a small entropy tells the opposite. From this point of view,
we assume that unc.FSR benefits semantic segmentation by improving the degree
of contextualization at deep layers.
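The average attention entropy used above is the entropy of each query token's attention distribution, averaged over tokens for every head; a small sketch (names ours):

```python
import numpy as np

def avg_attention_entropy(A, eps=1e-12):
    """A: (heads, tokens, tokens) row-stochastic attention maps.
    Returns the per-head entropy averaged over query tokens."""
    H = -(A * np.log(A + eps)).sum(-1)   # (heads, tokens) per-token entropy
    return H.mean(-1)                    # (heads,)
```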
Figure 7: Class-to-patch attention maps derived from the aggregation module.
Class labels are displayed below.
##### (b) Analysis of cer.FSR
We compare our attentive aggregation of certain features (MCA) with two
conventional methods: Global Average Pooling (GAP) and Global Maximum Pooling
(GMP). GAP assigns an equal weight to each unmasked patch token, while GMP
picks up the dominant one along each dimension. Table 4 shows that GAP
performs better than GMP, as GMP tends to intensify the most discriminative
features, which may have an adverse effect on recognizing an integral object.
It is noticeable that MCA outperforms GAP by a large margin, indicating an
attentive weighting mechanism is superior to average weighting. We visualize
class-to-patch attention maps in Figure 7, which illustrates that the class
token can adaptively learn to pay attention to object regions. Note that the
class token is not directly supervised by classification in our design; it
interacts with unmasked patch tokens and learns to summarize effective
information from them.
| Ours | +GaussianBlur | +Solarization | AutoAugment
---|---|---|---|---
Pseu. (%) | 75.7 | 75.9 $\pm$ 0.05 | 75.3 $\pm$ 0.12 | 74.8 $\pm$ 0.09
Pred. (%) | 73.6 | 73.6 $\pm$ 0.02 | 73.2 $\pm$ 0.06 | 72.8 $\pm$ 0.04
Table 5: 10-trial experimental results of data augmentations. “Ours” is our
default data augmentation setting.
#### Data Augmentation
We present comparison results with other data augmentations in Table 5, which
reveals that data augmentations have limited impacts on the performance. For
example, the performances display variations within the anticipated range when
incorporating GaussianBlur or Solarization. Even when we substitute the data
augmentation with the robust AutoAugmentation (Cubuk et al. 2018), the results
witness a slight decline as a strong augmentation may interfere with the
segmentation objective.
## Conclusion
In this work, we propose to estimate boundaries with the guidance of semantic
uncertainty identified by CAM. To achieve this, we design an activation-based
masking strategy and seek to recover local information with self-distilled
knowledge. We further introduce a self-distillation method to reinforce
semantic consistency with another augmented view. We integrate our method into
the single-stage WSSS framework and validate its effectiveness on PASCAL VOC
2012 and MS COCO 2014 benchmarks.
## Acknowledgement
This work was supported by the National Natural Science Foundation of China
under Grants 62106235, 62202015, 62376206, and 62003256, and in part by the
Exploratory Research Project of Zhejiang Lab under Grant 2022PG0AN01.
## References
* Ahn, Cho, and Kwak (2019) Ahn, J.; Cho, S.; and Kwak, S. 2019. Weakly supervised learning of instance segmentation with inter-pixel relations. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2209–2218.
* Araslanov and Roth (2020) Araslanov, N.; and Roth, S. 2020. Single-stage semantic segmentation from image labels. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 4253–4262.
* Attanasio et al. (2022) Attanasio, G.; Nozza, D.; Hovy, D.; and Baralis, E. 2022. Entropy-based attention regularization frees unintended bias mitigation from lists. _arXiv preprint arXiv:2203.09192_.
* Ba, Kiros, and Hinton (2016) Ba, J. L.; Kiros, J. R.; and Hinton, G. E. 2016. Layer normalization. _arXiv preprint arXiv:1607.06450_.
* Bearman et al. (2016) Bearman, A.; Russakovsky, O.; Ferrari, V.; and Fei-Fei, L. 2016. What’s the point: Semantic segmentation with point supervision. In _European conference on computer vision_ , 549–565. Springer.
* Caron et al. (2021) Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bojanowski, P.; and Joulin, A. 2021. Emerging properties in self-supervised vision transformers. In _Proceedings of the IEEE/CVF international conference on computer vision_ , 9650–9660.
* Chen et al. (2020) Chen, L.; Wu, W.; Fu, C.; Han, X.; and Zhang, Y. 2020. Weakly supervised semantic segmentation with boundary exploration. In _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI 16_ , 347–362. Springer.
* Chen et al. (2014) Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2014. Semantic image segmentation with deep convolutional nets and fully connected crfs. _arXiv preprint arXiv:1412.7062_.
* Chen et al. (2017) Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2017. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. _IEEE transactions on pattern analysis and machine intelligence_ , 40(4): 834–848.
* Chen et al. (2022) Chen, Z.; Wang, T.; Wu, X.; Hua, X.-S.; Zhang, H.; and Sun, Q. 2022. Class re-activation maps for weakly-supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 969–978.
* Cheng et al. (2023) Cheng, Z.; Qiao, P.; Li, K.; Li, S.; Wei, P.; Ji, X.; Yuan, L.; Liu, C.; and Chen, J. 2023. Out-of-candidate rectification for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 23673–23684.
* Cubuk et al. (2018) Cubuk, E. D.; Zoph, B.; Mane, D.; Vasudevan, V.; and Le, Q. V. 2018. Autoaugment: Learning augmentation policies from data. _arXiv preprint arXiv:1805.09501_.
* Dai, He, and Sun (2015) Dai, J.; He, K.; and Sun, J. 2015. Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In _Proceedings of the IEEE international conference on computer vision_ , 1635–1643.
* Deng et al. (2009) Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_ , 248–255. IEEE.
* Dosovitskiy et al. (2020) Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_.
* Everingham et al. (2010) Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The pascal visual object classes (voc) challenge. _International journal of computer vision_ , 88: 303–338.
* Gao et al. (2021) Gao, W.; Wan, F.; Pan, X.; Peng, Z.; Tian, Q.; Han, Z.; Zhou, B.; and Ye, Q. 2021. Ts-cam: Token semantic coupled attention map for weakly supervised object localization. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2886–2895.
* Gong et al. (2021) Gong, C.; Wang, D.; Li, M.; Chandra, V.; and Liu, Q. 2021. Vision transformers with patch diversification. _arXiv preprint arXiv:2104.12753_.
* Hariharan et al. (2011) Hariharan, B.; Arbeláez, P.; Bourdev, L.; Maji, S.; and Malik, J. 2011. Semantic contours from inverse detectors. In _2011 international conference on computer vision_ , 991–998. IEEE.
* He et al. (2020) He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 9729–9738.
* He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 770–778.
* Hinton, Vinyals, and Dean (2015) Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_.
* Hou et al. (2018) Hou, Q.; Jiang, P.; Wei, Y.; and Cheng, M.-M. 2018. Self-erasing network for integral object attention. _Advances in Neural Information Processing Systems_ , 31.
* Jiang et al. (2022) Jiang, P.-T.; Yang, Y.; Hou, Q.; and Wei, Y. 2022. L2g: A simple local-to-global knowledge transfer framework for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 16886–16896.
* Lee et al. (2021a) Lee, J.; Choi, J.; Mok, J.; and Yoon, S. 2021a. Reducing information bottleneck for weakly supervised semantic segmentation. _Advances in Neural Information Processing Systems_ , 34: 27408–27421.
* Lee et al. (2019) Lee, J.; Kim, E.; Lee, S.; Lee, J.; and Yoon, S. 2019. Ficklenet: Weakly and semi-supervised semantic image segmentation using stochastic inference. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 5267–5276.
* Lee et al. (2021b) Lee, S.; Lee, M.; Lee, J.; and Shim, H. 2021b. Railroad is not a train: Saliency as pseudo-pixel supervision for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 5495–5505.
* Li, Fan, and Zhang (2022) Li, J.; Fan, J.; and Zhang, Z. 2022. Towards noiseless object contours for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 16856–16865.
* Li et al. (2022a) Li, J.; Jie, Z.; Wang, X.; Wei, X.; and Ma, L. 2022a. Expansion and shrinkage of localization for weakly-supervised semantic segmentation. _Advances in Neural Information Processing Systems_ , 35: 16037–16051.
* Li et al. (2022b) Li, Y.; Duan, Y.; Kuang, Z.; Chen, Y.; Zhang, W.; and Li, X. 2022b. Uncertainty estimation via response scaling for pseudo-mask noise mitigation in weakly-supervised semantic segmentation. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 36, 1447–1455.
* Lin et al. (2016) Lin, D.; Dai, J.; Jia, J.; He, K.; and Sun, J. 2016. Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 3159–3167.
* Lin et al. (2014) Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In _Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13_ , 740–755. Springer.
* Lin et al. (2023) Lin, Y.; Chen, M.; Wang, W.; Wu, B.; Li, K.; Lin, B.; Liu, H.; and He, X. 2023. Clip is also an efficient segmenter: A text-driven approach for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 15305–15314.
* Loshchilov and Hutter (2017) Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_.
* Noroozi et al. (2018) Noroozi, M.; Vinjimoor, A.; Favaro, P.; and Pirsiavash, H. 2018. Boosting self-supervised learning via knowledge transfer. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 9359–9367.
* Oquab et al. (2023) Oquab, M.; Darcet, T.; Moutakanni, T.; Vo, H.; Szafraniec, M.; Khalidov, V.; Fernandez, P.; Haziza, D.; Massa, F.; El-Nouby, A.; et al. 2023. Dinov2: Learning robust visual features without supervision. _arXiv preprint arXiv:2304.07193_.
* Rong et al. (2023) Rong, S.; Tu, B.; Wang, Z.; and Li, J. 2023. Boundary-Enhanced Co-Training for Weakly Supervised Semantic Segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 19574–19584.
* Rossetti et al. (2022) Rossetti, S.; Zappia, D.; Sanzari, M.; Schaerf, M.; and Pirri, F. 2022. Max pooling with vision transformers reconciles class and shape in weakly supervised semantic segmentation. In _European Conference on Computer Vision_ , 446–463. Springer.
* Ru et al. (2022) Ru, L.; Zhan, Y.; Yu, B.; and Du, B. 2022. Learning affinity from attention: End-to-end weakly-supervised semantic segmentation with transformers. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 16846–16855.
* Ru et al. (2023) Ru, L.; Zheng, H.; Zhan, Y.; and Du, B. 2023. Token contrast for weakly-supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 3093–3102.
* Song et al. (2019) Song, C.; Huang, Y.; Ouyang, W.; and Wang, L. 2019. Box-driven class-wise region masking and filling rate guided loss for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 3136–3145.
* Su et al. (2022) Su, H.; Ye, Y.; Hua, W.; Cheng, L.; and Song, M. 2022. SASFormer: Transformers for Sparsely Annotated Semantic Segmentation. In _IEEE International Conference on Multimedia and Expo (ICME), 2023_.
* Su et al. (2021) Su, Y.; Sun, R.; Lin, G.; and Wu, Q. 2021. Context decoupling augmentation for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF international conference on computer vision_ , 7004–7014.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. _Advances in neural information processing systems_ , 30.
* Vernaza and Chandraker (2017) Vernaza, P.; and Chandraker, M. 2017. Learning random-walk label propagation for weakly-supervised semantic segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 7158–7166.
* Wang et al. (2023) Wang, Y.; Cheng, L.; Duan, M.; Wang, Y.; Feng, Z.; and Kong, S. 2023. Improving Knowledge Distillation via Regularizing Feature Norm and Direction. _arXiv preprint arXiv:2305.17007_.
* Wang et al. (2020) Wang, Y.; Zhang, J.; Kan, M.; Shan, S.; and Chen, X. 2020. Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 12275–12284.
* Wu et al. (2023) Wu, F.; He, J.; Cheng, L.; Yin, Y.; Hao, Y.; and Huang, G. 2023. Masked Collaborative Contrast for Weakly Supervised Semantic Segmentation. _arXiv preprint arXiv:2305.08491_.
* Wu et al. (2021) Wu, T.; Huang, J.; Gao, G.; Wei, X.; Wei, X.; Luo, X.; and Liu, C. H. 2021. Embedded discriminative attention mechanism for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 16765–16774.
* Wu, Shen, and Van Den Hengel (2019) Wu, Z.; Shen, C.; and Van Den Hengel, A. 2019. Wider or deeper: Revisiting the resnet model for visual recognition. _Pattern Recognition_ , 90: 119–133.
* Xie et al. (2021) Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. _Advances in Neural Information Processing Systems_ , 34: 12077–12090.
* Xie et al. (2023) Xie, Z.; Geng, Z.; Hu, J.; Zhang, Z.; Hu, H.; and Cao, Y. 2023. Revealing the dark secrets of masked image modeling. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 14475–14485.
* Xu et al. (2023) Xu, L.; Bennamoun, M.; Boussaid, F.; Laga, H.; Ouyang, W.; and Xu, D. 2023. MCTformer+: Multi-Class Token Transformer for Weakly Supervised Semantic Segmentation. _arXiv preprint arXiv:2308.03005_.
* Xu et al. (2022) Xu, L.; Ouyang, W.; Bennamoun, M.; Boussaid, F.; and Xu, D. 2022. Multi-class token transformer for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 4310–4319.
* Yoon et al. (2022) Yoon, S.-H.; Kweon, H.; Cho, J.; Kim, S.; and Yoon, K.-J. 2022. Adversarial erasing framework via triplet with gated pyramid pooling layer for weakly supervised semantic segmentation. In _European Conference on Computer Vision_ , 326–344. Springer.
* Zhang et al. (2020) Zhang, B.; Xiao, J.; Wei, Y.; Sun, M.; and Huang, K. 2020. Reliability does matter: An end-to-end weakly supervised semantic segmentation approach. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, 12765–12772.
* Zhang et al. (2021) Zhang, F.; Gu, C.; Zhang, C.; and Dai, Y. 2021. Complementary patch for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF international conference on computer vision_ , 7242–7251.
* Zhang et al. (2023) Zhang, T.; Xue, M.; Zhang, J.; Zhang, H.; Wang, Y.; Cheng, L.; Song, J.; and Song, M. 2023. Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023_.
* Zhou et al. (2016) Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; and Torralba, A. 2016. Learning deep features for discriminative localization. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2921–2929.
* Zhou et al. (2021) Zhou, J.; Wei, C.; Wang, H.; Shen, W.; Xie, C.; Yuille, A.; and Kong, T. 2021. ibot: Image bert pre-training with online tokenizer. _arXiv preprint arXiv:2111.07832_.
* Zhou et al. (2022) Zhou, T.; Zhang, M.; Zhao, F.; and Li, J. 2022. Regional semantic contrast and aggregation for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 4299–4309.
|
0 1cm [gray]0.75 1.5 Preprint
# Dynamic Embeddings for Interaction Prediction
Zekarias T. Kefato <EMAIL_ADDRESS>, KTH Royal Institute of Technology, Stockholm, Sweden; Sarunas Girdzijauskas <EMAIL_ADDRESS>, KTH Royal Institute of Technology, Stockholm, Sweden; Nasrullah Sheikh <EMAIL_ADDRESS>, IBM Research – Almaden, San Jose, USA; and Alberto Montresor <EMAIL_ADDRESS>, University of Trento, Trento, Italy
(2021)
###### Abstract.
In recommender systems (RSs), predicting the next item that a user interacts
with is critical for user retention. While the last decade has seen an
explosion of RSs aimed at identifying relevant items that match user
preferences, there is still a range of aspects that could be considered to
further improve their performance. For example, often RSs are centered around
the user, who is modeled using her recent sequence of activities. Recent
studies, however, have shown the effectiveness of modeling the _mutual_
interactions between users and items using separate user and item embeddings.
Building on the success of these studies, we propose a novel method called
DeePRed that addresses some of their limitations. In particular, we avoid
recursive and costly interactions between consecutive short-term embeddings by
using long-term (stationary) embeddings as a proxy. This enables us to train
DeePRed using simple mini-batches, without the overhead of the specialized
mini-batches proposed in previous studies. Moreover, DeePRed’s effectiveness comes
from the aforementioned design and a multi-way attention mechanism that
inspects user-item compatibility. Experiments show that DeePRed outperforms
the best state-of-the-art approach by at least 14% on the next item prediction
task, while gaining more than an order of magnitude speedup over the best
performing baselines. Although this study is mainly concerned with temporal
performing baselines. Although this study is mainly concerned with temporal
interaction networks, we also show the power and flexibility of DeePRed by
adapting it to the case of static interaction networks, substituting the
short- and long-term aspects with local and global ones. The source code is
available here: https://github.com/zekarias-tilahun/deepred
dynamic embeddings, mutual RNN, recommender systems, interaction prediction,
multi-way attention
## 1\. Introduction
Vital to the success of a number of real-world recommender systems (RS) is the
ability to predict future interactions between entities based on their
previous interaction history. In many recommender systems, effective user-item
interaction prediction enables end-users to sift through an overwhelming number
of choices. In addition, in biology, pharmacology and related fields,
interaction prediction between biological and chemical compounds has been
explored to better understand unknown bio-chemical interactions (Buza and
Peška, 2017; You et al., 2019; Zitnik et al., 2018).
In this paper, we are primarily interested in temporal interaction networks
between two sets of entities: _users_ and _items_. The terms cover a variety
of notions, _e.g._ users could be customers in an e-commerce system, or
accounts on Reddit, YouTube or Spotify; items could be products, posts, media
produced or consumed by users.
Given a set of observed interactions between users and items, predicting
possible future interactions is an increasingly important and challenging
task. The goal of this paper is to introduce a new method to predict the next
items that users interact with, based on their previous history of
interaction. We model our problem through bipartite temporal interaction
networks, as they can naturally and effectively represent user-item
interactions over time.
#### Existing studies
In the context of RS, several approaches have been proposed to predict future
items a user is likely to interact with, providing encouraging results (Wu et
al., 2017, 2019; Hidasi et al., 2015; Xu et al., 2019; Wang et al., 2020; Tan
et al., 2016; Kumar et al., 2019; Covington et al., 2016; Dai et al., 2016a).
Oftentimes, however, the focus is on modeling users, while the user-item
interaction dynamics that provide a richer signal are overlooked (Wang et al.,
2019). In several cases, RNNs and other models suitable for sequences were
used to train a predictive model over the item sequence corpus.
Recently, studies have shown how to mutually model both users and items based
on bipartite interaction networks and have demonstrated significant improvements
over existing methods (Kumar et al., 2019; Dai et al., 2016a). Unlike previous
approaches, they employ mutually recursive RNNs that are more capable of
modeling the user-item interaction dynamics. While they use two types of
embeddings, long-term and short-term, the former is just a fixed one-hot
vector and the latter is the real core of their models, used to
capture recent user preferences and item properties. Moreover, these
approaches work by recursively computing the short-term embedding at time $t$
based on the embedding at time $t-1$, which leads to sequential training that
proved to be a bottleneck as the network scales up. Even though recent work has
introduced a mini-batch training algorithm, the overhead is not completely
alleviated yet (Kumar et al., 2019).
#### This study
We propose a novel algorithm called DeePRed (Dynamic Embeddings for
Interaction Prediction). DeePRed provides a simple yet powerful way of
modeling short-term interaction behaviours that removes the aforementioned
recursive dependency for efficient training. This is achieved by decoupling
the _learnable_ user or item embeddings into long-term and short-term
embeddings, in order to capture both stationary and transitory interaction
patterns. Furthermore, DeePRed computes separate embeddings from the point of
view of both users and items. Henceforth, although our discussion mostly
covers users, the same principles can be applied to items unless explicitly
stated otherwise.
The key idea behind the effectiveness of DeePRed is that, each time a user
interacts with an item, the user is modeled using the sequence of $k$ recent items she
interacted with, which reflects a context of interaction. For example, Fig. 1
shows the $k$ recent interaction history of a user $u$ and an item $i$. We see
that the two most recent interactions at $t_{l}$ and $t_{l-1}$ are within the
context of SciFi, for both $u$ and $i$. That is, the last two items that $u$
has interacted with are relevant to the theme of SciFi. Thus, the long-term
(contextually stationary) embedding of the items (Spider man and Alien movies)
at $t_{l}$ and $t_{l-1}$ are used to encode such context.
Similar to previous work (Kumar et al., 2019; Dai et al., 2016a), we use two
mutual RNNs that capture the interaction and temporal patterns within a
history sequence (the $k$ most recent interaction events), and generate high-
level features. However, unlike previous work, the two RNNs share the same
model parameters and are not recursively dependent.
Figure 1. An illustration of the sequence of $k$ recent interaction events
starting from the last event $t_{l}$ before a given time $t$ for a user $u$
(${\mathbb{L}}_{u}(t^{<},k)$) and an item $i$ (${\mathbb{L}}_{i}(t^{<},k)$).
Finally, the power of DeePRed comes from a multi-way attention mechanism that
we employ to capture the user-item interaction signal, to check whether the
short-term history ($k$ most recent interactions) of a user and an item are
compatible using attention weights. The weights are then used as feature
selectors over the high-level features to predict the short-term embeddings.
In DeePRed, each interaction produces a new instance of short-term embedding
for both the user and item. This gives DeePRed the power to reason based on
consistent behaviours as opposed to rare events, and it is in contrast to
(Kumar et al., 2019) that updates the existing ones. Besides its qualitative
power, predicting short-term embeddings as opposed to interaction
probabilities is another choice in our design that boosts DeePRed’s
efficiency.
Last but not least, DeePRed can be seamlessly
extended to tackle static interaction networks. This is achieved by replacing
long and short-term aspects with global and local ones, based on a sample of
interactions as opposed to the latest (recent) ones.
#### Our contribution
Our contributions are the following:
* •
Novelty: We propose a novel algorithm that captures user (item) preferences
over time by modeling users (items) using their recent interaction history. By
leveraging the decoupling of the learnable embeddings, we employ _non-
recursive_ mutual RNNs to capture interaction and temporal patterns within the
histories. Furthermore, an attention mechanism is used to inspect user-item
compatibility allowing to significantly improve the predictive performance of
our approach.
* •
Empirical results: With respect to the state of the art, our results show at
least a 14% gain on mean reciprocal rank, measured on three real-world and
publicly available datasets.
* •
Efficiency: As a result of eliminating the recursive self-dependency between
short-term embeddings at different time steps, DeePRed achieves more than one
order of magnitude speedup over the best performing SOTA methods.
* •
Easy extension to static networks: Though the focus of this study is on
temporal interaction networks, we have shown that DeePRed is seamlessly
extendable to static interaction networks using three real-world datasets.
## 2\. Modeling Preliminaries
The focus of this study is to model temporal interaction networks; yet, our
proposal could be adapted to static networks with little effort. We therefore
show first the general model, and then we show how to specialize it for the
static case.
We take an ordered set ${\mathbb{L}}$ containing a log of interactions between
a set of users ${\mathbb{U}}$ and a set of items ${\mathbb{I}}$, where
${L=|{\mathbb{L}}|}$, ${U=|{\mathbb{U}}|}$, and ${I=|{\mathbb{I}}|}$. An event
$e=(u,i,t)\in{\mathbb{L}}$ records an interaction between a user $u$ and an
item $i$ at time $t$. Events associated with users and items are intrinsically
ordered by time. Let ${\mathbb{L}}_{u}$ be the set of all interaction events
of user $u$, such that ${{\mathbb{L}}_{u}=\\{e_{1},e_{2},\ldots,e_{l}\\}}$ and
events are intrinsically ordered, that is if two events
${e_{j}=(u,i_{j},t_{j})}$ and ${e_{k}=(u,i_{k},t_{k})}$ are such that $j\leq
k$, then $t_{j}\leq t_{k}$.
In predicting future interactions between users and items, generally, both
long-term and short-term interaction behaviours are commonly used (Zhu et al.,
2017; Beutel et al., 2018; Dai et al., 2016b). However, short-term behaviours
are mostly favored to have a strong impact on follow-up interactions. We adopt
a similar assumption and model user and item preferences from both long-term
and short-term perspectives. The long-term preferences are captured through
the complete interaction histories of users/items. For a user $u$ and an item
$i$, ${\mathbb{L}}_{u}$ and ${\mathbb{L}}_{i}$ denote their complete
interaction history, respectively.
Although user preferences are normally considered to change over time (Wu et
al., 2017), we assume that users usually have a dominant (stationary)
preference, which remains unchanged. However, as their preferences drift over
time depending on recent actions, users have a tendency to perform related actions in succession.
For instance, in movie RS, a particular genre might be preferred by a user at
any given time. More importantly, however, one is likely to show interest in
movies of different genres based on mood, events in her life (e.g. marriage,
childbirth, trauma) and seasons (e.g. Christmas, Summer) (Wu et al., 2017).
Thus, the most recent watching behaviors have a stronger impact than the old
preferences over the next movie that a user is likely to watch.
To capture recent preferences, in line with (Zhang et al., 2019; Beutel et
al., 2018), we use the $k$ most recent interaction events. Unlike some studies
(Hidasi et al., 2015; Kang and McAuley, 2018; Wu et al., 2019; Covington et
al., 2016), however, we assume that the $k$ most recent interaction events
from both the user and the item influence the next user-item interaction.
Later, we shall discuss the details of the benefit of this design choice.
Thus, the $k$ most recent interactions of each user ${u\in{\mathbb{U}}}$ and
item ${i\in{\mathbb{I}}}$, respectively, before a given time $t$ are
identified by:
$\displaystyle{\mathbb{L}}_{u}(t^{<},k)=\\{i_{j},\Delta_{j}:(u,i_{j},t_{j})\in{\mathbb{L}}_{u},t_{j}<t,j=l-k,\ldots,l\\}$
$\displaystyle{\mathbb{L}}_{i}(t^{<},k)=\\{u_{j},\Delta_{j}:(u_{j},i,t_{j})\in{\mathbb{L}}_{i},t_{j}<t,j=l-k,\ldots,l\\}$
where $\Delta_{j}=t-t_{j}$ captures the hotness (recency) of the $j^{th}$
event.
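To make the definition concrete, here is a minimal Python sketch of extracting ${\mathbb{L}}_{u}(t^{<},k)$ from a time-sorted event list; the function and variable names are ours, not part of the paper.

```python
def recent_history(events, t, k):
    """Sketch of L_u(t^<, k): events is a time-sorted list of
    (counterpart_id, timestamp) pairs for one user (or item).
    Returns up to k (counterpart_id, delta) pairs, delta = t - t_j."""
    past = [(x, t - tj) for (x, tj) in events if tj < t]
    return past[-k:]  # the k most recent events strictly before t
```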
For static networks, we simply strip out time from ${\mathbb{L}}$; any subsets
thereof become unordered. In this case,
${{\mathbb{L}}_{u}(k)\subseteq{\mathbb{L}}_{u}}$ and
${{\mathbb{L}}_{i}(k)\subseteq{\mathbb{L}}_{i}}$ simply denote a sampled set
of $k$ events from observed events ${\mathbb{L}}_{u}$ and ${\mathbb{L}}_{i}$,
respectively.
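A corresponding sketch for the static case, where ${\mathbb{L}}_{u}(k)$ is just a sample of $k$ observed events; uniform sampling is our assumption, as the paper does not specify the sampling distribution.

```python
import random

def static_history(events, k):
    """Sketch of L_u(k) for static networks: a random subset of k events."""
    return random.sample(events, min(k, len(events)))
```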
#### Research question
The main goal of this study is: given an ordered set of observed events
${\mathbb{L}}_{\mathcal{O}}$, can we design a novel algorithm that effectively
predicts future interactions in temporal interaction networks? Can we also
ensure efficiency? In addition, can we make the design flexible enough to be
applicable to static interaction networks?
## 3\. DeePRed
The proposed algorithm, DeePRed, captures both stationary and transitory
preferences of users and items in interaction networks, by maintaining two
dynamic embeddings, one long-term and one short-term, in a latent context-
space ${\mathcal{S}}$. The main hypothesis in DeePRed is that an underlying
hidden context-space ${\mathcal{S}}$ has been generated as a
result of interactions between users and items. This space is assumed to be
thematically divided into different regions, each associated with a
particular theme or context. For an intuitive understanding of ${\mathcal{S}}$
in DeePRed, let us consider the illustration shown in Fig. 2. To simplify our
discussion, suppose ${\mathcal{S}}$ is a 2-dimensional Euclidean space, which
is further divided into three different thematic regions, $C_{1},C_{2},C_{3}$.
The notion of a theme/context is related to user interests (preferences) and
item properties.
The two dynamic embeddings are updated at every user-item interaction, both
for users and items. Since DeePRed applies the same procedure for both, the
following discussion is given from a user’s perspective. Suppose user $u$ has
interacted with an item relevant to context $C_{2}$ at time $t_{1}$. To
reflect such behavior, we start by initializing long-term and short-term
embeddings, which are located within the same context $C_{2}$.
As time progresses, when the user interacts with different items, new
instances of the short-term embeddings are generated by keeping the previous
ones. The new instances are shown in the figure along with the timestamp of
the interaction that generated them. The
motivation for keeping the embeddings comes from a need to maintain a smooth
short-term embedding that reflects the “normal” behaviour and the property of
a user and an item, respectively. Unless there is a “significant” amount of
interactions that cause a drift in a user’s interest, for example from $C_{2}$
to $C_{1}$, “unexpected” interactions should not have a strong influence on
future behaviors (Koren, 2009). Rarely, a user might interact with items from
distant contexts (e.g. $C_{3}$); for such a temporary case, a new instance can
be projected without affecting other short-term embeddings. This allows
DeePRed to reason based on embeddings that are closer to a query than
exceptional cases. Furthermore, DeePRed gives the flexibility to use embedding
histories as needed. In addition, depending on the setting one can choose to
discard old embeddings or store them in aggregated form.
The long-term embeddings, on the other hand, are updated and shifted to a new
point, discarding the old ones. The dotted line in Fig. 2 shows the trajectory of
the long-term embedding of user $u$, from its inactive to its active state. In
a nutshell, these embeddings can be seen as aggregates of the short-term ones
over time.
In platforms like YouTube, LastFM, Spotify, the flexibility proposed by
DeePRed can be utilized to recommend multi-faceted sets of items, for example
one based on short-term embeddings and another based on long-term ones. This
is in contrast to several studies that use a single embedding for
recommendation.
#### Modeling long-term interactions
The overall preferences of users and the properties of items are captured by
their long-term interactions. To capture patterns in such interactions, we use
identity-based embeddings of users and items that live in the same context
space ${\mathcal{S}}$. That is, we use an embedding matrix
${\bm{E}}\in{\mathbb{R}}^{d\times(U+I)}$, which will be trained as interactions
are observed; ${\bm{e}}_{u}$ and ${\bm{e}}_{i}$ denote the long-term embedding
of user $u$ and item $i$. To emphasize that ${\bm{e}}_{u}$ and ${\bm{e}}_{i}$
are conditioned on ${\mathbb{L}}_{u}$ and ${\mathbb{L}}_{i}$, we use the
notation ${\bm{e}}_{u}|{\mathbb{L}}_{u}$ and ${\bm{e}}_{i}|{\mathbb{L}}_{i}$,
respectively.
#### Modeling short-term interactions
Here the focus is on modeling recent interaction patterns of users and items
that govern their follow-up actions. To capture such patterns, we use short-
term embeddings ${{\bm{u}}(t)|{\mathbb{L}}_{u}(t^{<},k)}$ and
${{\bm{i}}(t)|{\mathbb{L}}_{i}(t^{<},k)}$ for users and items, respectively,
conditioned on their recent interactions. In previous studies, short-term
embeddings of users and items were recursively dependent on their respective
previous short-term embeddings (Kumar et al., 2019; Dai et al., 2016b). The
recursive nature of these algorithms inherently makes them expensive, as they
would need to introduce a specialized algorithm for processing batches of
interactions to avoid sequential processing. For example, Kumar et al. had to
introduce an algorithm called t-batch, which processes batches by respecting the
temporal order (Kumar et al., 2019). Our design choice avoids such overhead by
relying on the interaction histories rather than the previous short-term
embeddings, which allows for “simple” batching.
Figure 2. An illustration of the evolution of the short-term and long-term
embeddings of user $u$ in a context space, which is further divided into
smaller sub-spaces reflecting a context or theme ($C_{1},C_{2},C_{3}$). The
dotted arrow indicates the trajectory of the long-term embedding (indicated in
black circle). The short-term embeddings of the user are annotated with
timestamps, which is associated with the interaction that generated them.
### 3.1. The proposed architecture
The complete architecture of DeePRed is depicted in Fig. 3. The input of
DeePRed is given by the observed interaction events and a hyper-parameter $k$
of the model.
Figure 3. The architecture of DeePRed
#### Encoder
We process the user and item histories separately, using user and item
encoders that share weights. Again, this is in contrast to previous studies
that use separate RNN modules that are dependent on previous short-term
embeddings. In DeePRed, both the user and item encoder have the same
structure; for this reason, most of our discussion is related to the user
encoder, while the item encoder is similar.
The first component of a user (item) encoder computes a signature embedding of
the short-term history using the long-term embedding of the items (users) and
the deltas as follows:
(1)
${\bm{S}}_{u}(t)=f({\mathbb{L}}_{u}(t^{<},k))=[[{\bm{e}}_{i_{j}};\Delta_{j}]:(i_{j},\Delta_{j})\in{\mathbb{L}}_{u}(t^{<},k)]$
(2)
${\bm{S}}_{i}(t)=f({\mathbb{L}}_{i}(t^{<},k))=[[{\bm{e}}_{u_{j}};\Delta_{j}]:(u_{j},\Delta_{j})\in{\mathbb{L}}_{i}(t^{<},k)]$
The simple, yet expressive and powerful trick used here is that to compute the
signature ${\bm{S}}_{u}(t)$ at time $t$, Eq. 1 relies on the long-term
embeddings of the $k$ most recent items that the user $u$ interacted with.
Equivalently, in Eq. 2, the $k$ most recent users that interacted with the
item $i$ are used to compute ${\bm{S}}_{i}(t)$. The key hypothesis is that the
long-term or stationary embeddings of _multiple items_ are a strong signal for
capturing a user’s recent interest, as each stationary embedding
${\bm{e}}_{i_{j}}\in{\bm{S}}_{u}(t)$ captures a persistent property or context
(e.g. SciFi) of item $i_{j}$. In addition, note that the signature at time $t$
contains information only from the _past_, as we want to predict the present.
Furthermore, it has been shown that the delay between interactions plays a
significant role in predicting future interactions. Thus, each long-term
embedding is concatenated ($[\cdot;\cdot]$) with $\Delta_{j}$ in the signature to
increase the impact of fresh activities and decrease the importance of the
stale ones. Note that, some studies use a decay function of $\Delta_{j}$
instead, e.g. ${g(\Delta_{j})=1/\log(e+\Delta_{j})}$ (Zhang et al., 2019; Zhu
et al., 2017; Beutel et al., 2018). In our experiments we found no difference
between these approaches, and hence we simply use $\Delta_{j}$.
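As a rough illustration of Eqs. 1 and 2, the following PyTorch sketch assembles a signature by looking up the long-term embeddings of the history and appending each delta; `E` and the history format follow the earlier sketch and are our own naming, not the paper's implementation.

```python
import torch

def signature(E, history):
    """Sketch of Eqs. 1-2: E is the (U+I) x d long-term embedding table,
    history is a list of (index, delta) pairs. Returns a k x (d+1) matrix
    whose rows are [e_j ; Delta_j]."""
    rows = [torch.cat([E[idx], torch.tensor([delta], dtype=E.dtype)])
            for idx, delta in history]
    return torch.stack(rows)
```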
Second, to model recurring interaction and delay patterns in a history, we
employ shared and mutual RNN modules over the signatures, ${\bm{S}}_{u}(t)$
and ${\bm{S}}_{i}(t)$. Empirically, Gated Recurrent Units (GRUs) tend to give
better performance, so we use GRUs instead of the basic RNN. The
standard GRU model for capturing recurrence in a signature ${\bm{S}}(t)$ (user
or item), slightly modified to integrate $\Delta_{j}$, is given as
(3) $\displaystyle{\bm{z}}_{j}$
$\displaystyle=\sigma({\bm{W}}_{1z}{\bm{e}}_{j}+{\bm{b}}_{1z}+{\bm{W}}_{2z}\Delta_{j}+{\bm{b}}_{2z}+{\bm{W}}_{3z}{\bm{h}}_{j-1}+{\bm{b}}_{3z})$
(4) $\displaystyle{\bm{r}}_{j}$
$\displaystyle=\sigma({\bm{W}}_{1r}{\bm{e}}_{j}+{\bm{b}}_{1r}+{\bm{W}}_{2r}\Delta_{j}+{\bm{b}}_{2r}+{\bm{W}}_{3r}{\bm{h}}_{j-1}+{\bm{b}}_{3r})$
(5) $\displaystyle{\bm{n}}_{j}$
$\displaystyle=\tanh({\bm{W}}_{1n}{\bm{e}}_{j}+{\bm{b}}_{1n}+{\bm{W}}_{2n}\Delta_{j}+{\bm{b}}_{2n}+{\bm{z}}_{j}*({\bm{W}}_{3n}{\bm{h}}_{j-1}+{\bm{b}}_{3n}))$
(6) $\displaystyle{\bm{h}}_{j}$
$\displaystyle=(1-{\bm{r}}_{j})*{\bm{n}}_{j}+{\bm{r}}_{j}*{\bm{h}}_{j-1}$
where $\sigma$ is the sigmoid function and ${\bm{W}}_{pq}$, ${\bm{b}}_{pq}$,
${p\in\\{1,2,3\\}}$ and ${q\in\\{z,r,n\\}}$ are the parameters of the model
shared by the encoders; ${\bm{e}}_{j}$ corresponds to either
${\bm{e}}_{i_{j}}$ or ${\bm{e}}_{u_{j}}$ depending on the specified signature.
At each step $j$, a new hidden state ${\bm{h}}_{j}$ is computed using the
$j^{\textrm{th}}$ step inputs of ${\bm{S}}(t)$, _i.e._ the long-term embedding
${\bm{e}}_{j}$ and $\Delta_{j}$, and the previous hidden state
${\bm{h}}_{j-1}.$
Finally, we concatenate the hidden states of the GRU as
(7) ${\bm{F}}(t)=[{\bm{h}}_{1},\ldots,{\bm{h}}_{k}]$
in order to obtain a high-level feature matrix of the signature at time $t$
that captures recurring interaction and delay patterns. Again, depending on
the encoder, ${\bm{F}}(t)$ is either ${\bm{F}}_{u}(t)$ or ${\bm{F}}_{i}(t)$.
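A literal PyTorch sketch of the delta-aware GRU cell of Eqs. 3–6 follows; the gate equations mirror the paper as written (including the placement of ${\bm{z}}_{j}$ and ${\bm{r}}_{j}$), while the module structure and names are our assumptions.

```python
import torch

class DeltaGRUCell(torch.nn.Module):
    """Sketch of Eqs. 3-6: a GRU cell with an extra input Delta_j."""
    def __init__(self, d, h):
        super().__init__()
        self.lin_e = torch.nn.Linear(d, 3 * h)   # W_{1q} e_j + b_{1q}
        self.lin_dt = torch.nn.Linear(1, 3 * h)  # W_{2q} Delta_j + b_{2q}
        self.lin_h = torch.nn.Linear(h, 3 * h)   # W_{3q} h_{j-1} + b_{3q}

    def forward(self, e_j, dt_j, h_prev):
        ge = self.lin_e(e_j).chunk(3, dim=-1)
        gd = self.lin_dt(dt_j).chunk(3, dim=-1)
        gh = self.lin_h(h_prev).chunk(3, dim=-1)
        z = torch.sigmoid(ge[0] + gd[0] + gh[0])   # Eq. 3
        r = torch.sigmoid(ge[1] + gd[1] + gh[1])   # Eq. 4
        n = torch.tanh(ge[2] + gd[2] + z * gh[2])  # Eq. 5
        return (1 - r) * n + r * h_prev            # Eq. 6
```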
#### Alignment
Recall that both the user’s and item’s long-term embeddings live in the same
space, and the high-level features ${\bm{F}}_{u}(t)$ and ${\bm{F}}_{i}(t)$ are
derived based on such embeddings. Thus, as shown in Eq. 8, the alignment
component is used to inspect the compatibility between these features, to see
how well the recent events of $u$ and $i$ agree contextually.
(8) ${\bm{A}}(t)=\tanh({\bm{F}}_{u}(t)^{T}{\bm{F}}_{i}(t))$
We can interpret each row $j$ of ${\bm{A}}(t)\in{\mathbb{R}}^{k\times k}$ as a
measure of context agreement between the $j^{\textrm{th}}$ item in the given
user’s $(u)$ short-term history with all the users in the given item’s $(i)$
short-term history at time $t$. In Eq. 8, similar to (dos Santos et al., 2016;
Kefato and Girdzijauskas, 2020b), one can add more degree of freedom by
introducing a trainable parameter $\Theta\in{\mathbb{R}}^{d\times d}$
depending on the problem setting as in the following equation:
(9) ${\bm{A}}(t)=\tanh({\bm{F}}_{u}(t)^{T}\Theta{\bm{F}}_{i}(t))$
However, we have empirically observed that for the problem at hand, fixing
$\Theta$ to the identity matrix ${\bm{I}}$ gives a better result. When Eq. 9
is applied, DeePRed tends to overfit faster even with a strong regularization;
as a result, we opted for Eq. 8 instead. Hence, the only free parameters of
DeePRed are the long-term embedding ${\bm{E}}$ and the GRU parameters.
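A minimal sketch of Eq. 8 follows, assuming the $k$ hidden states are stored as the columns of ${\bm{F}}_{u}(t)$ and ${\bm{F}}_{i}(t)$ (both $d\times k$); the dimensions are illustrative, not the paper's settings.

```python
import torch

d, k = 128, 20                   # illustrative sizes
F_u = torch.randn(d, k)          # user encoder hidden states as columns
F_i = torch.randn(d, k)          # item encoder hidden states as columns
A = torch.tanh(F_u.T @ F_i)      # Eq. 8: k x k context-agreement matrix
```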
#### Attention + Projection
Finally, we want to pay attention to the strong context agreements in
${\bm{A}}(t)$, signaled by high scores, in order to obtain embeddings that
reflect short-term behaviours. In other words, we want to investigate the
compatibility between the recent interest of a user and the property of an
item to understand where the agreement lies. To this end, we compute attention
weights for each item in the user’s recent history (and vice-versa for each
user in the item’s recent history) using a column-wise (${\bm{X}}_{\bullet:}$)
and row-wise (${\bm{X}}_{:\bullet}$) max-pooling as shown in Eq. 10 and 11,
respectively.
(10) $\tilde{{\bm{u}}}(t)=\max{{\bm{A}}(t)_{\bullet:}}$ (11)
$\tilde{{\bm{i}}}(t)=\max{{\bm{A}}(t)_{:\bullet}}$
The $j^{\textrm{th}}$ component $\tilde{{\bm{u}}}_{j}(t)$ of the vector
${\tilde{{\bm{u}}}(t)\in{\mathbb{R}}^{k}}$ corresponds to the attention weight
of the $j^{\textrm{th}}$ event,
$(i_{j},\Delta_{j})\in{\mathbb{L}}_{u}(t^{<},k)$. It indicates:
* $\square$
the strongest alignment (contextual agreement) of the $j^{\textrm{th}}$ item
$i_{j}$ from all the users in the short-term history
${\mathbb{L}}_{i}(t^{<},k)$ of the item $i$
* $\square$
the hotness of the event
and it is the result of the column-wise pooling on the $j^{\textrm{th}}$ row,
$\max({\bm{A}}(t)_{j:})$. These two interpretations of the attention weights
are based on the assumption that future activities are governed by recent
actions and interest (Zhang et al., 2019; Nguyen et al., 2018; Zhu et al.,
2017). Inversely, stale events should have less impact on future interactions.
Equivalently, the $j^{\textrm{th}}$ component $\tilde{{\bm{i}}}_{j}(t)$ of
${\tilde{{\bm{i}}}(t)\in{\mathbb{R}}^{k}}$ represents the attention weights of
the $j^{\textrm{th}}$ event, $(u_{j},\Delta_{j})\in{\mathbb{L}}_{i}(t^{<},k)$
and it is the result of the row-wise pooling on the $j^{\textrm{th}}$ column,
$\max({\bm{A}}(t)_{:j})$. The interpretation remains the same.
In this way, each item in the user history and each user in the item history
are now scored in relation to their contextual agreement, from which we obtain
the compatibility between the interacting user and item. Alternatively, we
have used mean-pooling in Eq. 10 and 11 and empirically observed no
difference.
At this point, we _project_ a new point representing the short-term interest
and properties using the normalized attention weights. Eq. 12 and 13 compute
the user and item projection using the weighted sum of the features
${\bm{F}}_{u}(t)$ and ${\bm{F}}_{i}(t)$, respectively.
(12)
${\bm{u}}(t)={\bm{F}}_{u}(t)\cdot\texttt{softmax}(\tilde{{\bm{u}}}(t)^{T})$
(13)
${\bm{i}}(t)={\bm{F}}_{i}(t)\cdot\texttt{softmax}(\tilde{{\bm{i}}}(t)^{T})$
Both equations can be seen as feature selectors based on contextual agreement
and freshness. That is, they select those features that have a strong
contextual agreement and are relatively new as indicated by the magnitude of
the attention weights. The $\texttt{softmax}(\cdot)$ function gives us a
distribution of weights for events in the short-term history of $u$ and $i$.
That is, fresh and contextually agreeing events will get weights close to 1,
otherwise close to 0. We argue that the model can learn to distribute the
weights in this manner. Consequently, as desired,
features with weights close to 1 will govern the projections. We
consider ${\bm{u}}(t)$ and ${\bm{i}}(t)$ as predictions of the short-term
embeddings of the user and item at time $t$, respectively.
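Continuing the alignment sketch above, Eqs. 10–13 reduce to max-pooling and a softmax-weighted combination; the pooling directions below reflect our reading of the paper (max over the row for the user, over the column for the item), restated self-contained.

```python
import torch

d, k = 128, 20
F_u, F_i = torch.randn(d, k), torch.randn(d, k)
A = torch.tanh(F_u.T @ F_i)                      # Eq. 8
u_w = torch.softmax(A.max(dim=1).values, dim=0)  # Eq. 10, normalized weights
i_w = torch.softmax(A.max(dim=0).values, dim=0)  # Eq. 11, normalized weights
u_t = F_u @ u_w  # Eq. 12: predicted short-term user embedding, shape (d,)
i_t = F_i @ i_w  # Eq. 13: predicted short-term item embedding, shape (d,)
```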
### 3.2. Training DeePRed
Similarly to previous work (Kumar et al., 2019), DeePRed predicts the user and
item embeddings, albeit in a different manner. Thus, we employ a similar loss
function using mean squared error. Our goal is to jointly train the long-term
and short-term embeddings in order to bring the projection of frequently
interacting items as close as possible. To this end, we minimize the $L_{2}$
distance as
(14)
${\mathcal{L}}=\min\frac{1}{N}\sum_{(u,i,t)\in{\mathbb{L}}_{train}}||{\bm{u}}(t)-{\bm{i}}(t)||_{2}^{2}+{\mathcal{L}}_{reg}$
where $N$ is the batch size for batch training and ${\mathbb{L}}_{train}$ is
the observed event log in the training set. The second term on the RHS of Eq.
14, a regularization loss, is introduced to avoid the trivial solution of
collapsing into a subspace. It is motivated by the Laplacian eigenmaps method,
which adds the constraint ${\bm{u}}(t)^{T}{\bm{i}}(t)=1$ to avoid the
collapse. Therefore, we specify ${\mathcal{L}}_{reg}$ as
(15) ${\mathcal{L}}_{reg}=\gamma*||{\bm{v}}^{T}{\bm{v}}-{\bm{I}}||_{F}^{2}$
where ${\bm{v}}=[{\bm{u}}(t);{\bm{i}}(t)]\in{\mathbb{R}}^{d\times 2}$ and
$\gamma$ is a regularization coefficient. ${\mathcal{L}}_{reg}$ encourages
points to be similar to themselves but not others. Given that we predict
embeddings following (Belkin and Niyogi, 2003; Kumar et al., 2019) as opposed
to scores as in (Dai et al., 2016b), we do not need a contrastive loss in
Eq. 14.
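A sketch of the training objective of Eqs. 14 and 15 for a batch of predicted embeddings; applying the Gram regularizer per example and the default `gamma` are our assumptions about unstated details.

```python
import torch

def deepred_loss(u_t, i_t, gamma=0.01):
    """u_t, i_t: N x d predicted short-term embeddings of a batch."""
    mse = ((u_t - i_t) ** 2).sum(dim=1).mean()        # Eq. 14, L2 term
    v = torch.stack([u_t, i_t], dim=2)                # N x d x 2, v = [u(t); i(t)]
    gram = v.transpose(1, 2) @ v                      # N x 2 x 2, v^T v
    eye = torch.eye(2, dtype=u_t.dtype)
    reg = ((gram - eye) ** 2).sum(dim=(1, 2)).mean()  # Eq. 15, squared Frobenius norm
    return mse + gamma * reg
```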
Since our algorithm is designed in such a way that the short-term embeddings
at time $t$ are not dependent on the ones at time $t-1$, batching is
straightforward and DeePRed incurs no overhead from batch processing, unlike
the work of Kumar et al. (Kumar et al., 2019). Together with the design choices
explained above, this makes DeePRed efficient, as demonstrated in Section 4.
### 3.3. DeePRed for Static Networks
DeePRed requires only minor changes to be applicable to static interaction
networks, as explained below.
The first obvious change is the lack of time, and consequently the lack of
order; we consider ${\mathbb{L}}$ to be an unordered set. Thus, the notion of
“long-term” and “short-term” interactions is meaningless. Instead, the
equivalent idea in static networks is “global” for “long-term” and “context-
aware” for “short-term”. Global interactions are modeled as
${\bm{e}}_{u}|{\mathbb{L}}_{u}$ (or ${\bm{e}}_{i}|{\mathbb{L}}_{i}$) using
almost all the observed events in no specific order. We refer to the
corresponding embeddings as _global embeddings_. Similarly, context-aware
interactions are modeled using _context-aware embeddings_
${\bm{u}}|{\mathbb{L}}_{u}(k)$ or ${\bm{i}}|{\mathbb{L}}_{i}(k)$ conditioned
on $k$ randomly sampled events. The context-aware embeddings are in line with
recent studies that argue against the adequacy of using a single embedding per
node (Epasto and Perozzi, 2019; Liu et al., 2019; Yang et al., 2020; Kefato
and Girdzijauskas, 2020a; Tu et al., 2017). Each node, instead, is represented
by multiple embeddings reflecting the multi-dimensional aspect of a node’s
interest or property.
Thus, the input is specified by each interaction $(u,i)\in{\mathbb{L}}$ and
$k$. The user and item encoders take ${\mathbb{L}}_{u}(k)$ and
${\mathbb{L}}_{i}(k)$; encoding amounts to a simple embedding lookup and
concatenation operation, to generate ${\bm{F}}_{u}$ and ${\bm{F}}_{i}$
ignoring the GRU model. The follow-up steps are a straightforward application
of the _alignment_ first, followed by _attention + projection_ to obtain the
context-aware embeddings ${\bm{u}}$ and ${\bm{i}}$.
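For the static variant, encoding thus reduces to an embedding lookup and stacking, as in this sketch (our naming; `E` is the global embedding table):

```python
import torch

def static_encode(E, sampled_ids):
    """Sketch of the static encoder: stack the global embeddings of the
    k sampled counterparts as columns, yielding a d x k feature matrix F."""
    return torch.stack([E[j] for j in sampled_ids], dim=1)
```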
## 4\. Empirical Evaluation
We evaluate the performance of the proposed algorithm using three real-world
temporal interaction networks and we compare DeePRed against seven state-of-
the-art baselines.
| Method | Reddit MRR | Reddit Recall@10 | Wikipedia MRR | Wikipedia Recall@10 | LastFM MRR | LastFM Recall@10 | Min. % improvement of DeePRed (MRR) | Min. % improvement of DeePRed (Recall@10) |
|---|---|---|---|---|---|---|---|---|
| lstm | 0.355 | 0.551 | 0.329 | 0.455 | 0.062 | 0.119 | 133.23 % | 51.17 % |
| TimeLstm | 0.387 | 0.573 | 0.247 | 0.342 | 0.068 | 0.137 | 113.95 % | 45.37 % |
| rrn | 0.603 | 0.747 | 0.522 | 0.617 | 0.089 | 0.182 | 37.13 % | 11.51 % |
| LatentCross | 0.421 | 0.588 | 0.424 | 0.481 | 0.148 | 0.227 | 96.67 % | 41.67 % |
| ctdne | 0.165 | 0.257 | 0.035 | 0.056 | 0.01 | 0.01 | 401.81 % | 224.12 % |
| DeepCoEvolve | 0.171 | 0.275 | 0.515 | 0.563 | 0.019 | 0.039 | 71.84 % | 57.90 % |
| Jodie | 0.726 | 0.852 | 0.746 | 0.822 | 0.195 | 0.307 | 14.04 % | -2.23 % |
| DeePRed | 0.828 | 0.833 | 0.885 | 0.889 | 0.393 | 0.416 | - | - |
| % gain over Jodie | 14.04 % | -2.23 % | 18.63 % | 8.15 % | 101.53 % | 35.50 % | - | - |
Table 1. The comparison of the empirical results between DeePRed and the
baseline methods for the three temporal datasets. Bold and blue highlight
indicate best and second best performing algorithms, respectively
### 4.1. Datasets
The three publicly available datasets we selected are the following:
* •
Reddit (Kumar et al., 2019) contains post interactions by users on subreddits
(items), over a period of one month. The most active users (10,000) and items
(1,000) are collected, with 672,447 interactions in total. Actions are
repeated 79% of the time.
* •
Wikipedia (Kumar et al., 2019) contains edit interactions by editors (users)
on Wikipedia pages (items) over a period of one month. 8,227 editors with at
least 5 edits and the 1,000 most edited pages are included, for a total of
157,474 interactions. Actions are repeated 61% of the time.
* •
LastFM (Kumar et al., 2019) contains listening activities by users on songs
(items), over a period of one month, restricted to 1,000 users who listened to
the 1,000 most-listened songs, with 1,293,103 interactions in total. Actions
are repeated 8.6% of the time.
### 4.2. Baselines
We compare DeePRed with seven state-of-the-art algorithms commonly used in
recommender systems, grouped as follows:
* •
Sequence models are different flavors of RNNs trained based on item-sequence
data: lstm, TimeLstm (Zhu et al., 2017), rrn (Wu et al., 2017), LatentCross
(Beutel et al., 2018)
* •
Bipartite models are baselines based on bipartite interaction graph and employ
mutually recursive RNNs: DeepCoEvolve (Dai et al., 2016a), Jodie (Kumar et
al., 2019).
* •
Graph-based model: finally, we have ctdne (Nguyen et al., 2018), based on continuous-time graph
embedding using temporal random walks.
### 4.3. Next Item Prediction Experiment
Based on observations of recent interactions with items, the goal is to
predict the next item a user is likely to interact with. This task lies at
the backbone of a number of RSs.
#### Setting
We use the same partitions used by Kumar et al. (Kumar et al., 2019), _i.e._
data is partitioned by respecting the temporal ordering of events as training
(80%), validation (10%), and test (10%). During training, we use the
validation set to tune the hyperparameters of our model using Bayesian
optimization.
During testing, given a ground-truth interaction $(u,i,t)$, DeePRed predicts a
ranked list of the top-$k$ items that $u$ will interact with at time $t$,
based on previous interactions ${\mathbb{L}}_{u}(t^{<},k)$ and
${\mathbb{L}}_{i}(t^{<},k)$. Since DeePRed predicts short-term embeddings, as
opposed to interaction probabilities, we can use an efficient nearest-neighbor
search to predict the top-$k$ items. We use mean reciprocal rank (MRR) and
Recall@10 to measure the quality of the ranked list.
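The evaluation thus reduces to a nearest-neighbor ranking, as in the following NumPy sketch; the distance metric and tie handling are our assumptions.

```python
import numpy as np

def evaluate(user_embs, item_embs, true_items):
    """MRR and Recall@10 sketch: user_embs is T x d (one prediction per
    test event), item_embs is I x d, true_items holds ground-truth indices."""
    rr, hits = [], []
    for u_t, true_i in zip(user_embs, true_items):
        dist = np.linalg.norm(item_embs - u_t, axis=1)  # NN search by L2
        rank = 1 + int((dist < dist[true_i]).sum())     # 1-based rank of truth
        rr.append(1.0 / rank)
        hits.append(rank <= 10)
    return float(np.mean(rr)), float(np.mean(hits))
```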
#### Results
Results are reported in Table 1. Since all the settings are exactly the same,
the figures for all the baselines are directly taken from Kumar et al. (Kumar
et al., 2019).
DeePRed outperforms all the baselines by a significant margin in all but one
case. Almost all the baselines have a huge gap between MRR and Recall@10,
unlike the small gap of DeePRed. This shows that DeePRed ranks the ground
truth higher, while others simply detect it in lower positions in the top-10
predicted items. For example, for the only case where Jodie beats DeePRed by a
small margin, the Recall@1 is 0.648 for Jodie and 0.813 for DeePRed.
#### Effect of features
One might ask, and rightly so, why not include a richer set of features in
DeePRed, as in previous works (Beutel et al., 2018; Kumar et al., 2019; Dai et
al., 2016a). First, some of these features (software client, page) are not
easily accessible (Beutel et al., 2018). Other features, such as the textual
content, could be easily integrated into our model without affecting the
architecture; in any case, we found no difference for the three datasets. To verify
this, we further investigated what happens when textual features are removed
from the strongest baseline, Jodie. As shown in Table 2, JodieNF
(Jodie with no features) performs as well as Jodie, if not better, for the two
datasets with textual interaction features.
| Method | Reddit MRR | Reddit Recall@10 | Wikipedia MRR | Wikipedia Recall@10 |
|---|---|---|---|---|
| Jodie | 0.726 | 0.852 | 0.746 | 0.822 |
| JodieNF | 0.726 | 0.852 | 0.759 | 0.824 |
Table 2. Jodie vs JodieNF
### 4.4. Runtime Experiment
Figure 4. The computational time (in minutes) required to complete an epoch
using the Reddit dataset.
To empirically compare DeePRed’s efficiency, we measured the time needed to
run the models. In Fig. 4, we report the comparison between the methods for
completing an epoch using the Reddit dataset. We see that DeePRed is much
faster than all the baselines. Since we are using the figures from (Kumar et
al., 2019), Fig. 4 might not be a fair comparison as the machines are
different. Hence, we reran Jodie on our machine, and it took 15 minutes to
complete the same epoch, showing that the speedup by DeePRed is even better,
more than an order of magnitude.
### 4.5. Hyperparameter sensitivity experiment
In this section, we analyze the effect of different hyperparameters of the
methods on next item prediction. We simply compare DeePRed with Jodie, since
it is much better than all the other baselines.
#### Impact of proportion of training size
Despite their gap, as shown in Fig. 5, for both methods the observation of 60%
of the events is sufficient for effective next item prediction on Reddit and
Wikipedia. On the contrary, DeePRed executed on LastFM keeps improving as
repeated actions are sparse and patterns might emerge from observing more
examples.
#### Impact of Embedding Size
Fig. 6 shows the impact of the embedding size; for DeePRed, 128 is an optimal
value, while for Jodie this parameter has almost no influence.
#### Effect of $k$
Parameter $k$, the number of short-term events in ${\mathbb{L}}_{u}(t^{<},k)$
and ${\mathbb{L}}_{i}(t^{<},k)$, affects DeePRed only. Our findings are
reported in Fig. 7; we observe that $k$ has different effects across datasets.
In LastFM, increasing the number of events produces an improvement; in Reddit,
there is no effect; in Wikipedia, a declining effect can be observed. Recall
that actions are seldom repeated globally in LastFM, implying that repeated
actions are locally sparse; for this reason, interaction patterns are detected
by increasing the volume of retrospective observations.
Figure 5. Effect of training proportion Figure 6. Effect of embedding size
Figure 7. Effect of the short-term history size
### 4.6. Static Networks’ Experiment
We discuss now our experiments carried out on three static networks. Although
DeePRed performs well, our goal here is to show its potential and flexibility,
rather than report its superiority.
#### Datasets
We use the following static interaction networks:
* •
MATADOR (Manually Annotated Targets and Drugs Online Resource) (S et al.,
2008) is a drug-target interaction network, with 801 drugs (users), 2,901
targets (items), and 15,843 interactions.
* •
SIDER (Side Effect Resource version 4.1) (M et al., 2015) is a drug (user) and
side-effects (item) association dataset. There are 639 users, 10,184 items and
174,977 interactions (associations).
* •
STEAM (ste, [n.d.]) is a popular PC gaming hub dataset, containing games
(items) users have purchased. There are 12,393 users, 5,155 games, and 129,511
purchasing actions.
#### Baselines
We use four baselines grouped as follows:
* •
Context-aware: Splitter (Epasto and Perozzi, 2019) is a SOTA context-aware
baseline; similarly to DeePRed, it learns multiple embeddings of nodes for
static networks.
* •
Context-free: Deepwalk (Perozzi et al., 2014), Node2vec (Grover and Leskovec,
2016), and line (Tang et al., 2015) are popular baselines used for static
network embedding.
Figure 8. The Average Precision result for interaction prediction on static
networks
The interaction prediction is executed as a link prediction task, where we
create a random partition of the graph as training (60%), validation (10%),
and test (30%) sets. In addition, we randomly sample non-existing (negative)
interactions proportional to the test set (30%). An algorithm is trained on
the training set and tuned on the validation set. The average precision (AP),
which summarizes the precision-recall curve, is then computed based on a
method’s capacity to rank the test (true) over negative (false) interactions.
The results are reported in Fig. 8, where we see that DeePRed is comparable
with the context-aware baseline and much better than the context-free baselines.
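As an illustration of this protocol, average precision can be computed with scikit-learn once each candidate pair is scored; the labels and scores below are placeholders, not results from the paper.

```python
from sklearn.metrics import average_precision_score

labels = [1, 1, 0, 1, 0, 0]               # test (true) vs. negative pairs
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]   # hypothetical model scores
ap = average_precision_score(labels, scores)  # summarizes the PR curve
```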
## 5\. Related Work
Factorization methods have significantly influenced the study of recommender
systems (RS), more prominently since the Netflix prize competition. However,
as deep neural networks (DNNs) gained momentum across several domains, several
studies have shown the effectiveness of DNNs in RS as well (Covington et al.,
2016; Jing and Smola, 2017; Wang et al., 2019; Wu et al., 2017). Early efforts
used a vanilla DNN architecture by integrating crafted and learned features
into the models (Covington et al., 2016).
As recurring patterns in user-item interactions are considered to be critical
in recommending or predicting future activities, recurrent neural networks
(RNNs) and their variants have been widely used in interaction prediction and RS.
### 5.1. RNNs for Recommender Systems
RNNs are inherently suited for modeling patterns in sequential data, such as
language and time-series. Due to their effectiveness, they have seen
applicability in different areas, such as NLP, speech recognition, computer
vision, and health, just to name a few.
Initial efforts in RS have employed RNNs by simply using a sequence of user
actions in order to capture repeated user activity patterns, and model their
preference or behavior (Wu et al., 2017; Tan et al., 2016; Hidasi et al.,
2015). This approach has further been used to predict interesting items for
users based on their preference, for example on platforms like YouTube,
Spotify, LastFM. However, standard RNNs and their variants (LSTM, GRU) can only
capture recurrence and do not encode the delay or interval between activities,
which is an intrinsic property of user behaviours. This matters because activities
that are close to an event in time are more likely to trigger such an event than
the ones that are far apart.
### 5.2. Time-based RNNs
Motivated by the aforementioned need, extensions to RNNs (LSTM, GRU) have been
introduced to account for time. In addition to the existing gating mechanisms
in RNNs, these studies have introduced different time-gating mechanisms to
favor new events and discount the impact of old ones (Zhu et al., 2017; Zhang
et al., 2019). Novelty and oldness here refer to the delta in time, not to the
position of events in a sequence.
### 5.3. Mutual RNNs
Closely related to our study, recently mutual RNNs for next item prediction
have been proposed (Kumar et al., 2019; Dai et al., 2016a). A simple yet
powerful aspect of these approaches is the bipartite temporal interaction
network model and the mutual RNN architecture, which paved the way to examining
user-item interaction dynamics. However, besides the essential differences in
modeling short-term embeddings of users and items, DeePRed is also different
in using shared and non-recursive mutual RNNs.
### 5.4. Other methods
Besides RNNs, other methods such as graph neural networks (GNN) and
transformers have also been employed in RS (Vaswani et al., 2017). The former
was introduced for neural collaborative-filtering and session-based RS (Wang
et al., 2020; Wu et al., 2019; Xu et al., 2019; Wang et al., 2019). Due to the
ever increasing impact of transformers for modeling sequential data, several
studies proposed this model for predicting next basket items (Kang and
McAuley, 2018; Sun et al., 2019; Xu et al., 2019). Training transformers has
proved to be much more efficient than RNNs, as they are highly parallelizable.
However, the core component of transformers, _self-attention_, has the
tendency to distribute attention weights broadly, discounting the impact of local
dependencies (Xu et al., 2019).
## 6\. Conclusion and Future Work
In this study we present a novel algorithm called DeePRed for next item
prediction in temporal interaction networks. Building on recent
achievements, DeePRed captures the mutual dynamics in the
interactions between users and items. We propose a simple yet powerful
mechanism to model both user and item short-term preferences based on
their recent interaction history. The history serves as a proxy for the context
of interaction in recent events. We leverage the mechanism to avoid recursive
dependency between consecutive short-term embeddings of a user or an item over
time. Our design enables DeePRed to be effective in predicting next item
interaction without compromising efficiency.
Our empirical findings on three real-world datasets demonstrate the
effectiveness of DeePRed over seven SOTA baselines by at least 14%. In
addition, DeePRed is at least an order of magnitude faster than the best
performing baselines.
We have also shown that the design of DeePRed is flexible enough to
accommodate static networks. As a demonstration, we show how well it performs
for interaction prediction over bio-chemical and gaming interaction networks.
Though maintaining multiple embeddings in DeePRed is what lies behind its
effectiveness, it comes at the cost of memory. As GPU memory is expensive,
this calls for an improved design for DeePRed, which will be addressed in
future work.
## Appendix A DeePRed Configuration
Table A1 shows the final configurations of DeePRed used to report the results
in Section 4. The experiments are executed on an NVIDIA QUADRO RTX 5000 GPU
with NVLink, 3072 CUDA cores, and 16 GB GDDR6 memory.
Table A1. The configuration of DeePRed’s hyperparameters
## References
* (1)
* ste ([n.d.]) [n.d.]. https://www.kaggle.com/tamber/steam-video-games.
* Belkin and Niyogi (2003) M. Belkin and P. Niyogi. 2003. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. _Neural Computation_ 15, 6 (2003), 1373–1396.
* Beutel et al. (2018) Alex Beutel, Paul Covington, Sagar Jain, Can Xu, Jia Li, Vince Gatto, and Ed H. Chi. 2018. Latent Cross: Making Use of Context in Recurrent Recommender Systems. In _Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining_ (Marina Del Rey, CA, USA) _(WSDM ’18)_. Association for Computing Machinery, New York, NY, USA, 46–54. https://doi.org/10.1145/3159652.3159727
* Buza and Peška (2017) Krisztian Buza and Ladislav Peška. 2017. Drug–target interaction prediction with Bipartite Local Models and hubness-aware regression. _Neurocomputing_ 260 (2017), 284 – 293. https://doi.org/10.1016/j.neucom.2017.04.055
* Covington et al. (2016) Paul Covington, Jay Adams, and Emre Sargin. 2016\. Deep Neural Networks for YouTube Recommendations. In _Proceedings of the 10th ACM Conference on Recommender Systems_ (Boston, Massachusetts, USA) _(RecSys ’16)_. Association for Computing Machinery, New York, NY, USA, 191–198. https://doi.org/10.1145/2959100.2959190
* Dai et al. (2016a) Hanjun Dai, Yichen Wang, Rakshit Trivedi, and Le Song. 2016a. Deep Coevolutionary Network: Embedding User and Item Features for Recommendation. arXiv:1609.03675 [cs.LG]
* Dai et al. (2016b) Hanjun Dai, Yichen Wang, Rakshit Trivedi, and Le Song. 2016b. Recurrent Coevolutionary Feature Embedding Processes for Recommendation. _CoRR_ abs/1609.03675 (2016). arXiv:1609.03675 http://arxiv.org/abs/1609.03675
* dos Santos et al. (2016) Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016\. Attentive Pooling Networks. arXiv:1602.03609 [cs.CL]
* Epasto and Perozzi (2019) Alessandro Epasto and Bryan Perozzi. 2019. Is a Single Embedding Enough? Learning Node Representations that Capture Multiple Social Contexts. _The World Wide Web Conference on - WWW ’19_ (2019). https://doi.org/10.1145/3308558.3313660
* Grover and Leskovec (2016) Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable Feature Learning for Networks. _CoRR_ abs/1607.00653 (2016). arXiv:1607.00653 http://arxiv.org/abs/1607.00653
* Hidasi et al. (2015) Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2015. Session-based Recommendations with Recurrent Neural Networks. arXiv:1511.06939 [cs.LG]
* Jing and Smola (2017) How Jing and Alexander J. Smola. 2017. Neural Survival Recommender. In _Proceedings of the Tenth ACM International Conference on Web Search and Data Mining_ (Cambridge, United Kingdom) _(WSDM ’17)_. Association for Computing Machinery, New York, NY, USA, 515–524. https://doi.org/10.1145/3018661.3018719
* Kang and McAuley (2018) Wang-Cheng Kang and Julian McAuley. 2018. Self-Attentive Sequential Recommendation. arXiv:1808.09781 [cs.IR]
* Kefato and Girdzijauskas (2020a) Zekarias T. Kefato and Sarunas Girdzijauskas. 2020a. Gossip and Attend: Context-Sensitive Graph Representation Learning. In _14-th International AAAI Conference on Web and Social Media_ _(ICWSM’20)_. arXiv:2004.00413 [cs.LG] https://arxiv.org/abs/2004.00413
* Kefato and Girdzijauskas (2020b) Zekarias T. Kefato and Sarunas Girdzijauskas. 2020b. Graph Neighborhood Attentive Pooling. arXiv:2001.10394 [cs.LG]
* Koren (2009) Yehuda Koren. 2009\. Collaborative Filtering with Temporal Dynamics. In _Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (Paris, France) _(KDD ’09)_. Association for Computing Machinery, New York, NY, USA, 447–456. https://doi.org/10.1145/1557019.1557072
* Kumar et al. (2019) Srijan Kumar, Xikun Zhang, and Jure Leskovec. 2019\. Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks. In _Proceedings of the 25th ACM SIGKDD international conference on Knowledge discovery and data mining_.
* Liu et al. (2019) Ninghao Liu, Qiaoyu Tan, Yuening Li, Hongxia Yang, Jingren Zhou, and Xia Hu. 2019\. Is a Single Vector Enough? _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (Jul 2019). https://doi.org/10.1145/3292500.3330967
* M et al. (2015) Kuhn M, Letunic I, Jensen LJ, and Bork P. 2015\. The SIDER database of drugs and side effects.. In _Nucleic Acids Res_. https://doi.org/10.1093/nar/gkv1075
* Nguyen et al. (2018) Giang Hoang Nguyen, John Boaz Lee, Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, and Sungchul Kim. 2018. Continuous-Time Dynamic Network Embeddings. In _Companion Proceedings of the The Web Conference 2018_ (Lyon, France) _(WWW ’18)_. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 969–976. https://doi.org/10.1145/3184558.3191526
* Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014\. DeepWalk: Online Learning of Social Representations. _CoRR_ abs/1403.6652 (2014). arXiv:1403.6652 http://arxiv.org/abs/1403.6652
* S et al. (2008) Günther S, Kuhn M, Dunkel M, Campillos M, Senger C, Petsalaki E, Ahmed J, Urdiales EG, Gewiess A, Jensen LJ, Schneider R, Skoblo R, Russell RB, Bourne PE, Bork P, and Preissner R. 2008\. SuperTarget and Matador: resources for exploring drug-target relationships.. In _Nucleic Acids Res_.
* Sun et al. (2019) Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. In _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_ (Beijing, China). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3357384.3357895
* Tan et al. (2016) Yong Kiam Tan, Xinxing Xu, and Yong Liu. 2016. Improved Recurrent Neural Networks for Session-Based Recommendations. In _Proceedings of the 1st Workshop on Deep Learning for Recommender Systems_. ACM, 17–22. https://doi.org/10.1145/2988450.2988452
* Tang et al. (2015) Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015\. LINE: Large-scale Information Network Embedding. _CoRR_ abs/1503.03578 (2015). arXiv:1503.03578 http://arxiv.org/abs/1503.03578
* Tu et al. (2017) Cunchao Tu, Han Liu, Zhiyuan Liu, and Maosong Sun. 2017\. CANE: Context-Aware Network Embedding for Relation Modeling. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Vancouver, Canada, 1722–1731. https://doi.org/10.18653/v1/P17-1158
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ (Long Beach, California, USA) _(NIPS’17)_. Curran Associates Inc., Red Hook, NY, USA, 6000–6010.
* Wang et al. (2019) Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural Graph Collaborative Filtering. In _Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval_ (Paris, France) _(SIGIR’19)_. Association for Computing Machinery, New York, NY, USA, 165–174. https://doi.org/10.1145/3331184.3331267
* Wang et al. (2020) Ziyang Wang, Wei Wei, Gao Cong, Xiao-Li Li, Xian-Ling Mao, and Minghui Qiu. 2020\. Global Context Enhanced Graph Neural Networks for Session-Based Recommendation. In _Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval_ (Virtual Event, China) _(SIGIR ’20)_. Association for Computing Machinery, New York, NY, USA, 169–178. https://doi.org/10.1145/3397271.3401142
* Wu et al. (2017) Chao-Yuan Wu, Amr Ahmed, Alex Beutel, Alexander J. Smola, and How Jing. 2017. Recurrent Recommender Networks. In _Proceedings of the Tenth ACM International Conference on Web Search and Data Mining_ (Cambridge, United Kingdom) _(WSDM ’17)_. Association for Computing Machinery, New York, NY, USA, 495–503. https://doi.org/10.1145/3018661.3018689
* Wu et al. (2019) Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. 2019\. Session-Based Recommendation with Graph Neural Networks. _Proceedings of the AAAI Conference on Artificial Intelligence_ 33 (Jul 2019), 346–353. https://doi.org/10.1609/aaai.v33i01.3301346
* Xu et al. (2019) Chengfeng Xu, Pengpeng Zhao, Yanchi Liu, Victor S. Sheng, Jiajie Xu, Fuzhen Zhuang, Junhua Fang, and Xiaofang Zhou. 2019\. Graph Contextualized Self-Attention Network for Session-based Recommendation. In _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19_. International Joint Conferences on Artificial Intelligence Organization, 3940–3946. https://doi.org/10.24963/ijcai.2019/547
* Yang et al. (2020) Carl Yang, Aditya Pal, Andrew Zhai, Nikil Pancha, Jiawei Han, Charles Rosenberg, and Jure Leskovec. 2020. MultiSage: Empowering GCN with Contextualized Multi-Embeddings on Web-Scale Multipartite Networks. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ _(KDD ’20)_. ACM, 2434–2443. https://doi.org/10.1145/3394486.3403293
* You et al. (2019) Jiaying You, Robert D. McLeod, and Pingzhao Hu. 2019\. Predicting drug-target interaction network using deep learning model. _Computational Biology and Chemistry_ (2019). http://www.sciencedirect.com/science/article/pii/S1476927119301902
* Zhang et al. (2019) Yuan Zhang, Xi Yang, Julie Ivy, and Min Chi. 2019\. ATTAIN: Attention-based Time-Aware LSTM Networks for Disease Progression Modeling. In _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19_. International Joint Conferences on Artificial Intelligence Organization, 4369–4375. https://doi.org/10.24963/ijcai.2019/607
* Zhu et al. (2017) Yu Zhu, Hao Li, Yikang Liao, Beidou Wang, Ziyu Guan, Haifeng Liu, and Deng Cai. 2017. What to Do next: Modeling User Behaviors by Time-LSTM. In _Proceedings of the 26th International Joint Conference on Artificial Intelligence_ (Melbourne, Australia) _(IJCAI’17)_. AAAI Press, 3602–3608.
* Zitnik et al. (2018) Marinka Zitnik, Monica Agrawal, and Jure Leskovec. 2018\. Modeling polypharmacy side effects with graph convolutional networks. _Bioinformatics_ (2018). https://doi.org/10.1093/bioinformatics/bty294
|
# Generalized Step-Chirp Sequences With Flexible Bandwidth
Cheng Du, Yi Jiang Department of Communication Science and Engineering
Fudan University
Shanghai, China
Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract
Sequences with low aperiodic autocorrelation sidelobes have been extensively
studied in the literature. With a sufficiently low integrated sidelobe level
(ISL), their power spectra are asymptotically flat over the whole frequency
domain. However, for beam sweeping in massive multi-input multi-output
(MIMO) broadcast channels, the flat spectrum should be confined to a passband
with tunable bandwidth to achieve flexible tradeoffs between the beamforming
gain and the beam sweeping time. Motivated by this application, we construct
a family of sequences, termed generalized step-chirp (GSC) sequences, with a
closed-form expression in which some parameters can be tuned to adjust the
bandwidth flexibly. Beyond the application to beam sweeping, some GSC
sequences are closely connected with Mow's unified construction of sequences
with perfect periodic autocorrelations, and may have a coarser phase
resolution than the Mow sequence while their ISLs are comparable.
## I Introduction
Sequences with low aperiodic autocorrelation sidelobes are desirable in
communications and radar engineering, e.g., some of the chirp-like sequences
developed in [1, 2, 3, 4, 5]. With a low integrated sidelobe level (ISL),
these sequences have quite flat spectra [6], which can be utilized to
achieve omnidirectional precoding in broadcast channels [7].
In 5G NR broadcast channels, the discrete Fourier transform (DFT) codebook is
adopted for broadcasting common messages [8, Section 6.1.6.3] in the initial
stage of communication. With the energy concentrated in the pointing
direction, the maximum beamforming gain can be achieved by the DFT codebook.
But for future wireless communication systems with a massive number of
antennas, the resultant beam would be too narrow, thus requiring many rounds
of beam sweeping to cover the whole angular domain. In contrast, the chirp-like
sequence-based omnidirectional beamforming spreads the energy over the whole
angular domain, thus avoiding beam sweeping and improving time efficiency.
Omnidirectional beamforming, however, has no beamforming gain,
and therefore may have insufficient range coverage for the millimeter-wave or
terahertz-wave communication systems where a high beamforming gain is required
for compensating the severe path loss.
To circumvent such a dilemma, it is desirable to achieve flexible tradeoffs
between the beamforming gain and the beam sweeping time, as pursued by the
3GPP [9]. From a spectral perspective, we aim to design sequences whose
power variation in the passband and power leakage in the stopband are as
small as possible, and whose passband bandwidth is flexibly tunable.
Besides, their entries should have equal amplitudes to maximize
the energy efficiency of power amplifiers (PAs), and their phase resolutions
should be coarse for the implementation using a low-cost phase shifter network
(PSN).
The literature on this topic includes numerical optimizations [10, 11, 12]
and some schemes with closed-form solutions [9, 13, 14, 15, 16]. Compared with
the numerical optimizations, the schemes with closed-form solutions are easier
for hardware implementation, but the bandwidth is less flexible except for the
scheme in [15]. The sequence inferred from [15], referred to as the
generalized chirp (GC) sequence in this paper, has bandwidth as flexible as
that of the numerical counterparts, and its spectrum in the passband is
asymptotically flat [15]. Nevertheless, for the GC sequence, the phase
resolution of the PSN is too fine to be cost-effective when the number of
antennas is large, as shown in our simulations.
In recent years, polyphase sequences with low correlations and spectrally-null
constraints were constructed in [17, 18, 19, 20], whose $N$-point spectra
(with $N$ being the sequence length) are ideally flat in the passbands and are
ideally null in the stopbands. Nevertheless, the $N$-point spectrum is
insufficient for beamforming because the user equipments (UEs) are distributed
in a continuous angular range, rather than the $N$ discrete directions.
Besides, the passbands are interleaved with the stopbands [20] and the
bandwidths are less flexible. Hence, they are still not suitable for beam
sweeping.
To achieve flexible tradeoffs between the beamforming gain and the beam
sweeping time, in this paper we construct a family of polyphase sequences with
flexible bandwidth, termed the generalized step-chirp (GSC) sequence. The
GSC sequence enjoys a coarser phase resolution than the GC sequence. Besides,
when the passband stretches over the whole frequency domain, the GSC sequence
degenerates into a low-ISL sequence closely connected with the Mow sequence
[5] with perfect periodic autocorrelation, and may require a coarser phase
resolution than the Mow sequence.
Notations: $\lfloor\cdot\rfloor$ denotes the floor function.
${\mathbb{Z}}^{+}$ represents the set of positive integers,
${\mathbb{Z}}_{n}=\\{0,1,\cdots,n-1\\}$. $\omega_{N}=e^{j\frac{2\pi}{N}}$.
$\lVert\cdot\rVert$ is the Frobenius norm. For $x,y,s,t\in{\mathbb{R}}$,
$x\equiv y\mod s$ means that $x-y$ is an integer multiple of $s$;
$x=y\mod[s,t)$ means that $x-y$ is an integer multiple of $\left\lvert
s-t\right\rvert$ and $y\in[s,t)$; $x=y\mod s$ is equivalent to $x=y\mod[0,s)$.
## II Preliminaries
In this section, we review two kinds of passive beamforming for common
message broadcasting: the conventional beam sweeping based on the DFT codebook
[8, Section 6.1.6.3] in Section II-A and the omnidirectional beamforming based
on the chirp-like sequence [5] in Section II-B.
### II-A DFT Codebook-based Beam Sweeping
Consider a uniform linear array (ULA) of $N$ isotropic antennas with half
wavelength spacing. Given a beamforming vector ${\bf
a}=[a_{0},a_{1},\cdots,a_{N-1}]$ with $\left\lvert
a_{n}\right\rvert=\frac{1}{\sqrt{N}},n\in{\mathbb{Z}}_{N}$, the radiated power
at azimuth angle $\theta$ and elevation angle $\varphi$ is
$y(u)=\left\lvert\sum_{n=0}^{N-1}a_{n}e^{-j\pi nu}\right\rvert^{2}$ (1)
where $u=\cos\varphi\cos\theta$. Note that $-1\leq u\leq 1$, hence $y(u)$ is
essentially the power spectrum of the sequence ${\bf a}$.
A DFT codeword is ${\bf d}(u_{0})=[d_{0},d_{1},\cdots,d_{N-1}]$ with
$d_{n}=\frac{1}{\sqrt{N}}e^{j\pi nu_{0}},n\in{\mathbb{Z}}_{N}$, where $u_{0}$
is the beam direction in the $u$-domain. Let $\Delta u\triangleq u-u_{0}$,
then the radiated power is
$y(u)=\frac{1}{N}\left\lvert\sum_{n=0}^{N-1}e^{-j\pi n\Delta
u}\right\rvert^{2}=\begin{cases}\left\lvert\frac{\sin\left(\frac{\pi
N}{2}\Delta u\right)}{\sqrt{N}\sin\left(\frac{\pi}{2}\Delta
u\right)}\right\rvert^{2},&\Delta u\neq 0\\\ N,&\Delta u=0\end{cases}.$ (2)
By (2), the maximum beamforming gain (the ratio of the maximum received power
to the average received power) $N$ can be achieved if $u=u_{0}$, and for a
sufficiently large $N$,
$\lim_{N\to\infty}\frac{y\left(u_{0}\pm\frac{1}{N}\right)}{y(u_{0})}=\left\lvert\lim_{N\to\infty}\frac{1}{N\sin\left(\frac{\pi}{2N}\right)}\right\rvert^{2}=\frac{4}{\pi^{2}}\approx
0.4,$ (3)
i.e., $u_{0}\pm\frac{1}{N}$ is close to the half-power points of the beam.
Hence the DFT codeword ${\bf d}(u_{0})$ is designed to cover
$[u_{0}-\frac{1}{N},u_{0}+\frac{1}{N}]$. Then a DFT codebook $\\{{\bf
d}(u_{0})\ |\ u_{0}\in\mathcal{T}\\}$ with
$\mathcal{T}=\\{\frac{2i+1}{N}-1|i\in{\mathbb{Z}}_{N}\\}$ is adopted to sweep
the beam over the whole space for broadcasting common messages, as illustrated
by Fig. 1 (a). The beam sweeping would consume too many time slots if $N$ is
large.
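As a quick numerical illustration of (1)-(3), the following sketch (assuming numpy; the helper name `radiated_power` is ours, not from the paper) evaluates the beampattern of a DFT codeword and checks the half-power ratio in (3):

```python
import numpy as np

def radiated_power(a, u):
    # y(u) = |sum_n a_n e^{-j pi n u}|^2, per (1)
    n = np.arange(len(a))
    return np.abs(a @ np.exp(-1j * np.pi * np.outer(n, u)))**2

N = 10
u0 = -1 + 1 / N                                          # a direction in the codebook T
d = np.exp(1j * np.pi * np.arange(N) * u0) / np.sqrt(N)  # DFT codeword d(u0)

print(radiated_power(d, np.array([u0])))                 # ~N = 10, the maximum gain in (2)
print(radiated_power(d, np.array([u0 + 1 / N])) / N)     # ~4/pi^2 ~ 0.405, cf. (3)
```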
### II-B Chirp-like Sequence-based Omnidirectional Beamforming
In contrast to beam sweeping, which requires many time slots,
omnidirectional beamforming aims at broadcasting messages using only one time
slot, which can be achieved by designing a sequence with a flat power
spectrum.
###### Definition 1
For a length-$N$ complex sequence ${\bf a}$ with $\lVert{\bf a}\rVert=1$, its
aperiodic autocorrelation is defined as
$R_{a}(\tau)\triangleq\sum_{n=0}^{N-1}a_{n}\overline{a}_{n-\tau},\quad
1-N\leq\tau\leq N-1,$ (4)
where $a_{n}=0$ if $n<0$ or $n\geq N$, and the overbar represents the complex
conjugation.
The power spectrum of ${\bf a}$ is
$y(u)=\sum_{\tau=1-N}^{N-1}R_{a}(\tau)e^{-j\pi u\tau},$ (5)
and the variance of the power spectrum is
$\displaystyle\frac{1}{2}\int_{-1}^{1}\left(y(u)-1\right)^{2}\,du=\frac{1}{2}\sum_{\tau_{1}\neq 0}\sum_{\tau_{2}\neq 0}R_{a}(\tau_{1})R_{a}^{*}(\tau_{2})\int_{-1}^{1}e^{-j\pi u(\tau_{1}-\tau_{2})}\,du=\sum_{\tau\neq 0}\left\lvert R_{a}(\tau)\right\rvert^{2}\triangleq ISL_{a}$ (6)
where $ISL_{a}$ is the integrated sidelobe level (ISL) of $R_{a}(\tau)$. Hence
for omnidirectional beamforming, the sequence's ISL should be small, as is the
case for some of the chirp-like sequences [1, 2, 3, 4, 5]. As a unified
construction of sequences with perfect periodic autocorrelation, the Mow
sequence family [5], in which some sequences have low ISL, is given below for
ease of reference.
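Definition 1 and the ISL in (6) translate directly into code; a sketch follows (numpy assumed; the Chu-type chirp [2] is used only as an illustrative low-ISL input):

```python
import numpy as np

def aperiodic_autocorr(a, tau):
    # R_a(tau) per (4), with a_n = 0 outside [0, N-1]
    N = len(a)
    return sum(a[n] * np.conj(a[n - tau]) for n in range(N) if 0 <= n - tau < N)

def isl(a):
    # integrated sidelobe level: sum over tau != 0 of |R_a(tau)|^2, cf. (6)
    N = len(a)
    return sum(abs(aperiodic_autocorr(a, t))**2 for t in range(1 - N, N) if t != 0)

N = 50
a = np.exp(1j * np.pi * np.arange(N)**2 / N) / np.sqrt(N)  # Chu-type chirp, unit norm
print(aperiodic_autocorr(a, 0).real)  # = ||a||^2 = 1
print(isl(a))                         # small, i.e., a nearly flat spectrum
```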
###### Definition 2
[5] The Mow sequence is a family of sequences of length $N=sm^{2}$ with $s$
being the square-free part of $N$, whose entries are
$\frac{1}{\sqrt{N}}\omega_{N}^{m\xi_{n}},n\in{\mathbb{Z}}_{N}$ with
$\xi_{km+l}=mc(s)\alpha(l)k^{2}+\beta(l)k+f_{l}(0),\
l\in{\mathbb{Z}}_{m},k\in{\mathbb{Z}}_{sm}$ (7)
where
$c(s)=\begin{cases}\frac{1}{2},&{\rm for}\ s\ {\rm even}\\\ 1,&{\rm for}\ s\
{\rm odd}\end{cases}\ ,$ (8)
$\alpha(l)\in\\{1,2,\cdots,s-1\\}$ is any function with
$\gcd(\alpha(l),s)=1,\forall l\in{\mathbb{Z}}_{m}$, and
$\beta(l)\in{\mathbb{Z}}_{sm}$ is any function such that $\beta(l)\
\text{mod}\ m$ is a permutation of ${\mathbb{Z}}_{m}$, and $f_{l}(0),\forall
l\in{\mathbb{Z}}_{m}$ are any rational numbers.
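The construction in Definition 2 can be sketched as follows (numpy assumed; the function name `mow_sequence` is ours). With the parameters of Fig. 1 (b) below, the periodic autocorrelation can be checked numerically to vanish at all nonzero lags:

```python
import numpy as np

def mow_sequence(s, m, alpha, beta, f0):
    # entries (1/sqrt(N)) omega_N^{m xi_n} with xi as in (7) and c(s) as in (8)
    N = s * m * m
    c = 0.5 if s % 2 == 0 else 1.0
    xi = np.empty(N)
    for k in range(s * m):
        for l in range(m):
            xi[k * m + l] = m * c * alpha(l) * k**2 + beta(l) * k + f0(l)
    return np.exp(1j * 2 * np.pi * m * xi / N) / np.sqrt(N)

# parameters of Fig. 1(b): N = 50, s = 2, m = 5
a = mow_sequence(2, 5, alpha=lambda l: 1, beta=lambda l: l - 25, f0=lambda l: -9.5 * l)
N = len(a)
R = [np.sum(a * np.conj(np.roll(a, t))) for t in range(N)]  # periodic autocorrelation
print(round(abs(R[0]), 8), max(abs(r) for r in R[1:]))      # 1.0 and ~0 (perfect)
```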
Figure 1: The power spectra of two kinds of beamforming. (a): the DFT
codebook-based beam sweeping; (b): the chirp-like sequence-based
omnidirectional beamforming.
The beam sweeping based on the DFT codebook and the omnidirectional
beamforming based on the Mow sequence are compared in Fig. 1. For the DFT
codebook, $N=10$; for the Mow sequence, $N=50$, $s=2$, $m=5$,
$c(s)=\frac{1}{2}$, $\alpha(l)=1$, $\beta(l)=l-25$, $f_{l}(0)=-9.5l$. The DFT
codebook in Fig. 1 (a) achieves the maximum beamforming gain but requires $10$
rounds of beam sweeping, while the Mow sequence in Fig. 1 (b) can broadcast
messages in one time slot but has no beamforming gain.
## III Generalized Step-chirp Sequence
To achieve flexible tradeoffs between the beamforming gain and the beam
sweeping time, in Section III-A, we construct a family of polyphase sequences
with tunable bandwidth, termed the generalized step-chirp (GSC) sequence;
in Section III-B, we discuss the relationships between the GSC sequence, the
DFT codebook, the generalized chirp (GC) sequence inferred from [15], and the
Mow sequence [5].
### III-A Construction of Generalized Step-chirp Sequence
Consider a step-chirp signal as follows:
$c(t)=e^{j2\pi\int_{0}^{t}f(\tau)d\tau},\quad 0\leq t\leq T,$ (9)
where $f(t)$ is a step approximation of linear frequency modulation (LFM):
$f(t)=a(\lfloor t\rfloor+b),\ 0\leq t\leq T$ (10)
for any $a>0$ and any $b\in{\mathbb{R}}$. An LFM $f^{\prime}(t)=t$ and its
step approximation $f(t)$ with $a=1,b=\frac{1}{2}$, and $T=10$ are illustrated
by Fig. 2. The bandwidth of the step-chirp signal is approximately $aT$.
Besides, we require the Nyquist sampling number $aT^{2}\geq 1$.
Figure 2: Step approximation of LFM, $a=1,b=\frac{1}{2}$, and $T=10$.
Now sample $c(t)$ in (9) at rate $m\triangleq aT/\gamma$ with $0<\gamma\leq
1$, where $m$ is assumed to be an integer by choosing $a$ properly. We then
obtain $N$ samples
$c\left(t\right)|_{t=\frac{n}{m}}=e^{j\phi_{n}},\quad n\in{\mathbb{Z}}_{N}$
(11)
with
$N=mT=aT^{2}/\gamma=m^{2}\gamma/a.$ (12)
Because $aT^{2}\geq 1$, we have $\gamma=\frac{aT^{2}}{N}\geq\frac{1}{N}$.
Factor $n\in{\mathbb{Z}}_{N}$ into
$n=km+l,\ k\triangleq\left\lfloor n/m\right\rfloor,\ l\in{\mathbb{Z}}_{m}.$
(13)
Direct calculations show that
$\phi_{n}=2\pi\frac{ak\left(k-1+2b\right)}{2}+2\pi\frac{a\left(k+b\right)l}{m}.$
(14)
Note from (12) that $a=\frac{m^{2}\gamma}{N}$; thus,
$\phi_{n}=\frac{2\pi}{N}m\gamma\left(\frac{k(k-1)}{2}m+kl+bn\right).$ (15)
Besides, the Fourier transform of $c(t)$ can be derived to be a weighted
summation of $T$ sinc functions:
$C(f)=\sum_{i=0}^{T-1}e^{j\pi[ai^{2}+2(ab-f)i+ab-f]}{\rm sinc}[f-a(i+b)].$
(16)
At $f_{0}\triangleq a(b-\frac{1}{2})$, the value of the left-most sinc
function ($i=0$) is $\text{sinc}(-\frac{a}{2})$; at $f_{1}\triangleq
a(b-\frac{1}{2}+T)$, the value of the right-most sinc function ($i=T-1$) is
$\text{sinc}(\frac{a}{2})$. Hence the interval
$(-\infty,f_{0})\cup(f_{1},+\infty)$ can be regarded as the stopband since
most of the sinc functions have attenuated to a low level. Because the
bandwidth of the step-chirp signal is approximately $aT$, the interval
$[f_{0},f_{1}]$ can be regarded as the passband of $C(f)$. Note that the
analog bandwidth $aT$ is scaled to be the digital bandwidth $2\pi\gamma$ by
over-sampling, hence the passband of the sample sequence is
$\left[\omega_{0},\omega_{0}+2\pi\gamma\right]$ where
$\omega_{0}=\frac{2\pi\gamma}{aT}f_{0}=\frac{2\pi}{N}m\gamma\left(b-\frac{1}{2}\right).$
(17)
The above arguments establish the following theorem.
###### Theorem 1
The GSC sequence is a family of polyphase sequences with entries
$\frac{1}{\sqrt{N}}\omega_{N}^{m\zeta_{n}},n\in{\mathbb{Z}}_{N}$, where
$\displaystyle\zeta_{n}=$
$\displaystyle\gamma\left(\frac{k(k-1)}{2}m+kl+bn\right),$ (18) $\displaystyle
k=\left\lfloor n/m\right\rfloor,\ l=n-km,$
with parameter set $\\{N,\gamma,m,b\ |\
N\in{\mathbb{Z}}^{+},m|N,\frac{1}{N}\leq\gamma\leq 1,b\in{\mathbb{R}}\\}$. The
passband of the power spectrum of the GSC sequence is
$\left[\omega_{0},\omega_{0}+2\pi\gamma\right]$ (19)
where $\omega_{0}=\frac{2\pi}{N}m\gamma\left(b-\frac{1}{2}\right)$. For beam
sweeping, the beam is pointed at $u_{0}$ to cover
$[u_{0}-\gamma,u_{0}+\gamma)$, where
$u_{0}=\frac{2}{N}m\gamma\left(b-\frac{1}{2}\right)+\gamma\ \text{mod}\
[-1,1).$ (20)
The bandwidth of the GSC sequence can be flexibly adjusted by tuning the
parameter $\gamma$, thus achieving flexible tradeoffs between the
beamforming gain and the beam sweeping time, as shown in the simulations of Section IV.
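A minimal sketch of this construction (numpy assumed; `gsc_sequence` is our naming, and the choice $b=1$ is arbitrary here). The pair $(\gamma,m)=(\frac{1}{5},24)$ matches one of the Fig. 4 settings:

```python
import numpy as np

def gsc_sequence(N, m, gamma, b):
    # entries (1/sqrt(N)) omega_N^{m zeta_n} with zeta_n as in (18)
    assert N % m == 0 and 1 / N <= gamma <= 1
    n = np.arange(N)
    k, l = n // m, n % m
    zeta = gamma * (k * (k - 1) / 2 * m + k * l + b * n)
    return np.exp(1j * 2 * np.pi * m * zeta / N) / np.sqrt(N)

N, m, gamma, b = 120, 24, 1 / 5, 1
g = gsc_sequence(N, m, gamma, b)
u = np.linspace(-1, 1, 1000, endpoint=False)
y = np.abs(np.exp(-1j * np.pi * np.outer(u, np.arange(N))) @ g)**2  # beampattern, cf. (1)
u0 = ((2 / N) * m * gamma * (b - 0.5) + gamma + 1) % 2 - 1          # beam direction, per (20)
print(u0, y.max() / y.mean())   # gain roughly 1/gamma = 5 around u0
```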
### III-B Relationships Between the GSC sequence and Other Sequences
The relationships between the GSC sequence, the DFT codebook, the GC sequence
and the Mow sequence family are illustrated by Fig. 3, as explained below.
Figure 3: The relationships between the Mow sequence, the GC sequence, the DFT
codebook and the GSC sequence.
#### III-B1 GSC Sequence and DFT Codebook
In Theorem 1, let $m=N$ and $\gamma=\frac{1}{N}$. Then we have $k=0$ and $n=l$
for all $n\in{\mathbb{Z}}_{N}$. This degenerate GSC sequence has entries
$\frac{1}{\sqrt{N}}e^{j\frac{2\pi}{N}m\zeta_{n}}=\frac{1}{\sqrt{N}}e^{j\frac{2\pi}{N}bn},\
n\in{\mathbb{Z}}_{N},$ (21)
and the passband is
$\left[\frac{2\pi}{N}\left(b-\frac{1}{2}\right),\
\frac{2\pi}{N}\left(b+\frac{1}{2}\right)\right].$ (22)
According to Section II-A, this is exactly a DFT codeword pointing at
$u_{0}=\frac{2b}{N}\ \text{mod}\ [-1,1)$. Hence the GSC sequence encompasses
the DFT codebook and thus may be backward-compatible with the current
industrial standard.
#### III-B2 GSC Sequence and GC Sequence
When $m=1$, we have from (18) that
$\frac{1}{\sqrt{N}}\omega_{N}^{m\zeta_{n}}=\frac{1}{\sqrt{N}}\omega_{N}^{\gamma\frac{n(n+2b-1)}{2}}$,
which is the GC sequence inferred from over-sampling a chirp signal [15].
Therefore, the GSC sequence is also a generalization of the GC sequence. The
phase resolution of a sequence with phases in $\\{\frac{2\pi
p}{P}|p\in{\mathbb{Z}}_{P}\\}$ is $\frac{2\pi}{P}$. Note that the parameter
$m$ can be tuned for a coarser phase resolution: suppose, e.g., that
$b\in{\mathbb{Z}}$ and $\gamma$ is a rational number of the form $\frac{p}{q}$
with $p,q$ coprime; then the phase resolutions are
$R_{gsc}=\frac{2\pi}{Nq/\text{gcd}(Nq,mp)},\
R_{gc}=\frac{2\pi}{Nq/\text{gcd}(Nq,p)},$ (23)
from which we have $R_{gc}\leq R_{gsc}\leq mR_{gc}$, e.g., if $p=1$, then
$R_{gsc}=mR_{gc}$.
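For instance, a two-line gcd computation (our sketch) evaluates (23); with $N=50$, $\gamma=\frac{1}{2}$, and $m=10$ (the setting of Fig. 5 below), it reproduces $R_{gsc}=10R_{gc}$:

```python
from math import gcd, pi

def phase_resolutions(N, m, p, q):
    # R_gsc and R_gc per (23), for gamma = p/q in lowest terms and integer b
    return 2 * pi * gcd(N * q, m * p) / (N * q), 2 * pi * gcd(N * q, p) / (N * q)

r_gsc, r_gc = phase_resolutions(50, 10, 1, 2)   # N = 50, gamma = 1/2, m = 10
print(r_gsc / r_gc)                             # 10.0, i.e., R_gsc = m * R_gc since p = 1
```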
#### III-B3 GSC Sequence and Mow Sequence
Set $\gamma=1$ (i.e., Nyquist sampling), and we obtain another kind of
degenerate GSC sequence with entries
$\frac{1}{\sqrt{N}}\omega_{N}^{m\zeta_{n}},n\in{\mathbb{Z}}_{N}$, where
$\displaystyle\zeta_{n}=$ $\displaystyle\ \frac{k(k-1)}{2}m+kl+bn,$ (24)
$\displaystyle k=\left\lfloor n/m\right\rfloor,\ l=n-km,$
with parameter set
$\\{N,m,b\,|\,N\in{\mathbb{Z}}^{+},m|N,b\in{\mathbb{R}}\\}$, which is related
to the Mow sequence as shown below.
###### Proposition 1
With the following two constraints on the parameters $m$ and $b$ in (24),
respectively, the degenerate GSC sequence in (24) is a special case of the
Mow sequence in (7):
1. 1.
$m$ is the square part of $N$, i.e., $N=sm^{2}$.
2. 2.
$2b$ is odd if $s$ is even, and $b$ is an integer if $s$ is odd.
###### Proof:
First note that with the first constraint, the sequence length in (24) is
$sm^{2}$, which is the same as that of the Mow sequence.
Second, if $s$ is even and $2b$ is odd, then (24) is a special case of (7)
with $c(s)=\frac{1}{2}$, $\alpha(l)=1$, $\beta(l)=\frac{2b-1}{2}m+l$,
$f_{l}(0)=bl$ and $\frac{2b-1}{2}$ is an integer such that $\beta(l)\equiv l\
\text{mod}\ m$ is a permutation of ${\mathbb{Z}}_{m}$.
If $s$ is odd and $b$ is an integer, then denote $s=2d-1$ for some
$d\in{\mathbb{Z}}^{+}$. Since $k(k+2b-1)$ is an even number, it holds that
$\displaystyle dk(k+2b-1)$ $\displaystyle=\frac{k(k+2b-1)}{2}(s+1)$ (25)
$\displaystyle\equiv\frac{k(k+2b-1)}{2}\ \text{mod}\ s.$
Rewrite (24) as
$\zeta_{km+l}=\frac{k(k+2b-1)}{2}m+kl+bl.$ (26)
It follows from (26) and (25) that
$\zeta_{km+l}\equiv dk(k+2b-1)m+kl+bl\ \text{mod}\ sm,$ (27)
which is a special case of (7) with $c(s)=1$, $\alpha(l)=d$ [one may verify
that $\gcd(d,2d-1)=1$], $\beta(l)=(2b-1)dm+l$, $f_{l}(0)=bl$. ∎
Indeed, one may relax the constraints in Proposition 1 to improve the phase
resolution of the degenerate GSC sequence in (24). The phase resolution of
the Mow sequence in (7) with $f_{l}(0)$ being an integer is
$R_{mow}=\begin{cases}\frac{\pi}{N/m},&\ s\ {\rm is}\ {\rm even},m\ {\rm is}\
{\rm odd}\\\ \frac{2\pi}{N/m},&\ {\rm otherwise}\end{cases},$ (28)
and the phase resolution of the GSC sequence with
$\gamma=1,b\in{\mathbb{Z}}^{+}$ is $R_{gsc}=\frac{2\pi}{N/m}$. If $m$ is
larger than the square part of $N$, then the phase resolution of the GSC
sequence is coarser than that of the Mow sequence, as shown in the simulations.
## IV Simulations
This section presents simulation examples to verify the capability of the GSC
sequence in making flexible tradeoffs between the beamforming gain and the
beam sweeping time, and its advantages over the GC sequence and the Mow
sequence in terms of the phase resolution and the spectrum.
### IV-A Tradeoffs Between the Beamforming Gain and the Beam Sweeping Time
To show the flexibility of the GSC sequence for beam sweeping, we simulate and
show in Fig. 4 the beampatterns of the GSC sequences of length $N=120$ with
$(\gamma,m)\in\\{(\frac{1}{2},15),(\frac{1}{5},24),(\frac{1}{7},30),(\frac{1}{13},40)\\}$.
The parameter $b$ is chosen so that the beam direction $u_{0}$ in (20) runs
through $\\{(2i-1)\gamma-1|i=1,2,\cdots,\frac{1}{\gamma}\\}$ for the
contiguous coverage of $[-1,1)$. Fig. 4 (a) illustrates $2$ rounds of beam
sweeping with a 2x beamforming gain, while Fig. 4 (d) represents $13$ rounds of
beam sweeping with a 13x beamforming gain. In summary, by adjusting $\gamma$ and
$b$ to control the bandwidth and the beam direction, flexible tradeoffs
between the beamforming gain and the beam sweeping time can be achieved for
efficient beam sweeping. We emphasize that the y-axis is on a linear
scale; thus, the power fluctuation in the passband is less than $3$ dB.
Figure 4: Flexible tradeoffs between the beamforming gain and the beam
sweeping time. (a): $\gamma=\frac{1}{2}$; (b): $\gamma=\frac{1}{5}$; (c):
$\gamma=\frac{1}{7}$; (d): $\gamma=\frac{1}{13}$.
### IV-B Phase Resolution and Spectrum
For a GSC sequence ${\bf g}=[g_{0},g_{1},\cdots,g_{N-1}]$, the normalized root
mean square error (NRMSE) of the passband is defined as
$\sqrt{\frac{1}{\left\lvert{\cal
I}_{p}\right\rvert}\sum_{i\in{\mathcal{I}}_{p}}\left(\gamma\left\lvert\sum_{n=0}^{N-1}g_{n}e^{-j\frac{2\pi}{N^{\prime}}in}\right\rvert^{2}-1\right)^{2}}$
(29)
where $N^{\prime}$ is the DFT length and
${\mathcal{I}}_{p}\subset{\mathbb{Z}}_{N^{\prime}}$ is the set of passband
indices; here we set $N^{\prime}=4N$. The stopband leakage ratio is
defined as
$\frac{1}{N^{\prime}}\sum_{i\in{\mathcal{I}}_{s}}\left\lvert\sum_{n=0}^{N-1}g_{n}e^{-j\frac{2\pi}{N^{\prime}}in}\right\rvert^{2}$
(30)
where ${\mathcal{I}}_{s}={\mathbb{Z}}_{N^{\prime}}\setminus{\mathcal{I}}_{p}$
is the set of the stopband indices.
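Both metrics are straightforward to evaluate; a sketch follows (numpy assumed; `passband_metrics` is our naming, and the passband bins are derived from (17) and (19)). For a self-contained test we use the GC sequence, i.e., the $m=1$ case of (18), with the Fig. 5 parameters:

```python
import numpy as np

def passband_metrics(g, gamma, w0):
    # passband NRMSE per (29) and stopband leakage ratio per (30), with N' = 4N
    N, Np = len(g), 4 * len(g)
    spec = np.abs(np.fft.fft(g, Np))**2                 # the squared sums in (29)-(30)
    w = 2 * np.pi * np.arange(Np) / Np
    Ip = np.where((w - w0) % (2 * np.pi) <= 2 * np.pi * gamma)[0]  # passband bins, cf. (19)
    Is = np.setdiff1d(np.arange(Np), Ip)
    nrmse = np.sqrt(np.mean((gamma * spec[Ip] - 1.0)**2))
    leakage = spec[Is].sum() / Np
    return nrmse, leakage

N, gamma, b, m = 50, 0.5, 1, 1                          # Fig. 5 setting, GC case (m = 1)
n = np.arange(N)
g = np.exp(1j * 2 * np.pi * gamma * n * (n + 2 * b - 1) / (2 * N)) / np.sqrt(N)
w0 = 2 * np.pi * m * gamma * (b - 0.5) / N              # per (17)
print(passband_metrics(g, gamma, w0))
```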
Compared with the GC sequence and the Mow sequence, the GSC sequence with a
proper parameter $m$ may have a coarser phase resolution and a comparable or
even flatter spectrum.
#### IV-B1 GSC Sequence versus GC sequence
Fig. 5 shows the impact of the parameter $m$ on the spectrum and the phase
resolution of a GSC sequence, where $N=50$, $\gamma=\frac{1}{2}$, $b=1$.
Compared with the GC sequence, i.e., $m=1$, the GSC sequence with $m=10$ has
smaller passband NRMSE and stopband leakage ratio as shown in Fig. 5 (a), and
the phase resolution of the proposed GSC sequence is $10$ times coarser as
shown in Fig. 5 (b) and Fig. 5 (c).
Figure 5: The improvement of phase resolution and spectrum of the GSC sequence
against the GC sequence. (a): the passband NRMSE and the stopband leakage
ratio of a length-$50$ GSC sequence for different $m$; (b): the phases of the
GC sequence corresponding to $m=1$ in (a); (c): the phases of the GSC sequence
corresponding to $m=10$ in (a).
#### IV-B2 GSC sequence versus Mow sequence
Fig. 6 shows the ISL of a GSC sequence of length $N=462$ for different
parameters $m$, with $\gamma=1$ and $b=\frac{1}{2}$. Note that the square part
of $N=462$ is $m=1$, thus the point with $m=1$ in Fig. 6 corresponds to a Mow
sequence, which can be verified by simulation to have exactly the minimum ISL
among all the $55440$ Mow sequences of length $462$ with a phase resolution
$\frac{\pi}{462}$ [5, Theorem 5]. Remarkably, the ISL for $m=1$ is $0.0297$
and the ISL for $m=21$ is $0.0307$: the phase resolution is coarsened by a
factor of $21$ at the cost of a negligible increase in ISL, i.e., a
comparably flat spectrum.
Figure 6: The ISL of a length-$462$ GSC sequence for different $m$.
## V Conclusions
In this paper, we construct the generalized step-chirp (GSC) sequence, which
can achieve flexible tradeoffs between the beamforming gain and the beam
sweeping time for the common message broadcasting in massive MIMO systems. The
GSC sequence has a coarser phase resolution than the generalized chirp (GC)
sequence, which facilitates its implementation with a low-cost phase shifter
network (PSN). Besides, the GSC sequence may have a coarser phase resolution
than the Mow sequence at the cost of a negligible increase in the integrated
sidelobe level (ISL).
## References
* [1] R. Frank, “Polyphase codes with good nonperiodic correlation properties,” _IEEE Transactions on Information Theory_ , vol. 9, no. 1, pp. 43–45, 1963.
* [2] D. Chu, “Polyphase codes with good periodic correlation properties (corresp.),” _IEEE Transactions on Information Theory_ , vol. 18, no. 4, pp. 531–532, 1972.
* [3] A. Milewski, “Periodic sequences with optimal properties for channel estimation and fast start-up equalization,” _IBM Journal of Research and Development_ , vol. 27, no. 5, pp. 426–431, 1983.
* [4] B. M. Popovic, “Generalized chirp-like polyphase sequences with optimum correlation properties,” _IEEE Transactions on Information Theory_ , vol. 38, no. 4, pp. 1406–1409, 1992.
* [5] W. H. Mow, “A new unified construction of perfect root-of-unity sequences,” in _Proceedings of ISSSTA’95 International Symposium on Spread Spectrum Techniques and Applications_ , vol. 3. IEEE, 1996, pp. 955–959.
* [6] K.-U. Schmidt, “On a problem due to Littlewood concerning polynomials with unimodular coefficients,” _Journal of Fourier Analysis and Applications_ , vol. 19, no. 3, pp. 457–466, 2013.
* [7] X. Meng, X. Xia, and X. Gao, “Omnidirectional space-time block coding for common information broadcasting in massive MIMO systems,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 3, pp. 1407–1417, March 2018.
* [8] 3GPP, “Study on new radio access technology physical layer aspects,” 3GPP, Technical Specification (TS) TS38.802 V14.2.0, Sept. 2017.
* [9] Intel, “Codebook with beam broadening,” 3GPP, Tech. Rep. R1-1611929, Nov. 2016.
* [10] W. Rowe, P. Stoica, and J. Li, “Spectrally constrained waveform design [SP Tips&Tricks],” _IEEE Signal Processing Magazine_ , vol. 31, no. 3, pp. 157–162, 2014.
* [11] V. Sergeev, A. Davydov, G. Morozov, O. Orhan, and W. Lee, “Enhanced precoding design with adaptive beam width for 5G new radio systems,” in _2017 IEEE 86th Vehicular Technology Conference (VTC-Fall)_. IEEE, 2017, pp. 1–5.
* [12] W. Ma, L. Zhu, and R. Zhang, “Passive beamforming for 3-D coverage in IRS-assisted communications,” _IEEE Wireless Communications Letters_ , vol. 11, no. 8, pp. 1763–1767, 2022.
* [13] Z. Xiao, T. He, P. Xia, and X.-G. Xia, “Hierarchical codebook design for beamforming training in millimeter-wave communication,” _IEEE Transactions on Wireless Communications_ , vol. 15, no. 5, pp. 3380–3392, 2016.
* [14] Z. Xiao, H. Dong, L. Bai, P. Xia, and X.-G. Xia, “Enhanced channel estimation and codebook design for millimeter-wave communication,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 10, pp. 9393–9405, 2018.
* [15] C. Fonteneau, M. Crussière, and B. Jahan, “A systematic beam broadening method for large phased arrays,” in _2021 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit)_. IEEE, 2021, pp. 7–12.
* [16] C. Du, F. Li, and Y. Jiang, “Hierarchical beamforming for broadcast channels,” _IEEE Communications Letters_ , 2023.
* [17] S. Hu, Z. Liu, Y. L. Guan, W. Xiong, G. Bi, and S. Li, “Sequence design for cognitive CDMA communications under arbitrary spectrum hole constraint,” _IEEE Journal on Selected Areas in Communications_ , vol. 32, no. 11, pp. 1974–1986, 2014.
* [18] Z. Liu, Y. L. Guan, U. Parampalli, and S. Hu, “Spectrally-constrained sequences: Bounds and constructions,” _IEEE Transactions on Information Theory_ , vol. 64, no. 4, pp. 2571–2582, 2018.
* [19] L. Tian, C. Xu, and Y. Li, “A family of single-channel spectrally-null-constrained sequences with low correlation,” _IEEE Signal Processing Letters_ , vol. 27, pp. 1645–1649, 2020.
* [20] Z. Ye, Z. Zhou, Z. Liu, X. Tang, and P. Fan, “New spectrally constrained sequence sets with optimal periodic cross-correlation,” _IEEE Transactions on Information Theory_ , vol. 69, no. 1, pp. 610–625, 2022.
# On the Finiteness Problem for classes of modular lattices
Christian Herrmann Technische Universität Darmstadt FB4
Schloßgartenstr. 7
64289 Darmstadt
Germany. Email: <EMAIL_ADDRESS>
Dedicated to the memory of Rudolf Wille
###### Abstract.
The Finiteness Problem is shown to be unsolvable for any sufficiently large
class of modular lattices.
###### Key words and phrases:
Finiteness problem, modular lattice
###### 1991 Mathematics Subject Classification:
06C05, 03D35
Given a class $\mathcal{A}$ of algebraic structures, the _Finiteness Problem_
is to decide, for any given finite presentation (that is, a list of generator
symbols and relations), whether or not there is a finite bound on the size of
members of the class which 'admit the presentation', that is, have a system of
generators satisfying the given relations; if $\mathcal{A}$ is a
quasi-variety, this means finiteness of the free $\mathcal{A}$-algebra given
by the presentation. Due to Slavik [6], the Finiteness Problem is
algorithmically solvable for the class of all lattices, and due to Wille [7]
for any class of modular lattices containing the subspace lattice of an
infinite projective plane, if one allows only order relations between the
generators. The present note relies on the unsolvability of the Triviality
Problem for modular lattices [4], which in turn relies on the results of
Adyan [1, 2] and Rabin [5] for groups. For a vector space $V$, let
$\operatorname{L}(V)$ denote the lattice of subspaces.
###### Theorem 1.
Let $\mathcal{A}$ be a class of modular lattices such that
$\operatorname{L}(V)\in\mathcal{A}$ for some $V$ of infinite dimension. Then
the Finiteness Problem for $\mathcal{A}$ is algorithmically unsolvable.
The following restates the relevant part of Lemma 10 in [4].
###### Lemma 2.
There is a recursive set $\Sigma$ of conjunctions
$\varphi(\bar{x},x_{\bot},x_{\top})$ of lattice equations such that
$\forall\bar{x}\forall x_{\bot}\forall
x_{\top}.\;\varphi(\bar{x},x_{\bot},x_{\top})\Rightarrow\bigwedge_{i}x_{\bot}\leq
x_{i}\leq x_{\top}$ is valid in all modular lattices and such that the
following hold where $\varphi^{\exists}$ denotes the sentence
$\exists\bar{x}\exists x_{\bot}\exists
x_{\top}.\;\varphi(\bar{x},x_{\bot},x_{\top})\wedge x_{\bot}\neq x_{\top}$.
1. (i)
If, for $\varphi\in\Sigma$, $\varphi^{\exists}$ is valid in some modular
lattice, then it is so within $\operatorname{L}(V)$ for any $V$ of infinite
dimension. Moreover, one can choose $x_{\bot}=0$ and $x_{\top}=V$.
2. (ii)
The set of all $\varphi\in\Sigma$ with $\varphi^{\exists}$ valid in some
modular lattice is not recursive.
Consider the conjunction $\pi(\bar{y},y_{\bot},y_{\top})$ of the following
lattice equations
$y_{i}\cdot y_{j}=y_{\bot}\;(1\leq i<j\leq 4),\quad
y_{i}+y_{j}=y_{\top}\;(1\leq i<j\leq 4,\,j\neq 2)$
We use $x,y,\ldots$ both as variables and generator symbols and also to denote
their values under a particular assignment. In [3],
$\operatorname{FM}(J_{4}^{1})$ was defined as the modular lattice freely
generated under the presentation $\pi(\bar{y},y_{\bot},y_{\top})$
(equivalently, by the partial lattice $J_{4}^{1}$ arising from the $6$-element
height $2$ lattice $M_{4}$ with atoms $y_{1},y_{2},y_{3},y_{4}$ keeping all
joins and meets except the join of $\\{y_{1},y_{2}\\}$). The following was
shown (to prove (i), consider $V$ to be the direct sum of infinitely many subspaces
of dimension $\aleph_{0}$).
###### Lemma 3.
Up to isomorphism, $M_{4}$ and the singleton are the only proper homomorphic
images of $\operatorname{FM}(J_{4}^{1})$. Moreover,
$\operatorname{FM}(J_{4}^{1})$ has the following properties:
1. (i)
$\operatorname{FM}(J_{4}^{1})$ embeds into $\operatorname{L}(V)$ for any $V$
of infinite dimension. Moreover, the embedding can be chosen such that any
prime quotient has infinite index.
2. (ii)
$\operatorname{FM}(J_{4}^{1})$ has infinite height.
3. (iii)
$\operatorname{FM}(J_{4}^{1})$ has prime quotient $y_{\top}/(y_{1}+y_{2})$,
generating the unique proper congruence relation $\theta$.
4. (iv)
$\operatorname{FM}(J_{4}^{1})/\theta$ is isomorphic to $M_{4}$.
###### Proof.
of Theorem 1. Given $\varphi\in\Sigma$ from Lemma 2, consider the presentation
$\varphi^{\\#}$ with generators
$\bar{x},x_{\bot},x_{\top},\bar{y},y_{\bot},y_{\top}$ and the relations from
$\varphi$, $\pi$, and in addition $x_{\top}=y_{\top}$ and
$x_{\bot}=y_{1}+y_{2}$. Considering a modular lattice $L$ with generators and
relations according to $\varphi^{\\#}$, the following are equivalent in view
of Lemma 3.
1. (i)
$x_{\bot}=x_{\top}$.
2. (ii)
$L$ is the singleton or $M_{4}$.
3. (iii)
$L$ is finite.
4. (iv)
$L$ is of finite height.
Clearly, if $x_{\bot}=x_{\top}$ in every modular lattice admitting
presentation $\varphi$ then the same applies to the presentation
$\varphi^{\\#}$. On the other hand, assume that $\varphi^{\exists}$ is valid
in some modular lattice. Given any vector space $V$, embed
$\operatorname{FM}(J_{4}^{1})$ into $\operatorname{L}(V)$ as in (i) of Lemma 3
and denote $U=y_{1}+y_{2}$. By (i) of Lemma 2 one can evaluate $\bar{x}$ in
$\operatorname{L}(V/U)$ such that $\varphi(\bar{x},x_{\bot},x_{\top})$ holds
where $x_{\bot}=U$ and $x_{\top}=V$. This yields generators of a
sublattice $L$ of $\operatorname{L}(V)$ satisfying the relations of
$\varphi^{\\#}$ and such that $x_{\bot}\neq x_{\top}$. Thus, to decide whether
$x_{\bot}=x_{\top}$ for all modular lattices admitting presentation $\varphi$
reduces to deciding whether (i)–(iv) apply to all $L\in\mathcal{A}$ admitting
presentation $\varphi^{\\#}$. Undecidability of the latter problems follows
now from (ii) of Lemma 2. ∎
###### Corollary 4.
For no quasi-variety $\mathcal{A}$ as in Theorem 1 is there an algorithm to
decide, given a finite presentation, whether or not the lattice freely
generated in $\mathcal{A}$ under that presentation is of finite height.
## References
* [1] Adyan, S.I.: Algorithmic unsolvability of problems of recognition of certain properties of groups. Dokl. Akad. Nauk SSSR (N.S.) 103, 533–535 (1955) (Russian)
* [2] Adyan, S.I.: Unsolvability of some algorithmic problems in the theory of groups. Trudy Moskov. Mat. Obsc. 6, 231–298 (1957) (Russian)
* [3] Day, A., Herrmann, C., Wille, R.: On modular lattices with four generators. Algebra Universalis 2, 317–323 (1972)
* [4] Herrmann, C., Tsukamoto, Y., Ziegler, M.: On the consistency problem for modular lattices and related structures. Int. J. Algebra Comput. 26, 1573–1595 (2016)
* [5] Rabin, M.O.: Recursive unsolvability of group theoretic problems. Ann. of Math. 67, 172–194 (1958)
* [6] Slavik, V.: Finiteness of finitely presented lattices. In: Lattice theory and its applications (Darmstadt, 1991). Res. Exp. Math., vol. 23, pp. 219–227. Heldermann, Lemgo (1995)
* [7] Wille, R.: Über modulare Verbände, die von einer endlichen halbgeordneten Menge frei erzeugt werden. Math. Z. 131, 241–249 (1973) (German)
# $k$-positivity of dual canonical basis elements from 1324- and
2143-avoiding Kazhdan-Lusztig immanants
Sunita Chepuri∗ and Melissa Sherman-Bennett†
###### Abstract.
In this note, we show that certain dual canonical basis elements of
${\mathbb{C}}[SL_{m}]$ are positive when evaluated on _$k$ -positive
matrices_, matrices whose minors of size $k\times k$ and smaller are positive.
Skandera showed that all dual canonical basis elements of
${\mathbb{C}}[SL_{m}]$ can be written in terms of _Kazhdan-Lusztig immanants_
, which were introduced by Rhoades and Skandera. We focus on the basis
elements which are expressed in terms of Kazhdan-Lusztig immanants indexed by
1324- and 2143-avoiding permutations. This extends previous work of the
authors on Kazhdan-Lusztig immanants and uses similar tools, namely Lewis
Carroll’s identity (also known as the Desnanot-Jacobi identity).
∗University of Michigan, 2074 East Hall, 530 Church Street, Ann Arbor, MI
48109. Email: <EMAIL_ADDRESS>
†University of Michigan, 2074 East Hall, 530 Church Street, Ann Arbor, MI
48109. Email: <EMAIL_ADDRESS>
## 1. Introduction
Given a function $f:S_{n}\to{\mathbb{C}}$, the _immanant_ associated to $f$,
$\operatorname{Imm}_{f}X:\text{Mat}_{n\times n}({\mathbb{C}})\to{\mathbb{C}}$,
is the function
$\operatorname{Imm}_{f}X:=\sum_{w\in S_{n}}f(w)~{}x_{1,w(1)}\cdots
x_{n,w(n)},$ (1.1)
where the $x_{i,j}$ are indeterminates. We evaluate $\operatorname{Imm}_{f}X$
on a matrix $M=(m_{i,j})$ by specializing $x_{i,j}$ to $m_{i,j}$ for all
$i,j$.
Immanants are a generalization of the determinant, where
$f(w)=(-1)^{\ell(w)}$, and the permanent, where $f(w)=1$. Positivity
properties of immanants have been studied since the early 1990s [11, 12, 20,
13]. One of the main results in this area is that when $f$ is an irreducible
character of $S_{n}$, then $\operatorname{Imm}_{f}(X)$ is nonnegative on
_totally nonnegative matrices_ , that is, matrices with all nonnegative minors
[19]. In this note, we will investigate positivity properties of functions
closely related to _Kazhdan–Lusztig immanants_ , introduced by Rhoades and
Skandera [16].
###### Definition 1.1.
Let $v\in S_{n}$. The _Kazhdan-Lusztig immanant_
$\operatorname{Imm}_{v}X:\text{Mat}_{n\times n}({\mathbb{C}})\to{\mathbb{C}}$
is given by
$\operatorname{Imm}_{v}X:=\sum_{w\in
S_{n}}(-1)^{\ell(w)-\ell(v)}P_{w_{0}w,w_{0}v}(1)~{}x_{1,w_{1}}\cdots
x_{n,w_{n}}$ (1.2)
where $P_{x,y}(q)$ is the Kazhdan-Lusztig polynomial associated to $x,y\in
S_{n}$, $w_{0}\in S_{n}$ is the longest permutation, and we write permutations
$w=w_{1}w_{2}\dots w_{n}$ in one-line notation. (For the definition of
$P_{x,y}(q)$ and their basic properties, see e.g. [1].)
Our interest in Kazhdan–Lusztig immanants stems from their connection to the
dual canonical basis of ${\mathbb{C}}[SL_{m}]$. Using work of Du [9], Skandera
[18] showed that the dual canonical basis elements of ${\mathbb{C}}[SL_{m}]$
are exactly Kazhdan–Lusztig immanants evaluated on matrices of indeterminates
with repeated rows and columns.
Let $X=(x_{ij})$ be the $m\times m$ matrix of variables $x_{ij}$ and let
$\left(\!\!\binom{[m]}{n}\!\!\right)$
denote the set of $n$-element multisets of $[m]:=\\{1,\dots,m\\}$. For
$R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$
with $R=\\{r_{1}\leq\cdots\leq r_{n}\\}$ and $C=\\{c_{1}\leq\cdots\leq
c_{n}\\}$, we write $X(R,C)$ to denote the matrix
$(x_{r_{i},c_{j}})_{i,j=1}^{n}$ (see Definition 2.8).
###### Proposition 1.2 ([18, Theorem 2.1]).
The dual canonical basis of ${\mathbb{C}}[SL_{m}]$ consists of the nonzero
elements of the following set:
$\left\\{\operatorname{Imm}_{v}X(R,C):v\in S_{n}\text{ for some }n\in\mathbb{N}\text{ and }R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)\right\\}.$
The positivity properties of dual canonical basis elements have been of
interest essentially since their definition, and are closely related to the
study of total positivity. In 1994, Lusztig [14] defined the totally positive
part $G_{>0}$ of any reductive group $G$. He also showed that all elements of
the dual canonical basis of $\mathcal{O}(G)$ are positive on $G_{>0}$. Fomin
and Zelevinsky [10] later proved that for semisimple groups, $G_{>0}$ is
precisely the subset of $G$ where all _generalized minors_ are positive.
Generalized minors are dual canonical basis elements corresponding to the
fundamental weights of $G$ and their images under Weyl group action.
Here, we study signs of dual canonical basis elements on a natural
generalization of $G_{>0}$. Let $S$ be some subset of generalized minors and
$G_{>0}^{S}$ the subset of $G$ where all elements of $S$ are positive. Which
dual canonical basis elements are positive on all elements of $G_{>0}^{S}$? In
this note, we consider the case where $G=SL_{m}$ and $S$ consists of the
generalized minors corresponding to the first $k$ fundamental weights and
their images under the Weyl group action. In this situation, $G_{>0}^{S}$ is
the set of _$k$ -positive matrices_, matrices where all minors of size $k$ and
smaller are positive. Cluster algebra structures, topology, and variation
diminishing properties of these matrices have been previously studied in [2,
4, 8, 7].
We call a matrix functional _$k$ -positive_ if it is positive when evaluated
on all $k$-positive matrices. Our main result is as follows:
###### Theorem 1.3.
Let $v\in S_{n}$ be $1324$-, $2143$-avoiding and suppose that for all $i<j$
with $v_{i}<v_{j}$, we have $j-i\leq k$ or $v_{j}-v_{i}\leq k$. Let
$R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$.
Then $\operatorname{Imm}_{v}X(R,C)$ is identically zero or it is $k$-positive.
We also characterize precisely when the functions
$\operatorname{Imm}_{v}X(R,C)$ appearing in Theorem 1.3 are identically zero (see
Theorem 3.1).
Theorem 1.3 extends the results of [5], in which we showed the function
$\operatorname{Imm}_{v}X([m],[m])$ is $k$-positive under the assumptions of
Theorem 1.3. Our techniques here are similar to those of [5]. Note that Theorem
1.3 does not follow from [5, Theorem 1.4] because for $k$-positive $M$, the
matrix $M(R,C)$ is $k$-nonnegative rather than $k$-positive.
Rephrasing Theorem 1.3 in terms of dual canonical basis elements, we have the
following corollary.
###### Corollary 1.4.
Let $F(X)=\operatorname{Imm}_{v}X(R,C)$ be an element of the dual canonical
basis of ${\mathbb{C}}[SL_{m}]$. Suppose $v$ is $1324$-, $2143$-avoiding and
for all $i<j$ with $v_{i}<v_{j}$, we have $j-i\leq k$ or $v_{j}-v_{i}\leq k$.
Then $F(X)$ is $k$-positive.
The paper is organized as follows. Section 2 gives background on the objects
we will be using to prove Theorem 1.3. It includes several useful lemmas
proven in [5]. Section 3 contains the proof of Theorem 1.3. We conclude with a
few thoughts on future directions in Section 4.
## 2. Background
In an abuse of notation, we frequently drop curly braces around sets appearing
in subscripts and superscripts.
### 2.1. Background on 1324 and 2143-avoiding Kazhdan-Lusztig immanants
For integers $i\leq j$, let $[i,j]:=\\{i,i+1,\dots,j-1,j\\}$. We abbreviate
$[1,n]$ as $[n]$. For $v\in S_{n}$, we write $v_{i}$ or $v(i)$ for the image
of $i$ under $v$. We use the notation $<$ for both the usual order on $[n]$
and the Bruhat order on $S_{n}$; it is clear from context which is meant. To
discuss non-inversions of a permutation $v$, we’ll write $\langle i,j\rangle$
to avoid confusion with a matrix index or point in the plane. In the notation
$\langle i,j\rangle$, we always assume $i<j$. We use the notation
$\left(\!\!\binom{[m]}{n}\!\!\right)$
for the collection of $n$-element multisets of $[m]$. We always list the
elements of a multiset in increasing order.
We are concerned with two notions of positivity, one for matrices and one for
immanants.
###### Definition 2.1.
Let $k\geq 1$. A matrix $M\in\text{Mat}_{n\times n}({\mathbb{C}})$ is _$k$
-positive_ if all minors of size at most $k$ are positive.
An immanant $\operatorname{Imm}_{f}(X):\text{Mat}_{n\times
n}({\mathbb{C}})\to{\mathbb{C}}$ is _$k$ -positive_ if it is positive on all
$k$-positive matrices.
Note that $k$-positive matrices have positive $1\times 1$ minors, i.e.
entries, and so are real matrices.
###### Example 2.2.
The matrix
$M=\begin{bmatrix}22&18&6&3\\\ 8&7&3&2\\\ 2&2&1&2\\\ 1&2&2&6\end{bmatrix}$
is $2$-positive but the upper left $3\times 3$ submatrix has negative
determinant, so is not $3$-positive or $4$-positive (totally positive).
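This is easy to confirm by brute force; a sketch (numpy assumed; `is_k_positive` is our naming) checks all minors up to size $k$:

```python
import numpy as np
from itertools import combinations

def is_k_positive(M, k):
    # check that every minor of size at most k is positive (Definition 2.1)
    n = M.shape[0]
    return all(np.linalg.det(M[np.ix_(rows, cols)]) > 0
               for size in range(1, k + 1)
               for rows in combinations(range(n), size)
               for cols in combinations(range(n), size))

M = np.array([[22, 18, 6, 3], [8, 7, 3, 2], [2, 2, 1, 2], [1, 2, 2, 6]])
print(is_k_positive(M, 2))              # True
print(np.linalg.det(M[:3, :3]))         # ~ -2 < 0, so M is not 3-positive
```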
Our results on $k$-positivity of Kazhdan-Lusztig immanants involve pattern
avoidance.
###### Definition 2.3.
Let $v\in S_{n}$, and let $w\in S_{m}$. Suppose $v=v_{1}\cdots v_{n}$ and
$w=w_{1}\cdots w_{m}$ in one-line notation. The pattern $w_{1}\cdots w_{m}$
_occurs_ in $v$ if there exist $1\leq i_{1}<\dots<i_{m}\leq n$ such that
$v_{i_{1}}\cdots v_{i_{m}}$ are in the same relative order as $w_{1}\cdots
w_{m}$. Additionally, $v$ _avoids_ the pattern $w_{1}\cdots w_{m}$ if it does
not occur in $v$.
Certain Kazhdan-Lusztig immanants have a very simple determinantal formula,
which involves the _graph_ of an interval.
###### Definition 2.4.
For $v\in S_{n}$, the _graph_ of $v$, denoted $\Gamma(v)$, refers to its graph
as a function. That is, $\Gamma(v):=\\{(1,v_{1}),\dots,(n,v_{n})\\}$. For
$v,w\in S_{n}$, the graph of the Bruhat interval $[v,w]$ is the subset of
$[n]^{2}$ defined as $\Gamma[v,w]:=\\{(i,u_{i}):u\in[v,w],i=1,\dots,n\\}$.
We think of an element $(i,j)\in\Gamma[v,w]$ as a point in row $i$ and column
$j$ of an $n\times n$ grid, indexed so that row indices increase going down
and column indices increase going right (see 2.6). A _square_ or _square
region_ in $\Gamma[v,w]$ is a subset of $\Gamma[v,w]$ which forms a square
when drawn in the grid.
We will also need the following notion on matrices.
###### Definition 2.5.
Let $P\subset[n]^{2}$ and let $M=(m_{ij})$ be an $n\times n$ matrix. The
_restriction_ of $M$ to $P$, denoted $M|_{P}$, is the matrix with entries
$m^{\prime}_{ij}=\begin{cases}m_{ij}&(i,j)\in P\\\ 0&\text{ else}.\end{cases}$
###### Example 2.6.
Consider $v=2413$ in $S_{4}$. We have
$[v,w_{0}]=\\{2413,4213,3412,2431,4312,4231,3421\\}$, and so $\Gamma[v,w_{0}]$
is as follows.
If $M$ is the matrix from Example 2.2, then
$M|_{\Gamma[v,w_{0}]}=\begin{bmatrix}0&18&6&3\\\ 0&7&3&2\\\ 2&2&1&0\\\
1&2&2&0\end{bmatrix}.$
Note that $v$ avoids patterns 1324 and 2143.
We can now state a simple determinantal formula for certain Kazhdan-Lusztig
immanants. This follows from results of [17].
###### Proposition 2.7 ([5, Corollary 3.6]).
Let $v\in S_{n}$ avoid $1324$ and $2143$. Then
$\operatorname{Imm}_{v}(X)=(-1)^{\ell(v)}\det(X|_{\Gamma[v,w_{0}]}).$ (2.1)
Using Proposition 2.7, we can similarly obtain a simple determinantal formula for certain
dual canonical basis elements of ${\mathbb{C}}[SL_{m}]$. Recall from Proposition 1.2 that
every dual canonical basis element can be expressed as a Kazhdan-Lusztig
immanant evaluated on a matrix of indeterminates with repeated rows and
columns.
###### Definition 2.8.
Let $R=\\{r_{1}\leq r_{2}\leq\dots\leq r_{n}\\}$ and $C=\\{c_{1}\leq
c_{2}\leq\dots\leq c_{n}\\}$ be elements of
$\left(\!\!\binom{[m]}{n}\!\!\right)$
and let $M=(m_{ij})$ be an $m\times m$ matrix. We denote by $M(R,C)$ the
matrix with $(i,j)$-entry equal to $m_{r_{i},c_{j}}$. We call $r_{i}$ the
_label_ of row $i$; similarly, $c_{j}$ is the label of column $j$. We view
$X(R,C)$ as a function from $\text{Mat}_{m\times m}({\mathbb{C}})$ to
$\text{Mat}_{n\times n}({\mathbb{C}})$, which takes $M$ to $M(R,C)$.
Note that our convention is always to list multisets in weakly increasing
order, so the row and column labels of $X(R,C)$ are weakly increasing.
###### Example 2.9.
Let $R=\\{1,1,3\\}$ and $C=\\{2,3,4\\}$. Then
$X(R,C)=\begin{bmatrix}x_{12}&x_{13}&x_{14}\\\ x_{12}&x_{13}&x_{14}\\\
x_{32}&x_{33}&x_{34}\end{bmatrix}.$
If $M$ is the matrix from Example 2.2, then
$M(R,C)=\begin{bmatrix}18&6&3\\\ 18&6&3\\\ 2&1&2\end{bmatrix}.$
We will focus on the dual canonical basis elements
$\operatorname{Imm}_{v}X(R,C)$ where $v$ is 1324- and 2143-avoiding. 2.7
immediately gives a determinantal formula for these immanants.
###### Lemma 2.10.
Let $R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$
and let $v\in S_{n}$ be $1324$- and $2143$-avoiding. Then
$\operatorname{Imm}_{v}X(R,C)=(-1)^{\ell(v)}\det X(R,C)|_{\Gamma[v,w_{0}]}.$
(2.2)
We are interested in the sign of $\operatorname{Imm}_{v}X(R,C)$ on
$k$-positive matrices, so long as $\operatorname{Imm}_{v}X(R,C)$ is not
identically zero. Clearly, the function in (2.2) is identically zero when the
matrix $X(R,C)|_{\Gamma[v,w_{0}]}$ has two identical rows or columns. We make
the following definitions to discuss this situation.
###### Definition 2.11.
Let $P\subseteq[n]^{2}$. The _support_ of row $r$ of $P$ is the set of columns
$c\in[n]$ such that $(r,c)\in P$. The support of a column is defined
analogously.
###### Definition 2.12.
Let $P\subseteq[n]^{2}$, and let
$R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$.
Then $P$ is _$(R,C)$ -admissible_ if no two rows or columns with the same
labels have the same support.
###### Example 2.13.
Let $P=\Gamma[v,w_{0}]$ where $v=2413$, as in Example 2.6. Rows 1 and 2 have
support $\\{2,3,4\\}$ and rows 3 and 4 have support $\\{1,2,3\\}$. Column 1
has support $\\{3,4\\}$, columns 2 and 3 have support $\\{1,2,3,4\\}$, and
column 4 has support $\\{1,2\\}$. This means $P$ is $(R,C)$-admissible if and
only if $r_{1}\neq r_{2},r_{3}\neq r_{4}$, and $c_{2}\neq c_{3}$. For example,
let $A=\\{1,2,2,3\\}$ and $B=\\{1,2,3,3\\}$. Then $P$ is $(A,B)$-admissible
but, since $a_{2}=a_{3}=2$, $P$ is not $(A,A)$-admissible.
For $v$ avoiding 1324 and 2143, $\operatorname{Imm}_{v}X(R,C)$ is identically
zero if $\Gamma[v,w_{0}]$ is not $(R,C)$-admissible. In the subsequent
sections, we will show the converse holds as well (see 3.10).
Finally, we introduce some notation that will be useful in proofs.
For $I\in\binom{[n]}{k}$, define $\delta_{I}:[n]\setminus I\to[n-k]$ as
$\delta_{I}(j):=j-|\\{i\in I:i<j\\}|.$
That is, $\delta_{I}$ is the unique order-preserving map from $[n]\setminus I$
to $[n-k]$.
###### Definition 2.14.
For $I,J\in\binom{[n]}{k}$ and $P\subseteq[n]^{2}$, let
$P^{J}_{I}\subseteq[n-k]\times[n-k]$ be $P$ with rows $I$ and columns $J$
deleted. That is, $P^{J}_{I}=\\{(\delta_{I}(a),\delta_{J}(b)):(a,b)\in P\\}$.
The labels of rows and columns are preserved under deletions; to be more
precise, if $R=\\{r_{1}\leq\cdots\leq r_{n}\\}$ is the multiset of row labels
of $P$, the multiset of row labels of $P^{J}_{I}$ is
$\\{r^{\prime}_{1}\leq\cdots\leq r^{\prime}_{n-k}\\}$ where
$r^{\prime}_{j}=r_{\delta_{I}^{-1}(j)}$.
### 2.2. Combinatorics of graphs of upper intervals
We will now take a closer look at the graphs $\Gamma[v,w_{0}]$ that appear in
2.10. We begin by giving an alternate definition for $\Gamma[v,w_{0}]$.
###### Definition 2.15.
Let $v\in S_{n}$ and $(i,j)\in[n]^{2}\setminus\Gamma(v)$. Then $(i,j)$ is
_sandwiched_ by a non-inversion $\langle k,l\rangle$ if $k\leq i\leq l$ and
$v_{k}\leq j\leq v_{l}$. We also say $\langle k,l\rangle$ _sandwiches_
$(i,j)$.
In other words, $(i,j)$ is sandwiched by $\langle k,l\rangle$ if and only if
$(i,j)\in[n]^{2}$ lies inside the rectangle with diagonal corners $(k,v_{k})$
and $(l,v_{l})$.
###### Lemma 2.16 ([5, Lemma 3.4]).
Let $v\in S_{n}$. Then $\Gamma[v,w_{0}]=\Gamma(v)\cup\\{(i,j):(i,j)$ is
sandwiched by a non-inversion of $v\\}$.
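This characterization makes $\Gamma[v,w_{0}]$ easy to compute directly; the following sketch (plain Python; `gamma_upper_interval` is our naming) reproduces the supports listed in Example 2.13 for $v=2413$:

```python
def gamma_upper_interval(v):
    # Gamma[v, w0] per Lemma 2.16: the graph of v together with all points
    # sandwiched by a non-inversion of v (positions and values are 1-indexed)
    n = len(v)
    pts = {(i + 1, v[i]) for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if v[i] < v[j]:  # non-inversion <i+1, j+1>
                pts |= {(r, c) for r in range(i + 1, j + 2)
                        for c in range(v[i], v[j] + 1)}
    return pts

G = gamma_upper_interval((2, 4, 1, 3))  # v = 2413, as in Example 2.6
for r in range(1, 5):
    print(r, sorted(c for (rr, c) in G if rr == r))
# rows 1,2 have support {2,3,4}; rows 3,4 have support {1,2,3}, cf. Example 2.13
```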
Using this alternate characterization, one can translate the assumptions of
Theorem 1.3 into a condition on $\Gamma[v,w_{0}]$.
###### Lemma 2.17 ([5, Lemma 4.1]).
Let $v\in S_{n}$. The graph $\Gamma[v,w_{0}]$ has a square of size $k+1$ if
and only if for some non-inversion $\langle i,j\rangle$ of $v$, we have
$j-i\geq k$ and $v_{j}-v_{i}\geq k$.
We now introduce some notation and a proposition that we will need to prove
our main result.
###### Definition 2.18.
Let $v\in S_{n}$. Define $\mathbf{B}_{i,v_{i}}$ to be the square region of
$[n]^{2}$ with corners $(i,v_{i}),\ (i,n-i+1),\ (n-v_{i}+1,v_{i})$ and
$(n-v_{i}+1,n-i+1)$. In other words, $\mathbf{B}_{i,v_{i}}$ is the square
region of $[n]^{2}$ with one corner at $(i,v_{i})$ and two corners on the
antidiagonal of $[n]^{2}$. We say $\mathbf{B}_{i,v_{i}}$ is a _bounding box_
of $\Gamma[v,w_{0}]$ if there does not exist some $j$ such that
$\mathbf{B}_{i,v_{i}}\subsetneq\mathbf{B}_{j,v_{j}}$. If
$\mathbf{B}_{i,v_{i}}$ is a bounding box of $\Gamma[v,w_{0}]$, we call
$(i,v_{i})$ a _spanning corner_ of $\Gamma[v,w_{0}]$. (See Figure 1 for an
example.)
Figure 1. An example of $\Gamma[v,w_{0}]$, with
$v=6~{}10~{}4~{}7~{}8~{}9~{}3~{}1~{}2$. The bounding boxes are blue, red,
blue, green, and purple, listed in the order of their northmost row. The
spanning corners of $\Gamma[v,w_{0}]$ are $(1,6)$, $(3,4)$, $(6,9)$, $(8,3)$,
$(9,1)$, and $(10,2)$.
The name “bounding boxes” comes from the following lemma.
###### Lemma 2.19 ([5, Lemma 4.12]).
Let $v\in S_{n}$. Then
$\Gamma[v,w_{0}]\subseteq\bigcup_{(i,v_{i})\in S}\mathbf{B}_{i,v_{i}}.$
We also color the bounding boxes.
###### Definition 2.20.
A bounding box $\mathbf{B}_{i,v_{i}}$ is said to be _red_ if $(i,v_{i})$ is
below the antidiagonal, _green_ if $(i,v_{i})$ is on the antidiagonal, and
_blue_ if $(i,v_{i})$ is above the antidiagonal. If $\mathbf{B}_{i,v_{i}}$ and
$\mathbf{B}_{n-v_{i}+1,n-i+1}$ are both bounding boxes, then
$\mathbf{B}_{i,v_{i}}=\mathbf{B}_{n-v_{i}+1,n-i+1}$ is both red and blue. We
say such a box is _purple_. (See Figure 1 for an example.)
###### Proposition 2.21 ([5, Proposition 4.14]).
Suppose $v\in S_{n}$ avoids $2143$ and $w_{0}v$ is not contained in a maximal
parabolic subgroup of $S_{n}$. Order the bounding boxes of $\Gamma[v,w_{0}]$
by the row of the northwest corner. If $\Gamma[v,w_{0}]$ has more than one
bounding box, then they alternate between blue and red and there are no purple
bounding boxes.
## 3. Positivity of basis elements
In this section, we prove our main result.
###### Theorem 3.1.
Let $R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$,
let $v\in S_{n}$ be $1324$-, $2143$-avoiding and suppose that the largest
square region in $\Gamma[v,w_{0}]$ has size at most $k$. If $\Gamma[v,w_{0}]$
is not $(R,C)$-admissible, then $\operatorname{Imm}_{v}X(R,C)$ is identically
zero. Otherwise, $\operatorname{Imm}_{v}X(R,C)$ is $k$-positive.
Theorem 1.3 easily follows from Theorem 3.1, using Lemma 2.17.
Our proofs rely heavily on Lewis Carroll’s identity.
###### Proposition 3.2 (Lewis Carroll’s Identity).
If $M$ is an $n\times n$ square matrix and $M_{A}^{B}$ is $M$ with the rows
indexed by $A\subset[n]$ and columns indexed by $B\subset[n]$ removed, then
$\det(M)\det(M_{a,a^{\prime}}^{b,b^{\prime}})=\det(M_{a}^{b})\det(M_{a^{\prime}}^{b^{\prime}})-\det(M_{a}^{b^{\prime}})\det(M_{a^{\prime}}^{b}),$
where $1\leq a<a^{\prime}\leq n$ and $1\leq b<b^{\prime}\leq n$.
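A quick numerical sanity check of the identity (numpy assumed; the helper `det_drop` and the 0-indexed row and column choices are ours):

```python
import numpy as np

def det_drop(M, rows, cols):
    # determinant of M with the given rows and columns removed
    r = [i for i in range(M.shape[0]) if i not in rows]
    c = [j for j in range(M.shape[1]) if j not in cols]
    return np.linalg.det(M[np.ix_(r, c)])

M = np.random.default_rng(0).standard_normal((5, 5))
a, ap, b, bp = 0, 4, 1, 3   # a < a', b < b' (0-indexed)
lhs = np.linalg.det(M) * det_drop(M, {a, ap}, {b, bp})
rhs = det_drop(M, {a}, {b}) * det_drop(M, {ap}, {bp}) \
      - det_drop(M, {a}, {bp}) * det_drop(M, {ap}, {b})
print(np.isclose(lhs, rhs))  # True
```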
### 3.1. Young diagram case
We first consider the case where $\Gamma[v,w_{0}]$ is a Young diagram or the
complement of a Young diagram (using English notation). Recall that the
_Durfee square_ of a Young diagram $\lambda$ is the largest square contained
in $\lambda$.
###### Proposition 3.3.
Let $\lambda\subseteq n^{n}$ be a Young diagram with Durfee square of size at
most $k$ and $\mu:=n^{n}/\lambda$. Let $M$ be an $m\times m$ $k$-positive
matrix and $R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$.
Then
$(-1)^{|\mu|}\det M(R,C)|_{\lambda}\geq 0$
and equality holds only if $(n,n-1,\dots,1)\nsubseteq\lambda$ or if $\lambda$
is not $(R,C)$-admissible.
###### Proof.
Let $A=M(R,C)|_{\lambda}=\\{a_{ij}\\}$. For $\sigma\in S_{n}$, let
$a_{\sigma}:=a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}$. If
$(n,n-1,\dots,1)\nsubseteq\lambda$ then there is some $1\leq j\leq n$ where
$\lambda_{n-j+1}<j$. Thus all boxes of $\lambda$ in the last $j$ rows lie in the
southwesternmost $j\times(j-1)$ rectangle. Since the $j$ values
$\sigma(n-j+1),\dots,\sigma(n)$ cannot all lie in $\{1,\dots,j-1\}$, for every
$\sigma$ the product $a_{\sigma}$ contains some zero entry, so $\det(A)=0$. It
is also clear that if $\lambda$ is not $(R,C)$-admissible then $\det(A)=0$.
Now we will assume that $(n,n-1,\dots,1)\subseteq\lambda$ and that $\lambda$
is $(R,C)$-admissible. We proceed by induction on $n$ to show that $\det(A)$
has sign $(-1)^{|\mu|}$. The base cases for $n=1,2$ are easy to check.
Let $a=\max\\{i\ |\ \lambda_{i}=n\\}$ and $b=\lambda_{n}=\max\\{j\ |\
\lambda^{\prime}_{j}=n\\}$ where $\lambda^{\prime}$ denotes the transpose of
$\lambda$. In other words, $a$ is the last row in $\lambda$ with $n$ boxes and
$b$ is the last column in $\lambda$ with $n$ boxes. From Lewis Carroll’s
identity, we have that
$\det(A)\det(A_{a,n}^{b,n})=\det(A_{a}^{b})\det(A_{n}^{n})-\det(A_{a}^{n})\det(A_{n}^{b}).$
(3.1)
Let’s see what we know about the signs of these determinants using our
inductive hypothesis. Say $I:=\\{i_{1}<\cdots<i_{k}\\}$ and
$J:=\\{j_{1}<\cdots<j_{k}\\}$, and let $\lambda_{I}^{J}$ denote the Young
diagram obtained from $\lambda$ by removing rows indexed by $I$ and columns
indexed by $J$. Note that
$A_{I}^{J}=M(R,C)_{I}^{J}|_{\lambda_{I}^{J}}=M(R\setminus\\{r_{i_{1}},\dots,r_{i_{k}}\\},C\setminus\\{c_{j_{1}},\dots,c_{j_{k}}\\})|_{\lambda_{I}^{J}}.$
Also, $\lambda_{I}^{J}$ has Durfee square of size at most $k$. So we can use
the inductive hypothesis to compute the signs of all of the determinants in
(3.1) other than $\det(A)$.
Let’s consider which determinants in (3.1) are zero. The shape
$\lambda_{a,n}^{b,n}$ contains the staircase $(n-2,\dots,1)$ and the shapes
$\lambda_{n}^{n},\lambda_{a}^{n}$, and $\lambda_{n}^{b}$ contain the staircase
$(n-1,\dots,1)$. However, $\lambda_{a}^{b}$ may not contain the staircase
$(n-1,\dots,1)$ (e.g. consider $\lambda=(3,3,1)$), so $\det A_{a}^{b}$ may be
zero. Now we need to determine when $\lambda_{I}^{J}$ is
$(R\setminus\\{r_{i_{1}},\dots,r_{i_{k}}\\},C\setminus\\{c_{j_{1}},\dots,c_{j_{k}}\\})$-admissible.
Consider $A_{a,n}^{b,n}$ and pick two row indices $p,q\notin\\{a,n\\}$ with
$p<q$ and $r_{p}=r_{q}$. Because $\lambda$ is $(R,C)$-admissible, rows $p,q$
have different support, so $\lambda_{p}>\lambda_{q}$. Further, because $R$ is
listed in weakly increasing order, $p>a$. We would like to argue that rows
$p^{\prime}:=\delta_{a,n}(p)$ and $q^{\prime}:=\delta_{a,n}(q)$ of
$A_{a,n}^{b,n}$ have distinct support. Since $p>a$, we have
$(\lambda_{a,n}^{b,n})_{p^{\prime}}=\lambda_{p}-1$ and
$(\lambda_{a,n}^{b,n})_{q^{\prime}}=\lambda_{q}-1$, so
$(\lambda_{a,n}^{b,n})_{p^{\prime}}>(\lambda_{a,n}^{b,n})_{q^{\prime}}$. An
analogous argument shows that columns of $A_{a,n}^{b,n}$ with the same index
have different support. Similarly, $A_{a}^{b},A_{a}^{n}$, and $A_{n}^{b}$ are
$(S,D)$-admissible for the appropriate $S,D$. On the other hand, $A_{n}^{n}$
may not be (consider $R=(1,1,2)$, $C=(1,2,3)$, $\lambda=(3,2,1)$, for
example).
Taking all of this together we find that the $\det(A_{a}^{b})\det(A_{n}^{n})$
term in (3.1) may be zero but that $\det(A_{a,n}^{b,n})$ and
$\det(A_{a}^{n})\det(A_{n}^{b})$ are always nonzero. By induction,
$\det(A_{a,n}^{b,n})$ has sign $(-1)^{|\mu|+a+b+1}$ and
$\det(A_{a}^{n})\det(A_{n}^{b})$ has sign $(-1)^{a+b}$. If
$\det(A_{a}^{b})\det(A_{n}^{n})$ is nonzero it has sign $(-1)^{a+b+1}$. Thus,
$\det(A_{a}^{b})\det(A_{n}^{n})-\det(A_{a}^{n})\det(A_{n}^{b})$ always has
sign $(-1)^{a+b+1}$ and $\det(A)$ is always nonzero with sign $(-1)^{|\mu|}$.
∎
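To make the sign prediction of 3.3 concrete, here is a small Python sketch (our illustration) that checks it for the totally positive Pascal matrix with $R=C=[n]$, so that admissibility holds trivially since all row and column labels are distinct.

```python
import numpy as np
from math import comb

n = 4
# Pascal matrix: totally positive, hence k-positive for every k.
P = np.array([[comb(i + j, i) for j in range(n)] for i in range(n)], float)

lam = (4, 3, 3, 1)  # contains the staircase (4, 3, 2, 1); Durfee square size 3
A = np.array([[P[i, j] if j < lam[i] else 0.0 for j in range(n)]
              for i in range(n)])  # P restricted to the diagram of lam

mu = n * n - sum(lam)  # |mu| where mu = n^n / lam
assert (-1) ** mu * np.linalg.det(A) > 0  # sign predicted by 3.3
```

Here $\det(A)=-3$ and $|\mu|=5$, matching the predicted sign $(-1)^{|\mu|}$.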
###### Corollary 3.4.
Let $\lambda\subseteq n^{n}$ be a Young diagram and let $\mu:=n^{n}/\lambda$.
Suppose $\mu$ has Durfee square of size at most $k$, $M$ is a $k$-positive
$m\times m$ matrix, and
$R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$.
Then
$(-1)^{|\lambda|}\det M(R,C)|_{\mu}\geq 0$
and equality holds if and only if $(n^{n}/(n-1,n-2,\dots,1,0))\nsubseteq\mu$
(or equivalently, $\lambda\nsubseteq(n-1,n-2,\dots,1,0)$) or if $\mu$ is not
$(R,C)$-admissible.
###### Proof.
Let $\dot{w_{0}}$ denote the matrix with ones on the antidiagonal and zeros
elsewhere. For a multiset $J=\\{j_{1}\leq\cdots\leq j_{n}\\}$, let
$\overline{J}:=\\{\overline{j}_{1}\leq\cdots\leq\overline{j}_{n}\\}$ where
$\overline{j}_{i}:=n+1-j_{n+1-i}$.
Let $M^{\prime}$ be the antidiagonal transpose of $M$; in symbols,
$M^{\prime}=\dot{w_{0}}M^{T}\dot{w_{0}}$. Taking the antidiagonal transpose does
not affect the determinant, so $M^{\prime}$ is also $k$-positive.
If we transpose $M(R,C)|_{\mu}$ across the antidiagonal, we obtain the matrix
$N:=M^{\prime}(\overline{C},\overline{R})|_{\nu},$
where $\nu$ is the Young diagram obtained from the skew-shape $\mu$ by
reflecting across the antidiagonal. Applying 3.3, we have that $\det N$ has
sign $(-1)^{|\lambda|}$ if $\nu$ is $(\overline{C},\overline{R})$-admissible and is
zero otherwise. It is not hard to check that $\nu$ is
$(\overline{C},\overline{R})$-admissible if and only if $\mu$ is
$(R,C)$-admissible. ∎
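The antidiagonal-transpose trick used above is also easy to verify numerically; in the sketch below (ours), $\dot{w_{0}}$ is built with `np.fliplr`.

```python
import numpy as np

n = 5
rng = np.random.default_rng(2)
M = rng.standard_normal((n, n))

w0 = np.fliplr(np.eye(n))  # ones on the antidiagonal, zeros elsewhere
Mp = w0 @ M.T @ w0         # antidiagonal transpose of M

# det(w0)^2 = 1, so the determinant is unchanged; every minor of Mp is a
# (reindexed) minor of M, so k-positivity is preserved as well.
assert np.isclose(np.linalg.det(Mp), np.linalg.det(M))
```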
We can use 2.7 to rewrite 3.3 and 3.4 in terms of immanants.
###### Corollary 3.5.
Let $v\in S_{n}$ avoid $1324$ and $2143$. Suppose $\Gamma[v,w_{0}]$ is a Young
diagram $\lambda$ with Durfee square of size at most $k$. If $M$ is a
$k$-positive $m\times m$ matrix and
$R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$
such that $\lambda$ is $(R,C)$-admissible, then
$\operatorname{Imm}_{v}M(R,C)>0$.
###### Proof.
Note that $\Gamma(w_{0})\subseteq\Gamma[v,w_{0}]$ implies $\lambda$ contains
the partition $(n,n-1,\dots,1)$. So, by 3.3, we know that $(-1)^{|\mu|}\det
M(R,C)|_{\Gamma[v,w_{0}]}>0$ where $\mu=n^{n}/\lambda$.
Notice that if a box of $\mu$ is in row $r$ and column $c$ then $v(r)<c$ and
$v^{-1}(c)<r$. This means that $(v^{-1}(c),r)$ is an inversion. If $(a,b)$ is
an inversion of $v$ and the box in row $b$ and column $v(a)$ is not in $\mu$,
then $(b,v(a))$ is sandwiched by some non-inversion $\langle a,j\rangle$ for
some $j$. But then $1\,v(a)\,v(b)\,v(j)$ is an occurrence of the pattern
1324, a contradiction. So $(b,v(a))$ is in $\mu$. This means boxes in $\mu$
are in bijection with inversions of $v$ and $(-1)^{\ell(v)}\det
M(R,C)|_{\Gamma[v,w_{0}]}=(-1)^{|\mu|}\det M(R,C)|_{\Gamma[v,w_{0}]}>0$. By
2.7, this means $\operatorname{Imm}_{v}M(R,C)>0$. ∎
###### Corollary 3.6.
Let $v\in S_{n}$ avoid $1324$ and $2143$. Suppose $\Gamma[v,w_{0}]$ is
$\lambda=n^{n}/\mu$ for some partition $\mu$ and the largest square in
$\lambda$ is of size at most $k$. If $M$ is a $k$-positive $m\times m$ matrix
and
$R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$
such that $\lambda$ is $(R,C)$-admissible, then
$\operatorname{Imm}_{v}M(R,C)>0$.
###### Proof.
Note that $\Gamma(w_{0})\subseteq\Gamma[v,w_{0}]$ implies $\lambda$ contains
the partition $(n^{n}/(n-1,n-2,\dots,1,0))$. So, by 3.4, we know that
$(-1)^{|\mu|}\det M(R,C)|_{\Gamma[v,w_{0}]}>0$.
As in the proof of 3.5, there is a bijection between boxes of $\mu$ and
inversions of $v$. So, we know $(-1)^{\ell(v)}\det
M(R,C)|_{\Gamma[v,w_{0}]}=(-1)^{|\mu|}\det M(R,C)|_{\Gamma[v,w_{0}]}>0$. By
2.7, this means $\operatorname{Imm}_{v}M(R,C)>0$. ∎
### 3.2. General Case
The following proposition will allow us to restrict to permutations that are
not elements of a maximal parabolic subgroup of $S_{n}$. To state the lemma we
temporarily denote the longest permutation in $S_{j}$ by $w_{(j)}$.
###### Proposition 3.7 ([5, Corollary 4.9]).
Suppose $v\in S_{n}$ is $1324$-, $2143$-avoiding and $\Gamma[v,w_{0}]$ is
block-antidiagonal. Let $v_{1}\in S_{j}$ and $v_{2}\in S_{n-j}$ be
permutations such that the upper-right antidiagonal block of $\Gamma[v,w_{0}]$
is equal to $\Gamma[v_{1},w_{(j)}]$ and the other antidiagonal block is equal
to $\Gamma[v_{2},w_{(n-j)}]$. Then
$\operatorname{Imm}_{v}M=\operatorname{Imm}_{v_{1}}M([j],[n-j+1,n])\cdot\operatorname{Imm}_{v_{2}}M([j+1,n],[n-j]).$
Figure 2. An example where $\Gamma[v,w_{0}]$ is block-antidiagonal. Here,
$v=74586132$. In the notation of 3.7, $j=5$, $v_{1}=41253$, and $v_{2}=132$.
See Figure 2 for an example illustrating a block-antidiagonal
$\Gamma[v,w_{0}]$ and the notation of Proposition 3.7.
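Via the determinantal formula 2.10, the factorization in 3.7 reduces, up to sign, to the elementary fact that the determinant of a matrix supported on two antidiagonal blocks factors into the determinants of the blocks. The following Python sketch (our illustration; it checks only this determinant identity, not the immanants themselves) demonstrates the mechanism.

```python
import numpy as np

n, j = 8, 5
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))

A = np.zeros((n, n))
A[:j, n - j:] = M[:j, n - j:]  # upper-right j x j block, cf. M([j], [n-j+1, n])
A[j:, :n - j] = M[j:, :n - j]  # lower-left (n-j) x (n-j) block, cf. M([j+1, n], [n-j])

sign = (-1) ** (j * (n - j))   # cost of moving the right column block to the front
assert np.isclose(np.linalg.det(A),
                  sign * np.linalg.det(M[:j, n - j:]) * np.linalg.det(M[j:, :n - j]))
```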
To analyze the determinants appearing in Lewis Carroll’s identity for $\det
X(R,C)|_{\Gamma[v,w_{0}]}$, we will use the following two propositions.
###### Proposition 3.8.
Let $v\in S_{n}$ be $2143$- and $1324$-avoiding, and choose $i\in[n]$. Let
$x\in S_{n-1}$ be the permutation
$x:\delta_{i}(j)\mapsto\delta_{v_{i}}(v_{j})$ (that is, $x$ is obtained from
$v$ by deleting $v_{i}$ from $v$ in one-line notation and shifting the
remaining numbers appropriately). Then
1. (1)
$\Gamma[x,w_{0}]=(\Gamma[v,w_{0}]\setminus\\{(p,q):(p,q)\text{ is sandwiched
only by a non-inversion involving }i\\})_{i}^{v_{i}}$;
2. (2)
If $(i,v_{i})$ is not a spanning corner of $\Gamma[v,w_{0}]$, then
$\Gamma[x,w_{0}]=\Gamma[v,w_{0}]_{i}^{v_{i}}$.
3. (3)
For all $i$,
$\det(M|_{\Gamma[x,w_{0}]})=\det(M|_{\Gamma[v,w_{0}]_{i}^{v_{i}}}).$
###### Proof.
Statement (1) follows from 2.16. Statements (2) and (3) are Proposition 4.17
from [5]. ∎
###### Proposition 3.9.
Let $v\in S_{n}$ be $1324$- and $2143$-avoiding such that the last bounding
box of $\Gamma[v,w_{0}]$ is $\mathbf{B}_{n,v_{n}}$, and the second to last box
is $\mathbf{B}_{a,v_{a}}$ for some $a<n$ with $1<v_{a}<v_{n}$. Let
$b=v^{-1}(1)$ and $d=v_{a}$. Suppose
$\det(M(R,C)|_{\Gamma[v,w_{0}]})^{1}_{b}\cdot\det(M(R,C)|_{\Gamma[v,w_{0}]})^{d}_{a}$
is nonzero and has sign $\sigma$. Then
1. (1)
If
$\det(M(R,C)|_{\Gamma[v,w_{0}]})^{1}_{a}\cdot\det(M(R,C)|_{\Gamma[v,w_{0}]})^{d}_{b}\neq
0$, it has sign $-\sigma$.
2. (2)
If $\det(M(R,C)|_{\Gamma[v,w_{0}]})^{1,d}_{a,b}\neq 0$, it has sign
$\sigma\cdot(-1)^{\ell(v)}$.
###### Proof.
This follows from the proof of [5, Theorem 4.18]. ∎
We can now determine the sign of $\det X(R,C)|_{\Gamma[v,w_{0}]}$ on
$k$-positive matrices.
###### Theorem 3.10.
Let $v\in S_{n}$ avoid $1324$ and $2143$ and let $k$ be the size of the
largest square in $\Gamma[v,w_{0}]$. Choose
$R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$.
For $M$ a $k$-positive $m\times m$ matrix,
$(-1)^{\ell(v)}\det M(R,C)|_{\Gamma[v,w_{0}]}\geq 0$
and equality holds if and only if $\Gamma[v,w_{0}]$ is not $(R,C)$-admissible.
###### Proof.
First, if $\Gamma[v,w_{0}]$ is not $(R,C)$-admissible, the determinant in
question is obviously zero. So we assume $\Gamma[v,w_{0}]$ is
$(R,C)$-admissible.
We follow the proof of [5, Theorem 4.18], and proceed by induction on $n$. If
$\Gamma[v,w_{0}]$ is a partition, a complement of a partition, or block-
antidiagonal, we are done by 3.5, 3.6, or 3.7, respectively.
So we may assume that $v$ has at least 2 bounding boxes and that adjacent
bounding boxes have nonempty intersection (where bounding boxes are ordered as
usual by the row of their northeast corner). Because $v$ avoids 1324 and 2143,
the final two bounding boxes of $\Gamma[v,w_{0}]$ are of opposite color by
2.21. Without loss of generality, we assume the final box is red and the
second to last box is blue. Otherwise, we can consider the antidiagonal
transpose of $M(R,C)|_{\Gamma[v,w_{0}]}$. This is equal to
$(\dot{w_{0}}M^{T}\dot{w_{0}})(\overline{C},\overline{R})|_{\Gamma[w_{0}v^{-1}w_{0},w_{0}]}$
(using the notation in the proof of 3.4) and has the same determinant as
$M(R,C)|_{\Gamma[v,w_{0}]}$.
This means the final box is $\mathbf{B}_{n,v_{n}}$, and the second to last box
is $\mathbf{B}_{a,v_{a}}$ for some $a<n$ with $1<v_{a}<v_{n}$. We analyze the
sign of $\det M(R,C)|_{\Gamma[v,w_{0}]}$ using Lewis Carroll’s identity on
rows $a,b:=v^{-1}(1)$ and columns $1,d:=v_{a}$. Note that $a<b$ and $1<d$.
The proof of [5, Theorem 4.18] shows that each of the 5 known determinants in
this Lewis Carroll’s identity is equal to $\det
M(R^{\prime},C^{\prime})|_{\Gamma[v^{\prime},w_{0}]}$ for an appropriate
choice of multisets $R^{\prime},C^{\prime}$ and permutation $v^{\prime}$. We
first show that two of these determinants, forming a single term on the right-
hand side of the identity, are non-zero.
1. (1)
Consider $(M(R,C)|_{\Gamma[v,w_{0}]})^{1}_{b}$. By 3.8, the determinant of
this matrix is equal to the determinant of
$M(R^{\prime},C^{\prime})|_{\Gamma[y,w_{0}]}$, where $y$ is obtained from $v$
by deleting 1 from $v$ in one-line notation and shifting appropriately,
$R^{\prime}=R\setminus\{r_{b}\}$ and $C^{\prime}=C\setminus\{c_{1}\}$.
We will check that $\Gamma[y,w_{0}]$ is $(R^{\prime},C^{\prime})$-admissible.
Note that because $(1,b)$ is not a spanning corner of $\Gamma[v,w_{0}]$,
$\Gamma[y,w_{0}]=\Gamma[v,w_{0}]^{1}_{b}$ by 3.8. So we first check that
removing column $1$ and row $b$ from $\Gamma[v,w_{0}]$ does not create any
rows $i,j$ with both the same support and the same labels. By [5, Theorem
4.18, pf. of (2)], rows $b,\dots,n$ of $\Gamma[v,w_{0}]$ all have support
$\\{1,\dots,v_{n}\\}$. Note that removing column $1$ from $\Gamma[v,w_{0}]$
shortens rows $b,\dots,n$ by one and does not affect other rows, so it
suffices to check that rows $b-1,\dots,n-1$ in $\Gamma[y,w_{0}]$ have distinct
labels. Since $\Gamma[v,w_{0}]$ is $(R,C)$-admissible and rows $b,\dots,n$ of
$\Gamma[v,w_{0}]$ have the same support, we must have $r_{b-1}\leq
r_{b}<r_{b+1}<\cdots<r_{n}$. So, letting $r^{\prime}_{i}$ denote the elements
of $R^{\prime}$, indexed in increasing order, we have
$r^{\prime}_{b-1}<r^{\prime}_{b}<\cdots<r^{\prime}_{n-1}$.
We now show there are no columns in $\Gamma[y,w_{0}]$ with both the same
support and same labels. Columns $1,\dots,v_{n}$ of $\Gamma[v,w_{0}]$ have
support containing $[b,n]$, and columns $v_{n}+1,\dots,n$ have support
contained in $[1,b-1]$. Removing row $b$ removes one element from the support
of columns $1,\dots,v_{n}$ and does not affect other columns. Any two columns
with the same support in $\Gamma[y,w_{0}]$ correspond to two columns with the
same support in $\Gamma[v,w_{0}]$, and thus have different labels by the
$(R,C)$-admissibility of $\Gamma[v,w_{0}]$.
2. (2)
Consider $(M(R,C)|_{\Gamma[v,w_{0}]})^{d}_{a}$. By 3.8, the determinant of
this matrix is equal to the determinant of
$M(R^{\prime},C^{\prime})^{d}_{a}|_{\Gamma[z,w_{0}]}$, where $z$ is obtained
from $v$ by deleting $v_{a}$ from $v$ in one-line notation and shifting
appropriately, $R^{\prime}=R\setminus\\{r_{a}\\}$, and
$C^{\prime}=C\setminus\\{c_{d}\\}$. See Figure 3 for an example.
Figure 3. On the left, $\Gamma[v,w_{0}]$ for $v=62785314$. Elements of
$\Gamma[v]$ are marked with crosses. On the right, $\Gamma[z,w_{0}]$ where
$z=5674213$ is the permutation obtained by deleting 2 from the one-line
notation of $v$ and shifting remaining numbers appropriately. Note that
$\Gamma[z,w_{0}]$ is obtained from $\Gamma[v,w_{0}]$ by deleting row 2, column
2, and the shaded region $Q$, consisting of elements sandwiched only by non-
inversions of the form $\langle 2,i\rangle$.
As $(a,v_{a})$ is a spanning corner of $\Gamma[v,w_{0}]$, $\Gamma[z,w_{0}]$ is
obtained from $\Gamma[v,w_{0}]$ by deleting row $a$, column $d$, and the
subset $Q\subset[n]^{2}$ consisting of all elements $(p,q)$ which are
sandwiched only by a non-inversion of the form $\langle a,i\rangle$ (see
Figure 3). Note that if $(p,q)\in Q$, then row $p$ of $\Gamma[v,w_{0}]$ has
support $\\{d,d+1,\dots,d+j\\}$ for some $j$ and column $q$ of
$\Gamma[v,w_{0}]$ has support $\\{a,a+1,\dots,a+\ell\\}$ for some $\ell$.
Notice also that $Q$ consists of some initial chunk of each row and column of
$\Gamma[v,w_{0}]$ it intersects; thus, deleting elements of $Q$ will not
change the largest number in the support of any row or column. Since all
corners of $\Gamma[v,w_{0}]$ are elements of $\Gamma[v]$ and
$(a,d)\in\Gamma[v]$, there are no other corners in row $a$ or column $d$. So
$a$ (resp. $d$) cannot be the largest element in the support of a column
(resp. row). So for row $p$ in $\Gamma[v,w_{0}]_{a}^{d}$, with $\ell$ the
largest element in the support of $p$, $\delta^{-1}_{d}(\ell)$ is the largest
element in the support of row $\delta^{-1}_{a}(p)$ in $\Gamma[v,w_{0}]$. An
analogous statement holds for column $q$ in $\Gamma[v,w_{0}]_{a}^{d}$.
Consider rows $p<p^{\prime}$ of $\Gamma[v,w_{0}]$ with $r_{p}=r_{p^{\prime}}$
and $p,p^{\prime}\neq a$. Because $R$ is listed in weakly increasing order,
$r_{p}=r_{p+1}=\cdots=r_{p^{\prime}}$. By assumption, the support of these
rows in $\Gamma[v,w_{0}]$ must be different. Suppose rows $s=\delta_{a}(p)$,
$s^{\prime}=\delta_{a}(p^{\prime})$ have the same support in
$\Gamma[z,w_{0}]$; say $\ell$ is the largest number in their support. The
reasoning in the above paragraph implies that $\delta^{-1}_{d}(\ell)$ is the
largest number in the support of rows $p,p^{\prime}$ in $\Gamma[v,w_{0}]$, and
thus also in rows $p,p+1,\dots,p^{\prime}-1,p^{\prime}$. So the smallest
number in the support of rows $p,p+1,\dots,p^{\prime}$ must be different. On
the other hand, after deleting column $d$ and the elements of $Q$, the
supports should be the same. These deletions remove the first element of a row
only if that first element is in column $d$. Putting these together, we must
have $p=a-1$, $p^{\prime}=a+1$, and row $a+1$ has support starting at $d$;
otherwise we obtain rows of $\Gamma[v,w_{0}]$ with the same label and same
support. But now row $a$ is among rows $p,p+1,p^{\prime}$, and rows $a$ and
$p^{\prime}=a+1$ have support starting at $d$, a contradiction.
An identical argument with columns in place of rows shows that no two columns
of $\Gamma[z,w_{0}]$ have the same support and the same label. So
$\Gamma[z,w_{0}]$ is $(R^{\prime},C^{\prime})$-admissible.
So by the inductive hypothesis, one term on the right-hand side of the
identity is nonzero. Let $\sigma$ denote the sign of this term. By 3.9, the
other term on the right-hand side has sign $-\sigma$ if it is nonzero. In
either case, the right-hand side has sign $\sigma$, and in particular is
nonzero. Thus, both determinants on the left-hand side are non-zero. By 3.9,
the determinant $\det(M(R,C)|_{\Gamma[v,w_{0}]})^{1,d}_{a,b}$ has sign
$\sigma\cdot(-1)^{\ell(v)}$, so dividing through by that determinant shows
that $\det M(R,C)|_{\Gamma[v,w_{0}]}$ has sign $(-1)^{\ell(v)}$.
∎
Combining this theorem with 2.10, we can now prove 3.1.
###### Proof of 3.1.
By 2.10,
$\operatorname{Imm}_{v}M(R,C)=(-1)^{\ell(v)}\det M(R,C)|_{\Gamma[v,w_{0}]}.$
Let $k^{\prime}\leq k$ be the size of the largest square in $\Gamma[v,w_{0}]$.
By 3.10, for $M$ $k^{\prime}$-positive, the right-hand side of this expression
is positive when $\Gamma[v,w_{0}]$ is $(R,C)$-admissible and zero otherwise. Any
$k$-positive matrix is also $k^{\prime}$-positive, so we are done. ∎
## 4\. Future Directions
The results in [5] and this paper were inspired by the following conjecture of
Pylyavskyy.
###### Conjecture 4.1 ( [15]).
Let $0<k<n$ be an integer and let $v\in S_{n}$ avoid the pattern
$12\cdots(k+1)$. Then $\operatorname{Imm}_{v}X$ is $k$-positive.
This conjecture remains open. The relation between pattern avoidance and
$k$-positivity of immanants is an interesting direction of further inquiry.
The results of this paper showcase an interesting phenomenon: the behavior of
the dual canonical basis element $\operatorname{Imm}_{v}X(R,C)$ on
$k$-positive matrices is the same as the behavior of the usual Kazhdan-Lusztig
immanant $\operatorname{Imm}_{v}X$. Based on this, we make the following
conjecture.
###### Conjecture 4.2.
Suppose $\operatorname{Imm}_{v}X$ is $k$-positive. Then as long as
$\operatorname{Imm}_{v}X(R,C)$ is not identically zero,
$\operatorname{Imm}_{v}X(R,C)$ is $k$-positive.
We also make a related conjecture based on the same phenomenon, which is
something of an intermediate conjecture; it would imply Conjecture 4.1 and
would be implied by Conjectures 4.1 and 4.2 together.
###### Conjecture 4.3.
Let $0<k<n\leq m$ be integers and let $v\in S_{n}$ avoid the pattern
$12\cdots(k+1)$. Let
$R,C\in\left(\!\!\binom{[m]}{n}\!\!\right)$.
If $\operatorname{Imm}_{v}X(R,C)$ is not identically zero, then
$\operatorname{Imm}_{v}X(R,C)$ is $k$-positive.
The compact determinantal formulas we give for certain dual canonical basis
elements may be useful to understand the relationship between the dual
canonical basis of ${\mathbb{C}}[SL_{m}]$ and its cluster algebra structure.
Technically, the cluster algebra in question is the coordinate ring of
$G^{w_{0},w_{0}}$, the open double Bruhat cell in $SL_{m}$;
${\mathbb{C}}[G^{w_{0},w_{0}}]$ differs from ${\mathbb{C}}[SL_{m}]$ by
localization at certain principal minors. The cluster monomials of
${\mathbb{C}}[G^{w_{0},w_{0}}]$ are expected to be dual canonical basis
elements. One natural question is: do the cluster monomials include the
functions $\operatorname{Imm}_{v}X(R,C)$, where $v$ avoids 2143 and 1324? If
so, can the $k$-positivity of these immanants be explained from a cluster
algebraic viewpoint?
Work related to these questions appeared in the manuscript [6]; the connection
to Kazhdan-Lusztig immanants is explained in [3, Section 3.3]. The results of
[6] show that $\operatorname{Imm}_{v}X(R,C)$ is a cluster variable for $v$
avoiding 123, 2143, 1432, and 3214. The immanants occurring in [6] have a
determinantal form given by 2.10; they further conjecture that all cluster
variables of ${\mathbb{C}}[G^{w_{0},w_{0}}]$ can be written as $\pm\det
X(R,C)|_{P}$ for some $P\subset[n^{2}]$. Conjecturally, the Kazhdan–Lusztig
immanants that can be written as $\pm\det X(R,C)|_{P}$ are exactly the
$\operatorname{Imm}_{v}X(R,C)$ where $v$ is $2143$- and $1324$-avoiding. This leads
to the following conjecture.
###### Conjecture 4.4.
Fix $m$ and let $G^{w_{0},w_{0}}$ denote the big open double Bruhat cell in
$SL_{m}$.
1. (1)
All cluster variables of ${\mathbb{C}}[G^{w_{0},w_{0}}]$ are of the form
$\operatorname{Imm}_{v}X(R,C)$ for some $v$ avoiding $2143$ and $1324$.
2. (2)
For $v\in S_{n}$ avoiding $2143$ and $1324$ and
$R,C\in\left.\mathchoice{\left(\kern-4.79996pt\binom{[m]}{n}\kern-4.79996pt\right)}{\big{(}\kern-3.00003pt\binom{\smash{[m]}}{\smash{n}}\kern-3.00003pt\big{)}}{\left(\kern-3.00003pt\binom{\smash{[m]}}{\smash{n}}\kern-3.00003pt\right)}{\left(\kern-3.00003pt\binom{\smash{[m]}}{\smash{n}}\kern-3.00003pt\right)}\right.$
with $\Gamma[v,w_{0}]$ $(R,C)$-admissible, $\operatorname{Imm}_{v}X(R,C)$ is a
cluster variable in ${\mathbb{C}}[G^{w_{0},w_{0}}]$ if it is irreducible and a
cluster monomial otherwise.
## 5\. Acknowledgements
We would like to thank Pavlo Pylyavskyy for suggesting this topic to us. This
material is based upon work supported by the National Science Foundation under
Grant No. DMS-1439786 while the first author was in residence at the Institute
for Computational and Experimental Research in Mathematics in Providence, RI,
during the Spring 2021 semester. The second author was supported by an NSF
Graduate Research Fellowship DGE-1752814.
## References
* [1] A. Björner and F. Brenti, Combinatorics of Coxeter groups, vol. 231 of Graduate Texts in Mathematics, Springer, New York, 2005. http://dx.doi.org/10.1007/3-540-27596-7.
* [2] A. Brosowsky, S. Chepuri, and A. Mason, Parametrizations of k-nonnegative matrices: Cluster algebras and k-positivity tests, Journal of Combinatorial Theory, Series A, 174 (2020), p. 105217.
* [3] S. Chepuri, Generalizations of total positivity, PhD Thesis, (2020).
* [4] S. Chepuri, N. Kulkarni, J. Suk, and E. Tang, Factorizations of $k$-nonnegative matrices, arXiv preprint arXiv:1710.10867, (2017).
* [5] S. Chepuri and M. Sherman-Bennett, 1324- and 2143-avoiding Kazhdan-Lusztig immanants and k-positivity, Canadian Journal of Mathematics, (2021), p. 1–33.
* [6] M. Chmutov, P. Jiradilok, and J. Stevens, Double rim hook cluster algebras, REU Report, (2015).
* [7] P. N. Choudhury, Characterizing total positivity: single vector tests via linear complementarity, sign non-reversal, and variation diminution, arXiv preprint arXiv:2103.05624, (2021).
* [8] P. N. Choudhury, M. R. Kannan, and A. Khare, Sign non-reversal property for totally non-negative and totally positive matrices, and testing total positivity of their interval hull, Bulletin of the London Mathematical Society, (2021).
* [9] J. Du, Canonical bases for irreducible representations of quantum ${\rm GL}_{n}$, Bull. London Math. Soc., 24 (1992), pp. 325–334.
* [10] S. Fomin and A. Zelevinsky, Totally nonnegative and oscillatory elements in semisimple groups, Proc. Amer. Math. Soc., 128 (2000), pp. 3749–3759. http://dx.doi.org/10.1090/S0002-9939-00-05487-3.
* [11] I. P. Goulden and D. M. Jackson, Immanants of combinatorial matrices, J. Algebra, 148 (1992), pp. 305–324. http://dx.doi.org/10.1016/0021-8693(92)90196-S.
* [12] C. Greene, Proof of a conjecture on immanants of the Jacobi-Trudi matrix, Linear Algebra Appl., 171 (1992), pp. 65–79. http://dx.doi.org/10.1016/0024-3795(92)90250-E.
* [13] M. Haiman, Hecke algebra characters and immanant conjectures, J. Amer. Math. Soc., 6 (1993), pp. 569–595. http://dx.doi.org/10.2307/2152777.
* [14] G. Lusztig, Total positivity in reductive groups, in Lie theory and geometry, vol. 123 of Progr. Math., Birkhäuser Boston, Boston, MA, 1994, pp. 531–568. http://dx.doi.org/10.1007/978-1-4612-0261-5_20.
* [15] P. Pylyavskyy, personal communication, November 2018.
* [16] B. Rhoades and M. Skandera, Kazhdan-Lusztig immanants and products of matrix minors, J. Algebra, 304 (2006), pp. 793–811. http://dx.doi.org/10.1016/j.jalgebra.2005.07.017.
* [17] J. Sjöstrand, Bruhat intervals as rooks on skew Ferrers boards, J. Combin. Theory Ser. A, 114 (2007), pp. 1182–1198. http://dx.doi.org/10.1016/j.jcta.2007.01.001.
* [18] M. Skandera, On the dual canonical and Kazhdan-Lusztig bases and 3412-, 4231-avoiding permutations, J. Pure Appl. Algebra, 212 (2008), pp. 1086–1104. http://dx.doi.org/10.1016/j.jpaa.2007.09.007.
* [19] J. R. Stembridge, Immanants of totally positive matrices are nonnegative, Bull. London Math. Soc., 23 (1991), pp. 422–428. http://dx.doi.org/10.1112/blms/23.5.422.
* [20] J. R. Stembridge, Some conjectures for immanants, Canad. J. Math., 44 (1992), pp. 1079–1099. http://dx.doi.org/10.4153/CJM-1992-066-1.
Current Address: Simbeyond B.V., Het Eeuwsel 57, AS Eindhoven 5612, Netherlands
# Surface termination dependence of electronic and optical properties in
Ti2CO2 MXene monolayers
Zafer Kandemir Department of Mechanical Engineering, Faculty of Engineering,
Eskisehir Technical University, 26555, Eskisehir, Turkey Engin Torun Fulvio
Paleari Istituto di Struttura della Materia and Division of Ultrafast
Processes in Materials (FLASHit) of the National Research Council, via Salaria
Km 29.3, I-00016 Monterotondo Stazione, Italy. Celal Yelgel Department of
Electricity and Energy, Recep Tayyip Erdogan University, 53100, Rize, Turkey
Cem Sevik Department of Mechanical Engineering, Faculty of Engineering,
Eskisehir Technical University, 26555, Eskisehir, Turkey
<EMAIL_ADDRESS>
###### Abstract
Two-dimensional (2D) MXenes are a rapidly growing family of 2D materials with
rich physical and chemical properties, in which surface termination plays an
essential role. Among the various 2D MXenes, functionalization of the TinCn-1
phase with oxygen (O) atoms makes them attractive for optoelectronic
applications due to their optical gap residing in the infrared or visible
region. In this manuscript, we theoretically investigate the electronic and
optical properties of four different O-atom-functionalized TinCn-1 MXene
monolayers using state-of-the-art, first-principles techniques. In particular,
we calculate the quasiparticle corrections on top of density functional theory
(DFT) at the GW level and the exciton-dominated optical spectra by solving the
Bethe-Salpeter equation (BSE) also at finite momentum. We find that all but
one of the monolayer models are indirect band gap semiconductors where
quasiparticle corrections are very important ($\sim 1$ eV). The optical
spectra are instead dominated by direct and indirect excitons with large
binding energies (between $0.5$ and $1$ eV). Most direct excitons lie above
$1.5$ eV, while the indirect ones are below: therefore, we conclude that
TinCn-1 should display strong absorption in the visible region, but phonon-
assisted emission in the infrared. Our work thus reveals the potential usage
of surface terminations to tune the optical and electronic properties of
TinCn-1 MXene monolayers, while emphasizing the pivotal role of many-body
effects beyond DFT to obtain accurate prediction for these systems.
## I Introduction
The family of 2D transition metal carbide, nitride, and carbonitride materials
– the so-called “MXenes” – with the chemical formula Mn+1XnTy ($n$=1, 2
or 3), where “M” is an early transition metal such as Sc, Ti, Zr, Hf, V, Nb,
Ta, Cr, Mo or W, “X” is either N or C, and “Ty” stands for surface
terminations such as O, OH, F or S, has been the object of great interest since
its first introduction by Gogotsi et al.[1] These layered materials are mostly
chemically synthesized through selective acid etching of the A elements from MAX
phases, [2, 3, 4, 5] where “A” is a group IIIA to VIA element. In addition,
chemical transformations and bottom-up construction techniques[6] such as
chemical vapor deposition have also been demonstrated for the successful
synthesis of some MXene crystals. To date, many kinds of MXene crystals have
been experimentally realized[6, 7, 8, 9, 10] and methods to control formation
on surface termination have been demonstrated as well. [11, 12] Admittedly,
large-scale single-layer Ti2CO2 crystals have not yet been reported together
with a proper characterization of a physical property such as the optical
response. However, given the enormous research effort and the rapid progress in
techniques for delaminating MXenes into single-layer flakes, such samples
appear to be within reach [13]. On the other hand, single-layer crystals with
randomly distributed functional groups such as OH, F, O and Cl are already
available in the literature [14].
Numerous different layered MXene systems arise by combining their chemical
versatility with their wide range of surface functionalizations, as
demonstrated in research studies revealing the enormous potential of MXenes in
various applications[8, 9, 15] such as power generation and storage, [6, 10,
16, 17, 18] gas, piezoresistive and strain sensors, [19, 20, 21, 22, 23]
chemical catalysis, [7] water purification, [24, 25, 26] plasmonics, [27, 28,
29] transparent conductors[30] and electromagnetic interference shielding.[31]
Recent studies have also suggested the occurrence of superconductivity and of
magnetic properties which might be the subject of future qubit and skyrmion-
based investigations. [32, 11, 33, 34]
Among the experimentally available MXenes, TinCn-1Ty is the most widely
investigated one due to its large surface area per unit weight and its being one
of the thinnest MXene phases. [35] First-principles calculations have shown
that pristine TinCn-1 is metallic. [3] However, after functionalization
with O, 2D Ti2C becomes semiconducting with a considerable band gap; [36]
the same holds for the Zr2CO2 and Hf2CO2 phases. [37]
The notable influence of the surface termination on the physical properties,
e.g., electronic, mechanical, ionic diffusion, and ionic absorption have
already been investigated for O-terminated Ti2C monolayers.[38, 39, 40, 41]
The effect of surface termination on the optical properties of these
materials, on the other hand, has not been systematically investigated and the
literature is rather sparse. For instance, the optical gap and the binding
energy of the corresponding first bright direct exciton have been reported,
using the GW and BSE formalisms, as 1.26 eV and 0.56 eV, respectively, for the
most chemically stable O-terminated Ti2C monolayer. [42] The absorbed photon flux
has been calculated as 1.76 mA cm-2, which is comparable with that of 1 nm thick
layers of Si (0.1 mA cm-2), GaAs (0.3 mA cm-2), and P3HT polymer (0.2 mA cm-2).
[42, 43] This consequently emphasizes the potential of using these monolayers
in photodetection and photovoltaic applications. On the other hand, most of
the semiconducting MXene structures have been determined to be indirect band gap
semiconductors, [36, 37] which means that it is important to include indirect
excitons in the computational studies to predict the optical properties and
application areas of the different MXene structures. Taking into consideration
all these facts, in this manuscript we present a thorough analysis based on
first-principles calculations on the ground and excited state properties of
the four different O terminated models of Ti2CO2 monolayers. We first present
the electronic structures of the monolayer models and show that all of them
but one are indirect band gap semiconductors at both DFT and GW levels.
Subsequently, we investigate the optical properties of these monolayers
including direct and indirect excitons by solving the BSE using quasiparticle
(QP) energies and DFT wave functions. We observe that the bound indirect
excitons are the lowest-lying ones in the spectra of the indirect
monolayer models, although their binding energies are on average lower than
those of the direct ones.
This manuscript is organized as follows: we first give the computational
details in Sec. II. Then in Sec. III we present and discuss the results
corresponding to the DFT, GW, and BSE calculations. Finally, we summarize our
main findings in Sec. IV.
## II Computational methods
The DFT ground state calculations were performed with Quantum ESPRESSO[44, 45]
using the local density approximation[46] (LDA) and norm-conserving
pseudopotentials. [47] The energy cutoff for the plane wave basis was set to
$90$ Ry and a $\Gamma$-centered 18$\times$18$\times$1 k-point mesh was used,
which guarantees a total energy convergence of $10^{-10}$ eV during the self-
consistent steps. The geometries were fully optimized until the Hellmann-
Feynman forces on each atom were less than $0.02$ eV/Å. A vacuum separation of
20 Å in the out-of-plane direction was ensured to eliminate spurious
interactions between monolayers and their periodically repeated images. Spin-
orbit coupling is not taken into account in the simulations presented in this
manuscript, as our tests revealed its effects to be negligible for the systems
under investigation.
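For orientation, the sketch below shows how such a ground-state calculation could be set up through the ASE interface to Quantum ESPRESSO. It is only a schematic illustration: the atomic positions are rough placeholders rather than the relaxed O-hTi geometry, and the pseudopotential file names are assumptions.

```python
# Schematic ASE setup mirroring the parameters above (90 Ry cutoff,
# Gamma-centered 18x18x1 k-mesh, ~20 A of vacuum). Requires a configured
# pw.x executable; positions and .upf names are placeholders.
from ase import Atoms
from ase.calculators.espresso import Espresso

a, d = 2.98, 4.35  # O-hTi lattice constant and thickness from Table 1 (Angstrom)
cell = [[a, 0, 0], [-a / 2, a * 3 ** 0.5 / 2, 0], [0, 0, d + 20.0]]
atoms = Atoms("Ti2CO2",
              scaled_positions=[(1 / 3, 2 / 3, 0.55), (2 / 3, 1 / 3, 0.45),  # Ti
                                (0.0, 0.0, 0.50),                            # C
                                (1 / 3, 2 / 3, 0.40), (2 / 3, 1 / 3, 0.60)], # O
              cell=cell, pbc=True)

atoms.calc = Espresso(
    pseudopotentials={"Ti": "Ti.upf", "C": "C.upf", "O": "O.upf"},
    input_data={"system": {"ecutwfc": 90}},
    kpts=(18, 18, 1))
print(atoms.get_potential_energy())  # triggers the self-consistent run
```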
The many-body perturbation theory (MBPT) calculations, performed on top of the
DFT results, were conducted with the YAMBO code.[48, 49] The G0W0[50]
corrections to the Kohn-Sham eigenvalues were computed with a plasmon-pole
approximation for the dynamical electronic screening. The direct and indirect
band gaps were converged with a 48 $\times$ 48 $\times$ 1 $k$-grid mesh, 217 k
points in the irreducible Brillouin zone (BZ), summing over $400$ states for
both the screening function and the Green’s function. The corrections were
computed for the top 3 valence bands and the bottom 3 conduction bands. The
BSE[bse-ref2] was then solved with RPA static screening, which was summed over
$400$ bands and in the Tamm-Dancoff approximation on top of the GW results.
The direct and indirect (i.e. finite-momentum) exciton energies and their wave
functions were obtained for the first 5500 excitonic states by using the
iterative scheme enabled by the SLEPC library.[51] The Coulomb cutoff (CC)
technique was used along the out-of-plane direction to eliminate the
interactions to all the undesired images of the systems in both G0W0 and BSE
steps.[52] Convergence tests for the parameters used in MBPT calculations are
provided in the supplementary material. (The input parameters
were individually converged by slowly increasing them until differences in
band gaps and exciton energies between two subsequent runs were below 0.1 eV.)
Figure 1: (Color online) Top and side views of the optimized crystal
structures of the O-terminated Ti2CO2 monolayers. A $3\times 3$ supercell is
shown for clarity. The blue, red and brown spheres represent titanium, oxygen
and carbon atoms, respectively. (a) O-hTi; O atoms in the hollow site between
Ti atoms and on top of the Ti atom in the opposite layer. (b) O-hC; O atoms
again in the hollow site, but this time both of them are on top of the C atom.
(c) O-Ti; both O atoms on Ti atoms. (d) O-hTiC; O atoms in the hollow sites,
but with one O atom above the C atom and the other O atom above the opposite
Ti atom.
## III Results and Discussion
Crystal structure of the four possible O-terminated hexagonal Ti2C monolayer
models are shown in Fig. 1(a-d). The corresponding binding energies of the O
atom are predicted via the following equation:
$E_{b}=\dfrac{1}{N}\big[E_{\mathrm{Ti}_{2}\mathrm{CO}_{2}}-E_{\mathrm{Ti}_{2}\mathrm{C}}-2E_{\mathrm{O}}\big]$
(1)
where $E_{\mathrm{Ti}_{2}\mathrm{CO}_{2}}$, $E_{\mathrm{Ti}_{2}\mathrm{C}}$,
and $E_{\mathrm{O}}$ are the total energies of Ti2CO2, Ti2C, and the isolated
O atom, respectively ($N$ being the total number of atoms in the unit cell).
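As a trivial illustration of Eq. (1), the arithmetic reads as follows in Python; the total energies here are placeholders, not values computed in this work, and $N=5$ for the Ti2CO2 unit cell (2 Ti, 1 C, 2 O).

```python
# Placeholder total energies in eV (illustrative only).
E_Ti2CO2 = -1234.56  # functionalized monolayer, per unit cell
E_Ti2C = -1180.00    # bare Ti2C monolayer, per unit cell
E_O = -25.00         # isolated O atom

N = 5  # atoms in the Ti2CO2 unit cell (2 Ti + 1 C + 2 O)
E_b = (E_Ti2CO2 - E_Ti2C - 2 * E_O) / N  # Eq. (1); more negative => more stable
print(f"E_b = {E_b:.2f} eV/atom")
```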
The calculated binding energies ($E_{b}$) along with lattice constants ($a$)
and thicknesses ($d$) of all the investigated Ti2CO2 monolayers are reported
in Table 1. The results are in good agreement with recent works based on DFT
calculations with different functionals.[42, 36, 24, 54] Here, more negative
binding energy indicates the more favorable exothermic binding of O atoms.
Therefore, the most and the least chemically stable structures are predicted
as O-hTi and O-Ti, respectively. We should note that these results are only to
compare the chemical stability of these structures in their pristine form and
any one of them could be stabilized by temperature, pressure, growth
conditions and substrate effects.
Table 1: Calculated parameters of the O-terminated Ti2CO2 monolayers: lattice constant ($a$), monolayer thickness ($d$), binding energy of the O atom ($E_{b}$), band gap energies from the LDA ($E_{gap}^{LDA}$) and GW ($E_{gap}^{GW}$) calculations, locations of the valence band maximum (VBM) and conduction band minimum (CBM) in the BZ, and type of the band gap. Note that the minimum direct band gaps of the indirect monolayers are reported in parentheses.

System | $a$ (Å) | $d$ (Å) | $E_{b}$ (eV/atom) | $E_{gap}^{LDA}$ (eV) | $E_{gap}^{GW}$ (eV) | VBM | CBM | Type
---|---|---|---|---|---|---|---|---
O-hTi | 2.98 | 4.35 | -4.97 | 0.28(0.61) | 1.29(1.86) | $\Gamma$ | $M$ | Indirect
O-hC | 2.90 | 4.75 | -4.59 | 0.36(0.61) | 1.19(1.42) | $K-\Gamma$ | $\Gamma-M$ | Indirect
O-Ti | 3.28 | 5.43 | -3.83 | 0.68 | 1.92 | $M$ | $M$ | Direct
O-hTiC | 2.95 | 4.51 | -4.79 | 0.74(1.30) | 1.74(2.46) | $\Gamma$ | $M$ | Indirect
Figure 2: (Color online) Band structures of Ti2CO2 monolayers: (a) O-hTi, (b)
O-hC, (c) O-Ti and (d) O-hTiC. The light blue dashed and red dotted lines
represent LDA and G0W0 band structures, respectively. The LDA band structures
are shifted to the GW band gap to compare the band dispersions. The black
dashed line indicates the Fermi level which is shifted to 0 eV.
### III.1 Electronic structure and quasiparticles
The pristine bulk Ti2C (non-terminated) is metallic with a high density of
states at the Fermi level.[55] However, it turns into a semiconductor when
terminated with O atoms. In order to address the DFT-LDA band gap
underestimation which leads to discrepancies between calculated and
experimental spectra, [42, 36] we performed GW calculations to access the QP
spectral properties. Fig.2 demonstrates the LDA and GW-corrected band
structures of the Ti2CO2 monolayers. It is important to note that LDA gaps are
shifted to the GW ones to compare the dispersion of the bands. It is observed
that band dispersions are very similar at both LDA and GW levels for the
valence bands but slightly different for the conduction bands particularly
along $K-\Gamma$ direction. At both LDA and GW level, O-hTi, O-hC, and O-hTiC
monolayers are found to be indirect but O-Ti a direct band gap semiconductor
with a GW (LDA) band gap of 1.29 (0.28), 1.19 (0.36), 1.74 (0.74) eV, and 1.92
(0.68), respectively as reported in Tab. 1 together with the other properties
of the monolayers. As can be seen that the band gap values are enormously
increased by the self-energy corrections and brings them much closer to the
low-energy side of the visible spectrum. In particular, the indirect QP band
gap of O-hTi is now 1.29 eV, more than four times its DFT value, while the
direct O-Ti gap is now 1.92 eV, increasing by almost three times. Our results
are in good agreement with the available results for O-hTi structure, 1.32[42]
and 1.15[36] eV. The indirect band gap of O-hTi, 0.28 eV at the LDA level,
also agrees well with the reported values, which lie in the range of 0.24 to
0.32 eV. [24, 56, 36, 42, 57, 58]
Figure 3: (Color online) Total and partial densities of states (DOS) of Ti2CO2
monolayers: (a) O-hTi, (b) O-hC, (c) O-Ti and (d) O-hTiC. The total DOS, and
partial DOSes of Ti-3d, C-2p and O-2p orbitals are shown in black, red short-
dashed, green long-dashed and blue dotted-dashed lines, respectively. Fermi
energy is shifted to 0 eV.
Although the monolayers have exactly the same atomic composition, the GW correction to the
LDA band gap varies, and the resulting QP gaps do not follow the same energy
ordering as in the DFT case due to the different screening environment for the
charge carrier interaction in each monolayer. The largest correction to the
band gap, 1.24 eV, is calculated for the O-Ti monolayer, which might be
expected as (i) the higher localisation of the isolated O orbitals leads to
stronger corrections, and (ii) O-Ti is the thinnest one among the considered
systems, therefore the electrons here are even more weakly screened compared
to the other cases, again leading to larger band gap openings.
Figure 4: (Color online) The imaginary part of the dielectric functions -
proportional to the absorption spectrum - of the O-terminated Ti2CO2 monolayer
models: (a) O-hTi, (b) O-hC, (c) O-Ti and (d) O-hTiC. The blue solid and blue
dashed lines represent the spectrum computed with GW+BSE and GW+IPA methods,
respectively. The solar flux of terrestrial (AM1.5g) spectra and visible-light
region is shown in the background[59]. The energy positions of the lowest-
lying finite-momentum excitons are shown with vertical dashed lines for
comparison. The exciton states are labelled as in the main text. Insets:
location in the BZ of the most relevant electron-hole contributions to the
labelled excitons.
Partial DOS analysis (Fig.3) reveals that the 3d orbitals of Ti and 2p
orbitals of C and O atoms partially contribute to the valence and conduction
bands of the Ti2CO2 monolayers. A rather large DOS around the Fermi level and the conduction
band minimum of the O-hC, O-Ti and O-hTiC monolayers can be noticed in the
figure. This ultimately leads to a high joint DOS, which is an indication of
the strong light absorption and emission properties of these monolayers. On
the other hand, the very dispersive valence bands of the O-hTi monolayer lead to
a drastically reduced DOS around the Fermi level. This suppresses the joint DOS and
hence the optical response of the monolayer.
In view of these striking results, it clearly becomes necessary to investigate
the effects of electron-hole interaction in order to accurately determine the
absorption and emission energy ranges of O-terminated layered Ti2CO2 systems.
### III.2 Optical properties and excitons
The imaginary part, $\varepsilon_{2}(\omega)$, of the frequency-dependent
dielectric function
$\varepsilon(\omega)=\varepsilon_{1}(\omega)+i\varepsilon_{2}(\omega)$,
which is proportional to the optical absorption spectrum, is defined in the
independent-particle approximation (IPA) as [60]
$\varepsilon_{2}(\omega)=\dfrac{8\pi^{2}e^{2}}{V}\sum_{\kappa}|d_{\kappa}|^{2}\delta(\omega-\Delta\epsilon_{\kappa})$
(2)
Here, $d_{\kappa}$ is the dipole matrix element and $\Delta\epsilon_{\kappa}$
is the transition energy of the electrons that absorb the incoming
electromagnetic field with frequency $\omega$. When the transition is allowed
by the symmetry, a peak appears in the absorption spectrum at the transition
energy with an intensity proportional to the transition probability.
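A minimal numerical sketch of Eq. (2) is given below (our illustration; prefactors are schematic, with $e^{2}$ set to unity and the delta function broadened into a Gaussian of width `eta`).

```python
import numpy as np

def eps2_ipa(omega, trans_e, dipoles, volume, eta=0.05):
    """Eq. (2): independent-particle epsilon_2 on a photon-energy grid omega.
    trans_e and dipoles hold the transition energies and dipole matrix
    elements d_kappa; the delta function is broadened into a Gaussian."""
    w = np.asarray(omega)[:, None]
    gauss = np.exp(-0.5 * ((w - trans_e) / eta) ** 2) / (eta * np.sqrt(2 * np.pi))
    return (8 * np.pi ** 2 / volume) * (np.abs(dipoles) ** 2 * gauss).sum(axis=1)
```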
This approach might provide reasonable absorption spectra for the materials
where the Coulomb interaction is highly screened and hence electron-hole
interaction has negligible effect on the optical response of the material.
However, for very thin materials, such as the O-terminated Ti2CO2 monolayers,
the screening of the Coulomb interaction is drastically reduced due to the
absence of screening in vacuum, which in turn enhances the electron-hole
interaction. Therefore, it is necessary to incorporate the electron-hole
interaction into $\varepsilon_{2}(\omega)$ (Eqn. 2) via MBPT as
$\varepsilon_{2}(\omega)=\dfrac{8\pi^{2}e^{2}}{V}\sum_{\lambda}|\sum_{\kappa}\bar{A}^{\kappa}_{\lambda}d_{\kappa}|^{2}\delta(\omega-
E_{\lambda})$ (3)
Here excitations are summed over excitons, $\lambda$, which are composed of
linear combination of single-particle transitions, $\kappa$, with weights,
$\bar{A}^{\kappa}_{\lambda}$, and energy, $E_{\lambda}$. All these excitonic
quantities can be computed by solving the BSE. It is important to note that
only “vertical” or “direct” ($q=k_{v}-k_{c}=0$) excitons are relevant for the
optical absorption. On the other hand, finite-momentum excitons are
particularly decisive for the emission profiles of indirect semiconductor
systems such as some of the Ti2CO2 monolayer models studied in this
manuscript. Therefore, in these systems we solve the BSE also at the finite
$q$ corresponding to their indirect band gaps (as reported in Tab. 1) in order
to gain insight on their optical emission features.
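In the same schematic spirit, going from Eq. (2) to Eq. (3) only requires mixing the dipoles with the BSE eigenvector weights before taking the modulus square; in the sketch below (ours), `A[lam, kap]` plays the role of $\bar{A}^{\kappa}_{\lambda}$.

```python
import numpy as np

def eps2_bse(omega, exc_e, A, dipoles, volume, eta=0.05):
    """Eq. (3): excitonic epsilon_2 from exciton energies exc_e and weights A."""
    osc = np.abs(A.conj() @ dipoles) ** 2  # |sum_kappa A_lam,kappa d_kappa|^2
    w = np.asarray(omega)[:, None]
    gauss = np.exp(-0.5 * ((w - exc_e) / eta) ** 2) / (eta * np.sqrt(2 * np.pi))
    return (8 * np.pi ** 2 / volume) * (osc * gauss).sum(axis=1)
```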
Table 2: Energies of direct (D) and indirect (ID) excitons of the O-terminated Ti2CO2 monolayers with their respective binding energies in parentheses. All values are in eV.

System | $\rm D_{1}$ | $\rm D_{2}$ | ${\rm ID}_{1}$ | ${\rm ID}_{2}$
---|---|---|---|---
O-hTi | 1.39 (0.51) | 2.45 (0.76) | 0.80 (0.49) | –
O-hC | 0.79 (0.98) | 1.46 (0.77) | 0.63 (0.56) | 0.67 (0.52)
O-Ti | 1.20 (0.84) | – | – | –
O-hTiC | 2.02 (0.73) | – | 1.15 (0.59) | –
The imaginary parts of the dielectric functions of the Ti2CO2 monolayer models,
calculated using the QP energies and LDA wave functions, are shown in Fig. 4.
Spectra of the monolayers on top of the LDA eigenvalues are
provided in the supplementary material of the manuscript for comparison. The
solid and dashed blue lines in each figure correspond to the spectra with
(GW+BSE) and without (GW+IPA) excitons, respectively. Note that the plotted
spectra correspond to “vertical” or “direct” ($q=k_{v}-k_{c}=0$) excitons; we
indicate the bound “indirect” excitons in the figures as dashed vertical lines
where relevant. It is known that the exciton is a collective excitation,
meaning that all electronic transitions in principle contribute to the
excitonic peak, albeit with a certain weight. For each monolayer, we show as an
inset the transitions that have the largest contribution to the specific
excitonic peak in the BZ. The binding energies of the excitons are reported
in Tab. 2 and calculated as the energy difference between the exciton energy
and the QP single-particle transition energy with the greatest weight and the same
momentum.
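In code, this binding-energy convention reads (our sketch):

```python
import numpy as np

def exciton_binding(E_exc, qp_trans_e, weights):
    """Binding energy: dominant same-momentum QP transition energy minus
    the exciton energy E_exc."""
    k = int(np.argmax(np.abs(weights) ** 2))  # transition with greatest weight
    return qp_trans_e[k] - E_exc
```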
#### III.2.1 O-hTi monolayer
O-hTi monolayer is an indirect band gap semiconductor with a band gap of 1.29
eV at the GW level (Tab. 1), where the VBM and CBM are at $\Gamma$ and $M$
points in the BZ, respectively (Fig. 2(a)). Two prominent excitonic peaks D1
and D2 at 1.39 and 2.45 eV can be identified in the optical spectrum (Fig.
4(a)); among these peaks, D1 has a binding energy of 0.51 eV, with
corresponding transitions around the $\Gamma$ point in the BZ as shown in the
inset. The transitions composing the D2 exciton, on the other
hand, reside along the $\Gamma-M$ direction in the BZ, as shown in the
inset. The change in the wave function character of the contributing
orbitals manifests itself in the binding energy of the D2 exciton, which is
calculated as 0.76 eV, rather larger than that of D1.
vertical excitons, we also indicate the energy, 0.80 eV, of the lowest energy
indirect exciton (ID1) in the figure as a grey vertical dashed line. It is
found that the ID1 exciton is the lowest energy exciton of the O-hTi monolayer
with a binding energy of 0.49 eV for which the largest weight transitions
reside between $\Gamma$ and $M$ points in the BZ. Upon comparison with the
solar flux of terrestrial spectra, it is expected that O-hTi monolayer has
strong absorption in the near infrared and visible (blue and green color) but
indirect emission in the deep infrared region.
#### III.2.2 O-hC monolayer
Similar to O-hTi, O-hC monolayer is also an indirect band gap semiconductor
with a band gap of 1.19 eV at the GW level (Tab. 1), where the VBM and CBM
reside in between K-$\Gamma$ and $\Gamma$-$M$ directions, respectively (Fig.
2(b)). We indicate the first direct excitonic peak in Fig. 4(b) as D1 at 0.79
eV, which originates from transitions between the parallel bands along the
$\Gamma$-$M$ direction and has a binding energy of 0.98 eV. After the D1 exciton
in the absorption spectrum, there is a rather flat absorption region with
several excitonic peaks originating from the same parallel bands along the
$\Gamma$-$M$ direction, up to the D2 exciton at 1.46 eV. The D2 peak has the largest
oscillator strength in the low-energy region, which coincides with the near-
infrared region; its binding energy is 0.77 eV. Our finite-q BSE simulations
showed that there are two indirect bound excitons, ID1 and ID2, at 0.63 and
0.67 eV with binding energies of 0.56 and 0.52 eV, respectively. Upon comparison
with the solar flux of terrestrial spectra, it is expected that O-hC monolayer
has strong absorption in the deep and near infrared but indirect emission in
the deep infrared regime.
#### III.2.3 O-Ti monolayer
O-Ti monolayer is the only model which has a direct band gap (1.92 eV at the
GW level) among the four Ti2CO2 MXene monolayer models studied in this
manuscript. The absorption spectrum shows one large peak, D1, at 1.20 eV, which
originates from direct transitions around the $M$ point in the BZ and has a
binding energy of 0.72 eV. Comparing with the solar flux of terrestrial
spectra reveals that O-Ti monolayer has strong absorption and emission in the
near infrared region. Being a direct band gap semiconductor with an optical
gap in the infrared region signifies the potential usage of O-Ti monolayer in
the infrared laser applications.
#### III.2.4 O-hTiC monolayer
O-hTiC monolayer is the other indirect Ti2CO2 MXene monolayer model with a
band gap of 1.74 eV at the GW level where the CBM and VBM are at $M$ and
$\Gamma$ points in the BZ, respectively. The absorption spectrum has one
prominent excitonic peak, D1, at 2.02 eV with a binding energy of 0.73 eV. Our
finite-q BSE simulations revealed the existence of an indirect exciton, ID1,
at 1.15 eV, which has a binding energy of 0.59 eV. Comparison with the solar
flux of terrestrial spectra reveals that O-hTiC monolayer has a strong
absorption in the visible region (orange-yellow) and indirect emission in the
infrared region.
## IV Conclusions
In this manuscript, we present a state-of-the-art first principles analysis,
based on DFT, GW and BSE formalisms, on the electronic and optical properties
of four O-terminated Ti2CO2 MXene monolayers. We show that the electronic band
gaps of the monolayer models increase enormously upon inclusion of the GW
correction compared to the LDA values. Using the GW-corrected QP energies and
DFT wave functions, we then solve the BSE in order to investigate the direct
and indirect excitons in these monolayers. We show that the absorption spectra
of the monolayer models are drastically redshifted upon inclusion of the electron-
hole interaction due to the large binding energies of the excitons. We also
observe that the binding energies of the indirect excitons are in general lower
than those of the direct ones; however, they are still the lowest-lying excitons in the
absorption spectra of the indirect band gap monolayer models. We find that,
despite some of them being strong absorbers in the visible region, all of them are
likely infrared emitters, which opens the possibility of their usage in
infrared laser and medical applications. Our findings in this manuscript
emphasize the possible usage of surface termination to tune the optical and
electronic properties of O-terminated monolayer models as well as the
importance of including many-body effects for the accurate prediction
of the electronic and optical properties of 2D MXenes in general.
###### Acknowledgements.
This material is based upon work supported by the Air Force Office of
Scientific Research under award number FA9550-19-1-7048. Computing resources
used in this work were provided by the National Center for High Performance
Computing of Turkey (UHeM) under grant number 1007502020. The numerical
calculations reported in this paper were partially performed at TUBITAK
ULAKBIM, High Performance and Grid Computing Center (TRUBA resources). CY
acknowledges the computational support provided by the Computational Shared
Facility at The University of Manchester. FP acknowledges the funding received
from the European Union project MaX Materials design at the eXascale
H2020-INFRAEDI-2018-2020/H2020-INFRAEDI-2018-1, Grant agreement n. 824143.
## References
* Naguib _et al._ [2011] M. Naguib, M. Kurtoglu, V. Presser, J. Lu, J. Niu, M. Heon, L. Hultman, Y. Gogotsi, and M. W. Barsoum, Advanced Materials 23, 4248 (2011).
* Gogotsi and Anasori [2019] Y. Gogotsi and B. Anasori, ACS Nano 13, 8491 (2019), pMID: 31454866, https://doi.org/10.1021/acsnano.9b06394 .
* Champagne and Charlier [2020] A. Champagne and J.-C. Charlier, Journal of Physics: Materials 3, 032006 (2020).
* Naguib _et al._ [2014] M. Naguib, V. N. Mochalin, M. W. Barsoum, and Y. Gogotsi, Advanced Materials 26, 992 (2014).
* Venkateshalu and Grace [2020] S. Venkateshalu and A. N. Grace, Applied Materials Today 18, 100509 (2020).
* Anasori _et al._ [2017] B. Anasori, M. R. Lukatskaya, and Y. Gogotsi, Nature Reviews Materials 2, 16098 (2017).
* Nguyen _et al._ [2020] T. P. Nguyen, D. M. Tuan Nguyen, D. L. Tran, H. K. Le, D.-V. N. Vo, S. S. Lam, R. S. Varma, M. Shokouhimehr, C. C. Nguyen, and Q. V. Le, Molecular Catalysis 486, 110850 (2020).
* Wang _et al._ [2020a] Y. Wang, Y. Xu, M. Hu, H. Ling, and X. Zhu, Nanophotonics 9, 1601 (2020a).
* Lei _et al._ [2015] J.-C. Lei, X. Zhang, and Z. Zhou, Frontiers of Physics 10, 276 (2015).
* Wang _et al._ [2020b] Y. Wang, M. Zhou, L.-C. Xu, W. Zhao, R. Li, Z. Yang, R. Liu, and X. Li, Journal of Power Sources 451, 227791 (2020b).
* Kamysbayev _et al._ [2020] V. Kamysbayev, A. S. Filatov, H. Hu, X. Rui, F. Lagunas, D. Wang, R. F. Klie, and D. V. Talapin, Science 369, 979 (2020).
* Hart _et al._ [2019] J. L. Hart, K. Hantanasirisakul, A. C. Lang, B. Anasori, D. Pinto, Y. Pivak, J. T. van Omme, S. J. May, Y. Gogotsi, and M. L. Taheri, Nature communications 10, 1 (2019).
* Anasori and Gogotsi [2019] B. Anasori and Y. Gogotsi, _2D Metal Carbides and Nitrides (MXenes)_ (Springer,Cham, Switzerland, 2019).
* Yang _et al._ [2017] Y. Yang, S. Umrao, S. Lai, and S. Lee, The Journal of Physical Chemistry Letters 8, 859 (2017), pMID: 28157319.
* VahidMohammadi _et al._ [2021] A. VahidMohammadi, J. Rosen, and Y. Gogotsi, Science 372 (2021), 10.1126/science.abf1581.
* Sevik and Çakır [2019] C. Sevik and D. Çakır, Phys. Rev. Applied 12, 014001 (2019).
* Demiroğlu _et al._ [2019] I. Demiroğlu, F. M. Peeters, O. Gülseren, D. Çakır, and C. Sevik, The Journal of Physical Chemistry Letters 10, 727 (2019).
* Khazaei _et al._ [2014a] M. Khazaei, M. Arai, T. Sasaki, M. Estili, and Y. Sakka, Phys. Chem. Chem. Phys. 16, 7841 (2014a).
* Lee _et al._ [2020] S. H. Lee, W. Eom, H. Shin, R. B. Ambade, J. H. Bang, H. W. Kim, and T. H. Han, ACS Applied Materials & Interfaces 12, 10434 (2020).
* Ma _et al._ [2017] Y. Ma, N. Liu, L. Li, X. Hu, Z. Zou, J. Wang, S. Luo, and Y. Gao, Nature Communications 8, 1207 (2017).
* Cai _et al._ [2018] Y. Cai, J. Shen, G. Ge, Y. Zhang, W. Jin, W. Huang, J. Shao, J. Yang, and X. Dong, ACS Nano 12, 56 (2018).
* Zhang _et al._ [2018] Y.-Z. Zhang, K. H. Lee, D. H. Anjum, R. Sougrat, Q. Jiang, H. Kim, and H. N. Alshareef, Science Advances 4 (2018), 10.1126/sciadv.aat0098.
* Khazaei _et al._ [2018] M. Khazaei, V. Wang, C. Sevik, A. Ranjbar, M. Arai, and S. Yunoki, Phys. Rev. Materials 2, 074002 (2018).
* Zhang _et al._ [2016] H. Zhang, G. Yang, X. Zuo, H. Tang, Q. Yang, and G. Li, J. Mater. Chem. A 4, 12913 (2016).
* Srimuk _et al._ [2018] P. Srimuk, J. Halim, J. Lee, Q. Tao, J. Rosen, and V. Presser, ACS Sustainable Chemistry & Engineering 6, 3739 (2018).
* Ren _et al._ [2015] C. E. Ren, K. B. Hatzell, M. Alhabeb, Z. Ling, K. A. Mahmoud, and Y. Gogotsi, The Journal of Physical Chemistry Letters 6, 4026 (2015).
* Jaksic _et al._ [2020] Z. Jaksic, M. Obradov, T. Dragan, O. Jaksic, and D. Vasiljevic-Radovic, Optical and Quantum Electronics 52 (2020), 10.1007/s11082-020-2227-8.
* Sarycheva _et al._ [2017] A. Sarycheva, T. Makaryan, K. Maleski, E. Satheeshkumar, A. Melikyan, H. Minassian, M. Yoshimura, and Y. Gogotsi, The Journal of Physical Chemistry C 121, 19983 (2017).
* Chaudhuri _et al._ [2018] K. Chaudhuri, M. Alhabeb, Z. Wang, V. M. Shalaev, Y. Gogotsi, and A. Boltasseva, ACS Photonics 5, 1115 (2018).
* Hantanasirisakul _et al._ [2016] K. Hantanasirisakul, M.-Q. Zhao, P. Urbankowski, J. Halim, B. Anasori, S. Kota, C. E. Ren, M. W. Barsoum, and Y. Gogotsi, Advanced Electronic Materials 2, 1600050 (2016).
* Shahzad _et al._ [2016] F. Shahzad, M. Alhabeb, C. B. Hatter, B. Anasori, S. Man Hong, C. M. Koo, and Y. Gogotsi, Science 353, 1137 (2016).
* Kumar _et al._ [2017] H. Kumar, N. C. Frey, L. Dong, B. Anasori, Y. Gogotsi, and V. B. Shenoy, ACS Nano 11, 7648 (2017).
* Xu _et al._ [2015] C. Xu, L. Wang, Z. Liu, L. Chen, J. Guo, N. Kang, X.-L. Ma, H.-M. Cheng, and W. Ren, Nature Materials 14, 1135 (2015).
* Bekaert _et al._ [2020] J. Bekaert, C. Sevik, and M. V. Milošević, Nanoscale 12, 17354 (2020).
* Li _et al._ [2021] X. Li, F. Ran, F. Yang, J. Long, and L. Shao, Transactions of Tianjin University 27 (2021), https://doi.org/10.1007/s12209-021-00282-y.
* Zhang _et al._ [2019] Y. Zhang, W. Xia, Y. Wu, and P. Zhang, Nanoscale 11, 3993 (2019).
* Khan _et al._ [2017] S. A. Khan, B. Amin, L.-Y. Gan, and I. Ahmad, Phys. Chem. Chem. Phys. 19, 14738 (2017).
* Zha _et al._ [2015a] X.-H. Zha, K. Luo, Q. Li, Q. Huang, J. He, X. Wen, and S. Du, EPL (Europhysics Letters) 111, 26007 (2015a).
* Khazaei _et al._ [2013] M. Khazaei, M. Arai, T. Sasaki, C.-Y. Chung, N. S. Venkataramanan, M. Estili, Y. Sakka, and Y. Kawazoe, Advanced Functional Materials 23, 2185 (2013).
* Khazaei _et al._ [2014b] M. Khazaei, M. Arai, T. Sasaki, M. Estili, and Y. Sakka, Phys. Chem. Chem. Phys. 16, 7841 (2014b).
* Xie _et al._ [2014] Y. Xie, M. Naguib, V. N. Mochalin, M. W. Barsoum, Y. Gogotsi, X. Yu, K.-W. Nam, X.-Q. Yang, A. I. Kolesnikov, and P. R. C. Kent, Journal of the American Chemical Society 136, 6385 (2014).
* Ding _et al._ [2020] Y.-m. Ding, X. Nie, H. Dong, N. Rujisamphan, and Y. Li, Nanoscale Adv. 2, 2471 (2020).
* Bernardi _et al._ [2013] M. Bernardi, M. Palummo, and J. C. Grossman, Nano Letters 13, 3664 (2013).
* Giannozzi _et al._ [2009] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, _et al._ , Journal of Physics: Condensed Matter 21, 395502 (2009).
* Giannozzi _et al._ [2017] P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, _et al._ , Journal of Physics: Condensed Matter 29, 465901 (2017).
* Perdew and Zunger [1981] J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981).
* van Setten _et al._ [2018] M. van Setten, M. Giantomassi, E. Bousquet, M. Verstraete, D. Hamann, X. Gonze, and G.-M. Rignanese, Computer Physics Communications 226, 39 (2018).
* Marini _et al._ [2009] A. Marini, C. Hogan, M. Grüning, and D. Varsano, Computer Physics Communications 180, 1392 (2009).
* Sangalli _et al._ [2019] D. Sangalli, A. Ferretti, H. Miranda, C. Attaccalite, I. Marri, E. Cannuccia, P. Melo, M. Marsili, F. Paleari, A. Marrazzo, _et al._ , Journal of Physics: Condensed Matter 31, 325902 (2019).
* Hybertsen and Louie [1986] M. S. Hybertsen and S. G. Louie, Phys. Rev. B 34, 5390 (1986).
* Hernandez _et al._ [2005] V. Hernandez, J. E. Roman, and V. Vidal, ACM Trans. Math. Softw. 31, 351–362 (2005).
* Ismail-Beigi [2006] S. Ismail-Beigi, Phys. Rev. B 73, 233103 (2006).
* Note [1] The input parameters were individually converged by slowly increasing them until differences in band gaps and exciton energies between two subsequent runs were below 0.1 eV.
* Bai _et al._ [2016] Y. Bai, K. Zhou, N. Srikanth, J. H. L. Pang, X. He, and R. Wang, RSC Adv. 6, 35731 (2016).
* Wang _et al._ [2014] S. Wang, J.-X. Li, Y.-L. Du, and C. Cui, Computational Materials Science 83, 290 (2014).
* Xie and Kent [2013] Y. Xie and P. R. C. Kent, Phys. Rev. B 87, 235441 (2013).
* Xiao-Hong _et al._ [2019] L. Xiao-Hong, S. Xiang-Ying, and Z. Rui-Zhou, RSC Adv. 9, 27646 (2019).
* Zha _et al._ [2015b] X.-H. Zha, K. Luo, Q. Li, Q. Huang, J. He, X. Wen, and S. Du, EPL (Europhysics Letters) 111, 26007 (2015b).
* Gueymard [2004] C. A. Gueymard, Solar Energy 76, 423 (2004).
* Paleari [2019] F. Paleari, _First-principles approaches to the description of indirect absorption and luminescence spectroscopy: exciton-phonon coupling in hexagonal boron nitride_ , Ph.D. thesis, University of Luxembourg (2019).
|
# The structure of heavily doped impurity band in crystalline host
Hongwei Chen Department of Physics, Northeastern University, Boston,
Massachusetts 02115 Stanford Institute for Materials and Energy Sciences,
Stanford University, Stanford, CA 94305 Linac Coherent Light Source, SLAC
National Accelerator Laboratory, Menlo Park, CA 94720 Zi-Xiang Hu
<EMAIL_ADDRESS>Department of Physics and Chongqing Key Laboratory for
Strongly Coupled Physics, Chongqing University, Chongqing 401331, People’s
Republic of China
###### Abstract
We study the properties of the impurity band in heavily-doped non-magnetic
semiconductors using the Jacobi-Davidson algorithm and a supervised deep
learning method. The disorder-averaged inverse participation ratio (IPR) and
Thouless number calculations reveal a rich structure inside the impurity
band. A Convolutional Neural Network (CNN) model, trained to
distinguish the extended and localized phases of the Anderson model with high
accuracy, yields results in good agreement with the conventional
approach. Together, we find that there are three mobility edges in the
impurity band for a specific on-site impurity potential, which implies the
presence of extended states as the impurity band is filled.
###### pacs:
71.23.-k, 71.55.-i, 02.60.Cb
## I Introduction
The effect of disorder has been extensively studied since Anderson’s seminal
paper Anderson (1958). Diluted magnetic semiconductors (DMS) doped with a small
concentration of charged impurities constitute an interesting magnetic system
that has a number of novel features for study by numerical simulation Avérous
and Balkanski (1991). Much of the research has focused on II-VI (such as
CdTe or ZnSe) and III-V (such as GaAs) compound semiconductors doped with a
low concentration ($x\sim 1-8\%$) of Manganese (Mn) impurities. Of particular
interest in this field is Ga$_{1-x}$Mn$_{x}$As, which has been shown to exhibit
ferromagnetic behavior above 100 K Ohno (1998). In these samples, the Manganese
substitutes for the Gallium and acts as an acceptor (donating one hole
to the crystal), so that the material is p-type. The holes bind to the
impurities with an energy of around 130 meV at $x\sim 10\%$ Beschoten et
al. (1999). Since $x\ll 1$, the overlap between different impurity states can
be ignored, and thus the interaction between the charge carriers can be neglected.
The system can be simply described by a noninteracting tight-binding model.
When the system contains only one impurity and the binding energy is large
enough, an impurity state appears below the conduction band (we assume the
impurity potential is attractive). It is locally distributed in space near the
impurity potential within a localization length $\zeta$. As the concentration
$x$ increases, the overlap between different impurity states broadens the
single impurity level into an impurity band in the density of states (DOS), which
eventually merges with the conduction band. Simultaneously, the states in the
impurity band are expected to become more and more extended and ultimately
regain their bandlike character Cook and Berciu (2012). However, the details
inside the impurity band are rarely studied.
One reason for the lack of such studies is the computational difficulty even in the
non-interacting case. Generally, the states in the impurity band make up about
$10\%$ of the total number of states at the concentrations we are
interested in. Taking a 3-dimensional Anderson model with lattice size
$30\times 30\times 30$ as an example, the number of states which we need to
know in the impurity band is about 3000. Exact diagonalization Weiße and
Fehske (2008) of such a system is very difficult due to the large matrix dimension.
On the other hand, we have to average over a large number of samples. Sparse
matrix diagonalization, such as the Lanczos method OJALVO and NEWMAN (1970),
can be adapted to obtain a few lowest-lying states or a few states near a
specific energy (the simplest way is to diagonalize $(H-\epsilon I)^{2}$
using the original Lanczos diagonalization method).
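As a concrete illustration of this strategy, the following minimal sketch (in Python with SciPy; all parameter values are illustrative) builds the doped tight-binding Hamiltonian of Eq. (1) below as a sparse matrix and uses shift-invert Lanczos iterations, which play the same role as diagonalizing $(H-\epsilon I)^{2}$, to extract a handful of eigenpairs near a reference energy inside the impurity band:
```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def doped_hamiltonian(L, x=0.05, W=-4.5, t=1.0, rng=None):
    """3D tight-binding Hamiltonian with a fraction x of impurity
    sites carrying on-site energy W (open boundaries for brevity)."""
    rng = np.random.default_rng(rng)
    hop = sp.diags([-t, -t], [-1, 1], shape=(L, L))
    I = sp.identity(L)
    H = (sp.kron(sp.kron(hop, I), I)
         + sp.kron(sp.kron(I, hop), I)
         + sp.kron(sp.kron(I, I), hop))
    onsite = np.where(rng.random(L**3) < x, W, 0.0)
    return (H + sp.diags(onsite)).tocsr()

H = doped_hamiltonian(20)
# Shift-invert Lanczos: the ~15 eigenpairs closest to the reference energy
vals, vecs = eigsh(H, k=15, sigma=-6.5, which='LM')
```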
Machine learning methods have recently emerged as a valuable tool for studying
quantum many-body physics problems Carleo and Troyer (2017); Carrasquilla and
Melko (2017); Ch’Ng et al. (2017); Van Nieuwenburg et al. (2017); Venderley et
al. (2018); Wetzel (2017); Rodriguez-Nieva and Scheurer (2019); Lidiak and
Gong (2020); Hsu et al. (2018); Hendry et al. (2021); Choo et al. (2019); Pfau
et al. (2020); Sharir et al. (2020); Hendry et al. (2022); Chen et al. (2022).
Their ability to process high-dimensional data and recognize complex patterns
has been utilized to determine phase diagrams and phase transitions Wang
(2016); Ohtsuki and Ohtsuki (2017); Tanaka and Tomiya (2017); Mano and Ohtsuki
(2017); Broecker et al. (2017); Schindler et al. (2017); Li et al. (2019);
Dong et al. (2019); Kotthoff et al. (2021); Zhang et al. (2019a, b); Käming et
al. (2021). In particular, the Convolutional Neural Network (CNN) Krizhevsky et al.
(2012) model, which was initially designed for image recognition, has been widely
used to study different kinds of phase transition problems, including the Bose-
Hubbard model Bohrdt et al. (2021), the spin-1/2 Heisenberg model Théveniaut and
Alet (2019), the quantum transverse-field Ising model Zhang et al. (2019a), etc.
The power of using machine learning to recognize quantum states lies in its
ability to perform such tasks without knowledge of the physics background or the
Hamiltonian of the system. Even if the neural network is trained in a small
energy region of the system, it can be used to obtain the whole phase
diagram Ohtsuki and Ohtsuki (2017); Mano and Ohtsuki (2017). It can also
discriminate quantum states with high accuracy even when trained on a
totally different Hamiltonian. This special feature of machine learning
inspires us to try to identify the delocalized states in the “impurity band”.
In this paper, we develop a method to obtain the correct density of states
(DOS) and other localization properties, such as the inverse participation ratio
(IPR) Brndiar and Markoš (2006) and the Thouless number Edwards and Thouless (1972),
by using Jacobi-Davidson sparse matrix diagonalization Bollhöfer and Notay
(2007) combined with an importance-sampling statistical method. Meanwhile, we train a
3-dimensional CNN model using data generated from the Anderson model, and
the trained model is then used to identify the existence of extended states in
the impurity band. This manuscript is organized as follows: in Sec. II we
describe the tight-binding model on the cubic lattice and the numerical methods;
Sec. III demonstrates the effect of heavy doping by studying the IPR
and the Thouless number; Sec. IV demonstrates the implementation of the deep
learning approach and the results from the trained neural network model;
finally, we close with a conclusion.
## II Model and Methods
We consider a tight-binding model on a D-dimensional hypercubic lattice with
the nearest neighbor hopping t, and on-site energies $\epsilon_{i}$:
$H=-t\sum_{\langle
i,j\rangle}(\hat{c}^{\dagger}_{i}\hat{c}_{j}+h.c.)+\sum_{i}\epsilon_{i}\hat{c}^{\dagger}_{i}\hat{c}_{i}$
(1)
The hopping term describes the itinerant electrons, and the on-site energy has
a bimodal distribution: $\epsilon_{i}=W$ with probability $x$, and
$\epsilon_{i}=0$ with probability ($1-x$). This models a host lattice with a
single relevant band and a fraction $x$ of substitutional impurities. For
one-dimensional ($d=1$) free electrons, the energy-momentum dispersion
relation is $E(k)=-2t\cos(k)$, and the DOS follows from the formula
$\rho(E)=(\frac{1}{2\pi})^{d}\int\frac{dS}{\nabla_{k}E}.$ (2)
The result for 1D is:
$\rho_{1d}(E)=\frac{1}{\pi\sqrt{4t^{2}-E^{2}}}.$ (3)
Figure 1: The density of states for free electrons in the tight-binding model
in one, two, and three dimensions. Here $t$ has been set to unity.
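As a quick sanity check of Eq. (3), the following hedged sketch (Python/NumPy; the chain length and bin count are arbitrary choices) compares a histogram of exact-diagonalization eigenvalues for a clean periodic chain with the analytic expression:
```python
import numpy as np

# Clean 1d chain with periodic boundaries, t = 1
L = 2000
H = np.diag(-np.ones(L - 1), 1) + np.diag(-np.ones(L - 1), -1)
H[0, -1] = H[-1, 0] = -1.0
E = np.linalg.eigvalsh(H)

hist, edges = np.histogram(E, bins=80, range=(-2.0, 2.0), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
analytic = 1.0 / (np.pi * np.sqrt(4.0 - centers**2))
# Agreement is good away from the inverse-square-root van Hove singularities
print(np.max(np.abs(hist - analytic)[5:-5]))
```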
There is no closed-form solution for higher-dimensional systems; however, an
approximation accurate to roughly $2\%$ was given by Andres et al. (1981).
Alternatively, the DOS can be calculated numerically by exact
diagonalization, as shown in Fig. 1, where $t$ has been set to unity. After
introducing the impurities, all states become localized in 1D and 2D according to
the scaling theory of localization Abrahams et al. (1979). In three dimensions,
part of the states become localized and develop into an impurity band at the edge
of the conduction band. To determine whether a state is localized or extended,
namely the location of the mobility edge, we calculate the inverse participation
ratio (IPR) Brndiar and Markoš (2006)
$\text{IPR}=\frac{\sum_{i}|\psi_{i}|^{4}}{(\sum_{i}|\psi_{i}|^{2})^{2}}$ (4)
for each state, where $\psi_{i}$ is the amplitude of an eigen wave function
on the $i$’th site. Heuristically, we can compare two trivial states of an
$N$-site system:
$\Psi_{extended}:\quad\psi_{i}=\frac{1}{\sqrt{N}}\;\;\text{for all }i,$ (5)
and
$\Psi_{localized}(j):\quad\psi_{i}=\delta_{ij},$ (6)
where $\Psi_{extended}$ is an extended state which has equal weight on each
site and $\Psi_{localized}(j)$ is a localized state which only has weight on
the $j$’th site. It is easy to see that the IPR of $\Psi_{extended}$ decreases
as $\frac{1}{N}$, while the IPR of $\Psi_{localized}(j)$ is a constant equal to
one. On the other hand, the Thouless number Edwards and Thouless (1972) is defined as:
$g(E)=\frac{\langle|\delta E|\rangle}{\langle\Delta E\rangle},$ (7)
where $\delta E$ is the shift of an energy level when the boundary condition changes
from a periodic boundary condition (PBC) to an anti-periodic boundary condition
(APBC), and $\Delta E$ is the average level spacing around $E$. Since
only the extended states are sensitive to a change of boundary condition,
$g(E)$ grows linearly as a function of the system size for extended states,
and, conversely, it decreases for localized states. In this work, we determine
the localization properties by systematically studying the IPR and Thouless
number for different system sizes, and the crossover points of the Thouless
number give us a hint of the mobility edge.
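The sketch below (Python/NumPy) illustrates how both diagnostics can be assembled from exact-diagonalization output. For brevity it uses a 1d chain, where all states are in fact localized, so it only demonstrates the estimators themselves; levels are matched between PBC and APBC by their sorted index, a common simplification:
```python
import numpy as np

def chain(L, x=0.05, W=-4.5, antiperiodic=False, seed=0):
    """1d analogue of Eq. (1); the wrap-around hopping flips sign for APBC."""
    rng = np.random.default_rng(seed)
    H = np.diag(-np.ones(L - 1), 1) + np.diag(-np.ones(L - 1), -1)
    H[0, -1] = H[-1, 0] = 1.0 if antiperiodic else -1.0
    return H + np.diag(np.where(rng.random(L) < x, W, 0.0))

E_pbc, psi = np.linalg.eigh(chain(400))
E_apbc = np.linalg.eigvalsh(chain(400, antiperiodic=True))

ipr = np.sum(np.abs(psi) ** 4, axis=0)     # Eq. (4); eigenstates are the columns
shift = np.abs(E_pbc - E_apbc)             # delta E: level shift under PBC -> APBC
spacing = np.gradient(E_pbc)               # local mean level spacing around E
g = shift / spacing                        # Thouless number, Eq. (7)
```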
Figure 2: The evolution of the DOS at the band edge with different doping
strengths. The system has size $19\times 20\times 21$ and can be fully
diagonalized.
For a three-dimensional cubic lattice of size $L$, the Hamiltonian matrix has a
dimension of $L^{3}$. General full exact diagonalization methods, such as those in
the Lapack library Anderson et al. (1999), can only deal with small system sizes,
since the computation time for diagonalizing one matrix of size $L^{3}$ grows
dramatically as a function of the system size. As shown in Fig. 2, for a system
of size $19\times 20\times 21$ with doping concentration $x=5\%$, after averaging
over thousands of samples we obtain the DOS for different doping energies. A peak
emerges gradually near the band edge as the doping strength $|W|$ increases. This
peak becomes more prominent around $W\sim-4.5$, at which point an obvious
depletion develops at the junction between the impurity band and the conduction band.
The developed impurity band is the part we focus on, and the number of states in
it is only about the lowest $10\%$ of the whole band, so we do not have to fully
diagonalize the Hamiltonian. Instead, we just need to calculate the DOS, IPR, and
Thouless number for these lowest $10\%$ of states after averaging over thousands of
samples. To this end, we use sparse matrix diagonalization
with the Jacobi-Davidson (JADA) method Bollhöfer and Notay (2007), which can find
a few (10 to 20) states efficiently near specified reference points. For a given
sample at fixed doping strength, we randomly distribute the reference points (30-50
points) in the impurity band. Taking $W=-4.5$ as an example, the reference
points are picked randomly in the region $[-8,-4]$, and about 10-20 states are
obtained by JADA around each reference point. The reference points could also
be picked by importance sampling based on the DOS of a small system from
full diagonalization. We collect all these energies for each reference point
in one sample. After averaging over thousands of samples, we obtain the same DOS as
that from the full diagonalization of a small system. The JADA method can thus
easily go beyond the limit of full exact diagonalization: at the same
computational cost, we can nearly double the system size compared to the Lapack
method. In this work, we calculate the
properties for system sizes up to $40^{3}$ sites by using the JADA method.
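A minimal sketch of this sampling procedure is given below (Python/SciPy), reusing the `doped_hamiltonian` constructor from the earlier sketch; the sample count, number of reference points, and rounding tolerance are illustrative choices. Windows around nearby reference points overlap, so duplicate eigenvalues are removed within each sample before accumulating the DOS:
```python
import numpy as np
from scipy.sparse.linalg import eigsh

all_energies = []
for sample in range(2000):                          # thousands of disorder realizations
    H = doped_hamiltonian(30, x=0.05, W=-4.5, rng=sample)
    rng = np.random.default_rng(10_000 + sample)
    found = []
    for e_ref in rng.uniform(-8.0, -4.0, size=40):  # 30-50 random reference points
        vals, _ = eigsh(H, k=15, sigma=e_ref, which='LM')
        found.extend(vals)
    all_energies.extend(np.unique(np.round(found, 8)))

dos, edges = np.histogram(all_energies, bins=120, range=(-8.0, -4.0))
```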
## III The effect of heavily doping
Figure 3: The DOS and IPR for $5\%$ doped system with $W=-4.5$. The results
are obtained from exact diagonalization. The number of configurations ranges
from 1000 for system $14\times 15\times 16$ to 50 for $29\times 30\times 31$.
The DOS is almost system-size independent. The IPR drops in the center of the
impurity band.
As analyzed in the previous section, with a typical doping concentration of
$x=5\%$, we find that a clear impurity band in the DOS develops at about
$W=-4.5$. We plot the DOS and IPR together for different system sizes in
Fig. 3. The DOS curves for different system sizes collapse onto a single
curve, identical to the ED result in Fig. 2, which tells us
that we have captured the essential information of the impurity band.
As the system size increases, the IPR does not change at the edge of the band,
which means the states at the edge of the whole band are localized. The IPR in
the bulk decreases as the system is enlarged; in particular, at the center of the
impurity band ($E\sim-6.7$) the IPR drops to zero, the same behavior as in
the bulk of the conduction band ($E\sim-4.0$). However, there is a small peak
near $E\sim-5.5$, at the right edge of the impurity band. The IPR in the vicinity
of this point tends to saturate to a fixed value as the system size increases.
The nonzero saturation of the IPR at this energy suggests another possible
mobility edge near the junction between the conduction band and the
impurity band.
Figure 4: The IPR/DOS as a function of system size for fixed energies. In
panel (c), we fit the data with the function
$\log(\text{IPR})=A+B\log(L)+C\log(L)^{2}$. The curvature $C$ is labeled in the
figure.
In order to test this conjecture, we systematically study the value of the IPR
for several system sizes. As shown in Fig. 4(a), we choose four points based on
our knowledge of the DOS and IPR. (1) $E=-4.2$ is in the bulk of the
conduction band, where the state is extended. (2) $E=-5.4$ is at the right
edge of the impurity band; the state here is localized according to our
conjecture. (3) $E=-6.2$ is in the bulk of the impurity band, which is
extended according to its vanishing IPR in the large-$L$ limit. (4) $E=-6.8$ is
on the left edge of the impurity band and thus at the edge of the whole energy
band; the state at the band edge is supposed to be localized. In Fig. 4(b) we
again compare the DOS from JADA with that from Lapack, which shows
convergence at large system size. By the way these four
points were chosen, (1) and (3) should behave similarly as the system
size increases, and likewise (2) and (4). Fig. 4(c) shows the IPR for these four
energies at different system sizes. We plot the data on a log scale and fit it
with the function
$\log(\text{IPR})=A+B\log(L)+C\log(L)^{2}.$ (8)
The sign of the curvature $C$ tells us whether the state is localized or not:
for (1) and (3), $C<0$ means they are extended states, and conversely $C>0$
indicates localized states at points (2) and (4).
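In practice this fit is a one-liner; the sketch below (Python/NumPy) uses placeholder disorder-averaged IPR values purely to illustrate reading off the sign of $C$:
```python
import numpy as np

L_sizes = np.array([15.0, 20.0, 25.0, 30.0, 35.0])             # illustrative sizes
ipr_avg = np.array([2.1e-2, 1.1e-2, 6.4e-3, 4.1e-3, 2.8e-3])   # placeholder data

C, B, A = np.polyfit(np.log(L_sizes), np.log(ipr_avg), deg=2)  # highest power first
print("extended" if C < 0 else "localized", C)                  # Eq. (8)
```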
Figure 5: The Thouless number $g(E)$ as a function of energy for finite
systems using exact diagonalization.
As another criterion, we calculate the Thouless number $g(E)$ for different
system sizes. The results are shown in Fig. 5, where we plot the DOS on
the same horizontal axis. The impurity band is divided into several
regions by the crossovers of $g(E)$ for different sizes. We label these regions
by “L” (localized) and “E” (extended) to indicate the different behaviors of
$g(E)$. As the system size increases, $g(E)$ clearly increases in the “E”
regions and decreases in the “L” regions. The energies marked with vertical
lines are the locations of the mobility edges, i.e., the boundaries between the
localized states and extended states.
## IV Deep learning approach
The convolutional neural network (CNN), originally designed for 2D image
recognition, has been widely adopted in studies of phase transitions and achieves
high recognition accuracy. A standard image recognition model can be used
for a 3D electron system by integrating the 3D electron density along one
direction, but the drawback of this approach is that the information about the
electron density along that direction is lost during the integration. We
therefore design a 3D CNN model for our 3D lattice model. To distinguish the
localized and delocalized states, the CNN model returns two real numbers
representing the probability that the given wave function is an extended state
($P$) or a localized state ($1-P$). If the probability of the extended state is
larger than 0.5, we classify the eigenstate as delocalized, and otherwise as
localized. Due to the limitation of the graphics memory (8 GB) of our graphics
card (NVIDIA GTX 1080), we consider a 3D $20\times 20\times 20$ lattice. The
hidden layers in the CNN model consist of convolutional layers, max-pooling
layers, and fully connected layers. The loss function is the cross entropy
$H(p,q)=-\sum_{x}p(x)\log q(x)$. During training, we use the
RMSPropOptimizer solver defined in Tensorflow Abadi et al. (2015) as the
stochastic gradient descent solver to minimize the loss function. The details
of the neural network model are given in Appendix A.
The training data for the two phases are sampled from the 3-dimensional
Anderson model using different disorder parameters. It is well known that the
critical disorder at $E=0$ for the 3D Anderson model is $16.54\pm 0.01$ MacKinnon
and Kramer (1981, 1983); Kramer and MacKinnon (1993). When the disorder
strength $W$ is larger than the critical value, the wave functions are
exponentially localized and the system behaves as an insulator. Otherwise, the
wave functions are delocalized and the system behaves as a metal. This
phenomenon is known as the Metal-Insulator Transition (MIT) Anderson (1958). We
take 4000 eigenstates from $W\in[14.0,16.0)$ as the delocalized phase and 4000
eigenstates from $W\in[17.0,19.0)$ as the localized phase, in steps of 0.1. For
each $W$, we prepare 40 different realizations of the randomness, and for each
realization we take five eigenstates around $E=0$. For the validation data
set, we take another 600 eigenstates from $W\in[10.0,16.0)$ and 600 eigenstates
from $W\in[17.0,23.0)$ in steps of 0.1. During each step of the training, we
randomly select 256 eigenstates from the training data set as the input,
calculate the gradient of the loss function with respect to the parameters of
the CNN model, and update them. After every 50 steps, we test the prediction
accuracy on the validation data set and save the model with the highest
prediction accuracy.
Figure 6: The performance of the trained neural network on Anderson model with
different disorder parameters $W$. $W_{c}=16.54$ is the critical disorder for
$E=0$. (a) The classification accuracy of the trained neural network model.
(b) The probability that the wave function is considered as an extended state
by the trained neural network model.
To show the prediction accuracy for different disorder parameters $W$, we
generate another 16000 eigenstates sampled from the Anderson model using
$W\in[0.1,16.0]$ and $W\in[17.0,33.0)$. The prediction accuracy for different
disorder strengths $W$ is shown in Fig. 6(a), and the overall accuracy is
$99.0\%$. The lowest prediction accuracy, around the critical disorder
($0.8W_{c}<W<1.2W_{c}$), is about $83\%$. We also test our trained model by
producing the phase transition diagram of the 3D Anderson model. The testing
data are sampled from $W\in[8.0,25.0]$ in steps of 0.1. For each realization at
a given disorder parameter $W$, we pick 5 eigenstates around the band
center ($E=0$) as input data and use the averaged delocalized probability of
the five eigenstates as the delocalized probability of this realization. We
prepare 5 random realizations for each $W$ and average the delocalized
probability. The phase diagram calculated using our trained CNN model is shown
in Fig. 6(b), from which we see that the trained CNN model successfully
captures the Metal-Insulator Transition.
Figure 7: The probability that the corresponding wave function for different
eigenenergies is considered as an extended state by the trained neural network
model. The input wave functions are generated from Hamiltonian in Eq.1 using
exact diagonalization. Averages over 1000 realizations are taken.
Owing to its excellent classification accuracy, the trained neural network
model is ready to find the extended states in the impurity band. We generate
1000 random realizations of the Hamiltonian in Eq. (1) with doping probability
$x=5\%$ and disorder parameter $W=-4.5$, and obtain all eigenstates using the
exact diagonalization method in Lapack. These quantum states are used as the
input data for our trained CNN model to calculate the delocalized probability.
We average the probability over 1000 realizations and the result is shown in
Fig. 7. We can see that the CNN model confirms that delocalized states exist in
the impurity band, in good agreement with the results obtained from the IPR
and the Thouless number.
## V Conclusions
In this work, we numerically investigate the properties of the states in the
“impurity band” of heavily-doped non-magnetic semiconductors. Using general
full exact diagonalization and sparse matrix diagonalization with the Jacobi-
Davidson (JADA) method, we find that with a typical doping probability
$x=5\%$, the impurity band in the DOS develops at about $W=-4.5$. We
calculate the IPR, Thouless number, and DOS together for different system
sizes and study the relationships between them. The fits of the IPR versus
system size at four representative energies suggest the existence of extended
states in the impurity band. The Thouless number calculation supports the same
conclusion and gives the locations of the mobility edges.
In addition, we utilize a supervised deep learning method, the
state of the art in pattern recognition, to distinguish the extended
and localized states in the impurity band. We train a 3D CNN model using
data generated from the Anderson model and then apply the trained neural
network model to classify the states in the “impurity band”. Our trained
neural network model achieves high accuracy ($99.0\%$) in classifying
states of the Anderson model. The predictions of our trained model on the
“impurity band” also support the findings from the relationship between the
IPR, the Thouless number, and the system size, though the predicted locations
of the mobility edges show small discrepancies. Our calculation gives direct
evidence that there are three mobility edges in the impurity band for a
specific on-site impurity potential in heavily-doped non-magnetic semiconductors.
###### Acknowledgements.
Z-X. Hu is supported by the National Natural Science Foundation of China Grant
No. 11974064 and 12147102, the Chongqing Research Program of Basic Research,
and Frontier Technology Grant No. cstc2021jcyjmsxmX0081, Chongqing Talents:
Exceptional Young Talents Project No. cstc2021ycjh-bgzxm0147, and the
Fundamental Research Funds for the Central Universities Grant No.
2020CDJQY-Z003. HC acknowledges the U.S. Department of Energy, Office of
Science, Basic Energy Sciences under Award No. DE-SC0022216.
## Appendix A Neural network model architecture and hyperparameters
The 3D CNN model used in this paper has an architecture similar to
“AlexNet” Krizhevsky et al. (2012) and “VGGNet” Simonyan and Zisserman (2014),
but with a smaller number of convolutional, max-pooling, and fully connected
layers. This is because we are dealing with a 3D lattice whose edges are
much shorter than those of typical images. The architecture of
our model is shown in Fig. 8, where the input and output dimensions of each
layer are also listed.
Figure 8: The architecture of the 3D CNN model employed in this paper for 3D
$20\times 20\times 20$ lattice. “None” in the figure represents the batch size
during training or evaluation, which is not a fixed number.
The sizes of the convolution kernels applied in the first and second
convolutional layers are $5\times 5\times 5$ and $3\times 3\times 3$,
respectively. The ReLU (rectified linear unit) activation function Nair and Hinton
(2010) is applied after each convolutional and fully connected layer
except for the last layer, which is activated by the softmax Bridle (1989)
function. Bias parameters are included for all artificial neurons.
Dropout Srivastava et al. (2014) is performed with probability $p=0.5$ after
the first fully connected layer to avoid over-fitting and increase the
evaluation accuracy.
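A minimal sketch of such a model in TensorFlow/Keras is shown below. The kernel sizes, dropout rate, optimizer, and loss follow the description above, while the channel counts and the dense-layer width are illustrative assumptions, since the text does not specify them:
```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 20, 20, 1)),        # wave function on the 20^3 lattice
    tf.keras.layers.Conv3D(16, kernel_size=5, activation='relu'),  # 5x5x5 kernel
    tf.keras.layers.MaxPool3D(pool_size=2),
    tf.keras.layers.Conv3D(32, kernel_size=3, activation='relu'),  # 3x3x3 kernel
    tf.keras.layers.MaxPool3D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),                        # after the first dense layer
    tf.keras.layers.Dense(2, activation='softmax'),      # (P_extended, P_localized)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss='categorical_crossentropy', metrics=['accuracy'])
```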
## References
* Anderson (1958) P. W. Anderson, Phys. Rev. 109, 1492 (1958), URL https://link.aps.org/doi/10.1103/PhysRev.109.1492.
* Avérous and Balkanski (1991) M. Avérous and M. Balkanski, in _Semimagnetic semiconductors and diluted magnetic semiconductors_ (Springer, 1991).
* Ohno (1998) H. Ohno, Science 281, 951 (1998), URL https://www.science.org/doi/abs/10.1126/science.281.5379.951.
* Beschoten et al. (1999) B. Beschoten, P. A. Crowell, I. Malajovich, D. D. Awschalom, F. Matsukura, A. Shen, and H. Ohno, Phys. Rev. Lett. 83, 3073 (1999), URL https://link.aps.org/doi/10.1103/PhysRevLett.83.3073.
* Cook and Berciu (2012) A. M. Cook and M. Berciu, Phys. Rev. B 85, 235130 (2012), URL https://link.aps.org/doi/10.1103/PhysRevB.85.235130.
* Weiße and Fehske (2008) A. Weiße and H. Fehske, in _Computational many-particle physics_ (Springer, 2008), pp. 529–544.
* OJALVO and NEWMAN (1970) I. U. OJALVO and M. NEWMAN, AIAA Journal 8, 1234 (1970), eprint https://doi.org/10.2514/3.5878, URL https://doi.org/10.2514/3.5878.
* Carleo and Troyer (2017) G. Carleo and M. Troyer, Science 355, 602 (2017), ISSN 0036-8075, URL http://science.sciencemag.org/content/355/6325/602.
* Carrasquilla and Melko (2017) J. Carrasquilla and R. G. Melko, Nature Physics 13, 431 (2017).
* Ch’Ng et al. (2017) K. Ch’Ng, J. Carrasquilla, R. G. Melko, and E. Khatami, Physical Review X 7, 031038 (2017).
* Van Nieuwenburg et al. (2017) E. P. Van Nieuwenburg, Y.-H. Liu, and S. D. Huber, Nature Physics 13, 435 (2017).
* Venderley et al. (2018) J. Venderley, V. Khemani, and E.-A. Kim, Phys. Rev. Lett. 120, 257204 (2018), URL https://link.aps.org/doi/10.1103/PhysRevLett.120.257204.
* Wetzel (2017) S. J. Wetzel, Phys. Rev. E 96, 022140 (2017), URL https://link.aps.org/doi/10.1103/PhysRevE.96.022140.
* Rodriguez-Nieva and Scheurer (2019) J. F. Rodriguez-Nieva and M. S. Scheurer, Nature Physics 15, 790 (2019).
* Lidiak and Gong (2020) A. Lidiak and Z. Gong, Phys. Rev. Lett. 125, 225701 (2020), URL https://link.aps.org/doi/10.1103/PhysRevLett.125.225701.
* Hsu et al. (2018) Y.-T. Hsu, X. Li, D.-L. Deng, and S. D. Sarma, Physical Review Letters 121, 245701 (2018).
* Hendry et al. (2021) D. Hendry, H. Chen, P. Weinberg, and A. E. Feiguin, Phys. Rev. B 104, 205130 (2021), URL https://link.aps.org/doi/10.1103/PhysRevB.104.205130.
* Choo et al. (2019) K. Choo, T. Neupert, and G. Carleo, Phys. Rev. B 100, 125124 (2019), URL https://link.aps.org/doi/10.1103/PhysRevB.100.125124.
* Pfau et al. (2020) D. Pfau, J. S. Spencer, A. G. Matthews, and W. M. C. Foulkes, Physical Review Research 2, 033429 (2020).
* Sharir et al. (2020) O. Sharir, Y. Levine, N. Wies, G. Carleo, and A. Shashua, Physical Review Letters 124, 020503 (2020).
* Hendry et al. (2022) D. Hendry, H. Chen, and A. Feiguin, Phys. Rev. B 106, 165111 (2022), URL https://link.aps.org/doi/10.1103/PhysRevB.106.165111.
* Chen et al. (2022) H. Chen, D. G. Hendry, P. E. Weinberg, and A. Feiguin, in _Advances in Neural Information Processing Systems_ , edited by A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho (2022), URL https://openreview.net/forum?id=qZUHvvtbzy.
* Wang (2016) L. Wang, Phys. Rev. B 94, 195105 (2016), URL https://link.aps.org/doi/10.1103/PhysRevB.94.195105.
* Ohtsuki and Ohtsuki (2017) T. Ohtsuki and T. Ohtsuki, Journal of the Physical Society of Japan 86, 044708 (2017), URL https://doi.org/10.7566/JPSJ.86.044708.
* Tanaka and Tomiya (2017) A. Tanaka and A. Tomiya, Journal of the Physical Society of Japan 86, 063001 (2017).
* Mano and Ohtsuki (2017) T. Mano and T. Ohtsuki, Journal of the Physical Society of Japan 86, 113704 (2017), URL https://doi.org/10.7566/JPSJ.86.113704.
* Broecker et al. (2017) P. Broecker, J. Carrasquilla, R. G. Melko, and S. Trebst, Scientific reports 7, 1 (2017).
* Schindler et al. (2017) F. Schindler, N. Regnault, and T. Neupert, Physical Review B 95, 245134 (2017).
* Li et al. (2019) Z. Li, M. Luo, and X. Wan, Phys. Rev. B 99, 075418 (2019), URL https://link.aps.org/doi/10.1103/PhysRevB.99.075418.
* Dong et al. (2019) X.-Y. Dong, F. Pollmann, and X.-F. Zhang, Phys. Rev. B 99, 121104 (2019), URL https://link.aps.org/doi/10.1103/PhysRevB.99.121104.
* Kotthoff et al. (2021) F. Kotthoff, F. Pollmann, and G. De Tomasi, Physical Review B 104, 224307 (2021).
* Zhang et al. (2019a) W. Zhang, L. Wang, and Z. Wang, Physical Review B 99, 054208 (2019a).
* Zhang et al. (2019b) W. Zhang, J. Liu, and T.-C. Wei, Phys. Rev. E 99, 032142 (2019b), URL https://link.aps.org/doi/10.1103/PhysRevE.99.032142.
* Käming et al. (2021) N. Käming, A. Dawid, K. Kottmann, M. Lewenstein, K. Sengstock, A. Dauphin, and C. Weitenberg, Machine Learning: Science and Technology 2, 035037 (2021), URL https://doi.org/10.1088/2632-2153/abffe7.
* Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton, in _Advances in Neural Information Processing Systems_ , edited by F. Pereira, C. Burges, L. Bottou, and K. Weinberger (Curran Associates, Inc., 2012), vol. 25, URL https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.
* Bohrdt et al. (2021) A. Bohrdt, S. Kim, A. Lukin, M. Rispoli, R. Schittko, M. Knap, M. Greiner, and J. Léonard, Physical Review Letters 127, 150504 (2021).
* Théveniaut and Alet (2019) H. Théveniaut and F. Alet, Physical Review B 100, 224202 (2019).
* Brndiar and Markoš (2006) J. Brndiar and P. Markoš, Physical Review B 74, 153103 (2006).
* Edwards and Thouless (1972) J. Edwards and D. Thouless, Journal of Physics C: Solid State Physics 5, 807 (1972).
* Bollhöfer and Notay (2007) M. Bollhöfer and Y. Notay, Comput. Phys. Commun. 177, 951 (2007).
* Andres et al. (1981) K. Andres, R. N. Bhatt, P. Goalwin, T. M. Rice, and R. E. Walstedt, Phys. Rev. B 24, 244 (1981), URL https://link.aps.org/doi/10.1103/PhysRevB.24.244.
* Abrahams et al. (1979) E. Abrahams, P. W. Anderson, D. C. Licciardello, and T. V. Ramakrishnan, Phys. Rev. Lett. 42, 673 (1979), URL https://link.aps.org/doi/10.1103/PhysRevLett.42.673.
* Anderson et al. (1999) E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, et al., _LAPACK Users’ Guide_ (Society for Industrial and Applied Mathematics, Philadelphia, PA, 1999), 3rd ed., ISBN 0-89871-447-8 (paperback).
* Abadi et al. (2015) M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al., _TensorFlow: Large-scale machine learning on heterogeneous systems_ (2015), software available from tensorflow.org, URL https://www.tensorflow.org/.
* MacKinnon and Kramer (1981) A. MacKinnon and B. Kramer, Physical Review Letters 47, 1546 (1981).
* MacKinnon and Kramer (1983) A. MacKinnon and B. Kramer, Zeitschrift für Physik B Condensed Matter 53, 1 (1983).
* Kramer and MacKinnon (1993) B. Kramer and A. MacKinnon, Reports on Progress in Physics 56, 1469 (1993).
* Simonyan and Zisserman (2014) K. Simonyan and A. Zisserman, arXiv preprint arXiv:1409.1556 (2014).
* Nair and Hinton (2010) V. Nair and G. E. Hinton, in _Icml_ (2010).
* Bridle (1989) J. Bridle, Advances in neural information processing systems 2 (1989).
* Srivastava et al. (2014) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, The Journal of Machine Learning Research 15, 1929 (2014).
|
# A high-order shock capturing discontinuous Galerkin-finite-difference hybrid
method for GRMHD
Nils Deppe1, François Hébert1, Lawrence E. Kidder2, and Saul A. Teukolsky2,1
1Theoretical Astrophysics 350-17, California Institute of
Technology, Pasadena, CA 91125, USA 2Cornell Center for Astrophysics and
Planetary Science, Cornell University, Ithaca, New York 14853, USA
<EMAIL_ADDRESS>
###### Abstract
We present a discontinuous Galerkin-finite-difference hybrid scheme that
allows high-order shock capturing with the discontinuous Galerkin method for
general relativistic magnetohydrodynamics. The hybrid method is conceptually
quite simple. An unlimited discontinuous Galerkin candidate solution is
computed for the next time step. If the candidate solution is inadmissible,
the time step is retaken using robust finite-difference methods. Because of
its a posteriori nature, the hybrid scheme inherits the best properties of
both methods. It is high-order with exponential convergence in smooth regions,
while robustly handling discontinuities. We give a detailed description of how
we transfer the solution between the discontinuous Galerkin and finite-
difference solvers, and the troubled-cell indicators necessary to robustly
handle slow-moving discontinuities and simulate magnetized neutron stars. We
demonstrate the efficacy of the proposed method using a suite of standard and
very challenging 1d, 2d, and 3d relativistic magnetohydrodynamics test
problems. The hybrid scheme is designed from the ground up to efficiently
simulate astrophysical problems such as the inspiral, coalescence, and merger
of two neutron stars.
Submitted to: Class. Quantum Grav.
Keywords: discontinuous Galerkin, Finite Difference, GRMHD, neutron star, WENO
## 1 Introduction
The discontinuous Galerkin (DG) method was first presented by Reed and Hill
[1] to solve the neutron transport equation. Later, in a series of seminal
papers, Cockburn and Shu applied the DG method to nonlinear hyperbolic
conservation laws [2, 3, 4]. A very important property of the DG method is
that it guarantees linear stability in the $L_{2}$ norm at arbitrarily high
order, which was proven for the scalar case in [5] and for systems in [6, 7].
While this means the DG method is very robust, DG alone is still subject to
Godunov’s theorem [8]: at high order it produces oscillatory solutions.
Accordingly, it requires some nonlinear supplemental method for stability in
the presence of discontinuities and large gradients. A large number of
different methods for limiting the DG solution to achieve such stability have
been proposed. The basic idea shared by all the limiters is to detect troubled
cells or elements (i.e., those whose solution is too oscillatory or has some
other undesirable property), then apply some nonlinear reconstruction using
the solution from neighboring elements. This idea is largely an extension of
what has worked well for finite-volume (FV) and finite-difference (FD) shock-
capturing methods.
In this paper we follow a different avenue that, to the best of our knowledge,
was first proposed in [9]. The idea is to supplement a high-order spectral-
type method—such as pseudospectral collocation or, in our case, DG—with robust
FV or FD shock-capturing methods. If the solution in an element is troubled or
inadmissible, the solution is projected to a FV or FD grid and evolved with
existing robust shock-capturing methods. This approach has been applied to DG
supplemented with FV in [10, 11, 12, 13, 14, 15]. The major breakthrough in
[12] was applying the shock detection and physical realizability checks on the
solution _after_ the time step is taken and redoing the step if the solution
is found to be inadmissible. We follow this a posteriori approach because it
allows us to guarantee a physically realizable solution (e.g., positive
density and pressure), as well as allowing us to prevent unphysical
oscillations from entering the numerical solution. This procedure is in strong
contrast to classical limiting strategies, where effectively a filter is
applied to the DG solution in an attempt to remove spurious oscillations.
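Schematically, one time step of such an a posteriori hybrid scheme can be summarized as follows (a Python sketch of the control flow only; the solver callbacks are placeholders supplied by the user, not SpECTRE API):
```python
def hybrid_step(u_dg, dt, dg_step, fd_step,
                project_to_fd, reconstruct_dg, is_admissible):
    """One a posteriori DG-FD hybrid step (schematic)."""
    candidate = dg_step(u_dg, dt)        # unlimited high-order DG update
    if is_admissible(candidate):         # e.g., positive density/pressure,
        return candidate                 # no spurious oscillations
    u_fd = project_to_fd(u_dg)           # move the pre-step data to the FD grid
    return reconstruct_dg(fd_step(u_fd, dt))   # retake the step with robust FD
```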
We present a detailed derivation and description of our DG-FD hybrid scheme
and how we use it to solve the equations of general relativistic
magnetohydrodynamics (GRMHD). To the best of our knowledge, the algorithm is
the first to successfully evolve a 3d magnetized TOV star using DG methods. In
§2 we briefly review the equations of GRMHD. In §3 we give a brief overview of
DG and conservative FD methods, provide a new simple form of the moving mesh
evolution equations, and discuss the time step size restrictions of the DG and
FD methods. In §4 we state our requirements from a DG limiter or DG hybrid
scheme, and then give an overview of common limiters currently used, including
which of our requirements they meet. The new DG-FD hybrid scheme is described
in §5. Specifically, we discuss how to handle the intercell fluxes between
elements using DG and FD, the idea of applying the troubled-cell indicators a
posteriori, the troubled-cell indicators we use, and a new perspective on how
DG-FD hybrid schemes should be interpreted. In §6 we present numerical results
from the open-source code SpECTRE [16, 17] using our scheme and conclude in
§7.
## 2 Equations of GRMHD
We adopt the standard 3+1 form of the spacetime metric (see, e.g., [18, 19]),
$\displaystyle ds^{2}$
$\displaystyle=g_{ab}dx^{a}dx^{b}=-\alpha^{2}dt^{2}+\gamma_{ij}\left(dx^{i}+\beta^{i}dt\right)\left(dx^{j}+\beta^{j}dt\right),$
(1)
where $\alpha$ is the lapse, $\beta^{i}$ the shift vector, and $\gamma_{ij}$
is the spatial metric. We use the Einstein summation convention, summing over
repeated indices. Latin indices from the first part of the alphabet
$a,b,c,\ldots$ denote spacetime indices ranging from $0$ to $3$, while Latin
indices $i,j,\ldots$ are purely spatial, ranging from $1$ to $3$. We work in
units where $c=G=M_{\odot}=1$.
SpECTRE currently solves equations in flux-balanced and first-order hyperbolic
form. The general form of a flux-balanced conservation law in a curved
spacetime is
$\displaystyle\partial_{t}u+\partial_{i}F^{i}=S,$ (2)
where $u$ is the state vector, $F^{i}$ are the components of the flux vector,
and $S$ is the source vector.
We refer the reader to the literature [20, 21, 18] for a detailed description
of the equations of general relativistic magnetohydrodynamics (GRMHD). If we
ignore self-gravity, the GRMHD equations constitute a closed system that may
be solved on a given background metric. We denote the rest-mass density of the
fluid by $\rho$ and its 4-velocity by $u^{a}$, where $u^{a}u_{a}=-1$. The dual
of the Faraday tensor $F^{ab}$ is
$\displaystyle\,{}^{*}\\!F^{ab}=\frac{1}{2}\epsilon^{abcd}F_{cd},$ (3)
where $\epsilon^{abcd}$ is the Levi-Civita tensor. Note that the Levi-Civita
tensor is defined here with the convention [22] that in flat spacetime
$\epsilon_{0123}=+1$. The equations governing the evolution of the GRMHD
system are:
$\displaystyle\nabla_{a}(\rho u^{a})$ $\displaystyle=0\quad(\textrm{rest-mass
conservation}),$ (4) $\displaystyle\nabla_{a}T^{ab}$
$\displaystyle=0\quad(\textrm{energy-momentum conservation}),$ (5)
$\displaystyle\nabla_{a}\,{}^{*}\\!F^{ab}$
$\displaystyle=0\quad(\textrm{homogeneous Maxwell equation}).$ (6)
In the ideal MHD limit the stress tensor takes the form
$T^{ab}=(\rho h)^{*}u^{a}u^{b}+p^{*}g^{ab}-b^{a}b^{b}$ (7)
where
$b^{a}=-\,{}^{*}\\!F^{ab}u_{b}$ (8)
is the magnetic field measured in the comoving frame of the fluid, and $(\rho
h)^{*}=\rho h+b^{2}$ and $p^{*}=p+b^{2}/2$ are the enthalpy density and fluid
pressure augmented by contributions of magnetic pressure
$p_{\mathrm{mag}}=b^{2}/2$, respectively.
We denote the unit normal vector to the spatial hypersurfaces as $n^{a}$,
which is given by
$\displaystyle n^{a}=$
$\displaystyle\left(1/\alpha,-\beta^{i}/\alpha\right)^{T},$ (9) $\displaystyle
n_{a}=$ $\displaystyle(-\alpha,0,0,0).$ (10)
The spatial velocity of the fluid as measured by an observer at rest in the
spatial hypersurfaces (“Eulerian observer”) is
$v^{i}=\frac{1}{\alpha}\left(\frac{u^{i}}{u^{0}}+\beta^{i}\right),$ (11)
with a corresponding Lorentz factor $W$ given by
$\displaystyle W$ $\displaystyle=-u^{a}n_{a}=\alpha
u^{0}=\frac{1}{\sqrt{1-\gamma_{ij}v^{i}v^{j}}}$ (12)
$\displaystyle=\sqrt{1+\gamma^{ij}u_{i}u_{j}}=\sqrt{1+\gamma^{ij}W^{2}v_{i}v_{j}}.$
(13)
The electric and magnetic fields as measured by an Eulerian observer are given
by
$\displaystyle E^{i}$ $\displaystyle=F^{ia}n_{a}=\alpha F^{0i},$ (14)
$\displaystyle B^{i}$
$\displaystyle=-\,{}^{*}\\!F^{ia}n_{a}=-\alpha\,{}^{*}\\!F^{0i}.$ (15)
Finally, the comoving magnetic field $b^{a}$ in terms of $B^{i}$ is
$\displaystyle b^{0}=$ $\displaystyle\frac{W}{\alpha}B^{i}v_{i},$ (16)
$\displaystyle b^{i}=$ $\displaystyle\frac{B^{i}+\alpha b^{0}u^{i}}{W},$ (17)
while $b^{2}=b^{a}b_{a}$ is given by
$b^{2}=\frac{B^{2}}{W^{2}}+(B^{i}v_{i})^{2}.$ (18)
We now recast the GRMHD equations in a 3+1 split by projecting them along and
perpendicular to $n^{a}$ [20]. One of the main complications when solving the
GRMHD equations numerically is preserving the constraint
$\displaystyle\partial_{i}(\sqrt{\gamma}B^{i})=0.$ (19)
Analytically, initial data evolved using the dynamical Maxwell equations are
guaranteed to preserve the constraint. However, numerical errors generate
constraint violations that need to be controlled. We opt to use the
Generalized Lagrange Multiplier (GLM) or divergence cleaning method [23] where
an additional field $\Phi$ is evolved in order to propagate constraint
violations out of the domain. Our version is very close to the one in [24].
The augmented system can still be written in flux-balanced form, where the
conserved variables are
$\displaystyle u$ $\displaystyle=\sqrt{\gamma}\left(\begin{array}[]{c}D\\\
S_{i}\\\ \tau\\\ B^{i}\\\
\Phi\end{array}\right)=\left(\begin{array}[]{c}\tilde{D}\\\ \tilde{S}_{i}\\\
\tilde{\tau}\\\ \tilde{B}^{i}\\\ \tilde{\Phi}\end{array}\right)$ (30)
$\displaystyle=\sqrt{\gamma}\left(\begin{array}[]{c}\rho W\\\ (\rho
h)^{*}W^{2}v_{i}-\alpha b^{0}b_{i}\\\ (\rho h)^{*}W^{2}-p^{*}-\left(\alpha
b^{0}\right)^{2}-\rho W\\\ B^{i}\\\ \Phi\end{array}\right),$ (36)
with corresponding fluxes
$\displaystyle F^{i}=\left(\begin{array}[]{c}\tilde{D}v^{i}_{\textrm{tr}}\\\
\tilde{S}_{j}v^{i}_{\textrm{tr}}+\alpha\sqrt{\gamma}p^{*}\delta^{i}_{j}-\alpha
b_{j}\tilde{B}^{i}/W\\\
\tilde{\tau}v^{i}_{\textrm{tr}}+\alpha\sqrt{\gamma}p^{*}v^{i}-\alpha^{2}b^{0}\tilde{B}^{i}/W\\\
\tilde{B}^{j}v^{i}_{\textrm{tr}}-\alpha
v^{j}\tilde{B}^{i}+\alpha\gamma^{ij}\tilde{\Phi}\\\
\alpha\tilde{B}^{i}-\tilde{\Phi}\beta^{i}\end{array}\right),$ (42)
and corresponding sources
$\displaystyle S=\left(\begin{array}[]{c}0\\\
(\alpha/2)\tilde{S}^{kl}\partial_{i}\gamma_{kl}+\tilde{S}_{k}\partial_{i}\beta^{k}-\tilde{E}\partial_{i}\alpha\\\
\alpha\tilde{S}^{kl}K_{kl}-\tilde{S}^{k}\partial_{k}\alpha\\\
-\tilde{B}^{j}\partial_{j}\beta^{i}+\Phi\partial_{k}(\alpha\sqrt{\gamma}\gamma^{ik})\\\
\alpha\tilde{B}^{k}\partial_{k}\ln\alpha-\alpha
K\tilde{\Phi}-\alpha\kappa\tilde{\Phi}\end{array}\right).$ (48)
The transport velocity is defined as $v_{\textrm{tr}}^{i}=\alpha
v^{i}-\beta^{i}$ and the generalized energy $\tilde{E}$ and source
$\tilde{S}^{ij}$ are given by
$\displaystyle\tilde{E}$ $\displaystyle=\tilde{\tau}+\tilde{D},$ (49)
$\displaystyle\tilde{S}^{ij}$ $\displaystyle=\sqrt{\gamma}\left[(\rho
h)^{*}W^{2}v^{i}v^{j}+p^{*}\gamma^{ij}-\gamma^{ik}\gamma^{jl}b_{k}b_{l}\right].$
(50)
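To make the variable definitions concrete, the following sketch (Python/NumPy, for a single grid point) assembles the conserved variables of Eq. (36) from the primitive variables, using Eqs. (12) and (16)-(18); the function signature and variable names are our own choices, not SpECTRE's interface:
```python
import numpy as np

def grmhd_conserved(rho, h, p, v, B, gamma_ij, alpha, beta):
    """Return (D~, S~_i, tau~) of Eq. (36) from the primitives.
    v, B, beta carry an upper spatial index; gamma_ij is the 3-metric."""
    v_lo = gamma_ij @ v                        # v_i = gamma_ij v^j
    W = 1.0 / np.sqrt(1.0 - v @ v_lo)          # Lorentz factor, Eq. (12)
    Bv = B @ v_lo                              # B^i v_i
    b2 = (B @ (gamma_ij @ B)) / W**2 + Bv**2   # Eq. (18)
    alpha_b0 = W * Bv                          # alpha b^0, from Eq. (16)
    u_up = W * (v - beta / alpha)              # u^i = W (v^i - beta^i / alpha)
    b_up = (B + alpha_b0 * u_up) / W           # Eq. (17)
    b_lo = gamma_ij @ (b_up + beta * alpha_b0 / alpha)  # b_i = gamma_ij (b^j + beta^j b^0)
    rho_h_star = rho * h + b2                  # (rho h)^*
    p_star = p + 0.5 * b2                      # p^*
    D = rho * W
    S_lo = rho_h_star * W**2 * v_lo - alpha_b0 * b_lo
    tau = rho_h_star * W**2 - p_star - alpha_b0**2 - D
    sqrt_gamma = np.sqrt(np.linalg.det(gamma_ij))
    return sqrt_gamma * D, sqrt_gamma * S_lo, sqrt_gamma * tau
```
In the limit $B^{i}=0$, $\beta^{i}=0$, $\alpha=1$, and $\gamma_{ij}=\delta_{ij}$, this reduces to the familiar special-relativistic expressions $D=\rho W$, $S_{i}=\rho hW^{2}v_{i}$, and $\tau=\rho hW^{2}-p-\rho W$.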
## 3 The discontinuous Galerkin and conservative finite difference methods
We are interested in solving nonlinear hyperbolic conservation laws of the
form
$\partial_{a}F^{a}=\partial_{t}u+\partial_{i}F^{i}=S,$ (51)
where $u$ are the evolved/conserved variables, $F^{i}$ are the fluxes, and $S$
are the source terms.
### 3.1 Discontinuous Galerkin method
In the DG method the computational domain is divided up into non-overlapping
elements or cells, which we denote by $\Omega_{k}$. This allows us to write
the conservation law (51) as a semi-discrete system, where time remains
continuous. In the DG method one integrates the evolution equations (51)
against spatial basis functions of degree $N$, which we denote by
$\phi_{\breve{\imath}}$. We index the basis functions and collocation points
of the DG scheme with breve Latin indices, e.g.
$\breve{\imath},\breve{\jmath},\breve{k}$. The basis functions are defined in
the reference coordinates of each element, which we denote by
$\xi^{\hat{\imath}}$. We use hatted indices to denote tensor components in the
reference frame. The reference coordinates are mapped to the physical
coordinates using the general function
$\displaystyle x^{i}=x^{i}(\xi^{\hat{\imath}}).$ (52)
We will discuss making the mapping time-dependent in §3.3 below.
In the DG method we integrate the basis functions against (51),
$\displaystyle\int_{\Omega_{k}}d^{3}x\,\phi_{\breve{\imath}}\left[\partial_{t}u+\partial_{i}F^{i}-S\right]=0,$
(53)
where repeated indices are implicitly summed over. Note that we are
integrating over the physical coordinates, not the reference coordinates
$\xi^{\hat{\imath}}$. Following the standard prescription where we integrate
by parts and replace the flux on the boundary $n_{i}F^{i}$ with a boundary
term $G$ (a numerical flux dotted into the normal to the surface), we obtain
the weak form
$\displaystyle\int_{\Omega_{k}}d^{3}x\,\phi_{\breve{\imath}}\left[\partial_{t}u-S\right]-\int_{\Omega_{k}}d^{3}x\,F^{i}\partial_{i}\phi_{\breve{\imath}}+\oint_{\partial\Omega_{k}}d^{2}\Sigma\,\phi_{\breve{\imath}}G=0,$
(54)
where $\partial\Omega_{k}$ is the boundary of the element and $d^{2}\Sigma$ is
the surface element. Undoing the integration by parts gives us the equivalent
strong form
$\displaystyle\int_{\Omega_{k}}d^{3}x\,\phi_{\breve{\imath}}\left[\partial_{t}u+\partial_{i}F^{i}-S\right]+\oint_{\partial\Omega_{k}}d^{2}\Sigma\,\phi_{\breve{\imath}}\left(G-n_{i}F^{i}\right)=0,$
(55)
where $n_{i}$ is the outward-pointing unit normal covector in the physical
frame. Next, we use a nodal DG method and expand the various terms using the
basis $\phi_{\breve{\imath}}$ as
$\displaystyle
u=\sum_{\breve{\imath}=0}^{N}u_{\breve{\imath}}\phi_{\breve{\imath}}.$ (56)
The weak form can be written as
$\displaystyle\int_{\Omega_{k}}d^{3}x\,\phi_{\breve{\imath}}\phi_{\breve{k}}\left[\partial_{t}u_{\breve{k}}-S_{\breve{k}}\right]-\int_{\Omega_{k}}d^{3}x\,F^{i}_{\breve{k}}\phi_{\breve{k}}\partial_{i}\phi_{\breve{\imath}}+\oint_{\partial\Omega_{k}}d^{2}\Sigma\,\phi_{\breve{\imath}}\phi_{\breve{k}}G_{\breve{k}}=0.$
(57)
The equivalent strong form is
$\displaystyle\int_{\Omega_{k}}d^{3}x\,\phi_{\breve{\imath}}\phi_{\breve{k}}\left[\partial_{t}u_{\breve{k}}+(\partial_{i}F^{i})_{\breve{k}}-S_{\breve{k}}\right]+\oint_{\partial\Omega_{k}}d^{2}\Sigma\,\phi_{\breve{\imath}}\phi_{\breve{k}}\left(G-n_{i}F^{i}\right)_{\breve{k}}=0.$
(58)
In the strong form we have expanded $\partial_{i}F^{i}$ in the basis, which
might lead to aliasing [25]. In practice, we have not encountered any
aliasing-driven instabilities that require filtering.
In order to simplify the scheme, we use a tensor-product basis of 1d Lagrange
interpolating polynomials with Legendre-Gauss-Lobatto collocation points. We
denote this DG scheme with 1d basis functions of degree $N$ by $P_{N}$. A
$P_{N}$ scheme is expected to converge at order $\mathcal{O}(\Delta x^{N+1})$
for smooth solutions [26], where $\Delta x$ is the 1d size of the element. The
reference elements are intervals in 1d, squares in 2d, and cubes in 3d, where
each component of the reference coordinates $\xi^{\hat{\imath}}\in[-1,1]$. We
use the map $x^{i}(\xi^{\hat{\imath}})$ to deform the squares and cubes into
different shapes needed to produce an efficient covering of the domain. For
example, if spherical geometries are present, we use
$x^{i}(\xi^{\hat{\imath}})$ to create a cubed-sphere domain.
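As a small illustration of this nodal setup, the sketch below (Python/NumPy) computes the Legendre-Gauss-Lobatto points for a degree-$N$ basis, i.e., the endpoints $\pm 1$ together with the roots of $P_{N}'(\xi)$, and builds the corresponding 1d Lagrange differentiation matrix from barycentric weights; the final line checks that differentiation is exact for polynomials of degree at most $N$:
```python
import numpy as np
from numpy.polynomial import legendre

def lgl_points(N):
    """Legendre-Gauss-Lobatto collocation points for a degree-N basis."""
    cN = np.zeros(N + 1)
    cN[N] = 1.0                                        # coefficients of P_N
    interior = legendre.legroots(legendre.legder(cN))  # roots of P_N'
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def diff_matrix(x):
    """Differentiation matrix of the Lagrange basis on the nodes x."""
    n = len(x)
    w = np.array([1.0 / np.prod(x[i] - np.delete(x, i)) for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i])                        # rows sum to zero
    return D

xi = lgl_points(4)
D = diff_matrix(xi)
print(np.max(np.abs(D @ xi**3 - 3 * xi**2)))           # exact up to roundoff
```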
### 3.2 Conservative finite-difference methods
Conservative FD methods evolve the cell-center values, but the cell-face
values (the midpoints along each axis) are necessary for solving the Riemann
problem and computing the FD derivatives of the fluxes. Denoting the numerical
flux by $\hat{F}^{i}$ and the $k^{\mathrm{th}}$-order FD derivative operator
by $D^{(k)}_{\hat{\imath}}$, we can write the semi-discrete evolution
equations as
$\displaystyle\partial_{t}u_{\underline{i}}+\left(\frac{\partial\xi^{\hat{\imath}}}{\partial
x^{i}}\right)_{\underline{i}}\left(D^{(k)}_{\hat{\imath}}\hat{F}^{i}\right)_{\underline{i}}=S_{\underline{i}},$
(59)
where we use underlined indices to label FD cells/grid points. Equation (59)
can be rewritten to more closely resemble the DG form since we actually use
$G$ as the numerical flux $\hat{F}^{i}$ on the cell boundary. Specifically,
$\displaystyle\partial_{t}u_{\underline{i}}+\frac{1}{J_{\underline{i}}}\sum_{\hat{\imath}}\left[\mathcal{D}_{\hat{\imath}}\left(J\sqrt{\frac{\partial\xi^{\hat{\imath}}}{\partial
x^{i}}\gamma^{ij}\frac{\partial\xi^{\hat{\imath}}}{\partial
x^{j}}}G^{(\hat{\imath})}\right)\right]_{\underline{i}}=S_{\underline{i}},$
(60)
where $\mathcal{D}_{\hat{\imath}}$ is the undivided finite difference
operator111For example, at second order
$\left(\mathcal{D}_{\hat{\imath}}u\right)_{\underline{i}}=u_{\underline{i}+1/2}-u_{\underline{i}-1/2}$.
and $J$ is the determinant of the Jacobian matrix $\partial
x^{i}/\partial\xi^{\hat{\imath}}$. This form allows our implementation to
reuse as much of the DG Riemann solvers as possible, and also makes
interfacing between the DG and FD methods easier. Ultimately, we use a flux-
difference-splitting scheme, where we reconstruct the primitive variables to
the interfaces between cells. Which reconstruction method we use is stated for
each test problem below.
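The structure of Eq. (59) is easiest to see in a 1d scalar analogue. The sketch below (Python/NumPy) evolves Burgers' equation with minmod-limited reconstruction of the cell-center values to the faces and a Rusanov (local Lax-Friedrichs) numerical flux standing in for $G$; this is a generic second-order flux-difference-splitting scheme, not the specific high-order reconstructions used in the tests below:
```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def rhs(u, dx):
    """du/dt = -(Fhat_{i+1/2} - Fhat_{i-1/2}) / dx for Burgers' flux f = u^2/2."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    uL = u + 0.5 * slope                    # left state at face i+1/2
    uR = np.roll(u - 0.5 * slope, -1)       # right state at face i+1/2
    lam = np.maximum(np.abs(uL), np.abs(uR))
    fhat = 0.25 * (uL**2 + uR**2) - 0.5 * lam * (uR - uL)   # Rusanov flux
    return -(fhat - np.roll(fhat, 1)) / dx

x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
u = 1.5 + np.sin(2.0 * np.pi * x)
dt = 0.4 * dx / np.max(np.abs(u))
for _ in range(200):                        # SSP-RK2 on a periodic grid
    u1 = u + dt * rhs(u, dx)
    u = 0.5 * (u + u1 + dt * rhs(u1, dx))
```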
### 3.3 Moving mesh formulation
Moving the mesh to follow interesting features of the solution can greatly
reduce computational cost. A moving mesh is also essential for evolutions of
binary black holes, one of our target applications, where the interior of the
black holes needs to be excised to avoid the singularities [27, 28]. Here we
present a new form of the moving mesh evolution equations that is extremely
simple to implement and derive. We assume that the velocity of the mesh is
some spatially smooth function, though this assumption can be removed if one
uses the path-conservative methods described in [29] based on Dal Maso-
LeFloch-Murat theory [30]. We write the map from the reference coordinates to
the physical coordinates as
$\displaystyle t=\hat{t},\;\;\;x^{i}=x^{i}(\xi^{\hat{\imath}},\hat{t}).$ (61)
The spacetime Jacobian matrix is given by
$\displaystyle\frac{\partial
x^{a}}{\partial\xi^{\hat{a}}}=\left(\begin{array}[]{cc}\frac{\partial
t}{\partial\hat{t}}&\frac{\partial t}{\partial\xi^{\hat{\imath}}}\\\
\frac{\partial x^{i}}{\partial\hat{t}}&\frac{\partial
x^{i}}{\partial\xi^{\hat{\imath}}}\end{array}\right)=\left(\begin{array}[]{cc}1&0\\\
v^{i}_{g}&\frac{\partial
x^{i}}{\partial\xi^{\hat{\imath}}}\end{array}\right),$ (66)
where the mesh velocity of the physical frame is defined as
$\displaystyle v^{i}_{g}=\frac{\partial x^{i}}{\partial\hat{t}}.$ (67)
The inverse spacetime Jacobian matrix is given by
$\displaystyle\frac{\partial\xi^{\hat{a}}}{\partial
x^{a}}=\left(\begin{array}[]{cc}\frac{\partial\hat{t}}{\partial
t}&\frac{\partial\hat{t}}{\partial x^{i}}\\\
\frac{\partial\xi^{\hat{\imath}}}{\partial
t}&\frac{\partial\xi^{\hat{\imath}}}{\partial
x^{i}}\end{array}\right)=\left(\begin{array}[]{cc}1&0\\\
v^{\hat{\imath}}_{g}&\left(\frac{\partial
x^{i}}{\partial\xi^{\hat{\imath}}}\right)^{-1}\end{array}\right),$ (72)
where the mesh velocity in the reference frame is given by
$\displaystyle
v^{\hat{\imath}}_{g}\equiv\frac{\partial\xi^{\hat{\imath}}}{\partial
t}=-\frac{\partial\xi^{\hat{\imath}}}{\partial x^{i}}v^{i}_{g}.$ (73)
When composing coordinate maps the velocities combine as:
$\displaystyle v^{i}_{g}=\frac{\partial x^{i}}{\partial\hat{t}}=\frac{\partial
x^{i}}{\partial\tilde{t}}+\frac{\partial x^{i}}{\partial
X^{\tilde{\imath}}}\frac{\partial X^{\tilde{\imath}}}{\partial\hat{t}},$ (74)
where a new intermediate frame with coordinates
$\\{\tilde{t},X^{\tilde{\imath}}\\}$ is defined and
$X^{\tilde{\imath}}=X^{\tilde{\imath}}\left(\xi^{\hat{\imath}},\hat{t}\right)$.
To obtain the moving mesh evolution equations, we need to transform the time
derivative in (51) from being with respect to $t$ to being with respect to
$\hat{t}$. Starting with the chain rule for $\partial u/\partial\hat{t}$, we
get
$\displaystyle\frac{\partial u}{\partial t}=\frac{\partial
u}{\partial\hat{t}}-\frac{\partial
x^{i}}{\partial\hat{t}}\partial_{i}u=\partial_{\hat{t}}u-\partial_{i}\left(v^{i}_{g}u\right)+u\partial_{i}v^{i}_{g}.$
(75)
Substituting (75) into (51) we get
$\displaystyle\partial_{\hat{t}}u+\partial_{i}\left(F^{i}-v^{i}_{g}u\right)=S-u\partial_{i}v^{i}_{g}.$
(76)
This formulation of the moving mesh equations is simpler than the common ALE
(Arbitrary Lagrangian-Eulerian) formulation [31].
The same DG or FD scheme used to discretize (51) can be used to discretize
(76). In the case that $v^{i}_{g}$ is an evolved variable, the additional term
should be treated as a nonconservative product using the path-conservative
formalism [29]. Finally, we note that the characteristic fields are unchanged
by the mesh movement, but the characteristic speeds $\lambda$ are changed to
$\lambda\to\lambda-n_{i}v^{i}_{g}$.
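As a simple illustration, anticipating the test in $\S$6.1, consider the 1d Burgers equation $\partial_{t}U+\partial_{x}(U^{2}/2)=0$ with a spatially constant mesh velocity $v^{x}_{g}$. Then $\partial_{i}v^{i}_{g}=0$ and (76) reduces to
$\displaystyle\partial_{\hat{t}}U+\partial_{x}\left(\frac{U^{2}}{2}-v^{x}_{g}U\right)=0,$
whose characteristic speed is $U-v^{x}_{g}$, consistent with the shift $\lambda\to\lambda-n_{i}v^{i}_{g}$ noted above.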
### 3.4 Time discretization
We evolve the semi-discrete system (be it the DG or FD discretized system) in
time using a method of lines. We use either a third-order strong-stability
preserving Runge-Kutta method [32] or an Adams-Bashforth time stepper. Which
method is used will be noted for each test case.
The DG method has a rather restrictive Courant-Friedrichs-Lewy (CFL) condition
that decreases as the polynomial degree $N$ of the basis is increased. The CFL
number scales roughly as $1/(2N+1)$ [33, 34], which can be understood as a
growth in the spectrum of the spatial discretization operator [35]. For a DG
discretization in $d$ spatial dimensions, the time step $\Delta t$ must
satisfy
$\displaystyle\Delta t\leq\frac{1}{d(2N+1)}\frac{h}{|\lambda_{\max}|},$ (77)
where $h$ is the characteristic size of the element and $\lambda_{\max}$ is
the maximum characteristic speed of the system being evolved. For comparison,
FV and FD schemes have a time step restriction of
$\displaystyle\Delta t\leq\frac{1}{d}\frac{h}{|\lambda_{\max}|},$ (78)
where $h$ is the characteristic size of the FV or FD cell.
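As a concrete example of these bounds, the following small sketch evaluates (77) and (78):

```python
def max_time_step(h, lambda_max, d, N=None):
    """CFL-limited time step: eq. (77) for a P_N DG element when N is given,
    eq. (78) for a FV/FD cell when N is None."""
    if N is not None:
        return h / (d * (2 * N + 1) * abs(lambda_max))
    return h / (d * abs(lambda_max))

# In 3d with h = 0.1 and |lambda_max| = 1: a P5 DG element allows
# dt <= 0.1/33 ~ 3.0e-3, while a FV/FD cell of the same size allows
# dt <= 0.1/3 ~ 3.3e-2.
print(max_time_step(0.1, 1.0, 3, N=5), max_time_step(0.1, 1.0, 3))
```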
## 4 Limiting in the DG method
In this section we give an overview of what we require from a DG limiter,
followed by a brief discussion of existing limiters in the literature and
which of our requirements they meet.
### 4.1 Requirements
We have several requirements that, when combined, are very stringent. However,
we view these as necessary for DG to live up to the promise of a high-order
shock-capturing method. In no particular order, we require that
##### Requirements 4.1
(i) smooth solutions are resolved, i.e., smooth extrema are not flattened,
(ii) unphysical oscillations are removed,
(iii) physical realizability of the solution is guaranteed,
(iv) sub-cell or sub-element resolution is possible, i.e., discontinuities are resolved inside the element, not just at boundaries,
(v) curved hexahedral elements are supported,
(vi) slow-moving shocks are resolved,
(vii) moving meshes are supported,
(viii) higher than fourth-order DG can be used.
Requirement 4.1(iv) is necessary to justify the restrictive time step size,
(77). That is, if discontinuities are only resolved at the boundaries of
elements, the DG scheme results in excessive smearing. In such a scenario it
becomes difficult to argue for using DG over FV or FD methods. While in
principle it is possible to use adaptive mesh refinement or $hp$-adaptivity to
switch to low-order DG at discontinuities, effectively switching to a low-
order FV method, we are unaware of implementations that are capable of doing
so for high-order DG.
We note that achieving higher-than-fourth order is especially challenging with
many of the existing limiters. Since FV and FD methods of fourth or higher
order are becoming more common, we view high order as being crucial for DG to
be competitive with existing FV and FD methods, especially given the
restrictive time step size.
### 4.2 Overview of existing DG limiters
Aside from the FV subcell limiters [10, 11, 12], DG limiters operate on the
solution after a time step or substep is taken so as to remove spurious
oscillations and sometimes also to correct unphysical values. This is
generally achieved by some nonlinear reconstruction using the solution in
neighboring elements. How exactly this reconstruction is done depends on the
specific limiters, but all limiters involve two general steps:
1. detecting whether or not the solution in the element is “bad” (troubled-cell indicators),
2. correcting the degrees of freedom/solution in the element.
A good troubled-cell indicator (TCI) avoids triggering the limiter where the
solution is smooth while still preventing spurious unphysical oscillations.
Unfortunately, making this statement mathematically rigorous is challenging
and the last word is yet to be written on which TCIs are the best. Since the
TCI may trigger in smooth regions, ideally the limiting procedure does not
flatten local extrema when applied in such regions. In a companion paper [36]
we have experimented with the (admittedly quite dated but very robust) minmod
family of limiters [3, 4, 37], the hierarchical limiter of Krivodonova [38,
39], the simple WENO limiter [40], and the Hermite WENO (HWENO) limiter [41].
While this does not include every limiter applicable to structured meshes, it
covers the common ones. We will discuss each limiter in turn, reporting what
we have found to be good and bad.
The minmod family of limiters [3, 4, 37] linearizes the solution and decreases the slope if the slope is deemed too large. This means that the minmod limiters quickly flatten local extrema in smooth regions, do not provide sub-element resolution, and do not extend beyond fourth order. While they are
extremely robust and tend to do a good job of maintaining physical
realizability of the solution despite not guaranteeing it, the minmod limiters
are simply too aggressive and low-order to make DG an attractive replacement
for shock-capturing FD methods. Furthermore, generalizing the minmod limiters
to curved elements in the naïve manner makes them very quickly destroy any
symmetries of the domain decomposition and solution. Overall, we find that the
minmod limiters satisfy only Requirements 4.1(ii), 4.1(vi), and 4.1(vii).
The hierarchical limiter of Krivodonova [38, 39] works by limiting the
coefficients of the solution’s modal representation, starting with the highest
coefficient then decreasing in order until no more limiting is necessary. We
find that in 1d the Krivodonova limiter works quite well, even using fourth-
order elements. However, in 2d and 3d and for increasingly complex physical
systems, the limiter fails. Furthermore, it is nontrivial to extend to curved
elements since comparing modal coefficients assumes the Jacobian matrix of the
map $x^{i}(\xi^{\hat{\imath}})$ is spatially uniform. The Krivodonova limiter
satisfies Requirements 4.1(i), 4.1(vi), and 4.1(vii). We find that how well
the Krivodonova limiter works at removing unphysical oscillations depends on
the physical system being studied.
The simple WENO [40] and the HWENO [41] limiters are quite similar to each
other. When limiting is needed, these limiters combine the element’s solution
with a set of solution estimates obtained from the neighboring elements’
solutions. An oscillation indicator is applied on each solution estimate to
determine the convex nonlinear weights for the reconstruction. Overall, the
WENO limiters are, by design, very similar to WENO reconstruction used in FV
and FD methods. We have found that the WENO limiters are generally robust for
second- and third-order DG, but start producing unphysical solutions at higher
orders. The WENO limiters satisfy our Requirements 4.1(i), 4.1(ii), 4.1(vi),
and 4.1(vii). When supplemented with a positivity-preserving limiter [42], the
WENO schemes are also able to satisfy Requirement 4.1(iii).
In short, none of the above limiters satisfy even half of our Requirements
4.1. Furthermore, they all have parameters that need to be tuned for them to
work well on different problems. This is unacceptable in realistic
astrophysics simulations, where a large variety of complex fluid interactions
are occurring simultaneously in different parts of the computational domain,
and it is impossible to tune parameters such that all fluid interactions are
resolved.
The subcell limiters [10, 11, 12] are much more promising and we will extend
them to meet all the Requirements 4.1. We will focus on the scheme proposed in
[12] since it satisfies most of Requirements 4.1. The basic idea behind the
DG-subcell scheme is to switch to FV or, as proposed here, FD if the high-
order DG solution is inadmissible, either because of excessive oscillations or
violation of physical requirements on the solution. This idea was first
presented in [9], where a spectral scheme was hybridized with a WENO scheme.
In [10, 11] the decision whether to switch to a FV scheme is made before a
time step is taken. In contrast, the scheme presented in [12] undoes the time
step and switches to a FV scheme. The advantage of undoing the time step is
that physical realizability of the solution can be guaranteed as long as the
FV or FD scheme guarantees physical realizability. The scheme of [12] is often
referred to as an a posteriori limiting approach, where the time step is
redone using the more robust method. Given a TCI that does not allow
unphysical oscillations and a high-order positivity-preserving FV/FD method,
the subcell limiters as presented in the literature meet all Requirements
except 4.1(v) (curved hexahedral elements), 4.1(vi) (slow-moving shocks), and
4.1(vii) (moving mesh), limitations that we will address below. The key
feature that makes the DG-subcell scheme a very promising candidate for a
generic, robust, and high-order method is that the limiting is not based on
polynomial behavior alone but considers the physics of the problem. By
switching to a low-order method to guarantee physical realizability, the DG-
subcell scheme guarantees that the resulting numerical solution satisfies the
governing equations, even if only at a low order locally in space and time.
Moreover, the DG-subcell scheme can guarantee that unphysical solutions such
as negative densities never appear.
## 5 Discontinuous Galerkin-finite-difference hybrid method
In this section we present our DG-FD hybrid scheme. The method is designed specifically to address all Requirements 4.1, which in particular makes it a robust high-order shock-capturing method. We first discuss how to
switch between the DG and FD grids. Then we explain how neighboring elements
communicate flux information if one element is using DG while the other is
using FD. Next we review the a posteriori idea and discuss the TCIs we use,
when we apply them, and how we handle communication between elements. Finally,
we discuss the number of subcells to use and provide a new perspective on the
DG-FD hybrid scheme that makes the attractiveness of such a scheme clear. In Appendix A we provide an example of how curved hexahedral elements can be handled.
### 5.1 Projection and reconstruction between DG and FD grids
We will denote the solution on the DG grid by $u_{\breve{\imath}}$ and the
solution on the FD grid by $u_{\underline{i}}$. We need to determine how to
project the solution from the DG grid to the FD grid and how to reconstruct
the DG solution from the FD solution. For simplicity, we assume an isotropic
number of DG collocation points $(N+1)^{d}$ and FD cells $(N_{s})^{d}$. Since
FD schemes evolve the solution value at the cell-center, one method of
projecting the DG solution to the FD grid is to use interpolation. However,
interpolation is not conservative and so we opt for an $L_{2}$ projection. The
$L_{2}$ projection minimizes the integral
$\displaystyle\int_{-1}^{1}\left(u-\underline{u}\right)^{2}\,dx=\int_{-1}^{1}\left(u-\underline{u}\right)^{2}J\,d\xi$
(79)
with respect to $\underline{u}$, where $\underline{u}$ is the solution on the
FD subcells. While we derive the projection matrix in 1d, generalizing to 2d
and 3d is straightforward for our tensor product basis. Substituting the nodal
basis expansion into (79) we obtain
$\displaystyle\int_{-1}^{1}\left[u_{\breve{\imath}}\ell_{\breve{\imath}}(\xi)u_{\breve{\jmath}}\ell_{\breve{\jmath}}(\xi)+u_{\underline{i}}\ell_{\underline{i}}(\xi)u_{\underline{j}}\ell_{\underline{j}}(\xi)-2u_{\underline{i}}\ell_{\underline{i}}(\xi)u_{\breve{\imath}}\ell_{\breve{\imath}}(\xi)\right]J\,d\xi,$
(80)
where $\ell_{\underline{j}}(\xi)$ are the Lagrange interpolating polynomials
on the subcells (i.e.
$\ell_{\underline{j}}(\xi_{\underline{i}})=\delta_{\underline{j}\underline{i}}$).
Varying (80) with respect to the coefficients $u_{\underline{i}}$ and setting
the result equal to zero we get
$\displaystyle\int_{-1}^{1}\left[u_{\underline{j}}\ell_{\underline{i}}(\xi)\ell_{\underline{j}}(\xi)-u_{\breve{\imath}}\ell_{\underline{i}}(\xi)\ell_{\breve{\imath}}(\xi)\right]\delta
u_{\underline{i}}J\,d\xi=0.$ (81)
Since (81) must be true for all variations $\delta u_{\underline{i}}$ we see
that
$\displaystyle\int_{-1}^{1}\left[u_{\underline{j}}\ell_{\underline{i}}(\xi)\ell_{\underline{j}}(\xi)-u_{\breve{\imath}}\ell_{\underline{i}}(\xi)\ell_{\breve{\imath}}(\xi)\right]J\,d\xi=0.$
(82)
By expanding the determinant of the Jacobian on the basis we can simplify (82)
to get
$\displaystyle
u_{\underline{i}}J_{\underline{i}}\int_{-1}^{1}\ell_{\underline{i}}(\xi)\ell_{\underline{j}}(\xi)\,d\xi=u_{\breve{\imath}}J_{\breve{\imath}}\int_{-1}^{1}\ell_{\breve{\imath}}(\xi)\ell_{\underline{j}}(\xi)\,d\xi.$
(83)
Note that expanding $uJ$ on the basis instead of $u$ creates some decrease in
accuracy and can cause aliasing if $uJ$ is not fully resolved by the basis
functions. However, this procedure allows us to cache the projection matrices
to make the method more efficient. Furthermore, expanding the Jacobian on the
basis means interpolation and projection are equal when $N_{s}\geq N+1$. We
solve for $u_{\underline{i}}J_{\underline{i}}$ in (83) by inverting the matrix
$\int_{-1}^{1}\ell_{\underline{i}}(\xi)\ell_{\underline{j}}(\xi)\,d\xi$ and
find that
$\displaystyle u_{\underline{i}}J_{\underline{i}}$
$\displaystyle=\left(\int_{-1}^{1}\ell_{\underline{i}}(\xi)\ell_{\underline{j}}(\xi)\,d\xi\right)^{-1}\int_{-1}^{1}\ell_{\breve{l}}(\xi)\ell_{\underline{j}}(\xi)\,d\xi
u_{\breve{l}}J_{\breve{l}}$ (84)
$\displaystyle=\ell_{\breve{l}}(\xi_{\underline{i}})u_{\breve{l}}J_{\breve{l}}=\mathcal{P}_{\underline{i}\breve{l}}u_{\breve{l}}J_{\breve{l}},$
where $\mathcal{P}_{\underline{i}\breve{l}}$ is the $L_{2}$ projection matrix.
Reconstructing the DG solution from the FD solution is a bit more involved.
Denoting the projection operator by $\mathcal{P}$ and the reconstruction
operator by $\mathcal{R}$, we desire the property
$\displaystyle\mathcal{R}(\mathcal{P}(u_{\breve{\imath}}J_{\breve{\imath}}))=u_{\breve{\imath}}J_{\breve{\imath}}.$
(85)
We also require that the integral of the conserved variables over the subcells
is equal to the integral over the DG element. That is,
$\displaystyle\int_{\Omega}u\,d^{3}x=\int_{\Omega}\underline{u}\,d^{3}x\Longrightarrow\int_{\Omega}uJ\,d^{3}\xi=\int_{\Omega}\underline{u}J\,d^{3}\xi.$
(86)
Since $N_{s}\geq N+1$ we need to solve a constrained linear least squares
problem.
We will denote the weights used to numerically evaluate the integral over the
subcells by $R_{\underline{i}}$ and the weights for the integral over the DG
element by $w_{\breve{l}}$. To find the reconstruction operator we need to solve the
system
$\displaystyle\sum_{\breve{l}}\mathcal{P}_{\underline{i}\breve{l}}u_{\breve{l}}J_{\breve{l}}=$
$\displaystyle u_{\underline{i}}J_{\underline{i}},$ (87)
subject to the constraint
$\displaystyle\sum_{\breve{l}}w_{\breve{l}}u_{\breve{l}}J_{\breve{l}}=$
$\displaystyle\sum_{\underline{i}}R_{\underline{i}}u_{\underline{i}}J_{\underline{i}}.$
(88)
We do so by using the method of Lagrange multipliers. Denoting the Lagrange
multiplier by $\lambda$, we must minimize the functional
$\displaystyle
f=\left(\mathcal{P}_{\underline{i}\breve{l}}u_{\breve{l}}J_{\breve{l}}-u_{\underline{i}}J_{\underline{i}}\right)\left(\mathcal{P}_{\underline{i}\breve{\jmath}}u_{\breve{\jmath}}J_{\breve{\jmath}}-u_{\underline{i}}J_{\underline{i}}\right)-\lambda\left(w_{\breve{l}}u_{\breve{l}}J_{\breve{l}}-R_{\underline{i}}u_{\underline{i}}J_{\underline{i}}\right)$
(89)
with respect to $u_{\breve{l}}J_{\breve{l}}$ and $\lambda$. Doing so we obtain
the Euler-Lagrange equations
$\displaystyle\left(\begin{array}[]{cc}2\mathcal{P}_{\underline{i}\breve{l}}\mathcal{P}_{\underline{i}\breve{\jmath}}&-w_{\breve{l}}\\\
w_{\breve{l}}\delta_{\breve{l}\breve{\jmath}}&0\end{array}\right)\left(\begin{array}[]{c}u_{\breve{\jmath}}J_{\breve{\jmath}}\\\
\lambda\end{array}\right)=\left(\begin{array}[]{c}2\mathcal{P}_{\underline{i}\breve{l}}\\\
R_{\underline{i}}\end{array}\right)\left(\begin{array}[]{c}u_{\underline{i}}J_{\underline{i}}\end{array}\right).$
(97)
Inverting the matrix on the left side of (97), we obtain
$\displaystyle\left(\begin{array}[]{c}u_{\breve{\jmath}}J_{\breve{\jmath}}\\\
\lambda\end{array}\right)=\left(\begin{array}[]{cc}2\mathcal{P}_{\underline{i}\breve{l}}\mathcal{P}_{\underline{i}\breve{\jmath}}&-w_{\breve{l}}\\\
w_{\breve{l}}\delta_{\breve{l}\breve{\jmath}}&0\end{array}\right)^{-1}\left(\begin{array}[]{c}2\mathcal{P}_{\underline{i}\breve{l}}\\\
R_{\underline{i}}\end{array}\right)\left(\begin{array}[]{c}u_{\underline{i}}J_{\underline{i}}\end{array}\right).$
(105)
To make the notation less cumbersome we suppress indices by writing
$w_{\breve{l}}$ as $\vec{w}$ and
$w_{\breve{l}}\delta_{\breve{l}\breve{\jmath}}$ as $\mathbf{w}$. Treating the
matrix as a partitioned matrix, we invert it to find
$\displaystyle\left(\begin{array}[]{cc}2\mathcal{P}\mathcal{P}&-\vec{w}\\\
\mathbf{w}&0\end{array}\right)^{-1}=\left(\begin{array}[]{cc}\Pi-\Pi\vec{w}\mathcal{W}\mathbf{w}\Pi&\Pi\vec{w}\mathcal{W}\\\
-\mathcal{W}\mathbf{w}\Pi&\mathcal{W}\end{array}\right).$ (110)
Here we have defined
$\Pi=(2\mathcal{P}\mathcal{P})^{-1},\qquad\mathcal{W}=\left[\mathbf{w}(2\mathcal{P}\mathcal{P})^{-1}\vec{w}\right]^{-1}.$
(111)
Substituting (110) into (105) and performing the matrix multiplication we get
$\displaystyle\left(\begin{array}[]{c}u_{\breve{\jmath}}J_{\breve{\jmath}}\\\
\lambda\end{array}\right)=\left(\begin{array}[]{c}\Pi
2\mathcal{P}-\Pi\vec{w}\mathcal{W}\mathbf{w}\Pi
2\mathcal{P}+\Pi\vec{w}\mathcal{W}\vec{R}\\\ -\mathcal{W}\mathbf{w}\Pi
2\mathcal{P}+\mathcal{W}\vec{R}\end{array}\right)_{\breve{\jmath}\underline{i}}u_{\underline{i}}J_{\underline{i}},$
(116)
where $\vec{R}$ is short for $R_{\underline{i}}$. We can see that the first
row of (116) gives
$\displaystyle u_{\breve{\jmath}}J_{\breve{\jmath}}=\left\\{\Pi
2\mathcal{P}-\Pi\vec{w}\mathcal{W}\mathbf{w}\Pi
2\mathcal{P}+\Pi\vec{w}\mathcal{W}\vec{R}\right\\}_{\breve{\jmath}\underline{i}}u_{\underline{i}}J_{\underline{i}},$
(117)
and so the reconstruction matrix used to obtain the DG solution from the FD
solution is given by
$\displaystyle R_{\breve{\jmath}\underline{i}}=\left\\{\Pi
2\mathcal{P}-\Pi\vec{w}\mathcal{W}\mathbf{w}\Pi
2\mathcal{P}+\Pi\vec{w}\mathcal{W}\vec{R}\right\\}_{\breve{\jmath}\underline{i}}.$
(118)
To show that the reconstruction matrix (118) satisfies (85) we start by
substituting (118) into (85):
$\displaystyle\mathcal{R}\mathcal{P}uJ$ $\displaystyle=\left\\{\Pi
2\mathcal{P}-\Pi\vec{w}\mathcal{W}\mathbf{w}\Pi
2\mathcal{P}+\Pi\vec{w}\mathcal{W}\vec{R}\right\\}\mathcal{P}uJ$
$\displaystyle=\left\\{\mathbb{1}-\Pi\vec{w}\mathcal{W}\mathbf{w}+\Pi\vec{w}\mathcal{W}\vec{R}\mathcal{P}\right\\}uJ$
$\displaystyle=\left\\{\mathbb{1}-\Pi\vec{w}\mathcal{W}\mathbf{w}+\Pi\vec{w}\mathcal{W}\mathbf{w}\right\\}uJ$
$\displaystyle=uJ,$ (119)
where we used the constraint $\mathbf{w}uJ=\vec{R}\mathcal{P}uJ$. Thus, the
matrix given in (118) is the reconstruction matrix for obtaining the DG
solution from the FD solution on the subcells and is the pseudo-inverse of the
projection matrix. Note that since the reconstruction matrices also only
depend on the reference coordinates, they can be precomputed for all elements
and cached.
We now turn to deriving the integration weights $R_{\underline{i}}$ on the
subcells. One simple option is using the extended midpoint rule:
$\displaystyle\int_{\Omega}\underline{u}\,d^{3}x\approx\Delta\xi\Delta\eta\Delta\zeta\sum_{\underline{i}}\underline{u}_{\underline{i}}J_{\underline{i}},$
(120)
which means $R_{\underline{i}}=\Delta\xi\Delta\eta\Delta\zeta$. However, this
formula is only second-order accurate. To obtain a higher-order approximation,
we need to find weights $R_{\underline{i}}$ that approximate the integral
$\int_{a}^{b}f(x)\,dx\approx\sum_{\underline{i}=0}^{n}R_{\underline{i}}f(x_{\underline{i}}).$
We provide the weights $R_{\underline{i}}$ in Appendix B.
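The following 1d numpy sketch assembles the projection and reconstruction matrices and verifies (85) and the conservation property. It is our illustration: the subcell weights $R_{\underline{i}}$ used here are minimal-norm weights that integrate polynomials of degree $\leq N$ exactly, which is what the identity $\mathcal{R}(\mathcal{P}(uJ))=uJ$ requires; they are stand-ins for, and not necessarily identical to, the weights of Appendix B.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes_weights(N):
    """Degree-N Legendre-Gauss-Lobatto nodes and quadrature weights."""
    cN = np.zeros(N + 1); cN[-1] = 1.0
    x = np.concatenate(([-1.0], leg.legroots(leg.legder(cN)), [1.0]))
    w = 2.0 / (N * (N + 1) * leg.legval(x, cN) ** 2)
    return x, w

def lagrange_eval(nodes, x):
    """Matrix with entry [i, l] = ell_l(x[i]) for the DG nodal basis."""
    M = np.ones((len(x), len(nodes)))
    for l, xl in enumerate(nodes):
        for m, xm in enumerate(nodes):
            if l != m:
                M[:, l] *= (x - xm) / (xl - xm)
    return M

N, Ns = 3, 7                                     # N_s = 2N + 1 subcells
xi_dg, w = lgl_nodes_weights(N)
xi_fd = -1.0 + (2.0 * np.arange(Ns) + 1.0) / Ns  # FD cell centers

# Projection matrix, eq. (84): with J expanded on the basis and Ns >= N + 1,
# the L2 projection reduces to interpolation, P_{il} = ell_l(xi_i).
P = lagrange_eval(xi_dg, xi_fd)

# Subcell weights R_i: minimal-norm weights integrating Legendre polynomials
# of degree <= N exactly (stand-ins for the Appendix B weights).
B = np.stack([leg.legval(xi_fd, np.eye(N + 1)[k]) for k in range(N + 1)])
moments = np.zeros(N + 1); moments[0] = 2.0      # exact integrals of P_k
Rvec = np.linalg.lstsq(B, moments, rcond=None)[0]

# Reconstruction matrix, eqs. (111) and (118); the single conservation
# constraint makes the Schur complement W a scalar in 1d.
Pi = np.linalg.inv(2.0 * P.T @ P)
Wcal = 1.0 / (w @ Pi @ w)
R = (np.eye(N + 1) - Wcal * np.outer(Pi @ w, w)) @ (Pi @ (2.0 * P.T)) \
    + Wcal * np.outer(Pi @ w, Rvec)

assert np.allclose(R @ P, np.eye(N + 1))         # eq. (85): R(P(uJ)) = uJ
assert np.allclose(R.T @ w, Rvec)                # conservation, eq. (88)
```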
### 5.2 Intercell fluxes
One approach to dealing with the intercell fluxes is to use the mortar method
[43, 44, 45, 46]. In the mortar method, the boundary correction terms and
numerical fluxes are computed on a new mesh whose resolution is the greater of
the two elements sharing the boundary. In practice, we have found this not to
be necessary to achieve a stable scheme. This can be understood by noting that
from a shock capturing perspective, violating conservation is only an issue at
discontinuities. Wherever the solution is smooth, conservation violations
converge away. Since the hybrid scheme switches from DG to FD before a shock
enters an element by retaking the time step, and since discontinuities are
inevitably always somewhat smeared in any shock capturing scheme, we have
found that exact conservation is not required between a DG and FD grid.
First, consider an element using FD with a neighbor using DG. In this case, the neighbor's input data to the boundary correction is projected from the DG grid onto the FD grid on the interface. The Riemann solver then computes the boundary correction $G$, which is used directly in the FD scheme. Next, consider an element using DG with a neighbor using FD. The neighbor uses FD reconstruction to obtain its data on the common interface from the subcell data. This reconstructed FD data is then reconstructed to the DG grid, that is, it is transferred from the FD to the DG grid on the interface, and the boundary correction is finally computed on the DG grid. It is this reordering of the reconstruction and projection relative to the Riemann solver that violates conservation, though only at the level of truncation error. Note that the DG and FD solvers must use the same Riemann solver.
### 5.3 The a posteriori idea
In this section we will discuss how the a posteriori idea is implemented. For
now, we will not concern ourselves with which TCI is used, just that one is
used to detect troubled cells. We first compute a candidate solution
$u^{\star}(t^{n+1})$ at time $t^{n+1}$ using an unlimited DG scheme. The TCI
is then used to check whether or not the candidate solution
$u^{\star}(t^{n+1})$ is admissible. The TCI may depend on the candidate
solution, the solution at the current time $u(t^{n})$ within the element, and
the solution in neighboring elements at time $t^{n}$. In order to minimize
communication between elements, the TCI may not depend on the candidate
solution in neighboring elements. If the candidate solution is found to be
admissible by the TCI, we use it as the solution at $t^{n+1}$. That is,
$u(t^{n+1})=u^{\star}(t^{n+1})$. If the candidate solution is inadmissible,
then we redo the time step using the FD subcells. In this case, the solution
at $t^{n}$ is projected onto the subcells, FD reconstruction is performed,
data for the boundary correction/Riemann solver at the element boundaries is
overwritten by projecting the DG solution to the FD grid on the element
boundaries, and the FD scheme takes the time step. Overwriting the FD
reconstructed data $u_{\mathrm{FD}}^{\mathrm{interface}}$ with the projected
DG solution $\mathcal{P}(u_{\mathrm{DG}}^{\mathrm{interface}})$ on the
interfaces makes the scheme conservative when retaking the time step. Since
the scheme is switching from DG to FD, it is likely a discontinuity is present
and conservation is important. We now describe in detail how the algorithm is
implemented in terms of communication patterns and parallelization.
First consider an element using DG. We start by computing the local
contributions to the time derivative, the fluxes, source terms, non-
conservative terms, and flux divergence. We store $\partial_{t}u$, compute
local contributions to the boundary correction $G$, and then send our
contributions to the boundary correction as well as the ghost cells of the
primitive variables used for FD reconstruction to neighboring elements. By
sending both the inputs to the boundary correction and the data for FD
reconstruction, we reduce the number of times communication is necessary. This
is important since generally it is the number of times data is communicated
not the amount of data communicated that causes a bottleneck. Once all
contributions to the Riemann problem are received from neighboring elements,
we compute the boundary correction and compute the candidate solution
$u^{\star}(t^{n+1})$. We then apply the troubled-cell indicator described in $\S$5.4 below. If the cell is marked as troubled, we undo the last time/sub-step and retake it using the FD method. FD reconstruction is performed, but the projected boundary correction from the DG solve is used to ensure conservation with neighboring elements using FD. If the cell was not marked as troubled, we accept the candidate solution as valid and take the next time/sub-step.
The FD solver starts by sending the data necessary for FD reconstruction to
neighboring elements. This means any neighboring elements doing DG need to
reconstruct the inputs into the boundary correction using FD reconstruction.
However, this allows us to maintain a single communication per time step,
unlike traditional limiting strategies which inherently need two
communications per time step. Once all FD reconstruction and boundary
correction data has been received from neighboring elements, a FD time step is
taken. Any DG boundary correction data is projected to the FD grid in order to
reduce conservation violations at element boundaries. With the FD time step
complete, we apply a troubled-cell indicator to see if the DG solution would
be admissible. In both Runge-Kutta and multi-step methods, care is taken not to reintroduce discontinuities into the solution that were present in past time or sub-steps. In the case of Runge-Kutta time stepping we only switch back to DG at the end of a complete time step in order to avoid reconstructing discontinuities in the time stepper history to the DG grid.
When multi-step methods are used, we wait until the TCI has marked enough time
steps as being representable on the DG grid so that any discontinuities have
cleared the time stepper history. For example, when using a third-order multi-
step method the TCI needs to deem three time steps as representable on the DG
grid before we switch to DG.
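The control flow just described can be summarized in the following schematic sketch; the callables are placeholders for the operations described above, not the SpECTRE API.

```python
def hybrid_step(u, using_dg, dg_step, fd_step, project, reconstruct, tci):
    """One a posteriori time step of the DG-FD hybrid. This is a schematic
    sketch: the callables stand in for the operations described in the text.
    Returns (solution, now_using_dg)."""
    if using_dg:
        u_candidate = dg_step(u)             # unlimited DG candidate u*(t^{n+1})
        if tci(u_candidate, u):              # may also use neighbor data at t^n
            return u_candidate, True         # accept the candidate
        # Undo the step: project u(t^n) to the subcells and retake it with FD,
        # using projected DG boundary data to keep the retake conservative.
        return fd_step(project(u)), False
    u_new = fd_step(u)
    # Switch back to DG only if the reconstructed solution passes the TCI
    # (and, for RK/multi-step methods, no shock remains in the history).
    if tci(reconstruct(u_new), u):
        return reconstruct(u_new), True
    return u_new, False
```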
We present a schematic of our DG-FD hybrid scheme in figure 1. The schematic
has the unlimited DG loop on the left and the positivity-preserving FD loop on
the right. Between them are the projection and reconstruction operations that
allow the two schemes to work together and communicate data back and forth.
The scheme starts in the “Unlimited DG Loop” in the top left with a
computation of the volume candidate. If the TCI finds the solution admissible
the “Passed” branch is taken, otherwise the “Failed” branch is taken.
Figure 1: A schematic description of the proposed DG-FD hybrid method. We use superscripts $n$ and $n+1$ to denote variables at time $t^{n}$ and $t^{n+1}$. The unlimited DG loop, projection to and reconstructions from the FD subcells, and the FD loop are boxed to highlight how the hybrid scheme can be split into the unlimited DG and FD schemes with a layer that allows the two to communicate.
### 5.4 Troubled-cell indicators
One of the most important parts of the DG-FD hybrid method is the TCI that
determines when to switch from DG to FD. In [12] a numerical indicator based
on the behavior of the polynomials representing the solution was used as well
as physical indicators such as the density or pressure becoming negative. We
believe that the combination of numerical and physical indicators is crucial,
since it enables the development of non-oscillatory methods that also
guarantee physical realizability of the solution. We will first outline the
numerical indicator in this section. Then we will give a detailed description
of the TCIs we use with the GRMHD system for the initial data, determining
when to switch from DG to FD, and when to switch from FD back to DG.
The numerical indicator used in [12] is a relaxed discrete maximum principle
(RDMP). The RDMP is a two-time-level indicator in the sense that it compares
the candidate at $t^{n+1}$ to the solution at time $t^{n}$. The RDMP requires
that
$\displaystyle\min_{\mathcal{N}}\left[u(t^{n})\right]-\delta\leq
u^{\star}(t^{n+1})\leq\max_{\mathcal{N}}\left[u(t^{n})\right]+\delta,$ (121)
where $\mathcal{N}$ are either the Neumann or Voronoi neighbors plus the
element itself, and $\delta$ is a parameter defined below that relaxes the
discrete maximum principle. When computing $\max(u)$ and $\min(u)$ over an
element using DG, we first project the DG solution to the subcells and then
compute the maximum and minimum over both the DG solution and the projected
subcell solution. However, when an element is using FD we compute the maximum
and minimum over the subcells only. Note that the maximum and minimum values
of $u^{\star}$ are computed in the same manner as those of $u$. The parameter
$\delta$ used to relax the discrete maximum principle is given by:
$\displaystyle\delta=\max\left(\delta_{0},\epsilon\left\\{\max_{\mathcal{N}}\left[u(t^{n})\right]-\min_{\mathcal{N}}\left[u(t^{n})\right]\right\\}\right),$
(122)
where, as in [12], we take $\delta_{0}=10^{-7}$ and $\epsilon=10^{-3}$.
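For reference, a direct transcription of this test into numpy might read:

```python
import numpy as np

def rdmp_passes(u_candidate, u_neighborhood, delta0=1e-7, eps=1e-3):
    """Relaxed discrete maximum principle, eqs. (121)-(122). u_neighborhood
    holds u(t^n) over the element itself and its neighbors."""
    u_min = min(np.min(v) for v in u_neighborhood)
    u_max = max(np.max(v) for v in u_neighborhood)
    delta = max(delta0, eps * (u_max - u_min))          # eq. (122)
    return (u_min - delta <= np.min(u_candidate)
            and np.max(u_candidate) <= u_max + delta)   # eq. (121)
```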
We have found that the RDMP TCI is not able to handle slow-moving shocks. This
is precisely because it is a two-time-level TCI and measures the change in the
solution from one time step to the next. Since discontinuities are inevitably
still somewhat smeared with a FD scheme, a discontinuity moving slowly enough
gradually generates large oscillations inside the element it is entering. The
RDMP, measuring relative changes, does not react quickly enough or at all, and
so the DG method ends up being used in elements with discontinuities. We
demonstrate this below in the simple context of a 1d Burgers step solution
with the mesh moving at nearly the speed of the discontinuity.
Since using the RDMP means we are unable to satisfy Requirements 4.1(vi) and
4.1(vii), we seek a supplementary TCI to deal with these cases. We use the TCI
proposed in [47], which we will refer to as the Persson TCI. This TCI looks at
the falloff of the spectral coefficients of the solution, effectively
comparing the power in the highest mode to the total power of the solution.
Consider a discontinuity sensing quantity $U$, which is typically a scalar but
could be a tensor of any rank. Let $U$ have the 1d spectral decomposition:
$\displaystyle U(x)=\sum_{i=0}^{N}c_{i}P_{i}(x),$ (123)
where in our case $P_{i}(x)$ are Legendre polynomials and $c_{i}$ are the spectral coefficients. We then define a filtered solution $\hat{U}$ as
$\displaystyle\hat{U}(x)=c_{N}P_{N}(x).$ (124)
(When a filter is being used to prevent aliasing-driven instabilities, lower modes need to be included in $\hat{U}$; $\hat{U}$ should generally be built from the highest unfiltered mode.)
The main goal of $\hat{U}$ is to measure how much power is in the highest
mode, which is the mode most responsible for Gibbs phenomenon. In 2d and 3d we
consider $\hat{U}$ on a dimension-by-dimension basis, taking the $L_{2}$ norm
over the extra dimensions, reducing the discontinuity sensing problem to
always being 1d. We define the discontinuity indicator $s^{\Omega}$ as
$\displaystyle
s^{\Omega}=\log_{10}\left(\frac{(\hat{U},\hat{U})}{(U,U)}\right),$ (125)
where $(\cdot,\cdot)$ is an inner product, which we take to be the Euclidean
$L_{2}$ norm (i.e. we do not divide by the number of grid points since that
cancels out anyway).
We must now decide what values of $s^{\Omega}$ are large and therefore mean
the DG solution is inadmissible. For a spectral expansion, we would like the
solution to be at least continuous and so the spectral coefficients should
decay at least as $1/N^{2}$ [48]. Since our sensor depends on the square of
the coefficients, we expect at least $1/N^{4}$ decay for smooth solutions.
With this in mind, we have found that requiring
$\displaystyle s^{\Omega}<s^{e}=-\alpha_{N}\log_{10}(N+1),$ (126)
with $\alpha_{N}=4$ works well for detecting oscillations and switching to the
FD scheme. In order to prevent rapid switching between the DG and FD schemes,
we use $\alpha_{N}+1$ for the TCI when deciding whether to switch back to DG.
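A sketch of the Persson TCI for 1d nodal data follows; it is our illustration, taking the LGL nodes and quadrature weights (e.g., from the helper sketched in §3.1) and using the discrete Legendre transform to obtain the modal coefficients.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def persson_indicator(u_nodal, nodes, weights):
    """Persson discontinuity indicator s^Omega of eq. (125) for 1d nodal data
    on an LGL grid. Modal coefficients come from the discrete Legendre
    transform with the discrete (quadrature) norms gamma_i."""
    N = len(nodes) - 1
    c = np.empty(N + 1)
    for i in range(N + 1):
        Pi_vals = leg.legval(nodes, np.eye(N + 1)[i])
        gamma_i = np.sum(weights * Pi_vals**2)
        c[i] = np.sum(weights * u_nodal * Pi_vals) / gamma_i
    U_hat = c[N] * leg.legval(nodes, np.eye(N + 1)[N])    # eq. (124)
    return np.log10(np.dot(U_hat, U_hat) / np.dot(u_nodal, u_nodal))

def dg_admissible(u_nodal, nodes, weights, alpha_N=4.0):
    """Admissibility criterion of eq. (126)."""
    N = len(nodes) - 1
    return persson_indicator(u_nodal, nodes, weights) < -alpha_N * np.log10(N + 1)
```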
#### 5.4.1 Initial data TCI for GRMHD
We set the initial data on the DG grid, and then check a series of conditions
to see if the initial data is representable on the DG grid. We require:
1. that $\min(\tilde{D})$ over both the DG grid and the subcells is above a user-specified threshold. This is essentially a positivity check on $\tilde{D}$.
2. that $\min(\tilde{\tau})$ over both the DG grid and the subcells is above a user-specified threshold. This is essentially a positivity check on $\tilde{\tau}$.
3. that for all conserved variables their max and min on the subcells satisfy an RDMP compared to the max and min on the DG grid. The tolerances chosen are typically the same as those used for the two-level RDMP during the evolution.
4. that $\tilde{D}$ and $\tilde{\tau}$ pass the Persson TCI.
5. that if $\max\left(\sqrt{\tilde{B}^{i}\delta_{ij}\tilde{B}^{j}}\right)$ is above a user-specified threshold, $\sqrt{\tilde{B}^{i}\delta_{ij}\tilde{B}^{j}}$ satisfies the Persson TCI.
If all requirements are met, then the DG solution is admissible.
#### 5.4.2 TCI on DG grid for GRMHD
On the DG grid we require:
1. that the RDMP TCI passes.
2. that $\min(\tilde{D})$ is above a user-specified threshold. This is essentially a positivity check. This is done over both the DG and projected subcell solution.
3. that $\min(\tilde{\tau})$ is above a user-specified threshold. This is essentially a positivity check. This is done over both the DG and projected subcell solution.
4. that $\tilde{B}^{2}\leq\left(1-\epsilon_{B}\right)2\tilde{\tau}\sqrt{\gamma}$ at all grid points in the DG element.
5. that primitive recovery is successful.
6. that if we are in the atmosphere, we stay on DG. Since we have now recovered the primitive variables, we are able to say with certainty whether or not we are in the atmosphere.
7. that $\tilde{D}$ and $\tilde{\tau}$ pass the Persson TCI.
8. that if $\max\left(\sqrt{\tilde{B}^{i}\delta_{ij}\tilde{B}^{j}}\right)$ is above a user-specified threshold, $\sqrt{\tilde{B}^{i}\delta_{ij}\tilde{B}^{j}}$ satisfies the Persson TCI.
If all requirements are met, then the DG solution is admissible.
#### 5.4.3 TCI on FD grid for GRMHD
In order to switch to DG from FD, we require:
1. that the RDMP TCI passes.
2. that no conserved variable fixing was necessary. If the conserved variables needed to be adjusted in order to recover the primitive variables, then even the FD solution is inaccurate.
3. that $\min(\tilde{D})$ is above a user-specified threshold. This is essentially a positivity check.
4. that $\min(\tilde{\tau})$ is above a user-specified threshold. This is essentially a positivity check.
5. that $\tilde{D}$ and $\tilde{\tau}$ pass the Persson TCI.
6. that if $\max\left(\sqrt{\tilde{B}^{i}\delta_{ij}\tilde{B}^{j}}\right)$ is above a user-specified threshold, $\sqrt{\tilde{B}^{i}\delta_{ij}\tilde{B}^{j}}$ satisfies the Persson TCI.
If all the above checks are satisfied, then the numerical solution is
representable on the DG grid.
### 5.5 On the number of subcells to use
The only hard requirement on the number of subcells used in 1d is $N_{s}\geq
N+1$ so that there are at least as many degrees of freedom to represent the
solution on the subcells as there are in the DG scheme. However, the more
optimal choice, as is argued in [12], is $N_{s}=2N+1$. This arises from comparing the time step size allowed when using a DG method, (77), to that allowed when using a FV or FD method, (78): with $N_{s}$ subcells of size $h/N_{s}$ inside an element of size $h$, the FD bound (78) becomes $\Delta t\leq h/(dN_{s}|\lambda_{\max}|)$, which matches the DG bound (77) exactly when $N_{s}=2N+1$. Choosing $N_{s}>2N+1$ is not desirable since it would force smaller time steps when switching from DG to FD. We refer the reader to §4.5 of [12] for a more detailed discussion of the optimal number of subcells to use.
### 5.6 Perspective on DG-FD hybrid method
Given the complexity of the DG-FD hybrid scheme and the relative expense of FD
schemes compared to the DG scheme, the DG-FD hybrid scheme might seem like a
poor choice. We argue that this is not the case and that the hybrid scheme is
actually a good choice. Consider needing a resolution of $130^{d}$ (very modest) to solve a problem using a FD scheme to a desired accuracy. The equivalent DG-FD hybrid scheme would use ten seventh-order ($N=6$) elements per dimension, each with $N_{s}=2N+1=13$ subcells, so that in the worst case, where there are large discontinuities everywhere in the domain, the scheme is as accurate as the FD scheme. However, wherever the solution is smooth enough to be representable using DG, roughly $2^{d}$ fewer
grid points are necessary. In 3d this makes a significant difference,
especially if the numerical solution is representable using DG in much of the
computational domain. For example, consider the case where half the elements
are using FD. In this case the DG-FD hybrid scheme uses ${}\sim 0.58$ times as
many grid points as the equivalent FD scheme. Furthermore, the DG scheme only
needs to solve the Riemann problem on element boundaries, and does not need to
perform the expensive reconstruction step necessary in FD and FV schemes.
Thus, the decrease in the number of grid points is a lower bound on the
performance improvement the DG-FD hybrid scheme has to offer. Ultimately, we
believe that the more useful view of the DG-FD hybrid scheme is that it is a
FD scheme that uses DG as a way to compress the representation of the solution
in smooth regions in order to increase efficiency.
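To make the grid-point count explicit: with ten seventh-order ($N=6$) elements per dimension, the DG grid has $70^{3}$ points and the FD grid $130^{3}$, so if half the elements use FD the hybrid scheme uses
$\displaystyle\frac{\tfrac{1}{2}\left(70^{3}+130^{3}\right)}{130^{3}}\approx 0.58$
times as many grid points as the equivalent FD scheme, which is the factor quoted above.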
## 6 Numerical results
### 6.1 Burgers equation: a slowly moving discontinuity
While extremely simple, Burgers equation allows us to easily test how well the
RDMP and Persson TCI are able to handle slowly-moving discontinuities. Burgers
equation is given by
$\displaystyle\partial_{t}U+\partial_{x}\left(\frac{U^{2}}{2}\right)=0.$ (127)
Whenever we use the Persson TCI we use the evolved variable $U$ as the
discontinuity sensing quantity.
We evolve the solution
$\displaystyle U(x,t)=\left\\{\begin{array}[]{ll}2&\mathrm{if}\;x\leq
0.25+1.5t\\\ 1&\mathrm{otherwise}\end{array}\right.$ (130)
on a moving mesh. The mesh has a velocity $v_{g}^{x}=1.4$, while the
discontinuity moves at speed $1.5$. Thus, the discontinuity moves relatively
slowly across the grid, allowing us to test how well each TCI handles such
discontinuities. We integrate (127) using a third-order Adams-Bashforth time
stepper, on an initial domain $x\in[-1,1]$ with eight P5 elements. We compare
the RDMP TCI and the Persson TCI in figure 2 at a final time of $t_{f}=1.5$.
The top row uses a time step of $\Delta t=2.5\times 10^{-3}$ and the bottom
row uses $\Delta t=5\times 10^{-4}$. In all cases a third-order weighted
compact nonlinear scheme is used for FD reconstruction. We use a Rusanov (also known as local Lax-Friedrichs) numerical flux/boundary correction.
The leftmost plot in the top row of figure 2 uses the Persson TCI with
$\alpha_{N}=3$, the center plot in the top row uses the Persson TCI with
$\alpha_{N}=4$, and the rightmost plot in the top row uses the RDMP TCI. We
see that, in agreement with what is expected from a convergence analysis of
Legendre polynomials [48], using $\alpha_{N}=4$ to switch to the FD scheme is
most robust as an indicator. We see that both the Persson TCI with
$\alpha_{N}=3$ and the RDMP TCI struggle to switch to the FD scheme quickly
enough to prevent unphysical oscillations from entering the solution. In the
bottom row of figure 2 we use a smaller time step size, $\Delta t=5\times
10^{-4}$, to make the relative change in $U$ from one time step to the next
smaller. From left to right we show results using the Persson TCI with
$\alpha_{N}=4$, the RDMP TCI, and the Persson TCI with $\alpha_{N}=3$
alongside the RDMP TCI. In general, the RDMP is much better at preventing
oscillations from appearing on the left of the discontinuity, while the
Persson TCI does a better job on the right of the discontinuity. While
interesting, it is unclear how this translates to more complex systems and
flows. Although we cannot completely discount the RDMP, the Persson indicator has an advantage in all cases, and using both TCIs together gives the best results. We ran the Persson TCI with $\alpha_{N}=4$ alongside the RDMP
TCI for the smaller time step case and found that no unphysical oscillations
are visible, just as in the top middle plot of figure 2. We have verified that
our results are the same whether using the SSP RK3 time stepper or the Adams-
Bashforth time stepper.
Figure 2: The step Burgers problem at $t_{f}=1.5$ using a DG-P5 scheme
hybridized with a WCNS3 FD scheme. A third-order Adams-Bashforth time stepper
is used and the mesh is moving at velocity $v_{g}^{x}=1.4$. Results in the top
row are obtained using a time step size of $\Delta t=2.5\times 10^{-3}$ and in
the bottom row using a time step size of $\Delta t=5\times 10^{-4}$. Going
from left to right in the top row, the TCI used is the Persson TCI with
$\alpha_{N}=3$, the Persson TCI with $\alpha_{N}=4$, and the RDMP TCI. Going
from left to right in the bottom row, the TCI used is the Persson TCI with
$\alpha_{N}=3$, the RDMP TCI, and Persson TCI with $\alpha_{N}=4$ along with
the RDMP TCI.
### 6.2 General relativistic magnetohydrodynamics
In this section we present results of our DG-FD hybrid scheme when applied to
various GRMHD test problems. The final test problem in this section is that of
a single magnetized neutron star, demonstrating that our hybrid scheme is
capable of simulating interesting relativistic astrophysics scenarios. All
simulations use an HLL Riemann solver and a third-order strong-stability
preserving Runge-Kutta time stepper [26]. We also reconstruct the variables
$\\{\rho,p,Wv^{i},B^{i},\Phi\\}$ using a monotonised central reconstruction scheme. We choose the resolution for the different problems by having the number of FD grid points be approximately equal to the number of grid points used by current production FD codes. Unless stated otherwise, we do not monitor $\tilde{B}^{i}$ with the Persson indicator since, in most of the test cases we consider, the magnetic field has discontinuities at or near the same places as the fluid variables. All simulations use SpECTRE
v2021.09.11 [17] and the input files are available as part of the arXiv
version of this paper.
#### 6.2.1 1d Smooth Flow
We consider a simple 1d smooth flow problem to test which of the limiters and
troubled-cell indicators are able to solve a smooth problem without degrading
the order of accuracy. A smooth density perturbation is advected across the
domain with a velocity $v^{i}$. The analytic solution is given by
$\displaystyle\rho$ $\displaystyle=1+0.7\sin[k^{i}(x^{i}-v^{i}t)],$ (131)
$\displaystyle v^{i}$ $\displaystyle=(0.8,0,0),$ (132) $\displaystyle k^{i}$
$\displaystyle=(1,0,0),$ (133) $\displaystyle p$ $\displaystyle=1,$ (134)
$\displaystyle B^{i}$ $\displaystyle=(0,0,0),$ (135)
and we close the system with an adiabatic equation of state,
$\displaystyle p=\rho\epsilon\left(\Gamma-1\right),$ (136)
where $\Gamma$ is the adiabatic index, which we set to 1.4. We use a domain
given by $[0,2\pi]^{3}$, and apply periodic boundary conditions in all
directions. The time step size is $\Delta t=2\pi/5120$ so that the spatial
discretization error is larger than the time stepping error for all
resolutions that we use.
Table 1: The errors and local convergence order for the smooth flow problem
using different limiting strategies. Note that the limiter is not applied if
the troubled-cell indicator determines the DG solution to be valid. We observe
the expected convergence rate except when the solution is underresolved
because too few elements are used or when the error is no longer dominated by
the truncation error of the DG scheme.
Method | $N_{x}$ | $L_{2}(\mathcal{E}(\rho))$ | $L_{2}$ Order
---|---|---|---
DG-FD P3 | 2 | 3.50983e-1 |
 | 4 | 1.22554e-1 | 1.52
 | 8 | 3.72266e-4 | 8.36
 | 16 | 1.61635e-5 | 4.53
 | 32 | 9.76927e-7 | 4.05
DG-FD P4 | 2 | 3.62426e-1 |
 | 4 | 3.79759e-4 | 9.90
 | 8 | 1.15193e-5 | 5.04
 | 16 | 3.73055e-7 | 4.95
DG-FD P5 | 2 | 3.45679e-1 |
 | 4 | 2.23822e-5 | 13.91
 | 8 | 3.18504e-7 | 6.13
 | 16 | 5.08821e-9 | 5.97
We perform convergence tests at different DG orders and present the results in
table 1. We show both the $L_{2}$ norm of the error and the convergence rate.
The $L_{2}$ norm is defined as
$\displaystyle L_{2}(u)=\sqrt{\frac{1}{M}\sum_{i=0}^{M-1}u_{i}^{2}},$ (137)
where $M$ is the total number of grid points and $u_{i}$ is the value of $u$
at grid point $i$ and the convergence order is given by
$L_{2}\;\mathrm{Order}=\log_{2}\left[\frac{L_{2}(\mathcal{E}_{N_{x}/2})}{L_{2}(\mathcal{E}_{N_{x}})}\right].$
(138)
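For example, the order reported for DG-FD P3 between $N_{x}=4$ and $N_{x}=8$ follows from (138):

```python
import numpy as np

# Reproducing one entry of table 1 from eq. (138): the DG-FD P3 order
# between N_x = 4 and N_x = 8.
err_coarse, err_fine = 1.22554e-1, 3.72266e-4
print(f"{np.log2(err_coarse / err_fine):.2f}")   # prints 8.36
```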
We find that when very few elements are used, the TCI decides the solution is not well represented on the DG grid. Although the DG method is stable if we disable the FD scheme completely, we find it acceptable that the TCI switches to FD in order to ensure robustness. Ultimately, we observe the expected rate of convergence for smooth problems.
#### 6.2.2 1d Riemann Problems
One-dimensional Riemann problems are a standard test for any scheme that must
be able to handle shocks. We will focus on the first Riemann problem (RP1) of
[49]. The setup is given in table 2. We perform simulations using an SSP RK3
method with $\Delta t=5\times 10^{-4}$. In the left panel of figure 3 we show
the rest mass density $\rho$ at $t_{f}=0.4$ for a simulation using 64 P5 DG-FD
hybrid elements as well as a simulation using 128 P2 elements. The thin black
curve is the analytic solution obtained using the Riemann solver of [50]. An
ideal fluid equation of state (136) is used.
Table 2: The initial conditions for Riemann Problem 1 of [49]. The domain is
$x\in[-0.5,0.5]$, the final time is $t_{f}=0.4$, and an ideal fluid equation
of state is used with an adiabatic index of 2.
| $\rho$ | $p$ | $v^{i}$ | $B^{i}$
---|---|---|---|---
$x<0$ | 1.000 | 1.0 | $(0,0,0)$ | $(0.5,\phantom{-}1,0)$
$x\geq 0$ | 0.125 | 0.1 | $(0,0,0)$ | $(0.5,-1,0)$
Figure 3: The left panel shows a comparison of the results of the Riemann
Problem 1 of [49] using a P5 (64 elements) and P2 (128 elements) DG-FD hybrid
scheme. The right panel shows the difference between the analytic and
numerical solution at $t=0.4$ for the DG-FD P2 scheme (solid light blue curve)
and the DG-FD P5 scheme (dashed purple curve). The P5 scheme is able to
resolve the discontinuities just as well as the P2 scheme, while also
admitting fewer unphysical oscillations away from the discontinuities.
Impressively, the DG-FD hybrid scheme actually has fewer oscillations when
going to higher order. In the right panel of figure 3 we plot the error of the
numerical solution using a P2 DG-FD scheme with 128 elements and a P5 DG-FD
scheme with 64 elements. We see that the P5 hybrid scheme actually has fewer
oscillations than the P2 scheme, while resolving the discontinuities equally
well. We attribute this to the troubled-cell indicators triggering earlier
when a higher polynomial degree is used since discontinuities entering an
element rapidly dump energy into the high modes. While the optimal order is
almost certainly problem-dependent, given that current numerical relativity
codes are mostly second order, achieving sixth order in the smooth regions is
promising.
#### 6.2.3 2d Cylindrical Blast Wave
A standard test problem for GRMHD codes is the cylindrical blast wave [51, 52]
where a magnetized fluid initially at rest in a constant magnetic field along
the $x$-axis is evolved. The fluid obeys the ideal fluid equation of state
with $\Gamma=4/3$. The fluid begins in a cylindrically symmetric
configuration, with hot, dense fluid in the region with cylindrical radius
$r<0.8$ surrounded by a cooler, less dense fluid in the region $r>1$. The
initial density $\rho$ and pressure $p$ of the fluid are
$\displaystyle\rho(r<0.8)$ $\displaystyle=10^{-2},$ (139)
$\displaystyle\rho(r>1.0)$ $\displaystyle=10^{-4},$ (140) $\displaystyle
p(r<0.8)$ $\displaystyle=1,$ (141) $\displaystyle p(r>1.0)$
$\displaystyle=5\times 10^{-4}.$ (142)
In the region $0.8\leq r\leq 1$, the solution transitions continuously and
exponentially (i.e., transitions such that the logarithms of the pressure and
density are linear functions of $r$). The fluid begins threaded with a uniform
magnetic field with Cartesian components
$(B^{x},B^{y},B^{z})=(0.1,0,0).$ (143)
The magnetic field causes the blast wave to expand non-axisymmetrically. For
all simulations we use a time step size $\Delta t=10^{-2}$ and an SSP RK3 time
integrator.
Figure 4: Cylindrical blast wave $\rho$ at $t=4$ showing the results of using the DG-FD hybrid scheme with $64\times 64$ P2 elements (left) and
$32\times 32$ P5 elements (right). The regions surrounded by black squares
have switched from DG to FD.
We evolve the blast wave to time $t=4.0$ on a grid of $64\times 64\times 1$
elements covering a cube of extent $[-6,6]^{3}$ using a P2 DG-FD scheme and on
a grid of $32\times 32\times 1$ using a P5 DG-FD scheme. With these choices
the resolution when using FD everywhere is comparable to what FD codes use for
this test. We apply periodic boundary conditions in all directions, since the
explosion does not reach the outer boundary by $t=4.0$. Figure 4 shows the
logarithm of the rest-mass density at time $t=4.0$, at the end of evolutions
using the P2 (left) and P5 (right) DG-FD schemes. The increased resolution of
a high-order scheme is clear when comparing the P2 and P5 solutions in the
interior region of the blast wave. It is not clear that going to even higher
order would be useful in this problem since to maintain the same time step
size we would need to decrease the number of elements. Furthermore, as we can
already see by comparing the P2 and P5 schemes, a greater area of the P5
solution is using FD, though it is difficult to determine what overall effect
this has, especially since high-order FD schemes could be used.
#### 6.2.4 2d Magnetic Rotor
The second 2-dimensional test problem we study is the magnetic rotor problem
originally proposed for non-relativistic MHD [53, 54] and later generalized to
the relativistic case [55, 56]. A rapidly rotating dense fluid cylinder is
inside a lower density fluid, with a uniform pressure and magnetic field
everywhere. The magnetic braking will slow down the rotor over time, with an
approximately 90 degree rotation by the final time $t=0.4$. We use a domain of $[-0.5,0.5]^{3}$, a time step size $\Delta t=10^{-3}$, and an SSP RK3 time
integrator. An ideal fluid equation of state with $\Gamma=5/3$ is used, and
the following initial conditions are imposed:
$\displaystyle p$ $\displaystyle=1$ (144) $\displaystyle B^{i}$
$\displaystyle=(1,0,0)$ (145) $\displaystyle v^{i}$
$\displaystyle=\left\\{\begin{array}[]{ll}(-y\Omega,x\Omega,0),&\mathrm{if}\;r\leq
R_{\mathrm{rotor}}=0.1\\\ (0,0,0),&\mathrm{otherwise},\end{array}\right.$
(148) $\displaystyle\rho$
$\displaystyle=\left\\{\begin{array}[]{ll}10,&\mathrm{if}\;r\leq
R_{\mathrm{rotor}}=0.1\\\ 1,&\mathrm{otherwise},\end{array}\right.$ (151)
with angular velocity $\Omega=9.95$. The choice of $\Omega$ and
$R_{\mathrm{rotor}}=0.1$ guarantees that the maximum velocity of the fluid
(0.995) is less than the speed of light. We impose periodic boundary
conditions.
Figure 5: Magnetic rotor $\rho$ at $t=0.4$ showing the results of using
the DG-FD hybrid scheme with $64\times 64$ P2 elements (left) and $32\times
32$ P5 elements (right). The regions surrounded by black squares have switched
from DG to FD.
We show the results of our evolutions using $64\times 64$ P2 elements (left)
and $32\times 32$ P5 elements (right) in figure 5. Again, the DG-FD hybrid
scheme is robust and accurate, though a fairly large number of cells end up
being marked as troubled in this problem. However, using FD in more elements
is not something we view as inherently bad, since we favor robustness in
realistic simulations. The process of tweaking parameters and restarting
simulations is both time consuming and frustrating, and so giving up some
efficiency for robustness is acceptable to us.
#### 6.2.5 2d Magnetic Loop Advection
The last 2-dimensional test problem we study is the magnetic loop advection problem [57]. A magnetic loop is advected through the domain until it returns
to its starting position. We use an initial configuration very similar to [24,
58, 59, 60], where
$\displaystyle\rho=1,$ (152)
$\displaystyle p=3,$ (153)
$\displaystyle v^{i}=(1/1.2,1/2.4,0),$ (154)
$\displaystyle B^{x}=\begin{cases}-A_{\mathrm{loop}}y/R_{\mathrm{in}},&\mathrm{if}\;r\leq R_{\mathrm{in}}\\-A_{\mathrm{loop}}y/r,&\mathrm{if}\;R_{\mathrm{in}}<r<R_{\mathrm{loop}}\\0,&\mathrm{otherwise},\end{cases}$ (158)
$\displaystyle B^{y}=\begin{cases}A_{\mathrm{loop}}x/R_{\mathrm{in}},&\mathrm{if}\;r\leq R_{\mathrm{in}}\\A_{\mathrm{loop}}x/r,&\mathrm{if}\;R_{\mathrm{in}}<r<R_{\mathrm{loop}}\\0,&\mathrm{otherwise},\end{cases}$ (162)
with $R_{\mathrm{loop}}=0.3$, $R_{\mathrm{in}}=0.001$, and an ideal gas
equation of state with $\Gamma=5/3$. The computational domain is
$[-0.5,0.5]^{3}$ with $64\times 64\times 1$ elements and periodic boundary
conditions being applied everywhere. The final time for one period is $t=2.4$.
For all simulations we use a time step size $\Delta t=10^{-3}$ and an SSP RK3
time integrator. Since the fluid variables are smooth in this problem, we
apply the Persson TCI to the Euclidean magnitude of $\tilde{B}^{i}$ in
elements where the maximum value of the magnitude is above $10^{-5}$.
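A minimal sketch of the loop field of equations (158) and (162), together
with the magnitude threshold used to gate the Persson TCI, is shown below.
The amplitude `A_loop` is not specified in the text above, so the value used
here is an assumption for illustration, as are the grid layout and variable
names.

```python
import numpy as np

# Cell-centered grid on [-0.5, 0.5]^2 (illustrative resolution).
N = 256
x = np.linspace(-0.5, 0.5, N, endpoint=False) + 0.5 / N
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2)

A_loop = 1.0e-3  # assumed amplitude; the text does not give this value
R_loop, R_in = 0.3, 0.001

# Piecewise field of eqs. (158) and (162): rigid profile inside R_in,
# 1/r falloff out to R_loop, zero outside.
scale = np.where(r <= R_in, A_loop / R_in,
                 np.where(r < R_loop, A_loop / np.maximum(r, 1e-30), 0.0))
Bx, By = -scale * Y, scale * X

# The Persson TCI is applied only in elements where the field magnitude
# exceeds 10^{-5}; here we evaluate the analogous pointwise criterion.
apply_tci = np.sqrt(Bx**2 + By**2) > 1e-5
```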
Figure 6: $B^{x}$ for the magnetic loop advection problem. The left half of
each plot is at the initial time, while the right half is after one period
($t=2.4$). We show the results of using the DG-FD hybrid scheme with
$64\times 64$ P2 elements (left) and $32\times 32$ P5 elements (right). The
regions surrounded by black squares have switched from DG to FD.
In figure 6 we plot the magnetic field component $B^{x}$ at $t=0$ on the left
half of each plot and after one period $t=2.4$ on the right half of each plot.
In the left panel of figure 6 we show the result using a P2 DG-FD scheme and
in the right panel of figure 6 using a P5 DG-FD scheme. The P5 scheme resolves
the smooth parts of the solution more accurately than the P2 scheme, as is to
be expected. Finally, in figure 7 we plot the divergence cleaning field $\Phi$
at the final time $t=2.4$. We do not observe any artifacts appearing in the
divergence cleaning field at the interfaces between the DG and FD solvers,
demonstrating that the divergence cleaning properties of the system are not
adversely affected by using two different numerical methods.
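For reference, the mechanism at work is the mixed hyperbolic/parabolic
cleaning of [23]; in its flat-space form (the GRMHD version evolved here
additionally couples to the metric), the induction equation is augmented by
the scalar $\Phi$ via

$\displaystyle\partial_{t}B^{i}+\partial^{i}\Phi=0\;\;\text{(cleaning terms only)},\qquad\partial_{t}\Phi+c_{h}^{2}\,\partial_{i}B^{i}=-\frac{c_{h}^{2}}{c_{p}^{2}}\,\Phi,$

so divergence errors are advected away at the speed $c_{h}$ and damped on a
timescale set by $c_{p}^{2}/c_{h}^{2}$.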
Figure 7: The divergence cleaning field $\Phi$ for the magnetic loop advection
problem after one period ($t=2.4$). We show the results of using the DG-FD
hybrid scheme with $64\times 64$ P2 elements (left) and $32\times 32$ P5
elements (right). The regions surrounded by black squares have switched from
DG to FD.
#### 6.2.6 TOV star
A demanding 3d test case in general relativity is the evolution of a
Tolman-Oppenheimer-Volkoff (TOV) star [61, 62]. In this section we study evolutions
of both non-magnetized and magnetized TOV stars. We adopt the same
configuration as in [63]. Specifically, we use a polytropic equation of state,
$\displaystyle p(\rho)=K\rho^{\Gamma}$ (163)
with the polytropic exponent $\Gamma=2$, polytropic constant $K=100$, and a
central density $\rho_{c}=1.28\times 10^{-3}$. For the magnetized case, we
choose a magnetic field given by a vector potential
$\displaystyle A_{\phi}=A_{b}\left(x^{2}+y^{2}\right)\max\left(p-p_{\mathrm{cut}},0\right)^{n_{s}},$ (164)
with $A_{b}=2500$, $p_{\mathrm{cut}}=0.04p_{\max}$, and $n_{s}=2$. This
configuration yields a magnetic field strength in CGS units of
$\displaystyle|B_{\mathrm{CGS}}|=\sqrt{b^{2}}\times 8.352\times 10^{19}\,\mathrm{G},$ (165)
with a maximum value of $|B_{\mathrm{CGS}}|=1.03\times 10^{16}\,\mathrm{G}$. The magnetic field is
only a perturbation to the dynamics of the star, since
$(p_{\mathrm{mag}}/p)(r=0)\sim 5\times 10^{-5}$. However, evolving the field
stably and accurately can be challenging. The magnetic field corresponding to
the vector potential in (164) in the magnetized region is given by
$\displaystyle B^{x}=\frac{1}{\sqrt{\gamma}}\frac{xz}{r}A_{b}n_{s}(p-p_{\mathrm{cut}})^{n_{s}-1}\partial_{r}p,$ (166)
$\displaystyle B^{y}=\frac{1}{\sqrt{\gamma}}\frac{yz}{r}A_{b}n_{s}(p-p_{\mathrm{cut}})^{n_{s}-1}\partial_{r}p,$ (167)
$\displaystyle B^{z}=-\frac{A_{b}}{\sqrt{\gamma}}\left[2(p-p_{\mathrm{cut}})^{n_{s}}+\frac{x^{2}+y^{2}}{r}n_{s}(p-p_{\mathrm{cut}})^{n_{s}-1}\partial_{r}p\right],$ (168)
and at $r=0$ is
$\displaystyle B^{x}=0,$ (169)
$\displaystyle B^{y}=0,$ (170)
$\displaystyle B^{z}=-\frac{A_{b}}{\sqrt{\gamma}}2(p-p_{\mathrm{cut}})^{n_{s}}.$ (171)
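A minimal sketch of evaluating equations (166)-(171) pointwise is given
below. The function name and call signature are ours; `p`, `dp_dr`, and
`sqrt_gamma` stand for the pressure, its radial derivative, and
$\sqrt{\gamma}$ supplied by a TOV solution, which we do not reproduce here.

```python
import numpy as np

def seed_magnetic_field(x, y, z, p, dp_dr, sqrt_gamma, p_max,
                        A_b=2500.0, n_s=2):
    """Evaluate eqs. (166)-(168) for the magnetic field of the vector
    potential (164); reduces to eqs. (169)-(171) at r = 0 and vanishes
    where the pressure is below the cutoff."""
    p_cut = 0.04 * p_max
    r = np.maximum(np.sqrt(x**2 + y**2 + z**2), 1e-30)  # guard r = 0
    dp = np.maximum(p - p_cut, 0.0)
    radial = A_b * n_s * dp**(n_s - 1) * dp_dr / r
    Bx = x * z * radial / sqrt_gamma                    # eq. (166)
    By = y * z * radial / sqrt_gamma                    # eq. (167)
    Bz = -(A_b * 2.0 * dp**n_s / sqrt_gamma
           + (x**2 + y**2) * radial / sqrt_gamma)       # eq. (168)
    return Bx, By, Bz
```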
We perform all evolutions in full 3d with no symmetry assumptions and in the
Cowling approximation, i.e., we do not evolve the spacetime. To match the
resolution usually used in FD/FV numerical relativity codes, we use a domain
$[-20,20]^{3}$ with a base resolution of six P5 DG elements. This choice means
we have approximately 32 FD grid points covering the star’s diameter at the
lowest resolution, 64 when using twelve P5 elements, and 128 grid points when
using 24 P5 elements. In all cases we set $\rho_{\mathrm{atm}}=10^{-15}$ and
$\rho_{\mathrm{cutoff}}=1.01\times 10^{-15}$. We do not run any simulations
using a P2 DG-FD hybrid scheme since the P5 scheme has proven to be more
accurate and robust in all test cases so far.
Figure 8: A plot of $\max[\rho(t)]/\max[\rho(0)]$ at three different
resolutions (left panel) for the non-magnetized TOV star. The 6-element
simulation uses FD throughout the interior of the star, while the 12- and
24-element simulations use DG. The maximum density in the 6-element case
drifts down at early times because of the low resolution and the relatively
low accuracy of using FD at the center. The power spectrum of the maximum
density for the three different resolutions is plotted in the right panel.
The vertical dashed lines correspond to the known frequencies in the Cowling
approximation. When the high-order DG scheme is used, more oscillation
frequencies are resolved.
In the left panel of figure 8 we show the maximum rest mass density over the
grid divided by the maximum density at $t=0$ for the non-magnetized TOV star.
The 6-element simulation uses FD throughout the interior of the star because
the corners of the inner elements are in vacuum. In comparison, the 12- and
24-element simulations use the unlimited P5 DG solver throughout the star
interior. The increased “noise” in the 12- and 24-element data actually stems
from the higher oscillation modes in the star that are induced by numerical
error. In the right panel of figure 8 we plot the power spectrum using data at
the three different resolutions. The 6-element simulation resolves only one
mode, the 12-element simulation resolves two modes well, and the 24-element
simulation resolves three modes well.
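A minimal sketch of how such a spectrum can be computed from the time series
is given below; the function name, the windowing, and the normalization are
our choices, not a description of the post-processing actually used.

```python
import numpy as np

def density_oscillation_spectrum(t, rho_max):
    """Power spectrum of max[rho(t)]/max[rho(0)], as plotted in figure 8.

    t must be uniformly spaced; the mean is subtracted and a Hann window
    applied so the stationary offset and the finite record length do not
    swamp the stellar oscillation peaks."""
    signal = rho_max / rho_max[0]
    signal = (signal - signal.mean()) * np.hanning(len(signal))
    power = np.abs(np.fft.rfft(signal))**2
    freqs = np.fft.rfftfreq(len(signal), d=t[1] - t[0])
    return freqs, power
```

The mode frequencies then appear as peaks that can be compared against the
known Cowling-approximation values (the dashed lines in figures 8 and 9).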
Figure 9: A plot of $\max[\rho(t)]/\max[\rho(0)]$ at three different
resolutions (left panel) for the magnetized TOV star. The 6-element
simulation uses FD throughout the interior of the star, while the 12- and
24-element simulations use DG. The maximum density in the 6-element case
drifts down at early times because of the low resolution and the relatively
low accuracy of using FD at the center. The power spectrum of the maximum
density for the three different resolutions is plotted in the right panel.
The vertical dashed lines correspond to the known frequencies in the Cowling
approximation. When the high-order DG scheme is used, more oscillation
frequencies are resolved.
We show the normalized maximum rest mass density over the grid for the
magnetized TOV star in the left panel of figure 9. Overall the results are
nearly identical to the non-magnetized case. One notable difference is the
decrease in the 12-element simulation between 7.5 ms and 11 ms, which occurs
because the code switches from DG to FD at the center of the star at 7.5 ms
and back to DG at 11 ms. Nevertheless, the frequencies are resolved just as well
for the magnetized star as for the non-magnetized case, as can be seen in the
right panel of figure 9 where we plot the power spectrum. Specifically, we are
able to resolve the three largest modes with our P5 DG-FD hybrid scheme. To
the best of our knowledge, these are the first simulations of a magnetized
neutron star using high-order DG methods.
## 7 Conclusions
In this paper we gave a detailed description of our DG-FD hybrid method that
can successfully solve challenging relativistic astrophysics test problems
like the simulation of a magnetized neutron star. Our method combines an
unlimited DG solver with a conservative FD solver. Alternatively, this can be
thought of as taking a standard FD code in numerical relativity and
compressing the data to a DG grid wherever the solution is smooth. The DG
solver is more efficient than the FD solver since no reconstruction is
necessary and fewer Riemann problems need to be solved. In theory a speedup of
about eight is achievable; however, our code SpECTRE [17] is not yet fully
optimized, and in practice we find a speedup of about two to three when
comparing the hybrid method to using FD everywhere. The basic idea of the
hybrid scheme is similar to [10, 11, 12, 13]. An unlimited DG solver is used
wherever a troubled-cell indicator deems the DG solution admissible, while a
FD solver is used elsewhere. Unlike classical limiting strategies like WENO
which attempt to filter out unphysical oscillations, the hybrid scheme
prevents spurious oscillations from entering the solution. This is achieved by
retaking any time step with a robust high-resolution shock-capturing
conservative FD scheme wherever the DG solution was inadmissible, either
because the DG scheme produced unphysical results like negative densities or
because a numerical criterion like the percentage of power in the highest
modes deemed the DG solution bad. Our DG-FD hybrid scheme was used to perform
what are, to the best of our knowledge, the first simulations of a magnetized
TOV star using DG methods. In the future we plan to extend the hybrid scheme
to curved meshes, to simulations in full general relativity where the metric
is evolved, and to positivity-preserving adaptive-order FD methods in order to
maintain the highest order possible even when using FD instead of DG.
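The control flow of the hybrid step can be summarized in a few lines. This is
a schematic sketch, not SpECTRE's API: the four callables are stand-ins for
the solver components described above, and the names are ours.

```python
def hybrid_step(state, dt, dg_step, fd_step, to_subcells, is_admissible):
    """One time step of the DG-FD hybrid scheme (schematic only).

    Attempt an unlimited DG update; if the candidate fails the
    admissibility checks (positivity, troubled-cell indicators), retake
    the step from the pre-step data with the conservative FD solver."""
    candidate = dg_step(state, dt)
    if is_admissible(candidate):
        return candidate, "DG"
    return fd_step(to_subcells(state), dt), "FD"
```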
Charm++/Converse [64] was developed by the Parallel Programming Laboratory in
the Department of Computer Science at the University of Illinois at Urbana-
Champaign. The figures in this article were produced with matplotlib [65, 66],
TikZ [67] and ParaView [68, 69]. Computations were performed with the Wheeler
cluster at Caltech. This work was supported in part by the Sherman Fairchild
Foundation and by NSF Grants No. PHY-2011961, No. PHY-2011968, and No.
OAC-1931266 at Caltech, and NSF Grants No. PHY-1912081 and No. OAC-1931280 at
Cornell.
## Appendix A Curved hexahedral elements and moving meshes
We have not yet implemented support for curved hexahedral meshes into SpECTRE.
However, we have given careful consideration to how they could be implemented.
In this appendix we discuss two possible implementations, one that requires
many additional ghost cells with dimension-by-dimension reconstruction, and
one that requires multidimensional reconstruction but no additional ghost
cells.
Support for curved hexahedral or rectangular meshes can be achieved by
combining the DG scheme with a multipatch or multidomain FD scheme. We will
discuss only the 2d case, since the 3d case involves more tedious bookkeeping
but is otherwise a straightforward extension. As a concrete example, we
consider a 2d disk made out of a square surrounded by four wedges as shown in
figure 10. We focus on an element at the top right corner of the central
square and its neighbors, highlighted by the dashed square in figure 10. We will first
discuss how to handle the boundaries when a pair of neighboring elements are
using the FD scheme, and then consider the case when one element is using DG
and the other FD.
Figure 10: A 2d disk made out of a central square surrounded by four wedges.
In the text we describe the method of handling intercell fluxes for the
elements inside the dashed square.
In figure 11 we illustrate the domain setup, showing the subcell center points
as circles in the two elements of interest. The diamonds in the left panel of
figure 11 represent the ghost cells needed for reconstruction to the element
boundary in the element on the right. We use diagonal dotted lines to trace
out lines of constant reference coordinates in the element on the right and
dashed lines in the element on the left. Notice that the dashed and dotted
lines intersect on the element boundary. This is because the mapping from the
reference frame is continuous across element boundaries and allows us to have
a conservative scheme using centered stencils even in the multipatch case.
Figure 11: An illustration of the multipatch or multidomain FD reconstruction
needed to support curved meshes. We show a 2d example for simplicity; the 3d
case is a tedious but otherwise straightforward generalization. Left panel:
the ghost points needed for the FD scheme where neighboring elements do not
have aligned coordinate axes in their reference frames. Circles denote the
cell-center FD points in the elements, and diamonds denote the ghost cells
needed for reconstruction in the element on the right. The diagonal dotted
lines trace out lines of constant reference coordinates in the element on the
right, and dashed lines in the element on the left; the dashed and dotted
lines intersect on the element boundary. Middle panel: extending the FD
element by additional cells in order to support high-order reconstruction to
arbitrary points inside the element, as discussed in the text. The additional
cells for the central element are shown as purple triangles and are evolved
alongside the cells inside the element. Right panel: the first stage of the
reconstruction to the ghost cells needed by the neighboring element on the
right. The central element reconstructs the solution to a line in the
reference coordinates, followed by a second reconstruction to the ghost cells
that fall on the line (not shown for simplicity).
Since we are unable to interpolate to the ghost cells shown in the left panel
of figure 11 with centered stencils, one option is to use non-centered
stencils. This approach was explored in reference [70], where no instabilities
arising from such stencils were found in the test cases considered. Another
option is to use reconstruction methods for unstructured meshes (see, for
example, [71, 72, 73, 74, 75, 76] and references therein), though this adds
significant conceptual and technical overhead. A third option is to add
additional subcells that overlap with the neighboring elements, allowing the
use of centered reconstruction schemes to interpolate to the ghost cells.
These additional subcells are shown as triangles in the middle panel of figure
11. The ghost-cell values are then obtained with two successive
reconstructions. First, we reconstruct along one reference axis of the central
element, as shown by the squares in the right panel of figure 11. Next, we
reconstruct along the other direction, which is illustrated by the dotted
vertical line in the right panel of figure 11.
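A minimal sketch of the two-stage reconstruction to a single ghost point is
given below, using plain Lagrange interpolation as a stand-in for the actual
high-order FD reconstructor; the function names and stencil choice are ours.

```python
import numpy as np

def lagrange_interp(xs, fs, x):
    """Interpolate the data (xs, fs) to x with the Lagrange formula; with
    a centered choice of xs this mimics a centered reconstruction stencil."""
    total = 0.0
    for i in range(len(xs)):
        li = np.prod([(x - xs[j]) / (xs[i] - xs[j])
                      for j in range(len(xs)) if j != i])
        total += fs[i] * li
    return total

def ghost_value(xi, eta, u, xi_pts, eta_pts):
    """Two-stage reconstruction to the ghost point (xi, eta): first along
    eta at each xi_pts location (the line in the right panel of figure 11),
    then along xi to the target point."""
    line = [lagrange_interp(eta_pts, u[i, :], eta) for i in range(len(xi_pts))]
    return lagrange_interp(xi_pts, np.array(line), xi)
```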
In order to maintain conservation between elements, we need to define a unique
left and right state at the boundary of the elements. A unique state can be
obtained by using the average of the reconstructed variables from the diagonal
and horizontal stencils in figure 11. That is, we use the average of the
result obtained from reconstruction in each element for the right and left
states when updating any subcells that need the numerical flux on the element
boundaries. Recall that when using a second-order FD derivative the semi-
discrete evolution equations are (we only show 1d for simplicity since it is
sufficient to illustrate our point)
$\displaystyle\partial_{t}u+\frac{\partial\xi}{\partial x}\left(\frac{\hat{F}^{x}_{\underline{i}+1/2,\underline{j}}-\hat{F}^{x}_{\underline{i}-1/2,\underline{j}}}{\Delta\xi}\right)=S.$ (172)
Thus, as long as all cells that share the boundary on which the numerical
fluxes are defined use the same numerical flux, the scheme is conservative.
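In code, the conservative property is simply the telescoping of shared face
fluxes, as in the following 1d sketch of equation (172); the averaging of the
two elements' reconstructions happens upstream, when the unique face values
entering `flux_hat` are assembled.

```python
import numpy as np

def conservative_update(u, flux_hat, dxi_dx, dxi, dt):
    """Second-order conservative FD update of eq. (172) in 1d (source
    terms omitted).

    flux_hat holds the unique numerical fluxes at the len(u) + 1 cell
    faces. Because adjacent cells use the same face flux, the sum of u
    changes only through the two outermost faces: conservation."""
    return u - dt * dxi_dx * (flux_hat[1:] - flux_hat[:-1]) / dxi
```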
When using higher-order derivative approximations the fluxes away from the
cell boundaries are also needed. In the case of the element boundaries we are
considering, we do not have a unique solution in the region of overlap (e.g.
the region covered by the purple triangles in the middle panel of figure 11)
where we compute the fluxes. As a result, we do not know if using high-order
FD derivatives would violate conservation at the element boundaries. However,
if the solution is smooth in this region, small violations of conservation are
not detrimental, and if a discontinuity is passing through the boundary a
second-order FD derivative should be used anyway.
Another method of doing reconstruction at locations where the coordinate axes
do not align is described in [77] for finite-volume methods. This same
approach should be applicable to FD methods. Whether adding ghost zones or
using unstructured mesh reconstruction is easier to implement and more
efficient is unclear and will need to be tested.
## Appendix B Integration weights
The standard weights available in textbooks assume the abscissas are located
at the boundaries of the subcells, not at the subcell centers where our FD
solution is stored, and so do not apply. The weights $R_{\underline{i}}$ are
given by integrals over Lagrange polynomials:
$\displaystyle R_{\underline{i}}=\int_{a}^{b}\prod_{\underline{j}=0\atop\underline{j}\neq\underline{i}}^{n}\frac{(x-x_{\underline{j}})}{(x_{\underline{i}}-x_{\underline{j}})}\,dx.$ (173)
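The weights can be computed directly from equation (173); the following
sketch does so with numpy's polynomial utilities and reproduces, for example,
the boundary rule quoted below as equation (174). The function name is ours.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def integration_weights(abscissas, a, b):
    """Weights R_i of eq. (173): integrals over [a, b] of the Lagrange
    basis polynomials through the given abscissas."""
    weights = []
    for i, xi in enumerate(abscissas):
        coeffs = np.array([1.0])
        for j, xj in enumerate(abscissas):
            if j != i:
                # multiply the running polynomial by (x - xj)/(xi - xj)
                coeffs = P.polymul(coeffs, np.array([-xj, 1.0])) / (xi - xj)
        antider = P.polyint(coeffs)
        weights.append(P.polyval(b, antider) - P.polyval(a, antider))
    return np.array(weights)

# Cell centers 1/2, 3/2, 5/2 on [0, 3] (dx = 1) give eq. (174):
print(integration_weights([0.5, 1.5, 2.5], 0.0, 3.0))  # [1.125 0.75 1.125]
```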
The integration coefficients are not unique since there are choices in how to
handle points near the boundaries and how to stitch the interior solution
together. Rather than using one-sided or low-order centered stencils near the
boundaries, we choose to integrate from $0$ to $3\Delta x$ for the fourth-
order stencil and from $0$ to $5\Delta x$ for the sixth-order stencil. The
fourth-order stencil at the boundary is
$\displaystyle\int_{0}^{3\Delta x}f(x)dx\approx\Delta x\left(\frac{9}{8}f_{1/2}+\frac{3}{4}f_{3/2}+\frac{9}{8}f_{5/2}\right),$ (174)
and the sixth-order stencil is
$\displaystyle\int_{0}^{5\Delta x}f(x)dx\approx\Delta x\left(\frac{1375}{1152}f_{1/2}+\frac{125}{288}f_{3/2}+\frac{335}{192}f_{5/2}+\frac{125}{288}f_{7/2}+\frac{1375}{1152}f_{9/2}\right).$ (175)
If we have more than three (five) points we need to stitch the formulas
together. We do this by integrating from $x_{k}$ to $x_{k+1}$. For the
fourth-order stencil we get
$\displaystyle\int_{x_{k}}^{x_{k+1}}f(x)dx\approx\Delta x\left(\frac{1}{24}f_{k-1/2}+\frac{11}{12}f_{k+1/2}+\frac{1}{24}f_{k+3/2}\right),$ (176)
and for the sixth-order stencil we get
$\displaystyle\int_{x_{k}}^{x_{k+1}}f(x)dx\approx\Delta x\left(\frac{-17}{5760}f_{k-3/2}+\frac{308}{5760}f_{k-1/2}+\frac{5178}{5760}f_{k+1/2}+\frac{308}{5760}f_{k+3/2}-\frac{17}{5760}f_{k+5/2}\right).$ (177)
We present the weights for a fourth-order approximation to the integral in
table 3 and for a sixth-order approximation to the integral in table 4. The
weights are obtained by using (174) and (175) at the boundaries and (176) and
(177) on the interior. The stencils are symmetric about the center and so only
half the coefficients are shown.
Table 3: Weights for a fourth-order approximation to an integral using
stencils symmetric about the center. Only the first half of the coefficients
are shown; the second half are such that the stencil is symmetric. The number
of cells in the stencil is shown in the first column.

| Number of cells | $x_{1/2}$ | $x_{3/2}$ | $x_{5/2}$ | $x_{7/2}$ | $x_{9/2}$ |
|---|---|---|---|---|---|
| 3 | $\frac{9}{8}$ | $\frac{3}{4}$ | — | — | — |
| 4 | $\frac{13}{12}$ | $\frac{11}{12}$ | — | — | — |
| 5 | $\frac{13}{12}$ | $\frac{21}{24}$ | $\frac{13}{12}$ | — | — |
| 6 | $\frac{9}{8}$ | $\frac{3}{4}$ | $\frac{9}{8}$ | — | — |
| 7 | $\frac{9}{8}$ | $\frac{3}{4}$ | $\frac{7}{6}$ | $\frac{11}{12}$ | — |
| 8 | $\frac{9}{8}$ | $\frac{3}{4}$ | $\frac{7}{6}$ | $\frac{23}{24}$ | — |
| 9+ | $\frac{9}{8}$ | $\frac{3}{4}$ | $\frac{7}{6}$ | $\frac{23}{24}$ | 1 |
Table 4: Weights for a sixth-order approximation to an integral using stencils
symmetric about the center. Only the first half of the coefficients are shown;
the second half are such that the stencil is symmetric. The number of cells in
the stencil is shown in the first column.

| Number of cells | $x_{1/2}$ | $x_{3/2}$ | $x_{5/2}$ | $x_{7/2}$ | $x_{9/2}$ | $x_{11/2}$ | $x_{13/2}$ | $x_{15/2}$ |
|---|---|---|---|---|---|---|---|---|
| 5 | $\frac{1375}{1152}$ | $\frac{125}{288}$ | $\frac{335}{192}$ | — | — | — | — | — |
| 6 | $\frac{741}{640}$ | $\frac{417}{640}$ | $\frac{381}{320}$ | — | — | — | — | — |
| 7 | $\frac{741}{640}$ | $\frac{3547}{5760}$ | $\frac{8111}{5760}$ | $\frac{611}{960}$ | — | — | — | — |
| 8 | $\frac{1663}{1440}$ | $\frac{227}{360}$ | $\frac{323}{240}$ | $\frac{139}{160}$ | — | — | — | — |
| 9 | $\frac{1663}{1440}$ | $\frac{227}{360}$ | $\frac{1547}{1152}$ | $\frac{245}{288}$ | $\frac{3001}{2880}$ | — | — | — |
| 10 | $\frac{1375}{1152}$ | $\frac{125}{288}$ | $\frac{335}{192}$ | $\frac{125}{288}$ | $\frac{1375}{1152}$ | — | — | — |
| 11 | $\frac{1375}{1152}$ | $\frac{125}{288}$ | $\frac{335}{192}$ | $\frac{2483}{5760}$ | $\frac{7183}{5760}$ | $\frac{863}{960}$ | — | — |
| 12 | $\frac{1375}{1152}$ | $\frac{125}{288}$ | $\frac{335}{192}$ | $\frac{2483}{5760}$ | $\frac{3583}{2880}$ | $\frac{2743}{2880}$ | — | — |
| 13 | $\frac{1375}{1152}$ | $\frac{125}{288}$ | $\frac{335}{192}$ | $\frac{2483}{5760}$ | $\frac{3583}{2880}$ | $\frac{1823}{1920}$ | $\frac{2897}{2880}$ | — |
| 14 | $\frac{1375}{1152}$ | $\frac{125}{288}$ | $\frac{335}{192}$ | $\frac{2483}{5760}$ | $\frac{3583}{2880}$ | $\frac{1823}{1920}$ | $\frac{5777}{5760}$ | — |
| 15+ | $\frac{1375}{1152}$ | $\frac{125}{288}$ | $\frac{335}{192}$ | $\frac{2483}{5760}$ | $\frac{3583}{2880}$ | $\frac{1823}{1920}$ | $\frac{5777}{5760}$ | 1 |
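As a consistency check, the 9+ row of table 3 follows mechanically from
stitching, as the following sketch shows: it assembles the composite
fourth-order weights from the boundary rule (174) and the interior rule (176)
(the small-$n$ rows of the tables use the alternative choices noted above).

```python
import numpy as np

def composite_weights_fourth_order(n):
    """Composite fourth-order weights for n >= 9 cell-centered points
    with dx = 1: eq. (174) on [0, 3] and [n - 3, n], eq. (176) between."""
    w = np.zeros(n)
    w[:3] += [9/8, 3/4, 9/8]          # boundary rule (174)
    w[-3:] += [9/8, 3/4, 9/8]         # mirrored boundary rule
    for k in range(3, n - 3):         # stitched rule (176) on [k, k+1]
        w[k - 1:k + 2] += [1/24, 11/12, 1/24]
    return w

# Reproduces the 9+ row of table 3: 9/8, 3/4, 7/6, 23/24, 1, ...
print(composite_weights_fourth_order(10))
```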
## References
* [1] William H. Reed and T. R. Hill. Triangular mesh methods for the neutron transport equation. Technical report, Los Alamos Scientific Lab., N. Mex. (USA), 1973.
* [2] Bernardo Cockburn and Chi-Wang Shu. TVB Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws. II. General framework. Mathematics of Computation, 52(186):411–435, 1989.
* [3] Bernardo Cockburn, San-Yih Lin, and Chi-Wang Shu. TVB Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws III: One-dimensional systems. Journal of Computational Physics, 84(1):90 – 113, 1989.
* [4] B. Cockburn, S. Hou, and C.-W. Shu. The Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws. IV. The multidimensional case. Mathematics of Computation, 54:545–581, April 1990.
* [5] Guang Shan Jiang and Chi-Wang Shu. On a cell entropy inequality for discontinuous Galerkin methods. Mathematics of Computation, 62(206):531–538, 1994.
* [6] Timothy Barth, Pierre Charrier, and Nagi N Mansour. Energy stable flux formulas for the discontinuous Galerkin discretization of first order nonlinear conservation laws. Technical Report 20010095444, NASA Technical Reports Server, 2001.
* [7] Songming Hou and Xu-Dong Liu. Solutions of multi-dimensional hyperbolic systems of conservation laws by square entropy condition satisfying discontinuous Galerkin method. Journal of Scientific Computing, 31(1-2):127–151, 2007.
* [8] S. K. Godunov. A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics. Mat. Sb. (N.S.), 47(89):271–306, 1959.
* [9] Bruno Costa and Wai Sun Don. Multi-domain hybrid spectral-WENO methods for hyperbolic conservation laws. Journal of Computational Physics, 224(2):970 – 991, 2007.
* [10] A. Huerta, E. Casoni, and J. Peraire. A simple shock-capturing technique for high-order discontinuous Galerkin methods. International Journal for Numerical Methods in Fluids, 69(10):1614–1632, 2012.
* [11] Matthias Sonntag and Claus-Dieter Munz. Shock capturing for discontinuous Galerkin methods using finite volume subcells. In Jürgen Fuhrmann, Mario Ohlberger, and Christian Rohde, editors, Finite Volumes for Complex Applications VII-Elliptic, Parabolic and Hyperbolic Problems, pages 945–953, Cham, 2014. Springer International Publishing.
* [12] Michael Dumbser, Olindo Zanotti, Raphaël Loubère, and Steven Diot. A posteriori subcell limiting of the discontinuous Galerkin finite element method for hyperbolic conservation laws. Journal of Computational Physics, 278:47 – 75, 2014.
* [13] Walter Boscheri and Michael Dumbser. Arbitrary-Lagrangian–Eulerian discontinuous Galerkin schemes with a posteriori subcell finite volume limiting on moving unstructured meshes. Journal of Computational Physics, 346:449 – 479, 2017.
* [14] Olindo Zanotti, Francesco Fambri, and Michael Dumbser. Solving the relativistic magnetohydrodynamics equations with ADER discontinuous Galerkin methods, a posteriori subcell limiting and adaptive mesh refinement. Mon. Not. Roy. Astron. Soc., 452(3):3010–3029, 2015.
* [15] Francesco Fambri, Michael Dumbser, Sven Köppel, Luciano Rezzolla, and Olindo Zanotti. ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics. Mon. Not. Roy. Astron. Soc., 477(4):4543–4564, 2018.
* [16] Lawrence E. Kidder et al. SpECTRE: a task-based discontinuous Galerkin code for relativistic astrophysics. J. Comput. Phys., 335:84–114, 2017.
* [17] Nils Deppe, William Throwe, Lawrence E. Kidder, Nils L. Fischer, François Hébert, Jordan Moxon, Cristóbal Armaza, Gabriel S. Bonilla, Prayush Kumar, Geoffrey Lovelace, Eamonn O’Shea, Harald P. Pfeiffer, Mark A. Scheel, Saul A. Teukolsky, Isha Anantpurkar, Michael Boyle, Francois Foucart, Matthew Giesler, Jason S. Guo, Dante A. B. Iozzo, Yoonsoo Kim, Isaac Legred, Dongjun Li, Alexandra Macedo, Denyz Melchor, Marlo Morales, Kyle C. Nelli, Teresita Ramirez, Hannes R. Rüter, Jennifer Sanchez, Sierra Thomas, Nikolas A. Wittek, and Tom Wlodarczyk. Spectre, September 2021. github.com/sxs-collaboration/spectre.
* [18] Thomas W. Baumgarte and Stuart L. Shapiro. Numerical Relativity: Solving Einstein’s Equations on the Computer. Cambridge University Press, 2010.
* [19] L. Rezzolla and O. Zanotti. Relativistic Hydrodynamics. Oxford University Press, September 2013.
* [20] Luis Antón, Olindo Zanotti, Juan A. Miralles, José M. Martí, José M. Ibáñez, José A. Font, and José A. Pons. Numerical 3+1 general relativistic magnetohydrodynamics: a local characteristic approach. The Astrophysical Journal, 637:296–312, January 2006.
* [21] Jose A. Font. Numerical hydrodynamics and magnetohydrodynamics in general relativity. Living Rev. Rel., 11:7, 2008.
* [22] Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler. Gravitation. Freeman, New York, New York, 1973.
* [23] A. Dedner, F. Kemm, D. Kröner, C.-D. Munz, T. Schnitzer, and M. Wesenberg. Hyperbolic divergence cleaning for the MHD equations. Journal of Computational Physics, 175:645–673, January 2002.
* [24] Philipp Mösta, Bruno C. Mundim, Joshua A. Faber, Roland Haas, Scott C. Noble, Tanja Bode, Frank Löffler, Christian D. Ott, Christian Reisswig, and Erik Schnetter. GRHydro: A new open source general-relativistic magnetohydrodynamics code for the Einstein Toolkit. Class. Quant. Grav., 31:015005, 2014.
* [25] Saul A. Teukolsky. Formulation of discontinuous Galerkin methods for relativistic astrophysics. J. Comput. Phys., 312:333–356, 2016.
* [26] J.S. Hesthaven and T. Warburton. Nodal Discontinuous Galerkin Methods: Algorithms, Analysis, and Applications. Springer-Verlag New York, New York, 2008.
* [27] Mark A. Scheel, Harald P. Pfeiffer, Lee Lindblom, Lawrence E. Kidder, Oliver Rinne, and Saul A. Teukolsky. Solving Einstein’s equations with dual coordinate frames. Phys. Rev., D74:104006, 2006.
* [28] Daniel A. Hemberger, Mark A. Scheel, Lawrence E. Kidder, Béla Szilágyi, Geoffrey Lovelace, Nicholas W. Taylor, and Saul A. Teukolsky. Dynamical excision boundaries in spectral evolutions of binary black hole spacetimes. Class. Quant. Grav., 30:115001, 2013.
* [29] Michael Dumbser, Manuel Castro, Carlos Parés, and Eleuterio F. Toro. ADER schemes on unstructured meshes for nonconservative hyperbolic systems: Applications to geophysical flows. Computers & Fluids, 38(9):1731 – 1748, 2009.
* [30] G. Dal Maso, P. G. LeFloch, and F. Murat. Definition and weak stability of nonconservative products. Journal de Mathématiques Pures et Appliquées, 1995.
* [31] Cesar A. Acosta Minoli and David A. Kopriva. Discontinuous Galerkin spectral element approximations on moving meshes. Journal of Computational Physics, 230(5):1876 – 1902, 2011.
* [32] Chi-Wang Shu and Stanley Osher. Efficient implementation of essentially non-oscillatory shock-capturing schemes. Journal of Computational Physics, 77(2):439 – 471, 1988.
* [33] Bernardo Cockburn, George E Karniadakis, and Chi-Wang Shu. The development of discontinuous Galerkin methods. In Discontinuous Galerkin Methods, pages 3–50. Springer, 2000.
* [34] Bernardo Cockburn and Chi-Wang Shu. Runge-Kutta discontinuous Galerkin methods for convection-dominated problems. Journal of Scientific Computing, 16(3):173–261, 2001.
* [35] Lilia Krivodonova and Ruibin Qin. An analysis of the spectrum of the discontinuous Galerkin method. Applied Numerical Mathematics, 64:1 – 18, 2013.
* [36] Nils Deppe et al. Simulating magnetized neutron stars with discontinuous Galerkin methods. Submitted.
* [37] B. Cockburn and C.-W. Shu. The Runge-Kutta discontinuous Galerkin method for conservation laws V. Multidimensional systems. Journal of Computational Physics, 141:199–224, April 1998.
* [38] L. Krivodonova, J. Xin, J.-F. Remacle, N. Chevaugeon, and J.E. Flaherty. Shock detection and limiting with discontinuous Galerkin methods for hyperbolic conservation laws. Applied Numerical Mathematics, 48(3):323 – 338, 2004.
* [39] Lilia Krivodonova. Limiters for high-order discontinuous Galerkin methods. Journal of Computational Physics, 226(1):879 – 896, 2007.
* [40] X. Zhong and C.-W. Shu. A simple weighted essentially nonoscillatory limiter for Runge-Kutta discontinuous Galerkin methods. Journal of Computational Physics, 232:397–415, January 2013.
* [41] J. Zhu, X. Zhong, C.-W. Shu, and J. Qiu. Runge-Kutta discontinuous Galerkin method with a simple and compact Hermite WENO limiter. Communications in Computational Physics, 19:944–969, April 2016.
* [42] Cheng Wang, Xiangxiong Zhang, Chi-Wang Shu, and Jianguo Ning. Robust high order discontinuous Galerkin schemes for two-dimensional gaseous detonations. Journal of Computational Physics, 231(2):653 – 665, 2012.
* [43] Yvon Maday, Cathy Mavriplis, and Anthony T. Patera. Nonconforming mortar element methods - Application to spectral discretizations. In Domain Decomposition Methods, pages 392–418, January 1989.
* [44] David A. Kopriva. A conservative staggered-grid Chebyshev multidomain method for compressible flows. II. A semi-structured method. Journal of Computational Physics, 128(2):475 – 488, 1996.
* [45] David A. Kopriva, Stephen L. Woodruff, and M. Y. Hussaini. Computation of electromagnetic scattering with a non-conforming discontinuous spectral element method. International Journal for Numerical Methods in Engineering, 53(1):105–122, 2002.
* [46] Tan Bui-Thanh and Omar Ghattas. Analysis of an $hp$-nonconforming discontinuous Galerkin spectral element method for wave propagation. SIAM Journal on Numerical Analysis, 50:1801–1826, 01 2012.
* [47] Per-Olof Persson and Jaime Peraire. Sub-cell shock capturing for discontinuous Galerkin methods. In 44th AIAA Aerospace Sciences Meeting and Exhibit. American Institute of Aeronautics and Astronautics, Inc., 2006.
* [48] David Gottlieb and Steven A. Orszag. Numerical Analysis of Spectral Methods. Society for Industrial and Applied Mathematics, 1977.
* [49] Dinshaw Balsara. Total variation diminishing scheme for relativistic magnetohydrodynamics. The Astrophysical Journal Supplement Series, 132(1):83–101, January 2001.
* [50] Bruno Giacomazzo and Luciano Rezzolla. The exact solution of the Riemann problem in relativistic magnetohydrodynamics. Journal of Fluid Mechanics, 562:223–259, September 2006.
* [51] T. Leismann, L. Antón, M. A. Aloy, E. Müller, J. M. Martí, J. A. Miralles, and J. M. Ibáñez. Relativistic MHD simulations of extragalactic jets. Astron. Astrophys. , 436:503–526, June 2005.
* [52] L. Del Zanna, O. Zanotti, N. Bucciantini, and P. Londrillo. ECHO: an Eulerian conservative high order scheme for general relativistic magnetohydrodynamics and magnetodynamics. Astron. Astrophys., 473:11–30, 2007.
* [53] Dinshaw S. Balsara and Daniel S. Spicer. A staggered mesh algorithm using high order Godunov fluxes to ensure solenoidal magnetic fields in magnetohydrodynamic simulations. Journal of Computational Physics, 149(2):270–292, March 1999.
* [54] Gábor Tóth. The $\nabla$· B=0 constraint in shock-capturing magnetohydrodynamics codes. Journal of Computational Physics, 161(2):605–652, July 2000.
* [55] Zachariah B. Etienne, Yuk Tung Liu, and Stuart L. Shapiro. Relativistic magnetohydrodynamics in dynamical spacetimes: A new adaptive mesh refinement implementation. Phys. Rev. D , 82(8):084031, October 2010.
* [56] L. Del Zanna, N. Bucciantini, and P. Londrillo. An efficient shock-capturing central-type scheme for multidimensional relativistic flows. II. Magnetohydrodynamics. Astron. Astrophys. , 400:397–413, March 2003.
* [57] C. Richard DeVore. Flux-corrected transport techniques for multidimensional compressible magnetohydrodynamics. Journal of Computational Physics, 92(1):142–160, January 1991.
* [58] Kris Beckwith and James M. Stone. A second-order Godunov method for multi-dimensional relativistic magnetohydrodynamics. Astrophysical Journal, Supplement, 193(1):6, March 2011.
* [59] Thomas A. Gardiner and James M. Stone. An unsplit Godunov method for ideal MHD via constrained transport. Journal of Computational Physics, 205(2):509–539, May 2005.
* [60] James M. Stone, Thomas A. Gardiner, Peter Teuben, John F. Hawley, and Jacob B. Simon. Athena: A New Code for Astrophysical MHD. Astrophysical Journal, Supplement, 178(1):137–177, September 2008.
* [61] Richard C. Tolman. Static solutions of Einstein’s field equations for spheres of fluid. Phys. Rev., 55:364–373, 1939.
* [62] J. R. Oppenheimer and G. M. Volkoff. On massive neutron cores. Phys. Rev., 55:374–381, 1939.
* [63] Federico Cipolletta, Jay Vijay Kalinani, Bruno Giacomazzo, and Riccardo Ciolfi. Spritz: a new fully general-relativistic magnetohydrodynamic code. Class. Quant. Grav., 37(13):135010, 2020.
* [64] Laxmikant Kale, Bilge Acun, Seonmyeong Bak, Aaron Becker, Milind Bhandarkar, Nitin Bhat, Abhinav Bhatele, Eric Bohm, Cyril Bordage, Robert Brunner, Ronak Buch, Sayantan Chakravorty, Kavitha Chandrasekar, Jaemin Choi, Michael Denardo, Jayant DeSouza, Matthias Diener, Harshit Dokania, Isaac Dooley, Wayne Fenton, Juan Galvez, Fillipo Gioachin, Abhishek Gupta, Gagan Gupta, Manish Gupta, Attila Gursoy, Vipul Harsh, Fang Hu, Chao Huang, Narain Jagathesan, Nikhil Jain, Pritish Jetley, Prateek Jindal, Raghavendra Kanakagiri, Greg Koenig, Sanjeev Krishnan, Sameer Kumar, David Kunzman, Michael Lang, Akhil Langer, Orion Lawlor, Chee Wai Lee, Jonathan Lifflander, Karthik Mahesh, Celso Mendes, Harshitha Menon, Chao Mei, Esteban Meneses, Eric Mikida, Phil Miller, Ryan Mokos, Venkatasubrahmanian Narayanan, Xiang Ni, Kevin Nomura, Sameer Paranjpye, Parthasarathy Ramachandran, Balkrishna Ramkumar, Evan Ramos, Michael Robson, Neelam Saboo, Vikram Saletore, Osman Sarood, Karthik Senthil, Nimish Shah, Wennie Shu, Amitabh B. Sinha, Yanhua Sun, Zehra Sura, Ehsan Totoni, Krishnan Varadarajan, Ramprasad Venkataraman, Jackie Wang, Lukasz Wesolowski, Sam White, Terry Wilmarth, Jeff Wright, Joshua Yelon, and Gengbin Zheng. Uiuc-ppl/charm: Charm++ version 6.10.2, August 2020.
* [65] J. D. Hunter. Matplotlib: A 2d graphics environment. Computing in Science & Engineering, 9(3):90–95, 2007.
* [66] Thomas A Caswell, Michael Droettboom, Antony Lee, John Hunter, Eric Firing, Elliott Sales de Andrade, Tim Hoffmann, David Stansby, Jody Klymak, Nelle Varoquaux, Jens Hedegaard Nielsen, Benjamin Root, Ryan May, Phil Elson, Darren Dale, Jae-Joon Lee, Jouni K. Seppänen, Damon McDougall, Andrew Straw, Paul Hobson, Christoph Gohlke, Tony S Yu, Eric Ma, Adrien F. Vincent, Steven Silvester, Charlie Moad, Nikita Kniazev, hannah, Elan Ernest, and Paul Ivanov. matplotlib/matplotlib: Rel: v3.3.0, July 2020.
* [67] T. Tantau. The TikZ and PGF packages.
* [68] Utkarsh Ayachit. The ParaView Guide: A Parallel Visualization Application. Kitware, Inc., Clifton Park, NY, USA, 2015.
* [69] J. Ahrens, Berk Geveci, and C. Law. Paraview: An end-user tool for large-data visualization. In The Visualization Handbook, 2005.
* [70] Kurt Sebastian and Chi-Wang Shu. Multidomain WENO finite difference method with interpolation at subdomain interfaces. Journal of Scientific Computing, 19(1-3):405–438, 2003.
* [71] W. R. Wolf and J. L. F. Azevedo. High-order ENO and WENO schemes for unstructured grids. International Journal for Numerical Methods in Fluids, 55(10):917–943, 2007.
* [72] Panagiotis Tsoutsanis, Antonios Foivos Antoniadis, and Dimitris Drikakis. WENO schemes on arbitrary unstructured meshes for laminar, transitional and turbulent flows. Journal of Computational Physics, 256:254 – 276, 2014.
* [73] Panagiotis Tsoutsanis. Stencil selection algorithms for WENO schemes on unstructured meshes. Journal of Computational Physics: X, 4:100037, 08 2019.
* [74] Pericles S. Farmakis, Panagiotis Tsoutsanis, and Xesús Nogueira. WENO schemes on unstructured meshes using a relaxed a posteriori MOOD limiting approach. Computer Methods in Applied Mechanics and Engineering, 363:112921, 2020.
* [75] Michael Dumbser, Martin Käser, Vladimir A. Titarev, and Eleuterio F. Toro. Quadrature-free non-oscillatory finite volume schemes on unstructured meshes for nonlinear hyperbolic systems. Journal of Computational Physics, 226(1):204 – 243, 2007.
* [76] Chunhua Sheng, Qiuying Zhao, Dongdong Zhong, and Ning Ge. A strategy to implement high-order WENO schemes on unstructured grids. In AIAA Aviation 2019 Forum. American Institute of Aeronautics and Astronautics, Inc., 2019.
* [77] Lucie Freret, Lucian Ivan, Hans De Sterck, and Clinton P. Groth. A high-order finite-volume method with anisotropic AMR for ideal MHD flows. In 55th AIAA Aerospace Sciences Meeting. American Institute of Aeronautics and Astronautics, Inc., 2017.